A Strong Stability Preserving Analysis for Explicit Multistage Two-Derivative Time-Stepping Schemes Based on Taylor Series Conditions
Abstract
High-order strong stability preserving (SSP) time discretizations are often needed to ensure the nonlinear (and sometimes non-inner-product) strong stability properties of spatial discretizations specially designed for the solution of hyperbolic PDEs. Multiderivative time-stepping methods have recently been increasingly used for evolving hyperbolic PDEs, and the strong stability properties of these methods are of interest. In our prior work we explored time discretizations that preserve the strong stability properties of spatial discretizations coupled with forward Euler and a second-derivative formulation. However, many spatial discretizations do not satisfy strong stability properties when coupled with this second-derivative formulation, but rather with a more natural Taylor series formulation. In this work we demonstrate sufficient conditions for an explicit two-derivative multistage method to preserve the strong stability properties of spatial discretizations in a forward Euler and Taylor series formulation. We call these strong stability preserving Taylor series (SSPTS) methods. We also prove that the maximal order of SSPTS methods is \(p=6\), and define an optimization procedure that allows us to find such SSP methods. Several types of these methods are presented and their efficiency compared. Finally, these methods are tested on several PDEs to demonstrate the benefit of SSPTS methods, the need for the SSP property, and the sharpness of the SSP time step in many cases.
Keywords
Strong stability preserving · Taylor series · Hyperbolic conservation laws · Two-derivative Runge–Kutta

Mathematics Subject Classification
65M12 · 65L06 · 65L20

1 Introduction
In the following subsections we describe SSP Runge–Kutta time discretizations and present explicit multistage two-derivative methods. We then motivate the need for methods that preserve the nonlinear stability properties of the forward Euler and Taylor series base conditions. In Sect. 2 we formulate the SSP optimization problem for finding explicit two-derivative methods which can be written as the convex combination of forward Euler and Taylor series steps with the largest allowable time step, which we will later use to find optimized methods. In Sect. 2.1 we explore the relationship between SSPSD methods and SSPTS methods. In Sect. 2.2 we prove that there are order barriers associated with explicit two-derivative methods that preserve the properties of forward Euler and Taylor series steps with a positive time step. In Sect. 3 we present the SSP coefficients of the optimized methods we obtain. The methods themselves can be downloaded from our GitHub repository [14]. In Sect. 4 we demonstrate how these methods perform on specially selected test cases, and in Sect. 5 we present our conclusions.
1.1 SSP Methods
If a method can be decomposed into such a convex combination of (3), with a positive value of \(\mathcal{{C}}>0\), then the method is called strong stability preserving (SSP), and the value \(\mathcal{{C}}\) is called the SSP coefficient. SSP methods guarantee the strong stability properties of any spatial discretization, provided only that these properties are satisfied when using the forward Euler method. The convex combination approach guarantees that the intermediate stages in a Runge–Kutta method satisfy the desired strong stability property as well. The convex combination approach clearly provides a sufficient condition for preservation of strong stability. Moreover, it has also been shown that this condition is necessary [11, 12, 16, 17].
Second- and third-order explicit Runge–Kutta methods [43] and later fourth-order methods [23, 44] were found that admit such a convex combination decomposition with \(\mathcal{{C}}>0\). However, it has been proven that explicit Runge–Kutta methods with positive SSP coefficient cannot be more than fourth-order accurate [27, 38].
The time-step restriction (8) is comprised of two distinct factors: (1) the term \(\Delta t_{\text{FE}}\) that is a property of the spatial discretization, and (2) the SSP coefficient \(\mathcal{{C}}\) that is a property of the time discretization. Research on SSP time-stepping methods for hyperbolic PDEs has primarily focused on finding high-order time discretizations with the largest allowable time step \(\Delta t\le \mathcal{{C}}\Delta t_{\text{FE}}\) by maximizing the SSP coefficient \(\mathcal{{C}}\) of the method.
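As a concrete illustration of the convex-combination structure, the following is a minimal sketch (ours, not taken from this paper) of the classical three-stage third-order SSP Runge–Kutta method of Shu and Osher, written so that each stage is visibly a convex combination of forward Euler steps:

```python
import numpy as np

def ssprk33_step(F, u, dt):
    """One step of the Shu-Osher SSPRK(3,3) method, written as a
    convex combination of forward Euler steps (SSP coefficient C = 1)."""
    u1 = u + dt * F(u)                           # a forward Euler step
    u2 = 0.75 * u + 0.25 * (u1 + dt * F(u1))     # convex combination of u^n and an FE step
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * F(u2))

# sanity check on u' = -u: a third-order method has a one-step error of O(dt^4)
F = lambda u: -u
u0 = np.array([1.0])
dt = 0.1
exact = np.exp(-dt)
approx = ssprk33_step(F, u0, dt)[0]
```

Because every stage is a convex combination of forward Euler steps with step size at most \(\Delta t\), any convex-functional bound satisfied by forward Euler is inherited by the full step whenever \(\Delta t \le \mathcal{{C}}\Delta t_{\text{FE}}\).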
High-order methods can also be obtained by adding more steps (e.g., linear multistep methods) or more derivatives (Taylor series methods). Multistep methods that are SSP have been found [13], and explicit multistep SSP methods exist of very high order \(p>4\), but they have severely restricted SSP coefficients [13]. These approaches can be combined with Runge–Kutta methods to obtain methods with multiple steps and stages. Explicit multistep multistage methods that are SSP and have order \(p>4\) have been developed as well [1, 24].
1.2 Explicit Multistage Two-Derivative Methods
As in our prior work [5], we focus on using explicit multistage two-derivative methods as time integrators for evolving hyperbolic PDEs. For our purposes, the operator F is obtained by a spatial discretization of the term \(U_t= f(U)_x\) to obtain the system \(u_t = F(u)\). Instead of computing the second-derivative term \({\dot{F}}\) directly from the definition of the spatial discretization F, we approximate \({\tilde{F}} \approx {\dot{F}}\) by employing the Cauchy–Kovalevskaya procedure, which uses the PDE (1) to replace the time derivatives by spatial derivatives, which are then discretized in space.
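For linear advection \(u_t = u_x\), for example, the Cauchy–Kovalevskaya procedure replaces \(u_{tt}\) by \(u_{xx}\), which is then discretized directly. A minimal sketch follows (our illustration; the first-order upwind and one-sided second-difference operators are chosen to match the TVD building blocks used later in Example 1, and are not the only valid choices):

```python
import numpy as np

def F(u, dx):
    # first-order upwind discretization of u_x for u_t = u_x (periodic grid)
    return (np.roll(u, -1) - u) / dx

def F_tilde(u, dx):
    # Cauchy-Kovalevskaya: for u_t = u_x we have u_tt = u_xx, so we
    # discretize u_xx in space rather than differentiating F in time
    return (np.roll(u, -2) - 2.0 * np.roll(u, -1) + u) / dx**2

def taylor_series_step(u, dt, dx):
    # one Taylor series step: u^{n+1} = u^n + dt F(u^n) + (dt^2/2) F_tilde(u^n)
    return u + dt * F(u, dx) + 0.5 * dt**2 * F_tilde(u, dx)

# smooth sanity check: the exact solution of u_t = u_x is the left shift u0(x + t)
N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
u_new = taylor_series_step(u, dx, dx)          # dt = dx shifts by one cell
err = np.abs(u_new - np.roll(u, -1)).max()     # small for smooth data
```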
1.3 Motivation for the New Base Conditions for SSP Analysis
However, as we will see in the example below, there are spatial discretizations for which the second-derivative condition (13) is not satisfied but the forward Euler condition (12) and the Taylor series condition (14) are both satisfied. In such cases, the SSPSD methods derived in [5] may not preserve the desired strong stability properties. The existence of such spatial discretizations is the main motivation for the current work, in which we re-examine the strong stability properties of the explicit two-derivative multistage method (9) using the base conditions (12) and (14). Methods that preserve the strong stability properties of (12) and (14) are called, herein, SSPTS methods. The SSPTS approach increases our flexibility in the choice of spatial discretization over the SSPSD approach. Of course, this enhanced flexibility in the choice of spatial discretization is expected to result in limitations on the time discretization (e.g., the two-stage fourth-order method is SSPSD but not SSPTS).
Remark 1
This simple firstorder motivating example is chosen because these spatial discretizations are provably TVD and allow us to see clearly why the Taylor series base condition (14) is needed. In practice, we use higher order spatial discretizations such as WENO that do not have a theoretical guarantee of TVD, but perform well in practice. Such methods are considered in Examples 2 and 4 in the numerical tests, and provide us with similar results.
2 SSP Explicit Two-Derivative Runge–Kutta Methods
We can now easily establish sufficient conditions for an explicit method of the form (22) to be SSP:
Theorem 1
Proof
Definition 1
Remark 2
Theorem 1 gives us the conditions for the method (22) to be SSPTS for any time step \(\Delta t\le \mathcal{{C}}_{\text {TS}} \Delta t_{\text{FE}}\). We note, however, that while the corresponding conditions for Runge–Kutta methods have been shown to be necessary as well as sufficient, for the multiderivative methods we show only that these conditions are sufficient. This is a consequence of the fact that we define this notion of SSP based on the conditions (12) and (14); if a spatial discretization also satisfies a different condition (for example, (13)), many other methods of the form (22) also give strong stability preserving results. Notable among these is the two-derivative two-stage fourth-order method (15), which is SSPSD but not SSPTS. This means that solutions of (15) can be shown to satisfy the strong stability property \(\Vert u^{n+1} \Vert \le \Vert u^n \Vert\) for positive time steps, for the appropriate spatial discretizations, even though the conditions in Theorem 1 are not satisfied.
However, before we present the optimal methods in Sect. 3, we present the theoretical results on the allowable order of multistage multiderivative SSPTS methods.

Find the coefficient matrices S and \({\hat{S}}\)
that maximize the value of \(\mathcal{{C}}_{\text {TS}} = \max r\)
such that the relevant order conditions (summarized in Appendix 1) and the SSP conditions are all satisfied.
2.1 SSP Results for Explicit Two-Derivative Runge–Kutta Methods
In this paper, we consider explicit SSPTS two-derivative multistage methods that can be decomposed into a convex combination of (12) and (14), and thus preserve their strong stability properties. In our previous work [5] we studied SSPSD methods of the form (9) that can be written as convex combinations of (12) and (13). The following lemma explains the relationship between these two notions of strong stability.
Lemma 1
Any explicit method of the form (9) that can be written as a convex combination of the forward Euler formula (12) and the Taylor series formula (14) can also be written as a convex combination of the forward Euler formula (12) and the second-derivative formula (13).
Proof
This result shows that the SSPTS methods we study in this paper are a subset of the SSPSD methods in [5]. This allows us to use results about SSPSD methods when studying the properties of SSPTS methods.
The following lemma establishes the Shu–Osher form of an SSPSD method of the form (9). This form allows us to directly observe the convex combination of steps of the form (12) and (13), and thus easily identify the SSP coefficient \(\mathcal{{C}}_{\text {SD}}\).
Lemma 2
 (i)
all the coefficients are nonnegative,
 (ii)
\(\beta _{ij}=0\) whenever \(\alpha _{ij}=0,\)
 (iii)
\({\hat{\beta }}_{ij}=0\) whenever \({\hat{\alpha }}_{ij}=0,\)
Proof
Definition 2
The relationship between the coefficients in (9) and (24) allows us to conclude that the matrices S and \({\hat{S}}\) must contain only nonnegative coefficients.
Lemma 3
If an explicit method of the form (9) can be converted to the Shu–Osher form (24) with all nonnegative coefficients \(\alpha _{ij}, \beta _{ij}, {\hat{\alpha }}_{ij}, {\hat{\beta }}_{ij}\) for all i, j, then the coefficients \(a_{ij}, b_j, {\hat{a}}_{ij}, {\hat{b}}_j\) must all be nonnegative as well.
Proof
Now given \(\alpha _{ij} \ge 0\) and \(\beta _{ij} \ge 0\) for all i, j, and \(a_{kj} \ge 0\) and \({\hat{a}}_{kj} \ge 0\) for all \(1 \le j < k \le s\), the formulae (26c) and (26d) give the result \(b_{j} \ge 0\) and \({\hat{b}}_{j} \ge 0\). Thus, the coefficients \(a_{ij}, {\hat{a}}_{ij}, b_j, {\hat{b}}_j\) must all be nonnegative.
We wish to study only those methods for which the Butcher form (9) is unique. To do so, we follow Higueras [18] in extending the reducibility definition of Dahlquist and Jeltsch [19]. Other notions of reducibility exist, but for our purposes it is sufficient to define irreducibility as follows:
Definition 3
Lemma 4
Proof
We note that this same result, in the context of additive RungeKutta methods, is due to Higueras [18].
2.2 Order Barriers
For two-derivative multistage SSPTS methods, we find that similar results hold. A stage order of \(q=2\) is possible for explicit two-derivative methods (unlike explicit Runge–Kutta methods) because the first stage can be second order, i.e., a Taylor series method. However, since the first stage can be no more than second order, we have a bound on the stage order \(q \le 2\), which results in an order barrier of \(p \le 6\) for these methods. In the following results we establish these order barriers.
Lemma 5
Given an irreducible SSPTS method of the form (9), if \(b_j=0\), then the corresponding \({\hat{b}}_j = 0\).
Proof
In any SSPTS method the appearance of a second-derivative term \({\tilde{F}}\) can only happen as part of a Taylor series term. This tells us that \({\tilde{F}}\) must be accompanied by the corresponding F, meaning that whenever we have a nonzero \({\hat{a}}_{ij}\) or \({\hat{b}}_j\) term, the corresponding \({a}_{ij}\) or \({b}_j\) term must be nonzero.
Lemma 6
Proof
Any irreducible method (9) that can be written as a convex combination of (12) and (14) can also be written as a convex combination of (12) and (13), according to Lemma 1. Applying Lemma 4 we obtain the condition \(b+{\hat{b}} > 0\), componentwise. Now, Lemma 5 tells us that if any component \({b}_j =0\) then its corresponding \({\hat{b}}_j =0\), so that \(b_j + {\hat{b}}_j > 0\) for each j implies that \(b_j >0\) for each j.
Theorem 2
Proof
Theorem 3
Any irreducible explicit SSPTS method of the form (9) cannot have order \(p=7\).
Proof
Note that the order barriers do not hold for SSPSD methods, because SSPSD methods do not require that all components of the vector b must be strictly positive.
3 Optimized SSP Taylor Series Methods
To accomplish this, we develop and use a MATLAB optimization code [14] (similar to Ketcheson's code [26]) for finding optimal two-derivative multistage methods that preserve the SSP properties (12) and (14). The SSP coefficients of the optimized SSP explicit multistage two-derivative methods of order up to \(p=6\) (for different values of K) are presented in this section.

Find the coefficient matrices S and \({\hat{S}}\)
that maximize the value of \(\mathcal{{C}}_{\text {TS}} = \max r\)
such that the relevant order conditions and the SSP conditions (23a)–(23b) are all satisfied.

(M1) Methods that have the general form (9) with no simplifications.

(M2) Methods that are constrained to satisfy the stage order two (\(q=2\)) requirement (28),
$$\begin{aligned} \tau _2 = A c + {\hat{c}} - \frac{1}{2} c^2 = 0. \end{aligned}$$

(M3) Methods that satisfy the stage order two (\(q=2\)) requirement (28) and require only \({\dot{F}}(u^n)\), so they have only one second-derivative evaluation. This is equivalent to requiring that all values in \({\hat{A}}\) and \({\hat{b}}\), except those in the first column of the matrix and the first element of the vector, be zero.
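The (M2) constraint is easy to check numerically for a given tableau. As an illustration, the sketch below evaluates the residual \(\tau_2\); the two-stage fourth-order tableau used here is our transcription of that method as commonly reported in the two-derivative Runge–Kutta literature (an assumption, not copied from this paper), and its stages do have stage order \(q=2\):

```python
import numpy as np

def stage_order_two_residual(A, A_hat):
    """tau_2 = A c + c_hat - (1/2) c^2, where c = A e and c_hat = A_hat e."""
    e = np.ones(A.shape[0])
    c = A @ e
    c_hat = A_hat @ e
    return A @ c + c_hat - 0.5 * c**2

# two-stage fourth-order two-derivative tableau (assumed form, with the
# intermediate stage y2 = u^n + (dt/2) F(u^n) + (dt^2/8) Ftilde(u^n)):
A = np.array([[0.0, 0.0],
              [0.5, 0.0]])
A_hat = np.array([[0.0,   0.0],
                  [0.125, 0.0]])

tau2 = stage_order_two_residual(A, A_hat)
assert np.allclose(tau2, 0.0)   # the stages satisfy the q = 2 condition
```

An (M3) method would additionally require every entry of A_hat outside its first column (and every entry of \({\hat{b}}\) after the first) to vanish.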
3.1 Fourth-Order Methods
Using the optimization approach described above, we find fourth-order methods with \(s=3,4,5\) stages for a range of \(K=0.1, \ldots , 2.0\). In Fig. 1 we show the SSP coefficients of SSPTS methods of type (M1) and (M2) with \(s=3,4,5\) (in blue, red, and green) plotted against the value of K. The open stars indicate methods of type (M1) while the filled circles are methods of type (M2). Filled stars are (M1) markers overlaid with (M2) markers, indicating close if not equal SSP coefficients.
SSPTS coefficients of three-stage fourth-order SSPTS methods

K  0.1  0.2  0.5  1.0  1.5  2.0
(M2)  \(\mathcal{{C}}_{\text {TS}}\)  0.1995  0.3953  0.9757  1.8789  2.4954  2.7321
(M2)  \(\mathcal{{C}}_{\text{eff}}\)  0.0333  0.0659  0.1626  0.3131  0.4159  0.4553
(M3)  \(\mathcal{{C}}_{\text {TS}}\)  0.1818  0.3333  0.6667  1.0000  1.0000  1.0000
(M3)  \(\mathcal{{C}}_{\text{eff}}\)  0.0454  0.0833  0.1667  0.2500  0.2500  0.2500
SSPTS coefficients of four-stage fourth-order SSPTS methods

K  0.1  0.2  0.3  0.5  1.0  1.5  1.6  1.8  2.0
(M1)  \(\mathcal{{C}}_{\text {TS}}\)  0.4400  0.6921  0.9662  1.5617  2.6669  3.4735  3.5607  3.6759  3.7161
(M1)  \(\mathcal{{C}}_{\text{eff}}\)  0.0550  0.0865  0.1208  0.1952  0.3334  0.4342  0.4451  0.4595  0.4645
(M2)  \(\mathcal{{C}}_{\text {TS}}\)  0.3523  0.6569  0.9662  1.5617  2.6669  3.4735  3.5301  3.5850  3.6282
(M2)  \(\mathcal{{C}}_{\text{eff}}\)  0.0440  0.0821  0.1208  0.1952  0.3334  0.4342  0.4413  0.4481  0.4535
(M3)  \(\mathcal{{C}}_{\text {TS}}\)  0.3381  0.6102  0.8407  1.2174  1.8181  2.0596  2.0793  2.1030  2.1093
(M3)  \(\mathcal{{C}}_{\text{eff}}\)  0.0676  0.1220  0.1681  0.2435  0.3636  0.4119  0.4159  0.4206  0.4219
SSPTS coefficients of five-stage fourth-order methods

K  0.1  0.2  0.3  0.5  0.6  0.7  1.0  1.5  2.0
(M1)  \(\mathcal{{C}}_{\text {TS}}\)  1.5256  1.5768  1.6563  2.0934  2.4472  2.7819  3.5851  4.4371  4.9919
(M1)  \(\mathcal{{C}}_{\text{eff}}\)  0.1526  0.1577  0.1656  0.2093  0.2447  0.2782  0.3585  0.4437  0.4992
(M2)  \(\mathcal{{C}}_{\text {TS}}\)  0.5876  1.0003  1.3319  2.0934  2.4472  2.7819  3.5381  4.3629  4.6614
(M2)  \(\mathcal{{C}}_{\text{eff}}\)  0.0588  0.1000  0.1332  0.2093  0.2447  0.2782  0.3538  0.4363  0.4661
(M3)  \(\mathcal{{C}}_{\text {TS}}\)  0.5631  0.9296  1.2057  1.6551  1.8554  2.0300  2.4407  2.8748  2.9768
(M3)  \(\mathcal{{C}}_{\text{eff}}\)  0.0939  0.1549  0.2009  0.2758  0.3092  0.3383  0.4068  0.4791  0.4961
Four-Stage SSPTS Methods While four-stage fourth-order explicit SSP Runge–Kutta methods do not exist, four-stage fourth-order SSPTS explicit two-derivative Runge–Kutta methods do. Four-stage fourth-order methods do not necessarily satisfy the stage order two (\(q=2\)) condition. These methods have a more nuanced behavior: for very small \(K<0.2\), the optimized SSP methods have stage order \(q=1\). For \(0.2< K< 1.6\) the optimized SSP methods have stage order \(q=2\). Once K becomes larger, for \(K \ge 1.6\), the optimized SSP methods are once again of stage order \(q=1\). However, the difference in the SSP coefficients is very small (so small it does not show on the graph), so the (M2) methods can be used without significant loss of efficiency.
As seen in Table 2, the methods with the special structure (M3) have smaller SSP coefficients. But when we look at the effective SSPTS coefficient we notice that, once again, for smaller K they are more efficient. Table 2 shows that the (M3) methods are more efficient when \(K \le 1.5\), and remain competitive for larger values of K.
Five-Stage Methods The optimized five-stage fourth-order methods have stage order \(q=2\) for the values of \(0.5 \le K \le 7\), and otherwise have stage order \(q=1\). The SSP coefficients of these methods are shown as the green line in Fig. 1, and the SSP and effective SSP coefficients for all three types of methods are compared in Table 3. We observe that these methods have higher effective SSP coefficients than the corresponding four-stage methods.
3.2 Fifth-Order SSPTS Methods
While fifth-order explicit SSP Runge–Kutta methods do not exist, the addition of a second derivative which satisfies the Taylor series condition allows us to find explicit SSPTS methods of fifth order. For fifth order, we have the result (in Sect. 2.2 above) that all methods must satisfy the stage order \(q=2\) condition, so we consider only (M2) and (M3) methods. In Fig. 2 we show the SSPTS coefficients of M2(s,5,K) methods for \(s=4,5,6\).
Four-Stage Methods Four-stage fifth-order methods exist, and their SSPTS coefficients are shown in blue in Fig. 2. We were unable to find M3(4,5,K) methods, possibly due to the paucity of available coefficients for this form.
Five-Stage Methods The SSP coefficient of the five-stage M2 methods can be seen in red in Fig. 2. We observe that the SSP coefficient of the M2(5,5,K) methods plateaus with respect to K. As shown in Table 4, methods with the form (M3) have a significantly smaller SSP coefficient than that of (M2). However, the effective SSP coefficient is more informative here, and we see that the (M3) methods are more efficient for small values of \(K \le 0.5\), but not for larger values.
SSPTS coefficients and effective SSPTS coefficients of fifth-order methods

K  0.1  0.2  0.3  0.5  1.0  1.5  1.6  1.8  2.0
M2(5,5,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.3802  0.7448  1.0892  1.6877  2.9281  3.8102  3.8479  3.8879  3.8971
M2(5,5,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0380  0.0745  0.1089  0.1688  0.2928  0.3810  0.3848  0.3888  0.3897
M3(5,5,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.3298  0.5977  0.8186  1.0625  1.0625  1.0625  1.0625  1.0625  1.0625
M3(5,5,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0550  0.0996  0.1364  0.1771  0.1771  0.1771  0.1771  0.1771  0.1771
M2(6,5,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.5677  1.0230  1.4581  2.2102  3.8749  4.9201  5.0002  5.0903  5.1301
M2(6,5,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0473  0.0852  0.1215  0.1842  0.3229  0.4100  0.4167  0.4242  0.4275
M3(6,5,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.5398  0.9370  1.2592  1.6914  1.8208  1.8208  1.8208  1.8208  1.8208
M3(6,5,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0771  0.1339  0.1799  0.2416  0.2601  0.2601  0.2601  0.2601  0.2601
SSPTS coefficients and effective SSPTS coefficients of sixth-order SSPTS methods

K  0.1  0.2  0.3  0.5  1.0  1.5  2.0
M2(5,6,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.1441  0.2280  0.2780  0.3242  0.3500  0.3536  0.3555
M2(5,6,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0144  0.0228  0.0278  0.0324  0.0350  0.0354  0.0355
M2(6,6,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.2944  0.5157  0.6725  0.9044  1.5225  2.0002  2.1966
M2(6,6,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0245  0.0430  0.0560  0.0754  0.1269  0.1667  0.1831
M2(7,6,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.3981  0.7158  0.9734  1.4217  2.0376  2.5648  2.7794
M2(7,6,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0284  0.0511  0.0695  0.1016  0.1455  0.1832  0.1985
M3(7,6,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.3547  0.6007  0.8059  0.8941  0.8947  0.8947  0.8947
M3(7,6,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0443  0.0751  0.1007  0.1118  0.1118  0.1118  0.1118
M3(8,6,K)  \(\mathcal{{C}}_{\text {TS}}\)  0.5495  0.9754  1.2882  1.6435  1.7369  1.7369  1.7369
M3(8,6,K)  \(\mathcal{{C}}_{\text{eff}}\)  0.0611  0.1084  0.1431  0.1826  0.1930  0.1930  0.1930
3.3 Sixth-Order SSPTS Methods
3.4 Comparison with Existing Methods
Next, we wish to compare the methods in this work to those in [32], which was the first paper to consider an SSP property based on the forward Euler and Taylor series base conditions. The approach used in our work is similar to that in [32], where the authors consider building time integration schemes which can be composed as convex combinations of forward Euler and Taylor series time steps, and aim to find methods which are optimized for the largest SSP coefficients. However, there are several differences between our approach and that of [32], with the result that in this paper we are able to find more methods, of higher order, and with better SSP coefficients. In addition, in the present work we find and prove an order barrier for SSPTS methods.
The first difference between our approach and the approach in [32] is that we allow computations of \({\dot{F}}\) at the intermediate values, rather than only \({\dot{F}}(u^n)\). In other words, we consider SSPTS methods that are not of type M3, while the methods considered in [32] are all of type M3. In some cases, when we restrict our search to M3 methods and \(K=1\), we find methods with the same SSP coefficient as in [32]. For example, HBT34 matches our SSPTS M3(3,4,1) method with an SSP coefficient of \(\mathcal{{C}}_{\text {TS}}=1\), HBT44 matches our SSPTS M3(4,4,1) method with \(\mathcal{{C}}_{\text {TS}}=\frac{20}{11}\), HBT54 matches our SSPTS M3(5,4,1) method with \(\mathcal{{C}}_{\text {TS}}=2.441\), and HBT55 matches our SSPTS M3(5,5,1) method with an SSP coefficient of \(\mathcal{{C}}_{\text {TS}}=1.062\). While methods of type M3 have their advantages, they are sometimes suboptimal in terms of efficiency, as we point out in the tables.
The second difference between the SSPTS methods in this paper and the methods in [32] is that in [32] only one method of order \(p>4\) is reported, while we have many fifth- and sixth-order methods of various types and stages, optimized for a variety of K values.
The most fundamental difference between our approach and the approach in [32] is that our methods are optimized for the relationship between the forward Euler restriction and the Taylor series restriction, while the time-step restriction in the methods of [32] is defined as the most restrictive of the forward Euler and Taylor series time-step conditions. Respecting the more restrictive of the two conditions still satisfies the nonlinear stability property, but this approach does not allow for a balance between the two restrictions, which can lead to significantly more restrictive conditions. In our approach we use the relationship between the two time-step restrictions to select optimal methods. For this reason, the methods we find have larger allowable time steps in many cases. To understand this better, consider the case where the forward Euler condition is \(\Delta t_{\text {FE}} \le \Delta x\) and the Taylor series condition is \(\Delta t_{\text {TS}} \le \frac{1}{2}\Delta x\). In the approach used in [32], the base time-step restriction is then \(\Delta t_{\max } = \min \{ \Delta t_{\text {FE}}, \Delta t_{\text {TS}} \} \le \frac{1}{2}\Delta x\). The HBT23 method in [32] is a third-order scheme with two stages which has an SSP coefficient of \(\mathcal{{C}}_{\text {TS}}=1\), so the allowable time step with this scheme will be \(\Delta t\le \mathcal{{C}}_{\text {TS}} \Delta t_{\max } \le \frac{1}{2}\Delta x\). On the other hand, using our optimal SSPTS M2(2,3,0.5) scheme, which has an SSP coefficient \(\mathcal{{C}}_{\text {TS}}=0.75\), the allowable time step is \(\Delta t\le \mathcal{{C}}_{\text {TS}} \Delta t_{\text {FE}} \le \frac{3}{4} \Delta x\), a 50% increase. This is not only true when \(K<1\): consider the case where \(\Delta t_{\text {FE}}\le \frac{1}{2}\Delta x\) and \(\Delta t_{\text {TS}} \le \Delta x\).
Once again the HBT23 method in [32] will have a time-step restriction of \(\Delta t\le \mathcal{{C}}_{\text {TS}} \Delta t_{\max } \le \frac{1}{2}\Delta x\), while our M2(2,3,2) method has an SSP coefficient \(\mathcal{{C}}_{\text {TS}}=1.88\), so that the overall time-step restriction would be \(\Delta t\le \frac{1.88}{2} \Delta x=0.94 \Delta x\), which is 88% larger. Even when the two base conditions are the same (i.e., \(K=1\)) and we have \(\Delta t_{\text {FE}} \le \Delta x\) and \(\Delta t_{\text {TS}} \le \Delta x\), the HBT23 method in [32] has an SSP coefficient of \(\mathcal{{C}}_{\text {TS}}=1\) while our SSPTS M2(2,3,1) method has an SSP coefficient \(\mathcal{{C}}_{\text {TS}}=1.5\), so that our method allows a time step that is 50% larger. These simple cases demonstrate that our methods, which are optimized for the value of K, will usually allow a larger SSP coefficient than the methods obtained in [32].
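The three comparisons above reduce to simple arithmetic, which can be tabulated directly (a sketch; the SSP coefficients are the values quoted in the text, and the allowable time steps are expressed in units of \(\Delta x\)):

```python
def hbt_dt(dt_fe, dt_ts, C=1.0):
    # HBT-style restriction: C times the more restrictive base step
    return C * min(dt_fe, dt_ts)

def sspts_dt(dt_fe, C):
    # SSPTS restriction: C_TS times the forward Euler step, where the
    # method was optimized for the ratio K = dt_TS / dt_FE
    return C * dt_fe

# case 1: dt_FE = 1, dt_TS = 1/2  (K = 1/2)
assert hbt_dt(1.0, 0.5) == 0.5
assert sspts_dt(1.0, 0.75) == 0.75               # M2(2,3,0.5): 50% larger

# case 2: dt_FE = 1/2, dt_TS = 1  (K = 2)
assert hbt_dt(0.5, 1.0) == 0.5
assert abs(sspts_dt(0.5, 1.88) - 0.94) < 1e-12   # M2(2,3,2): 88% larger

# case 3: dt_FE = dt_TS = 1  (K = 1)
assert hbt_dt(1.0, 1.0) == 1.0
assert sspts_dt(1.0, 1.5) == 1.5                 # M2(2,3,1): 50% larger
```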
4 Numerical Results
4.1 Overview of Numerical Tests
We wish to test our methods on what are now considered standard benchmark tests in the SSP community. In this subsection we preview our results, which we then present in more detail throughout the remainder of the section.
First, in the tests in Sect. 4.3 we focus on how the strong stability properties of these methods are observed in practice, by considering the total variation of the numerical solution. We focus on two scalar PDEs: the linear advection equation and Burgers' equation, using simple first-order spatial discretizations which are known to satisfy a total variation diminishing property over time for the forward Euler and Taylor series building blocks. We want to ensure that our numerical approximations to these solutions satisfy similar properties as long as the predicted SSP time-step restriction, \(\Delta t \le \mathcal{{C}}_{\text {TS}} \Delta t_{\text {FE}}\), is respected. These scalar one-dimensional partial differential equations are chosen for their simplicity, so that we may understand the behavior of the numerical solution; however, the discontinuous initial conditions may lead to instabilities if standard time discretization techniques are employed. Our tests show that the methods we design here preserve these properties as expected by the theory.
In Example 2, we extend the results from Example 1 to the case where we use the higher-order weighted essentially non-oscillatory (WENO) method, which is not provably TVD but gives results that have very small increases in total variation. We demonstrate that our methods outperform other methods, such as the SSPSD MDRK methods in [5], and that non-SSP methods that are standard in the literature do not preserve the TVD property for any time step.
It is important to notice that the SSPTS methods we designed depend on the value of K in (14). However, in practice we often do not know the exact value of K. In Example 3 we investigate what happens when we use spatial discretizations with a given value of K with time discretization methods designed for an incorrect value of K. We conclude that although in some cases a smaller step size is required, for methods of type M3 there is generally no adverse result from selecting the wrong value of K.
In Example 4 we investigate the increased flexibility in the choice of spatial discretization that results from relying on the (12) and (14) base conditions. The only constraint in the choice of differentiation operators \(D_x\) and \({\tilde{D}}_x\) (described at the end of Sect. 1.2) is that the resulting building blocks must satisfy the monotonicity conditions (12) and (14) in the desired convex functional \(\Vert \cdot \Vert\). As noted above, this constraint is less restrictive than requiring that (12) and (13) are satisfied: any spatial discretizations for which (12) and (13) are satisfied will also satisfy (14). However, there are some spatial discretizations that satisfy (12) and (14) that do not satisfy (13). In Example 4 we find that choosing spatial discretizations that satisfy (12) and (14) but not (13) allows for larger time steps before the rise in total variation. And finally, in Example 5, we demonstrate the positivitypreserving behavior of our methods when applied to a nonlinear system of equations.
4.2 On the Numerical Implementation of the Second Derivative
4.3 Example 1: TVD FirstOrder Finite Difference Approximations
In this section we use first-order spatial discretizations that are provably total variation diminishing (TVD), coupled with a variety of time-stepping methods. We look at the maximal rise in total variation.
Example 1: \(\mathcal{{C}}_{\text {TS}}^{\text {pred}}\) and \(\mathcal{{C}}_{\text {TS}}^{\text {obs}}\) for SSPTS M2 and M3 methods

Method  \(\mathcal{{C}}_{\text {TS}}^{\text {pred}}\)  \(\mathcal{{C}}_{\text {TS}}^{\text {obs}}\)  \(\mathcal{{C}}_{\text{eff}}^{\text {pred}}\)  \(\mathcal{{C}}_{\text{eff}}^{\text {obs}}\)

Linear advection
FE  1.0000  1.0000  1.00  1.00
TS  1.0000  1.0000  0.50  0.50
M2(3,4,1)  1.8788  1.8788  0.31  0.31
M3(3,4,1)  1.0000  1.0000  0.25  0.25
M2(4,4,1)  2.6668  2.6668  0.33  0.33
M3(4,4,1)  1.8181  1.8181  0.36  0.36
M2(5,4,1)  3.5381  3.6291  0.35  0.36
M3(5,4,1)  2.4406  2.4406  0.40  0.40
M2(4,5,1)  2.1864  2.2239  0.27  0.27
M2(5,5,1)  2.9280  3.1681  0.29  0.31
M3(5,5,1)  1.0625  1.5710  0.17  0.26
M2(6,5,1)  3.8749  3.8749  0.32  0.32
M3(6,5,1)  1.8207  1.9562  0.26  0.27
M2(5,6,1)  0.3500  1.9398  0.03  0.19
M2(6,6,1)  1.5225  2.3548  0.12  0.19
M2(7,6,1)  2.1150  2.3695  0.15  0.19
M3(7,6,1)  0.8946  1.3207  0.11  0.16
M3(8,6,1)  1.7369  1.9861  0.19  0.22

Burgers'
FE  1.0000  1.0000  1.00  1.00
TS  1.0000  1.0000  0.50  0.50
M2(3,4,1)  1.8788  1.8788  0.31  0.31
M3(3,4,1)  1.0000  1.0000  0.25  0.25
M2(4,4,1)  2.6668  2.6668  0.33  0.33
M3(4,4,1)  1.8181  1.8181  0.36  0.36
M2(5,4,1)  3.5381  3.6102  0.35  0.36
M3(5,4,1)  2.4406  2.4406  0.40  0.40
M2(4,5,1)  2.1864  2.2130  0.27  0.27
M2(5,5,1)  2.9280  3.1009  0.29  0.31
M3(5,5,1)  1.0625  1.5436  0.17  0.25
M2(6,5,1)  3.8749  3.8749  0.32  0.32
M3(6,5,1)  1.8207  2.0003  0.26  0.28
M2(5,6,1)  0.3500  1.9239  0.03  0.19
M2(6,6,1)  1.5225  2.2875  0.12  0.19
M2(7,6,1)  2.1150  2.3189  0.15  0.16
M3(7,6,1)  0.8946  1.2893  0.11  0.16
M3(8,6,1)  1.7369  1.9734  0.19  0.21

Forward Euler condition: \(u^{n+1}_j = u^n_j + \frac{\Delta t}{\Delta x} \left( u^n_{j+1} - u^n_j \right)\) is TVD for \(\Delta t \le \Delta x\), and

Taylor series condition: \(u^{n+1}_j = u^n_j + \frac{\Delta t}{\Delta x} \left( u^n_{j+1} - u^n_j \right) + \frac{1}{2} \left( \frac{\Delta t}{\Delta x} \right) ^2 \left( u^n_{j+2} - 2 u^n_{j+1} + u^n_{j} \right)\) is TVD for \(\Delta t \le \Delta x\).
For all of our simulations in this example, we use a fixed grid of \(M=601\) points, for a grid size \(\Delta x = \frac{1}{600}\), and a time step \(\Delta t = \lambda \Delta x\), where we vary \(\lambda\) from \(\lambda = 0.05\) until beyond the point where the TVD property is violated. We step each method forward by \(N=50\) time steps and compare the performance of the various time-stepping methods constructed earlier in this work, for \(K = 1\). We define the observed SSP coefficient \(\mathcal{{C}}_{\text {TS}}^{\text {obs}}\) as the largest multiple of \(\Delta t_{\text{FE}}\) for which the maximal rise in total variation remains below \(10^{-10}\).
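To make the observed-coefficient computation concrete, the following minimal Python sketch (not the code used in the paper; helper names such as `taylor_series_step` and `max_tv_rise` are our own) applies the forward Euler and Taylor series building blocks for \(u_t = u_x\) to step-function data and monitors the rise in total variation:

```python
import numpy as np

def total_variation(u):
    """Discrete total variation sum_j |u_{j+1} - u_j| on a periodic grid."""
    return np.sum(np.abs(np.roll(u, -1) - u))

def forward_euler_step(u, lam):
    """Forward Euler with first-order upwind differences for u_t = u_x;
    TVD for lam = dt/dx <= 1 (the forward Euler condition above)."""
    return u + lam * (np.roll(u, -1) - u)

def taylor_series_step(u, lam):
    """Taylor series step built from the same upwind operator; the extra
    term is 0.5*lam^2*(u_{j+2} - 2*u_{j+1} + u_j), TVD for lam <= 1."""
    return (u + lam * (np.roll(u, -1) - u)
            + 0.5 * lam ** 2 * (np.roll(u, -2) - 2.0 * np.roll(u, -1) + u))

# Step-function data on a 601-point periodic grid (illustrative stand-in
# for the grid used in this example).
x = np.linspace(0.0, 1.0, 601, endpoint=False)
u0 = np.where((x >= 0.4) & (x <= 0.6), 1.0, 0.0)

def max_tv_rise(step, lam, nsteps=50):
    """Maximal rise in total variation over nsteps steps, as in the text."""
    u = u0.copy()
    tv0 = total_variation(u)
    rise = 0.0
    for _ in range(nsteps):
        u = step(u, lam)
        rise = max(rise, total_variation(u) - tv0)
    return rise

for lam in (0.5, 1.0, 1.5):
    print(lam, max_tv_rise(taylor_series_step, lam))
```

For \(\lambda \le 1\) the Taylor series step is a convex combination of grid values (coefficients \(1-\lambda +\tfrac{1}{2}\lambda ^2\), \(\lambda -\lambda ^2\), and \(\tfrac{1}{2}\lambda ^2\)), so the total variation cannot rise; for \(\lambda > 1\) the coefficient of \(u^n_{j+1}\) turns negative and the TVD property fails.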
We verify that the observed values of \(\Delta t_{\text{FE}}\) and K match the predicted values, and test this problem to see how well the observed SSP coefficient \(\mathcal{{C}}_{\text {TS}}^{\text {obs}}\) matches the predicted SSP coefficient \(\mathcal{{C}}_{\text {TS}}^{\text {pred}}\) for the fourth-, fifth-, and sixth-order methods. The results are listed in the upper half of Table 6.
The results from these two studies show that the SSPTS methods provide a reliable guarantee of the allowable time step for which the method preserves the strong stability condition in the desired norm. For methods of order \(p=4\), we observe that the SSP coefficient is sharp: the predicted and observed values of the SSP coefficient are identical for all the fourthorder methods tested. For methods of higher order (\(p=5,6\)) the observed SSP coefficient is often significantly higher than the minimal value guaranteed by the theory.
4.4 Example 2: Weighted Essentially Non-oscillatory (WENO) Approximations
For the spatial discretization, we use the fifth-order finite difference WENO method [20] in space, as this is a high-order method that can handle shocks. We describe this method in Appendix 3. Recall that the motivation for the development of SSP multistage multiderivative time-stepping is for use in conjunction with high-order methods for problems with shocks. Ideally, the specially designed spatial discretizations satisfy (12) and (14). Although the weighted essentially non-oscillatory (WENO) methods do not have a theoretical guarantee of this type, in practice we observe that these methods do control the rise in total variation, as long as the step size is below a certain threshold.
Below, we refer to the WENO method on a flux with \(f'(u) \ge 0\) as \(\hbox {WENO}^+\), defined in (41), and to the corresponding method on a flux with \(f'(u) \le 0\) as \(\hbox {WENO}^-\), defined in (42). Because \(f'(u)\) is strictly nonnegative in this example, we do not need to use flux splitting, and use \(D_x =\hbox {WENO}^+\). For the second derivative we have the freedom to use \({\tilde{D}}_x=\hbox {WENO}^+\) or \({\tilde{D}}_x=\hbox {WENO}^-\). In this example, we use \({\tilde{D}}_x=D_x=\hbox {WENO}^+\); in Example 4 below we show that this is the more efficient choice.
In Fig. 4(a), we compare the performance of our SSPTS M3(7,5,1) and SSPTS M2(4,5,1) methods, which both have eight function evaluations per time step, and our SSPTS M3(5,5,1), which has six function evaluations per time step, to the SSPSD MDRK(3,5,2) method of [5] and the non-SSP RK(6,5) Dormand-Prince method [8], which also have six function evaluations per time step. We use the SSPSD MDRK(3,5,2) method (designed for \(K=2\)) because it performs best among the explicit two-derivative multistage methods designed for different values of K. Clearly, the non-SSP method is not safe to use on this example. The M3 methods are the most efficient, allowing the largest time step per function evaluation before the total variation begins to rise.
The same conclusion holds for the sixth-order methods. In Fig. 4(b), we compare our SSPTS M3(9,6,1) and M2(5,6,1) methods, which both have ten function evaluations per time step, and our M3(7,6,1), which has eight function evaluations per time step, to the SSPSD MDRK(4,6,1) and the non-SSP RK(8,6) method given in Verner's paper [52], which also have eight function evaluations per time step. Once again, the non-SSP method is not safe to use on this example, and the M3 methods are the most efficient, allowing the largest time step per function evaluation before the total variation begins to rise.
4.5 Example 3: Testing Methods Designed with Various Values of K
4.6 Example 4: The Benefit of Different Base Conditions
In [5] we used the choice \(D_x= {\text {WENO}}^{+}\), defined in (41), followed by \({\tilde{D}}_x={\text {WENO}}^{-}\), defined in (42), by analogy with the first-order finite difference for the linear advection case \(U_t = U_x\), where a differentiation operator \(D_x^{+}\) followed by the downwind differentiation operator \(D_x^{-}\) produces a centered difference for the second derivative. This approach makes sense for those cases because it respects the properties of the flux for the second derivative and consequently satisfies the second-derivative condition (13). However, if we simply wish the Taylor series formulation to satisfy a TVD-like condition, we are free to use the same operator (\({\text {WENO}}^{+}\) or \({\text {WENO}}^{-}\), as appropriate) twice, and indeed this gives a larger allowable \(\Delta t\).
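This difference can already be seen with first-order differences on the model problem \(U_t = U_x\). The sketch below is illustrative only (it uses upwind differences, not WENO, and the helper names are ours): it compares the Taylor series step whose second derivative comes from \(D_x^{+}\) followed by \(D_x^{-}\) (a centered second difference, the analogue of \({\text {WENO}}^{+}\) followed by \({\text {WENO}}^{-}\)) with the step built by applying \(D_x^{+}\) twice, and estimates the largest \(\lambda = \Delta t/\Delta x\) with no rise in total variation:

```python
import numpy as np

def tv(u):
    """Discrete total variation on a periodic grid."""
    return np.sum(np.abs(np.roll(u, -1) - u))

def taylor_step(u, lam, centered):
    """One Taylor series step for u_t = u_x using first-order differences.
    centered=True:  D_x^+ then D_x^- (centered second difference);
    centered=False: D_x^+ applied twice (same operator for both terms)."""
    du = np.roll(u, -1) - u
    if centered:
        d2u = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    else:
        d2u = np.roll(u, -2) - 2.0 * np.roll(u, -1) + u
    return u + lam * du + 0.5 * lam ** 2 * d2u

def observed_threshold(centered, lams, nsteps=50):
    """Largest lam in lams with no rise in total variation over nsteps steps."""
    x = np.linspace(0.0, 1.0, 601, endpoint=False)
    u0 = np.where((x >= 0.4) & (x <= 0.6), 1.0, 0.0)
    passing = [0.0]
    for lam in lams:
        u = u0.copy()
        tv0 = tv(u)
        ok = True
        for _ in range(nsteps):
            u = taylor_step(u, lam, centered)
            if tv(u) > tv0 + 1e-12:
                ok = False
                break
        if ok:
            passing.append(lam)
    return max(passing)

lams = [round(0.05 * i, 2) for i in range(1, 26)]  # 0.05, 0.10, ..., 1.25
print(observed_threshold(True, lams))   # centered second difference: ~0.6
print(observed_threshold(False, lams))  # same upwind operator twice: ~1.0
```

For this model problem the centered variant is TVD only for \(\lambda \le (\sqrt{5}-1)/2 \approx 0.618\) (the coefficient of \(u^n_j\) is \(1-\lambda -\lambda ^2\)), while applying the upwind operator twice is TVD up to \(\lambda = 1\), mirroring the larger allowable time step observed when \({\text {WENO}}^{+}\) is used for both derivatives.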
4.7 Example 5: Nonlinear Shallow Water Equations
Table 7 The predicted and observed values of \(\lambda = \alpha \frac{\Delta t}{\Delta x}\) (where \(\Delta x=\frac{1}{200}\)) for which positivity of the height of the water is preserved in the shallow water equations in Example 5

Method  \(\lambda ^{\text {pred}}\)  \(\lambda ^{\text {obs}}\)  Method  \(\lambda ^{\text {pred}}\)  \(\lambda ^{\text {obs}}\)

Forward Euler  1.00000  1.01058  Taylor series  1.00000  1.02598
Dormand-Prince  0.00000  0.00000  non-SSP RK(8,6)  0.00000  0.00000
SSPSD MDRK(3,5,2)  –  1.03176  SSPSD MDRK(4,6,1)  –  1.07803
SSPTS M2(4,5,1)  2.18648  3.01005  SSPTS M2(5,6,1)  0.35001  2.48411
SSPTS M3(5,5,1)  1.06253  1.78593  SSPTS M3(7,6,1)  0.89468  1.64084
SSPTS M3(6,5,1)  1.82079  2.12579  SSPTS M3(9,6,1)  2.59860  3.03387
First, we investigate the behavior of the base methods in terms of the positivity-preserving time step; that is, we seek numerical values for \(\Delta t_{\text{FE}}\) and K. To do so, we numerically study the positivity behavior of the forward Euler and Taylor series steps, evolving the solution forward over many time steps for different values of \(\lambda = \alpha \frac{\Delta t}{\Delta x}\) to identify the predicted positivity-preserving value \(\lambda ^{\text {pred}}\). Using this approach, we see that as we increase the number of steps, the predicted positivity-preserving values approach \(\lambda ^{\text {pred}}_{\text {FE}} \rightarrow 1\) and \(\lambda ^{\text {pred}}_{\text {TS}} \rightarrow 1\) for forward Euler and Taylor series, respectively. We are not able to numerically identify the value \({\tilde{K}}\) resulting from the second-derivative condition, since that building block does not approximate the solution of the ODE at all and therefore cannot be evolved forward in time.
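The \(\lambda\)-scan described above can be organized as a simple bisection on the positivity threshold. The sketch below is illustrative only: the helper names are ours, and first-order upwind advection of nonnegative data stands in for the shallow water solver, since for that model problem the positivity-preserving threshold is known to be exactly \(\lambda = 1\):

```python
import numpy as np

def positive_after(step, u0, lam, nsteps):
    """True if the solution stays nonnegative for nsteps steps at this lam."""
    u = u0.copy()
    for _ in range(nsteps):
        u = step(u, lam)
        if np.min(u) < 0.0:
            return False
    return True

def observed_lambda(step, u0, nsteps=60, lo=0.0, hi=4.0, iters=40):
    """Bisection for the largest lam in [lo, hi] preserving positivity."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if positive_after(step, u0, mid, nsteps):
            lo = mid
        else:
            hi = mid
    return lo

# Stand-in problem: forward Euler with first-order upwind differences for
# u_t = u_x keeps nonnegative data nonnegative exactly when lam <= 1.
def fe_upwind(u, lam):
    return u + lam * (np.roll(u, -1) - u)

x = np.linspace(0.0, 1.0, 201, endpoint=False)
u0 = np.where((x >= 0.4) & (x <= 0.6), 1.0, 0.0)
print(observed_lambda(fe_upwind, u0))  # approaches 1.0
```

The same harness applies to any one-step map `step(u, lam)`; for the shallow water test one would check positivity of the water height at every stage rather than of a scalar profile.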
In Table 7 we compare the positivity-preserving time step of a variety of numerical time integrators. We consider the fifth-order SSPTS methods M2(4,5,1), M3(5,5,1), and M3(6,5,1), and compare their performance to the SSPSD MDRK(3,5,2) method in [5] and the non-SSP Dormand-Prince method. We also consider the sixth-order SSPTS methods M2(5,6,1), M3(7,6,1), and M3(9,6,1), as well as the SSPSD MDRK(4,6,1) from [5] and the non-SSP RK(8,6) method. Positivity of the water height is measured at each stage for a total of \(N=60\) time steps. We report the largest allowable value of \(\lambda = \alpha \frac{\Delta t}{\Delta x}\) (where \(\alpha\) is the maximal wavespeed over the domain) for which the solution remains positive. For each method, the predicted values \(\lambda ^{\text {pred}}\) are obtained by multiplying the SSP coefficient \(\mathcal{{C}}_{\text {TS}}\) of that method by \(\lambda ^{\text {pred}}_{\text {FE}} = \lambda ^{\text {pred}}_{\text {TS}} = 1\). For the SSPSD MDRK methods we do not make a prediction, as we are not able to identify the value \({\tilde{K}}\) resulting from the second-derivative condition.
In Table 7 we show that all of our SSPTS methods preserve the positivity of the solution for values larger than those predicted by the theory (\(\lambda ^{\text {obs}} > \lambda ^{\text {pred}}\)), and that even for the SSPSD MDRK methods there is a large region of values \(\lambda ^{\text {obs}}\) for which the solution remains positive. However, the non-SSP methods permit no positive time step that retains positivity of the solution, highlighting the importance of SSP methods.
5 Conclusions
In [5] we introduced a formulation and base conditions to extend the SSP framework to multistage multiderivative time-stepping methods, and the resulting SSPSD methods. While the choice of base conditions we used in [5] gives us more flexibility in finding SSP time-stepping schemes, it limits the flexibility in the choice of the spatial discretization. In the current paper we introduce an alternative SSP formulation based on the conditions (12) and (14) and investigate the resulting explicit two-derivative multistage SSPTS time integrators. These base conditions are relevant because some commonly used spatial discretizations may not satisfy the second-derivative condition (13) which we required in [5], but do satisfy the Taylor series condition (14). This approach decreases the flexibility in our choice of time discretization because some time discretizations that can be decomposed into convex combinations of (12) and (13) cannot be decomposed into convex combinations of (12) and (14). However, it increases the flexibility in our choice of spatial discretizations, as we may now consider spatial methods that satisfy (12) and (14) but not (13). In the numerical tests we showed that this increased flexibility allowed for more efficient simulations in several cases.
In this paper, we proved that explicit SSPTS methods have a maximum obtainable order of \(p=6\). Next we formulated the proper optimization procedure to generate SSPTS methods. Within this new class we were able to organize our schemes into three subcategories that reflect the different simplifications used in the optimization. We obtained methods up to and including order \(p=6\), thus breaking the SSP order barrier for explicit SSP Runge-Kutta methods. Our numerical tests show that the SSPTS explicit two-derivative methods perform as expected, preserving the strong stability properties satisfied by the base conditions (12) and (14) under the predicted time-step conditions. Our simulations demonstrate the sharpness of the SSPTS condition in some cases, and the need for SSPTS time-stepping methods. Furthermore, the numerical results indicate that the added freedom in the choice of spatial discretization results in larger allowable time steps. The coefficients of the SSPTS methods described in this work can be downloaded from [14].
Footnotes
1. Note that here we use \({\dot{F}}\) to indicate that these methods are designed for the exact time derivative of F. However, in practice we use the approximation \({\tilde{F}}\), as explained above.
2. In this work we use \(\odot\) to denote componentwise multiplication.
3. These efficiency measures do not account for the fact that the methods in [32] are of type SSPTS M3 and so require fewer function evaluations. Correcting for this, our methods are still 10%–40% more efficient.
Acknowledgements
The work of D. C. Seal was supported in part by the Naval Academy Research Council. The work of S. Gottlieb and Z. J. Grant was supported by the AFOSR Grant #FA9550-15-1-0235. A part of this research is sponsored by the Office of Advanced Scientific Computing Research, US Department of Energy, and was performed at the Oak Ridge National Laboratory, which is managed by UT-Battelle, LLC under Contract no. DE-AC05-00OR22725. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
References
1. Bresten, C., Gottlieb, S., Grant, Z., Higgs, D., Ketcheson, D.I., Németh, A.: Strong stability preserving multistep Runge-Kutta methods. Math. Comput. 86, 747–769 (2017)
2. Bunya, S., Kubatko, E.J., Westerink, J.J., Dawson, C.: A wetting and drying treatment for the Runge-Kutta discontinuous Galerkin solution to the shallow water equations. Comput. Methods Appl. Mech. Eng. 198, 1548–1562 (2009)
3. Chan, R.P.K., Tsai, A.Y.J.: On explicit two-derivative Runge-Kutta methods. Numer. Algorithms 53, 171–194 (2010)
4. Cheng, J.-B., Toro, E.F., Jiang, S., Tang, W.: A sub-cell WENO reconstruction method for spatial derivatives in the ADER scheme. J. Comput. Phys. 251, 53–80 (2013)
5. Christlieb, A., Gottlieb, S., Grant, Z., Seal, D.C.: Explicit strong stability preserving multistage two-derivative time-stepping schemes. J. Sci. Comput. 68, 914–942 (2016)
6. Cockburn, B., Shu, C.-W.: TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws II: general framework. Math. Comput. 52, 411–435 (1989)
7. Daru, V., Tenaud, C.: High order one-step monotonicity-preserving schemes for unsteady compressible flow calculations. J. Comput. Phys. 193, 563–594 (2004)
8. Dormand, J.R., Prince, P.J.: A family of embedded Runge-Kutta formulae. J. Comput. Appl. Math. 6, 19–26 (1980)
9. Du, Z., Li, J.: A Hermite WENO reconstruction for fourth order temporal accurate schemes based on the GRP solver for hyperbolic conservation laws. J. Comput. Phys. 355, 385–396 (2018)
10. Dumbser, M., Zanotti, O., Hidalgo, A., Balsara, D.S.: ADER-WENO finite volume schemes with space-time adaptive mesh refinement. J. Comput. Phys. 248, 257–286 (2013)
11. Ferracina, L., Spijker, M.N.: Stepsize restrictions for the total-variation-diminishing property in general Runge-Kutta methods. SIAM J. Numer. Anal. 42, 1073–1093 (2004)
12. Ferracina, L., Spijker, M.N.: An extension and analysis of the Shu-Osher representation of Runge-Kutta methods. Math. Comput. 249, 201–219 (2005)
13. Gottlieb, S., Ketcheson, D.I., Shu, C.-W.: Strong Stability Preserving Runge-Kutta and Multistep Time Discretizations. World Scientific Press, London (2011)
14. Gottlieb, S., Grant, Z.J., Seal, D.C.: Explicit SSP multistage two-derivative methods with Taylor series base conditions. https://github.com/SSPmethods/SSPTSmethods. Accessed 1 Mar 2018
15. Harten, A.: High resolution schemes for hyperbolic conservation laws. J. Comput. Phys. 49, 357–393 (1983)
16. Higueras, I.: On strong stability preserving time discretization methods. J. Sci. Comput. 21, 193–223 (2004)
17. Higueras, I.: Representations of Runge-Kutta methods and strong stability preserving methods. SIAM J. Numer. Anal. 43, 924–948 (2005)
18. Higueras, I.: Characterizing strong stability preserving additive Runge-Kutta methods. J. Sci. Comput. 39(1), 115–128 (2009)
19. Jeltsch, R.: Reducibility and contractivity of Runge-Kutta methods revisited. BIT Numer. Math. 46(3), 567–587 (2006)
20. Jiang, G.-S., Shu, C.-W.: Efficient implementation of weighted ENO schemes. J. Comput. Phys. 126, 202–228 (1996)
21. Kastlunger, K., Wanner, G.: On Turan type implicit Runge-Kutta methods. Computing (Arch. Elektron. Rechnen) 9, 317–325 (1972)
22. Kastlunger, K.H., Wanner, G.: Runge-Kutta processes with multiple nodes. Computing (Arch. Elektron. Rechnen) 9, 9–24 (1972)
23. Ketcheson, D.I.: Highly efficient strong stability preserving Runge-Kutta methods with low-storage implementations. SIAM J. Sci. Comput. 30, 2113–2136 (2008)
24. Ketcheson, D.I., Gottlieb, S., Macdonald, C.B.: Strong stability preserving two-step Runge-Kutta methods. SIAM J. Numer. Anal. 2618–2639 (2012)
25. Ketcheson, D.I., Macdonald, C.B., Gottlieb, S.: Optimal implicit strong stability preserving Runge-Kutta methods. Appl. Numer. Math. 52, 373 (2009)
26. Ketcheson, D.I., Parsani, M., Ahmadia, A.J.: RK-Opt: software for the design of Runge-Kutta methods, version 0.2. https://github.com/ketch/RKopt. Accessed 15 Feb 2018
27. Kraaijevanger, J.F.B.M.: Contractivity of Runge-Kutta methods. BIT 31, 482–528 (1991)
28. Kurganov, A., Tadmor, E.: New high-resolution schemes for nonlinear conservation laws and convection-diffusion equations. J. Comput. Phys. 160, 241–282 (2000)
29. Li, J., Du, Z.: A two-stage fourth order time-accurate discretization for Lax-Wendroff type flow solvers I. Hyperbolic conservation laws. SIAM J. Sci. Comput. 38, 3046–3069 (2016)
30. Liu, X.-D., Osher, S., Chan, T.: Weighted essentially non-oscillatory schemes. J. Comput. Phys. 115, 200–212 (1994)
31. Mitsui, T.: Runge-Kutta type integration formulas including the evaluation of the second derivative. I. Publ. Res. Inst. Math. Sci. 18, 325–364 (1982)
32. Nguyen-Ba, T., Nguyen-Thu, H., Giordano, T., Vaillancourt, R.: One-step strong-stability-preserving Hermite–Birkhoff–Taylor methods. Sci. J. Riga Tech. Univ. 45, 95–104 (2010)
33. Obreschkoff, N.: Neue Quadraturformeln. Abh. Preuss. Akad. Wiss. Math.-Nat. Kl. 1940(4), 20 (1940)
34. Ono, H., Yoshida, T.: Two-stage explicit Runge-Kutta type methods using derivatives. Jpn. J. Ind. Appl. Math. 21, 361–374 (2004)
35. Osher, S., Chakravarthy, S.: High resolution schemes and the entropy condition. SIAM J. Numer. Anal. 21, 955–984 (1984)
36. Pan, L., Xu, K., Li, Q., Li, J.: An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations. J. Comput. Phys. 326, 197–221 (2016)
37. Qiu, J., Dumbser, M., Shu, C.-W.: The discontinuous Galerkin method with Lax-Wendroff type time discretizations. Comput. Methods Appl. Mech. Eng. 194, 4528–4543 (2005)
38. Ruuth, S.J., Spiteri, R.J.: Two barriers on strong-stability-preserving time discretization methods. J. Sci. Comput. 17, 211–220 (2002)
39. Seal, D.C., Guclu, Y., Christlieb, A.J.: High-order multiderivative time integrators for hyperbolic conservation laws. J. Sci. Comput. 60, 101–140 (2014)
40. Shintani, H.: On one-step methods utilizing the second derivative. Hiroshima Math. J. 1, 349–372 (1971)
41. Shintani, H.: On explicit one-step methods utilizing the second derivative. Hiroshima Math. J. 2, 353–368 (1972)
42. Shu, C.-W.: Total-variation diminishing time discretizations. SIAM J. Sci. Stat. Comput. 9, 1073–1084 (1988)
43. Shu, C.-W., Osher, S.: Efficient implementation of essentially non-oscillatory shock-capturing schemes. J. Comput. Phys. 77, 439–471 (1988)
44. Spiteri, R.J., Ruuth, S.J.: A new class of optimal high-order strong-stability-preserving time discretization methods. SIAM J. Numer. Anal. 40, 469–491 (2002)
45. Stancu, D.D., Stroud, A.H.: Quadrature formulas with simple Gaussian nodes and multiple fixed nodes. Math. Comput. 17, 384–394 (1963)
46. Sweby, P.K.: High resolution schemes using flux limiters for hyperbolic conservation laws. SIAM J. Numer. Anal. 21, 995–1011 (1984)
47. Tadmor, E.: Approximate solutions of nonlinear conservation laws. In: Advanced Numerical Approximation of Nonlinear Hyperbolic Equations. Lecture Notes from the CIME Course, Cetraro, Italy, 1997. Lecture Notes in Mathematics, No. 1697. Springer, Berlin (1998)
48. Toro, E., Titarev, V.A.: Solution of the generalized Riemann problem for advection-reaction equations. Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 458, 271–281 (2002)
49. Toro, E.F., Titarev, V.A.: Derivative Riemann solvers for systems of conservation laws and ADER methods. J. Comput. Phys. 212, 150–165 (2006)
50. Tsai, A.Y.J., Chan, R.P.K., Wang, S.: Two-derivative Runge-Kutta methods for PDEs using a novel discretization approach. Numer. Algorithms 65, 687–703 (2014)
51. Turán, P.: On the theory of the mechanical quadrature. Acta Universitatis Szegediensis, Acta Scientiarum Mathematicarum 12, 30–37 (1950)
52. Verner, J.: Explicit Runge-Kutta methods with estimates of the local truncation error. SIAM J. Numer. Anal. 15, 772–790 (1978)
53. Xing, Y., Zhang, X., Shu, C.-W.: Positivity-preserving high order well-balanced discontinuous Galerkin methods for the shallow water equations. Adv. Water Resour. 33, 1476–1493 (2010)
54. Zhang, X., Shu, C.-W.: On maximum-principle-satisfying high order schemes for scalar conservation laws. J. Comput. Phys. 229, 3091–3120 (2010)