$hp$-Adaptive Galerkin Time Stepping Methods for Nonlinear Initial Value Problems

This work is concerned with the derivation of an a posteriori error estimator for Galerkin approximations to nonlinear initial value problems with an emphasis on finite-time existence in the context of blow-up. The structure of the derived estimator leads naturally to the development of both h and hp versions of an adaptive algorithm designed to approximate the blow-up time. The adaptive algorithms are then applied in a series of numerical experiments, and the rate of convergence to the blow-up time is investigated.


Introduction
In this paper we focus on continuous Galerkin (cG) and discontinuous Galerkin (dG) time stepping discretizations as applied to abstract initial value problems of the form
$$u'(t) = F(t, u(t)), \quad t \in (0, T), \qquad u(0) = u_0. \tag{1.1}$$
A posteriori error estimators for linear problems tend to be unconditional, that is, they always hold independent of the problem data and the size of the discretization parameters. For nonlinear problems, the situation is more complicated since the existence of a solution to an appropriate error equation (and, thus, of an error bound) usually requires that either the data or the discretization parameters are sufficiently small. As a result, a posteriori error estimators for nonlinear problems tend to be conditional, that is, they only hold provided that an a posteriori verifiable condition (which can be either explicit or implicit) is satisfied. For nonlinear time-dependent problems, there are two commonly used approaches for deriving conditional a posteriori error bounds: continuation arguments, cf. [4,13], and fixed point arguments, cf. [5,14]. The a posteriori error bounds that we derive here are obtained by utilizing a local continuation argument.
Galerkin time stepping methods for initial value problems are based on weak formulations, and for both the cG and dG time stepping schemes, the test spaces consist of polynomials that are discontinuous at the time nodes. In this way, the discrete Galerkin formulations decouple into local problems on each time step, and the discretizations can therefore be understood as implicit one-step schemes. In the literature, Galerkin time stepping schemes have been extensively analyzed for ordinary differential equations (ODEs), cf. [2, 6–9, 12].
A key feature of Galerkin time stepping methods is their great flexibility with respect to the size of the time steps and the local approximation orders, which makes these schemes particularly amenable to an hp-framework. The hp-versions of the cG and dG time stepping schemes were introduced and analyzed in the works [16,17,19,21]. In particular, in the articles [16,21], which focus on initial value problems with uniformly Lipschitz nonlinearities, the use of the Banach fixed point theorem made it possible to prove existence and uniqueness results for the discrete Galerkin solutions that are independent of the local approximation orders; these results have been extended to discrete Peano-type existence results in the context of more general nonlinearities in [11]. We emphasize that the hp-approach is well known for its ability to approximate smooth solutions with possible local singularities at high algebraic or even exponential rates of convergence; see, e.g., [3,17,18,20] for the numerical approximation of problems with start-up singularities. In light of this, a main aim of this paper is to establish through numerical experiments whether or not hp-refinement, utilizing the derived a posteriori error estimator, can lead to exponential convergence towards the blow-up time in the case where (1.1) exhibits finite time blow-up.
Outline. The remainder of our article is organized as follows. In Section 2, we introduce the hp-cG and hp-dG time stepping schemes, while in Section 3 we develop a posteriori error bounds for these schemes. We design h- and hp-version adaptive algorithms to approximate the blow-up time in Section 4 before applying them to some numerical experiments in Section 5. Finally, we draw conclusions and comment on possible future research in Section 6.
Notation. Let $H$ denote a real Hilbert space with inner product $(\cdot,\cdot)_H$ and induced norm $\|\cdot\|_H$ as before. Given an interval $I = (a, b)$, the Bochner space $C(\bar I; H)$ consists of all functions $u : \bar I \to H$ that are continuous on $\bar I$ with values in $H$. Moreover, for $1 \le p \le \infty$, we define the norm
$$\|u\|_{L^p(I;H)} := \begin{cases} \left( \int_I \|u(t)\|_H^p \,\mathrm{d}t \right)^{1/p}, & 1 \le p < \infty, \\ \operatorname{ess\,sup}_{t \in I} \|u(t)\|_H, & p = \infty, \end{cases}$$
of the Bochner space $L^p(I; H)$.

Galerkin Time Discretizations
In this section, we briefly recall the hp-cG and hp-dG time stepping schemes for the discretization of (1.1). To this end, on the open interval $I = (0, T)$, we introduce a set of time nodes, $0 =: t_0 < t_1 < \cdots < t_{M-1} < t_M := T$, which define a time partition $\mathcal{M} := \{I_m\}_{m=1}^{M}$ of $I$ into $M$ open time intervals $I_m := (t_{m-1}, t_m)$, $m = 1, \ldots, M$. The length $k_m := t_m - t_{m-1}$ (which may be variable) of the time interval $I_m$ is called the time step length. Furthermore, to each interval $I_m$ we associate a polynomial degree $r_m \in \mathbb{N}_0$ which takes the role of a local approximation order. Then, given a (real) Hilbert space $X \subseteq H$ and some $r \in \mathbb{N}_0$, the set
$$\mathbb{P}^r(J; X) := \Big\{ p \in C(\bar J; X) : \; p(t) = \sum_{i=0}^{r} x_i t^i, \; x_i \in X \Big\}$$
signifies the space of all polynomials of degree at most $r$ on an interval $J \subset \mathbb{R}$ with values in $X$.
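For readers implementing the schemes below, the partition data can be represented in a few lines; the following Python sketch is purely illustrative, and the names `TimePartition` and `step_length` are our own rather than part of any established library.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TimePartition:
    """Partition M = {I_m} of (0, T) with per-interval degrees r_m.

    `nodes` stores 0 = t_0 < t_1 < ... < t_M = T; `degrees[m-1]` is the
    local approximation order r_m on I_m = (t_{m-1}, t_m).
    """
    nodes: List[float]
    degrees: List[int]

    def step_length(self, m: int) -> float:
        """Return the time step length k_m = t_m - t_{m-1}, 1 <= m <= M."""
        return self.nodes[m] - self.nodes[m - 1]

# Example: three intervals on (0, 1) with degrees r_1 = 1, r_2 = r_3 = 2.
part = TimePartition(nodes=[0.0, 0.4, 0.7, 1.0], degrees=[1, 2, 2])
assert abs(part.step_length(2) - 0.3) < 1e-12
```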
In practical computations, the Hilbert space $H$ on which (1.1) is based will typically be replaced by a finite-dimensional subspace $H_m \subset H$ on each interval $I_m$, $1 \le m \le M$. In this context, it is useful to define the $H$-orthogonal projection $\pi_m : H \to H_m$ given by
$$(\pi_m u, v)_H = (u, v)_H \qquad \text{for all } v \in H_m.$$

2.1. hp-cG Time Stepping. With the above definitions, the (fully-discrete) hp-cG time marching scheme is iteratively given as follows: For $1 \le m \le M$, we seek $U|_{I_m} \in \mathbb{P}^{r_m}(I_m; H_m)$ such that
$$\int_{I_m} (U', V)_H \,\mathrm{d}t = \int_{I_m} (F(t, U), V)_H \,\mathrm{d}t \qquad \text{for all } V \in \mathbb{P}^{r_m - 1}(I_m; H_m), \tag{2.1}$$
with the initial condition $U(t_0) = \pi_1 u_0$ for $m = 1$, and $U(t_{m-1}) = \pi_m U(t_{m-1}^-)$ for $m \ge 2$; here, the one-sided limits of a piecewise continuous function $U$ at each time node $t_m$ are given by $U(t_m^\pm) := \lim_{s \to 0^+} U(t_m \pm s)$. Note that in order to enforce the initial condition on each time step, the local trial space has one degree of freedom more than the local test space in the hp-cG scheme. Furthermore, if $H_1 = \ldots = H_M$, we remark that the hp-cG solution $U$ is globally continuous on $[0, T]$.
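To make the local solves concrete in the simplest scalar setting $H = H_m = \mathbb{R}$, note that for $r_m = 1$ the system (2.1) with midpoint quadrature reduces to the implicit midpoint rule. The following sketch, our own illustration rather than the implementation used in Section 5, resolves the resulting implicit equation with a Picard-type fixed point iteration in the spirit of [11]; it converges provided $k_m$ is small enough relative to the local Lipschitz constant of $F$.

```python
def cg1_step(F, t0, u0, k, tol=1e-12, max_iter=100):
    """One cG(1) step for u' = F(t, u) on (t0, t0 + k).

    With r_m = 1 and midpoint quadrature, (2.1) reduces to the implicit
    midpoint rule U_m = u0 + k * F(t0 + k/2, (u0 + U_m)/2), which we solve
    by Picard (fixed point) iteration.
    """
    u1 = u0  # initial guess for the nodal value U(t_m)
    for _ in range(max_iter):
        u1_new = u0 + k * F(t0 + 0.5 * k, 0.5 * (u0 + u1))
        if abs(u1_new - u1) < tol:
            return u1_new
        u1 = u1_new
    raise RuntimeError("Picard iteration did not converge; reduce k")

# u' = u^2 with u(0) = 1: one small step, exact value 1/(1 - 0.01).
print(cg1_step(lambda t, u: u * u, 0.0, 1.0, 0.01))
```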
The strong form of (2.1) on $I_m$ is
$$U' = \Pi^{r_m - 1}_m F(\cdot, U) \qquad \text{in } I_m, \tag{2.2}$$
where $\Pi^{r_m-1}_m$ denotes the $L^2$-projection operator onto the space $\mathbb{P}^{r_m-1}(I_m; H_m)$; see [11] for details. Whilst the strong form (2.2) can be exploited for the purpose of deriving a posteriori error estimates, it is well known that employing such a straightforward approach leads to suboptimal error estimates, cf. [1]. This issue will be addressed in the derivation of our error bound.
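For scalar-valued functions, the $L^2$-projection appearing in (2.2) can be evaluated with a few lines of Gauss–Legendre quadrature, using the orthogonality relation $\int_{-1}^{1} P_j(x)^2 \,\mathrm{d}x = 2/(2j+1)$ of the Legendre polynomials. The sketch below (the function name and interface are ours) returns the Legendre coefficients of the projection on the reference interval.

```python
import numpy as np

def l2_project(f, a, b, r, quad_pts=None):
    """L^2(a, b)-projection of a scalar function f onto P^r, returned as
    Legendre coefficients of the pulled-back function on [-1, 1].

    Gauss-Legendre quadrature with r + 5 points is exact for polynomial f
    of modest degree; raise quad_pts for less regular integrands.
    """
    n = quad_pts or r + 5
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (a + b)       # mapped nodes on (a, b)
    fvals = f(t)
    coeffs = np.empty(r + 1)
    for j in range(r + 1):
        Pj = np.polynomial.legendre.Legendre.basis(j)(x)
        # c_j = (f, P_j) / (P_j, P_j) with (P_j, P_j) = 2 / (2j + 1)
        coeffs[j] = (w * fvals * Pj).sum() * (2 * j + 1) / 2.0
    return coeffs

# Example: project f(t) = t^3 on (0, 1) onto P^1 (mean value is 1/4).
print(l2_project(lambda t: t ** 3, 0.0, 1.0, 1))
```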

2.2. hp-dG Time Stepping. The (fully-discrete) hp-dG time marching scheme is iteratively given as follows: For $1 \le m \le M$, we seek $U|_{I_m} \in \mathbb{P}^{r_m}(I_m; H_m)$ such that
$$\int_{I_m} (U', V)_H \,\mathrm{d}t + \big(U(t_{m-1}^+) - U(t_{m-1}^-), V(t_{m-1}^+)\big)_H = \int_{I_m} (F(t, U), V)_H \,\mathrm{d}t \tag{2.3}$$
for all $V \in \mathbb{P}^{r_m}(I_m; H_m)$, where $U(t_0^-) := u_0$. We emphasize that, in contrast to the continuous Galerkin formulation, the trial and test spaces are the same for the dG scheme; this is due to the fact that the initial values are weakly imposed (by means of an upwind flux) on each time interval.
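In the lowest-order case $r_m = 0$, the formulation (2.3) reduces, upon right-endpoint quadrature of the right-hand side, to the classical backward Euler scheme; the following minimal sketch (again ours, with the implicit equation resolved by Picard iteration) makes this explicit.

```python
def dg0_step(F, t0, u0, k, tol=1e-12, max_iter=100):
    """One dG(0) step for u' = F(t, u) on (t0, t0 + k).

    For r_m = 0, (2.3) gives U - u0 = int_{I_m} F(t, U) dt for the constant
    value U; right-endpoint quadrature yields backward Euler,
    U = u0 + k * F(t0 + k, U), solved here by Picard iteration.
    """
    U = u0
    for _ in range(max_iter):
        U_new = u0 + k * F(t0 + k, U)
        if abs(U_new - U) < tol:
            return U_new
        U = U_new
    raise RuntimeError("Picard iteration did not converge; reduce k")
```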
In order to find the strong formulation of (2.3), we require the use of a lifting operator. More precisely, given some real Hilbert space $X \subseteq H$, we define $\mathsf{L}^{r_m}_m : X \to \mathbb{P}^{r_m}(I_m; X)$ through
$$\int_{I_m} \big(\mathsf{L}^{r_m}_m(w), V\big)_H \,\mathrm{d}t = \big(w, V(t_{m-1}^+)\big)_H \qquad \text{for all } V \in \mathbb{P}^{r_m}(I_m; X), \tag{2.4}$$
cf. [19, Section 4.1]. Then, in view of this definition with $X = H_m$ and proceeding as in [11], we obtain the strong formulation of the dG time stepping method (2.3) on $I_m$, viz.,
$$U' + \mathsf{L}^{r_m}_m\big(U(t_{m-1}^+) - U(t_{m-1}^-)\big) = \Pi^{r_m}_m F(\cdot, U) \qquad \text{in } I_m, \tag{2.5}$$
where $\Pi^{r_m}_m$ denotes the $L^2$-projection onto $\mathbb{P}^{r_m}(I_m; H_m)$.
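In the scalar case $H_m = \mathbb{R}$, the lifting (2.4) can be written down in closed form: expanding $\mathsf{L}^{r_m}_m(w)$ in Legendre polynomials on the reference interval and testing (2.4) with each $P_i$, which satisfies $P_i(-1) = (-1)^i$ and $\|P_i\|^2_{L^2(-1,1)} = 2/(2i+1)$, yields the coefficients computed in the following sketch (our own calculation from (2.4); the function name is ours).

```python
import numpy as np

def lifting_coeffs(w, k, r):
    """Reference-interval Legendre coefficients of the lifting L^r_m(w) in
    P^r(I_m; R), defined by
        int_{I_m} L^r_m(w) V dt = w * V(t_{m-1}^+)   for all V in P^r.
    Testing with V = P_i and changing variables to (-1, 1) gives
        c_i = (-1)^i * (2i + 1) * w / k,   with k = |I_m|.
    """
    i = np.arange(r + 1)
    return (-1.0) ** i * (2 * i + 1) * w / k
```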

A Posteriori Error Analysis
The goal of this section is to derive $L^\infty$ a posteriori error bounds for the hp-cG and hp-dG approximations to (1.1). To this end, we require some structural assumptions on the nonlinearity $F$. Specifically, $F : [0, T] \times H \to H$ is assumed to satisfy $F(\cdot, 0) \in L^1((0,T); H)$ along with the local $H$-Lipschitz estimate
$$\|F(t, v) - F(t, w)\|_H \le L\big(t, \|v\|_H, \|w\|_H\big) \, \|v - w\|_H \qquad \text{for all } v, w \in H. \tag{3.1}$$
Here, $L : [0, T] \times \mathbb{R}^+_0 \times \mathbb{R}^+_0 \to \mathbb{R}^+_0$ is a known function that satisfies $L(\cdot, a, b) \in L^1(0, T)$ for any $a, b \in \mathbb{R}^+_0$ and that is continuous and monotone increasing in the second and third arguments. Under these assumptions on $F$, problem (1.1) admits a unique (local) solution $u \in C([0, T]; H)$.

3.1. Preliminary Error Bound.
In order to remedy the suboptimal error estimates that would result from using (2.2) and (2.5) directly, we follow the approach proposed in [1] by introducing the reconstruction $U_m$ of $U|_{I_m}$, which is defined over each closed interval $\bar I_m$, $1 \le m \le M$, by
$$U_m(t) := U(t_{m-1}^-) + \int_{t_{m-1}}^{t} \Pi^{r_m}_m F(s, U(s)) \,\mathrm{d}s, \qquad t \in \bar I_m, \tag{3.2}$$
with the convention $U(t_0^-) := u_0$. Since we will be dealing with the reconstruction $U_m$, it will also be necessary to introduce the reconstruction error given by
$$e_m := u - U_m \qquad \text{on } \bar I_m.$$
We will proceed with the error analysis by first proving an $L^\infty$-error bound for $e_m$.
To formulate the error equation, we begin by substituting (1.1) into the definition of $e_m$, viz.,
$$e_m'(t) = u'(t) - U_m'(t) = F(t, u(t)) - \Pi^{r_m}_m F(t, U(t)), \qquad t \in I_m.$$
Adding and subtracting additional terms yields
$$e_m'(t) = F(t, u(t)) - F(t, U_m(t)) + R(t),$$
where $R$ denotes the residual given by
$$R := F(\cdot, U_m) - \Pi^{r_m}_m F(\cdot, U).$$
Using the triangle inequality and Bochner's Theorem implies that
$$\|e_m(t)\|_H \le \|e_m(t_{m-1})\|_H + \int_{t_{m-1}}^{t} \|F(s, u(s)) - F(s, U_m(s))\|_H \,\mathrm{d}s + \int_{I_m} \|R(s)\|_H \,\mathrm{d}s.$$
Moreover, applying the local $H$-Lipschitz estimate (3.1) together with the monotonicity of $L$ yields
$$\|e_m(t)\|_H \le \|e_m(t_{m-1})\|_H + \eta^{\mathrm{res}}_m + \int_{t_{m-1}}^{t} L\big(s, \|u(s)\|_H, \|U_m(s)\|_H\big) \|e_m(s)\|_H \,\mathrm{d}s$$
for $t \in \bar I_m$. Here, $\eta^{\mathrm{res}}_m$ denotes the residual estimator given by
$$\eta^{\mathrm{res}}_m := \int_{I_m} \|R(s)\|_H \,\mathrm{d}s,$$
where the projection estimator $\eta^{\mathrm{proj}}_m$, which will appear below, is given by
$$\eta^{\mathrm{proj}}_m := \|(\pi_m - \mathrm{I})\, U(t_{m-1}^-)\|_H, \qquad \eta^{\mathrm{proj}}_0 := 0.$$
For later intervals, the unknown term $\|e_m(t_{m-1})\|_H$ needs to be replaced with the known term $\|e_{m-1}(t_{m-1})\|_H$. To this end, for $m \ge 2$, we have
$$\|e_m(t_{m-1})\|_H \le \|e_{m-1}(t_{m-1})\|_H + \|U_{m-1}(t_{m-1}) - U_m(t_{m-1})\|_H, \tag{3.9}$$
upon recalling that $U_m(t_{m-1}) = U(t_{m-1}^-)$ by (3.2). Recall that the hp-cG method (2.1) satisfies the strong form (2.2) on $I_m$. Thus, we have that
$$U(t_{m-1}^-) = \pi_{m-1} U(t_{m-2}^-) + \int_{I_{m-1}} \Pi^{r_{m-1}-1}_{m-1} F(s, U(s)) \,\mathrm{d}s.$$
By the definition of the $L^2$-projection an equivalent formulation of the above is
$$U(t_{m-1}^-) = \pi_{m-1} U(t_{m-2}^-) + \int_{I_{m-1}} \Pi^{r_{m-1}}_{m-1} F(s, U(s)) \,\mathrm{d}s = U_{m-1}(t_{m-1}) + (\pi_{m-1} - \mathrm{I})\, U(t_{m-2}^-).$$
Substituting this into (3.9) implies that for the hp-cG method we have
$$\|e_m(t_{m-1})\|_H \le \|e_{m-1}(t_{m-1})\|_H + \eta^{\mathrm{proj}}_{m-1}.$$
For the hp-dG method (2.3), we have the strong form (2.5) on $I_m$. Thus, it follows that
$$U(t_{m-1}^-) = U(t_{m-2}^+) + \int_{I_{m-1}} \Pi^{r_{m-1}}_{m-1} F(s, U(s)) \,\mathrm{d}s - \int_{I_{m-1}} \mathsf{L}^{r_{m-1}}_{m-1}\big( U(t_{m-2}^+) - U(t_{m-2}^-) \big) \,\mathrm{d}s.$$
By definition of the lifting operator $\mathsf{L}^{r_m}_m$ from (2.4) we arrive at
$$\int_{I_{m-1}} \mathsf{L}^{r_{m-1}}_{m-1}\big( U(t_{m-2}^+) - U(t_{m-2}^-) \big) \,\mathrm{d}s = U(t_{m-2}^+) - \pi_{m-1} U(t_{m-2}^-).$$
Equivalently,
$$U(t_{m-1}^-) = \pi_{m-1} U(t_{m-2}^-) + \int_{I_{m-1}} \Pi^{r_{m-1}}_{m-1} F(s, U(s)) \,\mathrm{d}s = U_{m-1}(t_{m-1}) + (\pi_{m-1} - \mathrm{I})\, U(t_{m-2}^-).$$
Since this is the same form as for the hp-cG method, for the hp-dG method we also have that
$$\|e_m(t_{m-1})\|_H \le \|e_{m-1}(t_{m-1})\|_H + \eta^{\mathrm{proj}}_{m-1}.$$
Hence, for both schemes,
$$\|e_m(t)\|_H \le \psi_m + \int_{t_{m-1}}^{t} L\big(s, \|u(s)\|_H, \|U_m(s)\|_H\big) \|e_m(s)\|_H \,\mathrm{d}s, \qquad t \in \bar I_m, \tag{3.18}$$
where $\psi_m$ is chosen such that
$$\psi_m \ge \|e_m(t_{m-1})\|_H + \eta^{\mathrm{res}}_m. \tag{3.19}$$
In particular, once the bound $\|e_{m-1}\|_{L^\infty(I_{m-1};H)} \le \delta_{m-1}\psi_{m-1}$ is available (see Theorem 3.26 below), the computable choice $\psi_m := \delta_{m-1}\psi_{m-1} + \eta^{\mathrm{proj}}_{m-1} + \eta^{\mathrm{res}}_m$ satisfies (3.19). Finally, applying Gronwall's inequality to (3.18) for $t \in \bar I_m$, $1 \le m \le M$, yields the following result.
Lemma. For $1 \le m \le M$, there holds
$$\|e_m(t)\|_H \le \psi_m \, G_m(t), \qquad t \in \bar I_m, \tag{3.21}$$
where $\psi_m$ satisfies (3.19) and $G_m$ is given by
$$G_m(t) := \exp\left( \int_{t_{m-1}}^{t} L\big(s, \|u(s)\|_H, \|U_m(s)\|_H\big) \,\mathrm{d}s \right). \tag{3.22}$$
In order to transform (3.21) into an a posteriori error bound, we will employ a continuation argument, cf., e.g., [4]. To this end, for $1 \le m \le M$, we define the set
$$\mathcal{I}_m := \big\{ t \in \bar I_m : \; \|e_m\|_{L^\infty((t_{m-1}, t); H)} \le \delta \psi_m \big\}, \tag{3.23}$$
where $\delta > 1$ is a parameter to be chosen. Note that $\mathcal{I}_m$ is closed since $e_m$ is time-continuous and, obviously, bounded. Moreover, $\mathcal{I}_m$ is assumed to be non-empty; this is certainly true for the first interval since $0 \in \mathcal{I}_1$ and is a posteriori verifiable for later intervals. To state the full error bound, we require some additional definitions. Specifically, define the function $\varphi_m : (1, \infty) \to \mathbb{R}$ by
$$\varphi_m(\delta) := \exp\left( \int_{I_m} L\big(t, \|U_m(t)\|_H + \delta\psi_m, \|U_m(t)\|_H\big) \,\mathrm{d}t \right) - \delta. \tag{3.24}$$
Additionally, if it exists, we introduce
$$\delta_m := \inf\{ \delta > 1 : \varphi_m(\delta) < 0 \} \tag{3.25}$$
for $m \ge 1$, and we let $\delta_0 := 1$ for convenience.
We are now ready to establish the following conditional a posteriori error bound for both Galerkin time stepping methods.

Theorem 3.26. Let $1 \le m \le M$, and suppose that $\delta_0, \ldots, \delta_{m-1}$ exist and that $\delta_m$ from (3.25) exists. Then, we have the conditional a posteriori error bound
$$\|e_m\|_{L^\infty(I_m; H)} \le \delta_m \psi_m. \tag{3.27}$$

Proof. We omit the trivial case $\psi_m = 0$. Since $\delta_m$ exists, there is some $\delta > 1$ with $\varphi_m(\delta) < 0$.
Suppose that $e_m(t_{m-1})$ exists; then $t_{m-1} \in \mathcal{I}_m$, so $\mathcal{I}_m$ is non-empty, closed and bounded, and thus has some maximal time $t^\star$. Let us first make the assumption that $t^\star < t_m$ and work towards a contradiction. Substituting the error bound from $\mathcal{I}_m$ into (3.21) and using the monotonicity of $L$ implies that
$$\|e_m\|_{L^\infty((t_{m-1}, t^\star); H)} \le \psi_m \exp\left( \int_{I_m} L\big(t, \|U_m(t)\|_H + \delta\psi_m, \|U_m(t)\|_H\big) \,\mathrm{d}t \right) = \psi_m \big(\varphi_m(\delta) + \delta\big) < \delta \psi_m.$$
By the continuity of $e_m$, this strict bound persists beyond $t^\star$, so that $\mathcal{I}_m$ contains points to the right of $t^\star$. This is a contradiction since we had assumed that the set $\mathcal{I}_m$ had maximal time $t^\star < t_m$. Hence, it follows that $\|e_m\|_{L^\infty(I_m; H)} \le \delta \psi_m$. Taking the limit $\delta \to \delta_m$ we deduce (3.27). To complete the proof, we note that we can conclude by recursion that $e_m(t_{m-1})$ exists if $\delta_0, \ldots, \delta_{m-1}$ exist and $0 \in \mathcal{I}_1$. Since $0 \in \mathcal{I}_1$ trivially and $\delta_0, \ldots, \delta_{m-1}$ exist by assumption, the initial supposition that $e_m(t_{m-1})$ exists is, in fact, redundant.
In some sense, $U_m$ is a better approximation to $u|_{I_m}$ than $U|_{I_m}$; thus, from a practical standpoint, it is often best to use Theorem 3.26 directly. However, for some applications a bound on the error rather than on the reconstruction error may be required, so we introduce the following corollary.

Corollary. Under the assumptions of Theorem 3.26, we have the a posteriori error bound
$$\|u - U\|_{L^\infty(I_m; H)} \le \delta_m \psi_m + \|U_m - U\|_{L^\infty(I_m; H)},$$
where $\psi_m := \delta_{m-1}\psi_{m-1} + \eta^{\mathrm{proj}}_{m-1} + \eta^{\mathrm{res}}_m$ and the term $\|U_m - U\|_{L^\infty(I_m; H)}$ is explicitly computable.

Of the three components that make up the error estimator, the term $\delta_{m-1}\psi_{m-1}$ is a bound for the error on the previous time step, while $\eta^{\mathrm{proj}}_{m-1} + \eta^{\mathrm{res}}_m$ represents the local (additive) contribution to the error estimator on the current time step. The correct interpretation of $\delta_m$, then, is that it is an a posteriori approximation to the growth rate of the error on $I_m$; this becomes clear upon consideration of the simple example of a globally $H$-Lipschitz nonlinearity, for which the error can grow by at most a factor of order $e^{Lk_m}$ over $I_m$. In fact, the following lemma shows that $\delta_m$ is precisely this expected local growth rate for globally $H$-Lipschitz $F$.

Lemma 3.28. Suppose that (3.1) holds with $L(t, a, b) \equiv L$ for a constant $L > 0$. Then, $\delta_m = e^{Lk_m}$ for any $k_m > 0$; in particular, $\delta_m$ always exists.

Proof. From the definition of the function $\varphi_m$, it follows that $\varphi_m(\delta) = e^{Lk_m} - \delta$. Therefore, $\delta_m = \inf\{\delta > 1 : e^{Lk_m} - \delta < 0\} = e^{Lk_m}$. Since $\delta_m$ exists and is finite for any $k_m$, we conclude that the error bound of Theorem 3.26 holds unconditionally on $I_m$.

Adaptive Algorithms
The error estimators derived in the previous section will form the basis of an adaptive strategy to estimate the blow-up time of (1.1). In particular, we will consider both an h-adaptive approach and an hp-adaptive approach. For the remainder of this section, we assume that $H_1 = \ldots = H_M$ for simplicity, but remark that both adaptive algorithms can be easily modified to account for variable finite-dimensional subspaces.

4.1. An h-Adaptive Approach. The first difficulty encountered in the construction of an h-adaptive algorithm is that both (2.1) and (2.3) are implicit methods, which means that the existence of a numerical solution is not guaranteed. It is tempting to conduct an existence analysis such as in [11] to obtain bounds on the length of the time steps needed in order to yield a numerical solution; however, such bounds are inherently pessimistic. Since existence can be determined a posteriori, our h-adaptive algorithm simply reduces the time step length until a numerical solution exists.
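To illustrate this existence-driven step reduction in isolation, consider the backward Euler method (i.e., dG(0), cf. Section 2) applied to $u' = u^2$: the local implicit equation $kU^2 - U + u_n = 0$ has a real solution only if $1 - 4ku_n \ge 0$, and repeatedly halving the step once solvability fails already produces a crude approximation of the blow-up time. The following toy sketch (ours; it is not Algorithm 1, which is driven by the error estimator) demonstrates the mechanism.

```python
def blowup_time_be(u0, k0, k_min=1e-10):
    """Crude blow-up time estimate for u' = u^2, u(0) = u0 > 0, via backward
    Euler with step halving.

    The implicit step k*U^2 - U + u_n = 0 is solvable over the reals only if
    1 - 4*k*u_n >= 0; we halve k whenever solvability fails and stop once
    k <= k_min. The accumulated time approximates T_infty = 1/u0.
    """
    t, u, k = 0.0, u0, k0
    while k > k_min:
        disc = 1.0 - 4.0 * k * u
        if disc < 0.0:
            k *= 0.5          # no numerical solution: reduce the step
            continue
        u = (1.0 - disc ** 0.5) / (2.0 * k)   # root consistent with k -> 0
        t += k
    return t

print(blowup_time_be(1.0, 0.1))   # exact blow-up time is T_infty = 1.0
```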
The second difficulty is how to approximate $\delta_m$; unfortunately, it is impossible to give a precise characterization of how to do this for an arbitrary $F$, primarily because $F$ may be 'pathological'.
Fortunately, however, most nonlinearities of interest do not fall into this category. Suppose that $F$ is chosen such that $\varphi_m$ has, at most, a small finite number of zeros; then it should be possible to approximate the zeros of $\varphi_m$ numerically. Since $\delta_m$ satisfies $\varphi_m(\delta_m) = 0$, we then only need to check the roots of $\varphi_m$ and verify that one of our numerical approximations, $\tilde\delta_m$, satisfies $\varphi_m(\tilde\delta_m) < 0$. In our numerical experiments, we employ a Newton method to find $\tilde\delta_m$ with an initial guess close to one on $I_1$ (the proof of Lemma 3.28 implies that $\delta_m \approx 1$ for $k_m \approx 0$) and an initial guess close to $\delta_{m-1}$ on $I_m$ for $m \ge 2$; this approach works well for the studied problems.
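A sketch of this root-finding step follows (our own illustration; the trailing sign check plays the role of the a posteriori verification $\varphi_m(\tilde\delta_m) < 0$ described above).

```python
import math

def approx_delta(phi, dphi, guess, tol=1e-10, max_iter=50):
    """Approximate delta_m = inf{delta > 1 : phi_m(delta) < 0} by Newton's
    method on phi_m, started from `guess` (close to 1 on I_1, close to
    delta_{m-1} for m >= 2), followed by an a posteriori sign check.
    """
    d = guess
    for _ in range(max_iter):
        step = phi(d) / dphi(d)
        d -= step
        if abs(step) < tol:
            break
    # verify that phi_m is negative just beyond the computed root
    if not (d > 1.0 and phi(d * (1.0 + 1e-8)) < 0.0):
        raise RuntimeError("no admissible delta_m found; reduce k_m")
    return d

# Globally Lipschitz model case (Lemma 3.28): phi_m(d) = exp(L*k_m) - d,
# whose root is delta_m = exp(L*k_m).
L_const, km = 2.0, 0.1
print(approx_delta(lambda d: math.exp(L_const * km) - d, lambda d: -1.0, 1.0))
```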
As is standard in finite element algorithms for linear problems, the time step is halved and the numerical solution recomputed until $\eta^{\mathrm{res}}_m$ is below the tolerance tol; however, we must also account for the nonlinear scaling that enters through $\delta_m$. The structure of the error estimator implies that the interval $I_1$ is the most important since the residual estimator on $I_1$ propagates through the remainder of the error estimator. Similarly, each successive subinterval $I_m$ is less important than the previous subinterval $I_{m-1}$, with the term $\delta_{m-1}$ measuring the extent to which this is true. To account for this, we increase the tolerance by the factor $\delta_m$ after computations on $I_m$ are complete.
We then advance to the next interval using the previous time step length as a reference and continue in this way until $\delta_m$ no longer exists; the adaptive algorithm is then terminated, and it outputs the total number of degrees of freedom (DoFs) used along with the sum of all the time step lengths, $T$, as an estimate for $T_\infty$. The pseudocode for the h-adaptive algorithm is given in Algorithm 1; its advancement step reads
$$m \leftarrow m + 1, \quad \mathrm{tol} \leftarrow \delta_m \cdot \mathrm{tol}, \quad r_{m+1} \leftarrow r_m, \quad k_{m+1} \leftarrow k_m.$$
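In code, the control flow just described may be summarized as follows; this is a skeleton only, in which `solve_on`, `eta_res` and `delta` are assumed callables standing in for the local Galerkin solver, the residual estimator of Section 3, and the approximation of $\delta_m$ (returning None on failure), respectively.

```python
def h_adaptive(solve_on, eta_res, delta, k0, r, tol, k_min=1e-12):
    """Skeleton of the h-adaptive loop (Algorithm 1): on each interval,
    halve k until a numerical solution exists and the residual estimator
    passes, then rescale the tolerance by delta_m and advance; stop once no
    admissible delta_m exists and report T as the blow-up time estimate.
    """
    t, k, dofs = 0.0, k0, 0
    while True:
        # halve the step until a solution exists and the estimator passes
        while k > k_min:
            U = solve_on(t, t + k, r)
            if U is not None and eta_res(U, t, t + k) <= tol:
                break
            k *= 0.5
        else:
            return t, dofs    # step length exhausted
        d = delta(U, t, t + k)
        if d is None:         # no admissible delta_m: T approximates T_infty
            return t, dofs
        dofs += r + 1         # one local polynomial of degree r
        tol *= d              # tol <- delta_m * tol (nonlinear scaling)
        t += k                # advance, reusing k as the reference step
```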

4.2. An hp-Adaptive Approach. The basic outline of the hp-adaptive algorithm will be the same as that of the h-adaptive algorithm; the only difficulty lies in how we choose locally between h-refinement and p-refinement. The theory of the hp-FEM suggests that p-refinement is superior to h-refinement if the solution is 'smooth enough', so we perform p-refinement if $U|_{I_m}$ is smooth; otherwise, we do h-refinement. The pseudocode for the hp-adaptive algorithm is very similar to Algorithm 1; the difference lies in replacing the simple time step bisections in lines (8:) and (19:) by the following procedure:

if $U|_{I_{m+1}}$ is smooth then
    $r_{m+1} \leftarrow r_{m+1} + 1$
else
    $k_{m+1} \leftarrow k_{m+1}/2$
end if
Compute $U|_{I_{m+1}}$ and determine $\eta^{\mathrm{res}}_{m+1}$.

In the numerical experiments, we consider only real-valued ODEs, and so we remark specifically on the estimation of smoothness in this special case. There are many ways to estimate the smoothness of the numerical solution (see [15] for an overview); we choose to use a computationally simple approach from [10] based on Sobolev embeddings. Here, the basic idea is to monitor the constant in the Sobolev embedding $H^1(I_m) \hookrightarrow L^\infty(I_m)$ in order to classify a given function as locally smooth or not. Specifically, we define the smoothness indicator
$$\theta_m(u) := \frac{\|u\|_{L^\infty(I_m)}}{k_m^{-1/2} \|u\|_{L^2(I_m)} + k_m^{1/2} \|u'\|_{L^2(I_m)}},$$
which, intuitively, takes values close to one if $u$ is smooth and values close to zero otherwise; see the reasoning of [10] for details. Following [10], we characterize $U|_{I_m} \in \mathbb{P}^{r_m}(I_m; \mathbb{R})$ as smooth if $\theta_m(U|_{I_m}) \ge \theta$ for a prescribed threshold $\theta \in (0, 1)$; here, values around $\theta = 0.85$ were observed to produce the best results in our numerical experiments below. (A quadrature-based evaluation of $\theta_m$ is sketched after the corollary below.)

We conclude this section with a corollary on the magnitude of the reconstruction error under either the h-adaptive algorithm or the hp-adaptive algorithm. In order to state the corollary, we require some additional notation. To that end, we denote the initial tolerance by $\mathrm{tol}^\star$ and define the a posteriori approximation to the growth rate of the error on $(0, t_m)$ by
$$\bar\delta_m := \prod_{j=1}^{m} \delta_j, \qquad \bar\delta_0 := 1.$$
We are now ready to state the corollary.

Corollary 4.3. Upon termination of the h-adaptive or the hp-adaptive algorithm with a time partition $\{I_m\}_{m=1}^{M}$, there holds
$$\max_{1 \le m \le M} \|e_m\|_{L^\infty(I_m; H)} \le M \, \bar\delta_M \, \mathrm{tol}^\star.$$

Proof. To begin the proof, we will first prove by induction that $\psi_m \le m \, \bar\delta_{m-1} \, \mathrm{tol}^\star$. For $I_1$, we have that $\psi_1 = \eta^{\mathrm{res}}_1 \le \mathrm{tol}^\star$, so the base case is verified. Assuming that the bound is true for $m - 1$, then recalling the definition of $\psi_m$ from (3.32) as well as the scaling nature of the tolerances in the h-adaptive and hp-adaptive algorithms, that is, $\eta^{\mathrm{res}}_m \le \bar\delta_{m-1} \mathrm{tol}^\star$, we have (noting that $\eta^{\mathrm{proj}}_{m-1} = 0$ since $H_1 = \ldots = H_M$)
$$\psi_m = \delta_{m-1}\psi_{m-1} + \eta^{\mathrm{res}}_m \le (m-1)\,\bar\delta_{m-1}\mathrm{tol}^\star + \bar\delta_{m-1}\mathrm{tol}^\star = m\,\bar\delta_{m-1}\mathrm{tol}^\star.$$
The claimed bound now follows from Theorem 3.26 since $\|e_m\|_{L^\infty(I_m;H)} \le \delta_m \psi_m \le m\,\bar\delta_m\,\mathrm{tol}^\star \le M\,\bar\delta_M\,\mathrm{tol}^\star$.
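Returning to the smoothness classification above, a quadrature-based evaluation of $\theta_m$ might look as follows; we stress that the precise scaling in the denominator is one standard choice consistent with the embedding $H^1(I_m) \hookrightarrow L^\infty(I_m)$, adopted here as an assumption rather than quoted verbatim from [10].

```python
import numpy as np

def theta(U, dU, a, b, n=32):
    """Smoothness indicator for U on I_m = (a, b), based on the embedding
    H^1(I_m) -> L^inf(I_m): since ||u||_inf^2 <= k^{-1}||u||_{L^2}^2
    + 2*||u||_{L^2}*||u'||_{L^2}, the ratio below always lies in (0, 1] and
    is close to 1 for well-resolved smooth functions. U and dU are callables
    for the local solution and its time derivative."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (a + b)   # quadrature nodes on (a, b)
    jac = 0.5 * (b - a)
    l2 = np.sqrt(jac * np.sum(w * U(t) ** 2))
    h1 = np.sqrt(jac * np.sum(w * dU(t) ** 2))
    sup = np.abs(U(t)).max()                # L^inf approximated by sampling
    k = b - a
    return sup / (l2 / np.sqrt(k) + np.sqrt(k) * h1)

# p-refine when theta(...) >= 0.85, h-refine otherwise (cf. the text above).
```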

Numerical Experiments
In this section, we will apply the adaptive algorithms developed in the previous section to two real-valued initial value problems. In both numerical experiments, we approximate the implicit Galerkin methods (2.1) and (2.3) with an explicit Picard-type iteration; cf. [11].

5.1. Example 1. In this numerical example, we consider (1.1) with the polynomial nonlinearity $F(t, u) = u^2$. Through separation of variables, the exact solution is given by
$$u(t) = \frac{u_0}{1 - u_0 t},$$
which has blow-up time given by $T_\infty = u_0^{-1}$. Note that for any $v_1, v_2 \in \mathbb{R}$, the nonlinearity $F$ satisfies
$$|v_1^2 - v_2^2| \le \big(|v_1| + |v_2|\big)\,|v_1 - v_2|,$$
so we set $L(|v_1|, |v_2|) = |v_1| + |v_2|$. Thus, in this case, the function $\varphi_m$ in (3.24) is explicitly computable. We begin by applying the h-adaptive algorithm to (1.1) with initial condition $u(0) = 1$ for both Galerkin time stepping methods utilizing polynomials up to and including degree $r = 4$. We reduce the tolerance and observe the rate of convergence of the blow-up time error $|T - T_\infty|$. The results given in Figure 1 show that, for both Galerkin methods, the blow-up time error decays at an algebraic rate with respect to the total number of degrees of freedom (#DoFs), with the rate improving as the polynomial degree $r$ is increased. Next, we apply the hp-adaptive algorithm to (1.1) with the same data. We reduce the tolerance and observe exponential convergence of the blow-up time error for both Galerkin methods (i.e., exponential decay in terms of #DoFs, with some constant $b > 0$ in the exponent), as shown in Figure 1.
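In code, the data of this example reads as follows (a small sketch; the variable names are ours):

```python
# Data of Example 1: F(t, u) = u^2 with u(0) = u0.
u0 = 1.0
F = lambda t, u: u * u
T_inf = 1.0 / u0                          # blow-up time T_infty = 1/u0
u_exact = lambda t: u0 / (1.0 - u0 * t)   # exact solution, valid for t < T_inf
# |v1^2 - v2^2| <= (|v1| + |v2|)|v1 - v2| gives the Lipschitz function
L = lambda t, a, b: a + b                 # L(t, a, b) = a + b in (3.1)
```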
For the hp-adaptive algorithm under the hp-cG scheme (2.1), we also observe the effectivity indices of the error estimator (with respect to the reconstruction error), that is, the ratio of the error estimator $\delta_m \psi_m$ to $\|e_m\|_{L^\infty(I_m)}$, over the course of each computational run for different tolerances, and record the best effectivity indices observed in a given computational run versus the inverse of the distance to the blow-up time in Figure 2. The results show that the error estimator seems to account well for the behaviour of the reconstruction error in certain situations, with an effectivity index of 70 for a blow-up problem being particularly impressive; in other situations, however, the effectivity indices observed are much larger. Such a large variation in the observed effectivity indices is actually to be expected since, in order to provide an upper bound for the error in any situation, the error estimator must account for the worst possible scenario for a blow-up problem, which may not actually be realised in practice. Finally, for the hp-adaptive algorithm under the hp-cG scheme (2.1), we plot the value of $\delta_m$ versus the inverse of the distance to the blow-up time over the course of the final computation and display the results in Figure 2. If we denote the distance from the blow-up time at time $t_m$ by $\varepsilon_m := |t_m - T_\infty|$, then Figure 2 implies the relationship $\delta_m \propto \varepsilon_m^{-2}$. Thus, from Corollary 4.3, we infer that there exists some constant $C > 0$ that is independent of the distance to the blow-up time and the maximum time step length such that
$$\max_{1 \le m \le M} \|e_m\|_{L^\infty(I_m)} \le C M \varepsilon_M^{-2} \, \mathrm{tol}^\star.$$
Since $\|u\|_{L^\infty(0,T)} = \varepsilon_M^{-1}$, we conclude that the error estimator blows up at a faster rate than the exact solution in this example; moreover, we infer from Figure 2 that in the best case scenario (for the error estimator), this rate appears to be quasi-optimal in the sense that it mirrors the rate of the reconstruction error. Finally, we remark that this result gives weight to the interpretation of $\bar\delta_M$ as an a posteriori approximation to the blow-up rate of the error on $(0, T)$ for nonlinear $F$.

5.2. Example 2. In this numerical example, we consider (1.1) with the exponential nonlinearity $F(t, u) = e^u$. Through separation of variables, the exact solution is given by
$$u(t) = -\log\big(e^{-u_0} - t\big),$$
which has blow-up time given by $T_\infty = e^{-u_0}$. Note that for any $v_1, v_2 \in \mathbb{R}$, the nonlinearity $F$ satisfies
$$|e^{v_1} - e^{v_2}| \le \tfrac{1}{2}\big(e^{|v_1|} + e^{|v_2|}\big)\,|v_1 - v_2|,$$
so we set $L(|v_1|, |v_2|) = \tfrac{1}{2}\big(e^{|v_1|} + e^{|v_2|}\big)$. Thus, for this nonlinearity, the function $\varphi_m$ in (3.24) is again explicitly computable. As in Example 1, we apply the h-adaptive algorithm to (1.1) with initial condition $u(0) = 1$ for both Galerkin methods utilizing polynomials up to and including degree $r = 4$. We reduce the tolerance and observe the rate of convergence to the blow-up time. The results given in Figure 3 show that, for this example, the blow-up time error again decays at an algebraic rate with respect to the total number of degrees of freedom for both Galerkin methods, with the rate improving as the polynomial degree $r$ is increased. Next, we apply the hp-adaptive algorithm to (1.1) with the same data. As in Example 1, the tolerance is reduced, and the results given in Figure 3 show exponential convergence to the blow-up time (with some constant $b > 0$ in the exponent) for both Galerkin methods. Additionally, for the hp-adaptive algorithm under the hp-cG scheme (2.1), we observe the effectivity indices (with respect to the reconstruction error) over the course of each computational run for different tolerances and record the best effectivity indices observed in a given computational run in Figure 4. The range of effectivity indices observed over all computational runs is comparable to the range observed in the previous example, with large variation and an effectivity index of around 60 in the best case, cf. Figure 4.
Finally, for the hp-adaptive algorithm under the hp-cG scheme (2.1), we plot the value of $\delta_m$ over the course of the final computation with the results displayed in Figure 4. With the notation from the previous example, we deduce that $\delta_m \propto \varepsilon_m^{-1}$ for this example. Thus, from Corollary 4.3, we infer that there exists some constant $C > 0$ that is independent of the distance to the blow-up time and the maximum time step length such that
$$\max_{1 \le m \le M} \|e_m\|_{L^\infty(I_m)} \le C M \varepsilon_M^{-1} \, \mathrm{tol}^\star.$$
Since $\|u\|_{L^\infty(0,T)} = \log(\varepsilon_M^{-1})$, we remark that the error estimator blows up at a faster rate than the exact solution in this example as well. Moreover, Figure 4 implies that this rate is again quasi-optimal in the best case scenario for the error estimator. To conclude, we remark that this result gives additional weight to the interpretation of $\bar\delta_M$ as an a posteriori approximation to the blow-up rate of the error on $(0, T)$ when $F$ is nonlinear.

Conclusions
In this paper, we derived a conditional a posteriori error bound through a continuation argument for Galerkin time discretizations of general nonlinear initial value problems; the derived bound was then used to drive h and hp versions of an adaptive algorithm designed to approximate the blow-up time. Numerical experiments indicate that the h-version of the adaptive algorithm attains algebraic convergence towards the blow-up time while the hp-version of the adaptive algorithm attains exponential convergence towards the blow-up time. This work constitutes an important step towards deriving hp-version a posteriori error bounds for the semilinear heat equation that are robust with respect to the distance from the blow-up time, thereby extending the results in the works [4,14].