hp-Adaptive Galerkin Time Stepping Methods for Nonlinear Initial Value Problems

This work is concerned with the derivation of an a posteriori error estimator for Galerkin approximations to nonlinear initial value problems with an emphasis on finite-time existence in the context of blow-up. The structure of the derived estimator leads naturally to the development of both h and hp versions of an adaptive algorithm designed to approximate the blow-up time. The adaptive algorithms are then applied in a series of numerical experiments, and the rate of convergence to the blow-up time is investigated.


Introduction
In this paper we focus on continuous Galerkin (cG) and discontinuous Galerkin (dG) time stepping discretizations as applied to abstract initial value problems of the form

u'(t) = F(t, u(t)), t ∈ (0, T), u(0) = u_0. (1.1)

Here, u : (0, T) → H, for some T > 0, denotes the unknown solution with values in a real Hilbert space H with inner product (·, ·)_H and induced norm ‖·‖_H. The initial value u_0 ∈ H prescribes the value of the solution at t = 0, and F : [0, T] × H → H is a possibly nonlinear operator; we emphasize that we include the case of F being unbounded. Note that the existence interval for solutions may be arbitrarily small even for smooth F. Indeed, for certain data the solution of (1.1) can become unbounded in finite time, i.e.,

lim_{t → T_∞^−} ‖u(t)‖_H = ∞ for some 0 < T_∞ < ∞.

This effect is commonly termed finite-time blow-up, or sometimes just blow-up, and the quantity T_∞ is called the blow-up time.
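For a concrete scalar instance of (1.1), take H = R and F(t, u) = u² with u_0 = 1, whose exact solution u(t) = 1/(1 − t) blows up at T_∞ = 1. The following minimal sketch (function names are our own, and explicit Euler merely serves as an illustration, not as one of the schemes analyzed below) shows numerically how the solution grows without bound as t approaches T_∞:

```python
# Exact solution of u' = u^2, u(0) = 1, which blows up at T_inf = 1.
def u_exact(t):
    return 1.0 / (1.0 - t)

# Explicit Euler with a small step; the iterates grow without bound
# as t approaches the blow-up time T_inf = 1.
def euler_u_squared(u0, dt, t_end):
    t, u = 0.0, u0
    while t + dt <= t_end:
        u += dt * u * u
        t += dt
    return u

# The exact value u(0.99) = 100; since the exact solution is convex and
# increasing, the Euler iterates approach this value from below.
print(euler_u_squared(1.0, 1e-4, 0.99))
```

The closer t_end is taken to T_∞, the smaller dt must be for the discrete solution to track the blow-up behaviour, which is precisely the situation the adaptive algorithms of this paper are designed to handle.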
The main contributions of this paper are as follows: • The derivation of conditional a posteriori error bounds for hp-cG and hp-dG approximations to the nonlinear initial value problem (1.1). • The design of efficient adaptive algorithms that lead to accurate approximation of the blow-up time in the case where problem (1.1) exhibits finite time blow-up.
To the best of our knowledge, this is the first time that an adaptive algorithm has been developed for hp-cG and hp-dG time-stepping methods based on rigorous a posteriori error control for problems of the form (1.1). The adaptive procedure that we propose includes both h-adaptive and hp-adaptive variants. Indeed, one of the contributions of this paper is to motivate how we can choose effectively between h- or p-adaptivity locally while taking into account the possible singular behavior of the problem under consideration. In this sense, these results extend the h-adaptive algorithm analyzed for some special cases of (1.1) and Euler-type time discretizations in [5]. In particular, the inclusion of higher order time-stepping schemes and hp-adaptivity allows us to overcome key limitations encountered in [5].

A posteriori error estimators for linear problems tend to be unconditional, that is, they always hold independent of the problem data and the size of the discretization parameters. For nonlinear problems, the situation is more complicated since the existence of a solution to an appropriate error equation (and, thus, of an error bound) usually requires that either the data or the discretization parameters are sufficiently small. As a result, a posteriori error estimators for nonlinear problems tend to be conditional, that is, they only hold provided that an a posteriori verifiable condition (which can be either explicit or implicit) is satisfied. For nonlinear time-dependent problems, there are two commonly used approaches for deriving conditional a posteriori error bounds: continuation arguments, cf. [5,14], and fixed point arguments, cf. [6,15]. The a posteriori error bounds that we derive here are obtained by utilizing a local continuation argument.
Galerkin time stepping methods for initial value problems are based on weak formulations and, for both the cG and dG time stepping schemes, the test spaces consist of polynomials that are discontinuous at the time nodes. In this way, the discrete Galerkin formulations decouple into local problems on each time step, and the discretizations can therefore be understood as implicit one-step schemes. In the literature, Galerkin time stepping schemes have been extensively analyzed for ordinary differential equations (ODEs), cf. [3,7-10,13].
A key feature of Galerkin time stepping methods is their great flexibility with respect to the size of the time steps and the local approximation orders, which makes these schemes well suited to an hp-framework. The hp-versions of the cG and dG time stepping schemes were introduced and analyzed in the works [19,20,22,24]. In particular, in the articles [19,24], which focus on initial value problems with uniform Lipschitz nonlinearities, the use of the Banach fixed point theorem made it possible to prove existence and uniqueness results for the discrete Galerkin solutions which are independent of the local approximation orders; these results have been extended to discrete Peano-type existence results in the context of more general nonlinearities in [12]. We emphasize that the hp-approach is well known for its ability to approximate smooth solutions with possible local singularities at high algebraic or even exponential rates of convergence; see, e.g., [4,20,21,23] for the numerical approximation of problems with start-up singularities. In light of this, a main aim of this paper is to establish through numerical experiments whether or not hp-refinement, utilizing the derived a posteriori error estimator, can lead to exponential convergence towards the blow-up time for the case where (1.1) exhibits finite time blow-up.

Outline
The remainder of our article is organized as follows. In Sect. 2, we introduce the hp-cG and hp-dG time stepping schemes, while in Sect. 3 we develop a posteriori error bounds for these schemes. We design both h- and hp-version adaptive algorithms to approximate the blow-up time in Sect. 4 before applying them to some numerical experiments in Sect. 5. Finally, we draw conclusions and comment on possible future research in Sect. 6.

Notation
Let H denote a real Hilbert space with inner product (·, ·)_H and induced norm ‖·‖_H as before. Given an interval I = (a, b), the Bochner space C(Ī; H) consists of all functions u : Ī → H that are continuous on Ī with values in H. Moreover, for 1 ≤ p ≤ ∞, we define the norm

‖u‖_{L^p(I;H)} := ( ∫_I ‖u(t)‖_H^p dt )^{1/p} for 1 ≤ p < ∞, ‖u‖_{L^∞(I;H)} := ess sup_{t ∈ I} ‖u(t)‖_H,

and we let L^p(I; H) be the space of all measurable functions u : I → H such that the corresponding norm is bounded. Note that L^2(I; H) is a Hilbert space with inner product and norm given by

(u, v)_{L^2(I;H)} := ∫_I (u(t), v(t))_H dt and ‖u‖_{L^2(I;H)} := (u, u)_{L^2(I;H)}^{1/2},

respectively.
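For the finite-dimensional case H = R^d with the Euclidean norm, the Bochner norm above can be approximated by quadrature; the following sketch (function and parameter names are our own) uses the trapezoidal rule:

```python
import numpy as np

# Numerical sketch of the Bochner norm ||u||_{L^p(I; H)} for H = R^d with
# the Euclidean norm, approximated by the trapezoidal rule on n subintervals.
def bochner_lp_norm(u, a, b, p=2, n=2000):
    """u: callable t -> numpy array; returns (int_a^b ||u(t)||^p dt)^(1/p)."""
    ts = np.linspace(a, b, n + 1)
    vals = np.array([np.linalg.norm(u(t)) ** p for t in ts])
    h = (b - a) / n
    return (h * np.sum(0.5 * (vals[:-1] + vals[1:]))) ** (1.0 / p)

# Example: u(t) = (sin t, cos t) has ||u(t)|| = 1, so ||u||_{L^2(0,1;R^2)} = 1.
print(bochner_lp_norm(lambda t: np.array([np.sin(t), np.cos(t)]), 0.0, 1.0))
```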

Galerkin Time Discretizations
In this section, we briefly recall the hp-cG and hp-dG time stepping schemes for the discretisation of (1.1). To this end, we consider partitions 0 = t_0 < t_1 < · · · < t_M = T of the time interval (0, T) into M subintervals I_m := (t_{m−1}, t_m), 1 ≤ m ≤ M. The length k_m := t_m − t_{m−1} (which may be variable) of the time interval I_m is called the time step length. Furthermore, to each interval I_m we associate a polynomial degree r_m ∈ N_0 which takes the role of a local approximation order. Then, given a (real) Hilbert space X ⊆ H and some r ∈ N_0, the set

P^r(J; X) := { p : J̄ → X : p(t) = Σ_{i=0}^r x_i t^i, x_i ∈ X }

signifies the space of all polynomials of degree at most r on an interval J ⊂ R with values in X.
In practical computations, the Hilbert space H on which (1.1) is based will typically be replaced by a finite-dimensional subspace H_m ⊂ H on each interval I_m, 1 ≤ m ≤ M. In this context, it is useful to define the H-orthogonal projection π_m : H → H_m given by

(π_m w, v)_H = (w, v)_H for all w ∈ H and all v ∈ H_m.

hp-cG Time Stepping
With the above definitions, the (fully-discrete) hp-cG time marching scheme is iteratively given as follows: for 1 ≤ m ≤ M, find U|_{I_m} ∈ P^{r_m}(I_m; H_m) such that

∫_{I_m} (U', V)_H dt = ∫_{I_m} (F(t, U), V)_H dt for all V ∈ P^{r_m−1}(I_m; H_m), (2.1)

with the initial condition U(t_0) = π_1 u_0 for m = 1, and U(t_{m−1}) = π_m U(t_{m−1}^−) for m ≥ 2; here, the one-sided limits of a piecewise continuous function U at each time node t_m are given by

U(t_m^±) := lim_{s→0^±} U(t_m + s).

Note that in order to enforce the initial condition on each time step, the local trial space has one degree of freedom more than the local test space in the hp-cG scheme. Furthermore, if H_1 = · · · = H_M, we remark that the hp-cG solution U is globally continuous on [0, T]. Finally, we observe that if r_m = 1 then (2.1) in fact yields the well known Crank-Nicolson method provided that the midpoint quadrature rule is applied to the right-hand side of (2.1).
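As a concrete illustration of the lowest-order case just mentioned, the following sketch (our own naming; a simple Picard iteration stands in for an exact nonlinear solve, mirroring the explicit Picard-type iteration used in the experiments of Sect. 5) performs one step of the implicit midpoint scheme obtained from the cG method with r_m = 1 and midpoint quadrature:

```python
# One step of the implicit midpoint rule for u' = F(t, u): solve
#   v = u + k * F(t + k/2, (u + v)/2)
# by fixed-point (Picard) iteration; for small k this is a contraction.
def crank_nicolson_step(F, t, u, k, iters=50):
    v = u                                   # initial guess: previous value
    for _ in range(iters):
        v = u + k * F(t + 0.5 * k, 0.5 * (u + v))
    return v

# One step for u' = u^2, u(0) = 1, with k = 0.01; exact u(0.01) = 1/0.99.
print(crank_nicolson_step(lambda t, u: u * u, 0.0, 1.0, 0.01))
```

The per-step error of this second-order scheme is O(k³), so the computed value agrees with 1/0.99 to far better than the step size.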
The strong form of (2.1) on I_m is

U' = Π_m^{r_m−1} F(t, U), (2.2)

where Π_m^{r_m−1} denotes the L²-projection operator onto the space P^{r_m−1}(I_m; H_m); see [12] for details. Whilst the strong form (2.2) can be exploited for the purpose of deriving a posteriori error estimates, it is well known that employing such a straightforward approach leads to suboptimal error estimates, cf. [2]. This issue will be addressed in the derivation of our error bound.

hp-dG Time Stepping
The (fully-discrete) hp-dG time marching scheme is iteratively given as follows: for 1 ≤ m ≤ M, find U|_{I_m} ∈ P^{r_m}(I_m; H_m) such that

∫_{I_m} (U', V)_H dt + (U(t_{m−1}^+) − π_m U(t_{m−1}^−), V(t_{m−1}^+))_H = ∫_{I_m} (F(t, U), V)_H dt for all V ∈ P^{r_m}(I_m; H_m), (2.3)

where we set U(t_0^−) := u_0. We observe that if r_m = 0 then (2.3) is in fact just the implicit Euler method provided that the right rectangle quadrature rule is applied to the right-hand side of (2.3).
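To make the lowest-order case concrete, the following sketch (our own naming; a Newton iteration is used for the scalar implicit equation, which is one of several reasonable choices) performs one implicit Euler step, i.e., the dG scheme with r_m = 0 and right-endpoint quadrature:

```python
# Implicit Euler step for u' = F(t, u): solve g(v) = v - u - k*F(t+k, v) = 0
# with Newton's method, given F and its partial derivative dF with respect to u.
def implicit_euler_step(F, dF, t, u, k, iters=30):
    v = u                                    # initial guess: previous value
    for _ in range(iters):
        g = v - u - k * F(t + k, v)
        dg = 1.0 - k * dF(t + k, v)
        v -= g / dg                          # Newton update
    return v

# One step for u' = u^2 with k = 0.01: solves v - 1 - 0.01*v^2 = 0.
v = implicit_euler_step(lambda t, u: u * u, lambda t, u: 2 * u, 0.0, 1.0, 0.01)
print(v)
```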
In order to find the strong formulation of (2.3), we require the use of a lifting operator. More precisely, given some real Hilbert space X ⊆ H, we define L_m^{r_m} : X → P^{r_m}(I_m; X) by

∫_{I_m} (L_m^{r_m}(w), V)_H dt = (w, V(t_{m−1}^+))_H for all V ∈ P^{r_m}(I_m; X), (2.4)

cf. [22, Section 4.1]. Then, in view of this definition with X = H_m and proceeding as in [12], we obtain the strong formulation of the dG time stepping method (2.3) on I_m, viz.,

U' + L_m^{r_m}(U(t_{m−1}^+) − π_m U(t_{m−1}^−)) = Π_m^{r_m} F(t, U), (2.5)

where Π_m^{r_m} denotes the L²-projection onto P^{r_m}(I_m; H_m).

A Posteriori Error Analysis
The goal of this section is to derive L^∞ a posteriori error bounds for the hp-cG and hp-dG approximations to (1.1). To this end, we require some structural assumptions on the nonlinearity F. Specifically, we assume that F satisfies the local H-Lipschitz estimate

‖F(t, v) − F(t, w)‖_H ≤ L(t, ‖v‖_H, ‖w‖_H) ‖v − w‖_H (3.1)

for all v, w ∈ H and t ≥ 0, where the function L is continuous and monotone increasing in the second and third arguments. By employing a classical Picard approach, it can be shown under these assumptions on F that problem (1.1) admits a unique (local) solution u ∈ C([0, T]; H) for T > 0 sufficiently small.

Preliminary Error Bound
In order to remedy the suboptimal error estimates that would result from using (2.2) and (2.5) directly, we follow the approach proposed in [2] by introducing the reconstruction Û_m of U on I_m, given by

Û_m(t) := π_m U(t_{m−1}^−) + ∫_{t_{m−1}}^t Π_m F(s, U(s)) ds, t ∈ Ī_m,

where Π_m denotes the L²-projection onto P^{r_m−1}(I_m; H_m) for the cG scheme and onto P^{r_m}(I_m; H_m) for the dG scheme. This formulation of the reconstruction Û_m is the same as that introduced in [2] for the cG time stepping scheme, and in [17] for the dG time stepping scheme, but with the initial value projected onto the space H_m for convenience; it was also used in [1,16] for the Crank-Nicolson method (i.e., the cG method with r_m = 1). For t ∈ I_m, 1 ≤ m ≤ M, we define the error e(t) := u(t) − U(t) ∈ H, where U is either the hp-cG solution from (2.1) or the hp-dG solution from (2.3). Since we will be dealing with the reconstruction Û_m, it will also be necessary to introduce the reconstruction error given by ê_m(t) := u(t) − Û_m(t). We will proceed with the error analysis by first proving an L^∞-error bound for ê_m.

To formulate the error equation, we begin by substituting (1.1) into the definition of ê_m, viz.,

ê_m'(t) = u'(t) − Û_m'(t) = F(t, u(t)) − Û_m'(t).

Adding and subtracting additional terms yields

ê_m'(t) = F(t, u(t)) − F(t, Û_m(t)) + R(t),

where R denotes the residual given by R(t) := F(t, Û_m(t)) − Û_m'(t). Using the triangle inequality and Bochner's Theorem implies that

‖ê_m(t)‖_H ≤ ‖ê_m(t_{m−1})‖_H + ∫_{t_{m−1}}^t ‖F(s, u(s)) − F(s, Û_m(s))‖_H ds + ∫_{t_{m−1}}^t ‖R(s)‖_H ds.

Moreover, applying the local H-Lipschitz estimate (3.1) together with the monotonicity of L yields (3.7). Here, η_m^res denotes the residual estimator given by

η_m^res := ∫_{I_m} ‖R(s)‖_H ds,

and η_m^proj denotes the projection estimator given in (3.8).

For later intervals, the unknown term ‖ê_m(t_{m−1})‖_H needs to be replaced with the known term ‖ê_{m−1}(t_{m−1})‖_H; to this end, for m ≥ 1, we argue as follows. Recall that the hp-cG method (2.1) satisfies the strong form (2.2) on I_m; by the definition of the L²-projection, an equivalent formulation is available, and substituting it into (3.9) implies the bound (3.12) for the hp-cG method. For the hp-dG method (2.3), we have the strong form (2.5) on I_m; by the definition of the lifting operator L_m^{r_m} from (2.4), we arrive at an equivalent formulation of the same form as for the hp-cG method, so that the corresponding bound (3.16) also holds for the hp-dG method. Applying the triangle inequality to (3.12) and (3.16) yields

‖ê_m(t_{m−1})‖_H ≤ ‖ê_{m−1}(t_{m−1})‖_H + η_m^proj,

with η_m^proj from (3.8). Substituting this result into (3.7) gives (3.18), where ψ_m is chosen such that (3.19) holds. Finally, applying Gronwall's inequality to (3.18) for t ∈ Ī_m, 1 ≤ m ≤ M, yields the following result.
where ψ_m satisfies (3.19) and G_m denotes the corresponding Gronwall growth factor.

Continuation Argument
In order to transform (3.20) into an a posteriori error bound, we will employ a continuation argument, cf., e.g., [5]. To this end, for 1 ≤ m ≤ M, we define the set

I_m := { t ∈ Ī_m : ‖ê_m‖_{L^∞((t_{m−1}, t]; H)} ≤ δψ_m },

where δ > 1 is a parameter to be chosen. Note that I_m is closed since ê_m is time-continuous and, obviously, bounded. Moreover, I_m is assumed to be non-empty; this is certainly true for the first interval since 0 ∈ I_1 and is a posteriori verifiable for later intervals. To state the full error bound, we require some additional definitions. Specifically, for m ≥ 1 we define the function φ_m, whose roots larger than one (when they exist) are denoted by δ_m, and we let δ_0 := 1 for convenience.
We are now ready to establish the following conditional a posteriori error bound for both Galerkin time stepping methods.

Proof We omit the trivial case ψ_m = 0. Since δ_m exists, there is some δ > 1 with φ_m(δ) < 0. Suppose for a moment that the existence of ê_m(t_{m−1}) is taken for granted; then t_{m−1} ∈ I_m, so I_m is non-empty, closed and bounded, and thus has a maximal time t*. Let us first assume that t* < t_m and work towards a contradiction. Substituting the error bound from I_m into (3.20) and using the monotonicity of L implies that the bound ‖ê_m‖_{L^∞((t_{m−1}, t*]; H)} ≤ δψ_m holds with strict inequality; by the continuity of ê_m, the bound then persists slightly beyond t*. This is a contradiction since we had assumed that the set I_m had maximal time t* < t_m.

Hence, it follows that ‖ê_m‖_{L^∞(I_m; H)} ≤ δψ_m. Taking the limit δ → δ_m, we deduce (3.25).

To complete the proof, we note that we can conclude by recursion that ê_m(t_{m−1}) exists if δ_0, …, δ_{m−1} exist and 0 ∈ I_1. Since 0 ∈ I_1 trivially and δ_0, …, δ_{m−1} exist by assumption, the initial supposition that ê_m(t_{m−1}) exists is, in fact, not needed.
In some sense, Û_m is a better approximation to u|_{I_m} than U|_{I_m}; thus, from a practical standpoint, it is often best to use Theorem 3.1 directly. However, for some applications a bound on the error, rather than on the reconstruction error, may be required; we therefore introduce the following corollary. Proof The bound follows directly from rewriting the error as e = ê_m + Û_m − U, together with the triangle inequality and the reconstruction error bound of Theorem 3.1.

Computable Error Bound
In order to obtain fully computable error bounds, we must give an explicit characterization of ψ_m from (3.19). Theorem 3.1 implies that we may take

ψ_m := δ_{m−1} ψ_{m−1} + η_{m−1}^proj + η_m^res. (3.28)

Of the three components that make up the error estimator, the term δ_{m−1}ψ_{m−1} is a bound for the error on the previous time step, while η_{m−1}^proj + η_m^res represents the local (additive) contribution to the error estimator on the current time step. The correct interpretation of δ_m, then, is that it is an a posteriori approximation to the growth rate of the error on I_m; this becomes clear upon consideration of the following simple example. In fact, the following corollary shows that δ_m is the expected local growth rate for globally H-Lipschitz F.

Adaptive Algorithms
The error estimators derived in the previous section will form the basis of an adaptive strategy to estimate the blow-up time of (1.1). In particular, we will consider both an h-adaptive approach and an hp-adaptive approach. For the remainder of this section, we assume that H_1 = · · · = H_M for simplicity, but remark that both adaptive algorithms can easily be modified to account for variable finite-dimensional subspaces.

An h-Adaptive Approach
The first difficulty encountered in the construction of an h-adaptive algorithm is that both (2.1) and (2.3) are implicit methods, which means that the existence of a numerical solution is not guaranteed. It is tempting to conduct an existence analysis such as in [12] to obtain bounds on the length of the time steps needed in order to yield a numerical solution; however, such bounds are inherently pessimistic. Since existence can be determined a posteriori, our h-adaptive algorithm simply reduces the time step length until a numerical solution exists. The second difficulty is how to approximate δ_m; unfortunately, it is impossible to give a precise characterization of how to do this for arbitrary F, primarily because F may be 'pathological'. Fortunately, however, most nonlinearities of interest do not fall into this category. Suppose that F is chosen such that φ_m has, at most, a small finite number of zeros; then it should be possible to approximate the zeros of φ_m numerically. Since δ_m satisfies φ_m(δ_m) = 0, we then only need to check the roots of φ_m and verify that one of our numerical approximations, δ̂_m, satisfies φ_m(δ̂_m) < 0. In our numerical experiments, we employ a Newton method to find δ̂_m with an initial guess close to one on I_1 (the proof of Lemma 3.1 implies that δ_m ≈ 1 for k_m ≈ 0) and an initial guess close to δ_{m−1} on I_m for m ≥ 2; this approach works well for the problems studied.
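The Newton-based root search just described can be sketched as follows; the concrete φ_m of the paper depends on the estimator quantities and is not reproduced here, so the `phi` below is a stand-in with a known root, and the numerically differenced derivative is our own simplification:

```python
# Newton iteration to approximate a root of phi, with the derivative
# replaced by a central finite difference; the initial guess x0 is taken
# close to 1 on the first interval and close to the previous root thereafter.
def newton_root(phi, x0, tol=1e-12, itmax=100, h=1e-7):
    x = x0
    for _ in range(itmax):
        fx = phi(x)
        if abs(fx) < tol:
            return x
        dfx = (phi(x + h) - phi(x - h)) / (2.0 * h)   # central difference
        x -= fx / dfx
    return x

# Stand-in example: phi(d) = d^2 - 4 has a root at d = 2.
print(newton_root(lambda d: d * d - 4.0, 1.5))
```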
As is standard in finite element algorithms for linear problems, the time step is halved and the numerical solution recomputed until η_m^res is below the tolerance tol; however, we must also account for the nonlinear scaling that enters through δ_m. The structure of the error estimator implies that the interval I_1 is the most important, since the residual estimator on I_1 propagates through the remainder of the error estimator. Similarly, each successive subinterval I_m is less important than the previous subinterval I_{m−1}, with the term δ_{m−1} measuring the extent to which this is true. To account for this, we increase the tolerance by the factor δ_m after the computations on I_m are complete.
We then advance to the next interval using the previous time step length as a reference and continue in this way until δ m no longer exists; the adaptive algorithm is then terminated and it outputs the total number of degrees of freedom (DoFs) used along with the sum of all the time step lengths, T , as an estimate for T ∞ . The pseudocode for the h-adaptive algorithm is given in Algorithm 1.
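The loop structure just described (step halving until existence and accuracy, tolerance rescaling by δ_m, termination once δ_m no longer exists) can be sketched as follows; `solve_step`, `eta_res` and `find_delta` are hypothetical stand-ins, not the paper's actual routines, and the toy instantiation below uses explicit Euler with crude proxies purely to exercise the control flow:

```python
# Schematic driver mirroring the h-adaptive algorithm: halve the step until
# the step is solvable and eta_res <= tol, rescale tol by delta_m, and stop
# once delta_m no longer exists; T (the sum of steps) then estimates T_inf.
def h_adaptive_blowup(solve_step, eta_res, find_delta, u0, k0, tol):
    t, u, k = 0.0, u0, k0
    while True:
        U = solve_step(u, t, k)
        while U is None or eta_res(U, t, k) > tol:
            k *= 0.5                         # halve until solvable and accurate
            U = solve_step(u, t, k)
        delta = find_delta(U, t, k)
        if delta is None:                    # delta_m no longer exists:
            return t                         # terminate and output T
        tol *= delta                         # relax the tolerance by delta_m
        t, u = t + k, U

# Toy stand-ins: explicit Euler for u' = u^2, a step-size residual proxy, and
# a "delta" that ceases to exist once the solution becomes large.
T = h_adaptive_blowup(
    solve_step=lambda u, t, k: u + k * u * u,
    eta_res=lambda U, t, k: k,
    find_delta=lambda U, t, k: None if U > 50.0 else 1.0,
    u0=1.0, k0=0.01, tol=0.02)
print(T)
```

With these stand-ins the driver terminates near t = 1, consistent with the blow-up time of u' = u², u(0) = 1.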

An hp-Adaptive Approach
The basic outline of the hp-adaptive algorithm is the same as that of the h-adaptive algorithm; the only difficulty lies in how we choose locally between h-refinement and p-refinement. The theory of the hp-FEM suggests that p-refinement is superior to h-refinement if the solution is 'smooth enough', so we perform p-refinement if U|_{I_m} is smooth; otherwise, we perform h-refinement. The pseudocode for the hp-adaptive algorithm is very similar to Algorithm 1; the difference lies in replacing the simple time step bisections in lines (8:) and (19:) by the following procedure:

if U|_{I_{m+1}} is smooth then
  r_{m+1} ← r_{m+1} + 1.
else
  k_{m+1} ← k_{m+1}/2.
end if
Compute U|_{I_{m+1}} and determine η_{m+1}^res.
Algorithm 1 (excerpt):
3: while U|_{I_1} does not exist do
4:   k_1 ← k_1/2, and attempt to compute U|_{I_1}.
5: end while
6: Compute η_1^res.
7: while η_1^res > tol do
8:   k_1 ← k_1/2, compute U|_{I_1} and determine η_1^res.
9: end while
10: Set m = 0 and attempt to compute δ_1.
11: while δ_{m+1} exists do
12:   m ← m + 1, tol ← δ_m · tol, r_{m+1} = r_m, k_{m+1} = k_m.
13:   Attempt to compute U|_{I_{m+1}}.
14:   while U|_{I_{m+1}} does not exist do
15:     …

In the numerical experiments, we consider only real-valued ODEs, and so we remark specifically on the estimation of smoothness in this special case. There are many ways to estimate the smoothness of the numerical solution (see [18] for an overview); we choose a computationally simple approach from [11] based on Sobolev embeddings. Here, the basic idea is to monitor the constant in the Sobolev embedding H^1(I_m) ↪ L^∞(I_m) in order to classify a given function as locally smooth or not. Specifically, we define the smoothness indicator θ_m, which, intuitively, takes values close to one if u is smooth and values close to zero otherwise; see the reasoning of [11] for details. Following [11], we characterize U|_{I_m} ∈ P^{r_m}(I_m; R) as smooth if θ_m exceeds a user-defined threshold θ; here, values around θ = 0.85 were observed to produce the best results in our numerical experiments below.

We conclude this section with a corollary on the magnitude of the reconstruction error under either the h-adaptive algorithm or the hp-adaptive algorithm. In order to state the corollary, we require some additional notation. To that end, we denote the initial tolerance by tol* and define the a posteriori approximation to the growth rate of the error on (0, t_m) by

Λ_m := ∏_{i=0}^{m} δ_i.

We are now ready to state the corollary.

Proof To begin the proof, we will first prove by induction that ψ_m ≤ m Λ_{m−1} tol*. For I_1, we have that ψ_1 = η_1^res ≤ tol*, so the base case is verified. Assuming that the bound is true for m − 1, then recalling the definition of ψ_m from (3.28), the fact that η_{m−1}^proj = 0 (since H_1 = · · · = H_M), and the scaling nature of the tolerances in the h-adaptive and hp-adaptive algorithms, that is, η_m^res ≤ Λ_{m−1} tol*, we obtain

ψ_m ≤ δ_{m−1} ψ_{m−1} + η_m^res ≤ (m − 1) Λ_{m−1} tol* + Λ_{m−1} tol* = m Λ_{m−1} tol*.

This completes the proof.
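The Sobolev-embedding-based smoothness classification described above can be sketched as follows. We stress that this is one plausible realization of such an indicator (our own, based on the standard 1D embedding inequality ‖v‖²_{L^∞(a,b)} ≤ ‖v‖²_{L²}/(b−a) + 2‖v‖_{L²}‖v'‖_{L²}), not necessarily the exact formula of [11]; the quadrature is a simple trapezoidal rule:

```python
import numpy as np

# Ratio of ||v||_Linf^2 to the right-hand side of the embedding inequality
#   ||v||_Linf^2 <= ||v||_L2^2/(b-a) + 2*||v||_L2*||v'||_L2
# on I_m = (a, b), sampled on a fine grid; the ratio lies in (0, 1], and
# values near 1 flag v as locally smooth.
def smoothness_indicator(v, dv, a, b, n=4000):
    ts = np.linspace(a, b, n + 1)
    vals, dvals = v(ts), dv(ts)
    h = (b - a) / n
    l2 = np.sqrt(h * np.sum(0.5 * (vals[:-1] ** 2 + vals[1:] ** 2)))
    dl2 = np.sqrt(h * np.sum(0.5 * (dvals[:-1] ** 2 + dvals[1:] ** 2)))
    linf = np.max(np.abs(vals))
    return linf ** 2 / (l2 ** 2 / (b - a) + 2.0 * l2 * dl2)

# A constant function is maximally smooth: the indicator equals 1 exactly.
print(smoothness_indicator(lambda t: np.ones_like(t),
                           lambda t: np.zeros_like(t), 0.0, 1.0))
```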

Numerical Experiments
In this section, we will apply the adaptive algorithms developed in the previous section to two real-valued initial value problems. In both numerical experiments, we approximate the implicit Galerkin methods (2.1) and (2.3) with an explicit Picard-type iteration; cf. [12].

Example 1
In this numerical example, we consider (1.1) with the polynomial nonlinearity F(t, u) = u². Through separation of variables, the exact solution is given by

u(t) = u_0/(1 − u_0 t),

which has blow-up time T_∞ = u_0^{−1}. Note that for any v_1, v_2 ∈ R, the nonlinearity F satisfies

|F(t, v_1) − F(t, v_2)| = |v_1 + v_2||v_1 − v_2| ≤ (|v_1| + |v_2|)|v_1 − v_2|,

so we set L(|v_1|, |v_2|) = |v_1| + |v_2|; thus, in this case, the quantity appearing in (3.23) is explicitly computable. We begin by applying the h-adaptive algorithm to (1.1) with initial condition u(0) = 1 for both Galerkin time stepping methods utilizing polynomials up to and including degree r = 4. We reduce the tolerance and observe the rate of convergence of the blow-up time error |T − T_∞|. The results given in Fig. 1 show that for both Galerkin methods

|T − T_∞| ∝ (#DoFs)^{−(r+1)},

where r is the polynomial degree and #DoFs signifies the total number of degrees of freedom. Next, we apply the hp-adaptive algorithm to (1.1) with the same data. We reduce the tolerance and observe exponential convergence of the blow-up time error for both Galerkin methods, viz.,

|T − T_∞| ∝ exp(−b √#DoFs),

with some constant b > 0, as shown in Fig. 1.
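The exact solution and blow-up time for this example can be verified directly; the following sketch (function names are our own) checks that u(t) = u_0/(1 − u_0 t) satisfies u' = u² numerically via a central difference:

```python
# Example 1: F(t, u) = u^2 with u(0) = u0 > 0; the exact solution
# u(t) = u0 / (1 - u0 * t) blows up at T_inf = 1 / u0.
def u_exact(t, u0=1.0):
    return u0 / (1.0 - u0 * t)

T_inf = 1.0                 # blow-up time for u0 = 1
# The solution grows without bound as t -> T_inf:
print(u_exact(0.9), u_exact(0.99), u_exact(0.999))

# Sanity check of the ODE: u'(t) should equal u(t)^2.
h = 1e-6
deriv = (u_exact(0.5 + h) - u_exact(0.5 - h)) / (2.0 * h)
print(abs(deriv - u_exact(0.5) ** 2))      # small finite-difference residual
```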
For the hp-adaptive algorithm under the hp-cG scheme (2.1), we also record the effectivity indices of the error estimator (with respect to the reconstruction error) over the course of each computational run for different tolerances, and we plot the best effectivity indices observed in a given computational run versus the inverse of the distance to the blow-up time in Fig. 2. The results show that the error estimator seems to account well for the behaviour of the reconstruction error in certain situations, with an effectivity index of 70 for a blow-up problem being particularly impressive; in other situations, however, the effectivity indices observed are much larger. Such a large variation in the effectivity indices can be understood as follows. Since ‖u‖_{L^∞(0,T)} = ε_M^{−1}, where ε_M denotes the distance to the blow-up time, we conclude that the error estimator blows up at a faster rate than the exact solution in this example; moreover, we infer from Fig. 2 that in the best case scenario (for the error estimator), this rate appears to be quasi-optimal in the sense that it mirrors the rate of the reconstruction error. Finally, we remark that this result gives weight to the interpretation of δ_M as an a posteriori approximation to the blow-up rate of the error on (0, T) for nonlinear F.

Example 2
In this numerical example, we consider (1.1) with the exponential nonlinearity F(t, u) = e^u. Through separation of variables, the exact solution is given by

u(t) = −log(e^{−u_0} − t),

which has blow-up time T_∞ = e^{−u_0}. Note that for any v_1, v_2 ∈ R, the mean value theorem implies that the nonlinearity F satisfies

|F(t, v_1) − F(t, v_2)| = |e^{v_1} − e^{v_2}| ≤ (e^{|v_1|} + e^{|v_2|})|v_1 − v_2|,

so we may set L(|v_1|, |v_2|) = e^{|v_1|} + e^{|v_2|}. As in Example 1, we apply the h-adaptive algorithm to (1.1) with initial condition u(0) = 1 for both Galerkin methods utilizing polynomials up to and including degree r = 4. We reduce the tolerance and observe the rate of convergence to the blow-up time. The results given in Fig. 3 show that for this example we also have

|T − T_∞| ∝ (#DoFs)^{−(r+1)}

for both Galerkin methods, where r is the polynomial degree. Next, we apply the hp-adaptive algorithm to (1.1) with the same data. As in Example 1, the tolerance is reduced and the results given in Fig. 3 show exponential convergence to the blow-up time, viz.,

|T − T_∞| ∝ exp(−b √#DoFs),

with some constant b > 0, for both Galerkin methods.
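As for Example 1, the exact solution and blow-up time can be checked directly; the sketch below (our own naming) verifies that u(t) = −log(e^{−u_0} − t) satisfies u' = e^u numerically:

```python
import math

# Example 2: F(t, u) = exp(u) with u(0) = u0; the exact solution
# u(t) = -log(exp(-u0) - t) blows up at T_inf = exp(-u0).
def u_exact(t, u0=1.0):
    return -math.log(math.exp(-u0) - t)

T_inf = math.exp(-1.0)       # blow-up time for u0 = 1
print(u_exact(0.9 * T_inf))  # the solution grows as t approaches T_inf

# Sanity check of the ODE: u'(t) should equal exp(u(t)).
h = 1e-7
deriv = (u_exact(0.1 + h) - u_exact(0.1 - h)) / (2.0 * h)
print(abs(deriv - math.exp(u_exact(0.1))))   # small finite-difference residual
```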
Additionally, for the hp-adaptive algorithm under the hp-cG scheme (2.1), we record the effectivity indices (with respect to the reconstruction error) over the course of each computational run for different tolerances and report the best effectivity indices observed in a given computational run in Fig. 4. A large variation in the effectivity indices is observed over all computational runs, which can be understood as in Example 1. Since ‖u‖_{L^∞(0,T)} = log(ε_M^{−1}), where ε_M again denotes the distance to the blow-up time, we remark that the error estimator blows up at a faster rate than the exact solution in this example as well. Moreover, Fig. 4 implies that this rate is again quasi-optimal in the best case scenario for the error estimator. To conclude, we remark that this result gives additional weight to the interpretation of δ_M as an a posteriori approximation to the blow-up rate of the error on (0, T) when F is nonlinear.