Introduction

Consider the following subdiffusion problem, with \(\alpha \in (0, 1)\),

$$\begin{aligned}&\, _{0}^{C}D_{t}^{\alpha } u(t) + A u(t) = f(t), \quad 0 < t \le T, \end{aligned}$$
(1)
$$\begin{aligned}&u(0) = u_{0}, \end{aligned}$$
(2)

where \(u_{0}\) is the initial value, f is the source function which will be specified later, and \( \, _{0}^{C}D_{t}^{\alpha } u(t)\) denotes the Caputo fractional derivative defined by, see [7],

$$\begin{aligned} \, _{0}^{C}D_{t}^{\alpha } u(t) = \frac{1}{\varGamma (1- \alpha )} \int _{0}^{t} (t-s)^{-\alpha } u^{\prime } (s) \, \mathrm {d}s, \end{aligned}$$

and \(A: {\mathcal {D}} (A) \subset {\mathcal {H}} \rightarrow {\mathcal {H}}\) denotes the elliptic operator satisfying the following resolvent estimate, with some \(\pi /2< \theta < \pi \), see, e.g., [25, 37],

$$\begin{aligned} \Vert (z +A)^{-1} \Vert \le C | z|^{-1}, \quad \forall \, z \in \varSigma _{\theta } = \{ z \in {\mathbb {C}} \backslash \{0\}: \; | \text{ arg } \, z | \le \theta \}. \end{aligned}$$
(3)

Here \({\mathcal {H}}\) denotes a suitable Hilbert space. For example, \({\mathcal {H}} = L_{2}(\varOmega )\) and \(A= - \varDelta \) with domain \(D(A) = H^{2}(\varOmega ) \cap H_{0}^{1}(\varOmega )\). Here \(L_{2}(\varOmega ), H_{0}^{1}(\varOmega )\) and \(H^{2}(\varOmega )\) denote the standard Sobolev spaces and \(\varOmega \subset {\mathbb {R}}^{d}, d=1, 2, 3\), is a domain with smooth boundary \(\partial \varOmega \).
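For readers who wish to experiment numerically, the Caputo derivative above can be evaluated by differentiating the piecewise linear interpolant of u under the integral, which yields the classical L1 quadrature. The following Python sketch (the function name `caputo_l1` is ours, not from the paper) checks it against the exact value \(\, _{0}^{C}D_{t}^{\alpha } t = t^{1-\alpha }/\varGamma (2-\alpha )\); for \(u(t)=t\) the interpolant is exact, so the quadrature reproduces the derivative up to rounding.

```python
import math

def caputo_l1(u_nodes, tau, alpha):
    """Caputo derivative at t_n of the piecewise-linear interpolant
    through (t_j, u_nodes[j]) -- the classical L1 quadrature."""
    n = len(u_nodes) - 1
    c = tau**(-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for j in range(n):
        w = (n - j)**(1 - alpha) - (n - j - 1)**(1 - alpha)
        total += (u_nodes[j + 1] - u_nodes[j]) * w
    return c * total

alpha, tau, N = 0.5, 0.01, 100
u_nodes = [j * tau for j in range(N + 1)]        # u(t) = t is piecewise linear
t = N * tau
exact = t**(1 - alpha) / math.gamma(2 - alpha)   # D^alpha t = t^{1-alpha}/Gamma(2-alpha)
approx = caputo_l1(u_nodes, tau, alpha)
print(abs(approx - exact))
```

Since u is itself piecewise linear here, the printed error is of the order of machine rounding only.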

Note that \(z^{\alpha } \in \varSigma _{\theta ^{\prime }}\) with \( \theta ^{\prime } = \alpha \theta < \pi \) for all \( z \in \varSigma _{\theta }\), which implies, by (3), that with some \(\pi /2< \theta < \pi \),

$$\begin{aligned} \Vert (z^{\alpha } +A)^{-1} \Vert \le C |z|^{-\alpha } , \quad \forall \, z \in \varSigma _{\theta } = \{ z \in {\mathbb {C}} \backslash \{0\}: \; | \text{ arg } \, z | \le \theta \}. \end{aligned}$$
(4)
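When A is in addition self-adjoint and positive definite (as for \(A = -\varDelta \) above), the resolvent estimate (3) holds with the explicit constant \(C = 1/\sin \theta \), since an elementary computation gives \(|z + \lambda | \ge |z| \sin \theta \) for \(|\arg z| \le \theta < \pi \) and \(\lambda \ge 0\). A quick numerical check for a diagonal model of A (the sample spectrum below is an arbitrary choice of ours, not from the paper):

```python
import numpy as np

# For a diagonal (hence self-adjoint, positive definite) A with eigenvalues
# lams, ||(z+A)^{-1}|| = max_i 1/|z+lams_i|.  We verify the resolvent bound
# ||(z+A)^{-1}|| <= |z|^{-1}/sin(theta) over a sample of z in Sigma_theta.
theta = 3 * np.pi / 4
lams = np.array([0.1, 1.0, 10.0, 100.0])   # model spectrum (arbitrary sample)
C = 1.0 / np.sin(theta)                    # explicit constant in the SPD case

worst = 0.0
for r in [1e-3, 1e-1, 1.0, 1e2]:
    for phi in np.linspace(-theta, theta, 41):
        z = r * np.exp(1j * phi)
        res_norm = np.max(1.0 / np.abs(z + lams))   # ||(z+A)^{-1}||
        worst = max(worst, res_norm * abs(z))
print(worst, "<=", C)
```

The worst observed value of \(\Vert (z+A)^{-1}\Vert \, |z|\) stays below \(1/\sin \theta \approx 1.414\) for \(\theta = 3\pi /4\); for general sectorial (non-self-adjoint) A only the existence of some constant C is asserted by (3).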

Let \(S_{h} \subset H_{0}^{1}(\varOmega )\) denote the piecewise linear finite element space defined on a triangulation of \(\varOmega \). The finite element method for (1), (2) is to find \(u_{h}(t) \in S_{h}\) such that, for \(u_{0} \in L_{2}(\varOmega )\) and \(f(t) \in L_{2}(\varOmega )\) with \(f_{h}(t) = P_{h} f (t)\),

$$\begin{aligned}&\, _{0}^{C}D_{t}^{\alpha } u_{h}(t) + A_{h} u_{h}(t) = f_{h}(t), \quad 0 < t \le T, \end{aligned}$$
(5)
$$\begin{aligned}&u_{h}(0) = u_{0h} =P_{h} u_{0}, \end{aligned}$$
(6)

where \(A_{h}: S_{h} \rightarrow S_{h}\) denotes the discrete analogue of the elliptic operator A defined by, see Thomée [37, Chapter 12],

$$\begin{aligned} (A_{h} \psi , \chi ) = A( \psi , \chi ), \quad \forall \psi , \chi \in S_{h}, \end{aligned}$$

where \(A(\cdot , \cdot )\) denotes the bilinear form associated with A. Here \(P_{h}: L_{2}(\varOmega ) \rightarrow S_{h}\) denotes the \(L_{2}\) projection operator defined by

$$\begin{aligned} (P_{h} v, \chi ) = ( v, \chi ), \quad \forall \chi \in S_{h}. \end{aligned}$$
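In one space dimension on a uniform mesh, computing \(P_{h}\) amounts to solving a linear system with the tridiagonal mass matrix \(M_{ij} = (\varphi _{j}, \varphi _{i})\), whose interior rows are \((h/6)[1, 4, 1]\) for hat functions \(\varphi _{i}\). A minimal sketch (assuming homogeneous Dirichlet conditions on [0, 1]; the helper names are ours), verifying the defining property that \(P_{h} v = v\) whenever \(v \in S_{h}\):

```python
import numpy as np

# Mass matrix of 1D piecewise-linear hat functions on a uniform mesh of [0,1]
# with homogeneous Dirichlet boundary conditions: M_ij = (phi_j, phi_i),
# tridiagonal with rows (h/6)[1, 4, 1].
def mass_matrix(n, h):
    return (h / 6.0) * (4.0 * np.eye(n)
                        + np.eye(n, k=1)
                        + np.eye(n, k=-1))

n = 9
h = 1.0 / (n + 1)
M = mass_matrix(n, h)

# Take v already in S_h, with the nodal values below; then (v, phi_i) = (M c)_i
# exactly, and P_h v = M^{-1}(v, phi) recovers the nodal values of v.
coeffs = np.sin(np.pi * h * np.arange(1, n + 1))
rhs = M @ coeffs                       # exact load vector since v is in S_h
proj = np.linalg.solve(M, rhs)         # nodal values of P_h v
print(np.max(np.abs(proj - coeffs)))   # ~ machine precision: P_h v = v on S_h
```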

Note that, since \(S_{h}\) is a subspace of \(L_{2}(\varOmega )\), the minimal eigenvalue of \(A_{h}\) is bounded below by that of A. Hence we have, with some \(\pi /2< \theta < \pi \), see Lubich et al. [25, page 6],

$$\begin{aligned} \Vert (z^{\alpha } +A_{h})^{-1} \Vert \le C |z|^{-\alpha } , \quad \forall \, z \in \varSigma _{\theta } = \{ z \in {\mathbb {C}} \backslash \{0\}: \; | \text{ arg } \, z | \le \theta \}. \end{aligned}$$
(7)

Let \( v_{h}(t) = u_{h}(t) - u_{0h}\). Then (5), (6) can be written equivalently as

$$\begin{aligned}&\, _{0}^{C}D_{t}^{\alpha } v_{h}(t) + A_{h} v_{h}(t) = f_{h}(t) -A_{h} u_{0h}, \quad 0< t \le T, \end{aligned}$$
(8)
$$\begin{aligned}&v_{h}(0) = 0. \end{aligned}$$
(9)

In this work, we shall construct and analyze a continuous Galerkin time stepping method for solving (8), (9). It is more convenient to analyze the time stepping method for (8), (9) than for (5), (6), see Lubich et al. [25].

Many application problems are modeled by (1), (2), see, e.g., [1, 9, 10, 32], etc. In general it is not possible to find the analytic solution of (1), (2), so we need to design and analyze numerical methods for solving (1), (2). Note that the Caputo fractional derivative is a nonlocal operator, which makes the numerical analysis of the subdiffusion equation (1), (2) more difficult than that of the diffusion equation with an integer order derivative. There are two popular ways to approximate the Caputo fractional derivative in the literature. One is to use the convolution quadrature formula, see, e.g., [25, 26, 46]. The other is to use L-type schemes, see, e.g., [16, 20,21,22,23, 34, 45], etc. Under the assumption that the solutions of (1), (2) are sufficiently smooth, the numerical methods based on both convolution quadrature and L-type schemes have the optimal convergence orders, see [23, 34], etc. However, Stynes et al. [36] and Stynes [35] showed that in general the solutions of (1), (2) have limited regularity: they behave like \(O(t^{\alpha })\) even for smooth initial data, which implies that the solutions of (1), (2) are not in \(C^{1}[0, T]\), see also [33]. Hence the higher order numerical methods constructed by convolution quadrature and L-type schemes for solving (1), (2) achieve only first order \(O(\tau )\) convergence for both smooth and nonsmooth data, see, e.g., [11, 41], etc. To recover the optimal convergence orders of the higher order numerical methods for solving (1), (2), one may use corrected schemes that adjust the weights of the starting steps, see, e.g., [12, 42], etc., or use graded meshes to capture the singularities of the solutions of the subdiffusion problems, see, e.g., [16, 36, 45], etc. 
Discontinuous Galerkin methods are also well studied for solving fractional subdiffusion equations, see, e.g., [29,30,31] and the references therein. There are other numerical methods for solving fractional partial differential equations, see, e.g., [2,3,4,5,6, 8, 13,14,15, 24, 27, 28, 39, 43, 44, 46], etc.

Recently, Li et al. [17] analyzed the L1 scheme for solving the superdiffusion problem with the fractional order \(\alpha \in (1, 2)\) based on the Petrov–Galerkin method. This scheme was first proposed and analyzed by Sun and Wu [34] in the framework of finite difference methods under the assumption that the solution of the problem is sufficiently smooth. Li et al. [17] proved that, without any regularity assumptions on the solution of the problem and without using corrections of the weights or graded meshes, the L1 scheme has the convergence order \(O(\tau ^{3-\alpha }), \alpha \in (1,2 )\) for both smooth and nonsmooth data when the elliptic operator A is assumed to be self-adjoint, positive semidefinite and densely defined in a suitable Hilbert space, see also [18, 19], etc.

The purpose of this paper is to consider the continuous Galerkin method for solving the subdiffusion problem with \(\alpha \in (0, 1)\) by using an argument similar to that in Li et al. [17] for the superdiffusion problem with \(\alpha \in (1,2)\). We prove that, without any regularity assumptions on the solution of the problem and without using corrections of the weights or graded meshes, the proposed time stepping method has the convergence order \(O(\tau ^{1+\alpha }), \, \alpha \in (0, 1)\) for general sectorial elliptic operators satisfying the resolvent estimate (3).

The main contributions of this paper are the following:

  1.

    A continuous Galerkin time stepping method for solving subdiffusion problem is introduced for general sectorial elliptic operators satisfying the resolvent estimate (3).

  2.

    The convergence order \(O(\tau ^{1+\alpha }), \alpha \in (0, 1) \) of the proposed time stepping method for solving homogeneous subdiffusion problem is proved by using the Laplace transform method.

  3.

    The convergence order \(O(\tau ^{1+\alpha }), \alpha \in (0, 1) \) of the proposed time stepping method for solving inhomogeneous subdiffusion problem is also proved by using the Laplace transform method.

The paper is organized as follows. In Sect. 2, we introduce the continuous Galerkin time stepping method for solving the subdiffusion problem. In Sect. 3, the error estimates of the proposed time stepping method are proved by using the discrete Laplace transform method for the subdiffusion problems. In Sect. 4, some numerical examples for both fractional ordinary differential equations and subdiffusion problems are given to verify the theoretical results. An Appendix with three lemmas is given in Sect. 5.

By C we denote a positive constant independent of the functions and parameters concerned, but not necessarily the same at different occurrences. We assume that \(\alpha \in (0, 1)\) in this paper and, for simplicity, we will not explicitly write this assumption in many places.

A Continuous Galerkin Time Stepping Method for (8), (9)

In this section, we shall introduce a continuous Galerkin time stepping method for solving (8), (9).

Let \(0 = t_{0}< t_{1}< \dots < t_{N}=T\) be a uniform partition of [0, T] with time step size \(\tau = T/N\). We define the following trial space, with \(k=0, 1, 2, \dots , N-1\) and \(q=2\),

$$\begin{aligned} W^{1} = \{ X \in H^1(0, T; S_{h}) : \; X |_{[t_{k}, t_{k+1}]} = \sum _{j=0}^{q-1} \psi _{j} t^{j}, \; \psi _{j} \in S_{h} \}, \end{aligned}$$

and the test space

$$\begin{aligned} W^{0} = \{ X \in L^{2} (0, T; S_{h}) : \; X |_{(t_{k}, t_{k+1})} = \psi , \; \psi \in S_{h} \}, \end{aligned}$$

respectively. We remark that the trial space \(W^{1}\) consists of functions continuous in time, while the test space \(W^{0}\) consists of piecewise constant functions discontinuous in time. Due to the continuity constraint, the trial space \(W^{1}\) is of one polynomial degree higher than the test space \(W^{0}\).

Let \(u_{0} \in L_{2}(\varOmega )\) and \( f \in L^{2}(0, T; L_{2}(\varOmega ))\). The continuous Galerkin time stepping method for solving (8), (9) is to find \(V_{h} \in W^{1}\) with \(V_{h}(0)=0\), such that

$$\begin{aligned}&\int _{0}^{T} \Big ( \, _{0}^{R} D_{t} ^{\alpha } V_{h}(t), \chi \Big ) \, \mathrm {d}t +\int _{0}^{T} A \big (V_{h}(t), \chi \big ) \, \mathrm {d}t \nonumber \\&\quad = \int _{0}^{T} (f_{h}(t), \chi ) \, \mathrm {d}t - \int _{0}^{T} (A_{h} u_{0h}, \chi ) \, \mathrm {d}t, \quad \forall \chi \in W^{0}, \end{aligned}$$
(10)

where \(\, _{0}^{R} D_{t} ^{\alpha } V_{h} (t)\) denotes the Riemann-Liouville fractional derivative and \((\cdot , \cdot )\) denotes the inner product in \(L_{2}(\varOmega )\).

Let \(V_{h}^{k} = V_{h}(t_{k}), \, k=0, 1, 2, \dots , N\) denote the approximate solution of \(v_{h}(t_{k})\) in (8), (9). We shall show that the solutions \(V_{h}^{k}, k=0, 1, 2, \dots , N\) of (10) satisfy the following abstract operator form: with \(V_{h}^{0}= V_{h}(0)=0\),

$$\begin{aligned}&V_{h}^{1} (b_{k+1}-b_{k}) + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k}+ V_{h}^{k+1} \big ) \nonumber \\&\quad = \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t+ \tau ^{\alpha } (-A_{h} u_{0h}), \quad \text{ for } \; k=0, \end{aligned}$$
(11)
$$\begin{aligned}&V_{h}^{1} ( b_{k+1} - b_{k}) + \sum _{j=1}^{k} \Big ( V_{h}^{j+1}- 2 V_{h}^{j} + V_{h}^{j-1} \Big ) (b_{k-j+1}- b_{k-j}) + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k} + V_{h}^{k+1} \big ) \nonumber \\&\quad = \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t+ \tau ^{\alpha } (-A_{h} u_{0h} ), \quad \text{ for } \; k=1, 2, \dots , N-1, \end{aligned}$$
(12)

where

$$\begin{aligned} b_{j}= \frac{j^{2-\alpha }}{\varGamma (3- \alpha )}, \quad j=0, 1, 2, \dots , N. \end{aligned}$$
(13)

In fact, we have, on each \((t_{k}, t_{k+1}), \; k=0, 1, 2, \dots , N-1\),

$$\begin{aligned}&\int _{t_{k}}^{t_{k+1}} \Big ( \, _{0}^{R}D_{t}^{\alpha } V_{h}(t), \, \chi \Big ) \, \mathrm {d}t + \int _{t_{k}}^{t_{k+1}} A(V_{h}(t), \chi ) \, \mathrm {d}t \\&\quad = \int _{t_{k}}^{t_{k+1}} ( f_{h}(t), \, \chi ) \, \mathrm {d}t - \int _{t_{k}}^{t_{k+1}} ( A_{h} u_{0h}, \, \chi ) \, \mathrm {d}t, \quad \forall \; \chi \in W^{0}. \end{aligned}$$

On each subinterval \((t_{k}, t_{k+1}), k=1, 2, \dots , N-1\) (the subinterval \((t_{0}, t_{1})\) may be treated similarly), we may write, for all \(\chi \in W^{0}\),

$$\begin{aligned}&\int _{t_{k}}^{t_{k+1}} \Big ( \, _{0}^{R}D_{t}^{\alpha } V_{h}(t), \, \chi \Big ) \, \mathrm {d}t = \int _{t_{k}}^{t_{k+1}} \Big ( D^{1} \, _{0}^{R}D_{t}^{\alpha -1} V_{h}(t), \, \chi \Big ) \, \mathrm {d}t \\&\quad = \int _{t_{k}}^{t_{k+1}} \Big ( \Big [ \frac{1}{\varGamma (1- \alpha )} \Big ( \int _{0}^{t_{1}} + \int _{t_{1}}^{t_{2}} + \dots + \int _{t_{k-1}}^{t_{k}} + \int _{t_{k}}^{t} \Big ) (t-s)^{-\alpha } V_{h}(s) \, ds \Big ]^{\prime }, \chi \Big ) \, \mathrm {d}t. \end{aligned}$$

Note that \(V_{h} \in W^{1}\) and therefore \( V_{h}(t) = V_{h}(t_{k}) + \big (V_{h}(t_{k+1}) - V_{h}(t_{k}) \big ) \frac{t- t_{k}}{\tau }\) for \( t \in (t_{k}, t_{k+1}), \, k=0, 1, \dots , N-1\). Hence we have, for all \(\chi \in W^{0}\), with \(k=1, 2, \dots , N-1\),

$$\begin{aligned}&\int _{t_{k}}^{t_{k+1}} \frac{1}{\varGamma (1- \alpha )} \Big ( \Big [ \int _{0}^{t_{1}} (t-s)^{-\alpha } V_{h}(s) \, ds \Big ]^{\prime }, \chi \Big ) \, \mathrm {d}t \\&\quad = \frac{\big (V_{h}^{1}, \chi \big ) }{\tau } \Big [ \frac{\tau ^{2-\alpha }}{\varGamma (3- \alpha )} \Big ( (k+1)^{2-\alpha } - 2 k^{2-\alpha } + (k-1)^{2- \alpha } \Big ) \\&\qquad -\frac{\tau ^{2- \alpha }}{\varGamma (2- \alpha )} ( k^{1- \alpha } -(k-1)^{1- \alpha }) \Big ], \end{aligned}$$

and, with \(l=2, 3, \dots , k\),

$$\begin{aligned}&\int _{t_{k}}^{t_{k+1}} \frac{1}{\varGamma (1- \alpha )} \Big ( \Big [ \int _{t_{l-1}}^{t_{l}} (t-s)^{-\alpha } V_{h}(s) \, ds \Big ]^{\prime }, \, \chi \Big ) \, \mathrm {d}t \\&\quad = \big (V_{h}^{l-1}, \chi \big ) \frac{\tau ^{1-\alpha }}{\varGamma (2- \alpha )} \Big [ (k-l+2)^{1-\alpha } - 2 (k-l+1)^{1-\alpha } + (k-l)^{1-\alpha } \Big ] \\&\qquad + \frac{\big ( V_{h}^{l} - V_{h}^{l-1}, \chi \big ) }{\tau } \Big [ \frac{\tau ^{2-\alpha }}{\varGamma (3- \alpha )} \Big ( (k-l+2)^{2-\alpha } - 2 (k-l+1)^{2-\alpha } + (k-l)^{2- \alpha } \Big ) \\&\qquad -\frac{\tau ^{2- \alpha }}{\varGamma (2- \alpha )} \Big ( (k-l+1)^{1- \alpha } -(k-l)^{1- \alpha } \Big ) \Big ], \end{aligned}$$

and

$$\begin{aligned}&\int _{t_{k}}^{t_{k+1}} \frac{1}{\varGamma (1- \alpha )} \Big ( \Big [ \int _{t_{k}}^{t} (t-s)^{-\alpha } V_{h}(s) \, ds \Big ]^{\prime }, \, \chi \Big ) \, \mathrm {d}t \\&\quad = ( V_{h}^{k}, \chi ) \frac{\tau ^{1-\alpha }}{\varGamma (2- \alpha )} \Big ( 1^{1-\alpha } - 0^{1-\alpha } \Big ) + \frac{\big ( V_{h}^{k+1} - V_{h}^{k}, \chi \big ) }{\tau } \frac{\tau ^{2-\alpha }}{\varGamma (3- \alpha )} \Big ( 1^{2-\alpha } - 0^{2-\alpha } \Big ). \end{aligned}$$

Hence we have

$$\begin{aligned}&\int _{t_{k}}^{t_{k+1}} \Big ( \, _{0}^{R}D_{t}^{\alpha } V_{h}(t), \, \chi \Big ) \, \mathrm {d}t \\&\quad = \tau ^{1-\alpha } \Big [ (V_{h}^{1}, \chi ) (b_{k+1} - b_{k}) + \Big ( V_{h}^{2} - 2V_{h}^{1} + V_{h}^{0}, \chi \Big ) (b_{k}- b_{k-1}) \\&\qquad + \dots + \Big (V_{h}^{k+1} -2 V_{h}^{k} + V_{h}^{k-1}, \, \chi \Big ) (b_{1}-b_{0}) \Big ], \quad \forall \chi \in W^{0}, \end{aligned}$$

where \(b_{k}, k=0, 1, 2, \dots , N\) are defined in (13).

Further we have, with \(k=0, 1, 2, \dots , N-1\),

$$\begin{aligned} \int _{t_{k}}^{t_{k+1}} A(V_{h}(t), \chi ) \, \mathrm {d}t = \frac{1}{2} \tau A \big ( V_{h}^{k} + V_{h}^{k+1}, \chi \big ), \quad \forall \chi \in W^{0}. \end{aligned}$$

Combining these identities, we obtain the time stepping method (11), (12).
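To make the derivation concrete, the following Python sketch is our own transcription of (11), (12) for the scalar model case \(A_{h} = \lambda > 0\), \(f = 0\), \(u_{0h} = u_{0}\), for which the exact solution is \(u(t) = u_{0} E_{\alpha }(-\lambda t^{\alpha })\) with \(E_{\alpha }\) the Mittag-Leffler function; the observed errors at \(t = T\) are consistent with the \(O(\tau ^{1+\alpha })\) rate proved below.

```python
import math

def ml(alpha, x, K=80):
    # truncated Mittag-Leffler series E_alpha(x); adequate for |x| <= 1
    return sum(x**k / math.gamma(alpha * k + 1) for k in range(K))

def cg_scalar(alpha, lam, u0, T, N):
    # scheme (11), (12) for A_h = lam, f = 0, u_{0h} = u0; returns u-values
    tau = T / N
    b = [j**(2 - alpha) / math.gamma(3 - alpha) for j in range(N + 1)]
    V = [0.0] * (N + 1)
    denom = b[1] + 0.5 * lam * tau**alpha      # coefficient of the unknown V^{k+1}
    for k in range(N):
        rhs = tau**alpha * (-lam * u0)         # right-hand side of (11), (12)
        if k >= 1:
            s = V[1] * (b[k + 1] - b[k])       # history part of the convolution sum
            for j in range(1, k):
                s += (V[j + 1] - 2 * V[j] + V[j - 1]) * (b[k - j + 1] - b[k - j])
            # j = k term of the sum and the trapezoidal term, V^{k+1} kept on the left
            rhs -= s + (-2 * V[k] + V[k - 1]) * b[1] + 0.5 * lam * tau**alpha * V[k]
        V[k + 1] = rhs / denom
    return [u0 + v for v in V]                 # u_h = v_h + u_{0h}

alpha, lam, u0, T = 0.5, 1.0, 1.0, 1.0
exact = u0 * ml(alpha, -lam * T**alpha)        # u(T) = u0 E_alpha(-lam T^alpha)
errs = [abs(cg_scalar(alpha, lam, u0, T, N)[-1] - exact) for N in (16, 64)]
print(errs)
```

Refining from N = 16 to N = 64 reduces the final-time error by much more than the first-order factor 4, in line with the expected order \(1+\alpha \).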

Remark 1

The time stepping method (11), (12) has a form similar to the time discretization scheme introduced in [17, (4)] for the superdiffusion problem with \(1< \alpha <2\).

Error Estimates of the Time Stepping Method (11), (12)

In this section, we will show the error estimates of the abstract time stepping method (11), (12) by using the Laplace transform method proposed originally by Lubich et al. [25] and developed by Jin et al. [11, 12], Yan et al. [42] and Wang et al. [38], etc.

The Homogeneous Case with \(f=0\) and \(u_{0} \ne 0\)

In this subsection, we will consider the error estimates of the time stepping method (11), (12) with \(f=0\) and \(u_{0} \ne 0\). We thus consider the following homogeneous problem, with \(v_{h}(0)=0\) and \(u_{0h} = P_{h} u_{0}, \, u_{0} \in L_{2}(\varOmega )\),

$$\begin{aligned}&\, _{0}^{C}D_{t}^{\alpha } v_{h}(t) + A_{h} v_{h}(t) = -A_{h} u_{0h}, \quad 0< t \le T. \end{aligned}$$
(14)

The abstract time stepping method (11), (12) for solving (14) is now reduced to, with \(V_{h}^{0}=0\),

$$\begin{aligned}&V_{h}^{1} (b_{k+1}-b_{k}) + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k}+ V_{h}^{k+1} \big ) = \tau ^{\alpha } (-A_{h} u_{0h}), \quad \text{ for } \; k=0, \end{aligned}$$
(15)
$$\begin{aligned}&V_{h}^{1} ( b_{k+1} - b_{k}) + \sum _{j=1}^{k} \Big ( V_{h}^{j+1}- 2 V_{h}^{j} + V_{h}^{j-1} \Big ) (b_{k-j+1}- b_{k-j}) \nonumber \\&\quad + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k} + V_{h}^{k+1} \big ) = \tau ^{\alpha } (-A_{h} u_{0h} ), \quad \text{ for } \; k=1, 2, \dots . \end{aligned}$$
(16)

We then have the following theorem:

Theorem 1

Let \(v_{h}(t_{n})\) and \(V_{h}^{n}, \; n=0, 1, 2, \dots , \) be the solutions of (14) and (15), (16), respectively. Assume that \(u_{0h} = P_{h} u_{0}, \, u_{0} \in L_{2}(\varOmega )\). Then we have

$$\begin{aligned} \Vert V_{h}^{n}- v_{h}(t_{n}) \Vert \le C \big ( \tau ^{1+ \alpha } t_{n}^{-1 - \alpha } + \tau ^{2} t_{n}^{-2} \big ) \Vert u_{0} \Vert . \end{aligned}$$

Proof

Step 1: Find the exact solution of (14). Let \({\hat{v}}_{h}(z)\) denote the Laplace transform of \(v_{h}(t)\). Taking the Laplace transform in (14), we have

$$\begin{aligned} {\hat{v}}_{h} (z) = (z^{\alpha } +A_{h})^{-1} (-A_{h} u_{0h}) z^{-1}, \end{aligned}$$

which implies that, by using the inverse Laplace transform, with \(n=1, 2, \dots ,\)

$$\begin{aligned} v_{h}(t_{n}) = \frac{1}{ 2 \pi i} \int _{\varGamma } e^{t_{n} z} (z^{\alpha } +A_{h})^{-1} \big ( -A_{h} u_{0h} \big ) z^{-1} \, \mathrm {d}z, \end{aligned}$$
(17)

where, with some \(\pi /2< \theta < \pi \),

$$\begin{aligned} \varGamma = \{ z \in {\mathbb {C}}: \; |\text{ arg } \, z | = \theta \} \cup \{ 0 \}, \end{aligned}$$
(18)

with \(\mathfrak {I}z\) running from \(-\infty \) to \(\infty \).

Making the change of variable \(z= {\bar{z}}/\tau \), we may write (17) as

$$\begin{aligned} v_{h}(t_{n}) = \frac{1}{ 2 \pi i} \int _{\varGamma } e^{n {\bar{z}}} ({\bar{z}}^{\alpha } + \tau ^{\alpha } A_{h})^{-1} \big ( -\tau ^{\alpha } A_{h} \big ) u_{0h} {\bar{z}}^{-1} \, \mathrm {d} {\bar{z}}. \end{aligned}$$
(19)

For simplicity of notation, we replace \({\bar{z}}\) by z in (19); then (19) can be written as

$$\begin{aligned} v_{h}(t_{n}) = \frac{1}{ 2 \pi i} \int _{\varGamma } e^{n z} (z^{\alpha } + \tau ^{\alpha } A_{h})^{-1} \big ( -\tau ^{\alpha } A_{h} \big ) u_{0h} z^{-1} \, \mathrm {d}z. \end{aligned}$$
(20)

Step 2: Find the approximate solutions \(V_{h}^{n}, n=1, 2, \dots \) of (15), (16).

Denote

$$\begin{aligned} z_{\tau }^{\alpha } = \frac{2}{e^{z}+1} \psi (z), \quad \text{ or } \quad z_{\tau }= \Big ( \frac{2}{e^{z}+1} \psi (z) \Big )^{1/\alpha }, \end{aligned}$$
(21)

where

$$\begin{aligned} \psi (z) = e^{-z} (e^{z}-1)^{3} {\tilde{b}} (z). \end{aligned}$$
(22)

Here \( {\tilde{b}} (z) = \sum _{j=0}^{\infty } b_{j} e^{-jz}\), with \(b_{j}, j=0, 1, 2, \dots \), defined by (13), denotes the discrete Laplace transform of \(\{ b_{j} \}_{j=0}^{\infty }\).
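The symbol \(z_{\tau }^{\alpha }\) is a consistent approximation of \(z^{\alpha }\): since \({\tilde{b}}(z) \sim z^{\alpha -3}\) as \(z \rightarrow 0^{+}\) while \(e^{-z}(e^{z}-1)^{3} \sim z^{3}\), one has \(z_{\tau }^{\alpha }/z^{\alpha } \rightarrow 1\). This can be checked numerically by truncating the series for \({\tilde{b}}(z)\) (a sketch; the truncation level K is our choice, taken large enough that the tail \(e^{-Kz}\) is negligible):

```python
import math

def z_tau_alpha(z, alpha, K=20000):
    """Evaluate z_tau^alpha from (21), (22), truncating the series for
    b~(z) at K terms (valid for real z > 0 with K*z large)."""
    btilde = sum(j**(2 - alpha) * math.exp(-j * z)
                 for j in range(1, K)) / math.gamma(3 - alpha)   # b_0 = 0
    psi = math.exp(-z) * (math.exp(z) - 1)**3 * btilde           # (22)
    return 2.0 / (math.exp(z) + 1) * psi                         # (21)

alpha = 0.5
for z in (0.2, 0.1, 0.05):
    print(z, z_tau_alpha(z, alpha) / z**alpha)   # ratio tends to 1 as z -> 0
```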

By Lemma 1, we see that \((z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h})^{-1}\) is well defined. Further we shall prove that the solutions \(V_{h}^{n}, n=1, 2, \dots \) of (15), (16) take the following forms:

$$\begin{aligned} V_{h}^{n} = \frac{1}{2 \pi i} \int _{\varGamma _{\tau }} e^{nz} (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h})^{-1} (-\tau ^{\alpha } A_{h}) u_{0h} \Big ( \frac{2}{e^{z}+1} \Big ) \Big (\sum _{j=0}^{\infty } e^{-jz} \Big ) \, \mathrm {d}z, \end{aligned}$$
(23)

where, with \(\varGamma \) defined by (18),

$$\begin{aligned} \varGamma _{\tau } = \{ z \in \varGamma : \; | \mathfrak {I}z| \le \pi \}. \end{aligned}$$
(24)

In fact, multiplying the \((k+1)\)th equation in (15), (16) by \(e^{-kz}, \; k=0, 1, 2, \dots \), we obtain

$$\begin{aligned}&V_{h}^{1} (b_{k+1}-b_{k}) e^{-kz} + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k}+ V_{h}^{k+1} \big ) e^{-kz} = \tau ^{\alpha } (-A_{h} u_{0h}) e^{-kz}, \; \text{ for } \; k=0, \end{aligned}$$
(25)
$$\begin{aligned}&V_{h}^{1} ( b_{k+1} - b_{k}) e^{-kz} + \sum _{j=1}^{k} \Big ( V_{h}^{j+1}- 2 V_{h}^{j} + V_{h}^{j-1} \Big ) (b_{k-j+1}- b_{k-j}) e^{-kz} \nonumber \\&\quad + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k} + V_{h}^{k+1} \big ) e^{-kz} = \tau ^{\alpha } (-A_{h} u_{0h} ) e^{-kz}, \; \text{ for } \; k=1, 2, \dots . \end{aligned}$$
(26)

Summing the equations in (25), (26) from \(k=0\) to \(k=\infty \), we get

$$\begin{aligned}&\sum _{k=0}^{\infty } V_{h}^{1} ( b_{k+1} - b_{k}) e^{-k z} + \sum _{k=1}^{\infty } \Big [ \sum _{j=1}^{k} \big ( V_{h}^{j+1} - 2 V_{h}^{j} + V_{h}^{j-1} \big ) ( b_{k-j+1}- b_{k-j}) \Big ] e^{-kz} \nonumber \\&\quad + \sum _{k=0}^{\infty } \frac{1}{2} \tau ^{\alpha } A_{h} ( V_{h}^{k} + V_{h}^{k+1}) e^{-kz} = \sum _{k=0}^{\infty } \tau ^{\alpha } (-A_{h} u_{0h} ) e^{-kz}. \end{aligned}$$
(27)

We remark that the second summation in (27) starts from \(k=1\) since the left hand side of (25) only has two terms. Note that, since \(b_{0}=0\),

$$\begin{aligned} \sum _{k=0}^{\infty } (V_{h}^{1}) (b_{k+1} -b_{k}) e^{-kz} = V_{h}^{1} \Big ( \sum _{k=0}^{\infty } b_{k+1} e^{-k z} - \sum _{k=0}^{\infty } b_{k} e^{-kz} \Big ) = V_{h}^{1} {\tilde{b}}(z) (e^{z} -1). \end{aligned}$$

Let \({\widetilde{V}}_{h} (z) = \sum _{n=0}^{\infty } V_{h}^{n} e^{-nz}\) denote the discrete Laplace transform of \(\{ V_{h}^{n} \}_{n=0}^{\infty }\). We have, after some simple calculations,

$$\begin{aligned} \sum _{k=1}^{\infty } \Big ( \sum _{j=1}^{k} V_{h}^{j+1} b_{k-j+1} \Big ) e^{-kz}&={\tilde{b}} (z) e^{2z} \Big ( {\widetilde{V}}_{h}(z)- V_{h}^{1} e^{-z} \Big ), \\ \sum _{k=1}^{\infty } \Big ( \sum _{j=1}^{k} V_{h}^{j+1} b_{k-j} \Big ) e^{-kz}&= {\tilde{b}} (z) e^{z} \Big ( {\widetilde{V}}_{h}(z)- V_{h}^{1} e^{-z} \Big ), \\ \sum _{k=1}^{\infty } \Big ( \sum _{j=1}^{k} V_{h}^{j} b_{k-j+1} \Big ) e^{-kz}&= {\tilde{b}} (z) e^{z} {\widetilde{V}}_{h}(z), \\ \sum _{k=1}^{\infty } \Big ( \sum _{j=1}^{k} V_{h}^{j} b_{k-j} \Big ) e^{-kz}&= {\tilde{b}} (z) {\widetilde{V}}_{h}(z), \\ \sum _{k=1}^{\infty } \Big ( \sum _{j=1}^{k} V_{h}^{j-1} b_{k-j+1} \Big ) e^{-kz}&= {\tilde{b}} (z) {\widetilde{V}}_{h}(z), \\ \sum _{k=1}^{\infty } \Big ( \sum _{j=1}^{k} V_{h}^{j-1} b_{k-j} \Big ) e^{-kz}&= {\tilde{b}} (z) e^{-z} {\widetilde{V}}_{h}(z). \end{aligned}$$
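Each of these identities is a Cauchy-product manipulation of the series involved, and each can be verified numerically with truncated sums. A sketch checking the identity for \(\sum _{k}(b_{k+1}-b_{k})e^{-kz}\) above and the first convolution identity, using an arbitrary decaying model sequence \(V^{n}\) with \(V^{0}=0\) as in the scheme (the sequence and truncation level are our choices):

```python
import math, cmath

alpha, z, K = 0.5, 1.0 + 0.3j, 200
b = [j**(2 - alpha) / math.gamma(3 - alpha) for j in range(K + 2)]
V = [0.0] + [0.6**n for n in range(1, K + 2)]   # model sequence with V^0 = 0

btilde = sum(b[j] * cmath.exp(-j * z) for j in range(K))
Vtilde = sum(V[n] * cmath.exp(-n * z) for n in range(K))

# sum_{k>=0} (b_{k+1} - b_k) e^{-kz} = b~(z)(e^z - 1), using b_0 = 0
lhs1 = sum((b[k + 1] - b[k]) * cmath.exp(-k * z) for k in range(K))
rhs1 = btilde * (cmath.exp(z) - 1)

# sum_{k>=1} (sum_{j=1}^{k} V^{j+1} b_{k-j+1}) e^{-kz}
#     = b~(z) e^{2z} (V~(z) - V^1 e^{-z})
lhs2 = sum(sum(V[j + 1] * b[k - j + 1] for j in range(1, k + 1)) * cmath.exp(-k * z)
           for k in range(1, K))
rhs2 = btilde * cmath.exp(2 * z) * (Vtilde - V[1] * cmath.exp(-z))
print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))
```

Both differences are at the level of the (negligible) truncation tails, since \(|e^{-Kz}| = e^{-K}\) here.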

Further we have

$$\begin{aligned} \sum _{k=0}^{\infty } \Big ( \frac{1}{2} \tau ^{\alpha } A_{h} \big ( V_{h}^{k} + V_{h}^{k+1} \big ) e^{-kz} \Big ) = \frac{1}{2} \tau ^{\alpha } A_{h} (e^{z} +1) {\widetilde{V}}_{h}(z). \end{aligned}$$

Thus we get

$$\begin{aligned}&V_{h}^{1} {\tilde{b}} (z) (e^{z}-1) + {\tilde{b}} (z) e^{2z} \big ( {\widetilde{V}}_{h}(z)- V_{h}^{1} e^{-z} \big ) - {\tilde{b}}(z) e^{z} \big ( {\widetilde{V}}_{h}(z)- V_{h}^{1} e^{-z} \big ) \\&\qquad -2 {\tilde{b}} (z) e^{z} {\widetilde{V}}_{h}(z) + 2 {\tilde{b}}(z) {\widetilde{V}}_{h}(z) + {\tilde{b}}(z) {\widetilde{V}}_{h}(z) - {\tilde{b}} (z) e^{-z} {\widetilde{V}}_{h}(z) + \frac{1}{2} \tau ^{\alpha } A_{h} (e^{z} +1) {\widetilde{V}}_{h}(z) \\&\quad = \sum _{k=0}^{\infty } \tau ^{\alpha } (-A_{h} u_{0h}) e^{-kz}, \end{aligned}$$

which implies that, with \(z_{\tau }\) defined by (21),

$$\begin{aligned} {\widetilde{V}}_{h}(z)&= \Big ( \psi (z) + \frac{\tau ^{\alpha }}{2}A_{h} ( e^{z} +1) \Big )^{-1} \sum _{k=0}^{\infty } \tau ^{\alpha } (-A_{h} u_{0h}) e^{-kz} \\&= \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} (-\tau ^{\alpha } A_{h}) u_{0h} \Big ( \frac{2}{e^{z}+1} \Big ) \Big (\sum _{j=0}^{\infty } e^{-jz} \Big ). \end{aligned}$$

With \(\zeta = e^{-z}\), we may write \({\widetilde{V}}_{h} (z) = \sum _{n=0}^{\infty } V_{h}^{n} e^{-nz} = \sum _{n=0}^{\infty } V_{h}^{n} \zeta ^{n}\). Recovering the coefficients of this power series by Cauchy's integral formula, we have, for \(\rho \) small enough, see Lubich et al. [25, (3.9)] and Jin et al. [11],

$$\begin{aligned} V_{h}^{n} = \frac{1}{2 \pi i} \int _{|\zeta | = \rho } \zeta ^{-n-1} \Big [ \sum _{k=0}^{\infty } V_{h}^{k} \zeta ^{k} \Big ] \, \mathrm {d} \zeta = \frac{1}{2 \pi i} \int _{\varGamma ^{0}} e^{nz} {\widetilde{V}}_{h} (z) \, \mathrm {d} z, \end{aligned}$$
(28)

where the contour \(\varGamma ^{0} := \{ z = - \ln (\rho ) + i y: \; | y | \le \pi \}\) is oriented with \(\mathfrak {I}z\) increasing, which corresponds to the counterclockwise orientation of the circle \(|\zeta | = \rho \). By deforming the contour \(\varGamma ^{0}\) to \(\varGamma _{\tau }\) defined by (24) and using the periodicity of the exponential function, we obtain (23).

Subtracting (23) from (19), we have

$$\begin{aligned} v_{h}(t_{n}) - V_{h}^{n} = I_{1} + I_{2}, \end{aligned}$$

where

$$\begin{aligned} I_{1}=&\frac{1}{2 \pi i} \int _{\varGamma \backslash \varGamma _{\tau }} e^{nz} \big (z^{\alpha } + \tau ^{\alpha }A_{h} \big )^{-1} (-\tau ^{\alpha } A_{h}) u_{0h} z^{-1} \, \mathrm {d}z, \\ I_{2}=&\frac{1}{2 \pi i} \int _{\varGamma _{\tau }} e^{nz} \Big [ \big (z^{\alpha } + \tau ^{\alpha }A_{h} \big )^{-1} (-\tau ^{\alpha } A_{h}) u_{0h} z^{-1} \\&- \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} (-\tau ^{\alpha } A_{h}) u_{0h} \Big ( \frac{2}{e^{z}+1} \Big ) \Big (\sum _{j=0}^{\infty } e^{-jz} \Big ) \Big ] \, \mathrm {d}z. \end{aligned}$$

For \(I_{1}\), we have, by the resolvent estimate (7), with some constant \(c>0\),

$$\begin{aligned} \Vert I_{1} \Vert&= \Big \Vert \frac{1}{ 2 \pi i} \int _{\varGamma \backslash \varGamma _{\tau }} e^{nz} \big (z^{\alpha } + \tau ^{\alpha }A_{h} \big )^{-1} (-\tau ^{\alpha } A_{h}) u_{0h} z^{-1} \, \mathrm {d}z \Big \Vert \\&\le C \int _{\varGamma \backslash \varGamma _{\tau }} | e^{nz}| \Vert u_{0} \Vert |z|^{-1} \, |\mathrm {d}z| \le C \int _{\pi }^{\infty } e^{-c n r} \Vert u_{0} \Vert r^{-1} \, \mathrm {d}r \\&\le C \int _{\pi }^{\infty } e^{-cnr} \, \mathrm {d}r \Vert u_{0} \Vert \le C n^{-1} e^{-cn \pi } \Vert u_{0} \Vert \\&\le C n^{-1 -\alpha } \big ( n^{\alpha } e^{-c n \pi } \big ) \Vert u_{0} \Vert \le C n^{-1 -\alpha } \Vert u_{0} \Vert \le C \tau ^{1+ \alpha } t_{n}^{-1 - \alpha } \Vert u_{0} \Vert , \end{aligned}$$

where we used the change of variable \( z= r e^{i \theta }\) in the second inequality above.

For \(I_{2}\), we have, by Lemma 2,

$$\begin{aligned} \Vert I_{2} \Vert =&\Big \Vert \frac{u_{0}}{2 \pi i} \int _{\varGamma _{\tau }} e^{nz} \Big [ \big (z^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} (-\tau ^{\alpha } A_{h}) z^{-1} \\&- \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} ( -\tau ^{\alpha } A_{h}) \Big ( \frac{2}{e^{z}+1}\Big ) \Big ( \sum _{j=0}^{\infty } e^{-jz} \Big ) \Big ] \, \mathrm {d}z \Big \Vert \\ \le&C \int _{\varGamma _{\tau }} |e^{nz}| \Big \Vert \big (z^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} ( - \tau ^{\alpha } A_{h}) z^{-1} \\&- \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} ( - \tau ^{\alpha } A_{h}) \frac{2}{e^{z}+1} \sum _{j=0}^{\infty } e^{-jz} \Big \Vert \, |\mathrm {d}z| \, \Vert u_{0} \Vert \\ \le&C \Vert u_{0} \Vert \int _{\varGamma _{\tau }} |e^{nz}| \big ( |z|^{\alpha } + |z| \big ) \, |\mathrm {d}z| \le C \Vert u_{0} \Vert \int _{0}^{\pi } e^{-c nr } ( r^{\alpha } + r ) \, \mathrm {d}r \\ \le&C \Vert u_{0} \Vert \int _{0}^{\pi } e^{-c nr } \big ( ( nr)^{\alpha } n^{-1 - \alpha } + (nr) n^{-2} \big ) \, \mathrm {d} (nr) \\ \le&C \big ( n^{-1-\alpha } + n^{-2} \big ) \Vert u_{0} \Vert \le C \big ( \tau ^{1+ \alpha } t_{n}^{-1-\alpha } + \tau ^{2} t_{n}^{-2} \big ) \Vert u_{0} \Vert . \end{aligned}$$

Combining these estimates completes the proof of Theorem 1. \(\square \)

Remark 2

In (28), we follow the approach in Lubich et al. [25, (3.9)] and Jin et al. [11] and obtain the formula for \(V_{h}^{n}\) by using the Laplace transform method. By Lemma 1, we may show that \(V_{h}^{n}\) is well defined since \(z_{\tau }^{\alpha } \in \varSigma _{\theta }\) for some \(\theta \in (\pi /2, \pi )\). An alternative approach for obtaining the formula of \(V_{h}^{n}\) is given in Li et al. [17], where the authors do not apply the Taylor expansion of the analytic function around the origin, and instead directly calculate the inverse Laplace transform \(V_{h}^{n} = \frac{1}{2 \pi i} \int _{a - i \pi }^{a+i \pi } e^{nz} {\widetilde{V}}_{h} (z) \, \mathrm {d} z\) for some \(a>0\), see [17, Lemma 3.6]. In particular, they obtain the stability estimate of \(V_{h}^{n}\), i.e., [17, Lemma 3.1], which implies that \({\widetilde{V}}_{h}(z) = \sum _{n=0}^{\infty } V_{h}^{n} e^{-nz}\) is well defined for \(z \in {\mathbb {C}}^{+}\).

Remark 3

In the above estimates for \(\Vert I_{1} \Vert \) and \( \Vert I_{2}\Vert \), the constants c and C depend on the angle \(\theta \) of the integral path \(\varGamma \). Moreover, the constant C tends to \(\infty \) as \(\theta \rightarrow \pi /2\). In our proof of Lemma 1, we require that \(\theta \) be sufficiently close to \(\pi /2\), so the constant in Theorem 1 could be very large. A similar remark also applies to the error estimates in Theorem 2 for the inhomogeneous case in the next section.

The Inhomogeneous Case with \(f \ne 0\) and \(u_{0}=0\)

In this subsection, we will consider the error estimates of the time stepping method (11), (12) with \(f \ne 0\) and \(u_{0}=0\). We thus consider the following inhomogeneous problem, with \(v_{h}(0)=0\),

$$\begin{aligned}&\, _{0}^{C}D_{t}^{\alpha } v_{h}(t) + A_{h} v_{h}(t) = f_{h}(t), \quad 0< t \le T. \end{aligned}$$
(29)

The abstract time stepping method (11), (12) for solving (29) is now reduced to, with \(V_{h}^{0}=0\),

$$\begin{aligned}&V_{h}^{1} (b_{k+1}-b_{k}) + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k}+ V_{h}^{k+1} \big ) = \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t, \; \text{ for } \; k=0, \end{aligned}$$
(30)
$$\begin{aligned}&V_{h}^{1} ( b_{k+1} - b_{k}) + \sum _{j=1}^{k} \Big ( V_{h}^{j+1}- 2 V_{h}^{j} + V_{h}^{j-1} \Big ) (b_{k-j+1}- b_{k-j}) \nonumber \\&\quad + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k} + V_{h}^{k+1} \big ) = \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t, \; \text{ for } \; k=1, 2, \dots . \end{aligned}$$
(31)
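As in the homogeneous case, the scheme can be exercised on the scalar model \(A_{h} = \lambda > 0\). For the constant source \(f \equiv 1\) with \(u_{0} = 0\), the exact solution of (29) is \(v(t) = \lambda ^{-1}\big (1 - E_{\alpha }(-\lambda t^{\alpha })\big )\), and since \(f^{\prime } = 0\) only the first term of the error bound in Theorem 2 below is active, giving the rate \(O(\tau ^{1+\alpha })\). A Python sketch (our own transcription of (30), (31)):

```python
import math

def ml(alpha, x, K=80):
    # truncated Mittag-Leffler series E_alpha(x); adequate for |x| <= 1
    return sum(x**k / math.gamma(alpha * k + 1) for k in range(K))

def cg_inhomogeneous(alpha, lam, T, N):
    # scheme (30), (31) for A_h = lam with f = 1, V^0 = 0; here the cell
    # average tau^{alpha-1} int_{t_k}^{t_{k+1}} f dt equals tau^alpha exactly
    tau = T / N
    b = [j**(2 - alpha) / math.gamma(3 - alpha) for j in range(N + 1)]
    V = [0.0] * (N + 1)
    denom = b[1] + 0.5 * lam * tau**alpha      # coefficient of the unknown V^{k+1}
    for k in range(N):
        rhs = tau**alpha                       # tau^{alpha-1} * (tau * 1)
        if k >= 1:
            s = V[1] * (b[k + 1] - b[k])       # history part of the convolution sum
            for j in range(1, k):
                s += (V[j + 1] - 2 * V[j] + V[j - 1]) * (b[k - j + 1] - b[k - j])
            rhs -= s + (-2 * V[k] + V[k - 1]) * b[1] + 0.5 * lam * tau**alpha * V[k]
        V[k + 1] = rhs / denom
    return V

alpha, lam, T = 0.5, 1.0, 1.0
exact = (1 - ml(alpha, -lam * T**alpha)) / lam   # v(T) for f = 1, u0 = 0
errs = [abs(cg_inhomogeneous(alpha, lam, T, N)[-1] - exact) for N in (16, 64)]
print(errs)
```

Although v behaves like \(t^{\alpha }/\varGamma (1+\alpha )\) near \(t=0\), no graded mesh or correction is used, and the error at \(t=T\) still decays much faster than first order under refinement.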

We then have the following theorem:

Theorem 2

Let \(v_{h}(t_{n})\) and \(V_{h}^{n}, \; n=0, 1, 2, \dots , \) be the solutions of (29) and (30), (31), respectively. Assume that \(\int _{0}^{t} (t-s)^{-1+\epsilon } \Vert f^{\prime } (s) \Vert \, \mathrm {d}s < \infty \) for any \(t>0\) and \(\epsilon >0\). Then we have

$$\begin{aligned} \Vert V_{h}^{n}- v_{h}(t_{n}) \Vert \le C \tau ^{1+ \alpha } t_{n}^{-1 - \alpha } \Vert f(0) \Vert + C \tau ^{1+ \alpha - \epsilon } \int _{0}^{t_{n}} (t_{n}-s)^{-1+\epsilon } \Vert f^{\prime } (s) \Vert \, \mathrm {d}s. \end{aligned}$$

Proof

Step 1: Find the exact solution of (29). Taking the Laplace transform in (29), we have

$$\begin{aligned} {\hat{v}}_{h}(z) = (z^{\alpha } +A_{h})^{-1} {\hat{f}}_{h} (z), \end{aligned}$$

which implies that, by using the inverse Laplace transform, with \(n=1, 2, \dots ,\)

$$\begin{aligned} v_{h}(t_{n}) =\int _{0}^{t_{n}} E_{h}(t_{n}-t) f_{h}(t) \, \mathrm {d}t, \end{aligned}$$
(32)

where, with \(\varGamma \) defined by (18),

$$\begin{aligned} E_{h}(t_{n})= \frac{\tau ^{\alpha -1}}{2 \pi i} \int _{\varGamma } e^{nz} \big ( z^{\alpha } + \tau ^{\alpha }A_{h} \big )^{-1} \, \mathrm {d}z. \end{aligned}$$
(33)

Step 2: Find the approximate solutions \(V_{h}^{n}, n=1, 2, \dots \) of (30), (31). Denote, with \(z_{\tau }^{\alpha }\) and \(\varGamma _{\tau }\) defined by (21) and (24), respectively,

$$\begin{aligned} E_{h}^{j}= \frac{\tau ^{\alpha -1}}{2 \pi i} \int _{\varGamma _{\tau }} e^{jz} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z, \quad j=1, 2, \dots . \end{aligned}$$

We shall show that the solutions \(V_{h}^{n}, n=1, 2, \dots \) of (30), (31) have the following form:

$$\begin{aligned} V_{h}^{n} =\int _{0}^{t_{n}} {\tilde{E}}_{h} (t_{n}-t) f_{h}(t) \, \mathrm {d}t, \end{aligned}$$
(34)

where

$$\begin{aligned} {\tilde{E}}_{h} (t) = {\left\{ \begin{array}{ll} &{} E_{h}^{1}, \quad t_{0}< t \le t_{1}, \\ &{} E_{h}^{2}, \quad t_{1}< t \le t_{2}, \\ &{} \; \vdots \qquad \qquad \vdots \\ &{} E_{h}^{n}, \quad t_{n-1} < t \le t_{n}. \end{array}\right. } \end{aligned}$$
(35)

In fact, multiplying the \((k+1)\)th equation in (30), (31) by \(e^{-kz}, \; k=0, 1, 2, \dots \), we have

$$\begin{aligned}&V_{h}^{1} (b_{k+1}-b_{k}) e^{-kz} + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k}+ V_{h}^{k+1} \big ) e^{-kz} \nonumber \\&\quad = \Big ( \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t\Big ) e^{-kz}, \; \text{ for } \; k=0, \end{aligned}$$
(36)
$$\begin{aligned}&V_{h}^{1} ( b_{k+1} - b_{k}) e^{-kz} + \sum _{j=1}^{k} \Big ( V_{h}^{j+1}- 2 V_{h}^{j} + V_{h}^{j-1} \Big ) (b_{k-j+1}- b_{k-j}) e^{-kz} \nonumber \\&\quad + \frac{1}{2} \tau ^{\alpha } A_{h} \big (V_{h}^{k} + V_{h}^{k+1} \big ) e^{-kz} = \Big ( \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t\Big ) e^{-kz}, \; \text{ for } \; k=1, 2, \dots . \end{aligned}$$
(37)

Summing the equations in (36), (37) from \(k=0\) to \(k=\infty \), we get

$$\begin{aligned}&\sum _{k=0}^{\infty } V_{h}^{1} ( b_{k+1} - b_{k}) e^{-k z} + \sum _{k=1}^{\infty } \Big [ \sum _{j=1}^{k} \big ( V_{h}^{j+1} - 2 V_{h}^{j} + V_{h}^{j-1} \big ) ( b_{k-j+1}- b_{k-j}) \Big ] e^{-kz} \\&\quad + \sum _{k=0}^{\infty } \frac{1}{2} \tau ^{\alpha } A_{h} ( V_{h}^{k} + V_{h}^{k+1}) e^{-kz} = \sum _{k=0}^{\infty } \Big ( \tau ^{\alpha -1} \int _{t_{k}}^{t_{k+1}} f_{h}(t) \, \mathrm {d}t\Big ) e^{-kz}. \end{aligned}$$
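The double sum collapses by the Cauchy product of two series; spelled out, with the shorthand \(a_{j} = V_{h}^{j+1}- 2 V_{h}^{j} + V_{h}^{j-1}\) and \(c_{m} = b_{m+1}-b_{m}\) (our own notation for this step),

$$\begin{aligned} \sum _{k=1}^{\infty } \Big [ \sum _{j=1}^{k} a_{j} c_{k-j} \Big ] e^{-kz} = \Big ( \sum _{j=1}^{\infty } a_{j} e^{-jz} \Big ) \Big ( \sum _{m=0}^{\infty } c_{m} e^{-mz} \Big ), \end{aligned}$$

which is the step that converts the summed equations into an algebraic equation for the discrete transform \({\widetilde{V}}_{h}(z)\).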

Using an argument similar to that used for the discrete solutions of the homogeneous problem in the previous subsection, we obtain

$$\begin{aligned} {\widetilde{V}}_{h}(z)&= \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \sum _{j=0}^{\infty } \Big [ \tau ^{\alpha -1} \Big ( \int _{t_{j}}^{t_{j+1}} f_{h}(t) \, \mathrm {d}t \Big ) e^{-jz} \Big ]. \end{aligned}$$

By using the inverse discrete Laplace transform, we get

$$\begin{aligned} V_{h}^{n}&= \frac{1}{2 \pi i} \int _{\varGamma _{\tau }} e^{nz} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \sum _{j=0}^{\infty } \Big [ \tau ^{\alpha -1} \Big ( \int _{t_{j}}^{t_{j+1}} f_{h}(t) \, \mathrm {d}t \Big ) e^{-jz} \Big ] \, \mathrm {d} z \nonumber \\&= \sum _{j=0}^{\infty } \Big [ \int _{t_{j}}^{t_{j+1}} f_{h}(t) \, \mathrm {d}t \Big ] \Big [ \frac{\tau ^{\alpha -1}}{2 \pi i} \int _{\varGamma _{\tau }} e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z \Big ]. \end{aligned}$$
(38)

We shall prove that, for \(j \ge n\) with any fixed \(n=1, 2, \dots \),

$$\begin{aligned} \int _{\varGamma _{\tau }} e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z =0. \end{aligned}$$
(39)

Assuming (39) holds at the moment, we then have, by (38),

$$\begin{aligned} V_{h}^{n}&= \sum _{j=0}^{n-1} \Big [ \int _{t_{j}}^{t_{j+1}} f_{h}(t) \, \mathrm {d}t \Big ] \Big [ \frac{\tau ^{\alpha -1}}{2 \pi i} \int _{\varGamma _{\tau }} e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z \Big ] \\&= \sum _{j=0}^{n-1} \int _{t_{j}}^{t_{j+1}} E_{h}^{n-j} f_{h}(t) \, \mathrm {d}t = \int _{0}^{t_{n}} {\tilde{E}}_{h} (t_{n}-t) f_{h}(t) \, \mathrm {d}t, \end{aligned}$$

which shows (34).

It remains to prove (39). We shall follow the idea of the proof of [17, Lemma 3.6]. By Cauchy's integral theorem, for any real number \( a >0\), we have

$$\begin{aligned}&\int _{\varGamma _{\tau }} e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z \nonumber \\&\quad = \int _{a-i \pi }^{a+i \pi } e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z. \end{aligned}$$
(40)

To estimate the integral, we need to consider the bound of \(\Big \Vert \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \Big \Vert \). We have, by the resolvent estimate (7),

$$\begin{aligned} \Big \Vert \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \Big \Vert \le C |z_{\tau }^{\alpha }|^{-1} \Big | \frac{2}{e^{z}+1} \Big | \le C | \psi (z)|^{-1}, \quad \forall \, z \in \varSigma _{\theta }, \end{aligned}$$
(41)

where \(z_{\tau }^{\alpha }\) and \(\psi \) are defined in (21) and (22), respectively.

Note that, by (22)

$$\begin{aligned} \psi (z)&= e^{-z} (e^{z} -1)^{3} {\tilde{b}} (z) = e^{-z} (e^{z} -1)^{3} \Big ( \sum _{j=1}^{\infty } \frac{j^{2-\alpha }}{\varGamma (3- \alpha )} e^{-jz} \Big ) \\&= \frac{1}{\varGamma (3- \alpha )} e^{-z} (e^{z} -1)^{3} Li_{\alpha -2} (e^{-z}), \end{aligned}$$

where \(Li_{\alpha -2}(z)\) denotes the polylogarithm function. By the singular expansion of the function \(Li_{p}(e^{-z}), \, p \in {\mathbb {C}}, \, p \ne 1, 2, \dots \), we have, see Jin et al. [11, Lemma 3.2],

$$\begin{aligned} Li_{p}(e^{-z}) \sim \varGamma (1- p) z^{p-1} + \sum _{k=0}^{\infty } (-1)^{k} \zeta (p-k) \frac{z^{k}}{k!} \quad \text{ as } \; z \rightarrow 0, \end{aligned}$$

where \(\zeta \) is the Riemann zeta function. Thus we have, with some suitable constants \(c_{0}, c_{1}, \dots \),

$$\begin{aligned} \psi (z)&= \frac{1}{\varGamma (3- \alpha )} e^{-z} (e^{z} -1)^{3} Li_{\alpha -2} (e^{-z}) \\&= e^{-z} (e^{z} -1)^{3} \big ( z^{\alpha -3} + c_{0} + c_{1} z + \dots \big ) \\&= z^{\alpha } + \frac{1}{2} z^{\alpha +1} + \dots , \quad \text{ as } \, z \rightarrow 0, \end{aligned}$$

which implies that \( \lim _{z \rightarrow 0} \frac{z^{\alpha }}{\psi (z)} = 1\).
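This limiting behaviour is easy to probe numerically. Below is a small sketch (the function name and truncation level are our own choices) that evaluates \(\psi (z)\) through the truncated polylogarithm series and checks that \(z^{\alpha }/\psi (z)\) approaches 1 as \(z \rightarrow 0^{+}\):

```python
import math

def psi(z, alpha, terms=20000):
    """psi(z) = e^{-z} (e^z - 1)^3 Li_{alpha-2}(e^{-z}) / Gamma(3 - alpha),
    with the polylogarithm evaluated by brute-force series truncation."""
    x = math.exp(-z)
    li = sum(j ** (2.0 - alpha) * x ** j for j in range(1, terms + 1))
    return math.exp(-z) * (math.exp(z) - 1.0) ** 3 * li / math.gamma(3.0 - alpha)

# The expansion psi(z) = z^alpha + z^(alpha+1)/2 + ... predicts that
# z^alpha / psi(z) -> 1 as z -> 0 from the right.
alpha = 0.5
ratios = [z ** alpha / psi(z, alpha) for z in (0.1, 0.01)]
```

For \(z=0.1\) and \(z=0.01\) the computed ratios are already within a few percent of 1, consistent with the expansion above.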

Further we observe that \(\lim _{z \rightarrow \infty } \frac{z^{\alpha }}{\psi (z)} =0\). Hence we get

$$\begin{aligned} \Big | \frac{z^{\alpha }}{\psi (z)} \Big | \le C, \quad \forall \, z \in \varSigma _{\theta }, \end{aligned}$$

which implies that, by (41),

$$\begin{aligned} \Big \Vert \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \Big \Vert \le C | \psi (z)|^{-1} \le C |z|^{-\alpha }, \quad \forall \, z \in \varSigma _{\theta }. \end{aligned}$$
(42)

Therefore, by (40),

$$\begin{aligned}&\Big \Vert \int _{\varGamma _{\tau }} e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z \Big \Vert \nonumber \\&\quad = \Big \Vert \int _{a-i \pi }^{a+i \pi } e^{(n-j)z} \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \, \mathrm {d}z \Big \Vert \nonumber \\&\quad \le \int _{a-i \pi }^{a+i \pi } \Big | e^{(n-j)z} \Big | \Big \Vert \big (z_{\tau }^{\alpha } + \tau ^{\alpha } A_{h} \big )^{-1} \Big ( \frac{2}{e^{z}+1} \Big ) \Big \Vert \, |\mathrm {d}z| \nonumber \\&\quad \le C \int _{a-i \pi }^{a+i \pi } \Big | e^{(n-j)z} \Big | |z|^{-\alpha } \, |\mathrm {d}z| \le C \int _{-\pi }^{\pi } e^{(n-j)a} a^{-\alpha } \, \mathrm {d}y \le C e^{(n-j)a} a^{-\alpha }. \end{aligned}$$
(43)

Note that \( \lim _{a \rightarrow \infty } e^{(n-j)a} a^{-\alpha } =0\) for \(j \ge n\) with any fixed \(n=1, 2, \dots \), which implies that (39) holds.

We next consider the error estimates \(v_{h}(t_{n})- V_{h}^{n}\). Subtracting (34) from (32), we have

$$\begin{aligned}&v_{h}(t_{n}) - V_{h}^{n} = \int _{0}^{t_{n}} (E_{h} - {\tilde{E}}_{h} ) (t_{n}-t) f_{h}(t) \, \mathrm {d}t \\&\quad = \int _{0}^{t_{n}} (E_{h}- {\tilde{E}}_{h})(t_{n}-t) \Big [ f_{h}(0) + \int _{0}^{t} f_{h}^{\prime } (s) \, \mathrm {d}s \Big ] \, \mathrm {d}t\\&\quad = f_{h}(0) \Big [ \int _{0}^{t_{n}} (E_{h}- {\tilde{E}}_{h} ) (s) \, \mathrm {d}s \Big ] + \int _{0}^{t_{n}} \Big [ \int _{0}^{t_{n}-t} ( E_{h}- {\tilde{E}}_{h} ) (s) \, \mathrm {d}s \Big ] f_{h}^{\prime } (t) \, \mathrm {d}t\\&\quad = f_{h}(0) {\mathcal {E}}_{h} (t_{n}) + \int _{0}^{t_{n}} {\mathcal {E}}_{h} (t_{n}-t) f_{h}^{\prime } (t) \, \mathrm {d}t, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {E}}_{h} (t) = \int _{0}^{t} \big ( E_{h} - {\tilde{E}}_{h}) (s) \, \mathrm {d}s, \quad 0 < t \le T. \end{aligned}$$
(44)

By Lemma 3, we have, with any small \(\epsilon >0\),

$$\begin{aligned} \Vert v_{h}(t_{n}) - V_{h}^{n} \Vert&\le \Vert f_{h}(0) \Vert \Vert {\mathcal {E}}_{h} (t_{n}) \Vert + \int _{0}^{t_{n}} \Vert {\mathcal {E}}_{h} (t_{n}-t) \Vert \Vert f_{h}^{\prime } (t) \Vert \, \mathrm {d}t \nonumber \\&\le C \tau ^{1+ \alpha } t_{n}^{-1} \Vert f_{h}(0) \Vert + C \int _{0}^{t_{n}} \tau ^{1+ \alpha - \epsilon } (t_{n}-t)^{-1 + \epsilon } \Vert f_{h}^{\prime } (t) \Vert \, \mathrm {d}t. \end{aligned}$$
(45)

These estimates together complete the proof of Theorem 2. \(\square \)

Numerical Simulations

In this section, we present numerical examples for both fractional ordinary differential equations and subdiffusion problems, solved with the time stepping method (11), (12).

Fractional Ordinary Differential Equation

In this subsection, we shall consider the numerical simulations for solving the following fractional ordinary differential equation, with \(0< \alpha <1\),

$$\begin{aligned}&\, _{0}^{C}D^{\alpha }_{t} y(t) + \lambda y(t) = g(t), \quad 0 < t \le T, \end{aligned}$$
(46)
$$\begin{aligned}&y(0) = y_{0}, \end{aligned}$$
(47)

where \(g: {\mathbb {R}} \rightarrow {\mathbb {R}}\) is a suitable function, \(y_{0} \in {\mathbb {R}}\) is the initial value and \(\lambda >0\).

Let \( 0 = t_{0}< t_{1}< \dots < t_{N}=T\) be a uniform partition of [0, T] with step size \( \tau \). Let \(Y^{j} \approx y(t_{j}), \, j=0, 1, \dots , N\) denote the approximations of \(y(t_{j})\). The time stepping method (12) for solving (46), (47) can be written as

$$\begin{aligned}&\sum _{j=0}^{n} w_{j} Y^{n-j} + \frac{\tau ^{\alpha } \lambda }{2} \big ( Y^{n-1} + Y^{n} \big ) = \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} g(t) \, \mathrm {d}t, \quad n=1, 2, \dots , N, \end{aligned}$$
(48)
$$\begin{aligned}&Y^{0} = y_{0}, \end{aligned}$$
(49)

where the weights \(w_{j}, j=0, 1, 2, \dots , N\) are defined as follows.

For \(n=1\), the time stepping method (48), (49) is reduced to

$$\begin{aligned} w_{0} Y^{1} + w_{1} Y^{0} + \frac{\tau ^{\alpha } \lambda }{2} (Y^{0} + Y^{1} ) = \tau ^{\alpha -1} \int _{t_{0}}^{t_{1}} g(t) \, \mathrm {d}t, \end{aligned}$$

where, with \(b_{j}, j=0, 1\) defined by (13),

$$\begin{aligned}&w_{0} = - b_{0} + b_{1}, \\&w_{1} = b_{0} - b_{1}. \end{aligned}$$

For \(n=2\), the time stepping method (48), (49) is reduced to

$$\begin{aligned} w_{0} Y^{2} + w_{1} Y^{1} + w_{2} Y^{0} + \frac{\tau ^{\alpha } \lambda }{2} (Y^{1} + Y^{2} ) = \tau ^{\alpha -1} \int _{t_{1}}^{t_{2}} g(t) \, \mathrm {d}t, \end{aligned}$$

where, with \(b_{j}, j=0, 1, 2\) defined by (13),

$$\begin{aligned}&w_{0} = - b_{0} + b_{1}, \\&w_{1} = 2 b_{0} - 3 b_{1} + b_{2}, \\&w_{2} = -b_{0} +2 b_{1} - b_{2}. \end{aligned}$$

For \(n=3\), the time stepping method (48), (49) is reduced to

$$\begin{aligned} w_{0} Y^{3} + w_{1} Y^{2} + w_{2} Y^{1} + w_{3} Y^{0} + \frac{\tau ^{\alpha } \lambda }{2} (Y^{2} + Y^{3} ) = \tau ^{\alpha -1} \int _{t_{2}}^{t_{3}} g(t) \, \mathrm {d}t, \end{aligned}$$

where, with \(b_{j}, j=0, 1, 2, 3\) defined by (13),

$$\begin{aligned}&w_{0} = - b_{0} + b_{1}, \\&w_{1} = 2 b_{0} - 3 b_{1} + b_{2}, \\&w_{2} = -b_{0} +3 b_{1} - 3 b_{2} + b_{3}, \\&w_{3} = -b_{1} + 2 b_{2} - b_{3}. \end{aligned}$$

For \(n \ge 4\), the time stepping method (48), (49) is reduced to

$$\begin{aligned} w_{0} Y^{n} + w_{1} Y^{n-1} + \dots + w_{n} Y^{0} + \frac{\tau ^{\alpha } \lambda }{2} (Y^{n-1} + Y^{n} ) = \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} g(t) \, \mathrm {d}t, \end{aligned}$$

where, with \(b_{j}, j=0, 1, 2, \dots , n\) defined by (13),

$$\begin{aligned}&w_{0} = - b_{0} + b_{1}, \\&w_{1} = 2 b_{0} - 3 b_{1} + b_{2}, \\&w_{l} = -b_{l-2} +3 b_{l-1} - 3 b_{l} + b_{l+1}, \quad l=2, 3, \dots , n-1, \\&w_{n} = -b_{n-2} + 2 b_{n-1} - b_{n}. \end{aligned}$$
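These recursions are easy to implement. The sketch below builds the weights exactly as listed above and advances the scheme (48). Note that the definition (13) of \(b_{j}\) lies outside this excerpt, so the power-law form used here, \(b_{j} = j^{2-\alpha }/\varGamma (3-\alpha )\), is inferred from the generating function \({\tilde{b}}(z)\) appearing in the proof of Theorem 2 and should be checked against (13):

```python
import math

def b_coeff(j, alpha):
    # Assumed form of b_j, consistent with the generating function
    # b~(z) = sum_{j>=1} j^{2-alpha} e^{-jz} / Gamma(3-alpha) in the proof;
    # the actual definition (13) lies outside this excerpt.
    return j ** (2.0 - alpha) / math.gamma(3.0 - alpha)

def weights(n, alpha):
    """Convolution weights w_0, ..., w_n of scheme (48) as listed in the text."""
    b = [b_coeff(j, alpha) for j in range(n + 2)]
    if n == 1:
        return [-b[0] + b[1], b[0] - b[1]]
    w = [-b[0] + b[1], 2 * b[0] - 3 * b[1] + b[2]]
    for l in range(2, n):                      # w_l, l = 2, ..., n-1
        w.append(-b[l - 2] + 3 * b[l - 1] - 3 * b[l] + b[l + 1])
    w.append(-b[n - 2] + 2 * b[n - 1] - b[n])  # w_n
    return w

def step(Y, w, tau, alpha, lam, rhs):
    """One step of (48): solve for Y^n given the history Y = [Y^0, ..., Y^{n-1}]
    and the averaged source rhs = tau^{alpha-1} * integral of g over (t_{n-1}, t_n)."""
    n = len(Y)
    hist = sum(w[j] * Y[n - j] for j in range(1, n + 1))
    return (rhs - hist - 0.5 * tau ** alpha * lam * Y[-1]) / (w[0] + 0.5 * tau ** alpha * lam)
```

Each step costs \(O(n)\) operations since the whole history is revisited; fast-convolution variants would reduce this.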

Example 1

Our first example is a homogeneous problem. Choose \(g(t) =0\) in (46) and the initial value \(y_{0}=1\) in (47). In this case the problem has the following exact solution, with \(\alpha \in (0, 1)\),

$$\begin{aligned} y(t) = E_{\alpha , 1} (- \lambda t^{\alpha }) y_{0}= E_{\alpha , 1} (- \lambda t^{\alpha }), \end{aligned}$$

where \(E_{\alpha , 1} (z)\) denotes the Mittag–Leffler function.
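For readers without access to mlf.m, the Mittag–Leffler function can be evaluated for the moderate arguments arising here directly from its power series \(E_{\alpha , \beta }(z) = \sum _{k=0}^{\infty } z^{k}/\varGamma (\alpha k + \beta )\); a minimal sketch (the truncation level is a heuristic choice, not suitable for large \(|z|\)):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=100):
    """Truncated power series for E_{alpha,beta}(z); adequate for moderate |z|.
    (mlf.m uses more robust algorithms for large arguments.)"""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

def y_exact(t, alpha, lam=1.0):
    """Exact solution of Example 1: y(t) = E_{alpha,1}(-lam * t^alpha)."""
    return mittag_leffler(-lam * t ** alpha, alpha)
```

For \(\alpha =1\) the series reduces to \(e^{z}\), which gives a convenient sanity check.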

We choose \(T=2\) and \(\lambda =1\). The exact solution is evaluated with the MATLAB function mlf.m. We compute the approximate solutions with the step sizes \(\tau =1/20, 1/40, 1/80, 1/160\). In Table 1, we observe that the experimentally determined convergence order of the time stepping method (48), (49) is about \(O(\tau ^{2})\), which is better than the theoretical convergence order \(O(\tau ^{1+\alpha })\) for small \(\alpha \in (0, 1)\). In Table 1, we also compare the errors and convergence orders of the time stepping method (48), (49) with the popular L1 scheme [11] and the modified L1 scheme [42]. It is well known that the L1 scheme has only \(O(\tau )\) convergence due to the singularity of the solution of the fractional differential equation [11]. After correcting the starting step, the modified L1 scheme attains the optimal convergence order \(O(\tau ^{2- \alpha })\) [42]. We observe that the time stepping method (48), (49) indeed captures the singularity of the problem more accurately than the L1 and modified L1 schemes. We remark that although the modified L1 scheme has the convergence order \(O(\tau ^{2- \alpha }), \alpha \in (0, 1)\), observing this order for small \(\alpha \) requires measuring the error at sufficiently large \(T\). For example, we indeed observe the convergence order \(O(\tau ^{2-\alpha })\) of the modified L1 scheme for \(\alpha =0.3\) when we choose \(T \ge 10\). In Table 1, the convergence order of the modified L1 scheme is not close to \(O(\tau ^{2-\alpha })\) for \(\alpha =0.3\) because \(T=2\) is not large enough. Since the purpose of this paper is to demonstrate the convergence orders of the time stepping method (48), (49), we do not study the numerical behavior of the modified L1 scheme further.

Table 1 Time convergence orders in Example 1 at \(T=2\)
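The experimentally determined convergence orders reported in the tables are computed in the usual way from errors on successive step sizes; for completeness, a short helper (our own, not from the paper):

```python
import math

def observed_orders(errors, ratio=2.0):
    """Empirical orders log(e_k / e_{k+1}) / log(ratio) for errors measured
    with step sizes that decrease by the factor `ratio` (here tau -> tau/2)."""
    return [math.log(errors[k] / errors[k + 1]) / math.log(ratio)
            for k in range(len(errors) - 1)]
```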

Example 2

Our second example is an inhomogeneous problem with initial value \(y_{0}=0\). We choose \(\lambda =1\) and assume that the exact solution of (46) is \(y(t) =t^{\beta }, \, \beta >0\) and

$$\begin{aligned} g(t) = \frac{\varGamma (\beta +1)}{\varGamma (\beta +1 - \alpha )} t^{\beta - \alpha } + t^{\beta }. \end{aligned}$$
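This source term rests on the classical identity \( {}_{0}^{C}D_{t}^{\alpha } t^{\beta } = \frac{\varGamma (\beta +1)}{\varGamma (\beta +1-\alpha )} t^{\beta -\alpha }\), which reduces, after the substitution \(s = tu\), to the Beta-function identity \(\int _{0}^{1} (1-u)^{-\alpha } u^{\beta -1} \, \mathrm {d}u = \varGamma (\beta )\varGamma (1-\alpha )/\varGamma (\beta +1-\alpha )\). A quick numerical check (the quadrature and the singularity-removing substitution \(1-u = v^{1/(1-\alpha )}\) are our own choices, valid for \(\beta \ge 1\)):

```python
import math

def beta_integral(alpha, beta, n=20000):
    """Midpoint-rule approximation of B(beta, 1-alpha) = int_0^1 (1-u)^(-alpha) u^(beta-1) du.
    Substituting 1 - u = v^(1/(1-alpha)) turns it into the regular integral
    (1/(1-alpha)) * int_0^1 (1 - v^(1/(1-alpha)))^(beta-1) dv; assumes beta >= 1
    so the transformed integrand stays bounded."""
    p = 1.0 / (1.0 - alpha)
    h = 1.0 / n
    return p * h * sum((1.0 - ((k + 0.5) * h) ** p) ** (beta - 1.0) for k in range(n))

def caputo_power(t, alpha, beta):
    """Caputo derivative of t^beta, computed from the Beta integral above."""
    return beta * t ** (beta - alpha) * beta_integral(alpha, beta) / math.gamma(1.0 - alpha)
```

Comparing `caputo_power` against \(\varGamma (\beta +1) t^{\beta -\alpha }/\varGamma (\beta +1-\alpha )\) confirms the formula for \(g\) above.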

We choose \(T=1\) and compute the approximate solutions with the step sizes \(\tau =1/10, 1/20, 1/40, 1/80, 1/160\). In Table 2, we show the experimentally determined orders of convergence with \(\beta =1.1\) for the time stepping method (48), (49). We again observe convergence orders of about \(O(\tau ^{2})\), which is better than the theoretical convergence order \(O(\tau ^{1+\alpha })\) for small \(\alpha \in (0, 1)\).

Table 2 Time convergence orders in Example 2 at \(T=1\)

Example 3

The final example for the fractional ordinary differential equation is an inhomogeneous problem with nonzero initial value. We choose \(\lambda =1\) and assume that the exact solution of (46) is \(y(t) =t^{\beta } +1, \, \beta >0\). The initial value is \(y_{0} =1\) and

$$\begin{aligned} g(t)= \frac{\varGamma (\beta +1)}{\varGamma (\beta +1 - \alpha )} t^{\beta - \alpha } + t^{\beta } +1. \end{aligned}$$

We choose \(T=1\) and compute the approximate solutions with the step sizes \(\tau =1/10, 1/20, 1/40, 1/80, 1/160\). In Table 3, we show the experimentally determined orders of convergence with \(\beta =1.1\) for the time stepping method (48), (49). We again observe convergence orders of about \(O(\tau ^{2})\), which is better than the theoretical convergence order \(O(\tau ^{1+\alpha })\) for small \(\alpha \in (0, 1)\).

Table 3 Time convergence orders in Example 3 at \(T=1\)

Subdiffusion Problem

Now we turn to the numerical examples for solving the following subdiffusion problem, with \( 0< \alpha <1\),

$$\begin{aligned}&\, _{0}^{C}D^{\alpha }_{t} u(t, x) - \frac{\partial ^2 u(t, x)}{\partial x^2} = f(t, x), \quad 0 \le t \le T, \; 0< x< 1, \end{aligned}$$
(50)
$$\begin{aligned}&u(0, x) = u_{0}(x), \end{aligned}$$
(51)
$$\begin{aligned}&u(t, 0) = u(t, 1) =0. \end{aligned}$$
(52)

Let \(0 = t_{0}< t_{1}< \dots < t_{N}=T\) be a partition of the time interval [0, T] and \(\tau \) the time step size. Let \( 0= x_{0}< x_{1}< \dots < x_{M}=1\) be a partition of the space interval [0, 1] and h the space step size. Let \(S_{h} \subset H_{0}^{1}(0, 1)\) be the piecewise linear finite element space defined by

$$\begin{aligned} S_{h} =&\{ \chi \in C[0, 1]: \; \chi \; \text{ is } \text{ piecewise } \text{ linear } \text{ on } \text{ the } \text{ partition } \text{ of } \; [0,1] \\&\, \text{ and } \; \chi (0) = \chi (1) =0 \}. \end{aligned}$$

The finite element method of (50)–(52) is to find \(u_{h} (t) \in S_{h}\) such that

$$\begin{aligned}&\Big ( \, _{0}^{C}D^{\alpha }_{t} u_{h}(t), \chi \Big ) + ( \nabla u_{h}(t), \nabla \chi ) = ( f_{h}(t), \chi ), \quad \forall \; \chi \in S_{h}, \end{aligned}$$
(53)
$$\begin{aligned}&u_{h}(0) = P_{h} u_{0}, \end{aligned}$$
(54)

where \(P_{h}: L_{2}(0,1) \rightarrow S_{h}\) denotes the \(L_{2}\) projection operator.

Let \(U^{n} \approx u_{h} (t_{n}), n=0, 1, \dots , N\) be the approximation of \(u_{h} (t_{n})\). We define the following time discretization scheme for \(U^{n} \in S_{h}\): for \(n=1, 2, \dots , N\) and all \(\chi \in S_{h}\),

$$\begin{aligned}&\Big ( \sum _{j=0}^{n} w_{j} U^{n-j}, \chi \Big ) + \tau ^{\alpha } \Big ( \nabla \frac{U^{n-1} + U^{n}}{2}, \nabla \chi \Big ) = \Big ( \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} f_{h}(t) \, \mathrm {d}t, \chi \Big ), \end{aligned}$$
(55)
$$\begin{aligned}&U^{0} = P_{h} u_{0}, \end{aligned}$$
(56)

where \(w_{j}, j=0, 1, 2, \dots , N\) are defined as in Sect. 4.1.

Let \(\varphi _{1}(x), \varphi _{2}(x), \dots , \varphi _{M-1} (x)\) be the linear finite element basis functions defined by, with \(j=1, 2, \dots , M-1\),

$$\begin{aligned} \varphi _{j} (x) = \left\{ \begin{array}{llll} &{} \frac{x-x_{j-1}}{x_{j}- x_{j-1}}, \quad x_{j-1}< x< x_{j}, \\ &{} \frac{x- x_{j+1}}{x_{j}- x_{j+1}}, \quad x_{j}< x < x_{j+1}, \\ &{} 0, \quad \text{ otherwise }. \end{array}\right. \end{aligned}$$

To find the solution \(U^{n} \in S_{h}, \; n=0, 1, \dots , N\), we assume that

$$\begin{aligned} U^{n} = \sum _{k=1}^{M-1} \alpha _{k}^{n} \varphi _{k}, \end{aligned}$$

for some coefficients \(\alpha _{k}^{n},\, k=1, 2, \dots , M-1\). Choosing \(\chi = \varphi _{l}, \, l=1, 2, \dots , M-1\) in (55), we have

$$\begin{aligned}&\sum _{j=0}^{n} w_{j} \Big ( \sum _{k=1}^{M-1} \alpha _{k}^{n-j} (\varphi _{k}, \varphi _{l}) \Big ) + \tau ^{\alpha } \sum _{k=1}^{M-1} \frac{\alpha _{k}^{n-1} + \alpha _{k}^{n}}{2} ( \nabla \varphi _{k}, \nabla \varphi _{l}) \nonumber \\&\quad = \Big ( \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} f_{h}(t) \, \mathrm {d}t, \varphi _{l} \Big ), \quad l=1, 2, \dots , M-1, \end{aligned}$$
(57)
$$\begin{aligned}&U^{0} = P_{h} u_{0} = \sum _{k=1}^{M-1} \alpha _{k}^{0} \varphi _{k}. \end{aligned}$$
(58)

Denote

$$\begin{aligned} \mathbf {\alpha }^{n} = \left( \begin{array}{cccc} \alpha _{1}^n \\ \alpha _{2}^n \\ \vdots \\ \alpha _{M-1}^n \end{array} \right) _{(M-1)\times 1}, \quad \mathbf {u}^{0} = \left( \begin{array}{cccc} (u_{0}, \varphi _{1}) \\ (u_{0}, \varphi _{2}) \\ \vdots \\ (u_{0}, \varphi _{M-1}) \end{array} \right) _{(M-1)\times 1}, \end{aligned}$$

and

$$\begin{aligned} \mathbf {F}^{n} = \left( \begin{array}{cccc} \Big ( \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} f_{h}(t) \, \mathrm {d}t, \varphi _{1} \Big ) \\ \Big ( \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} f_{h}(t) \, \mathrm {d}t, \varphi _{2} \Big ) \\ \vdots \\ \Big ( \tau ^{\alpha -1} \int _{t_{n-1}}^{t_{n}} f_{h}(t) \, \mathrm {d}t, \varphi _{M-1} \Big ) \end{array} \right) _{(M-1)\times 1}. \end{aligned}$$

Further we denote the mass and stiffness matrices by

$$\begin{aligned} \mathbf{M} = \Big ( (\varphi _{k}, \varphi _{l}) \Big )_{k, l=1}^{M-1} = h \left( \begin{array}{cccc} \frac{2}{3} &{} \frac{1}{6} &{} &{} 0\\ \frac{1}{6} &{} \ddots &{} \ddots &{} \\ &{} \ddots &{} \ddots &{} \frac{1}{6} \\ 0 &{} &{} \frac{1}{6} &{} \frac{2}{3} \end{array} \right) _{(M-1)\times (M-1)}, \end{aligned}$$

and

$$\begin{aligned} \mathbf{S} = \Big ( (\nabla \varphi _{k}, \nabla \varphi _{l}) \Big )_{k, l=1}^{M-1} = \frac{1}{h} \left( \begin{array}{cccc} 2 &{} -1 &{} &{} 0\\ -1 &{} \ddots &{} \ddots &{} \\ &{} \ddots &{} \ddots &{} -1 \\ 0 &{} &{} -1 &{} 2 \end{array} \right) _{(M-1)\times (M-1)}, \end{aligned}$$

respectively. Then the scheme (57), (58) can be written in the following matrix form

$$\begin{aligned}&\sum _{j=0}^{n} w_{j} \mathbf {\alpha }^{n-j} + \tau ^{\alpha } (\mathbf{M} ^{-1} \mathbf{S} ) \frac{\mathbf {\alpha }^{n-1}+ \mathbf {\alpha }^{n}}{2} =\mathbf{M} ^{-1} \mathbf {F}^{n}, \end{aligned}$$
(59)
$$\begin{aligned}&\mathbf {\alpha }^{0} =\mathbf{M} ^{-1} \mathbf {u}^{0}, \end{aligned}$$
(60)

which can be solved by using MATLAB programs similar to those used for (48), (49).
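The assembly underlying (59), (60) can be sketched in plain Python (dense storage on a uniform mesh; a production code would use sparse matrices, and MATLAB's backslash would replace explicit inverses):

```python
def p1_matrices(M):
    """Mass and stiffness matrices for P1 elements on a uniform mesh of [0, 1]
    with M subintervals (h = 1/M) and zero boundary conditions; size (M-1)x(M-1).
    Standard entries: mass diag 2h/3, off-diag h/6; stiffness diag 2/h, off-diag -1/h."""
    h = 1.0 / M
    n = M - 1
    mass = [[0.0] * n for _ in range(n)]
    stiff = [[0.0] * n for _ in range(n)]
    for i in range(n):
        mass[i][i] = 2.0 * h / 3.0
        stiff[i][i] = 2.0 / h
        if i + 1 < n:
            mass[i][i + 1] = mass[i + 1][i] = h / 6.0
            stiff[i][i + 1] = stiff[i + 1][i] = -1.0 / h
    return mass, stiff
```

Factoring out \(h\) from the mass matrix and \(1/h\) from the stiffness matrix reproduces the tridiagonal patterns displayed above.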

Example 4

In this example, we consider a homogeneous subdiffusion problem. We choose \(f(t, x)=0\) and the initial value \(u_{0}(x) = x(1-x)\) in (50)–(52). In this case, the exact solution is

$$\begin{aligned} u(t) = E_{\alpha , 1} (-t^{\alpha } A) u_{0}, \end{aligned}$$

where \(E_{\alpha ,1}(z)\) is the Mittag–Leffler function and \(A= -\frac{\partial ^2}{\partial x^2}\) with \(D(A) = H_{0}^{1}(0, 1) \cap H^2 (0, 1)\).

In our numerical simulation, we let \(T=2\). Choosing the space step size \(h= 2^{-10}\) and the time step sizes \(\tau =1/10, 1/20, 1/40, 1/80, 1/160\), we obtain the corresponding approximate solutions. The exact solution is calculated by using the MATLAB function mlf.m.

In Table 4, we observe that the experimentally determined convergence orders are better than the theoretical convergence order \(O(\tau ^{1+\alpha })\); for \(\alpha > 0.5\), the table shows a second order convergence rate. We also compare the errors and convergence orders of the method (48), (49) with the popular L1 scheme [11] and the modified L1 scheme [42]. We see that the time stepping method (48), (49) captures the singularity of the problem more accurately than the L1 and modified L1 schemes.

Table 4 Time convergence orders in Example 4 at \(T=2\)

Example 5

Consider an inhomogeneous subdiffusion problem. Assume that the exact solution of (50)–(52) is \( u(t, x) =t^{\beta } x(1-x), \, \beta >0\) and

$$\begin{aligned} f(t, x) =x(1-x) \Big ( \frac{\varGamma (\beta +1)}{\varGamma (\beta +1 - \alpha )} t^{\beta - \alpha } \Big ) + 2 t^{\beta }. \end{aligned}$$

The initial value \(u_{0}(x) = 0\).

We choose \(T=2\) and \(\beta = \alpha \), which implies that \(u(\cdot , x) \in C[0, T]\) but \(u(\cdot , x) \notin C^{1}[0, T]\) for any fixed x. We choose the space step size \(h= 2^{-10}\) and the time step sizes \(\tau =1/10, 1/20, 1/40, 1/80, 1/160\) to obtain the approximate solutions. In Table 5, we observe that the experimentally determined convergence orders are higher than the theoretical convergence order \(O(\tau ^{1+\alpha })\).

Table 5 Time convergence orders in Example 5 at \(T=2\)

Example 6

Consider an inhomogeneous subdiffusion problem with nonzero initial value. Assume that the exact solution of (50)–(52) is \( u(t, x) =(t^{\beta }+1) x(1-x), \, \beta >0\) and

$$\begin{aligned} f(t, x) =x(1-x) \Big ( \frac{\varGamma (\beta +1)}{\varGamma (\beta +1 - \alpha )} t^{\beta - \alpha } \Big ) + 2 t^{\beta } +2. \end{aligned}$$

The initial value \(u_{0}(x) = x(1-x)\).

The experimentally determined convergence orders depend on the smoothness of the solution with respect to the time variable t. In our numerical simulation, we choose \(\beta \in (1, 2)\), which implies that \(u(\cdot , x) \in C^{1}[0, T]\) but \(u(\cdot , x) \notin C^{2}[0, T]\) for any fixed x. We choose \(T=2\), the space step size \(h= 2^{-10}\), and the time step sizes \(\tau =1/10, 1/20, 1/40, 1/80, 1/160\) to obtain the approximate solutions. In Table 6, we show the experimentally determined orders of convergence with \( \beta =1.1\); the convergence orders are consistent with our theoretical results.

Table 6 Time convergence orders in Example 6 at \(T=2\)

Example 7

Consider an inhomogeneous subdiffusion problem in two dimensions. With \(x= (x_1, x_2), \; x_{1} \in [0, 1], \, x_{2} \in [0, 1]\), assume that the exact solution of (50)–(52) is \( u(t, x) =(t^{\beta }+1) x_{1} (1-x_{1}) x_{2} (1-x_{2}), \, \beta >0\) and

$$\begin{aligned} f(t, x) =x_{1} (1-x_{1}) x_{2} (1-x_{2}) \Big ( \frac{\varGamma (\beta +1)}{\varGamma (\beta +1 - \alpha )} t^{\beta - \alpha } \Big ) + (2 t^{\beta } +2) \big ( x_{1} (1-x_{1})+ x_{2}(1-x_{2}) \big ). \end{aligned}$$

The initial value \(u_{0}(x) = x_{1} (1-x_{1}) x_{2} (1-x_{2})\).

We use the same parameters and time and space step sizes as in Example 6, except that in this example we need space partitions in both the \(x_{1}\) and \(x_{2}\) directions. In Table 7, we show the experimentally determined convergence orders with \(\beta =1.1\); the numerical results are consistent with the theoretical results for this example.

Table 7 Time convergence orders in Example 7 at \(T=2\)