1 Introduction

Fractional calculus, as a generalization of classical calculus, has intrigued many famous mathematicians since the end of the 17th century. During the last four decades, many scholars have contributed to the theory of fractional derivatives and integrals and to their applications. For more detailed information on the historical background, we refer the interested reader to the books [6, 21, 22, 34,35,36, 38] and [20]. As an application of fractional calculus, differential equations containing fractional derivatives in space, time, or both have become very important in many areas. In particular, a large number of interesting and sometimes surprising fractional models have been proposed in recent years. Here we mention just a few typical applications: the theory of Hankel transforms [15], financial models [40, 42], elasticity theory [5], medical applications [23, 39], geology [8, 27], physics [7, 10, 33] and many more.

As for ordinary differential equations, the investigation of numerical methods for time-fractional differential equations (tfDEs) has developed rapidly. This paper considers numerical approaches to tfDEs of the form

$$\begin{aligned} {^{C}}D^{\alpha }u(t)=f(t,u(t)),\quad t\in (0, T] \end{aligned}$$
(1.1)

with initial condition \(u(0)=u_{0}\), where the operator \({^{C}}D^{\alpha }\) denotes the Caputo fractional derivative of order \(\alpha \), whose definition will be given in Definition 2.1 in the next section. As shown in [12], if \(f(t, u(t))\) is continuous and satisfies a Lipschitz condition with respect to u, then equation (1.1) possesses a unique solution in C[0, T]. In this case, (1.1) combined with the initial condition is equivalent to the following Volterra-type integral equation:

$$\begin{aligned} u(t)=u_{0}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-\xi )^{\alpha -1}f(\xi , u(\xi ))\mathrm {d}\xi ,\quad t\in (0, T]. \end{aligned}$$
(1.2)

With respect to numerical approximations for (1.2), two general approaches, product integration methods and fractional linear multistep methods, have been widely discussed. In both cases, a general discrete form of (1.2) is written as

$$\begin{aligned} u_{n}=u_{0}+(\Delta t)^{\alpha }\sum _{j=0}^{n}\omega _{n-j}f(t_{j}, u_{j})+(\Delta t)^{\alpha }\sum _{j=0}^{m}w_{n,j}f(t_{j}, u_{j}), \quad n\ge k \end{aligned}$$
(1.3)

with fixed \(m\in {\mathbb {N}}\). Fractional linear multistep methods were originally proposed in [30] in the mid-1980s. This type of method constructs the convolution quadrature weights \(\{\omega _{j}\}_{j=0}^{\infty }\) satisfying

$$\begin{aligned} \sum _{j=0}^{\infty }\omega _{j}\xi ^{j}= \left( \frac{\sigma (1/\xi )}{\rho (1/\xi )}\right) ^{\alpha }, \end{aligned}$$

where \((\rho , \sigma )\) denotes a classical implicit linear multistep formula. For the motivation behind this idea we refer to [28]. The accuracy and stability properties of this type of method are discussed in [29] and [31]; they largely inherit those of the corresponding multistep methods. Another, more straightforward, approach to generate the weights \(\{\omega _{j}\}\) and \(\{w_{n,j}\}\) is based on product integration, i.e., the integrand \(f(\xi , u(\xi ))\) is replaced by piecewise interpolation polynomials whose fractional integrals of order \(\alpha \) serve as approximations of the integral in (1.2). For the accuracy and efficiency of these methods applied to Volterra-type integral equations with irregular kernels, we refer to [9, 11, 13, 26] and [6, 24]. In addition, [18] applies exponential integrators to fractional-order problems, and generalized Adams methods and so-called m-step methods are utilized in [1, 2].
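As a concrete illustration of the convolution quadrature idea (a minimal sketch only, not one of the schemes analysed below), the backward Euler pair \(\rho (\zeta )=\zeta -1\), \(\sigma (\zeta )=\zeta \) yields \(\left( \sigma (1/\xi )/\rho (1/\xi )\right) ^{\alpha }=(1-\xi )^{-\alpha }\), so the weights \(\{\omega _{j}\}\) are the Taylor coefficients of \((1-\xi )^{-\alpha }\) and obey a one-term recurrence; they can be used in (1.3) with \(w_{n,j}=0\).

```python
import numpy as np
from math import gamma

def cq_weights_backward_euler(alpha, N):
    """Taylor coefficients of (1 - xi)^(-alpha): the convolution quadrature
    weights generated by the backward Euler multistep pair."""
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = (1.0 + (alpha - 1.0) / j) * w[j - 1]
    return w

# sanity check on f = 1, whose fractional integral of order alpha is t^alpha / Gamma(alpha + 1)
alpha, T, N = 0.6, 1.0, 400
dt = T / N
w = cq_weights_backward_euler(alpha, N)
f = np.ones(N + 1)
approx = dt**alpha * np.convolve(w, f)[:N + 1]     # pure convolution part of (1.3)
exact = (np.arange(N + 1) * dt)**alpha / gamma(alpha + 1)
print(abs(approx[-1] - exact[-1]))                 # first-order accurate in dt
```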

Within the framework of product integration, some new numerical approximations of the Caputo fractional derivative of order \(\alpha \in (0,1)\), namely the L1 method [25], the L1-2 method [17], the L2-\(1_\sigma \) method [4] and the method of [32], were recently proposed and applied to time-fractional differential equations. These methods are based on piecewise linear or quadratic interpolation. In this paper, we generalize the approach by raising the degree of the piecewise polynomial to \(3\le k\le 6\) to approximate functions of suitable smoothness; in this way, higher orders of accuracy can be obtained. We establish local truncation error and global error estimates of the numerical schemes for (1.1) in detail. In addition, we study the numerical stability of the L1 method, the L1-2 method, the method in [32] and the higher-order methods proposed in this paper. We apply the technique of [31] to the investigation of the stability regions of this class of methods. Further, we give a rigorous proof that the L1 method, the L1-2 method and the method in [32] are \(A(\frac{\pi }{2})\)-stable. Numerical experiments confirm our theoretical analyses and show that this class of methods is \(A(\theta )\)-stable uniformly for \(0<\alpha <1\), and that for some specific \(\alpha \), A-stability can even be obtained.

The paper is organized as follows. Section 2 introduces numerical approximations of the Caputo fractional derivative of order \(\alpha \in (0,1)\), and applies them to the discretization of problem (1.1). The local truncation errors of the proposed methods are discussed. Sections 3 and 4 respectively treat the stability and convergence of the discrete methods. In Sect. 5, numerical experiments confirm our theoretical considerations with respect to order of convergence and stability restrictions.

2 Approximations of Caputo Fractional Derivatives Using Continuous Piecewise Polynomials

We first introduce fractional derivatives in the Caputo sense:

Definition 2.1

([12]) Let \(\alpha >0\) and \(n=\lceil \alpha \rceil \). The \(\alpha \)-th order Caputo fractional derivative of a function u(t) on [0, T] is defined by

$$\begin{aligned} {^{C}}D^{\alpha }u(t)=\frac{1}{\Gamma (n-\alpha )} \int _{0}^{t}\frac{u^{(n)}(\xi )}{(t-\xi )^{\alpha -n+1}}\mathrm {d}\xi \end{aligned}$$
(2.1)

whenever \(u^{(n)}(t)\in L^{1}[0, T]\). In particular, the Caputo fractional derivative of order \(\alpha \in (0,1)\) is defined by

$$\begin{aligned} {^C}D^{\alpha } u(t)=\frac{1}{\Gamma (1-\alpha )} \int _{0}^{t}(t-\xi )^{-\alpha }u^{(1)}(\xi )\mathrm {d}\xi \end{aligned}$$
(2.2)

whenever \(u^{(1)}(t)\in L^{1}[0, T]\).
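As a quick illustration of (2.2), take \(u(t)=t\); then

$$\begin{aligned} {^C}D^{\alpha } t=\frac{1}{\Gamma (1-\alpha )}\int _{0}^{t}(t-\xi )^{-\alpha }\mathrm {d}\xi =\frac{t^{1-\alpha }}{(1-\alpha )\Gamma (1-\alpha )}=\frac{t^{1-\alpha }}{\Gamma (2-\alpha )}, \end{aligned}$$

and, more generally, \({^C}D^{\alpha } t^{m}=\frac{\Gamma (m+1)}{\Gamma (m+1-\alpha )}t^{m-\alpha }\) for \(m\in {\mathbb {N}}^{+}\), while the Caputo derivative of a constant vanishes.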

Next, we will derive a class of numerical approximations of the Caputo fractional derivative of order \(\alpha \in (0,1)\) by constructing a series of continuous piecewise polynomials. The main idea is as follows.

Let \({\mathscr {I}}=[0, T]\) be an interval and the \(M+1\) nodes \(\{t_{i}\}_{i=0}^{M}\) define a partition

$$\begin{aligned} 0=t_{0}<t_{1}\cdots<t_{M-1}<t_{M}=T. \end{aligned}$$
(2.3)

Assume that \(p_{j, q}^{k}(t)\) are polynomials of degree \(k\ge 1\) supported on the subintervals \({\mathscr {I}}_{j}=[t_{j-1}, t_{j}]\). Their coefficients are uniquely determined by the following \(k+1\) conditions

$$\begin{aligned} p_{j, q}^{k}(t_{n})=u(t_{n}),\quad n=j+q-1, j+q-2,\ldots , j+q-k-1. \end{aligned}$$
(2.4)

Here the index q records the number of shifts of the \(k+1\) interpolating nodes \(\{t_{n}\}_{n=j-1-k}^{j-1}\), and the sign of q indicates the direction of the shift. Then we have

$$\begin{aligned} p_{j, q}^{k}(t)=\sum _{n=j+q-k-1}^{j+q-1} u(t_{n})\prod _{\begin{array}{c} m=j+q-k-1 \\ m\ne n \end{array}}^{j+q-1}\frac{t-t_{m}}{t_{n}-t_{m}},\quad t\in {\mathscr {I}}_{j}. \end{aligned}$$
(2.5)

If the partition (2.3) is equidistant, i.e., \(t_{n}=n\Delta t,~0\le n\le M\) and \(\Delta t=\frac{T}{M}\), then (2.5) can be written as

$$\begin{aligned} p_{j, q}^{k}(t)=\sum _{n=0}^{k}\frac{\nabla ^{n}u(t_{j+q-1})}{n!(\Delta t)^{n}}\prod _{l=0}^{n-1}(t-t_{j+q-1-l}),\quad t\in {\mathscr {I}}_{j}. \end{aligned}$$
(2.6)

For convenience of notation, letting \(t=t_{j-1}+s\Delta t\), we get

$$\begin{aligned} p_{j,q}^{k}(t)=\sum _{r=0}^{k}\left( {\begin{array}{c}s-q+r-1\\ r\end{array}}\right) \nabla ^{r}u(t_{j+q-1}), \quad 0<s<1, \end{aligned}$$
(2.7)

where \(\left( {\begin{array}{c}s-q+r-1\\ r\end{array}}\right) \) denotes a binomial coefficient, and the r-th order backward difference operators \(\nabla ^{r}\) satisfy

$$\begin{aligned} \nabla ^{0} u(t_{i})=u(t_{i}),\quad \nabla ^{r}u(t_{i})=\nabla ^{r-1}u(t_{i})-\nabla ^{r-1}u(t_{i-1}), \quad r\ge 1. \end{aligned}$$
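A short Python sketch (with the hypothetical helper name p_kjq) of how (2.7) can be evaluated on a uniform grid: the backward differences \(\nabla ^{r}u(t_{j+q-1})\) are formed once and the binomial factors are updated incrementally in s; it assumes \(j+q-1\ge k\) so that all interpolation nodes lie in the grid.

```python
import numpy as np

def p_kjq(u_vals, k, j, q, s):
    """Evaluate the interpolant (2.7) at t = t_{j-1} + s*dt, 0 <= s <= 1,
    from grid values u_vals[n] = u(t_n); assumes k <= j+q-1 < len(u_vals)."""
    # backward differences nabla^r u(t_{j+q-1}), r = 0, ..., k
    diffs = [np.diff(u_vals[:j + q], n=r)[-1] for r in range(k + 1)]
    val, binom = 0.0, 1.0            # binom carries C(s-q+r-1, r), starting from r = 0
    for r in range(k + 1):
        val += binom * diffs[r]
        binom *= (s - q + r) / (r + 1)   # C(s-q+r, r+1) = C(s-q+r-1, r) * (s-q+r)/(r+1)
    return val
```

Evaluating at \(s=0\) and \(s=1\) reproduces \(u(t_{j-1})\) and \(u(t_{j})\) whenever \(1\le q\le k\), which is the continuity requirement (2.11) discussed in Remark 1 below.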

Let

$$\begin{aligned} C_{p}^{k}({\mathscr {I}})=\{v(t)\in C({\mathscr {I}}):~~ v(t)=\sum _{l=0}^{k}a_{j,l}t^{l} \text { on } {\mathscr {I}}_{j}\} \end{aligned}$$

be the space of continuous piecewise polynomials of degree at most k. On the uniform grid, we construct a class of polynomials of the form

$$\begin{aligned} P_{i}^{k}(t)=\sum _{j=1}^{k-i}p_{j,k-j}^{k-1}(t) +\sum _{j=k}^{n}p_{j-i+1,i}^{k}(t)+\sum _{j=n-i+2}^{n}p_{j,n+1-j}^{k}(t), \end{aligned}$$
(2.8)

where \(1\le i\le k\le 6\) and \(t\in (t_{n-1}, t_{n}]\) for \(1\le n\le M\). Here \(\sum _{j=1}^{k-i}p_{j,k-j}^{k-1}(t)=0\) and \(\sum _{j=n-i+2}^{n}p_{j,n+1-j}^{k}(t)=0\) are understood if \(k-i<1\) and \(n-i+2>n\), respectively. The \(P_{i}^{k}(t)\) are then considered as approximations of the function u(t) in (2.2) in the space \(C_{p}^{k}({\mathscr {I}})\).

Correspondingly, we propose the operator

$$\begin{aligned} D_{k,i}^{\alpha }u(t)=\frac{1}{\Gamma (1-\alpha )} \int _{0}^{t}(t-\xi )^{-\alpha }\frac{\mathrm {d} P_{i}^{k}}{\mathrm {d} \xi }\mathrm {d}\xi \end{aligned}$$
(2.9)

for \(t\in {\mathscr {I}}\) as an approximation to (2.2). For \(t=t_{n}\), we rewrite (2.9) as

$$\begin{aligned} \begin{aligned} D_{k,i}^{\alpha }u_{n}&=\frac{(\Delta t)^{-\alpha }}{\Gamma (1-\alpha )}\sum _{j=1}^{n}\int _{0}^{1} (n-j+1-s)^{-\alpha }\frac{\mathrm {d} P_{i}^{k}(t_{j-1}+s\Delta t)}{\mathrm {d}s}\mathrm {d}s \\&=(\Delta t)^{-\alpha }\sum _{j=0}^{k-1}w_{n,j}^{(k,i)}u_{j}+(\Delta t)^{-\alpha }\sum _{j=0}^{n}\omega _{n-j}^{(k,i)}u_{j}, \end{aligned} \end{aligned}$$
(2.10)

where \(u_{n}:=u(t_{n})\).
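For the lowest-order case \(k=i=1\) (the L1 method recalled in Remark 1 below), \(D_{1,1}^{\alpha }u_{n}\) is simply a weighted sum of the increments \(u_{j}-u_{j-1}\). A minimal Python sketch, tested on \(u(t)=t^{2}\), whose Caputo derivative is \(2t^{2-\alpha }/\Gamma (3-\alpha )\):

```python
import numpy as np
from math import gamma

def caputo_L1(u_vals, alpha, dt):
    """D_{1,1}^alpha u_n on a uniform grid (the k = i = 1 case of (2.10))."""
    n = len(u_vals) - 1
    m = np.arange(n)
    b = ((m + 1)**(1 - alpha) - m**(1 - alpha)) / gamma(2 - alpha)  # L1 kernel weights
    du = np.diff(u_vals)                        # u_j - u_{j-1}, j = 1, ..., n
    return dt**(-alpha) * np.dot(b[::-1], du)   # sum_{j=1}^{n} b_{n-j} (u_j - u_{j-1})

alpha, T, N = 0.4, 1.0, 1024
t = np.linspace(0.0, T, N + 1)
err = abs(caputo_L1(t**2, alpha, T / N) - 2 * T**(2 - alpha) / gamma(3 - alpha))
print(err)   # decreases like (T/N)^(2 - alpha) as N grows, cf. Theorem 2.1 below
```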

Remark 1

The construction of \(P_{i}^{k}(t)\) in (2.8) mainly depends on the continuity requirement on the interval \({\mathscr {I}}\), i.e., the interpolation conditions

$$\begin{aligned} p_{j,q}^{k}(t_{n})=u(t_{n}),\quad n=j-1, j \end{aligned}$$
(2.11)

should be satisfied. This means that on each \({\mathscr {I}}_{j}\), the conditions \(j+q-1\ge j\) and \(j+q-k-1\le j-1\) in (2.4) should be satisfied, which yields \(1\le q\le k\). Therefore, for \(k=1\), there is only one piecewise polynomial, denoted by \(P_{1}^{1}(t)\), in the space \(C_{p}^{1}({\mathscr {I}})\). Moreover, (2.8) yields

$$\begin{aligned} P_{1}^{1}(t)=\sum _{j=1}^{n}p_{j,1}^{1}(t). \end{aligned}$$

It is easy to see that \(P_{1}^{1}(t)\) coincides with the known L1 method proposed in [25]. For \(k=2\), we can choose \(p_{j,1}^{1}(t)\), \(p_{j,1}^{2}(t)\) and \(p_{j,2}^{2}(t)\) on each \({\mathscr {I}}_{j}\) such that (2.11) holds. To preserve the convolution property as much as possible, here we provide two cases

$$\begin{aligned} P_{1}^{2}(t)=p_{1,1}^{1}(t)+\sum _{j=2}^{n}p_{j,1}^{2}(t) \quad \text {and}\quad P_{2}^{2}(t)=\sum _{j=1}^{n-1}p_{j,2}^{2}(t)+p_{n,1}^{2}(t), \end{aligned}$$
(2.12)

where \(t\in (t_{n-1}, t_{n}]\). The two cases in (2.12) coincide with the approximate methods discussed in [17] and [32], respectively. In addition, as presented in (2.8), we restrict our further discussion to the case \(i\le k\), because under this condition the corresponding discrete operators \(D_{k,i}^{\alpha }u_{n}\) in (2.10) can be computed with the fewest starting values.

In the following, we present explicit representations of the weight coefficients \(\{w_{n,j}^{(k,i)}\}\) and \(\{\omega _{j}^{(k,i)}\}\) for \(1\le i\le k\le 3\) as examples. Note that in the case \(1\le i\le k\le 2\), the weight coefficients have been derived in [17, 25, 32] in a similar way; here we rewrite them in integral form for the convenience of further theoretical analysis. First, we define a class of integrals of the form

$$\begin{aligned} I_{n,q}^{r}=\left\{ \begin{array}{ll} \frac{1}{\Gamma (1-\alpha )}\int \limits _{0}^{1}(n+1-s)^{-\alpha } \mathrm {d}\left( {\begin{array}{c}s-q+r-1\\ r\end{array}}\right) ,&{}\quad n\ge 0, \\ 0, &{}\quad n<0, \\ \end{array} \right. \end{aligned}$$
(2.13)

where \(q, r\in {\mathbb {N}}^{+}\) and \(n\in {\mathbb {Z}}\). If we denote

$$\begin{aligned} \begin{aligned} I_{n}&:=I_{n,q}^{1},\quad \forall ~q=1,2,\ldots , \\ \nabla ^{k}I_{n,q}^{r}&=\nabla ^{k-1}I_{n,q}^{r}-\nabla ^{k-1}I_{n-1,q}^{r}, \quad \forall ~k\in {\mathbb {N}}^{+}, \end{aligned} \end{aligned}$$

then the weight coefficients can be expressed as

$$\begin{aligned} \left\{ \begin{aligned} (k,i)=(1,1):&w_{m,0}=-I_{m},\quad m\ge 1, \quad \omega _{n}=\nabla I_{n},\quad n\ge 0, \\ (k,i)=(2,1):&w_{m,0}=2I_{m-1,1}^{2}-I_{m,1}^{2}-I_{m}, \quad w_{m,1}=-I_{m-1,1}^{2},m\ge 2, \\&\omega _{n}=\nabla I_{n}+\nabla ^{2} I_{n,1}^{2},\quad n\ge 0, \\ (k,i)=(2,2):&w_{m,0}=-\nabla I_{m+1,1}^{2}+I_{m,2}^{2}, \quad w_{m,1}=-I_{m,1}^{2},\quad m\ge 2, \\&\omega _{0}=I_{0}+I_{1}+I_{0,1}^{2}+I_{1,2}^{2},\quad \omega _{1}=\nabla I_{2}-I_{0}+I_{2,2}^{2}-2I_{0,1}^{2}-2I_{1,2}^{2}, \\&\omega _{2}=\nabla I_{3}+\nabla ^{2}I_{3,2}^{2}+I_{0,1}^{2}, \quad \omega _{n}=\nabla I_{n+1}+\nabla ^{2} I_{n+1,2}^{2},n\ge 3, \\ \end{aligned} \right. \end{aligned}$$

and

\((k,i)=(3,1):\) :
$$\begin{aligned} \left\{ \begin{aligned}&w_{m,0}=-\nabla I_{m}-I_{m,1}^{2}+2I_{m-1,1}^{2}+I_{m-1,2}^{2}-I_{m,1}^{3}+3I_{m-1,1}^{3}-3I_{m-2,1}^{3},\\&w_{m,1}=-2I_{m-1}-2I_{m-1,2}^{2}-I_{m-1,1}^{2}-I_{m-1,1}^{3}+3I_{m-2,1}^{3},\\&w_{m,2}=I_{m-1}+I_{m-1,2}^{2}-I_{m-2,1}^{3},\quad m\ge 3, \\&\omega _{n}=\nabla I_{n}+\nabla ^{2}I_{n,1}^{2}+\nabla ^{3}I_{n,1}^{3}, \quad n\ge 0, \\ \end{aligned} \right. \end{aligned}$$
(2.14)
\((k,i)=(3,2):\) :
$$\begin{aligned} \left\{ \begin{aligned}&w_{m,0}=-\nabla I_{m+1}-I_{m+1,2}^{2}+2I_{m,2}^{2}-I_{m+1,2}^{3}+3I_{m,2}^{3}-3I_{m-1,2}^{3},\\&w_{m,1}=-I_{m}-I_{m,2}^{2}-I_{m,2}^{3}+3I_{m-1,2}^{3},\\&w_{m,2}=-I_{m-1,2}^{3},\quad m\ge 3, \\&\omega _{0}=I_{0}+I_{1}+I_{1,2}^{2}+I_{0,1}^{2}+I_{1,2}^{3}+I_{0,1}^{3}, \\&\omega _{1}=\nabla I_{2}-I_{0}+I_{2,2}^{2}-2I_{1,2}^{2}-2I_{0,1}^{2}+I_{2,2}^{3}-3I_{1,2}^{3}-3I_{0,1}^{3}, \\&\omega _{2}=\nabla I_{3}+\nabla ^{2}I_{3,2}^{2}+I_{0,1}^{2}+I_{3,2}^{3}-3I_{2,2}^{3}+3I_{1,2}^{3}+3I_{0,1}^{3}, \\&\omega _{3}=\nabla I_{4}+\nabla ^{2}I_{4,2}^{2}+\nabla ^{3}I_{4,2}^{3}-I_{0,1}^{3}, \\&\omega _{n}=\nabla I_{n+1}+\nabla ^{2}I_{n+1,2}^{2}+\nabla ^{3}I_{n+1,2}^{3}, \quad n\ge 4,\\ \end{aligned} \right. \end{aligned}$$
(2.15)
\((k,i)=(3,3)\)::
$$\begin{aligned} \left\{ \begin{aligned}&w_{m,0}=-\nabla I_{m+2}-\nabla ^{2} I_{m+2,3}^{2}-I_{m+2,3}^{3}+3I_{m+1,3}^{3}-3I_{m,3}^{3},\\&w_{m,1}=-\nabla I_{m+1}-I_{m+1,3}^{2}+2I_{m,3}^{2}-I_{m+1,3}^{3}+3I_{m,3}^{3},\\&w_{m,2}=-I_{m}-I_{m,3}^{2}-I_{m,3}^{3},\quad m\ge 3, \\&\omega _{0}=I_{0}+I_{1}+I_{2}+I_{0,1}^{2}+I_{1,2}^{2}+I_{2,3}^{2}+I_{0,1}^{3}+I_{1,2}^{3}+I_{2,3}^{3}, \\&\omega _{1}=\nabla I_{3}-I_{0}-I_{1}+I_{3,3}^{2}-2I_{2,3}^{2}-2I_{1,2}^{2}-2I_{0,1}^{2}+I_{3,3}^{3}-3I_{2,3}^{3}-3I_{1,2}^{3}-3I_{0,1}^{3},\\&\omega _{2}=\nabla I_{4}+\nabla ^{2}I_{4,3}^{2}+I_{1,2}^{2}+I_{0,1}^{2}+I_{4,3}^{3}-3I_{3,3}^{3}+3I_{2,3}^{3}+3I_{0,1}^{3}+3I_{1,2}^{3}, \\&\omega _{3}=\nabla I_{5}+\nabla ^{2} I_{5,3}^{2}+\nabla ^{3} I_{5,3}^{3}-I_{1,2}^{3}-I_{0,1}^{3},\\&\omega _{n}=\nabla I_{n+2}+\nabla ^{2}I_{n+2,3}^{2}+\nabla ^{3}I_{n+2,3}^{3}, \quad n\ge 4.\\ \end{aligned} \right. \nonumber \\ \end{aligned}$$
(2.16)

It can be observed that when \(\alpha \rightarrow 1\), the operator \(D_{k,i}^{\alpha }u_{n}\) in (2.10) recovers the k-step BDF method.
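For \((k,i)=(1,1)\), the integrals in (2.13) have the closed form \(I_{n}=\left( (n+1)^{1-\alpha }-n^{1-\alpha }\right) /\Gamma (2-\alpha )\), so the weights listed above are easy to tabulate. The following sketch also illustrates the BDF limit just mentioned: as \(\alpha \rightarrow 1\), the convolution weights approach the backward Euler (BDF1) coefficients \(1, -1, 0, \ldots \)

```python
import numpy as np
from math import gamma

def weights_11(alpha, N):
    """omega_n = nabla I_n and w_{m,0} = -I_m for (k,i) = (1,1), n, m = 0, ..., N."""
    n = np.arange(N + 1)
    I = ((n + 1)**(1 - alpha) - n**(1 - alpha)) / gamma(2 - alpha)   # closed form of I_n
    omega = np.concatenate(([I[0]], np.diff(I)))                     # nabla I_n with I_{-1} = 0
    return omega, -I

for alpha in (0.5, 0.9, 0.999):
    print(alpha, np.round(weights_11(alpha, 4)[0], 4))
# as alpha -> 1 the printed weights tend to [1, -1, 0, 0, 0]: the backward Euler differences
```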

Using (2.10), we construct the discrete schemes

$$\begin{aligned} D_{k,i}^{\alpha } u_{n}=f(t_{n},u_{n}), \quad n\ge k, \end{aligned}$$
(2.17)

as approximations of Eq. (1.1). Given the starting values, we define the local truncation error at the n-th step by

$$\begin{aligned} \tau _{n}^{(k,i)}=D_{k,i}^{\alpha }u(t_{n})-{^{C}}D^{\alpha }u(t_{n}), \quad n\ge k, \end{aligned}$$
(2.18)

where u(t) is the exact solution of (1.1).

Theorem 2.1

Let \(0<\alpha <1\) and \(1\le k\le 6\). If \(u(t)\in C^{k+1}[0, T]\), then for \(1\le i<k\), it holds that

$$\begin{aligned} \tau _{n}^{(k,i)}=O\left( (t_{n-k+i})^{-\alpha -1}\Delta t^{k+1}+\Delta t^{k+1-\alpha }\right) ,\quad n\ge k. \end{aligned}$$
(2.19)

In particular,

$$\begin{aligned} \tau _{n}^{(k,k)}=O(\Delta t^{k+1-\alpha }), \quad n\ge k. \end{aligned}$$
(2.20)

Proof

From (2.7), we have

$$\begin{aligned} p_{j,q}^{k}(t)-u(t)=u^{(k+1)}(\xi _{j})\left( {\begin{array}{c}s-q+k\\ k+1\end{array}}\right) (\Delta t)^{k+1} \end{aligned}$$
(2.21)

for \(t=t_{j-1}+s\Delta t\), \(0\le s\le 1\), where \(t_{j+q-k-1}\le \xi _{j}\le t_{j+q-1}\).

Inspired by [17], integration by parts yields

$$\begin{aligned} \begin{aligned} \tau _{n}^{(k,i)}&=\frac{1}{\Gamma (1-\alpha )}\sum _{j=1}^{n}\int _{t_{j-1}}^{t_{j}}(t_{n}-t)^{-\alpha } \left( \frac{\mathrm {d} P_{i}^{k}(t)}{\mathrm {d}t} -\frac{\mathrm {d} u(t)}{\mathrm {d} t}\right) \mathrm {d}t \\&=\frac{-\alpha }{\Gamma (1-\alpha )}\sum _{j=1}^{n} \int _{t_{j-1}}^{t_{j}}(t_{n}-t)^{-\alpha -1}\left( P_{i}^{k}(t)-u(t)\right) \mathrm {d}t \\&=\frac{-\alpha (\Delta t)^{-\alpha }}{\Gamma (1-\alpha )} \sum _{j=1}^{n}\int _{0}^{1}(n-j+1-s)^{-\alpha -1}\left( P_{i}^{k}(t_{j-1}+s\Delta t)-u(t_{j-1}+s\Delta t)\right) \mathrm {d}s \end{aligned} \end{aligned}$$
(2.22)

for \(n\ge k\). Substituting (2.8) and (2.21) into the last formula of (2.22) and taking \(i=k\), we get

$$\begin{aligned} \begin{aligned} \left| \tau _{n}^{(k,k)}\right|&\le \frac{\alpha \left( \Delta t\right) ^{k+1-\alpha } }{\Gamma (1-\alpha )}\max _{\xi \in {\mathscr {I}}}\left| u^{(k+1)}(\xi )\right| \Big (\sum _{j=1}^{n-k+1}\Big |\int _{0}^{1} (n-j+1-s)^{-\alpha -1}\left( {\begin{array}{c}s\\ k+1\end{array}}\right) \mathrm {d}s\Big |\\&\quad \ +\sum _{j=n-k+2}^{n}\Big |\int _{0}^{1}(n-j+1-s)^{-\alpha -1} \left( {\begin{array}{c}s+k-n-1+j\\ k+1\end{array}}\right) \mathrm {d}s\Big |\Big ), \end{aligned} \end{aligned}$$

and for \(1\le i\le k-1\),

$$\begin{aligned} \begin{aligned}&\left| \tau _{n}^{(k,i)}\right| \le \frac{\alpha \left( \Delta t\right) ^{-\alpha } }{\Gamma (1-\alpha )} ((\Delta t)^{k}\max _{\xi \in \cup _{j=1}^{k-i}{\mathscr {I}}_{j}}|u^{(k)}(\xi )| \sum _{j=1}^{k-i}\Big |\int _{0}^{1}(n-j+1-s)^{-\alpha -1} \left( {\begin{array}{c}s+j-1\\ k\end{array}}\right) \mathrm {d}s\Big |\\&\quad +(\Delta t)^{k+1}\max _{\xi \in {\mathscr {I}}}\left| u^{(k+1)}(\xi )\right| \sum _{j=k-i+1}^{n-i+1}\Big |\int _{0}^{1}(n-j+1-s)^{-\alpha -1}\left( {\begin{array}{c}s+k-i\\ k+1\end{array}}\right) \mathrm {d}s\Big | \\&\quad +(\Delta t)^{k+1}\max _{\xi \in {\mathscr {I}}}\left| u^{(k+1)}(\xi )\right| \sum _{j=n-i+2}^{n}\Big |\int _{0}^{1}(n-j+1-s)^{-\alpha -1}\left( {\begin{array}{c}s+k-n-i+j\\ k+1\end{array}}\right) \mathrm {d}s\Big |\Big ). \end{aligned} \end{aligned}$$

Since for any \(q\le k\) with \(q, k\in {\mathbb {N}}^{+}\), the factor \((1-s)\) is included in \(\left( {\begin{array}{c}s-q+k\\ k+1\end{array}}\right) \) and the term \(\frac{1}{1-s}\left( {\begin{array}{c}s-q+k\\ k+1\end{array}}\right) \) is bounded for \(0\le s \le 1\), we have

$$\begin{aligned} \begin{aligned} |\tau _{n}^{(k,k)}|\le&\frac{\alpha \left( \Delta t\right) ^{k+1-\alpha } }{\Gamma (1-\alpha )}C^{(k)}\sum _{j=1}^{n}\int _{0}^{1}(n-j+1-s)^{-\alpha -1}(1-s)\mathrm {d}s\\ \le&\frac{\alpha \left( \Delta t\right) ^{k+1-\alpha } }{\Gamma (1-\alpha )}C^{(k)}\left( \sum _{j=1}^{n-1}\int _{0}^{1}(n-j+1-s)^{-\alpha -1}\mathrm {d}s+\int _{0}^{1}(1-s)^{-\alpha }\mathrm {d}s\right) \\ \le&\left( \Delta t\right) ^{k+1-\alpha }C^{(k)} \left( \frac{1}{\Gamma (1-\alpha )}+\frac{1}{\Gamma (2-\alpha )}\right) , \end{aligned} \end{aligned}$$

where \(C^{(k)}\) are bounded and depend on \(u^{(k+1)}\) and k. On the other hand, if \(i<k\), it holds that

$$\begin{aligned} \begin{aligned} |\tau _{n}^{(k,i)}|\le&\frac{\alpha }{\Gamma (1-\alpha )}C^{(k,i)}\Big ((\Delta t)^{k-\alpha }\sum _{j=1}^{k-i}\int _{0}^{1}(n-j+1-s)^{-\alpha -1}\mathrm {d}s \\&+(\Delta t)^{k+1-\alpha }\sum _{j=1}^{n}\int _{0}^{1}(n-j+1-s)^{-\alpha -1}(1-s)\mathrm {d}s\Big ) \\ \le&C^{(k,i)}\Big (\frac{\alpha }{\Gamma (1-\alpha )}(\Delta t)^{k+1}(k-i)(t_{n-k+i})^{-\alpha -1} \\&+(\Delta t)^{k+1-\alpha }\big (\frac{1}{\Gamma (1-\alpha )}+\frac{1}{\Gamma (2-\alpha )}\big )\Big ), \end{aligned} \end{aligned}$$

where \(C^{(k,i)}\) are constants depending on \(u^{(k)}\), \(u^{(k+1)}\), k and i. \(\square \)

3 Stability Analysis

To analyse the stability of discrete schemes (2.17) with initial value \(u(0)=u_{0}\), we apply (2.9) to the test equation

$$\begin{aligned} {^{C}}D^{\alpha }u(t)=\lambda u(t), \quad \lambda \in {\mathbb {C}} \end{aligned}$$
(3.1)

and obtain

$$\begin{aligned} D_{k,i}^{\alpha }u_{n}=\lambda u_{n}, \quad n\ge k. \end{aligned}$$
(3.2)

We rewrite (3.2) in the form of a formal power series

$$\begin{aligned} \sum _{n=0}^{\infty }D_{k,i}^{\alpha }u_{n+k}\xi ^{n} =\lambda \sum _{n=0}^{\infty }u_{n+k}\xi ^{n}. \end{aligned}$$
(3.3)

Substituting (2.10), formula (3.3) becomes

$$\begin{aligned} \omega ^{(k,i)}(\xi ){\mathscr {U}}(\xi )=z{\mathscr {U}}(\xi ) +g^{(k,i)}(\xi ), \end{aligned}$$
(3.4)

where \(z:=\lambda (\Delta t)^{\alpha }\). The above notations are defined as

$$\begin{aligned} \begin{aligned} {\mathscr {U}}(\xi )&=\sum \limits _{n=0}^{\infty }u_{n+k}\xi ^{n}, \omega ^{(k,i)}(\xi )=\sum _{n=0}^{\infty } \omega _{n}^{(k,i)}\xi ^{n}, \\ g^{(k,i)}(\xi )&=-\sum _{j=0}^{k-1}u_{j}\sum _{n=0}^{\infty } \left( w_{n+k,j}^{(k,i)}+\omega _{n+k-j}^{(k,i)}\right) \xi ^{n}. \end{aligned} \end{aligned}$$
(3.5)

Theorem 3.1

The stability region of (3.2) with \(1\le i\le k\le 6\) is \({\mathbb {C}}\backslash \{\omega ^{(k,i)}(\xi ): |\xi |\le 1\}\).

Remark 2

The stability region of a method applied to the test equation (3.1) is the set of \(z=\lambda (\Delta t)^{\alpha }\in {\mathbb {C}}\) with \(\Delta t>0\) such that \(u_{n}\rightarrow 0\) as \(n\rightarrow \infty \) whenever the starting values \(u_{0}, \ldots , u_{k-1}\) are bounded.

Proof

Denote the stability region of (3.2) by \(S^{(k,i)}\). Proving \(S^{(k,i)}={\mathbb {C}}\backslash \{\omega ^{(k,i)}(\xi ): |\xi |\le 1\}\) amounts to showing both inclusions, i.e., that if \(z\in {\mathbb {C}}\backslash \{\omega ^{(k,i)}(\xi ): |\xi |\le 1\}\), then \(z\in S^{(k,i)}\), and that if \(z\not \in {\mathbb {C}}\backslash \{\omega ^{(k,i)}(\xi ): |\xi |\le 1\}\), then \(z\not \in S^{(k,i)}\).

On the one hand, if \(z\in {\mathbb {C}}\backslash \{\omega ^{(k,i)}(\xi ): |\xi |\le 1\}\) and \(|z|\le 1\), then \(z-\omega ^{(k,i)}(\xi )\ne 0\) for \(|\xi |\le 1\). Thus, by Lemmas A.4, A.5 and Theorem A.1, the coefficient sequence of the reciprocal of \(z-\omega ^{(k,i)}(\xi )\) is in \(l^{1}\) and the coefficient sequence of \(g^{(k,i)}(\xi )\) tends to zero.

If \(|z|>1\), formula (3.4) can be rewritten as

$$\begin{aligned} {\mathscr {U}}(\xi )=\frac{\frac{g^{(k,i)}(\xi )}{z}}{\frac{\omega ^{(k,i)}(\xi )}{z}-1}, \end{aligned}$$

in which case the coefficient sequence of the reciprocal of \(\frac{\omega ^{(k,i)}(\xi )}{z}-1\) is in \(l^{1}\), and the coefficient sequence of \(\frac{g^{(k,i)}(\xi )}{z}\) converges to zero. In addition, if \(\lim \limits _{n\rightarrow \infty }\sum \limits _{j=0}^{n}|l_{j}|=L<+\infty \) and \(\lim \limits _{j\rightarrow \infty }c_{j}=0\), then \(\lim \limits _{n\rightarrow \infty }\sum \limits _{j=0}^{n}l_{n-j}c_{j}=0\) follows. This implies \(u_{n}\rightarrow 0\) as \(n\rightarrow \infty \).

On the other hand, assume that \(z=\omega ^{(k,i)}(\xi _{0})\) for some \(|\xi _{0}|\le 1\), then formula (3.4) becomes

$$\begin{aligned} \left( \omega ^{(k,i)}(\xi )-\omega ^{(k,i)}(\xi _{0})\right) {\mathscr {U}}(\xi )=g^{(k,i)}(\xi ). \end{aligned}$$
(3.6)

Applying the operators (2.10) to a constant function, we obtain from Theorem 2.1 that the corresponding truncation errors are zero, which leads to

$$\begin{aligned} \sum _{j=0}^{k-1}w_{n,j}^{(k,i)}+\sum _{j=0}^{n} \omega _{n-j}^{(k,i)}=0,\quad n\ge k, \end{aligned}$$

and consequently,

$$\begin{aligned} \begin{aligned}&\sum _{n=k}^{\infty }\left( \sum _{j=0}^{k-1}w_{n,j}^{(k,i)}+\sum _{j=0}^{n}\omega _{n-j}^{(k,i)}\right) \xi ^{n-k} \\&\quad =\sum _{n=0}^{\infty }\left( \sum _{j=0}^{k-1}w_{n+k,j}^{(k,i)}+\sum _{j=0}^{n+k}\omega _{n+k-j}^{(k,i)}\right) \xi ^{n} \\&\quad =\sum _{n=0}^{\infty }\sum _{j=0}^{k-1}\left( w_{n+k,j}^{(k,i)}+\omega _{n+k-j}^{(k,i)} \right) \xi ^{n}+\frac{\omega ^{(k,i)}(\xi )}{1-\xi }=0. \\ \end{aligned} \end{aligned}$$

Assume that \(u_{0}=\cdots =u_{k-1}\ne 0\). Then, using the expression of \(g^{(k,i)}(\xi )\), we find \( g^{(k,i)}(\xi )=u_{0}\frac{\omega ^{(k,i)}(\xi )}{1-\xi }\). If \(\omega ^{(k,i)}(\xi _{0})=0\), then \({\mathscr {U}}(\xi )=\frac{u_{0}}{1-\xi }\), which means that \(u_{n}=u_{0}\) for all \(n\in {\mathbb {N}}\). If \(\omega ^{(k,i)}(\xi _{0})\ne 0\), then we have

$$\begin{aligned} {\mathscr {U}}(\xi )(1-\xi )\frac{\omega ^{(k,i)}(\xi ) -\omega ^{(k,i)}(\xi _{0})}{\xi -\xi _{0}}=u_{0}\frac{\omega ^{(k,i)} (\xi )-\omega ^{(k,i)}(\xi _{0})}{\xi -\xi _{0}}+u_{0} \frac{\omega ^{(k,i)}(\xi _{0})}{\xi -\xi _{0}}. \end{aligned}$$

If we assume that \(u_{n}\rightarrow 0\) as \(n\rightarrow \infty \), then from Lemma A.6 it follows that the coefficient sequence of \((1-\xi )\frac{\omega ^{(k,i)}(\xi )-\omega ^{(k,i)}(\xi _{0})}{\xi -\xi _{0}}\) is in the space \(l^{1}\). This indicates that the coefficient sequence of \({\mathscr {U}}(\xi )(1-\xi )\frac{\omega ^{(k,i)}(\xi )-\omega ^{(k,i)}(\xi _{0})}{\xi -\xi _{0}}\) tends to zero. In addition, Lemma A.3 shows that the coefficient sequence of \(\frac{\omega ^{(k,i)}(\xi )-\omega ^{(k,i)}(\xi _{0})}{\xi -\xi _{0}}\) converges to zero. However, the divergence of the coefficient sequence of \(\frac{1}{\xi -\xi _{0}}\) for \(|\xi _{0}|\le 1\) leads to a contradiction. Thus, there exist nonzero bounded starting values \(\{u_{i}\}_{i=0}^{k-1}\) such that \(u_{n} \not \rightarrow 0\) as \(n\rightarrow \infty \), which indicates \(z\not \in \mathrm {S}^{(k,i)}\). \(\square \)

Analogous to the \(A(\theta )\)-stability of methods for classical ODEs mentioned in [19], we define \(A(\theta )\)-stability of methods for fractional ODEs.

Definition 3.1

A method is said to be \(A(\theta )\)-stable with \(\theta \in [0, \pi -\frac{\alpha \pi }{2})\) and \(0<\alpha <1\), if the sector

$$\begin{aligned} S_{\theta }=\{z: |\mathrm {arg}(-z)|\le \theta , z\ne 0\} \end{aligned}$$

is contained in the stability region.

Theorem 3.2

The method (3.2) is \(A(\frac{\pi }{2})\)-stable for \(1\le i\le k\le 2\).

Proof

For \(\theta =\frac{\pi }{2}\) in Definition 3.1, it suffices to prove \(\mathrm {S}_{\frac{\pi }{2}}\subseteq S^{(k,i)}\) for \(1\le i\le k\le 2\); that is, to show that for \(|\xi |\le 1\), \(\omega ^{(k,i)}(\xi )\) either vanishes or has positive real part.

First of all, it can be readily verified that \(\omega ^{(k,i)}(1)=0\), which is consistent with the exclusion of \(z=0\) from \(\mathrm {S}_{\frac{\pi }{2}}\). Next we prove the result for the cases \((k,i)=(1,1)\), \((k,i)=(2,1)\) and \((k,i)=(2,2)\) separately.

Case \((k,i)=(1,1)\): from the expression of \(\omega ^{(1,1)}(\xi )\), we obtain

$$\begin{aligned} \omega ^{(1,1)}(\xi )=I_{0}+\sum _{j=1}^{\infty }\nabla I_{j}\xi ^{j}=(1-\xi )I(\xi ), \end{aligned}$$
(3.7)

where \(I(\xi )=\sum \limits _{n=0}^{\infty }I_{n}\xi ^{n}\). Lemma A.1 and Theorem A.2 yield

$$\begin{aligned} I_{n}=\int _{0}^{1}r^{n}\mathrm {d}\sigma (r), \quad n\in {\mathbb {N}}, \end{aligned}$$
(3.8)

where \(\sigma (r)\) is a non-decreasing function. Suppose that \(|\xi |<1\); substituting (3.8) into (3.7) yields

$$\begin{aligned} \mathrm {Re}\Big (\omega ^{(1,1)}(\xi )\Big )=\mathrm {Re} \Big ((1-\xi )\sum _{n=0}^{\infty }\int _{0}^{1}r^{n}\xi ^{n} \mathrm {d}\sigma (r)\Big )=\int _{0}^{1}\mathrm {Re} \Big (\frac{1-\xi }{1-r\xi }\Big )\mathrm {d}\sigma (r). \end{aligned}$$

Let \(\xi =|\xi |(\cos \theta +i\sin \theta )\), then

$$\begin{aligned} \frac{1-\xi }{1-r\xi }=\frac{\left( 1-(r+1)|\xi |\cos \theta +r|\xi |^{2} \right) +i\left( (r-1)|\xi |\sin \theta \right) }{(1-r|\xi |\cos \theta )^{2}+(r|\xi |\sin \theta )^{2}}. \end{aligned}$$

For \(0\le r\le 1\) and \(|\xi |<1\), we find

$$\begin{aligned} \begin{aligned} 1-(r+1)|\xi |\cos \theta +r|\xi |^{2}&\ge \min \left( (1-|\xi | \cos \theta )^{2},1-|\xi |\cos \theta \right) , \\ 1-2r|\xi |\cos \theta +r^{2}|\xi |^{2}&\le (1+r|\xi |)^{2}\le 4, \end{aligned} \end{aligned}$$

which yield

$$\begin{aligned} \int _{0}^{1}\mathrm {Re}\left( \frac{1-\xi }{1-r\xi }\right) \mathrm {d}\sigma (r) \ge \frac{\min \left( (1-|\xi |\cos \theta )^{2},1-|\xi | \cos \theta \right) }{4}I_{0}. \end{aligned}$$

Case \((k,i)=(2,1)\): using the definition of \(\omega ^{(2,1)}(\xi )\), we observe

$$\begin{aligned} \begin{aligned} \omega ^{(2,1)}(\xi )=&\sum _{n=0}^{\infty }\left( \nabla I_{n}+\nabla ^{2}I_{n,1}^{2}\right) \xi ^{n} \\ =&\,(1-\xi )I(\xi )+(1-\xi )^{2}I_{1}^{2}(\xi ) \\ =&\,(1-\xi )\left( I(\xi )-2I_{1}^{2}(\xi )+(3-\xi )I_{1}^{2}(\xi )\right) , \end{aligned} \end{aligned}$$
(3.9)

where

$$\begin{aligned} I(\xi )=\sum _{n=0}^{\infty }I_{n}\xi ^{n},\qquad I_{1}^{2}(\xi )=\sum _{n=0}^{\infty }I_{n,1}^{2}\xi ^{n}. \end{aligned}$$

Lemmas A.1, A.2 and Theorem A.2 yield

$$\begin{aligned} I_{n}-2I_{n,1}^{2}=\int _{0}^{1}r^{n}\mathrm {d}\upsilon (r), \quad n=0,1,\ldots \end{aligned}$$
(3.10)

and

$$\begin{aligned} I_{n,1}^{2}=\int _{0}^{1}r^{n}\mathrm {d}\gamma (r), \quad n=0,1,\ldots , \end{aligned}$$
(3.11)

where both \(\upsilon \) and \(\gamma \) are non-decreasing functions. Then for \(|\xi |<1\),

$$\begin{aligned} \mathrm {Re}\Big (\omega ^{(2,1)}(\xi )\Big )=\int _{0}^{1} \mathrm {Re}\Big (\frac{1-\xi }{1-r\xi }\Big )\mathrm {d}\upsilon (r) +\int _{0}^{1}\mathrm {Re}\Big (\frac{(1-\xi )(3-\xi )}{1-r\xi }\Big ) \mathrm {d}\gamma (r). \end{aligned}$$

Moreover,

$$\begin{aligned} \begin{aligned}&\frac{(1-\xi )(3-\xi )}{1-r\xi }\\&\quad =\frac{(3-4|\xi |\cos \theta +|\xi |^{2}\cos 2\theta ) (1-r|\xi |\cos \theta )+(4-2|\xi |\cos \theta )r|\xi |^{2}\sin ^{2} \theta }{(1-r|\xi |\cos \theta )^{2}+(r|\xi |\sin \theta )^{2}} \\&\qquad \quad +\,i\frac{(3r-|\xi |^{2}r -4+2|\xi |\cos \theta )|\xi |\sin \theta }{(1-r|\xi |\cos \theta )^{2} +(r|\xi |\sin \theta )^{2}}. \end{aligned} \end{aligned}$$

Since

$$\begin{aligned} \begin{aligned} 3-4|\xi |\cos \theta +|\xi |^{2}\cos 2\theta =3-4|\xi |\cos \theta +2|\xi |^{2}\cos ^{2}\theta -|\xi |^{2}\ge 2(1-|\xi |\cos \theta )^{2}, \end{aligned} \end{aligned}$$

we get

$$\begin{aligned} \int _{0}^{1}\mathrm {Re}\Big (\frac{(1-\xi )(3-\xi )}{1-r\xi } \Big )\mathrm {d}\gamma (r) \ge \frac{\min \Big ((1-|\xi |\cos \theta )^{3},(1-|\xi |\cos \theta )^{2} \Big )}{2}I_{0,1}^{2}. \end{aligned}$$

Case \((k,i)=(2,2)\): the series \(\omega ^{(2,2)}(\xi )\) satisfies

$$\begin{aligned} \begin{aligned} \omega ^{(2,2)}(\xi )&=I_{0}(1-\xi )+I_{0,1}^{2}(1-\xi )^{2} +(1-\xi )\sum _{n=0}^{\infty }I_{n+1}\xi ^{n}+(1-\xi )^{2} \sum _{n=0}^{\infty }I_{n+1,2}^{2}\xi ^{n} \\&=I_{0,1}^{2}(1-\xi )(3-\xi )+(1-\xi ^{2})\sum _{n=0}^{\infty } I_{n+1,1}^{2}\xi ^{n}+(1-\xi )\left( I(\xi )-2I_{1}^{2}(\xi )\right) , \end{aligned}\nonumber \\ \end{aligned}$$
(3.12)

since for \(n\ge 0\), the relation \(I_{n}+I_{n,2}^{2}=I_{n,1}^{2}\) yields

$$\begin{aligned} \begin{aligned}&(1-\xi )\sum _{n=0}^{\infty }I_{n+1}\xi ^{n}+(1-\xi )^{2} \sum _{n=0}^{\infty }I_{n+1,2}^{2}\xi ^{n} \\&\quad =(1-\xi )\Big (\sum _{n=0}^{\infty }I_{n+1,1}^{2}\xi ^{n} -\xi \sum _{n=0}^{\infty }I_{n+1,2}^{2}\xi ^{n}\Big ) \\&\quad =(1-\xi ^{2})\sum _{n=0}^{\infty }I_{n+1,1}^{2} \xi ^{n}+(1-\xi )\sum _{n=0}^{\infty } \left( I_{n+1}-2I_{n+1,1}^{2}\right) \xi ^{n+1}\\&\quad =(1-\xi ^{2})\sum _{n=0}^{\infty }I_{n+1,1}^{2} \xi ^{n}+(1-\xi )\left( I(\xi )-2I_{1}^{2}(\xi )-(I_{0}-2I_{0,1}^{2})\right) . \end{aligned} \end{aligned}$$

Suppose that \(|\xi |<1\); substituting (3.10) and (3.11) into (3.12), we obtain

$$\begin{aligned} \begin{aligned} \mathrm {Re}\Big (\omega ^{(2,2)}(\xi )\Big )=&\int _{0}^{1}\mathrm {Re} \Big ((1-\xi )(3-\xi )\Big )\mathrm {d}\gamma (r) \\&+\int _{0}^{1}r\mathrm {Re}\Big (\frac{1-\xi ^{2}}{1-r\xi }\Big ) \mathrm {d}\gamma (r)+\int _{0}^{1}\mathrm {Re}\Big (\frac{1-\xi }{1-r\xi }\Big )\mathrm {d}\upsilon (r). \end{aligned} \end{aligned}$$

Furthermore,

$$\begin{aligned} \begin{aligned} \frac{1-\xi ^{2}}{1-r\xi }=&\frac{(1-|\xi |^{2}\cos 2\theta )(1-r|\xi | \cos \theta )+r|\xi |^{3}\sin \theta \sin 2\theta }{(1-r|\xi |\cos \theta )^{2}+(r|\xi |\sin \theta )^{2}} \\&+i\frac{(1-|\xi |^{2}\cos 2\theta )r|\xi |\sin \theta -(1-r|\xi |\cos \theta )|\xi |^{2}\sin 2\theta }{(1-r|\xi | \cos \theta )^{2}+(r|\xi |\sin \theta )^{2}}. \end{aligned} \end{aligned}$$

Since for \(0\le r\le 1\),

$$\begin{aligned} \begin{aligned}&(1-|\xi |^{2}\cos 2\theta )(1-r|\xi |\cos \theta ) +r|\xi |^{3}\sin \theta \sin 2\theta \\&\quad =1-|\xi |^{2}\cos 2\theta -r|\xi |\cos \theta +r|\xi |^{3}\cos \theta \\&\quad \ge (1-|\xi |^{2})(1-|\xi ||\cos \theta |), \end{aligned} \end{aligned}$$

we obtain

$$\begin{aligned} \begin{aligned} \int _{0}^{1}r\mathrm {Re}\Big (\frac{1-\xi ^{2}}{1-r\xi }\Big ) \mathrm {d}\gamma (r) \ge&\frac{(1-|\xi |^{2})(1-|\xi ||\cos \theta |)}{4} \int _{0}^{1}r\mathrm {d}\gamma (r) \\ =&\frac{(1-|\xi |^{2})(1-|\xi ||\cos \theta |)}{4}I_{1,1}^{2}. \end{aligned} \end{aligned}$$

Finally, for \(1\le i\le k\le 2\), we conclude

$$\begin{aligned} \mathrm {Re}\Big (\omega ^{(k,i)}(\xi )\Big )\ge \frac{\min \left( (1-|\xi |\cos \theta )^{2}, 1-|\xi |\cos \theta \right) }{4}I_{0}>0, \quad |\xi |<1. \end{aligned}$$

In addition, according to Lemma A.6, there exist constants \(M^{(k,i)}>0\) such that

$$\begin{aligned} |\omega ^{(k,i)}(\xi )-\omega ^{(k,i)}(\xi _{0})| \le \frac{M^{(k,i)}}{|1-\xi |} |\xi -\xi _{0}|,\quad \xi \ne 1. \end{aligned}$$

This implies that \(\omega ^{(k,i)}(\xi )\) is continuous for \(|\xi |\le 1\) and \(\xi \ne 1\). Therefore, for any fixed \(\xi \) lying on the unit circle, the angle of which satisfies \(\mathrm {arg}(\xi )=\theta _{\xi }\ne 0\), there exists a sequence \(\xi _{n}=(1-\frac{1}{n})\xi , n=1, 2, \ldots \), with \(|\xi _{n}|<1\), such that

$$\begin{aligned} \mathrm {Re}\Big (\omega ^{(k,i)}(\xi )\Big )=\lim \limits _{n\rightarrow \infty } \mathrm {Re}\Big (\omega ^{(k,i)}(\xi _{n})\Big )\ge \frac{I_{0}}{4}\min \left( (1-\cos \theta _{\xi })^{2}, 1-\cos \theta _{\xi }\right) >0. \end{aligned}$$

\(\square \)
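Theorems 3.1 and 3.2 can also be illustrated numerically by running the scalar recursion (3.2) itself. A small sketch for \((k,i)=(1,1)\), using the closed form of \(I_{n}\); for z in the closed left half-plane the computed \(|u_{n}|\) decays (algebraically) towards zero, in line with the \(A(\frac{\pi }{2})\)-stability just proved.

```python
import numpy as np
from math import gamma

def run_test_equation(z, alpha, N=4000, u0=1.0):
    """Iterate (3.2) for (k,i) = (1,1) with z = lambda * dt^alpha and return |u_N|."""
    n = np.arange(N + 1)
    I = ((n + 1)**(1 - alpha) - n**(1 - alpha)) / gamma(2 - alpha)
    omega = np.concatenate(([I[0]], np.diff(I)))        # omega_n = nabla I_n, w_{m,0} = -I_m
    u = np.empty(N + 1, dtype=complex)
    u[0] = u0
    for m in range(1, N + 1):
        hist = -I[m] * u[0] + np.dot(omega[1:m + 1][::-1], u[:m])
        u[m] = hist / (z - omega[0])                    # solve omega_0 u_m + hist = z u_m for u_m
    return abs(u[-1])

for z in (-2.0, -2.0 + 2.0j):          # both satisfy |arg(-z)| <= pi/2
    print(z, run_test_equation(z, alpha=0.5))
```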

4 Convergence Analysis

In this section, we establish error estimates for (2.17). Assume that u(t) is the exact solution of (1.1); then it satisfies

$$\begin{aligned} D_{k,i}^{\alpha }u(t_{n})=f(t_{n}, u(t_{n}))+\tau _{n}^{(k,i)},\quad k\le n\le N, \end{aligned}$$
(4.1)

where the difference operator \(D_{k,i}^{\alpha }\) and the local truncation error \(\tau _{n}^{(k,i)}\) are defined by (2.10) and (2.18), respectively. Suppose that \(u_{n}^{(k,i)}\) is the solution of (2.17) for each pair (k, i); we denote the global errors by

$$\begin{aligned} e_{n}^{(k,i)}=u(t_{n})-u_{n}^{(k,i)}, \quad 0\le n\le N. \end{aligned}$$
(4.2)

Subtracting (2.17) from (4.1) yields

$$\begin{aligned} D_{k,i}^{\alpha }e_{n}^{(k,i)}=\delta f_{n}^{(k,i)}+\tau _{n}^{(k,i)},\quad k\le n\le N, \end{aligned}$$
(4.3)

where \(\delta f_{n}^{(k,i)}=f(t_{n}, u(t_{n}))-f(t_{n}, u_{n}^{(k,i)})\). From (2.10) and (4.3), we have

$$\begin{aligned} \sum _{m=0}^{k-1}w_{n,m}^{(k,i)}e_{m}^{(k,i)}+\sum _{j=0}^{n} \omega _{n-j}^{(k,i)}e_{j}^{(k,i)} =(\Delta t)^{\alpha }\delta f_{n}^{(k,i)}+(\Delta t)^{\alpha }\tau _{n}^{(k,i)},\quad k\le n\le N. \end{aligned}$$
(4.4)

Multiplying both sides of (4.4) by \(\xi ^{n-k}\) and summing over all \(n\ge k\), we obtain

$$\begin{aligned} \begin{aligned}&\sum _{n=0}^{\infty }\sum _{m=0}^{k-1}\left( w_{n+k,m}^{(k,i)} +\omega _{n+k-m}^{(k,i)}\right) e_{m}^{(k,i)}\xi ^{n} +\sum _{n=0}^{\infty }\sum _{j=k}^{n+k}\omega _{n+k-j}^{(k,i)}e_{j}^{(k,i)}\xi ^{n}\\&\quad =(\Delta t)^{\alpha }\sum _{n=0}^{\infty } \delta f_{n+k}^{(k,i)}\xi ^{n}+(\Delta t)^{\alpha } \sum _{n=0}^{\infty }\tau _{n+k}^{(k,i)}\xi ^{n}. \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \omega ^{(k,i)}(\xi )e^{(k,i)}(\xi ) =\sum _{m=0}^{k-1}e_{m}^{(k,i)}s_{m}^{(k,i)}(\xi ) +(\Delta t)^{\alpha }\delta f^{(k,i)}(\xi )+(\Delta t)^{\alpha }\tau ^{(k,i)}(\xi ), \end{aligned}$$
(4.5)

where

$$\begin{aligned} \begin{aligned}&s_{m}^{(k,i)}(\xi ):=\sum _{n=0}^{\infty }s_{n,m}^{(k,i)}\xi ^{n}=-\sum _{n=0}^{\infty }\left( w_{n+k,m}^{(k,i)}+\omega _{n+k-m}^{(k,i)}\right) \xi ^{n},\quad e^{(k,i)}(\xi )=\sum _{n=0}^{\infty }e_{n+k}^{(k,i)}\xi ^{n}, \\&\omega ^{(k,i)}(\xi )=\sum _{n=0}^{\infty }\omega _{n}^{(k,i)}\xi ^{n}, \delta f^{(k,i)}(\xi )=\sum _{n=0}^{\infty }\delta f_{n+k}^{(k,i)}\xi ^{n},\quad \tau ^{(k,i)}(\xi )=\sum _{n=0}^{\infty }\tau _{n+k}^{(k,i)}\xi ^{n}. \end{aligned}\nonumber \\ \end{aligned}$$
(4.6)

Theorem 4.1

Let u(t) and \(u_{n}\), \(k\le n\le N\), be the solutions of Eqs. (1.1) and (4.4), respectively. Assume that \(f(t, u(t))\) in (1.1) satisfies a Lipschitz condition with respect to u. If \(u(t)\in C^{k+1}[0, T]\), then

(i):

for \(1\le k\le 3\),

$$\begin{aligned} |e_{n}^{(k,k)}|\le C^{(k,k)}\left( \sum _{m=0}^{k-1}|e_{m}^{(k,k)}|+ \left( \Delta t\right) ^{(k+1)}+\left( \Delta t\right) ^{k+1-\alpha }t_{n-1}^{\alpha }\right) , \quad k\le n\le N, \end{aligned}$$
(4.7)
(ii):

for \(1\le i<k\le 3\),

$$\begin{aligned} |e_{n}^{(k,i)}|\le C^{(k,i)}\left( \sum _{m=0}^{k-1}|e_{m}^{(k,i)}|+\left( \Delta t\right) ^{k}+\left( \Delta t\right) ^{k+1-\alpha }t_{n-1}^{\alpha }\right) , \quad k\le n\le N, \end{aligned}$$
(4.8)

where \(\Delta t>0\) is sufficiently small, \(N\Delta t=T\), and \(C^{(k,i)}>0\) are independent of N and n.

Proof

Substituting formula (B.14) into (4.5), and using (B.21), we have

$$\begin{aligned} \begin{aligned} e^{(k,i)}(\xi )=\frac{r^{(k,i)}(\xi )}{(1-\xi )^{\alpha }} \left( \sum _{m=0}^{k-1}e_{m}^{(k,i)}s_{m}^{(k,i)}(\xi )+(\Delta t)^{\alpha }\delta f^{(k,i)}(\xi )+(\Delta t)^{\alpha }\tau ^{(k,i)}(\xi )\right) .\quad \end{aligned} \end{aligned}$$
(4.9)

For \(k\le n\le N\), we rewrite (4.9) in the equivalent form

$$\begin{aligned} e_{n}^{(k,i)}= & {} \displaystyle \sum _{m=0}^{k-1}e_{m}^{(k,i)}\sum _{j=0}^{n-k} r_{n-k-j}^{(k,i)}\sum _{i=0}^{j}g_{j-i}^{(-\alpha )}s_{i,m}^{(k,i)}+(\Delta t)^{\alpha }\sum _{j=0}^{n-k}r_{n-k-j}^{(k,i)} \sum _{i=0}^{j}g_{j-i}^{(-\alpha )}\delta f_{i+k}^{(k,i)} \nonumber \\&\displaystyle +\,(\Delta t)^{\alpha }\sum _{j=0}^{n-k}r_{n-k-j}^{(k,i)} \sum _{i=0}^{j}g_{j-i}^{(-\alpha )}\tau _{i+k}^{(k,i)}, \end{aligned}$$
(4.10)

where the coefficients \(g_{n}^{(-\alpha )}\) are given in Lemma B.2. Since \(f(t, u(t))\) satisfies a Lipschitz condition by assumption, there exist constants \(L^{(k,i)}>0\) such that \(|\delta f_{n}^{(k,i)}|\le L^{(k,i)}|e_{n}^{(k,i)}|\) for \(k\le n\le N\). It follows that

$$\begin{aligned} \left| e_{n}^{(k,i)}\right|\le & {} \sum _{m=0}^{k-1}\left| e_{m}^{(k,i)}\right| \sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{i=0}^{j}g_{j-i}^{(-\alpha )}\left| s_{i,m}^{(k,i)}\right| \nonumber \\&+(\Delta t)^{\alpha }\sum _{j=0}^{n-k}g_{n-k-j}^{(-\alpha )} \sum _{i=0}^{j}\left| r_{j-i}^{(k,i)}\right| \left( L^{(k,i)}\left| e_{i+k}^{(k,i)}\right| +\left| \tau _{i+k}^{(k,i)}\right| \right) . \end{aligned}$$
(4.11)

On the one hand, by (B.11) and (B.20), there exist constants \(c_{k,i}, {\tilde{c}}_{k,i}>0\) such that \( |s_{n,0}^{(k,i)}|\le c_{k,i}\frac{n^{-\alpha }}{\Gamma (1-\alpha )}\le {\tilde{c}}_{k,i}g_{n}^{(\alpha -1)}\). Hence, we obtain

$$\begin{aligned} \sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{i=0}^{j}g_{j-i}^{(-\alpha )}\left| s_{i,0}^{(k,i)}\right|\le & {} {\tilde{c}}_{k,i}\sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{i=0}^{j}g_{j-i}^{(-\alpha )}g_{i}^{(\alpha -1)} \nonumber \\\le & {} {\tilde{c}}_{k,i}\sum _{j=0}^{\infty }\left| r_{j}^{(k,i)}\right| ={\tilde{c}}_{k,i}M_{\alpha }^{(k,i)}, \end{aligned}$$
(4.12)

where \(\sum _{i=0}^{j}g_{j-i}^{(-\alpha )}g_{i}^{(\alpha -1)}=1\) for any \(j\ge 0\) in view of the identity \((1-\xi )^{-\alpha }(1-\xi )^{\alpha -1}=(1-\xi )^{-1}\). On the other hand, there exist constants \(c_{m}^{(k,i)}, {\tilde{c}}_{m}^{(k,i)}>0\), \(m\ge 1\), such that \(|s_{n,m}^{(k,i)}|\le c_{m}^{(k,i)}\frac{n^{-\alpha -1}}{|\Gamma (-\alpha )|}\le {\tilde{c}}_{m}^{(k,i)}|g_{n}^{(\alpha )}|\). This gives

$$\begin{aligned} \sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{l=0}^{j} g_{j-l}^{(-\alpha )}\left| s_{l,m}^{(k,i)}\right|\le & {} {\tilde{c}}_{m}^{(k,i)} \sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{l=0}^{j}g_{j-l}^{(-\alpha )}\left| g_{l}^{(\alpha )}\right| \nonumber \\\le & {} 2{\tilde{c}}_{m}^{(k,i)}\sum _{j=0}^{n-k} \left| r_{n-k-j}^{(k,i)}\right| g_{j}^{(-\alpha )}, \end{aligned}$$
(4.13)

where the last inequality holds since \(\sum _{l=0}^{j}g_{j-l}^{(-\alpha )}g_{l}^{(\alpha )}=0\) for any \(j\ge 1\), so that \(\sum _{l=0}^{j}g_{j-l}^{(-\alpha )}|g_{l}^{(\alpha )}|=g_{j}^{(-\alpha )}g_{0}^{(\alpha )}-\sum _{l=1}^{j}g_{j-l}^{(-\alpha )}g_{l}^{(\alpha )}=2g_{j}^{(-\alpha )}\) by Lemma B.2. In addition, the sequences \(\{r_{n}^{(k,i)}\}\) belong to \(l^{1}\) and \(g_{n}^{(-\alpha )}\rightarrow 0\) as \(n\rightarrow \infty \); therefore, \(\sum _{j=0}^{n-k}|r_{n-k-j}^{(k,i)}|g_{j}^{(-\alpha )}\rightarrow 0\) as \(n\rightarrow \infty \). Consequently, the sums \(\sum _{j=0}^{n-k}|r_{n-k-j}^{(k,i)}|\sum _{l=0}^{j}g_{j-l}^{(-\alpha )}|s_{l,m}^{(k,i)}|\) can be bounded by \(2{\tilde{c}}_{m}^{(k,i)}M_{\alpha }^{(k,i)}\).

In the cases \(1\le k\le 3\), recalling \(|\tau _{n}^{(k,k)}|\le C_{\alpha }^{(k)}\left( \Delta t\right) ^{k+1-\alpha }\) uniformly for \(n\ge k\) in Theorem 2.1, together with (B.17), we have

$$\begin{aligned} (\Delta t)^{\alpha }\sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,k)}\right| \sum _{i=0}^{j}g_{j-i}^{(-\alpha )}\left| \tau _{i+k}^{(k,k)}\right|= & {} \displaystyle (\Delta t)^{\alpha }\sum _{j=0}^{n-k}g_{n-k-j}^{(-\alpha )}\sum _{i=0}^{j}\left| r_{j-i}^{(k,k)}||\tau _{i+k}^{(k,k)}\right| \nonumber \\\le & {} \displaystyle \left( \Delta t\right) ^{k+1}C_{\alpha }^{(k)}M_{\alpha }^{(k,k)}\sum _{j=0}^{n-k}g_{j}^{(-\alpha )} \nonumber \\\le & {} \displaystyle \left( \Delta t\right) ^{k+1}C_{\alpha }^{(k)}M_{\alpha }^{(k,k)}\big (1+C\sum _{j=1}^{n-k}\frac{j^{\alpha -1}}{\Gamma (\alpha )}\big ) \nonumber \\\le & {} \displaystyle \left( \Delta t\right) ^{k+1}C_{\alpha }^{(k)}M_{\alpha }^{(k,k)}\big (1+ \frac{C}{\Gamma (\alpha )}\int _{0}^{n-k}t^{\alpha -1}\mathrm {d}t\big ) \nonumber \\\le & {} \displaystyle {\tilde{C}}_{\alpha }^{(k,k)}\left( (\Delta t)^{k+1}+(\Delta t)^{k+1-\alpha }t_{n-k}^{\alpha }\right) . \end{aligned}$$
(4.14)

In other cases \(1\le i<k\le 3\), according to Theorem 2.1, there exist constants \(C_{\alpha }^{(k,i)}>0\), such that

$$\begin{aligned} |\tau _{n}^{(k,i)}|\le C_{\alpha }^{(k,i)}\left( (\Delta t)^{k-\alpha }\frac{(n-k)^{-\alpha -1}}{|\Gamma (-\alpha )|}+\frac{(\Delta t)^{k+1-\alpha }}{\Gamma (1-\alpha )}\right) , \quad n\ge k. \end{aligned}$$

Together with (B.17), it follows that

$$\begin{aligned} \begin{aligned} (\Delta t)^{\alpha }\sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{l=0}^{j}g_{j-l}^{(-\alpha )}\left| \tau _{l+k}^{(k,i)}\right| \le&\,C_{\alpha }^{(k,i)}\Big ((\Delta t)^{k}{\tilde{c}}_{\alpha } \sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{l=0}^{j}g_{j-l}^{(-\alpha )}\left| g_{l}^{(\alpha )}\right| \\&+(\Delta t)^{k+1}\sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| \sum _{l=0}^{n-k}g_{l}^{(-\alpha )}\Big ) \\ \le&\,C_{\alpha }^{(k,i)}\Big (2(\Delta t)^{k}{\tilde{c}}_{\alpha }\sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| g_{j}^{(-\alpha )} \\&+(\Delta t)^{k+1}\sum _{j=0}^{n-k}\left| r_{n-k-j}^{(k,i)}\right| g_{n-k}^{(-\alpha -1)}\Big )\\ \le \,&{\tilde{C}}_{\alpha }^{(k,i)}\left( (\Delta t)^{k}+(\Delta t)^{k+1-\alpha }t_{n-k}^{\alpha }\right) . \end{aligned} \end{aligned}$$
(4.15)

Therefore formula (4.11) becomes

$$\begin{aligned} \begin{aligned} \left| e_{n}^{(k,i)}\right| \le&(\Delta t)^{\alpha }L^{(k,i)}\Big (\sum _{j=0}^{n-k-1}g_{n-k-j}^{(-\alpha )}\sum _{l=0}^{j}\left| r_{j-l}^{(k,i)}||e_{l+k}^{(k,i)}\right| +g_{0}^{(-\alpha )}\sum _{l=0}^{n-k-1}\left| r_{n-k-l}^{(k,i)}||e_{l+k}^{(k,i)}\right| \\&+g_{0}^{(-\alpha )}\left| r_{0}^{(k,i)}||e_{n}^{(k,i)}\right| \Big )+\delta _{n}^{(k,i)}, \quad n\ge k. \end{aligned} \end{aligned}$$

For \(1\le i\le k\le 3\), we obtain from (4.12), (4.13), (4.14) and (4.15) that

$$\begin{aligned} \delta _{n}^{(k,k)}=C_{\alpha }^{(k,k)}\left( \sum _{m=0}^{k-1}\left| e_{m}^{(k,k)}\right| +\left( \Delta t\right) ^{(k+1)} +\left( \Delta t\right) ^{k+1-\alpha }t_{n-1}^{\alpha }\right) , \quad n\ge k \end{aligned}$$

and

$$\begin{aligned} \delta _{n}^{(k,i)}=C_{\alpha }^{(k,i)}\left( \sum _{m=0}^{k-1}|e_{m}^{(k,i)}|+\left( \Delta t\right) ^{k}+\left( \Delta t\right) ^{k+1-\alpha }t_{n-1}^{\alpha }\right) , \quad n\ge k, \end{aligned}$$

where \(C_{\alpha }^{(k,i)}=\max \{{\tilde{c}}_{k,i}M_{\alpha }^{(k,i)}, 2{\tilde{c}}_{m}^{(k,i)}M_{\alpha }^{(k,i)}, {\tilde{C}}_{\alpha }^{(k,i)} \}\). Let \(\Delta t>0\) be sufficiently small. Then there exist bounded constants \(c_{k,i}^{*}\) such that \(0<\frac{1}{1-(\Delta t)^{\alpha }L^{(k,i)}g_{0}^{(-\alpha )}|r_{0}^{(k,i)}|}\le c_{k,i}^{*}\), and

$$\begin{aligned} \left\{ \begin{aligned} |e_{k}^{(k,i)}|&\le {\tilde{\delta }}_{k}^{(k,i)}, \\ |e_{n}^{(k,i)}|&\le {\tilde{\delta }}_{n}^{(k,i)}+(\Delta t)^{\alpha }c_{k,i}^{*}L^{(k,i)}\Big (\sum _{j=0}^{n-k-1}g_{n-k-j}^{(-\alpha )}\sum _{l=0}^{j}|r_{j-l}^{(k,i)}||e_{l+k}^{(k,i)}|\\&\quad +g_{0}^{(-\alpha )}\sum _{l=0}^{n-k-1}|r_{n-k-l}^{(k,i)}||e_{l+k}^{(k,i)}|\Big ), \quad n\ge k+1, \end{aligned} \right. \end{aligned}$$
(4.16)

where \({\tilde{\delta }}_{n}^{(k,i)}=c_{k,i}^{*}\delta _{n}^{(k,i)}\). Next, let \(\{p_{n}^{(k,i)}\}_{n\ge 0}\) be non-negative sequences satisfying

$$\begin{aligned} \left\{ \begin{aligned}&p_{0}^{(k,i)}={\tilde{\delta }}_{k}^{(k,i)}, \\&p_{n}^{(k,i)}={\tilde{\delta }}_{n+k}^{(k,i)}+\frac{(\Delta t)^{\alpha }{\tilde{L}}^{(k,i)}}{\Gamma (\alpha )}\sum _{j=0}^{n-1}(n-j)^{\alpha -1}p_{j}^{(k,i)}, \quad n\ge 1, \end{aligned} \right. \end{aligned}$$
(4.17)

where \({\tilde{L}}^{(k,i)}\) are chosen such that

$$\begin{aligned} {\tilde{L}}^{(k,i)}=\max \{c_{k,i}^{*}L^{(k,i)}M_{\alpha }^{(k,i)}\left( 1+\Gamma (\alpha )g_{1}^{(-\alpha )}\right) , c_{k,i}^{*}L^{(k,i)}M_{\alpha }^{(k,i)}g_{n}^{(-\alpha )}n^{1-\alpha }\Gamma (\alpha )\}. \end{aligned}$$

Then, using the weakly singular discrete Gronwall inequality in [14], we conclude that \(\{p_{n}^{(k,i)}\}_{n\ge 1}\) is monotonically increasing with respect to n and satisfies

$$\begin{aligned} p_{n}^{(k,i)}\le {\tilde{\delta }}_{n+k}^{(k,i)}E_{\alpha } \left( {\tilde{L}}^{(k,i)}(n\Delta t)^{\alpha }\right) , \quad n\ge 1, \end{aligned}$$

where \(E_{\alpha }(\cdot )\) denote Mittag-Leffler functions. In addition, from (4.16) and (4.17), we have \(|e_{n}^{(k,i)}|\le p_{n-k}^{(k,i)}\) for \(n\ge k\), and consequently,

$$\begin{aligned} |e_{n}^{(k,i)}|\le {\tilde{\delta }}_{n}^{(k,i)}E_{\alpha } \left( {\tilde{L}}^{(k,i)}(n-k)^{\alpha }\Delta t^{\alpha }\right) \le {\tilde{\delta }}_{n}^{(k,i)}E_{\alpha }\left( {\tilde{L}}^{(k,i)} T^{\alpha }\right) \end{aligned}$$

for \(k\le n\le N\). \(\square \)

Remark 3

Note that the error estimates (4.7) and (4.8) are uniform for all \(n\ge k\). For those \(t_{n}\) away from the origin, under the conditions \(e_{m}^{(k,i)}=O((\Delta t)^{k})\) for \(1\le m\le k-1\), we can observe that the errors are \((k+1-\alpha )\)-th order accurate in time in the cases \(1\le i\le k\le 3\).

5 Numerical Experiments

In this section, we utilize (2.10) to approximate the equations in Examples 5.1 and 5.2, and prescribe starting values exactly.

Example 5.1

Consider the linear fractional ordinary differential equation

$$\begin{aligned} \left\{ \begin{aligned}&{^C}D^{\alpha }u(t)=\lambda u(t)+f(t), \quad t\in (0,1], \\&u(0)=u_{0}, \end{aligned}\right. \end{aligned}$$
(5.1)

where \(0<\alpha <1\). The exact solution is \(u(t)=e^{-t}\in C^{\infty }[0,1]\) when \( f(t)=-t^{1-\alpha }E_{1,2-\alpha }(-t)-\lambda e^{-t}\in C[0,1]\cap C^{\infty }(0,1]\), where the Mittag-Leffler functions [36] are defined by

$$\begin{aligned} E_{\alpha , \beta }(t)=\sum _{k=0}^{\infty }\frac{t^{k}}{\Gamma (\alpha k+\beta )}, \quad \alpha>0,\beta >0. \end{aligned}$$
Fig. 1 The boundary of the stability region for different \(\alpha \) and \(\lambda \): a \(\alpha =0.5\), \(\lambda =-50\); b \(\alpha =0.3\), \(\lambda =20\times e^{\frac{i\pi \alpha }{2}}\); c \(\alpha =0.9\), \(\lambda =1000\times e^{\frac{i\pi \alpha }{2}}\); d \(\alpha =0.98\), \(\lambda =500i\)

In Fig. 1a–d, we plot the truncated boundary locus curves \(\sum _{n=0}^{6000}\omega _{n}^{(k,i)}e^{\sqrt{-1}\theta n}~(0\le \theta \le 2\pi )\) for \(1\le i\le k\le 3\) and some \(\alpha \in (0,1)\). It is already known from Theorem 3.1 that the stability regions of methods (3.2) lie outside their boundary locus curves. Here, we introduce the points \(z_{n}=\lambda (\Delta t_{n})^{\alpha }, 1\le n\le 5\), where \(\Delta t_{n}=1/2^{n+6}\) denote different time steps. Tables 1 and 2 show the accuracy and convergence rates of the error \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.1, where \(t_{M}=1\) is fixed and \(M=2^{j}\) for \(7\le j\le 11\), \(u(t_{M})\) and \(u_{M}^{(k,i)}\) are the exact solution and computed solution for (5.1), respectively.
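The curves in Fig. 1 can be reproduced with a few lines of code; a sketch for the \((k,i)=(1,1)\) weights is given below (the other cases only change the weight generation), with matplotlib assumed to be available for plotting.

```python
import numpy as np
import matplotlib.pyplot as plt
from math import gamma

alpha, N = 0.5, 6000
n = np.arange(N + 1)
I = ((n + 1)**(1 - alpha) - n**(1 - alpha)) / gamma(2 - alpha)
omega = np.concatenate(([I[0]], np.diff(I)))      # omega_n^{(1,1)} = nabla I_n

theta = np.linspace(0.0, 2 * np.pi, 2000)
xi = np.exp(1j * theta)
locus = np.polyval(omega[::-1], xi)               # truncated boundary locus sum_n omega_n xi^n
plt.plot(locus.real, locus.imag)                  # the stability region lies outside this curve
plt.axis('equal')
plt.show()
```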

From Fig. 1a–d and Tables 1 and 2, we can see the influence of the stability of a numerical method on the global error. In Fig. 1a, the points \(z_{n}\) with \(1\le n\le 5\) all lie in the stability regions for \(\alpha =0.5\) and \(\lambda =-50\), and we obtain the \((k+1-\alpha )\)-th order of accuracy shown in Tables 1 and 2. In Fig. 1b, c, \(\{z_{n}\}_{n=1}^{5}\) fall on the half line with angle \(\frac{\pi \alpha }{2}\). When all \(\{z_{n}\}_{n=1}^{5}\) lie outside the instability region (cf. Fig. 1b), the global error again exhibits the \((k+1-\alpha )\)-th order of accuracy, as shown in Tables 1 and 2. On the other hand, since the points \(z_{4}\) and \(z_{5}\) lie outside the stability regions for \(k=3\) (cf. Fig. 1c), perturbation errors are magnified and accumulate significantly, which is also visible in Tables 1 and 2. In Fig. 1d, the \(\{z_{n}\}\) are chosen on the imaginary axis with purely imaginary \(\lambda \); Theorem 3.2 tells us that all \(\{z_{n}\}\) are in the stability region for \(k=1, 2\), and the corresponding errors and convergence orders are obtained (cf. Table 1).

Table 1 Errors and convergence rates of \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.1 with different \(\alpha , \lambda \)
Table 2 Errors and convergence rates of \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.1 with different \(\alpha , \lambda \)

As a counterexample, in Fig. 1d the point \(z_{3}\) does not belong to the stability region for \(\alpha =0.98\); in this case, the errors shown in Table 2 blow up. In fact, it can be observed that for \(k=3\), the methods (3.2) do not possess \(A(\frac{\pi }{2})\)-stability when \(\alpha \) tends to 1, consistent with the fact that the BDF3 method for ODEs is not \(A(\frac{\pi }{2})\)-stable.

Example 5.2

Consider the nonlinear equation

$$\begin{aligned} \left\{ \begin{aligned}&{^C}D^{\alpha }u(t)=-u^{2}+f(t), \quad t\in (0,1] \\&u(0)=u_{0}. \end{aligned}\right. \end{aligned}$$
(5.2)

The source function is prescribed by \(f(t)=\mu t^{1-\alpha }E_{1,2-\alpha }(\mu t)+e^{2\mu t}\) such that the exact solution reads \(u(t)=e^{\mu t}\).

Table 3 Errors and convergence orders of \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.2 with \(\mu =-1\)
Table 4 Errors and convergence orders of \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.2 with \(\mu =-1\)
Table 5 Errors and convergence orders of \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.2 with \(\mu =\sqrt{-1}\)
Table 6 Errors and convergence orders of \(|u(t_{M})-u_{M}^{(k,i)}|\) for Example 5.2 with \(\mu =\sqrt{-1}\)

We use (2.17) in combination with Newton’s method for solving the nonlinear equation (5.2). Tables 3, 4, 5 and 6 show the global error \(|e_{M}^{(k,i)}|=|u(t_{M})-u_{M}^{(k,i)}|\) and orders of accuracy for Example 5.2 with different \(\mu \) and \(\alpha \), where \(t_{M}=1\) is fixed and \(\Delta t=1/M\) with \(M=2^{j},~5\le j\le 9\). Further, it is observed that \(|e_{M}^{(k,i)}|=O(\Delta t^{k+1-\alpha })\) for \(1\le i\le k\le 3\).
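For completeness, a minimal sketch of one way to implement the \((k,i)=(1,1)\) scheme for (5.2) with \(\mu =-1\) is given below: the starting value \(u_{0}=u(0)=1\) is exact, the Mittag-Leffler function is evaluated by its truncated series, and each implicit step is solved by a scalar Newton iteration. The parameter choices are illustrative only.

```python
import numpy as np
from math import gamma, exp

def ml_1b(z, beta, K=60):
    """Truncated series of the Mittag-Leffler function E_{1,beta}(z)."""
    return sum(z**k / gamma(k + beta) for k in range(K))

alpha, mu, T, N = 0.5, -1.0, 1.0, 256
dt = T / N
t = np.linspace(0.0, T, N + 1)

# (k,i) = (1,1) weights: I_n = ((n+1)^(1-a) - n^(1-a)) / Gamma(2-a), omega_n = nabla I_n, w_{n,0} = -I_n
idx = np.arange(N + 1)
I = ((idx + 1)**(1 - alpha) - idx**(1 - alpha)) / gamma(2 - alpha)
omega = np.concatenate(([I[0]], np.diff(I)))

f = mu * t**(1 - alpha) * np.array([ml_1b(mu * s, 2 - alpha) for s in t]) + np.exp(2 * mu * t)
u = np.empty(N + 1)
u[0] = 1.0                                   # exact starting value u(0) = e^0
for n in range(1, N + 1):
    hist = dt**(-alpha) * (-I[n] * u[0] + np.dot(omega[1:n + 1][::-1], u[:n]))
    a = dt**(-alpha) * omega[0]
    un = u[n - 1]                            # Newton iteration for a*u + hist + u^2 - f(t_n) = 0
    for _ in range(20):
        un -= (a * un + hist + un**2 - f[n]) / (a + 2 * un)
    u[n] = un

print(abs(u[-1] - exp(mu * T)))              # error at t = 1; expected to behave like dt^(2 - alpha)
```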

6 Conclusions

We have proposed a class of new high-order approximations for solving time-fractional initial value problems of order \(0<\alpha <1\). Local truncation error estimates for smooth solutions are presented, and the stability and convergence of these numerical methods are analysed in detail. This will promote further investigation of the proposed methods for solving time-fractional partial differential equations.