1 Introduction

This paper deals with high-order numerical methods for approximating the following semilinear fractional differential equation of order \(\alpha \in (1,2)\):

$$\begin{aligned}&_0^C D_{t}^{\alpha } y(t) = \beta y(t) + f(y(t)), \quad 0 \le t \le T, \nonumber \\&y(0) = y_{0}, \quad y^{\prime } (0) = y_{0}^{1}, \end{aligned}$$
(1)

where \(\beta < 0\), the time \(T>0\), the initial values \(y_{0}, y_{0}^{1} \in {\mathbb {R}}\), and f is a nonlinear function satisfying the following uniform Lipschitz condition: there exists a positive constant L such that, for all \(s_1,s_2 \in {\mathbb {R}}\),

$$\begin{aligned} |f(s_1)-f(s_2)| \le L |s_1-s_2|. \end{aligned}$$
(2)

Moreover, \(y^{\prime } (0)\) denotes the derivative of y at \(t=0\), and the Caputo derivative \( _0^C D_{t}^{\alpha } y(t)\) is defined, see [15, 23], for \(\alpha \in (1, 2)\) by

$$\begin{aligned} _0^C D_{t}^{\alpha } y (t)=\frac{1}{\varGamma (2-\alpha )} \int _{0}^{t} (t-s)^{1-\alpha } y''(s) \, ds, \end{aligned}$$
(3)

where \( y''(s)\) denotes the second derivative of y and \(\varGamma \) denotes the Gamma function.
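The definition (3) can be checked numerically on a monomial. The following sketch (with our own variable names; not part of the paper) verifies the known value \(_0^C D_{t}^{\alpha } t^{3} = \frac{\varGamma (4)}{\varGamma (4-\alpha )} t^{3-\alpha }\) by quadrature, after the substitution \(u=(t-s)^{2-\alpha }\), which removes the integrable singularity at \(s=t\):

```python
import math

# Check of definition (3) for y(t) = t^3, so y''(s) = 6 s.
# Substituting v = t - s and then u = v^{2-alpha} gives
#   int_0^t (t-s)^{1-alpha} y''(s) ds
#     = (1/(2-alpha)) int_0^{t^{2-alpha}} y''(t - u^{1/(2-alpha)}) du,
# a smooth integrand suitable for the midpoint rule.
alpha, t = 1.5, 0.8
m = 4000                       # midpoint-rule panels
b = t**(2 - alpha)
h = b/m
integral = sum(6*(t - (h*(i + 0.5))**(1/(2 - alpha)))
               for i in range(m))*h/(2 - alpha)
approx = integral/math.gamma(2 - alpha)

exact = math.gamma(4)/math.gamma(4 - alpha)*t**(3 - alpha)
assert abs(approx - exact) < 1e-6
```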

Using the relation between the Caputo and Riemann–Liouville fractional derivatives, Eq. (1) can be rewritten, see [3], as

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(y(t)), \quad 0 \le t \le T. \end{aligned}$$
(4)

The aim of this paper is to construct a high-order numerical method for approximating (4). In order to achieve this, we first construct a high-order scheme to approximate the Riemann–Liouville fractional derivative \( _0^R D_{t}^{\alpha } y(t), \, \alpha \in (1,2).\) Recall that the Riemann–Liouville fractional derivative can be expressed as a Hadamard finite-part integral. Approximating the Hadamard finite-part integral with the quadratic interpolation polynomials, we obtain a scheme to approximate the Riemann–Liouville fractional derivative. We shall show in Sect. 2 that the error has the asymptotic expansion \( \big ( d_{3} \tau ^{3- \alpha } + d_{4} \tau ^{4-\alpha } + d_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} \tau ^{4} + d_{3}^{*} \tau ^{6} + d_{4}^{*} \tau ^{8} + \cdots \big ) \) at any fixed time \( T= N \tau = t_{N}\) for some suitable constants \(d_{i}, i=3, 4,\ldots \) and \(d_{i}^{*}, i=2, 3,\ldots \), where \(\tau \) denotes the step size. Applying this scheme, a numerical method for the fractional differential equation (4) is derived in Sect. 3 and the corresponding error is also shown to have a similar asymptotic expansion.

The Hadamard finite-part integral approach was proposed and analyzed in 1997 by Diethelm [2] for linear fractional differential equations of order \(\alpha \in (0, 1)\), where the author proposed a numerical scheme to approximate the Riemann–Liouville fractional derivative by approximating the Hadamard finite-part integral with piecewise linear interpolation polynomials. The convergence order of the scheme is \(O(\tau ^{2- \alpha }), \alpha \in (0, 1)\). One obtains the L1 scheme when the scheme in [2] is applied to approximate the Caputo fractional derivative of order \(\alpha \in (0, 1)\). Diethelm and Walz [5] showed that the error of the L1 scheme obtained in [2] for approximating the Caputo fractional derivative of order \(\alpha \in (0, 1)\) has an asymptotic expansion, which can be applied to linear fractional differential equations to derive higher convergence orders by extrapolation, see also [19, 30]. Dimitrov [6, 7] also considered the asymptotic expansion of the L1 scheme for approximating the Caputo fractional derivative of order \(\alpha \in (0, 1)\), using a different approach from [5], see also [10, 24]. For other numerical methods for fractional differential equations, we refer to [1, 11,12,13,14, 16,17,18, 20, 21, 26,27,28,29, 31] and the references therein.

To the best of our knowledge, there are hardly any numerical schemes in the literature for approximating the Caputo fractional derivative and fractional differential equations of order \( \alpha \in (1,2) \) by using the Hadamard finite-part integral approach. Therefore, an attempt is made in this paper to develop and analyse approximation schemes for the Caputo fractional derivative and fractional differential equations of order \(\alpha \in (1, 2)\) by using the Hadamard finite-part integral approach.

The main contributions of this paper are as follows:

  1.

    A new high-order scheme for approximating the Riemann–Liouville fractional derivative of order \( \alpha \in (1,2)\) is introduced based on the Hadamard finite-part integral. It is observed that the error has the asymptotic expansion:

    $$\begin{aligned} \big ( d_{3} \tau ^{3- \alpha } + d_{4} \tau ^{4-\alpha } + d_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} \tau ^{4} + d_{3}^{*} \tau ^{6} + d_{4}^{*} \tau ^{8} + \cdots \big ) \end{aligned}$$
    (5)

    for some suitable constants \(d_{i}, i=3, 4,\ldots \) and \(d_{i}^{*}, i=2, 3,\ldots \), where \(\tau \) denotes the step size.

  2.

    We examine a finite difference method based on the previously mentioned scheme for solving semilinear fractional differential equations of order \(\alpha \in (1,2)\) and prove that the error of this method admits a similar asymptotic expansion.

  3.

    The properties of the weights utilized in the approximation scheme for the Riemann–Liouville fractional derivative are thoroughly investigated. Based on these properties, we establish rigorous error estimates with an order of \(O(\tau ^{3-\alpha })\), where \(\alpha \in (1, 2)\), for semilinear fractional differential equations.

  4.

    By employing Richardson extrapolation in conjunction with the asymptotic expansion of the error, we devise higher-order schemes for approximating semilinear fractional differential equations. The convergence rates of these schemes are \(O(\tau ^{3-\alpha })\), \(O(\tau ^{4-\alpha })\), \(O(\tau ^{5-\alpha })\), \(O(\tau ^{4})\), and so on, enhancing the accuracy of the approximations.
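The extrapolation mechanism in item 4 can be illustrated on a synthetic error model of the form (5); the constants below are illustrative only, not taken from the paper:

```python
import math

# Suppose a method produces A(tau) = A + c3*tau^{3-alpha} + c4*tau^{4-alpha}.
# Combining A(tau/2) and A(tau) cancels the leading tau^{3-alpha} term.
alpha = 1.5
A_exact, c3, c4 = 1.0, 2.0, -3.0
A = lambda tau: A_exact + c3*tau**(3 - alpha) + c4*tau**(4 - alpha)

def extrapolate(tau):
    r = 2**(3 - alpha)
    return (r*A(tau/2) - A(tau))/(r - 1)

e1, e2 = abs(A(0.01) - A_exact), abs(A(0.005) - A_exact)
E1, E2 = abs(extrapolate(0.01) - A_exact), abs(extrapolate(0.005) - A_exact)
order_plain = math.log2(e1/e2)    # about 3 - alpha = 1.5
order_extrap = math.log2(E1/E2)   # about 4 - alpha = 2.5
assert abs(order_plain - 1.5) < 0.05
assert abs(order_extrap - 2.5) < 0.05
```

The same combination applies recursively to cancel \(\tau^{4-\alpha}\), \(\tau^{5-\alpha}\), and so on.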

The paper is organized as follows. Section 2 deals with high-order approximation of the Riemann–Liouville fractional derivative of order \(\alpha \in (1,2)\) based on the Hadamard finite-part integral. Section 3 focuses on high-order approximations of linear fractional differential equations of order \(\alpha \in (1,2)\). Section 4 is on the numerical approximation of a semilinear fractional differential equation of order \(\alpha \in (1,2)\). Numerical tests are carried out to confirm our theoretical findings in Sect. 5.

By \(C, c_{l}, c^{*}_{l}, d_{l}, d^{*}_{l}, l \in {\mathbb {Z}}^{+}\), we denote some positive constants independent of the step size \(\tau \), but not necessarily the same at different occurrences.

2 New high-order scheme for the Riemann–Liouville fractional derivative of order \(\alpha \in (1, 2)\)

In this section, based on the Hadamard finite-part integral approximation, we introduce a new high-order scheme for approximating the Riemann–Liouville fractional derivative of order \(\alpha \in (1, 2)\) and prove that the error has an asymptotic expansion.

2.1 The scheme

It is well known that the Riemann–Liouville fractional derivative and the Hadamard finite-part integral are related as follows, see, e.g., Elliott [9, Theorem 2.1], with \(\alpha \in (1,2)\),

$$\begin{aligned} _0^R D_{t}^{\alpha } f(t) = \frac{t^{-\alpha }}{\varGamma (-\alpha )} =\!\!\!\!\!\! \int _{0}^{1} w^{-\alpha -1} f(t-tw) \, dw, \end{aligned}$$

where the integral \( =\!\!\!\!\!\! \int _{0}^{1} \) must be interpreted as a Hadamard finite-part integral, see Diethelm [3, p. 233].

Let \(0 = t_{0}< t_{1}< \cdots < t_{N} =T\) be a partition of [0, T] and \(\tau = \frac{T}{N}\) the step size. At \(t=t_{n}, \, n=1, 2,\ldots , N\), we may write

$$\begin{aligned} _0^R D_{t}^{\alpha } f(t_{n}) = \frac{t_{n}^{-\alpha }}{\varGamma (-\alpha )} =\!\!\!\!\!\! \int _{0}^{1} w^{-\alpha -1} f(t_{n}-t_{n}w) \, dw. \end{aligned}$$

For any fixed \( n=1, 2,\ldots , N\), we denote \(g(w)=f(t_{n}-t_{n}w)\). When \( n \ge 2\), we approximate g(w) by the following piecewise quadratic interpolation polynomial \(g_{2}(w)\) defined on the nodes \(w_{l}=\frac{l}{n}, l=0,1,2,\ldots , n, \, n \ge 2\):

$$\begin{aligned} g_{2}(w)&= \frac{(w-w_{1})(w-w_{2})}{(w_{0}-w_{1})(w_{0}-w_{2})}g(w_{0}) +\frac{(w-w_{0})(w-w_{2})}{(w_{1}-w_{0})(w_{1}-w_{2})}g(w_{1}) \\&\quad +\frac{(w-w_{0})(w-w_{1})}{(w_{2}-w_{0})(w_{2}-w_{1})}g(w_{2}), \quad w\in [w_{0}, w_{1}], \end{aligned}$$

and, for \( k=2,3,\ldots ,n, \)

$$\begin{aligned} g_{2}(w)&=\frac{(w-w_{k-1})(w-w_{k})}{(w_{k-2}-w_{k-1})(w_{k-2}-w_{k})}g(w_{k-2}) \nonumber \\&\quad +\frac{(w-w_{k-2})(w-w_{k})}{(w_{k-1}-w_{k-2})(w_{k-1}-w_{k})}g(w_{k-1}) \nonumber \\&\quad +\frac{(w-w_{k-2})(w-w_{k-1})}{(w_{k}-w_{k-2})(w_{k}-w_{k-1})}g(w_{k}), \quad w\in [w_{k-1}, w_{k}], \end{aligned}$$
(6)

We remark that on the first interval \([w_{0}, w_{1}]\), the piecewise quadratic interpolation polynomial \(g_{2}(w)\) is defined on the nodes \(w_{0}, w_{1}, w_{2}\), while on the remaining intervals \([w_{k-1}, w_{k}], k=2, 3,\ldots , n\), it is defined on the nodes \(w_{k-2}, w_{k-1}, w_{k}\). We also remark that when \(n=1\), it is not possible to approximate \(g(w) = f(t_{n}- t_{n} w)\) by a quadratic interpolation polynomial, since we only have the two nodes \(w_{l}= \frac{l}{n}, l=0, 1\) in this case.
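To make (6) concrete, here is a small sketch (the helper name `g2` is ours, not the paper's) that evaluates the piecewise quadratic interpolant and checks that it reproduces quadratic polynomials exactly, the property behind the exactness of the resulting quadrature weights:

```python
import math

def g2(g, n, w):
    # Piecewise quadratic interpolant of (6): the interval [w_{k-1}, w_k]
    # uses the nodes w_{k-2}, w_{k-1}, w_k; the first interval [w_0, w_1]
    # shares the nodes w_0, w_1, w_2 with the second one.
    k = max(2, math.ceil(w*n))
    nodes = [(k - 2)/n, (k - 1)/n, k/n]
    vals = [g(x) for x in nodes]
    s = 0.0
    for i in range(3):                 # Lagrange form
        Li = 1.0
        for j in range(3):
            if j != i:
                Li *= (w - nodes[j])/(nodes[i] - nodes[j])
        s += vals[i]*Li
    return s

# exactness on quadratics: g_2 reproduces any quadratic g identically
g = lambda x: 3*x*x - 2*x + 1
for w in [0.0, 0.05, 0.31, 0.77, 1.0]:
    assert abs(g2(g, 5, w) - g(w)) < 1e-12
```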

The following lemma gives a new approximation scheme of the Riemann–Liouville fractional derivative.

Lemma 1

Let \(0=t_{0}< t_{1}< \cdots < t_{N}=T\) with \( N \ge 2\) be a partition of [0, T] and \(\tau = \frac{T}{N}\) the step size. Let \(\alpha \in (1, 2)\). Then, with \(n \ge 2\),

$$\begin{aligned} _0^R D_{t}^{\alpha }f(t_{n})&=\tau ^{-\alpha }\sum _{k=0}^{n}w_{kn}f(t_{n-k})+O(\tau ^{3-\alpha }), \end{aligned}$$
(7)

where the weights \(w_{kn}, k=0,1,2,\ldots , n\) and \(n=2,3,\ldots ,N\) are determined as follows.

For \(n=2,\)

$$\begin{aligned} \varGamma (3-\alpha )w_{kn}= {\left\{ \begin{array}{ll} 2-\frac{1}{2}\alpha +\frac{1}{2}E(2), &{}\;\quad k=0, \\ (-\alpha )(3-\alpha )-F(2), &{} \quad \ k=1, \\ \frac{1}{2}\alpha +\frac{1}{2}G(2), &{}\;\quad k=2. \end{array}\right. } \end{aligned}$$
(8)

For \(n=3,\)

$$\begin{aligned} \varGamma (3-\alpha )w_{kn}= {\left\{ \begin{array}{ll} 2-\frac{1}{2}\alpha +\frac{1}{2}E(2), &{}\;\quad k=0, \\ (-\alpha )(3-\alpha )+\frac{1}{2}E(3)-F(2), &{}\quad \ k=1, \\ \frac{1}{2}\alpha -F(3)+\frac{1}{2}G(2), &{}\;\quad k=2, \\ \frac{1}{2}G(3), &{}\quad \ k=3. \end{array}\right. } \end{aligned}$$
(9)

For \(n=4,\)

$$\begin{aligned} \varGamma (3-\alpha )w_{kn}= {\left\{ \begin{array}{ll} 2-\frac{1}{2}\alpha +\frac{1}{2}E(2), &{}\; \quad k=0, \\ (-\alpha )(3-\alpha )+\frac{1}{2}E(3)-F(2), &{}\; \quad k=1, \\ \frac{1}{2}\alpha +\frac{1}{2}E(4)-F(3)+\frac{1}{2}G(2), &{}\; \quad k=2, \\ -F(4) + \frac{1}{2} G(3), &{}\; \quad k=3, \\ \frac{1}{2}G(4), &{}\; \quad k=4. \end{array}\right. } \end{aligned}$$
(10)

For \(n\ge 5,\)

$$\begin{aligned} \varGamma (3-\alpha )w_{kn}= {\left\{ \begin{array}{ll} 2-\frac{1}{2}\alpha +\frac{1}{2}E(2), &{}\; \quad k=0, \\ (-\alpha )(3-\alpha )+\frac{1}{2}E(3)-F(2), &{}\; \quad k=1, \\ \frac{1}{2}\alpha +\frac{1}{2}E(4)-F(3)+\frac{1}{2}G(2), &{}\; \quad k=2, \\ \frac{1}{2}E(k+2)-F(k+1)+\frac{1}{2}G(k), &{}\; \quad k=3,4,\ldots , n-2, \\ -F(n)+\frac{1}{2}G(n-1), &{}\; \quad k=n-1, \\ \frac{1}{2}G(n), &{}\; \quad k=n. \end{array}\right. } \end{aligned}$$
(11)

Here \(E(\cdot ), F(\cdot )\) and \(G(\cdot )\) are defined by (17), (18) and (19), respectively.
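The weights (8)–(11) can be assembled interval by interval. The following sketch (function names are ours) builds them and verifies the scheme (7) on \(f(t)=t^{2}\), for which the quadratic interpolation is exact, the remainder vanishes, and \(_0^R D_{t}^{\alpha } t^{2} = 2t^{2-\alpha }/\varGamma (3-\alpha )\):

```python
import math

def rl_weights(alpha, n):
    # Weights w_{kn} of (8)-(11): the first interval [w_0, w_1] contributes
    # (2 - alpha/2, (-alpha)(3-alpha), alpha/2) to the nodes w_0, w_1, w_2,
    # and each interval [w_{k-1}, w_k], k >= 2, contributes E(k)/2, -F(k),
    # G(k)/2 to the nodes w_{k-2}, w_{k-1}, w_k (in Gamma(3-alpha) units).
    assert 1 < alpha < 2 and n >= 2
    E = lambda k: (2*k**(2 - alpha) - (2 - alpha)*k**(1 - alpha)
                   - 2*(k - 1)**(2 - alpha) - (2 - alpha)*(k - 1)**(1 - alpha))
    F = lambda k: (2*k**(2 - alpha) - 2*(2 - alpha)*k**(1 - alpha)
                   - 2*(k - 1)**(2 - alpha) + (1 - alpha)*(2 - alpha)*(k - 1)**(-alpha))
    G = lambda k: (2*k**(2 - alpha) - 3*(2 - alpha)*k**(1 - alpha)
                   - 2*(k - 1)**(2 - alpha) + (2 - alpha)*(k - 1)**(1 - alpha)
                   + 2*(1 - alpha)*(2 - alpha)*k**(-alpha))
    w = [0.0]*(n + 1)
    w[0], w[1], w[2] = 2 - alpha/2, (-alpha)*(3 - alpha), alpha/2
    for k in range(2, n + 1):
        w[k - 2] += E(k)/2
        w[k - 1] -= F(k)
        w[k] += G(k)/2
    return [x/math.gamma(3 - alpha) for x in w]

def rl_derivative(f, alpha, t, n):
    # tau^{-alpha} sum_k w_{kn} f(t_{n-k}), the right-hand side of (7)
    tau = t/n
    w = rl_weights(alpha, n)
    return tau**(-alpha)*sum(w[k]*f(t - k*tau) for k in range(n + 1))

# The scheme is exact for quadratic f (the interpolation error vanishes):
for alpha, n in [(1.5, 2), (1.3, 7), (1.9, 12)]:
    exact = 2*1.0**(2 - alpha)/math.gamma(3 - alpha)
    assert abs(rl_derivative(lambda t: t*t, alpha, 1.0, n) - exact) < 1e-8
```

For non-quadratic f, the remainder behaves as in (5), giving the order \(O(\tau^{3-\alpha})\).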

Proof

We first write

$$\begin{aligned} =\!\!\!\!\!\! \int _{0}^{1} w^{-\alpha -1} g(w) \, dw \; = \; =\!\!\!\!\!\! \int _{0}^{1} w^{-\alpha -1} g_{2}(w) \, dw + R_{n}(g), \end{aligned}$$
(12)

where \(R_{n}(g)\) denotes the remainder term. By Diethelm [4, Theorem 2.4], one obtains

$$\begin{aligned} |R_{n}(g)| \le C n^{\alpha -3} \sup _{0 \le w \le 1} |g'''(w)|. \end{aligned}$$
(13)

Note that

$$\begin{aligned} =\!\!\!\!\!\! \int _{0}^{1} w^{-\alpha -1} g_{2}(w) \, dw \; = \; =\!\!\!\!\!\! \int _{0}^{w_{1}} w^{-\alpha -1} g_{2}(w) \, dw + \sum _{k=2}^{n} \int _{w_{k-1}}^{w_{k}} w^{-\alpha -1} g_{2}(w) \, dw. \end{aligned}$$
(14)

By the definition of the Hadamard finite-part integral and with \(K(\alpha )=(-\alpha )(1-\alpha )(2-\alpha ) >0\) for \(\alpha \in (1,2)\), we obtain, see Elliott [9] and Diethelm [3, p. 233],

$$\begin{aligned} =\!\!\!\!\!\! \int _{0}^{w_{1}} w^{-\alpha -1} g_{2}(w) \, dw = \frac{n^{\alpha }}{K(\alpha )} \Big [ \Big ( 2-\frac{1}{2}\alpha \Big ) g(w_{0}) + (-\alpha )(3-\alpha ) g(w_{1}) + \frac{1}{2}\alpha \, g(w_{2}) \Big ]. \end{aligned}$$
(15)

For the second term of the right hand side in (14), we arrive with \(k=2, 3,\ldots ,n\) at

$$\begin{aligned} \int _{w_{k-1}}^{w_{k}} w^{-\alpha -1}g_{2}(w)\;d w&= \frac{n^{2}}{2}g(w_{k-2})\int _{w_{k-1}}^{w_{k}}w^{-\alpha -1}(w-w_{k-1})(w-w_{k})dw \nonumber \\&\quad -n^{2}g(w_{k-1})\int _{w_{k-1}}^{w_{k}}w^{-\alpha -1}(w-w_{k-2})(w-w_{k})dw \nonumber \\&\quad +\frac{n^{2}}{2}g(w_{k})\int _{w_{k-1}}^{w_{k}}w^{-\alpha -1}(w-w_{k-2})(w-w_{k-1})dw \nonumber \\&= P(k)g(w_{k-2})+Q(k)g(w_{k-1})+R(k)g(w_{k}). \end{aligned}$$
(16)

Here, with \(k=2, 3,\ldots ,n\),

$$\begin{aligned} P(k)&= \frac{n^{2}}{2}\int _{w_{k-1}}^{w_{k}} \big ( w^{1-\alpha } -(w_{k-1}+w_{k})w^{-\alpha }+w_{k-1}w_{k}w^{-\alpha -1} \big ) \, dw \\&= \frac{n^{2}}{2}\left( \frac{1}{2-\alpha }(w_{k}^{2-\alpha } -w_{k-1}^{2-\alpha })-\frac{w_{k-1}+w_{k}}{1-\alpha }(w_{k}^{1-\alpha }-w_{k-1}^{1-\alpha }) +\frac{w_{k-1}w_{k}}{-\alpha }(w_{k}^{-\alpha }-w_{k-1}^{-\alpha })\right) \\&=\frac{1}{2\;K(\alpha )\;n^{-\alpha }} \big ( 2k^{-\alpha +2}-(-\alpha +2)k^{-\alpha +1} -2(k-1)^{-\alpha +2}-(-\alpha +2)(k-1)^{-\alpha +1} \big ) \\&= \frac{1}{2\;K(\alpha ) n^{-\alpha }}\;E(k), \end{aligned}$$

where

$$\begin{aligned} E(k)=2k^{-\alpha +2}-(-\alpha +2)k^{-\alpha +1}-2(k-1)^{-\alpha +2} -(-\alpha +2)(k-1)^{-\alpha +1}.\qquad \end{aligned}$$
(17)

Similarly, it follows for \(k=2, 3,\ldots ,n\) that

$$\begin{aligned} Q(k)&=-n^{2}\int _{w_{k-1}}^{w_{k}}(w^{1-\alpha }-(w_{k-2} +w_{k})w^{-\alpha }+w_{k-2}w_{k}w^{-\alpha -1}) \, dw \\&=-\frac{1}{K(\alpha )\;n^{-\alpha }}F(k), \end{aligned}$$

and

$$\begin{aligned} R(k)&=\frac{n^{2}}{2}\int _{w_{k-1}}^{w_{k}}(w^{1-\alpha }-(w_{k-2} +w_{k-1})w^{-\alpha }+w_{k-2}w_{k-1}w^{-\alpha -1}) \, dw \\&=\frac{1}{2\;K(\alpha )\;n^{-\alpha }}G(k), \end{aligned}$$

where

$$\begin{aligned} F(k)=2k^{-\alpha +2}-2(-\alpha +2)k^{-\alpha +1} -2(k-1)^{-\alpha +2}+(-\alpha +1)(-\alpha +2)(k-1)^{-\alpha },\nonumber \\ \end{aligned}$$
(18)

and

$$\begin{aligned} G(k)&= 2k^{-\alpha +2}-3(-\alpha +2)k^{-\alpha +1}-2(k-1)^{-\alpha +2} \nonumber \\&\quad +(-\alpha +2)(k-1)^{-\alpha +1}+2(-\alpha +1)(-\alpha +2)k^{-\alpha }. \end{aligned}$$
(19)

Combining (15) with (16), we obtain from (14),

$$\begin{aligned} =\!\!\!\!\!\! \int _{0}^{1} w^{-\alpha -1} g_{2}(w) \, dw = \sum _{k=0}^{n} \alpha _{kn} \, g(w_{k}), \end{aligned}$$
(20)

for some suitable weights \(\alpha _{kn}\). Thus, we obtain

$$\begin{aligned} _0^R D_{t}^{\alpha } f(t_{n}) = \frac{t_{n}^{-\alpha }}{\varGamma (-\alpha )} \Big ( \sum _{k=0}^{n} \alpha _{kn} \, g(w_{k}) + R_{n}(g) \Big ), \end{aligned}$$
(21)

where \(R_{n}(g)\) is introduced in (12). By (13) and with \(g(w) = f(t_{n} - t_{n} w)\), we write (21) in the following form,

$$\begin{aligned} _0^R D_{t}^{\alpha } f(t_{n}) = \tau ^{-\alpha } \sum _{k=0}^{n} w_{kn} f(t_{n-k}) + \frac{t_{n}^{-\alpha }}{\varGamma (-\alpha )} R_{n}(g), \end{aligned}$$

where \(w_{kn}\) are defined by (8)–(11) and

$$\begin{aligned} \varGamma (3-\alpha )w_{kn}=K(\alpha )\;n^{-\alpha }\alpha _{kn}. \end{aligned}$$

This completes the proof. \(\square \)
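As a sanity check on the closed forms (17)–(19), the following sketch (our own helper names) compares E(k), F(k) and G(k) against direct midpoint quadrature of the integrals P(k), Q(k) and R(k) in (16), using the relations \(P(k)= n^{\alpha }E(k)/(2K(\alpha ))\), \(Q(k)=-n^{\alpha }F(k)/K(\alpha )\) and \(R(k)= n^{\alpha }G(k)/(2K(\alpha ))\):

```python
import math

alpha, n, k = 1.4, 6, 3
K = (-alpha)*(1 - alpha)*(2 - alpha)
w = lambda i: i/n

def midpoint(f, a, b, m=20000):
    # composite midpoint rule (the integrand is smooth away from w = 0)
    h = (b - a)/m
    return sum(f(a + h*(i + 0.5)) for i in range(m))*h

a, b = w(k - 1), w(k)
P = n*n/2*midpoint(lambda x: x**(-alpha - 1)*(x - w(k - 1))*(x - w(k)), a, b)
Q = -n*n*midpoint(lambda x: x**(-alpha - 1)*(x - w(k - 2))*(x - w(k)), a, b)
R = n*n/2*midpoint(lambda x: x**(-alpha - 1)*(x - w(k - 2))*(x - w(k - 1)), a, b)

E = (2*k**(2 - alpha) - (2 - alpha)*k**(1 - alpha)
     - 2*(k - 1)**(2 - alpha) - (2 - alpha)*(k - 1)**(1 - alpha))
F = (2*k**(2 - alpha) - 2*(2 - alpha)*k**(1 - alpha)
     - 2*(k - 1)**(2 - alpha) + (1 - alpha)*(2 - alpha)*(k - 1)**(-alpha))
G = (2*k**(2 - alpha) - 3*(2 - alpha)*k**(1 - alpha)
     - 2*(k - 1)**(2 - alpha) + (2 - alpha)*(k - 1)**(1 - alpha)
     + 2*(1 - alpha)*(2 - alpha)*k**(-alpha))

assert abs(P - n**alpha*E/(2*K)) < 1e-7
assert abs(Q + n**alpha*F/K) < 1e-7
assert abs(R - n**alpha*G/(2*K)) < 1e-7
```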

Remark 1

Using the following relationship between Caputo and Riemann–Liouville fractional derivatives for \(\alpha \in (1, 2)\),

$$\begin{aligned} _0^C D_{t}^{\alpha } f(t) =\, _0^R D_{t}^{\alpha } f(t)-\frac{t^{-\alpha }}{\varGamma (1-\alpha )}f(0)-\frac{t^{1-\alpha }}{\varGamma (2- \alpha )}f'(0), \end{aligned}$$

we obtain the following approximation scheme of the Caputo fractional derivative at \(t = t_{n}, \, n=2, 3,\ldots , N\),

$$\begin{aligned} _0^C D_{t}^{\alpha } f(t_{n})= \tau ^{-\alpha }\sum _{k=0}^{n}w_{kn}f(t_{n-k}) -\frac{t_{n}^{-\alpha }}{\varGamma (1-\alpha )}f(0)-\frac{t_{n}^{1-\alpha }}{\varGamma (2- \alpha )}f'(0)+ O(\tau ^{3-\alpha }).\nonumber \\ \end{aligned}$$
(22)
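The relation used in Remark 1 can be checked on monomials, for which \(_0^R D_{t}^{\alpha } t^{m} = \frac{\varGamma (m+1)}{\varGamma (m+1-\alpha )} t^{m-\alpha }\); the two correction terms remove exactly the contributions of the constant and linear parts of f. A minimal sketch (our own variable names):

```python
import math

# Caputo <-> Riemann-Liouville relation for f(t) = 1 + t + t^2,
# so f(0) = 1 and f'(0) = 1.
alpha, t = 1.5, 0.7
rl = lambda m: math.gamma(m + 1)/math.gamma(m + 1 - alpha)*t**(m - alpha)

rl_f = rl(0) + rl(1) + rl(2)                 # RL derivative of f
caputo_f = (rl_f - t**(-alpha)/math.gamma(1 - alpha)*1.0
            - t**(1 - alpha)/math.gamma(2 - alpha)*1.0)

# the corrections cancel the constant and linear parts exactly, so the
# Caputo derivative equals the RL derivative of t^2 alone
assert abs(caputo_f - 2*t**(2 - alpha)/math.gamma(3 - alpha)) < 1e-12
```

Note that `math.gamma` accepts the negative non-integer argument \(1-\alpha \in (-1,0)\) arising here.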

2.2 The error formula for the approximation scheme (21)

This subsection is on the asymptotic expansion of the error for the approximation of the Riemann–Liouville fractional derivative defined in (21).

Theorem 1

Let \(0=t_{0}< t_{1}< \cdots < t_{N}=T\) with \( N \ge 2\) be a partition of [0, T] and \(\tau = \frac{T}{N}\) the step size. Let \(\alpha \in (1, 2)\) and let g be sufficiently smooth on [0, 1]. With \(R_{n} (g)\) the remainder term in (21), the following expansion holds for \(n=2, 3,\ldots , N\)

$$\begin{aligned} R_{n}(g)&= \big ( d_{3} n^{\alpha -3} + d_{4} n^{\alpha -4} + d_{5} n^{\alpha -5} + \cdots \big ) + \big ( d_{2}^{*} n^{-4} + d_{3}^{*} n^{-6} + d_{4}^{*} n^{-8} + \cdots \big ), \end{aligned}$$
(23)

for some suitable coefficients \(d_{l}, \, l=3, 4 \ldots \) and \(d_{l}^{*}, \, l=2, 3 \ldots ,\) which are independent of n.

Our proof of the above theorem is influenced by [5, Theorem 1.3], where piecewise linear interpolation polynomials are used for the approximation of the Hadamard finite-part integral with \(\alpha \in (0,1)\), and by [30, Lemma 2.3], where quadratic interpolation polynomials are applied to the approximation of the Hadamard finite-part integral with \(\alpha \in (0,1)\).

Proof

Let \(n \ge 2\) be fixed and let \(0=w_{0}< w_{1}< w_{2}< \cdots < w_{n}=1\), \(w_{k}= k/n, k=0, 1, 2,\ldots , n\) be a partition of [0, 1] with step size \(h=1/n, n \ge 2.\) Let \(g_{2}(w)\) denote the piecewise quadratic interpolation polynomials defined by (6) on \([w_{l}, w_{l+1}], l=0, 1, 2,\ldots , n-1\) with \(n \ge 2\). Then, it follows that

$$\begin{aligned} R_{n} (g)&= \int _{0}^{1} w^{-\alpha -1} g (w) \, d w - \int _{0}^{1} w^{-\alpha -1 } g_{2}(w) \, dw \\&= \int _{w_{0}}^{w_{1}} w^{-\alpha -1} \big ( g (w) - g_{2}(w) \big ) \, dw + \sum _{l=1}^{n-1} \int _{w_{l}}^{w_{l+1}} w^{-\alpha -1} \big ( g (w) - g_{2}(w) \big ) \, dw \\&= I_{1} + I_{2}. \end{aligned}$$

For \(I_{1}\), there holds

$$\begin{aligned} I_{1} = \int _{0}^{1} (w_{0} + h s)^{-\alpha -1}&\Big [ g(w_{0}+ h s) - \Big ( \frac{1}{2} (s-1) (s-2) g(w_{0}) \\&-s (s-2) g(w_{1}) +\frac{1}{2} s (s-1) g(w_{2}) \Big ) \Big ] h \, ds. \end{aligned}$$

Since g is sufficiently smooth, by using the Taylor series expansion, we find for \(k=0, 1, 2\) that

$$\begin{aligned} g(w_{k})&= g(w_{0}+ h s) + \frac{g^{(1)}(w_{0}+ h s)}{1!} ( hk - h s) \\&\quad + \frac{g^{(2)}(w_{0}+ h s)}{2!} ( h k - h s)^2 + \frac{g^{(3)}(w_{0}+ h s)}{3!} ( h k -h s)^3 + \cdots . \end{aligned}$$

Thus, we obtain

$$\begin{aligned} I_{1}&= \int _{0}^{1} (w_{0}+ h s)^{-\alpha -1} \Big [ h^3 g^{(3)} (w_{0}+ h s) \pi _{0}(s) \\&\quad +h^4 g^{(4)} (w_{0}+ h s) \pi _{1}(s)+ h^5 g^{(5)} (w_{0}+ h s) \pi _{2}(s) + \cdots \Big ] h \, ds \\&= \sum _{k=0}^{+\infty } h^{k+3} \int _{0}^{1} \Big [ h (w_{0}+ h s)^{-\alpha -1} g^{(k+3)} (w_{0}+ h s) \Big ] \pi _{k} (s) \ ds, \end{aligned}$$

for some suitable functions \(\pi _{0} (s), \pi _{1}(s), \pi _{2}(s),\ldots \).

Applying Lemma 3 in the Appendix with \(G(t) = g^{(l)}(t), l=3, 4,\ldots \), we arrive at

$$\begin{aligned} h (w_{0}+ h s )^{-\alpha -1} g^{(l)} (w_{0} + h s) = h^{-\alpha } \sum _{k=0}^{\infty } b_{kl} (s) h^{k} + \sum _{k=0}^{\infty } a_{kl} (s) h^{k}, \end{aligned}$$

for some suitable functions \(a_{kl} (s), b_{kl}(s), k=0, 1,\ldots \) and \( l=3, 4,\ldots ,\) which are not necessarily the same at different occurrences. Hence, we obtain

$$\begin{aligned} I_{1}&= \big ( d_{3} h^{3- \alpha } + d_{4} h^{4-\alpha } + d_{5} h^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} h^{4} + d_{3}^{*} h^{6} + d_{4}^{*} h^{8} + \cdots \big ), \end{aligned}$$

for some suitable coefficients \(d_{l}, \, l=3, 4,\ldots \) and \(d_{l}^{*}, \, l=2, 3,\ldots ,\) which are independent of h. Here, we note that the expansion does not contain any odd integer powers of h, following the proof of [5, Theorem 1.3].

Now, we turn to the estimate of \(I_{2}\). For \(n \ge 2\), there holds

$$\begin{aligned} I_{2}&= \sum _{l=1}^{n-1} \int _{0}^{1} (w_{l} + h s)^{-\alpha -1} \Big [ g(w_{l} +h s) - g_{2}(w_{l} + h s) \Big ] h \, ds \\&= \sum _{l=1}^{n-1} \int _{0}^{1} (w_{l} + h s)^{-\alpha -1} \Big [ g(w_{l} +h s) - \Big ( \frac{1}{2} s (s-1) g(w_{l-1}) \\&\quad + (1- s^{2}) g(w_{l}) + \frac{1}{2} s (s+1) g(w_{l+1}) \Big ) \Big ] h \, ds, \end{aligned}$$

since, by (6), the interpolant \(g_{2}\) on \([w_{l}, w_{l+1}]\) is based on the nodes \(w_{l-1}, w_{l}, w_{l+1}\).

A use of the Taylor series expansion as in the estimate of \(I_{1}\) shows

$$\begin{aligned} I_{2}&= h \sum _{l=1}^{n-1} \int _{0}^{1} (w_{l}+ h s)^{-\alpha -1} \Big [ h^3 g^{(3)} (w_{l}+ h s) \pi _{0}(s) + h^4 g^{(4)} (w_{l}+ h s) \pi _{1}(s) \\&\quad + h^5 g^{(5)} (w_{l}+ h s) \pi _{2}(s) + \cdots \Big ] \, ds \\&= \sum _{k=0}^{\infty } h^{k+3} \int _{0}^{1} \Big [ h \sum _{l=1}^{n-1} (w_{l}+ h s)^{-\alpha -1} g^{(k+3)} (w_{l}+ h s) \Big ] \pi _{k} (s) \ ds. \end{aligned}$$

Hence, by the same argument as for \(I_{1}\), it follows that

$$\begin{aligned} I_{2}&= \big ( d_{3} h^{3- \alpha } + d_{4} h^{4-\alpha } + d_{5} h^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} h^{4} + d_{3}^{*} h^{6} + d_{4}^{*} h^{8} + \cdots \big ), \end{aligned}$$

for some suitable coefficients \(d_{l}, \, l=3, 4,\ldots \) and \(d_{l}^{*}, \, l=2, 3,\ldots ,\) which are independent of h. We again note that the expansion does not contain any odd integer powers of h, following the argument in the proof of [5, Theorem 1.3]. This completes the proof. \(\square \)

Remark 2

For any fixed \( n \ge 2\), we rewrite (23) for \(n=2, 3,\ldots , N\) as

$$\begin{aligned} R_{n}(g)&= \big ( d_{3} \tau ^{3- \alpha } t_{n}^{\alpha -3} + d_{4} \tau ^{4-\alpha } t_{n}^{\alpha -4} + d_{5} \tau ^{5-\alpha }t_{n}^{\alpha -5} + \cdots \big ) \nonumber \\&\quad + \big ( d_{2}^{*} \tau ^{4} t_{n}^{-4} + d_{3}^{*} \tau ^{6} t_{n}^{-6} + d_{4}^{*} \tau ^{8} t_{n}^{-8}+ \cdots \big ). \end{aligned}$$
(24)

In particular, for \(n=N\) and \(t_{n}= t_{N}=T=1\), there holds

$$\begin{aligned} R_{N}(g)&= \big ( d_{3} \tau ^{3- \alpha } + d_{4} \tau ^{4-\alpha } + d_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} \tau ^{4} + d_{3}^{*} \tau ^{6} + d_{4}^{*} \tau ^{8} + \cdots \big ). \end{aligned}$$
(25)

Remark 3

We indeed observe in Example 1 that the experimentally determined convergence order of the numerical scheme defined by (21) (or (7)) is \(O(\tau ^{3- \alpha })\) with \(\alpha \in (1, 2)\) at any fixed time \(t_{N}=T\), and that the extrapolated values of the scheme have the convergence orders \(O(\tau ^{4- \alpha }), O(\tau ^{5- \alpha }), O(\tau ^{4}), \ldots \), with \(\alpha \in (1, 2)\), as predicted by the asymptotic expansion formula (25).

3 A high-order numerical scheme for linear fractional differential equation of order \(\alpha \in (1,2)\)

This section focuses on a high-order numerical method for approximating the solution of (4) in the linear case with \(\alpha \in (1,2)\), \(\beta <0\), that is,

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(t), \quad 0 \le t \le T. \end{aligned}$$
(26)

For simplicity of notation, we shall assume that \(T=1\).

Let \(0 = t_{0}< t_{1}< t_{2}<\cdots < t_{N} =T=1\) with \(N \ge 2\) be a partition of \([0, T]=[0, 1]\) and \(\tau = \frac{T}{N} =\frac{1}{N} \) the step size. At the node \(t_{j} = j \tau , j =2, 3,\ldots , N\), the solution of (26) satisfies

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y_{0}- \frac{y'(0)}{1!} t \Big ] \Big |_{t=t_{j}} = \beta y(t_{j}) + f(t_{j}). \end{aligned}$$
(27)

By (21), the solution of (26) satisfies, for \( j=2, 3,\ldots , N, \, N \ge 2\),

$$\begin{aligned} y(t_{j})&= \frac{1}{\alpha _{0,j} - t_{j}^{\alpha } \varGamma (-\alpha ) \beta } \Big [ t_{j}^{\alpha } \varGamma (-\alpha ) \Big ( f(t_{j}) + \frac{t_{j}^{-\alpha }}{\varGamma (1-\alpha )} y(0) + \frac{t_{j}^{1-\alpha }}{\varGamma (2-\alpha )} y'(0) \Big ) \nonumber \\&\quad - \sum _{k=1}^{j} \alpha _{k j} y(t_{j-k}) - R_{j} (g) \Big ]. \end{aligned}$$
(28)

Let \(y_{j} \approx y(t_{j})\) denote the approximation of the exact solution y(t) at \(t=t_{j}.\) Assume that the starting values \(y_{0}\) and \(y_{1}\) are given. Then, we define the following numerical scheme to approximate (26), for \( j=2, 3,\ldots , N, \, N \ge 2\), as

$$\begin{aligned} y_{j}&=\frac{1}{\alpha _{0,j} - t_{j}^{\alpha } \varGamma (-\alpha ) \beta } \Big [ t_{j}^{\alpha } \varGamma (-\alpha ) \Big ( f(t_{j})+ \frac{t_{j}^{-\alpha }}{\varGamma (1-\alpha )} y(0) \nonumber \\&\qquad + \frac{t_{j}^{1-\alpha }}{\varGamma (2-\alpha )} y'(0) \Big )- \sum _{k=1}^{j} \alpha _{k j} y_{j-k} \Big ], \end{aligned}$$
(29)

for given \(y_0= y(0)\) and \(y_1=y(t_1).\)
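As an illustration, the following sketch (function names are ours, not the authors' implementation) realizes (29) in the equivalent weight form of Remark 1 for zero initial data \(y_{0} = y'(0) = 0\), so that the correction terms vanish. With the manufactured solution \(y(t)=t^{2}\) and \(\beta =-1\), the forcing is \(f(t) = {}_0^R D_{t}^{\alpha } t^{2} - \beta t^{2} = 2 t^{2-\alpha }/\varGamma (3-\alpha ) + t^{2}\); since the quadratic interpolation error vanishes for quadratic solutions, the scheme reproduces y up to rounding:

```python
import math

def rl_weights(alpha, n):
    # weights w_{kn} of Lemma 1, assembled interval by interval
    E = lambda k: (2*k**(2 - alpha) - (2 - alpha)*k**(1 - alpha)
                   - 2*(k - 1)**(2 - alpha) - (2 - alpha)*(k - 1)**(1 - alpha))
    F = lambda k: (2*k**(2 - alpha) - 2*(2 - alpha)*k**(1 - alpha)
                   - 2*(k - 1)**(2 - alpha) + (1 - alpha)*(2 - alpha)*(k - 1)**(-alpha))
    G = lambda k: (2*k**(2 - alpha) - 3*(2 - alpha)*k**(1 - alpha)
                   - 2*(k - 1)**(2 - alpha) + (2 - alpha)*(k - 1)**(1 - alpha)
                   + 2*(1 - alpha)*(2 - alpha)*k**(-alpha))
    w = [0.0]*(n + 1)
    w[0], w[1], w[2] = 2 - alpha/2, (-alpha)*(3 - alpha), alpha/2
    for k in range(2, n + 1):
        w[k - 2] += E(k)/2
        w[k - 1] -= F(k)
        w[k] += G(k)/2
    return [x/math.gamma(3 - alpha) for x in w]

alpha, beta, N = 1.5, -1.0, 8
tau = 1.0/N
f = lambda t: 2*t**(2 - alpha)/math.gamma(3 - alpha) + t*t

y = [0.0, tau**2]                     # exact starting values y_0, y_1
for j in range(2, N + 1):
    # tau^{-alpha} sum_k w_{kj} y_{j-k} = beta*y_j + f(t_j), solved for y_j
    w = rl_weights(alpha, j)
    rhs = f(j*tau) - tau**(-alpha)*sum(w[k]*y[j - k] for k in range(1, j + 1))
    y.append(rhs/(tau**(-alpha)*w[0] - beta))

assert abs(y[N] - 1.0) < 1e-8         # y(t_N) = T^2 = 1 is reproduced
```

For non-quadratic solutions, the error follows the expansion (30) of Theorem 2.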

Theorem 2

Let \(\alpha \in (1, 2)\). Let \(0 = t_{0}< t_{1}< t_{2}< \cdots < t_{N} =1\) with \(N \ge 2\) be a partition of [0, 1] and \(\tau \) the step size. Let \(y(t_{j})\) and \( y_{j}\) be the exact and the approximate solutions of (28) and (29), respectively. Assume that the function \( y \in C^{m+2}[0, 1], \; m \ge 3\). Further, assume that the exact starting values \(y_{0} = y(0)\) and \(y_{1} = y(t_{1})\) are available. Then, there exist coefficients \(c_{\mu } = c_{\mu } (\alpha )\) and \(c_{\mu }^{*} = c_{\mu }^{*} (\alpha )\) such that the error possesses an asymptotic expansion of the form

$$\begin{aligned} y(t_{N}) - y_{N} = \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + \cdots , \quad \text{ for } \; N \rightarrow \infty , \end{aligned}$$

or, since \(\tau = \frac{1}{N}\),

$$\begin{aligned} y(t_{N}) - y_{N} = \sum _{\mu =3}^{ m+1 } c_{\mu } \tau ^{\mu - \alpha } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} \tau ^{2 \mu } + \cdots , \quad \text{ for } \; \tau \rightarrow 0. \end{aligned}$$
(30)

Proof

Our proof here is influenced by the proof of Theorem 2.1 in [5], where the asymptotic expansion of the error for approximating the linear fractional differential equation with order \(\alpha \in (0, 1)\) is considered.

Let us fix \(t_{l} =c\) to be a constant for \( l=2,3,\ldots , N\) with \(N \ge 2\) and investigate the difference

$$\begin{aligned} e_{l} = y(t_{l}) - y_{l}, \quad \text{ for } \; l \rightarrow \infty , \quad \text{ with } \; t_{l} = l \tau = \frac{l}{N} =c, \end{aligned}$$

where \(\tau =1/N\) is the step size. In other words, there is a constant c, independent of N, such that

$$\begin{aligned} l=c \cdot N, \quad \text{ or } \; N= l/c, \end{aligned}$$

and consequently, we observe that if \(e_{l}\) possesses an asymptotic expansion with respect to l, then \(e_{N}\) possesses at the same time one with respect to N, and vice versa.

We now claim that

$$\begin{aligned} e_{l}= y(t_{l})- y_{l} = \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha - \mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o(N^{\alpha -m-1}), \quad \text{ for } \; l \rightarrow \infty , \end{aligned}$$
(31)

for some suitable constants \(c_{\mu }, c_{\mu }^{*}\) which will be determined later.

Subtracting (29) from (28) and noting \(t_{l} = l \tau = \frac{l}{N} =c\), there holds, with \( g(w) = f(t_{l}- t_{l} w)\),

$$\begin{aligned} e_{l}&= \frac{1}{\alpha _{0, l} - (\frac{l}{N})^{\alpha } \varGamma (-\alpha ) \beta } \Big [ - \sum _{k=1}^{l} \alpha _{k l} ( y(t_{l-k}) - y_{l-k}) - R_{l} (g) \Big ] \nonumber \\&= \frac{1}{c^{\alpha } \varGamma (-\alpha ) \beta - \alpha _{0, l}} \Big ( \sum _{k=1}^{l} \alpha _{k l} e_{l-k} + R_{l} (g) \Big ). \end{aligned}$$
(32)

Since \( g \in C^{m+2}[0, 1], \; m \ge 3,\) an application of Theorem 1 shows

$$\begin{aligned} R_{l}(g) = \sum _{\mu =3}^{ m+1} d_{\mu } l^{\alpha - \mu } + \sum _{\mu =2}^{ \mu ^{*}} d_{\mu }^{*} l^{-2 \mu } + o(l^{ \alpha -m -1}), \quad \text{ for } \; l \rightarrow \infty , \end{aligned}$$
(33)

where \(\mu ^{*}\) is the integer satisfying \( 2 \mu ^{*}< m+1 - \alpha < 2 ( \mu ^{*} +1), \) and \(d_{\mu }\) and \(d_{\mu }^{*}\) are certain coefficients which are independent of \(\tau \).

Since \(l/N =c\), we rewrite (33) as

$$\begin{aligned} R_{l}(g) = \sum _{\mu =3}^{ m+1} {\tilde{d}}_{\mu } N^{\alpha - \mu } + \sum _{\mu =2}^{ \mu ^{*}} {\tilde{d}}_{\mu }^{*} N^{-2 \mu } + o(N^{ \alpha -m -1}), \quad \text{ for } \; N \rightarrow \infty , \end{aligned}$$
(34)

for some new coefficients \({\tilde{d}}_{\mu }\) and \( {\tilde{d}}_{\mu }^{*}\). Choose

$$\begin{aligned}&c_{\mu } = \frac{1}{-c^{\alpha } \varGamma (-\alpha ) \beta - 1/\alpha } {\tilde{d}}_{\mu }, \quad \mu =3,4,\ldots , m+1, \end{aligned}$$
(35)
$$\begin{aligned}&c_{\mu }^{*} = \frac{1}{-c^{\alpha } \varGamma (-\alpha ) \beta - 1/\alpha } {\tilde{d}}_{\mu }^{*}, \quad \mu =2, 3,\ldots , \mu ^{*}. \end{aligned}$$
(36)

We now claim that (31) holds for the coefficients \(c_{\mu }, c_{\mu }^{*}\) defined in (35) and (36).

For the proof, we use mathematical induction. By our assumption, \(e_{0} =0, e_{1} =0\), and hence (31) holds for \(l=0, 1\) with the coefficients given by (35) and (36). Let us now consider the case \(l=2\). An application of Theorem 1 implies

$$\begin{aligned} e_{2}&= y(t_{2}) - y_{2} = \frac{1}{c^{\alpha } \varGamma (-\alpha ) \beta - \alpha _{0, 2}} \Big ( \sum _{k=1}^{2} \alpha _{k 2} e_{2-k} + R_{2} (g) \Big ) \nonumber \\&= \frac{1}{c^{\alpha } \varGamma (-\alpha ) \beta - \alpha _{0, 2}} \Big [ \Big ( \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o ( N^{ \alpha -m-1}) \Big ) \nonumber \\&\qquad \cdot \Big ( \sum _{k=0}^{2} \alpha _{k 2} - \alpha _{0 2} \Big ) + R_{2}(g) \Big ]. \end{aligned}$$
(37)

Thus, noting that \( \sum _{k=0}^{2} \alpha _{k 2} = - 1/\alpha \) and \( \alpha _{0, 2} = \frac{2^{-\alpha } (\alpha +2) ( N c)^{\alpha }}{(-\alpha ) ( -\alpha +1) ( -\alpha +2)}\), we arrive at

$$\begin{aligned}&\Big [ \frac{2^{-\alpha } (\alpha +2) ( N c)^{\alpha }}{(-\alpha ) ( -\alpha +1) ( -\alpha +2)} - c^{\alpha } \varGamma (-\alpha ) \beta \Big ] e_{2} \nonumber \\&\quad = \frac{1}{\alpha } \Big [ \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o ( N^{\alpha -m-1}) \Big ] \nonumber \\&\qquad - \sum _{\mu =3}^{m+1} {\tilde{d}}_{\mu } N^{\alpha -\mu } - \sum _{\mu =2}^{\mu ^{*}} {\tilde{d}}_{\mu }^{*} N^{-2 \mu } + o (N^{\alpha -m -1}) \nonumber \\&\qquad + \frac{2^{-\alpha } (\alpha +2) ( N c)^{\alpha }}{(-\alpha ) ( -\alpha +1) ( -\alpha +2)} \Big [ \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o (N^{\alpha -m-1}) \Big ]. \end{aligned}$$
(38)

This shows that \(e_{2}\) has an asymptotic expansion with respect to the powers of N. By comparing the coefficients of the powers of N, see [5], we obtain

$$\begin{aligned} e_{2} = \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o ( N^{\alpha -m-1}), \end{aligned}$$

where \(c_{\mu }\) and \(c_{\mu }^{*}\) are defined by (35) and (36), respectively.

Assume that (31) holds for \(l=0, 1,\ldots , j-1\). Following the same argument as for showing (38), we obtain, noting \( \sum _{k=0}^{j} \alpha _{k j} = - 1/\alpha \) and applying Theorem 1,

$$\begin{aligned}&\Big [ \frac{2^{-\alpha } (\alpha +2) ( N c)^{\alpha }}{(-\alpha ) ( -\alpha +1) ( -\alpha +2)} - c^{\alpha } \varGamma (-\alpha ) \beta \Big ] e_{j} \nonumber \\&\quad = \frac{1}{\alpha } \Big [ \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o (N^{\alpha -m-1}) \Big ] \nonumber \\&\qquad - \sum _{\mu =3}^{m+1} {\tilde{d}}_{\mu } N^{\alpha -\mu } - \sum _{\mu =2}^{\mu ^{*}} {\tilde{d}}_{\mu }^{*} N^{-2 \mu } + o ( N^{\alpha -m -1}) \nonumber \\&\qquad + \frac{2^{-\alpha } (\alpha +2) ( N c)^{\alpha }}{(-\alpha ) ( -\alpha +1) ( -\alpha +2)} \Big [ \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o (N^{\alpha -m-1}) \Big ]. \end{aligned}$$
(39)

This shows that \(e_{j}\) possesses an asymptotic expansion with respect to the powers of N. By comparing the coefficients of powers of N, see [5], we find that

$$\begin{aligned} e_{j} = \sum _{\mu =3}^{m+1} c_{\mu } N^{\alpha -\mu } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} N^{-2 \mu } + o ( N^{\alpha -m-1}). \end{aligned}$$

Hence, (31) holds for \(l=j\). Together, these estimates complete the proof of (31). Taking \(l=N\) in (31), we obtain (30). This completes the proof. \(\square \)

Remark 4

In Theorem 2, we assume that \(y_{1}\) is known exactly. In practical applications, however, \(y_{1}\) must often be approximated by other numerical methods so that the error \(|y_{1}- y(t_{1})|\) meets the required accuracy. Since the primary focus of this work is the asymptotic expansion of \(y_{N}\) for \(N \ge 2\), we assume, for simplicity, that \(y_{1}\) is available exactly. This assumption allows us to concentrate on the behaviour of the approximation beyond the initial point.

Remark 5

In Theorem 2, we assume that the solution of the fractional differential equation (28) is sufficiently smooth. Such an assumption may not be realistic for fractional differential equations, as the solution of (28) typically exhibits a power-law behavior of the form \(O(t^{\alpha })\) near \(t=0\), where \(\alpha \in (1, 2)\). For zero initial data, however, the solution can be as smooth as our analysis requires. Our numerical experiments with nonzero initial data suggest that the extrapolation scheme still provides the higher-order convergence predicted by our theory. To address cases where the solution lacks sufficient smoothness, one approach is to use improved algorithms that apply extrapolation techniques to achieve higher-order accuracy, as demonstrated in [17, Section 4.1]. By employing extrapolation, the approximation accuracy can be enhanced even for solutions with limited regularity.

4 A high-order numerical scheme for a semilinear fractional differential equation of order \(\alpha \in (1,2)\)

This section focuses on a high-order numerical scheme for approximating the solution of the semilinear problem (4), which we restate as

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(y(t)), \quad 0 \le t \le T. \end{aligned}$$
(40)

For simplicity, we shall assume that \(T=1\) as before.

Let \(y_{j} \approx y(t_{j})\) denote the approximation of the exact solution \(y(t_{j})\). Based on the Caputo derivative approximation scheme (22), we denote

$$\begin{aligned} D^{\alpha }_{\tau } y_j: = \tau ^{-\alpha }\sum _{k=0}^{j}w_{kj} y_{j-k}. \end{aligned}$$
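For concreteness, the discrete operator \(D^{\alpha }_{\tau }\) is a plain discrete convolution. A minimal sketch (Python; the weight array `w` stands in for the weights \(w_{kj}\) of scheme (22), and the sample values are placeholders used purely for illustration):

```python
def D_alpha_tau(y, w, tau, alpha):
    """Discrete operator D^alpha_tau y_j = tau**(-alpha) * sum_{k=0}^{j} w[k]*y[j-k].

    y holds the values y_0, ..., y_j and w the weights w_{0j}, ..., w_{jj}
    of scheme (22) for the current index j (placeholder values below)."""
    j = len(y) - 1
    return tau ** (-alpha) * sum(w[k] * y[j - k] for k in range(j + 1))

# Illustration with placeholder weights, not the actual w_{kj}:
val = D_alpha_tau([1.0, 2.0, 3.0], [2.0, -1.0, 0.5], tau=0.1, alpha=1.5)
```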

Given the starting values \(y_{0}\) and \(y_{1},\) define the following numerical scheme for approximating (40)

$$\begin{aligned} D^{\alpha }_{\tau } y_j-\frac{t_{j}^{-\alpha }}{\varGamma (1-\alpha )}y(0)-\frac{t_{j}^{1-\alpha }}{\varGamma (2- \alpha )}y'(0) = \beta y_{j} + f(y_j),\;\;\text{ for }\; j=2,3,\ldots , N \nonumber \\ \end{aligned}$$
(41)

with \(y_0= y(0)\) and \(y_1=y(t_1)\).

Let \({\tilde{y}}_j,\;j=2,3,\ldots ,N\) be the solutions of the linearized problem:

$$\begin{aligned} D^{\alpha }_{\tau } {\tilde{y}}_j-\frac{t_{j}^{-\alpha }}{\varGamma (1-\alpha )}y(0)-\frac{t_{j}^{1-\alpha }}{\varGamma (2- \alpha )}y'(0)= \beta {\tilde{y}}_{j} + f(y(t_j)),\;\;\text{ for }\; j=2,3,\ldots , N \nonumber \\ \end{aligned}$$
(42)

with \({\tilde{y}}_0= y(0)\) and \({\tilde{y}}_1=y(t_1).\) Set for \(j=2, 3,\ldots , N\)

$$\begin{aligned} e_{j} = (y(t_{j}) - {\tilde{y}}_{j}) + ({\tilde{y}}_{j} - y_{j}) =: \eta _{j} + \theta _{j}. \end{aligned}$$

Applying Theorem 2 with f(t) replaced by f(y(t)), we obtain the asymptotic expansion of the error \(\eta _j\) as

$$\begin{aligned} \eta _j = \sum _{\mu =3}^{ m+1 } c_{\mu } \tau ^{\mu - \alpha } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} \tau ^{2 \mu } + \cdots , \quad \text{ for } \; \tau \rightarrow 0. \end{aligned}$$
(43)

Therefore, it remains to prove a similar error expansion of \(\theta _N.\) Now the error equation in \(\theta _j\) becomes

$$\begin{aligned} D^{\alpha }_{\tau } \theta _j= \beta \theta _{j} +\big ( f(y(t_j))- f(y_j)\big ),\;\;\text{ for }\; j=2,3,\ldots , N \end{aligned}$$
(44)

with \(\theta _0= 0\) and \(\theta _1= 0.\) With the help of

$$\begin{aligned} \delta _{t} \phi _{n-\frac{1}{2}} =\frac{\phi (t_{n})-\phi (t_{n-1})}{\tau }, \end{aligned}$$
(45)

and

$$\begin{aligned} p_{l}=\sum \limits _{k=0}^{l-1}(l-k)w_{kl},\ \ l=1,2,\ldots ,n,\ n=2,3,\ldots ,N, \end{aligned}$$
(46)

rewrite \(D^{\alpha }_{\tau } \theta _j\) in terms of \(\delta _t \theta _{j-\frac{1}{2}}\) and hence obtain an equivalent equation for \(\theta _j\), \( j=2,3,\ldots , N\), as

$$\begin{aligned}{} & {} \tau ^{1-\alpha } \left( p_{1}\delta _{t}\theta _{j-\frac{1}{2}} - \sum _{l=1}^{j-1}(p_{j-l}-p_{j-l+1}) \delta _{t} \theta _{l-\frac{1}{2}}\right) \nonumber \\{} & {} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \beta \theta _{j} + \big ( f(y(t_j))- f(y_j)\big ) \end{aligned}$$
(47)

with \(\theta _0= 0\) and \(\theta _1= 0.\)

In order to derive an estimate of \(\theta _j,\) we need the following lemma, whose proof is given in Sect. 7.2 of the Appendix.

Lemma 2

Let \(1<\alpha <2\). Then, the coefficients \(p_{l}\) defined by (46) satisfy the following properties

$$\begin{aligned}&p_{l}>0,\ \ l=1, 2,\ldots , n,\ n=2,3,\ldots , N, \end{aligned}$$
(48)
$$\begin{aligned}&p_{l}>p_{l+1},\ \ l=1, 2,\ldots , n-1,\ n=2,3,\ldots , N, \end{aligned}$$
(49)
$$\begin{aligned}&\sum _{l=1}^{n}p_{l}\le \frac{n^{2-\alpha }}{\varGamma (3-\alpha )}, \quad n=2,3,\ldots , N. \end{aligned}$$
(50)

Theorem 3

For \(\alpha \in (1, 2),\) let \(y(t_{j})\) and \( y_{j}\) be the exact and the approximate solutions of (40) and (41), respectively. Assume that \( y \in C^{m+2}[0, 1]\) for some \( m \ge 3\). Further, assume that the exact starting values \(y_{0} = y(0)\) and \(y_{1} = y(t_{1})\) are known. Then, there exist coefficients \(c_{\mu } = c_{\mu } (\alpha )\) and \(c_{\mu }^{*} = c_{\mu }^{*} (\alpha )\) such that the error satisfies

$$\begin{aligned} y(t_{N}) - y_{N} = \sum _{\mu =3}^{ m+1 } c_{\mu } \tau ^{\mu - \alpha } + \sum _{\mu =2}^{\mu ^{*}} c_{\mu }^{*} \tau ^{2 \mu } + \cdots , \quad \text{ for } \; \tau \rightarrow 0. \end{aligned}$$
(51)

Proof

Since \( e_{j} = (y(t_{j}) - {\tilde{y}}_{j}) + ({\tilde{y}}_{j} - y_{j}) =: \eta _{j} + \theta _{j} \) and the estimate of \(\eta _j\) is known from (43), it suffices to estimate \(\theta _j\), \(j=2, 3,\ldots , N\). Multiplying (47) by \(\tau \delta _t \theta _{j-\frac{1}{2}}\), we obtain

$$\begin{aligned} \tau ^{1- \alpha }&\Big ( p_{1} \delta _{t} \theta _{j-\frac{1}{2}} - \sum _{l=1}^{j-1} (p_{j-l} - p_{j-l+1}) \delta _{t} \theta _{l-\frac{1}{2}} \Big ) \Big ( \tau \delta _{t} \theta _{j- \frac{1}{2}} \Big ) \\&- \beta \theta _{j} \Big ( \tau \delta _{t} \theta _{j-\frac{1}{2}} \Big ) =\big ( f(y(t_{j})) - f(y_{j}) \big ) \Big ( \tau \delta _{t} \theta _{j- \frac{1}{2}} \Big ). \end{aligned}$$

Note that

$$\begin{aligned}&\Big [\sum _{l=1}^{j-1} ( p_{j-l} - p_{j-l+1}) \delta _{t} \theta _{l- \frac{1}{2}} \Big ] \delta _{t} \theta _{j- \frac{1}{2}} \le \sum _{l=1}^{j-1} (p_{j-l} - p_{j-l+1}) \; \frac{1}{2} \Big (\big |\delta _{t} \theta _{l-\frac{1}{2}} \big |^2 +\big |\delta _{t} \theta _{j-\frac{1}{2}}\big |^2 \Big ) \nonumber \\&\quad = \frac{1}{2} \sum _{l=1}^{j-1} ( p_{j-l} - p_{j-l+1}) \big |\delta _{t} \theta _{l-\frac{1}{2}} \big |^2 + \frac{1}{2} (p_{1} - p_{j}) \big | \delta _{t} \theta _{j-\frac{1}{2}} \big |^2 \nonumber \\&\quad = \frac{1}{2} \sum _{l=1}^{j-1} p_{j-l} \big |\delta _{t} \theta _{l-\frac{1}{2}}\big |^2 - \frac{1}{2} \sum _{l=1}^{j} p_{j-l+1} \big |\delta _{t} \theta _{l-\frac{1}{2}} \big |^2 + \frac{1}{2} p_{1} \big |\delta _{t} \theta _{ j-\frac{1}{2}}\big |^2 + \frac{1}{2} (p_{1} -p_{j}) \big |\delta _{t} \theta _{j-\frac{1}{2}} \big |^2, \end{aligned}$$
(52)

and

$$\begin{aligned} - \beta \theta _{j} \big (\tau \delta _{t} \theta _{j-\frac{1}{2}}\big ) = - \beta \theta _{j} \big ( \theta _{j} - \theta _{j-1} \big ) \ge - \frac{\beta }{2} \big ( | \theta _{j} |^2 - | \theta _{j-1} |^2 \big ), \end{aligned}$$

and

$$\begin{aligned}&\big ( f(y(t_{j})) - f(y_{j}) \big ) \big ( \tau \delta _{t} \theta _{j- \frac{1}{2}} \big ) \le L | y(t_{j})- y_{j}|\, \big | \tau \delta _{t} \theta _{j-\frac{1}{2}} \big | \nonumber \\&\quad \le \frac{1}{2} \tau ^{\alpha } \frac{L^2}{p_{j}} \big ( |\eta _{j} |^2 + | \theta _{j} |^2 \big ) + \frac{1}{2} p_{j} \tau ^{2- \alpha } \big | \delta _{t} \theta _{j- \frac{1}{2}} \big |^2, \end{aligned}$$
(53)

Combining the above three estimates, we arrive at

$$\begin{aligned}&\tau ^{2- \alpha } p_{1} \big | \delta _{t} \theta _{j-\frac{1}{2}} \big |^2 - \tau ^{2- \alpha } \Big [ \frac{1}{2} \sum _{l=1}^{j-1} p_{j-l} \big | \delta _{t} \theta _{l-\frac{1}{2}} \big |^2 - \frac{1}{2} \sum _{l=1}^{j} p_{j-l+1} \big | \delta _{t} \theta _{l-\frac{1}{2}} \big |^2 \\&\qquad + \frac{1}{2} p_{1} \big | \delta _{t} \theta _{j-\frac{1}{2}} \big |^2 + \frac{1}{2} (p_{1} - p_{j}) \big | \delta _{t} \theta _{j-\frac{1}{2}} \big |^2 \Big ] - \frac{\beta }{2} \big ( | \theta _{j} |^2 - | \theta _{j-1} |^2 \big ) \\&\quad \le \frac{1}{2} \tau ^{\alpha } \frac{L^2}{p_{j}} \big (|\eta _{j}|^2 + |\theta _{j}|^2 \big ) + \frac{1}{2} p_{j} \tau ^{2- \alpha } \big | \delta _{t} \theta _{j-\frac{1}{2}} \big |^2. \end{aligned}$$

Denoting

$$\begin{aligned} E_j =- \beta |\theta _j|^2 + \tau ^{2-\alpha } \sum _{l=1}^{j} p_{j-l+1} |\delta _{t} \theta _{l-\frac{1}{2}}|^2, \end{aligned}$$

we obtain

$$\begin{aligned} E_j \le E_{j-1} + \frac{C(L)}{p_j} \tau ^{\alpha }\;\big (|\eta _j|^2 + |\theta _j|^2\big ). \end{aligned}$$
(54)

It follows using \(1/p_j = \varGamma (2-\alpha ) \tau ^{1-\alpha }\;t_j^{\alpha -1}, j \ge 2\), \(\theta _0=0\), \(\theta _1=0\), and \( \tau \sum _{l=2}^{j} t_{l}^{\alpha -1} \le C t_{j}^{\alpha }\) that

$$\begin{aligned} E_j\le & {} E_{1} + C\;\varGamma (2-\alpha )\; \tau \;\sum _{l=2}^{j} t^{\alpha -1}_l \big (|\eta _l|^2 + |\theta _l|^2\big )\nonumber \\\le & {} C(L,\alpha )\; t_j^{\alpha } \max _{1\le l\le j} |\eta _l|^2 + C(L,\alpha )\; t_j^{\alpha -1} \tau \;\sum _{l=2}^{j} |\theta _l|^2. \end{aligned}$$
(55)
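For completeness, we record the discrete Grönwall inequality used in the next step (a standard form; the statement here is ours, not the paper's): if nonnegative quantities \(\phi _{j}\) satisfy, with a nondecreasing sequence \(g_{j} \ge 0\), a constant \(b \ge 0\) and \(b \tau \) sufficiently small,

$$\begin{aligned} \phi _{j} \le g_{j} + b \tau \sum _{l=2}^{j} \phi _{l}, \quad j=2,3,\ldots ,N, \end{aligned}$$

then there exists a constant C, independent of \(\tau \) and j, such that \( \phi _{j} \le C g_{j} e^{b t_{j}} \). Below it is applied with \(\phi _{l} = |\theta _{l}|^2\), bounding \(t_{j}^{\alpha -1}\) in (55) by \(T^{\alpha -1}\).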

Since \(-\beta |\theta _j|^2 \le E_j\) with \(-\beta > 0\), an application of the discrete Grönwall inequality to (55) yields

$$\begin{aligned} |\theta _j|^2 \le C(L,T,\alpha )\; \max _{1\le l\le j} |\eta _l|^2, \end{aligned}$$
(56)

which implies that

$$\begin{aligned} |e_j | \le |\eta _j| +| \theta _j | \le C(L,T,\alpha )\; \max _{1\le l\le j} |\eta _l|, \end{aligned}$$

where \(\eta _{l}\) has the asymptotic expansion (43). This completes the proof. \(\square \)

Remark 6

Proving the asymptotic expansion of the error for the approximation of the semilinear problem (40) directly poses a significant challenge, since the nonlinearity considerably complicates the analysis and the required proof techniques. Therefore, an intermediate solution \({\tilde{y}}_j\) is introduced through linearization, and a comparison with the discrete solution of the semilinear scheme then yields the desired result.

5 Numerical examples

This section presents several numerical experiments with the high-order numerical methods developed in this paper for approximating the Riemann–Liouville fractional derivative and the linear and semilinear fractional differential equations. The numerical results are consistent with our theoretical findings.

Let us first recall the Richardson extrapolation algorithm. Let \(A_{0}(\tau )\) denote the approximation of A calculated by an algorithm with the step size \(\tau \). Assume that the error has the following asymptotic expansion

$$\begin{aligned} A= A_{0}(\tau ) + a_{0} \tau ^{\lambda _{0}} + a_{1} \tau ^{\lambda _{1}} + a_{2} \tau ^{\lambda _{2}} + \cdots \quad \text{ as } \; \tau \rightarrow 0, \end{aligned}$$
(57)

where \(a_{j}, j=0, 1, 2,\ldots \) are unknown constants and \(0< \lambda _{0}< \lambda _{1}< \lambda _{2} < \cdots \) are some positive numbers.

Let \(b \in {\mathbb {Z}}^{+}, \, b >1\) (usually \(b=2\)). Let \(A_{0} \big (\frac{\tau }{b} \big ) \) denote the approximation of A calculated with the step size \(\frac{\tau }{b}\) by the same algorithm as for calculating \(A_{0}(\tau )\). Then by (57), there holds

$$\begin{aligned} A= A_{0} \left( \frac{\tau }{b} \right) + a_{0} \left( \frac{\tau }{b} \right) ^{\lambda _{0}} + a_{1} \left( \frac{\tau }{b} \right) ^{\lambda _{1}} + a_{2} \left( \frac{\tau }{b} \right) ^{\lambda _{2}} + \cdots \quad \text{ as } \; \tau \rightarrow 0. \end{aligned}$$
(58)

Similarly, we may calculate the approximations \(A_{0} \big (\frac{\tau }{b^2} \big ), A_{0} \big (\frac{\tau }{b^3} \big ),\ldots \) of A by using the step sizes \( \frac{\tau }{b^2}, \frac{\tau }{b^3},\ldots \).

By (57), we note that

$$\begin{aligned} A= A_{0}(\tau ) + O( \tau ^{\lambda _{0}}), \quad \text{ as } \; \tau \rightarrow 0, \end{aligned}$$
(59)

that is, the approximation \(A_{0}(\tau )\) has the convergence order \(O(\tau ^{\lambda _{0}})\). The convergence order \(\lambda _{0}\) can be calculated numerically. By (59), there exists a constant C such that, for sufficiently small \(\tau \),

$$\begin{aligned} |A- A_{0} (\tau ) | \approx C \tau ^{\lambda _{0}}, \end{aligned}$$

and

$$\begin{aligned} \Big |A- A_{0} \big (\frac{\tau }{b} \big ) \Big | \approx C \big (\frac{\tau }{b} \big )^{\lambda _{0}}. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \frac{| A- A_{0}(\tau )|}{| A- A_{0} \big (\frac{\tau }{b} \big )|} \approx b^{\lambda _{0}}, \end{aligned}$$

which implies that the convergence order \(\lambda _{0}\) can be calculated by

$$\begin{aligned} \lambda _{0} \approx \log _{b} \left( \frac{| A- A_{0}(\tau )|}{| A- A_{0} \big (\frac{\tau }{b} \big )|}\right) . \end{aligned}$$
(60)
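In a computation, (60) is a single logarithm of an error ratio. A minimal sketch (Python; the synthetic error model and the name `eoc` are ours, assuming the errors behave exactly like \(C \tau ^{\lambda _{0}}\)):

```python
import math

def eoc(err_coarse, err_fine, b=2):
    """Experimentally determined order of convergence from two errors
    computed with step sizes tau and tau/b, following (60)."""
    return math.log(err_coarse / err_fine, b)

# Synthetic check with |A - A_0(tau)| = C * tau**lambda0:
lambda0 = 1.5            # e.g. 3 - alpha with alpha = 1.5
C, tau, b = 2.3, 0.05, 2
e_coarse = C * tau ** lambda0
e_fine = C * (tau / b) ** lambda0
order = eoc(e_coarse, e_fine, b)   # approximately 1.5
```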

Since the error in (57) has the asymptotic expansion, we may use the Richardson extrapolation to construct a new approximation of A which has higher convergence order \(O(\tau ^{\lambda _{1}}), \, \lambda _{1} > \lambda _{0}\). To see this, multiplying (58) by \(b^{\lambda _{0}}\), we get

$$\begin{aligned} b^{\lambda _{0}} A\!=\! b^{\lambda _{0}} A_{0} \left( \frac{\tau }{b} \right) \!+\! b^{\lambda _{0}} a_{0} \left( \frac{\tau }{b} \right) ^{\lambda _{0}} \!+\! b^{\lambda _{0}} a_{1} \left( \frac{\tau }{b} \right) ^{\lambda _{1}} \!+\! b^{\lambda _{0}} a_{2} \left( \frac{\tau }{b} \right) ^{\lambda _{2}} \!+\! \cdots \quad \text{ as } \; \tau \rightarrow 0.\nonumber \\ \end{aligned}$$
(61)

Subtracting (57) from (61), we obtain

$$\begin{aligned} A= \frac{b^{\lambda _{0}} A_{0} \big ( \frac{\tau }{b} \big ) - A_{0} (\tau )}{ b^{\lambda _{0}}-1} + \frac{b^{\lambda _{0}} a_{1} \big ( \frac{\tau }{b} \big )^{\lambda _{1}}- a_{1} \tau ^{\lambda _{1}}}{ b^{\lambda _{0}}-1} + \frac{b^{\lambda _{0}} a_{2} \big ( \frac{\tau }{b} \big )^{\lambda _{2}}- a_{2} \tau ^{\lambda _{2}}}{ b^{\lambda _{0}}-1} + \cdots . \end{aligned}$$

Denoting

$$\begin{aligned} A_{1}(\tau ) = \frac{b^{\lambda _{0}} A_{0} \big ( \frac{\tau }{b} \big ) - A_{0} (\tau )}{ b^{\lambda _{0}}-1}, \end{aligned}$$
(62)

we then arrive at

$$\begin{aligned} A= A_{1}(\tau ) + b_{1} \tau ^{\lambda _{1}} + b_{2} \tau ^{\lambda _{2}} + \cdots \quad \text{ as } \, \tau \rightarrow 0, \end{aligned}$$
(63)

for some suitable constants \(b_{1}, b_{2},\ldots \). We now remark that the new approximation \(A_{1}(\tau )\) only depends on b and \(\lambda _{0}\) and is independent of the coefficients \(a_{j}, j=0, 1, 2,\ldots \) in (57). Hence we obtain a new approximation \(A_{1}(\tau )\) of A which has the higher convergence order \(O(\tau ^{\lambda _{1}})\). To see the convergence order \(\lambda _{1}\), using (62), we may calculate

$$\begin{aligned} A_{1}\left( \frac{\tau }{b} \right) = \frac{b^{\lambda _{0}} A_{0} \big ( \frac{\tau }{b^2} \big ) - A_{0} \big (\frac{\tau }{b} \big )}{ b^{\lambda _{0}}-1}. \end{aligned}$$

Then, by (63), it follows that

$$\begin{aligned} \frac{| A- A_{1}(\tau )|}{| A- A_{1} \big (\frac{\tau }{b} \big )|} \approx b^{\lambda _{1}}, \end{aligned}$$

and

$$\begin{aligned} \lambda _{1} \approx \log _{b} \left( \frac{| A- A_{1}(\tau )|}{| A- A_{1} \big (\frac{\tau }{b} \big )|} \right) . \end{aligned}$$

Continuing this process, we may construct the high-order approximations \(A_{2}(\tau ), A_{3}(\tau ),\ldots \) which have the convergence orders \(O(\tau ^{\lambda _{2}})\), \(O(\tau ^{\lambda _{3}}),\ldots . \)
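The whole tableau \(A_{1}(\tau ), A_{2}(\tau ),\ldots \) can be built mechanically from (62). The following sketch (Python) uses a synthetic sequence with known limit and exponents; the function name `richardson_table` and all sample values are ours, not from the paper:

```python
def richardson_table(A0, lambdas, b=2):
    """Richardson extrapolation tableau as in (62): A0 is the list
    [A_0(tau), A_0(tau/b), A_0(tau/b^2), ...] and lambdas the known
    exponents [lambda_0, lambda_1, ...] of the error expansion (57)."""
    table = [list(A0)]
    col = list(A0)
    for lam in lambdas:
        fac = float(b) ** lam
        # Pair each approximation with its refined neighbour and eliminate tau**lam.
        col = [(fac * fine - coarse) / (fac - 1.0)
               for coarse, fine in zip(col, col[1:])]
        table.append(col)
    return table

# Synthetic model A_0(tau) = A + a0*tau**l0 + a1*tau**l1 with known limit A:
A, a0, a1, l0, l1, b = 1.0, 3.0, -2.0, 1.5, 2.5, 2
taus = [0.1 / b ** k for k in range(4)]
A0 = [A + a0 * t ** l0 + a1 * t ** l1 for t in taus]
table1 = richardson_table(A0, [l0], b)      # eliminates the tau**l0 term
table2 = richardson_table(A0, [l0, l1], b)  # eliminates both terms
```

After one extrapolation the remaining error behaves like \(\tau ^{\lambda _{1}}\); supplying both exponents reproduces the limit up to round-off.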

Remark 7

The approximations \(A_{1}(\tau ), A_{2}(\tau ),\ldots \) only depend on b and \(\lambda _{j}, j=0, 1, 2,\ldots \) and are independent of the coefficients \(a_{j}, j=0, 1, 2,\ldots \) in (57).

Example 1

Consider the approximation scheme defined in (21) or (7) for approximating Riemann–Liouville fractional derivative \(\, _{0}^{R} D_{t}^{\alpha } f(t_{N}), \alpha \in (1, 2)\) at a fixed time \(t_{N} =T\) for some smooth functions \(f(t), t \in [0, T]\).

By Theorem 1, we obtain

$$\begin{aligned} _0^R D_{t}^{\alpha } f(t_{N}) = \tau ^{-\alpha } \sum _{k=0}^{N} w_{k N} f(t_{N-k}) + R_{N}(g), \end{aligned}$$
(64)

where the error \(R_{N}(g)\) satisfies

$$\begin{aligned} R_{N}(g)&= \big ( d_{3} \tau ^{3- \alpha } + d_{4} \tau ^{4-\alpha } + d_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} \tau ^{4} + d_{3}^{*} \tau ^{6} + d_{4}^{*} \tau ^{8} + \cdots \big ), \end{aligned}$$

for some suitable coefficients \(d_{l}, \, l=3, 4 \ldots \) and \(d_{l}^{*}, \, l=2, 3 \ldots \). This implies for fixed \(t_{N}=T\),

$$\begin{aligned} _0^R D_{t}^{\alpha }f(t_{N})= & {} \tau ^{-\alpha } \sum _{k=0}^{N} w_{k N} f(t_{N-k}) \nonumber \\{} & {} +\big ( d_{3} \tau ^{3- \alpha } + d_{4} \tau ^{4-\alpha } + d_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} \tau ^{4} + d_{3}^{*} \tau ^{6} + d_{4}^{*} \tau ^{8} + \cdots \big ).\nonumber \\ \end{aligned}$$
(65)

Denote \(A = \, _0^R D_{t}^{\alpha }f(t_{N})\) and approximate A by \( A_{0}(\tau ) = \tau ^{-\alpha } \sum _{k=0}^{N} w_{k N} f(t_{N-k}). \) Then, by (65),

$$\begin{aligned} A= A_{0}(\tau ) + \big ( d_{3} \tau ^{3- \alpha } + d_{4} \tau ^{4-\alpha } + d_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( d_{2}^{*} \tau ^{4} + d_{3}^{*} \tau ^{6} + d_{4}^{*} \tau ^{8} + \cdots \big ).\nonumber \\ \end{aligned}$$
(66)

In Table 1, we choose \(f(t) = t^{5}, \tau = 1/20\), \(b=2\) and \(T=1\). We obtain the approximate solutions with the step sizes \( \big ( \tau , \frac{\tau }{2}, \frac{\tau }{2^2}, \frac{\tau }{2^3}, \frac{\tau }{2^4}, \frac{\tau }{2^5} \big )= \big ( \frac{1}{20}, \frac{1}{40}, \frac{1}{80}, \frac{1}{160}, \frac{1}{320}, \frac{1}{640} \big )\).

We observe that the experimentally determined order of convergence (EOC) of the approximate solutions in the first column is \(O(\tau ^{3- \alpha })\) with \(\alpha \in (1, 2)\). After the first extrapolation, the new approximate solutions have the experimentally determined convergence order \(O(\tau ^{4- \alpha })\). After the second extrapolation, the experimentally determined convergence order of the approximate solutions is slightly less than the expected order \(O(\tau ^{5- \alpha })\) because of computational errors. Here, the number in brackets \(( \cdot )\) denotes the expected convergence order.

In Tables 2 and 3, we choose \(f(t) = \cos (\pi t)\) and \(f(t) = e^{t}\), respectively, with the same parameters as in Table 1. We observe experimentally determined orders of convergence similar to those in Table 1.

Table 1 Errors for approximating \(\, _{0}^{R}D_{t}^{\alpha } \big ( t^{5} \big )\) with \(\alpha =1.5 \), taken at \(T=1\) in Example 1
Table 2 Errors for approximating \(\, _{0}^{R}D_{t}^{\alpha } \big ( \cos \pi t \big )\) with \(\alpha =1.5 \), taken at \(T=1\) in Example 1
Table 3 Errors for approximating \(\, _{0}^{R}D_{t}^{\alpha } \big ( e^t \big )\) with \(\alpha =1.5 \), taken at \(T=1\) in Example 1

Example 2

Consider the following linear fractional differential equation

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(t), \quad 0 \le t \le T, \end{aligned}$$
(67)

where \(y(t) = t^{5}\), \( \beta =-1\) and \(f(t) = \, _{0}^{R} D_{t}^{\alpha } t^5 + t^5\). The initial values are \(y_{0}= y_{0}^{1}=0\).

Let \(A= y(t_{N})\) with \(t_{N}=1\) be the exact solution of (67). Let \(A_{0}(\tau )= y_{N}\) be the approximate solution obtained from (29). By Theorem 2, we arrive at

$$\begin{aligned} y(t_{N}) - y_{N}&=\big ( c_{3} \tau ^{3- \alpha } + c_{4} \tau ^{4-\alpha } + c_{5} \tau ^{5-\alpha } + \cdots \big ) + \big ( c_{2}^{*} \tau ^{4} + c_{3}^{*} \tau ^{6} + c_{4}^{*} \tau ^{8} + \cdots \big ),\nonumber \\ \end{aligned}$$
(68)

for some suitable constants \(c_{\mu }, \mu = 3, 4,\ldots \) and \(c^{*}_{\mu }, \mu = 2, 3,\ldots \).

In Tables 4 and 5, we choose \(\tau = 1/20\), \(b=2\), \( y_{0}=0\) and \(y_{1} = \tau ^{5}\). We obtain the extrapolated values of the approximate solutions with the step sizes \( \big ( \tau , \frac{\tau }{2}, \frac{\tau }{2^2}, \frac{\tau }{2^3}, \frac{\tau }{2^4}, \frac{\tau }{2^5} \big )= \big ( \frac{1}{20}, \frac{1}{40}, \frac{1}{80}, \frac{1}{160}, \frac{1}{320}, \frac{1}{640} \big )\).

We observe that the experimentally determined order of convergence (EOC) of the approximate solution in the first column is \(O(\tau ^{3- \alpha })\) with \(\alpha \in (1, 2)\). After the first extrapolation, the new approximate solutions have an experimentally determined order of convergence slightly less than the expected order \(O(\tau ^{4- \alpha })\) because of computational errors. After the second extrapolation, the experimentally determined order of convergence of the approximate solutions is also slightly less than the expected order \(O(\tau ^{5- \alpha })\) due to computational errors.

In Tables 4 and 5, we also compare the convergence orders and CPU times of our numerical scheme with those of the scheme developed in [8], where an approximation scheme (47)–(49) with convergence order \(O(\tau ^{4-\alpha })\), \(\alpha \in (1, 2)\), was proposed for solving time-fractional wave equations of order \(\alpha \in (1, 2)\). We modified the scheme (47)–(49) in [8] to solve fractional differential equations of order \(\alpha \in (1, 2)\). Our scheme achieves a convergence order close to \(O(\tau ^{5-\alpha })\) after two extrapolations, while requiring nearly the same CPU time as the method in [8]. This indicates that our scheme exhibits better convergence behavior than the scheme in [8], which only achieves a convergence order of \(O(\tau ^{4-\alpha })\).

Table 4 Errors for Eq. (67) with \(\alpha =1.2 \), taken at \(T=1\) in Example 2
Table 5 Errors for Eq. (67) with \(\alpha =1.8\), taken at \(T=1\) in Example 2

Example 3

Consider the following semilinear fractional differential equation

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(y(t)) + g(t), \quad 0 \le t \le T, \end{aligned}$$
(69)

where \(y(t) = t^{5}\), \( \beta =-1\), \(f(y) = \sin (y)\) and \(g(t)= \, _{0}^{R} D_{t}^{\alpha } t^5 + t^5 - \sin (t^5)\).

For given \(y_{0} = y (0) =0, \, y_{1} = y(\tau )= \tau ^{5}\), we define the following numerical method, with \(n \ge 2\),

$$\begin{aligned} w_{0} y_{n} - \tau ^{\alpha } \beta y_{n} - \tau ^{\alpha } f(y_{n}) =&- \sum _{j=1}^{n} w_{j} y_{n-j} + \tau ^{\alpha } g(t_{n}) + \tau ^{\alpha } \Big ( \frac{\varGamma (1)}{\varGamma (1- \alpha )} t_{n}^{-\alpha } \Big ) y(0)\nonumber \\&+ \tau ^{\alpha } \Big ( \frac{\varGamma (2)}{\varGamma (2- \alpha )} t_{n}^{1-\alpha } \Big ) y^{\prime } (0). \end{aligned}$$
(70)

Let \(A= y(t_{N})\) with \(t_{N}=1\) be the exact solution of (69). Let \(A_{0}(\tau )= y_{N}\) be the approximate solution obtained from (70) by using the MATLAB function fsolve.
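Each time step of (70) requires solving one scalar nonlinear equation for \(y_{n}\); the experiments use MATLAB's fsolve, but any root finder suffices. A minimal Newton sketch (Python; the weight `w0`, the right-hand side `r` and all sample values are placeholders, not the actual quantities of the scheme):

```python
import math

def solve_step(r, w0, beta, tau, alpha, f, df, y0=0.0, tol=1e-12, maxit=50):
    """Solve the per-step equation of scheme (70),
        w0*y - tau**alpha * beta * y - tau**alpha * f(y) = r,
    for y by Newton's method; r collects all known terms of the step."""
    y = y0
    for _ in range(maxit):
        F = w0 * y - tau ** alpha * beta * y - tau ** alpha * f(y) - r
        dF = w0 - tau ** alpha * beta - tau ** alpha * df(y)
        y_next = y - F / dF
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

# Illustration with f(y) = sin(y), beta = -1 and placeholder values w0, r:
w0, beta, tau, alpha, r = 1.0, -1.0, 0.05, 1.5, 0.02
y = solve_step(r, w0, beta, tau, alpha, math.sin, math.cos)
```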

In Table 6, we choose \(\tau = 1/20\), \(b=2\). We obtain the extrapolated values of the approximate solutions with the step sizes \( \big ( \tau , \frac{\tau }{2}, \frac{\tau }{2^2}, \frac{\tau }{2^3}, \frac{\tau }{2^4}, \frac{\tau }{2^5} \big )= \big ( \frac{1}{20}, \frac{1}{40}, \frac{1}{80}, \frac{1}{160}, \frac{1}{320}, \frac{1}{640} \big )\).

We observe in Table 6 that the experimentally determined order of convergence (EOC) of the approximate solution in the first column is almost \(O(\tau ^{3- \alpha })\) with \(\alpha =1.5\). After the first extrapolation, we get the new approximate solutions. The experimentally determined order of convergence is slightly less than the expected \(O(\tau ^{4- \alpha })\). After the second extrapolation, the experimentally determined order of convergence of the approximate solutions is also less than the expected \(O(\tau ^{5- \alpha })\) due to the nonlinearity of the problem.

Table 6 Errors for Eq. (69) with \(\alpha =1.5 \), taken at \(T=1\) in Example 3

Example 4

Consider the following semilinear fractional differential equation

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(y(t)) + g(t), \quad 0 \le t \le T, \end{aligned}$$
(71)

where \(y(t) = t^{5}\), \( \beta =-1\), \(f(y) = y-y^3\) and \(g(t)= \, _{0}^{R} D_{t}^{\alpha } t^5 + t^5 - \big ( t^5 - (t^5)^3 \big ) \).

For given \(y_{0} = y (0) =0, \, y_{1} = y(\tau )= \tau ^{5}\), we define the following numerical method, with \(n \ge 2\),

$$\begin{aligned} w_{0} y_{n} - \tau ^{\alpha } \beta y_{n} - \tau ^{\alpha } f(y_{n}) =&- \sum _{j=1}^{n} w_{j} y_{n-j} + \tau ^{\alpha } g(t_{n}) + \tau ^{\alpha } \Big ( \frac{\varGamma (1)}{\varGamma (1- \alpha )} t_{n}^{-\alpha } \Big ) y(0)\nonumber \\&+ \tau ^{\alpha } \Big ( \frac{\varGamma (2)}{\varGamma (2- \alpha )} t_{n}^{1-\alpha } \Big ) y^{\prime } (0). \end{aligned}$$
(72)

Let \(A= y(t_{N})\) with \(t_{N}=1\) be the exact solution of (71). Let \(A_{0}(\tau )= y_{N}\) be the approximate solution obtained from (72) by using the MATLAB function fsolve.

We use the same parameters as in Example 3. In Table 7, we observe experimentally determined convergence orders similar to those in Example 3.

Table 7 Errors for Eq. (71) with \(\alpha =1.5 \), taken at \(T=1\) in Example 4

Note that this type of nonlinearity does not satisfy the uniform Lipschitz condition (2), but it does satisfy a local Lipschitz condition. The computational results above indicate that it may nevertheless be possible to derive the error analysis for this case; this will be part of our future work.

Example 5

Consider the following linear fractional differential equation

$$\begin{aligned} _0^R D_{t}^{\alpha } \Big [ y(t) - y(0) - \frac{y'(0)}{1!} t \Big ] = \beta y(t) + f(t), \quad 0 \le t \le T, \end{aligned}$$
(73)

where \(y(t) = t^{\gamma }\) with \( \gamma >1 \), \( \beta =-1\) and \(f(t) = \, _{0}^{R} D_{t}^{\alpha } t^{\gamma } + t^{\gamma }\). The initial values are \(y_{0}= y_{0}^{1}=0\).

In this example, we consider the case where the solution is not sufficiently smooth. We choose \(\gamma = 1.4\), so the exact solution \(y (t) = t^{1.4}\) lacks the smoothness required by our analysis. In Table 8 we choose \( \alpha = 1.4\), \(\tau = 1/20\), \(b=2\), \(y_{0} =0\) and \( y_{1}= \tau ^{1.4}\), and use the same step sizes \( \big ( \tau , \frac{\tau }{2}, \frac{\tau }{2^2}, \frac{\tau }{2^3}, \frac{\tau }{2^4}, \frac{\tau }{2^5} \big )= \big ( \frac{1}{20}, \frac{1}{40}, \frac{1}{80}, \frac{1}{160}, \frac{1}{320}, \frac{1}{640} \big )\) as in Example 2. We observe that the convergence orders obtained after extrapolations are much lower than the expected values due to the solution's lack of smoothness. To address this issue, alternative approaches to extrapolation need to be developed for nonsmooth solutions; for further insights, see Remark 5 in Sect. 3.

Table 8 Errors for Eq. (73) with \(\alpha =1.4\), taken at \(T=1\) in Example 5

6 Conclusion

In this paper, we construct a new high-order scheme for approximating the Riemann–Liouville fractional derivative of order \(\alpha \in (1, 2)\), based on the Hadamard finite-part integral expression of the Riemann–Liouville fractional derivative, and prove an asymptotic expansion of the approximation error. Using the proposed scheme, we obtain a high-order numerical method for a linear fractional differential equation whose error also admits an asymptotic expansion. We further construct and analyze a high-order numerical scheme for approximating a semilinear fractional differential equation and show that its error again has an asymptotic expansion. The numerical experiments are consistent with our theoretical results.