1 Introduction

Over the past few decades, many natural phenomena have been successfully modeled using fractional differential equations [1–6]. As an example, the authors in [7] constructed the following equation, which describes the motion of a rigid plate immersed in a Newtonian fluid:

$$ y''(t)+AD^{\frac{3}{2}}y(t)+By(t)=f(t),\quad 0\leq t \leq T, $$
(1)

and

$$ y(0)=a_{0},\qquad y(T)=a_{1}, $$
(2)

where the operator \(D^{\frac{3}{2}}\) is a Liouville–Caputo derivative, \(A\neq 0,B\) are constants, and the function \(f(t)\) is known. The existence and uniqueness of the problem have been established in [8, 9]. Many methods have been developed to deal with the problem in the literature [10–14]. Podlubny also investigated this equation and introduced an approximate analytical solution by Green’s function in his monograph [15]. Ray and Bera [16] adopted a semi-analytical method for solving the Bagley–Torvik equation and obtained the same solution as Podlubny. Rajarama and Chakraverty [17] adopted the Sumudu transformation method to obtain the analytical solution of the problem. Cenesiz et al. [18–20] suggested a Taylor polynomial along with the collocation method for dealing with a class of fractional differential equations including the Bagley–Torvik equation. In [21–24], the wavelet method was used to deal with the problems. Diethelm and Ford [25] solved the problem by using an Adams predictor–corrector method. In [26–28] spectral collocation methods based on hybrid functions and Chebyshev polynomials were employed to handle the equation. Moreover, shifted Legendre polynomial based Galerkin and collocation methods were utilized for delay Bagley–Torvik equations in [29]. Most recently, Hou and Ji [30, 31] introduced Jacobi polynomials and Laplace transform together with Laguerre polynomials to solve the equation.

Papers [26, 27] and [32] focused on Chebyshev polynomial methods for the Bagley–Torvik equation; in these studies, the operational matrix of fractional integration and the Tau method were applied to tackle the problem. The objective of the current study is to develop a modified Chebyshev spectral collocation method for the Bagley–Torvik equation. We generate the operational matrices of derivatives for shifted Chebyshev polynomials in the physical space. Thereafter, we obtain a discrete numerical scheme in which the nonhomogeneous terms are not approximated. A rigorous error analysis in the \(L^{\infty }\)-norm is provided.

2 The fractional integration and differentiation

In this section, we mainly introduce the widely used Riemann–Liouville fractional integral and Liouville–Caputo fractional derivative.

Definition 1

([33])

The Riemann–Liouville fractional integral operator \(J^{\alpha }\), \(\alpha >0\), is defined as follows:

$$ J^{\alpha }f(t)=\frac{1}{\varGamma (\alpha )} \int ^{t}_{0}(t-s)^{ \alpha -1}f(s)\,ds,\quad \alpha >0. $$

Definition 2

([33])

The Liouville–Caputo fractional derivative operator \(D^{\alpha }\) is defined as follows:

$$ D^{\alpha }f(t)=\frac{1}{\varGamma (n-\alpha )} \int _{0}^{t}(t-\tau )^{n- \alpha -1}f^{(n)}( \tau )\,d\tau $$
(3)

for \(n-1< \alpha \leq n\), \(n\in \mathbb{N}\), \(t>0\), \(f(t)\in C_{-1}^{n}\).

For the Liouville–Caputo derivative (3), we have

$$ D^{\alpha }x^{\beta }= \textstyle\begin{cases} 0 & \text{for $\beta \in \mathbb{N}_{0}$ and $\beta < \lceil \alpha \rceil $}; \\ \frac{\varGamma (\beta +1)}{\varGamma (\beta +1-\alpha )}x^{\beta - \alpha } & \text{for $\beta \in \mathbb{N}_{0}$ and $\beta \geq \lceil \alpha \rceil $} \\ & \text{or $\beta \notin \mathbb{N}_{0}$ and $\beta >\lfloor \alpha \rfloor $}. \end{cases} $$
(4)

Here, \(\lceil \alpha \rceil \) and \(\lfloor \alpha \rfloor \) are the ceiling and floor functions, respectively. Also \(\mathbb{N}_{0}=\{0,1,2,\ldots \}\).
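The power rule (4) is straightforward to implement; a small sketch (the helper name is ours):

```python
from math import gamma, ceil

def caputo_power(alpha, beta, t):
    # Power rule (4): the derivative vanishes for integer beta < ceil(alpha);
    # otherwise it is Gamma(beta+1)/Gamma(beta+1-alpha) * t^(beta-alpha).
    # (For non-integer beta the formula applies only when beta > floor(alpha).)
    if float(beta).is_integer() and beta < ceil(alpha):
        return 0.0
    return gamma(beta + 1) / gamma(beta + 1 - alpha) * t ** (beta - alpha)

# D^{3/2} t^2 = Gamma(3)/Gamma(3/2) t^{1/2}, and D^{3/2} t = 0 since 1 < ceil(3/2) = 2
print(caputo_power(1.5, 2, 1.0))   # ≈ 2.2568
print(caputo_power(1.5, 1, 1.0))   # 0.0
```

These two values reappear in Example 9.1 below, where the right-hand side contains the factor \(2/\varGamma (3/2)\).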

3 Shifted Chebyshev polynomials and their properties

The well-known Chebyshev polynomials are defined on the interval \([-1,1]\) and are obtained by expanding the following formulae:

$$ T_{n}(x)=\cos \bigl(n\arccos (x)\bigr),\quad n=0,1,\ldots ; x\in [-1,1]. $$

To use these polynomials on the interval \(t\in [0,L]\) for any real \(L>0\), we introduce the change of variable \(x=2t/L-1\), \(0\leq t \leq L\), and obtain the shifted Chebyshev polynomials

$$ T^{*}_{Ln}(t)=T_{n}(2t/L-1). $$

The shifted Chebyshev polynomials \(T^{*}_{Ln}(t)\) satisfy the recurrence relation

$$ T^{*}_{Ln+1}(t)=2 ( 2t/L-1 )T^{*}_{Ln}(t)-T^{*}_{Ln-1}(t),\quad n\in \mathbb{N}, $$

where \(T^{*}_{L0}(t)=1\), \(T^{*}_{L1}(t)=2t/L-1\). The analytic form of the shifted Chebyshev polynomials \(T^{*}_{Li}(t)\) of degree i is given by

$$ T^{*}_{Li}(t)=i\sum _{k=0}^{i}(-1)^{i-k} \frac{(i+k-1)!\,2^{2k}}{(i-k)!(2k)!\,L^{k}}t^{k}, $$
(5)

where \(T^{*}_{Li}(0)=(-1)^{i}\), \(T^{*}_{Li}(L)=1\). The \(T^{*}_{Li}(t)\) also satisfy the discrete orthogonality condition

$$ \mathop{{\sum}''}_{k=0}^{N}T^{*}_{Li}(t_{k})T^{*}_{Lj}(t_{k})= \textstyle\begin{cases} 0 & \text{for $i\neq j$}; \\ N & \text{for $i=j=0$ or $i=j=N$}; \\ N/2 & \text{for $0< i=j< N$}, \end{cases} $$

where the interpolation points are chosen to be the Chebyshev–Gauss–Lobatto points associated with the interval \([0,L]\), \(t_{k}=\frac{L}{2}(1-\cos (k\pi /N))\), \(k=0,1,2,\ldots ,N\). Here, the summation symbol with double primes denotes a sum with both the first and last terms halved.
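These constructions are easy to check numerically. The sketch below (function names are ours) builds the shifted polynomials by the recurrence and verifies both the endpoint values and the double-primed discrete orthogonality at the Chebyshev–Gauss–Lobatto points:

```python
import numpy as np

def cgl_points(N, L):
    # Chebyshev-Gauss-Lobatto points on [0, L]: t_k = L/2 (1 - cos(k pi / N))
    return L / 2 * (1 - np.cos(np.arange(N + 1) * np.pi / N))

def shifted_chebyshev(N, t, L):
    # Row i holds the values T*_{Li}(t), built by the three-term recurrence
    t = np.atleast_1d(np.asarray(t, dtype=float))
    T = np.zeros((N + 1, t.size))
    T[0] = 1.0
    if N >= 1:
        T[1] = 2 * t / L - 1
    for n in range(1, N):
        T[n + 1] = 2 * (2 * t / L - 1) * T[n] - T[n - 1]
    return T

N, L = 6, 2.0
tk = cgl_points(N, L)
T = shifted_chebyshev(N, tk, L)
w = np.ones(N + 1); w[0] = w[-1] = 0.5        # double prime: halve first/last terms
G = (T * w) @ T.T                             # G[i, j] = sum'' T*_i(t_k) T*_j(t_k)
print(np.round(G, 12))                        # diagonal N, N/2, ..., N/2, N; zeros elsewhere
```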

4 The operational matrix of derivative

A continuous and bounded function \(y(t)\) can be approximated in terms of shifted Chebyshev polynomials in the interval \([0,L]\) by the formula

$$ y(t)\approx y_{N}(t)=\mathop{{\sum}''}_{k=0}^{N}c_{k}T^{*}_{Lk}(t). $$

(6)

Using the discrete orthogonality relation, the coefficient \(c_{k}\) in (6) is given by the explicit formula

$$ c_{k}=\frac{2}{N}\mathop{{\sum}''}_{j=0}^{N}y(t_{j})T^{*}_{Lk}(t_{j}), \quad k=0,1,\ldots ,N. $$

(7)

Applying (6), (7), the function \(y_{N}(t)\) can be written collectively in a matrix form

$$ y_{N}(t)=T(t)\cdot P\cdot Y, $$
(8)

where

$$\begin{aligned}& T(t)=\bigl[T^{*}_{L0}(t), T^{*}_{L1}(t), \ldots ,T^{*}_{LN-1}(t),T^{*}_{LN}(t) \bigr], \\& P= \begin{bmatrix} \frac{1}{2N}T^{*}_{L0}(t_{0}) & \frac{2}{2N}T^{*}_{L0}(t_{1}) & \frac{2}{2N}T^{*}_{L0}(t_{2})& \cdots & \frac{1}{2N}T^{*}_{L0}(t_{N}) \\ \frac{1}{N}T^{*}_{L1}(t_{0}) & \frac{2}{N}T^{*}_{L1}(t_{1}) & \frac{2}{N}T^{*}_{L1}(t_{2})& \cdots & \frac{1}{N}T^{*}_{L1}(t_{N}) \\ \frac{1}{N}T^{*}_{L2}(t_{0}) & \frac{2}{N}T^{*}_{L2}(t_{1}) & \frac{2}{N}T^{*}_{L2}(t_{2})& \cdots & \frac{1}{N}T^{*}_{L2}(t_{N}) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{2N}T^{*}_{LN}(t_{0}) & \frac{2}{2N}T^{*}_{LN}(t_{1}) & \frac{2}{2N}T^{*}_{LN}(t_{2})& \cdots & \frac{1}{2N}T^{*}_{LN}(t_{N}) \end{bmatrix}, \end{aligned}$$

and

$$ Y= \bigl[ y(t_{0}), y(t_{1}), y(t_{2}), \ldots , y(t_{N}) \bigr] ^{T}. $$

The derivative \(y_{N}'(t)\) is as follows:

$$ y_{N}'(t)=T'(t)\cdot P\cdot Y. $$
(9)

We know that

$$ T'(t)=T(t)\cdot \frac{2}{L}M, $$
(10)

in which M is the \((N+1)\times (N+1)\) operational matrix of derivative given by

$$ M= \begin{bmatrix} 0 &1 & 0 & 3 & 0 & 5 & \cdots & m_{1} \\ 0 &0 & 4 & 0 & 8 & 0 & \cdots & m_{2} \\ 0 &0 & 0 & 6 & 0 & 10 & \cdots & m_{3} \\ \vdots &\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 &0 & 0 & 0 & 0 & 0 & \cdots & 0 \end{bmatrix}_{(N+1)\times (N+1)}, $$

where \(m_{1}\), \(m_{2}\), and \(m_{3}\) are respectively \(N\), 0, \(2N\) for odd \(N\) and 0, \(2N\), 0 for even \(N\). Then, we substitute equation (10) into (9) to get

$$ y_{N}'(t)=T(t)\cdot \frac{2}{L}M \cdot P \cdot Y. $$
(11)
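The identity (10) can be checked numerically against the classical derivative formula \(T_{j}'(\cos \theta )=j\sin (j\theta )/\sin \theta \); the sketch below (our own construction) builds \(M\) and compares the two at a test point:

```python
import numpy as np

def deriv_matrix_M(N):
    # M[i, j] = 2j for j > i with j - i odd, halved in the first row
    M = np.zeros((N + 1, N + 1))
    for j in range(1, N + 1):
        for i in range((j - 1) % 2, j, 2):
            M[i, j] = j if i == 0 else 2 * j
    return M

N, L = 7, 2.0
M = deriv_matrix_M(N)
t = 0.6 * L                                            # an interior test point
theta = np.arccos(2 * t / L - 1)
j = np.arange(N + 1)
T_row = np.cos(j * theta)                              # T*_{Lj}(t)
dT_exact = (2 / L) * j * np.sin(j * theta) / np.sin(theta)   # chain rule in t
dT_via_M = T_row @ ((2 / L) * M)                       # right-hand side of (10)
print(np.max(np.abs(dT_via_M - dT_exact)))             # agreement to rounding error
```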

Therefore \(y_{N}'(t)\) can be expressed in a discretized form as follows:

$$ Y^{(1)}=Q\cdot \frac{2}{L} M \cdot P \cdot Y, $$

where

$$ Y^{(1)}=\bigl[y'(t_{0}), y'(t_{1}), y'(t_{2}), \ldots , y'(t_{N})\bigr]^{T} $$

and

$$ Q= \begin{bmatrix} T^{*}_{L0}(t_{0})& T^{*}_{L1}(t_{0}) & T^{*}_{L2}(t_{0}) & \cdots & T^{*}_{LN}(t_{0}) \\ T^{*}_{L0}(t_{1})& T^{*}_{L1}(t_{1}) & T^{*}_{L2}(t_{1}) & \cdots & T^{*}_{LN}(t_{1}) \\ T^{*}_{L0}(t_{2})& T^{*}_{L1}(t_{2}) & T^{*}_{L2}(t_{2}) & \cdots & T^{*}_{LN}(t_{2}) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ T^{*}_{L0}(t_{N})& T^{*}_{L1}(t_{N}) & T^{*}_{L2}(t_{N}) & \cdots & T^{*}_{LN}(t_{N}) \end{bmatrix}. $$

So, we can get the operational matrix of derivative

$$ D^{(1)}=Q\cdot \frac{2}{L} M\cdot P. $$

Furthermore, since \(P\) is the matrix inverse of \(Q\) (interpolating the nodal values reproduces them), the operational matrix of the \(n\)th-order derivative can be completely determined from that of the first derivative:

$$ D^{(n)}=D^{(1)}D^{(1)}\cdots D^{(1)}=Q\cdot \biggl(\frac{2}{L} \biggr)^{n}M^{n} \cdot P. $$
(12)
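A minimal numerical check of (12) (our own assembly of \(Q\), \(P\), and \(M\)) confirms that \(Q\cdot P\) is the identity and that \(D^{(1)}\) differentiates polynomials of degree at most \(N\) exactly at the nodes:

```python
import numpy as np

N, L = 8, 2.0
k = np.arange(N + 1)
tk = L / 2 * (1 - np.cos(k * np.pi / N))      # Chebyshev-Gauss-Lobatto points
theta = np.arccos(2 * tk / L - 1)             # theta_i = (N - i) pi / N
Q = np.cos(np.outer(theta, k))                # Q[i, j] = T*_{Lj}(t_i)

w = np.ones(N + 1); w[0] = w[-1] = 0.5        # end-point halving of (6), (7)
P = (2.0 / N) * (w[:, None] * Q.T * w[None, :])

M = np.zeros((N + 1, N + 1))                  # operational matrix M of Sect. 4
for j in range(1, N + 1):
    for i in range((j - 1) % 2, j, 2):
        M[i, j] = j if i == 0 else 2 * j

D1 = Q @ ((2 / L) * M) @ P                    # nodal differentiation matrix D^(1)
print(np.allclose(Q @ P, np.eye(N + 1)))      # True: P inverts Q
y = tk**3 - tk                                # a polynomial test function
print(np.max(np.abs(D1 @ y - (3 * tk**2 - 1))))   # exact up to rounding
```

Since \(P=Q^{-1}\), products of \(D^{(1)}\) collapse to \(Q\,(2M/L)^{n}\,P\) exactly as in (12).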

5 Calculation of the operational matrix of fractional order derivatives

According to the definition of Liouville–Caputo fractional derivative, we can write

$$ D^{\alpha }y_{N}(x)= D^{\alpha }T(x) \cdot P \cdot Y, $$
(13)

where \(\alpha >0\). Writing \(T(x)=X\cdot N\) with the monomial vector \(X\) and the coefficient matrix \(N\) given below, and applying (5), the Liouville–Caputo fractional derivative of the vector \(T(x)\) in (13) can be expressed as

$$ D^{\alpha }T(x)= \bigl(D^{\alpha }X\bigr)\cdot N, $$
(14)

where

$$ N= \begin{bmatrix} 1 & -1 & 1 & -1 & \cdots & (-1)^{N} \\ 0 & 2/L & -8/L & 18/L & \cdots & (-1)^{N-1}2N^{2}/L \\ 0 & 0 & 8/L^{2} & -48/L^{2} & \cdots & (-1)^{N-2}\frac{2}{3}N^{2}(N^{2}-1)/L^{2} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 &0 & 0 &\cdots & \cdots & 2^{2N-1}/L^{N} \end{bmatrix} $$

and

$$ X=\bigl[1,t,t^{2},\ldots ,t^{N}\bigr]. $$

Using (4) we have

$$ D^{\alpha }X= \bigl[0,\ldots ,0, c_{\lceil \alpha \rceil }t^{\lceil \alpha \rceil -\alpha }, c_{\lceil \alpha \rceil +1}t^{\lceil \alpha \rceil +1-\alpha }, \ldots , c_{N}t^{N-\alpha } \bigr], $$
(15)

where

$$ c_{\lceil \alpha \rceil }= \frac{\varGamma (\lceil \alpha \rceil +1)}{\varGamma (\lceil \alpha \rceil +1-\alpha )}, \qquad c_{\lceil \alpha \rceil +1}= \frac{\varGamma (\lceil \alpha \rceil +2)}{\varGamma (\lceil \alpha \rceil +2-\alpha )}, \qquad \ldots ,\qquad c_{N}=\frac{\varGamma (N+1)}{\varGamma (N+1-\alpha )}. $$

Employing (13), (14), and (15) at the collocation points, we get

$$ Y^{(\alpha )}=C \cdot N\cdot P\cdot Y, $$

where

$$ Y^{(\alpha )}=\bigl[y^{(\alpha )}(t_{0}), y^{(\alpha )}(t_{1}), y^{(\alpha )}(t_{2}), \ldots , y^{(\alpha )}(t_{N})\bigr]^{T} $$

and

$$ C= \begin{bmatrix} 0&\cdots & 0&c_{\lceil \alpha \rceil }x_{0}^{\lceil \alpha \rceil - \alpha }& c_{\lceil \alpha \rceil +1}x_{0}^{\lceil \alpha \rceil +1- \alpha } & \cdots & c_{N}x_{0}^{N-\alpha } \\ 0&\cdots & 0&c_{\lceil \alpha \rceil }x_{1}^{\lceil \alpha \rceil - \alpha }& c_{\lceil \alpha \rceil +1}x_{1}^{\lceil \alpha \rceil +1- \alpha } & \cdots & c_{N}x_{1}^{N-\alpha } \\ 0&\cdots & 0&c_{\lceil \alpha \rceil }x_{2}^{\lceil \alpha \rceil - \alpha }& c_{\lceil \alpha \rceil +1}x_{2}^{\lceil \alpha \rceil +1- \alpha } & \cdots & c_{N}x_{2}^{N-\alpha } \\ \vdots & \cdots & \vdots & \vdots & \vdots &\ddots & \vdots \\ 0&\cdots & 0&c_{\lceil \alpha \rceil }x_{N}^{\lceil \alpha \rceil - \alpha }& c_{\lceil \alpha \rceil +1}x_{N}^{\lceil \alpha \rceil +1- \alpha } & \cdots & c_{N}x_{N}^{N-\alpha } \end{bmatrix}. $$

Then we get the operational matrix of fractional derivative

$$ D^{(\alpha )}=C \cdot N\cdot P. $$
(16)

For simplicity, the operational matrix of the fractional derivative in (16) can be written collectively in the following form:

$$ D^{(\alpha )}= \begin{bmatrix} d^{(\alpha )}_{00} & d^{(\alpha )}_{01} & \cdots & d^{(\alpha )}_{0N} \\ d^{(\alpha )}_{10} & d^{(\alpha )}_{11} & \cdots & d^{(\alpha )}_{1N} \\ \vdots & \vdots & \ddots & \vdots \\ d^{(\alpha )}_{N0} & d^{(\alpha )}_{N1} & \cdots & d^{(\alpha )}_{NN} \end{bmatrix} . $$
(17)
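The assembly \(D^{(\alpha )}=C\cdot N\cdot P\) can be sketched as follows (function names are ours; the monomial-coefficient matrix is generated by the polynomial recurrence rather than typed from the display above). The check at the end uses the fact that, for \(y=t^{2}-t\) on \([0,1]\), \(D^{3/2}y=2t^{1/2}/\varGamma (3/2)\):

```python
import numpy as np
from math import gamma, ceil

def frac_deriv_matrix(N, alpha, L):
    # Nodal matrix D^(alpha) = C . Ncal . P of (16)
    k = np.arange(N + 1)
    tk = L / 2 * (1 - np.cos(k * np.pi / N))          # CGL points
    theta = np.arccos(2 * tk / L - 1)
    Q = np.cos(np.outer(theta, k))                    # Q[i, j] = T*_{Lj}(t_i)
    w = np.ones(N + 1); w[0] = w[-1] = 0.5
    P = (2.0 / N) * (w[:, None] * Q.T * w[None, :])   # interpolation matrix (7)

    # Ncal: T(t) = [1, t, ..., t^N] @ Ncal, columns built by the recurrence
    Ncal = np.zeros((N + 1, N + 1))
    Ncal[0, 0] = 1.0
    if N >= 1:
        Ncal[0, 1], Ncal[1, 1] = -1.0, 2.0 / L
    for n in range(1, N):
        nxt = -2.0 * Ncal[:, n] - Ncal[:, n - 1]      # T*_{n+1} = (4t/L - 2) T*_n - T*_{n-1}
        nxt[1:] += (4.0 / L) * Ncal[:-1, n]
        Ncal[:, n + 1] = nxt

    # C of (15): row i holds c_j t_i^(j - alpha) for j >= ceil(alpha)
    C = np.zeros((N + 1, N + 1))
    for j in range(ceil(alpha), N + 1):
        C[:, j] = gamma(j + 1) / gamma(j + 1 - alpha) * tk ** (j - alpha)
    return tk, C @ Ncal @ P

tk, D32 = frac_deriv_matrix(3, 1.5, 1.0)
err = np.max(np.abs(D32 @ (tk**2 - tk) - 2 * np.sqrt(tk) / gamma(1.5)))
print(err)   # exact up to rounding: quadratics are reproduced exactly
```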

6 Applications to the Bagley–Torvik equation

To show the fundamental importance of the operational matrix of fractional order derivatives, we apply it to solve the Bagley–Torvik equation. To incorporate the boundary conditions, we first separate the boundary values in the discretized derivatives:

$$\begin{aligned}& D^{\alpha }y(x_{i})=\sum ^{N}_{j=0}d_{ij}^{(\alpha )}y(x_{j})=d_{i0}^{( \alpha )}y(0)+ \sum^{N-1}_{j=1}d_{ij}^{(\alpha )}y(x_{j})+d_{iN}^{( \alpha )}y(x_{N}), \end{aligned}$$
(18)
$$\begin{aligned}& D^{2}y(x_{i})=d_{i0}^{(2)}y(0)+ \sum^{N-1}_{j=1}d_{ij}^{(2)}y(x_{j})+d_{iN}^{(2)}y(x_{N}). \end{aligned}$$
(19)

By substituting the approximations (18) and (19) into (1) and using the boundary conditions (2), we get a system of algebraic equations:

$$ D^{(2)}y(x_{i})+AD^{(3/2)}y(x_{i})+By(x_{i})=f(x_{i}),\quad i=1,2, \ldots ,N-1. $$
(20)

Solving the system of algebraic equations, we can obtain the vector Y. Then, using (8), we can get the output response

$$ y(x)=T(x)\cdot P\cdot Y. $$
(21)

7 Some useful lemmas

In this section, we give some useful lemmas, which play a significant role in the convergence analysis later. We first introduce some notation. Let \(I:=(-1,1)\) and \(L^{2}_{\omega ^{\alpha ,\beta }}(I)\) be the space of measurable functions whose square is Lebesgue integrable in I relative to the weight function \(\omega ^{\alpha ,\beta }(x)\). The inner product and norm of \(L^{2}_{\omega ^{\alpha ,\beta }}(I)\) are defined by

$$\begin{aligned}& (u,v)_{\omega ^{\alpha ,\beta },I}= \int _{-1}^{1}u(x)v(x)\omega ^{ \alpha ,\beta }\,dx, \quad \forall u,v \in L^{2}_{\omega ^{\alpha ,\beta }}(I), \\& \Vert u \Vert _{L^{2}_{\omega ^{\alpha ,\beta }}}=(u,u)_{\omega ^{\alpha ,\beta }}^{ \frac{1}{2}}. \end{aligned}$$

For a nonnegative integer m, define

$$ H^{m}_{\omega ^{\alpha ,\beta }}(I)= \bigl\{ v: \partial _{x}^{k}v \in L^{2}_{ \omega ^{\alpha ,\beta }}(I), 0\leq k \leq m \bigr\} , $$

with the seminorm and the norm as follows:

$$\begin{aligned}& \vert v \vert _{m,\omega ^{\alpha ,\beta }}= \bigl\Vert \partial _{x}^{m}v \bigr\Vert _{\omega ^{ \alpha ,\beta }}, \qquad \Vert v \Vert _{m,\omega ^{\alpha ,\beta }}= \Biggl( \sum_{k=0}^{m} \vert v \vert _{k, \omega ^{\alpha ,\beta }}^{2} \Biggr)^{\frac{1}{2}}, \\& \vert v \vert _{H^{m;N}_{\omega ^{\alpha ,\beta }}}= \Biggl( \sum_{k=\min (m,N+1)}^{m} \bigl\Vert \partial _{x}^{k}v \bigr\Vert ^{2}_{L^{2}_{\omega ^{\alpha ,\beta }}} \Biggr)^{ \frac{1}{2}}. \end{aligned}$$

Particularly, let

$$ \omega (x)=\omega ^{-\frac{1}{2},-\frac{1}{2}}(x) $$

be the Chebyshev weight function. Denote by \(L^{\infty }(-1,1)\) the space of measurable functions equipped with the norm

$$ \Vert v \Vert _{L^{\infty }}= \sup_{x\in I} \bigl\vert v(x) \bigr\vert . $$

For a given positive integer N, let \(\{x_{i}\}_{i=0}^{N}\) denote the set of \(N+1\) Gauss–Lobatto points corresponding to the weight \(\omega (x)\). By \(P_{N}\) we denote the space of all polynomials of degree not exceeding N. For all \(v\in C[-1,1]\), we define the Lagrange interpolating polynomial \(I_{N}v\in P_{N}\), satisfying

$$ I_{N}v(x_{i})=v(x_{i}). $$

The Lagrange interpolating polynomial can be written in the form

$$ I_{N}v(x)=\sum_{i=0}^{N}v(x_{i})F_{i}(x), $$

where \(F_{i}(x)\) is the Lagrange interpolation basis function associated with \(\{x_{i}\}_{i=0}^{N}\).

Lemma 3

([34])

Assume that \(v\in H^{m}_{\omega }\), and denote by \(I_{N}v\) its interpolation polynomial associated with the Gauss–Lobatto points \(\{x_{i}\}_{i=0}^{N}\), namely

$$ I_{N}v(x_{i})=v(x_{i}). $$

Then the following estimates hold:

$$ \Vert v-I_{N}v \Vert _{L^{\infty }}\leq CN^{\frac{1}{2}-m} \vert v \vert _{H^{m;N}_{\omega }}. $$
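The spectral accuracy predicted by Lemma 3 (the error decays faster than any fixed power of \(N\) for smooth \(v\)) is easy to observe numerically. The sketch below (our own naming) interpolates \(e^{x}\) at the Chebyshev–Gauss–Lobatto points via the coefficient formulas of Sect. 4, mapped to \([-1,1]\):

```python
import numpy as np

def cgl_interp_error(f, N):
    # Max error over a fine grid of the degree-N CGL interpolant of f on [-1, 1]
    k = np.arange(N + 1)
    x = -np.cos(k * np.pi / N)                 # CGL points in [-1, 1]
    Q = np.cos(np.outer(np.arccos(x), k))      # T_j(x_i)
    w = np.ones(N + 1); w[0] = w[-1] = 0.5
    P = (2.0 / N) * (w[:, None] * Q.T * w[None, :])
    c = P @ f(x)                               # Chebyshev coefficients of I_N f
    xf = np.linspace(-1.0, 1.0, 2001)
    Tf = np.cos(np.outer(np.arccos(xf), k))    # T_j on the fine grid
    return np.max(np.abs(Tf @ c - f(xf)))

for N in (4, 8, 16):
    print(N, cgl_interp_error(np.exp, N))      # errors drop rapidly with N
```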

8 Convergence analysis

In this section, an error estimate of the applied method for the solutions of the Bagley–Torvik equation is provided. For the sake of applying the theory of orthogonal polynomials, we use the variable transformation \(t=T(1+x)/2\), \(x\in [-1,1]\), to rewrite (1), (2) as follows:

$$ u''(x)+aD^{\frac{3}{2}}u(x)+bu(x)=g(x) $$
(22)

and

$$ u(-1)=a_{0},\qquad u(1)=a_{1}, $$

where

$$\begin{aligned}& u(x)=y \biggl(\frac{T(1+x)}{2} \biggr),\qquad a=A \biggl(\frac{T}{2} \biggr)^{1/2}, \\& g(x)= \biggl(\frac{T}{2} \biggr)^{2}f \biggl( \frac{T(1+x)}{2} \biggr),\qquad b=B \biggl(\frac{T}{2} \biggr)^{2}. \end{aligned}$$

Theorem 4

Let \(u(x)\) be the exact solution of the Bagley–Torvik equation (22), which is assumed to be sufficiently smooth. Let the approximate solution \(u_{N}(x)\) be obtained by using the proposed method. If \(u(x)\in H^{m}_{\omega }(I)\), then for sufficiently large N the following error estimate holds:

$$ \bigl\Vert e(x) \bigr\Vert _{L^{\infty }}\leq CN^{\frac{1}{2}-m} \vert u \vert _{H^{m;N}_{\omega }}+CN^{ \frac{1}{2}-m} \bigl\vert u'' \bigr\vert _{H^{m;N}_{\omega }} +CN^{\frac{1}{2}-m} \bigl\vert u^{(3/2)} \bigr\vert _{H^{m;N}_{\omega }}. $$
(23)

Proof

We use \(u_{i}\approx u(x_{i})\), \(u^{(\alpha )}_{i}\approx u^{(\alpha )}(x_{i})\), \(0\leq i \leq N\), and

$$ u_{N}=\sum_{j=0}^{N}u_{j}F_{j}(x),\qquad u^{(\alpha )}_{N}=\sum_{j=0}^{N}u_{j}F^{ \alpha }_{j}(x), $$

where \(F_{j}\), \(j=0,1,2,\ldots ,N\), is the Lagrange interpolation basis function. Consider equation (22); for notational simplicity we take \(a=b=1\), the general case being identical. By using Chebyshev–Gauss–Lobatto collocation points \(\{x_{i}\}_{i=0}^{N}\), we have

$$\begin{aligned}& u''(x_{i})+u^{(3/2)}(x_{i})+u(x_{i})=g(x_{i}), \end{aligned}$$
(24)
$$\begin{aligned}& u(x_{i})= \int _{-1}^{x_{i}}(x_{i}-s)u''(s)\,ds+u(-1)+(x_{i}+1)u'(-1). \end{aligned}$$
(25)

Then the numerical scheme (20) can be written as

$$\begin{aligned}& u_{N}''(x_{i})+u_{N}^{(3/2)}(x_{i})+u_{N}(x_{i})=g(x_{i}), \end{aligned}$$
(26)
$$\begin{aligned}& u_{i}= \int _{-1}^{x_{i}}(x_{i}-s)u_{N}''(s)\,ds+u(-1)+(x_{i}+1)u'(-1). \end{aligned}$$
(27)

We now subtract (24) from (26) and subtract (25) from (27) to get the error equation

$$\begin{aligned}& u''(x_{i})-u_{N}''(x_{i})+u^{(3/2)}(x_{i})-u_{N}^{(3/2)}(x_{i})+u(x_{i})-u_{N}(x_{i})=0, \end{aligned}$$
(28)
$$\begin{aligned}& u(x_{i})-u_{N}(x_{i})= \int _{-1}^{x_{i}}(x_{i}-s)e''(s)\,ds. \end{aligned}$$
(29)

Multiplying both sides of (28), (29) by \(F_{i}(x)\) and summing from \(i=0\) to N yield

$$\begin{aligned}& I_{N}u''(x)-u_{N}''(x)+I_{N}u^{(3/2)}(x)-u_{N}^{(3/2)}(x)+I_{N}u(x)-u_{N}(x)=0, \\& I_{N}u(x)-u_{N}(x)=I_{N} \int _{-1}^{x}(x-s)e''(s)\,ds. \end{aligned}$$

Consequently, writing \(e(x)=u(x)-u_{N}(x)\),

$$\begin{aligned}& e''(x)=-e(x)- \frac{1}{\varGamma (1/2)} \int _{-1}^{x}(x-s)^{- \frac{1}{2}}e''(s)\,ds+J_{1}+J_{2}+J_{3}, \\& e(x)= \int _{-1}^{x}(x-s)e''(s)\,ds+J_{1}+J_{4}, \end{aligned}$$

where

$$ \begin{aligned} &J_{1}=u(x)-I_{N}u(x), \\ &J_{2}=u''(x)-I_{N}u''(x), \\ &J_{3}=u^{(3/2)}(x)-I_{N}u^{(3/2)}(x), \\ &J_{4}=I_{N} \int _{-1}^{x}(x-s)e''(s)\,ds- \int _{-1}^{x}(x-s)e''(s)\,ds. \end{aligned} $$

It follows from the Gronwall inequality and [35] that

$$\begin{aligned}& \bigl\Vert e''(x) \bigr\Vert _{L^{\infty }}\leq C \Biggl( \bigl\Vert e(x) \bigr\Vert _{\infty }+ \sum_{i=1}^{4} \Vert J_{i} \Vert _{\infty } \Biggr), \\& \bigl\Vert e(x) \bigr\Vert _{L^{\infty }}\leq C \Biggl( \bigl\Vert e''(x) \bigr\Vert _{\infty }+\sum _{i=1}^{4} \Vert J_{i} \Vert _{\infty } \Biggr), \end{aligned}$$

and hence, for sufficiently large \(N\), we have

$$\begin{aligned}& \bigl\Vert e''(x) \bigr\Vert _{L^{\infty }}\leq C\sum_{i=1}^{4} \Vert J_{i} \Vert _{\infty }, \\& \bigl\Vert e(x) \bigr\Vert _{L^{\infty }}\leq C\sum _{i=1}^{4} \Vert J_{i} \Vert _{\infty }. \end{aligned}$$

Using Lemma 3, we have

$$\begin{aligned}& \Vert J_{1} \Vert _{L^{\infty }}\leq CN^{\frac{1}{2}-m} \vert u \vert _{H^{m;N}_{\omega }(I)}, \end{aligned}$$
(30)
$$\begin{aligned}& \Vert J_{2} \Vert _{L^{\infty }}\leq CN^{\frac{1}{2}-m} \bigl\vert u'' \bigr\vert _{H^{m;N}_{\omega }(I)}, \end{aligned}$$
(31)
$$\begin{aligned}& \Vert J_{3} \Vert _{L^{\infty }}\leq CN^{\frac{1}{2}-m} \bigl\vert u^{(3/2)} \bigr\vert _{H^{m;N}_{ \omega }(I)}. \end{aligned}$$
(32)

We now estimate \(J_{4}\). By virtue of Lemma 3 with \(m=2\), together with the \(O(\log N)\) growth of the Lebesgue constant of the Chebyshev–Gauss–Lobatto points, we have

$$ \Vert J_{4} \Vert _{L^{\infty }}\leq CN^{-\frac{3}{2}}\log N \bigl\Vert e(x) \bigr\Vert _{L^{ \infty }}. $$
(33)

Therefore, a combination of (30), (31), (32), and (33) yields estimate (23). □

9 Illustrative examples

To illustrate the effectiveness of the proposed method, some test examples are carried out in this section. The results reveal that the present method is effective and convenient for fractional differential equations.

Example 9.1

As the first example, we consider the following Bagley–Torvik differential equation [36–38]:

$$ D^{\frac{3}{2}}y(t)+y(t)=\frac{2t^{1/2}}{\varGamma (3/2)}+t^{2}-t $$
(34)

with the boundary conditions \(y(0)=0\) and \(y(1)=0\). With \(N=2\) (so that the collocation points are \(t_{0}=0\), \(t_{1}=1/2\), \(t_{2}=1\)), from (16) we get

$$ D^{\frac{3}{2}}= \begin{bmatrix} 0 & 0 & 0 \\ 3.1915& -6.3831 &3.1915 \\ 4.5135& -9.0270 & 4.5135 \end{bmatrix}. $$

The following system of algebraic equations is obtained:

$$ \begin{bmatrix} 1 & 0 & 0 \\ 3.1915& -5.3831 &3.1915 \\ 4.5135& -9.0270 & 5.5135 \end{bmatrix} \begin{bmatrix} y(0) \\ y(1/2) \\ y(1) \end{bmatrix} = \begin{bmatrix} 0 \\ 1.3458 \\ 2.2568 \end{bmatrix}. $$
(35)

Applying the boundary conditions \(y(0)=0\), \(y(1)=0\) and solving (35), we obtain \(y(1/2)=-0.25\). Thus

$$ y(t)= \begin{bmatrix} 1 & 2t-1 & 8t^{2}-8t+1 \end{bmatrix} \begin{bmatrix} 0.250 & 0.500 & 0.250 \\ -0.500 & 0 & 0.500 \\ 0.250 & -0.50 & 0.250 \end{bmatrix} \begin{bmatrix} y(0) \\ y(1/2) \\ y(1) \end{bmatrix}, $$

which is the exact solution \(y(t)=t^{2}-t\).
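This small computation can be reproduced in a few lines. In the sketch below the entries of \(D^{\frac{3}{2}}\) are rebuilt directly from the power rule (4): only the \(t^{2}\) part of the quadratic interpolant survives the derivative, and its coefficient in terms of the nodal values is \(2y(0)-4y(1/2)+2y(1)\):

```python
import numpy as np
from math import gamma

tk = np.array([0.0, 0.5, 1.0])                 # CGL points for N = 2 on [0, 1]
c2 = np.array([2.0, -4.0, 2.0])                # t^2-coefficient of the interpolant
# D^{3/2} t^2 = (2 / Gamma(3/2)) sqrt(t); constants and t give zero
D32 = (2 / gamma(1.5)) * np.sqrt(tk)[:, None] * c2[None, :]

A = D32 + np.eye(3)                            # collocation of D^{3/2} y + y = f
f = 2 * np.sqrt(tk) / gamma(1.5) + tk**2 - tk  # right-hand side of (34)
A[0] = [1.0, 0.0, 0.0]; f[0] = 0.0             # boundary condition y(0) = 0
A[2] = [0.0, 0.0, 1.0]; f[2] = 0.0             # boundary condition y(1) = 0
y = np.linalg.solve(A, f)
print(y)                                       # [0, -0.25, 0], i.e. y = t^2 - t
```

The middle row of \(A\) is \([3.1915, -5.3831, 3.1915]\), matching the interior row of (35), and the solve recovers the exact solution at the nodes.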

Example 9.2

In this example we consider the following equation:

$$ y''(t)+\frac{1}{2}D^{\frac{3}{2}}y(t)+ \frac{1}{2}y(t)=g(t), $$

where

$$ g(t)= \textstyle\begin{cases} 8, & 0\leq t \leq 1; \\ 0, & t>1. \end{cases} $$

The analytical solution can be found in [15]. The problem is considered in [18, 20, 22, 28, 39]. First, we consider the boundary conditions \(y(0)=0\), \(y(20)=-1.48433\) and apply the present method to solve the problem with \(N=8,16,32,64,128\). In Table 1, we list the \(L^{\infty }\), \(L^{2}\) errors and the CPU time for the different values of N. The numerical solutions obtained by the present method and some other numerical methods, such as the wavelet method [22] and the hybrid functions method [28], are given in Tables 2 and 3. Clearly, the numerical results show that the present method works well and its accuracy is comparable with that of the existing methods. Also, the numerical results with \(N=16,32\) and the exact solution are plotted in Fig. 1, which shows that the approximate solutions obtained by the present method are in close agreement with the exact solution. Second, we solve this problem with the boundary conditions \(y(0)=0\), \(y(1)=2.95179355\). We compare the absolute errors of the present method, the Taylor collocation method [18], and the fractional Taylor method [20] in Table 4. This indicates that our results are better than those given by [18, 20].

Figure 1

Comparison of the numerical solution with \(N=16,32\) using our method and the exact solution

Table 1 The \(L^{\infty }\), \(L^{2}_{\omega }\) errors and CPU time for different values of N in Example 9.2
Table 2 Comparison of the numerical results of the wavelet and the present method for Example 9.2
Table 3 Comparison of the numerical results of the hybrid function and the present method for Example 9.2
Table 4 Comparison of absolute errors of the present method with the Taylor method for Example 9.2

10 Conclusion

In this work, the shifted Chebyshev operational matrix of fractional derivatives has been derived and, in combination with a collocation method, used to approximate the unknown function of the Bagley–Torvik equation. Moreover, a convergence analysis has been performed in the \(L^{\infty }\)-norm. Finally, numerical examples have been presented to demonstrate the validity and applicability of the proposed numerical scheme. From the examples, we observe that our scheme is simple and accurate. We believe that the ideas introduced in this study can be extended to systems of nonlinear fractional differential equations and fractional integro-differential equations.