1 Introduction

Despite the long history of fractional calculus within mathematics, most of its real-world applications have appeared only during the last few decades. The field has become so broad that hardly any branch of science or engineering remains untouched by fractional calculus, and many books have been written on the subject (see, for example, Refs. [1–4] and the references therein). The growing use of fractional calculations has widened the range of questions and has led to several distinct basic definitions of the fractional integral and derivative. We recall that the Riemann–Liouville definition entails physically unacceptable initial conditions [1]; conversely, for the Liouville–Caputo fractional derivative, the initial conditions are expressed in terms of integer-order derivatives with direct physical significance [1, 5]. A few years ago, Caputo and Fabrizio [6] opened the following subject of debate within the mathematical community: is it possible to describe all nonlocal phenomena with the same basic kernels, namely the power kernel involved in the definition of the Riemann–Liouville derivative and a few other basic fractional derivatives? If we analyze, step by step, the way Caputo introduced his classical fractional derivative [5], we realize that in the last step he generalized the classical integral to the fractional Riemann–Liouville integral (see [5] for more details). Then, almost 50 years later, he asked the mathematical community how the Gamma function appears in the description of real phenomena, and why only some of the existing fractional operators are required by experiments [6]. Soon afterwards, Nieto and Losada found, by using the Laplace transform, the integral associated with the so-called Caputo–Fabrizio fractional derivative [7].

Also, following the extension of the Liouville–Caputo derivative recently reported in [8, 9], new fractional-order integral and derivative operators involving the Mittag-Leffler function, which have the nonlocal property, were suggested in [10]. This concept has been tested successfully in many fields, including chaotic behavior, epidemiology, thermal science, hydrology, mechanical engineering and biology [11–23].

The dynamics of many applied physical or biological problems can be modeled by a system of fractional differential equations (FDEs) (see, for example, [24] for a relaxation system). A system of FDEs with the non-singular Mittag-Leffler kernel can be described by

$$ \begin{aligned} &{_{\quad0 }^{ABC}D^{\alpha}_{t}} \mathbf{y}(t)=A\mathbf{y}(t)+\mathbf {f}(t), \quad t\in I:=[0,T], \\ &\mathbf{y}(0)=\mathbf{y}_{0}, \end{aligned} $$

where A is a constant \(\nu\times\nu\) matrix, \(\nu\in\mathbb{N}\) is the dimension of the system, \(\mathbf{f}:\mathbb{R} \rightarrow\mathbb{R}^{\nu}\) is a known vector-valued function, \({_{\quad0 }^{ABC}D^{\alpha}_{t}}\mathbf{y}(t)\) is a fractional derivative involving the Mittag-Leffler kernel (also known as the AB type [10]) and \(\mathbf{y}: \mathbb{R} \rightarrow \mathbb{R}^{\nu}\) is the unknown function. Recently, it was observed that system (1) models the suspension concentration distribution in turbulent flows more successfully than other models [25].

The conditions for the existence and uniqueness of the solution of the system with the exponential non-singular kernel can be found in [26]. The consistency condition

$$A\mathbf{y}_{0}+\mathbf{f}(0)=0, $$

is one of them. It seems that this condition is also important for system (1) with the Mittag-Leffler non-singular kernel, and, as mentioned in [27], the initial condition should be considered carefully. This imposes some restrictions on system (1). Nevertheless, owing to the important dynamics of the solutions of system (1), it is worthwhile to solve the system analytically or numerically [28]. Because of the newness of this topic, there are only a few articles on the subject: we found only a method based on linear piecewise polynomials for solving this system numerically [29]. Solving this system with a Chebyshev polynomial basis has not been studied yet.

Spectral methods based on Chebyshev polynomials are well known for differential and partial differential equations [30–33]. For smooth problems in simple geometries, they offer exponential rates of convergence, i.e., spectral accuracy. An important advantage of these methods over finite-difference methods is that, once the coefficients of the approximation are computed, the solution is completely determined at any point of the desired interval. Therefore, the numerical solution of system (1) by operational-matrix spectral methods based on Chebyshev polynomials is of considerable interest.

The discrete orthogonality properties of the Chebyshev polynomials are an advantage over other orthogonal polynomials such as the Legendre polynomials. Moreover, the zeros of the Chebyshev polynomials are known analytically. These properties lead to the Clenshaw–Curtis formula, which makes integration easy. We use this formula to obtain the operational matrix of fractional integration.

The aim of this paper is to derive an efficient numerical method for solving system (1) using an operational matrix based on Chebyshev polynomials. For this purpose, we obtain the operational-matrix approximation of the fractional integral operator. We transform system (1) into a system of weakly singular integral equations and then, using the operational matrix, obtain a system of linear algebraic equations. Solving the resulting algebraic system, we get the numerical approximation. We investigate the existence and convergence of the numerical solution; to this end, we also study the regularity of the exact solutions.

The structure of this paper is as follows. In Sect. 2, we review the new definitions of fractional calculus, related results, and the Chebyshev polynomials. In Sect. 3, we review the approximation of functions in terms of the shifted Chebyshev polynomials and obtain the operational-matrix approximation of the fractional integral operator. In Sect. 4, we propose a spectral method based on the operational matrix for solving systems of non-singular FDEs with Mittag-Leffler kernel. In Sect. 5, we obtain the regularity of the solutions. In Sect. 6, we carry out a convergence analysis for the proposed method without discretization. In Sect. 7, we obtain the convergence results for the discretized version. Finally, in Sect. 8, we provide some numerical examples to show the efficiency of the introduced method and present a comparison between the solutions of the AB-type FDEs and the Liouville–Caputo-type FDEs.

2 Definitions and preliminaries

In this section, we first recall some basic definitions and results related to the Mittag-Leffler function [34]. Then we recall some basic definitions and results related to the new non-singular fractional derivative and integral formulas [10].

2.1 The Mittag-Leffler function

The Mittag-Leffler function is a cornerstone of fractional calculus. Several books and excellent papers [34–37] describe the importance of these types of operators. The concept of Mittag-Leffler calculus was introduced in [10], and the integral associated with the non-singular fractional operator with Mittag-Leffler kernel was found by using the Laplace transform [38].

Throughout the paper, the symbol \(E_{\alpha}\) denotes the one-parameter Mittag-Leffler function [39] defined by

$$E_{\alpha}(z)=\sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k+1)}, \quad \operatorname{Re}(\alpha)>0. $$

The two-parameter Mittag-Leffler function is defined as

$$E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma( \alpha k+\beta)} \quad\bigl(\alpha,\beta\in\mathcal{C}, \operatorname {Re}( \alpha)>0\bigr). $$

Here, the notation Γ denotes the gamma function. An interesting book covering the history and applications of the gamma function, and its effect on progress in mathematics and in describing real phenomena, is [40].
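As a quick illustration of these series (not part of the original text), the classical special cases \(E_{1,1}(z)=e^{z}\) and \(E_{2,1}(-z^{2})=\cos z\) can be checked with a truncated sum; the function name is ours, and the fixed truncation is only adequate for moderate \(|z|\):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=40):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z),
    approximated by truncating its power series (a sketch; fine
    for moderate |z|, not for large arguments)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against classical special cases:
print(mittag_leffler(1.0, 1.0))    # E_{1,1}(1) = e ~ 2.71828
print(mittag_leffler(-4.0, 2.0))   # E_{2,1}(-4) = cos(2) ~ -0.41615
```

For large arguments one would switch to more terms or to asymptotic/quadrature representations; the fixed 40-term cutoff also keeps `math.gamma` away from its overflow range.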

Theorem 2.1


Let \(\rho, \mu, \upsilon, \omega\in\mathcal{C}\) (\(\operatorname {Re}(\rho), \operatorname{Re}(\mu), \operatorname{Re}(\upsilon)>0\)). Then

$$ \int_{0}^{x} (x-t)^{\mu-1}E_{\rho,\mu} \bigl(\omega(x-t)^{\rho } \bigr)t^{\upsilon-1}\,dt=\Gamma( \upsilon)x^{\mu+\upsilon -1}E_{\rho,\mu+\upsilon}\bigl(\omega x^{\rho}\bigr). $$
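The identity of Theorem 2.1 can be sanity-checked numerically; the parameter values below are illustrative choices of ours, and a plain midpoint rule suffices because \(\mu=1\) removes the endpoint singularity of the kernel:

```python
import math

def E(z, alpha, beta=1.0, terms=40):
    # Truncated Mittag-Leffler series (a sketch; adequate for moderate |z|).
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Illustrative parameters: rho = mu = 1, upsilon = 2, omega = -0.5, x = 1.
rho, mu, ups, om, x = 1.0, 1.0, 2.0, -0.5, 1.0
n = 5000
h = x / n
# Midpoint rule for the left-hand side of Theorem 2.1.
lhs = h * sum((x - t)**(mu - 1.0) * E(om * (x - t)**rho, rho, mu) * t**(ups - 1.0)
              for t in ((k + 0.5) * h for k in range(n)))
# Right-hand side: Gamma(ups) * x^(mu+ups-1) * E_{rho, mu+ups}(om * x^rho).
rhs = math.gamma(ups) * x**(mu + ups - 1.0) * E(om * x**rho, rho, mu + ups)
print(lhs, rhs)  # both ~ 0.426123
```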

2.2 The non-singular fractional derivative and integral involving Mittag-Leffler kernel

We use a Sobolev space defined by

$${\mathrm{H}}^{1}[t_{0},t_{f}]:=\biggl\{ u\in{ \mathrm{L}}^{2}[t_{0},t_{f}]: \frac {du}{dt} \in{\mathrm{L}}^{2}[t_{0},t_{f}] \biggr\} $$

to define the fractional derivative as follows.

Definition 2.2

For \(f\in{\mathrm{H}}^{1}[t_{0},t_{f}]\) and \(0< \alpha<1\), the (left) fractional derivative involving the Mittag-Leffler kernel in the Liouville–Caputo sense is defined by [10]

$$ {_{\quad t_{0} }^{ABC}D^{\alpha}_{t}}f(t)= \frac{B(\alpha)}{1-\alpha } \int_{t_{0}}^{t}\frac{df(\tau)}{d\tau}E_{\alpha} \biggl(-\alpha\frac {(t-\tau)^{\alpha}}{1-\alpha}\biggr)\,d\tau, $$

where \(B(\alpha)\) is a normalization function obeying \(B(0)=B(1)=1\).

The associated fractional integral is also defined by [10]

$$ \begin{aligned}[b] {_{\hspace{3pt} t_{0} }^{AB}I^{\alpha}_{t}}f(t)&= \frac{1-\alpha}{B(\alpha )}f(t)+\frac{\alpha}{B(\alpha)\Gamma(\alpha)} \int_{t_{0}}^{t}f(\tau) (t-\tau)^{\alpha-1}\,d\tau \\ &=\frac{1-\alpha}{B(\alpha)}f(t)+\frac{\alpha}{B(\alpha)} {_{t_{0}}I_{t}^{\alpha}}f(t). \end{aligned} $$

The fractional integrals of \((t-t_{0})^{\beta}\) (\(\beta>-1\)) for \(\alpha>0\) are

$${}_{t_{0}}I_{t}^{\alpha}(t-t_{0})^{\beta}= \frac{\Gamma(\beta+1)}{\Gamma (\alpha+\beta+1)}(t-t_{0})^{\beta+\alpha} $$
and


$${_{\hspace{3pt} t_{0} }^{AB}I^{\alpha}_{t}}(t-t_{0})^{\beta}= \frac{(t-t_{0})^{\beta }}{B(\alpha)} \biggl(1-\alpha+\frac{\alpha\Gamma(\beta+1)}{\Gamma (\alpha+\beta+1)}(t-t_{0})^{\alpha} \biggr). $$
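These closed forms can be checked against a direct quadrature of the definitions. The sketch below (names ours) assumes the common normalization \(B(\alpha)=1\) and takes \(t_{0}=0\); the substitution \(u=(t-\tau)^{\alpha}\), our choice, removes the endpoint singularity of the Riemann–Liouville kernel so a midpoint rule converges quickly:

```python
import math

def rl_integral_monomial(t, alpha, beta, n=20000):
    """Riemann-Liouville integral I^alpha of tau^beta at time t, by
    quadrature after the substitution u = (t - tau)^alpha, which turns
    the weakly singular integral into a smooth one on [0, t^alpha]."""
    U = t**alpha
    h = U / n
    s = sum((t - ((k + 0.5) * h)**(1.0 / alpha))**beta for k in range(n))
    return h * s / (alpha * math.gamma(alpha))

def ab_integral_monomial(t, alpha, beta, B=lambda a: 1.0):
    # Closed form from the text (with the illustrative choice B(alpha) = 1).
    return (t**beta / B(alpha)) * (1 - alpha
            + alpha * math.gamma(beta + 1) / math.gamma(alpha + beta + 1) * t**alpha)

alpha, beta, t = 0.5, 2.0, 1.0
# AB integral assembled from its definition: (1-alpha) f(t) + alpha I^alpha f(t).
quad = (1 - alpha) * t**beta + alpha * rl_integral_monomial(t, alpha, beta)
print(quad, ab_integral_monomial(t, alpha, beta))  # both ~ 0.800901
```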

The Newton–Leibniz formula for this fractional derivative and integral is obtained in [38, 41].

Proposition 2.3

For \(0<\alpha<1\), we have [38]

$$ \bigl({_{\hspace{3pt} t_{0} }^{AB}I^{\alpha}_{t}} {_{\quad t_{0} }^{ABC}D^{\alpha }_{t}} \bigr)f(t)=f(t)-f(t_{0}). $$

From Theorem 2.1, the fractional derivative of a monomial \(t^{\beta}\) (\(\beta>0\)) is

$$ {_{\quad0 }^{ABC} D^{\alpha}_{t}}t^{\beta}= \frac{B(\alpha)\Gamma (\beta+1)}{1-\alpha}t^{\beta}E_{\alpha,1+\beta}\biggl( -\frac{\alpha }{1-\alpha}t^{\alpha} \biggr),\quad\beta>0 , 0< \alpha< 1. $$

2.3 Chebyshev polynomials

Here, we review some basic definitions and results related to the Chebyshev polynomials [30, 42].

Definition 2.4

Let \(x=\cos(\theta)\). Then the Chebyshev polynomial \(T_{n}(x)\), \(n\in \mathbb{N}\cup\{0\}\), over the interval \([-1,1]\), is defined by the relation

$$ T_{n}(x)=\cos(n\theta). $$

The Chebyshev polynomials are orthogonal with respect to the weight function \(w(x)=\frac{1}{\sqrt{1-x^{2}}}\) and the corresponding inner product is

$$ \langle f,g\rangle= \int_{-1}^{1}w(x)g(x)f(x)\,dx, \quad\mbox{for } f,g \in\mathcal {L}_{2}[-1,1]. $$

The well-known recursive formula

$$ T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x) , \quad n\in \mathcal{N}, $$

with \(T_{0}(x)=1\) and \(T_{1}(x)=x\) is important for the numerical computation of these polynomials, whereas in analysis we may use

$$ T_{n}(x)=\sum_{k=0}^{[n/2]}(-1)^{k}2^{n-2k-1} \frac{n}{n-k}\binom{n-k}{k} x^{n-2k} $$

to compute the Chebyshev polynomials. Since the range of interest of problem (1) is \([0,T]\), we define the shifted Chebyshev polynomials \(T^{*}_{n}(x)\) by

$$T^{*}_{n}(x)=T_{n}\biggl(\frac{2}{T}x-1\biggr) $$

with the corresponding weight function \(w^{*}(x)=w(\frac{2}{T}x-1)\). Using \(T_{n}(2x-1)=T_{2n}(\sqrt{x})\) (see [30], Sect. 1.3), we can compute the shifted Chebyshev polynomials from

$$ T_{n}(2x-1)=\sum_{k=0}^{n}(-1)^{k}2^{2n-2k-1} \frac{2n}{2n-k}\binom {2n-k}{k} x^{n-k}, \quad n>0. $$
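The recurrence and the shift above can be sketched as follows (function names are ours); the example checks the defining relation \(T_{n}(\cos\theta)=\cos(n\theta)\) and the identity \(T_{n}(2x-1)=T_{2n}(\sqrt{x})\) used in the text:

```python
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) with the three-term recurrence
    T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), T_0 = 1, T_1 = x."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def shifted_T(n, x, T=1.0):
    # Shifted polynomial T*_n(x) = T_n(2x/T - 1) on [0, T].
    return chebyshev_T(n, 2.0 * x / T - 1.0)

theta = 0.7
print(chebyshev_T(5, math.cos(theta)), math.cos(5 * theta))   # equal
print(shifted_T(4, 0.3), chebyshev_T(8, math.sqrt(0.3)))      # equal
```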

The discrete orthogonality of Chebyshev polynomials leads to the Clenshaw–Curtis formula:

$$ \int_{-1}^{1}w(x)f(x)\,dx\simeq\frac{\pi}{N+1} \sum_{k=1}^{N+1}f(x_{k}), $$

where \(x_{k}\) for \(k=1,\ldots, N+1\) are zeros of \(T_{N+1}(x)\). Therefore, we have

$$\int_{0}^{T}w^{*}(x)f(x)\,dx= \int_{-1}^{1}w(x)f\biggl(\frac {T}{2}(x+1) \biggr)\,dx\simeq\frac{T\pi}{2(N+1)}\sum_{k=1}^{N+1}f \biggl(\frac {T}{2}(x_{k}+1)\biggr). $$

Also, the norm of \(T^{*}_{n}(x)\),

$$\gamma_{n}:= \bigl\Vert T_{n}^{*}(x) \bigr\Vert ^{2}= \int_{0}^{T}w^{*}(x) \bigl(T_{n}^{*} \bigr)^{2}(x)\,dx=\frac{T}{2} \textstyle\begin{cases} \frac{\pi}{2}, &n>0,\\ \pi, &n=0, \end{cases} $$

will be of importance later.
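A minimal sketch of this quadrature (names ours), used here to confirm the norms \(\gamma_{n}\) with the illustrative choice \(T=2\), so that \(\gamma_{0}=\pi\) and \(\gamma_{n}=\pi/2\) for \(n>0\):

```python
import math

def cheb_quad(f, N):
    """Approximate the weighted integral of f over [-1, 1] by
    (pi/(N+1)) * sum of f at the zeros x_k of T_{N+1}, as in the text."""
    nodes = (math.cos((2 * k - 1) * math.pi / (2 * (N + 1)))
             for k in range(1, N + 2))
    return math.pi / (N + 1) * sum(f(x) for x in nodes)

def Tn(n, x):
    # Chebyshev polynomial T_n(x) via the three-term recurrence.
    t0, t1 = 1.0, x
    for _ in range(max(n - 1, 0)):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t0 if n == 0 else t1

# Norms gamma_n on [0, T] with T = 2 (the T/2 factor maps [0,T] to [-1,1]).
T = 2.0
vals = {n: (T / 2.0) * cheb_quad(lambda x, n=n: Tn(n, x) ** 2, 12)
        for n in (0, 1, 3)}
print(vals)  # ~ {0: pi, 1: pi/2, 3: pi/2}
```

With \(N+1\) nodes the rule integrates \(w(x)\) times any polynomial of degree up to \(2N+1\) exactly, which is why these low-degree checks come out to machine precision.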

3 Function approximation

A function \(f(t)\) defined on the interval \([0,T]\) may be expanded as

$$ f(t)\simeq{p}_{N} f(t):=\sum _{m=0}^{N}c_{m}T^{*}_{m}(t)= \mathbf{C}^{T}\Psi (t) , \quad N\in\mathbb{N}, $$

where \({p}_{N}:C[0,T]\mapsto\pi_{N}\) (\(N\in\mathbb{N}\)) is an orthogonal projection, \(\pi_{N}\) is the space of polynomials of degree not exceeding N, and C and Ψ are the \((N+1)\times1\) matrices

$$ \begin{aligned} &\mathbf{C}^{T}=[c_{0}, \ldots,c_{N}], \\ &\Psi^{T}(t)=\bigl[T^{*}_{0}(t),\ldots,T^{*}_{N}(t) \bigr], \end{aligned} $$
and the coefficients are given by


$$ \begin{aligned}[b] c_{i}&=\frac{1}{\gamma_{i}} \int_{0}^{T}w^{*}(x)f(x)T^{*}_{i}(x)\,dx \\ &=\frac{1}{\gamma_{i}} \int_{0}^{T}w\biggl(\frac{2}{T}x-1 \biggr)f(x)T_{i}\biggl(\frac {2}{T}x-1\biggr)\,dx \\ &=\frac{T}{2\gamma_{i}} \int_{-1}^{1}w(t)f\biggl(\frac{T}{2}(t+1) \biggr)T_{i}(t)\,dt \\ &\simeq\frac{T\pi}{2\gamma_{i}(N+1)}\sum_{k=1}^{N+1}f \biggl(\frac {T}{2}(x_{k}+1)\biggr)T_{i}(x_{k}), \quad i=0, \ldots, N. \end{aligned} $$
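The discrete coefficient formula above can be sketched as follows (names ours). For a polynomial f of degree at most N, and enough quadrature nodes, the discrete coefficients are exact, which the example uses as a check:

```python
import math

def Tn(n, x):
    # Chebyshev polynomial T_n(x) via the three-term recurrence.
    t0, t1 = 1.0, x
    for _ in range(max(n - 1, 0)):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t0 if n == 0 else t1

def chebyshev_expand(f, N, T=1.0, M=None):
    """Coefficients c_0..c_N of p_N f in the shifted basis on [0, T],
    computed with the discrete formula from the text (M+1 nodes)."""
    M = N if M is None else M
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * (M + 1)))
             for k in range(1, M + 2)]
    c = []
    for i in range(N + 1):
        gamma = (T / 2.0) * (math.pi if i == 0 else math.pi / 2.0)
        s = sum(f(T / 2.0 * (x + 1.0)) * Tn(i, x) for x in nodes)
        c.append(T * math.pi / (2.0 * gamma * (M + 1)) * s)
    return c

def evaluate(c, t, T=1.0):
    return sum(ci * Tn(i, 2.0 * t / T - 1.0) for i, ci in enumerate(c))

c = chebyshev_expand(lambda t: t**3 - t, N=5, T=1.0, M=10)
print(evaluate(c, 0.4))  # ~ 0.4**3 - 0.4 = -0.336
```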

The following error estimate for Dini–Lipschitz continuous functions guarantees the convergence of the approximation by Chebyshev polynomials.

Theorem 3.1

([30] (Theorem 5.7))

Let \(g\in\mathbb{C}[0,T]\) and g satisfy the Dini–Lipschitz condition, i.e.,

$$\omega(\delta)\log(\delta)\rightarrow0\quad\textit{as } \delta \rightarrow0, $$

where ω is the modulus of continuity. Then \(\|g-p_{n}g\|_{\infty }\rightarrow0 \) as \(n\rightarrow\infty\).

Theorem 3.2

Let \(0<\alpha<1\) and \(N,M\in\mathbb{N}\). Then

$$ \frac{1}{\Gamma(\alpha)} \int_{0}^{x}\frac{\Psi(\tau)}{(x-\tau )^{1-\alpha}}\,d\tau\simeq P_{\alpha,M}\Psi(x), $$

where \(P_{\alpha,M}=(p_{n,r})\) is the operational matrix of dimension \((N+1)\times(N+1)\), whose elements can be computed using

$$p_{0,r}\simeq\frac{\pi}{\gamma_{r}\Gamma(1+\alpha)(M+1)} \biggl(\frac{T}{2} \biggr)^{\alpha+1}\sum_{k=1}^{M+1} \bigl((x_{k}+1) \bigr)^{\alpha}T_{r}(x_{k}) $$

for \(r=0, \ldots, N\), and

$$p_{n,r}\simeq\sum_{k=0}^{n} \widehat{p}_{n,k}\sum_{j=1}^{M+1} (x_{j}+1 )^{n-k+\alpha}T_{r}(x_{j}), $$
where


$$\widehat{p}_{n,k}=\frac{(-1)^{k}T^{\alpha+1}\pi}{\gamma_{r}(M+1)}\frac {2^{n-k-1-\alpha}n}{2n-k} \binom{2n-k}{k} \frac{\Gamma (n-k+1)}{\Gamma(n-k+\alpha+1)} $$

for \(n=1, \ldots, N\) and \(r=0, \ldots, N\).


Taking the fractional integral on both sides of (11), we get

$$\begin{aligned} I^{\alpha}T_{n}^{*}(x)&=\frac{1}{\Gamma(\alpha)} \int_{0}^{x}\frac {T_{n}^{*}(\tau)}{(x-\tau)^{1-\alpha}}\,d\tau = \frac{1}{\Gamma(\alpha )} \int_{0}^{x}\frac{T_{n}(\frac{2}{T}\tau-1)}{(x-\tau)^{1-\alpha}}\,d\tau , \quad z= \frac{\tau}{T} \\ &=T^{\alpha}\frac{1}{\Gamma(\alpha)} \int_{0}^{\frac{x}{T}}\frac {T_{n}(2z-1)}{(\frac{x}{T}-z)^{1-\alpha}}\,dz \\ &=T^{\alpha}h_{n}\biggl(\frac{x}{T}\biggr), \end{aligned}$$
where


$$\begin{aligned} h_{n}(x)&:=\frac{1}{\Gamma(\alpha)} \int_{0}^{x}\frac {T_{n}(2z-1)}{(x-z)^{1-\alpha}}\,dz \\ &=\sum_{k=0}^{n}(-1)^{k}2^{2n-2k-1} \frac{2n}{2n-k}\binom{2n-k}{k} \frac{1}{\Gamma(\alpha)} \int_{0}^{x}\frac{z^{n-k}}{(x-z)^{1-\alpha }}\,dz \\ &=\sum_{k=0}^{n}(-1)^{k}2^{2n-2k-1} \frac{2n}{2n-k}\binom{2n-k}{k} I^{\alpha}x^{n-k} \\ &=\sum_{k=0}^{n}(-1)^{k}2^{2n-2k-1} \frac{2n}{2n-k}\binom{2n-k}{k} \frac{\Gamma(n-k+1)}{\Gamma(n-k+\alpha+1)}x^{n-k+\alpha} \end{aligned}$$

for \(n>0\), and

$$\begin{aligned} I^{\alpha}T_{n}^{*}(x)=\sum _{k=0}^{n}(-1)^{k}\frac {2^{2n-2k-1}}{T^{n-k}} \frac{2n}{2n-k}\binom{2n-k}{k} \frac{\Gamma (n-k+1)}{\Gamma(n-k+\alpha+1)}x^{n-k+\alpha}. \end{aligned}$$

For \(n=0\), it can easily be checked that

$$\begin{aligned} I^{\alpha}T_{0}^{*}(x) =\frac{x^{\alpha}}{\Gamma(1+\alpha)}. \end{aligned}$$

Now, applying (15) to \(f(x)=x^{n-k+\alpha}\), we obtain

$$ x^{n-k+\alpha}\simeq\sum_{r=0}^{N} \frac{T\pi}{2\gamma_{r}(M+1)}\sum_{j=1}^{M+1} \biggl( \frac{T}{2}(x_{j}+1) \biggr)^{n-k+\alpha}T_{r}(x_{j})T^{*}_{r}(x). $$

Substituting the coefficients of \(T^{*}_{r}(x)\) from (19) into (17) and (18), we obtain the desired result. □
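The construction in the proof of Theorem 3.2 can be sketched as follows (names and parameter values ours): each row of the matrix holds the discrete Chebyshev coefficients of the exact fractional integral of one basis polynomial. The check applies the matrix to \(f(t)=t^{2}\), whose shifted-Chebyshev coefficients on \([0,1]\) are \((3/8, 1/2, 1/8)\), and compares with \(I^{1/2}t^{2}=\frac{\Gamma(3)}{\Gamma(3.5)}t^{2.5}\):

```python
import math
import numpy as np

def Tstar(n, x, T=1.0):
    # Shifted Chebyshev polynomial T*_n(x) = T_n(2x/T - 1) via recurrence.
    z = 2.0 * x / T - 1.0
    t0, t1 = 1.0, z
    for _ in range(max(n - 1, 0)):
        t0, t1 = t1, 2.0 * z * t1 - t0
    return t0 if n == 0 else t1

def I_alpha_Tstar(n, x, alpha, T=1.0):
    """Exact I^alpha T*_n(x), using the monomial rule
    I^alpha x^m = Gamma(m+1)/Gamma(m+alpha+1) * x^(m+alpha)."""
    if n == 0:
        return x**alpha / math.gamma(1.0 + alpha)
    s = 0.0
    for k in range(n + 1):
        s += ((-1)**k * 2.0**(2*n - 2*k - 1) / T**(n - k)
              * 2.0 * n / (2*n - k) * math.comb(2*n - k, k)
              * math.gamma(n - k + 1) / math.gamma(n - k + alpha + 1)
              * x**(n - k + alpha))
    return s

def operational_matrix(alpha, N, M, T=1.0):
    """Row n holds the discrete Chebyshev coefficients of I^alpha T*_n,
    so that I^alpha Psi(x) is approximately P Psi(x)."""
    nodes = [math.cos((2*j - 1) * math.pi / (2*(M + 1))) for j in range(1, M + 2)]
    P = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        vals = [I_alpha_Tstar(n, T/2.0*(x + 1.0), alpha, T) for x in nodes]
        for r in range(N + 1):
            gamma_r = (T/2.0) * (math.pi if r == 0 else math.pi/2.0)
            s = sum(v * Tstar(r, T/2.0*(x + 1.0), T) for v, x in zip(vals, nodes))
            P[n, r] = T * math.pi / (2.0 * gamma_r * (M + 1)) * s
    return P

alpha, N, M = 0.5, 8, 40
P = operational_matrix(alpha, N, M)
C = np.zeros(N + 1); C[:3] = [3/8, 1/2, 1/8]   # t^2 in the shifted basis
x = 0.6
psi = np.array([Tstar(r, x) for r in range(N + 1)])
print(float(C @ P @ psi),
      math.gamma(3) / math.gamma(3.5) * x**2.5)  # both ~ 0.1678
```

The agreement is limited only by the Chebyshev projection of \(t^{2.5}\), whose coefficients decay fast, so moderate N and M already give small errors.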

Remark 3.3

For \(f\in C[-1,1]\), the maximum error of the Clenshaw–Curtis formula is less than \(4\|f-p_{N}f\|_{\infty}\) [43]. Hence, the Clenshaw–Curtis formula for \(x^{n-k+\alpha}\) in the proof of Theorem 3.2 is convergent, and we conclude that

$$ p_{N}\bigl(I^{\alpha}\Psi\bigr)= P_{\alpha}\Psi(x), $$

where \(P_{\alpha}=\lim_{M\rightarrow\infty} P_{\alpha,M}\).

4 Constructing the method

Taking the fractional integral of both sides of system (1) and using (5), the system can be written in the following form:

$$ \biggl(\mathbf{I}-\frac{1-\alpha}{B(\alpha)}A \biggr)\mathbf {y}(t)= \frac{\alpha}{B(\alpha)}A I^{\alpha}\mathbf{y}(t)+\mathbf {y}_{0}+ \frac{1}{B(\alpha)} \bigl((1-\alpha)\mathbf{f}(t)+\alpha I^{\alpha} \mathbf{f}(t) \bigr). $$

Let \(E=\mathbf{I}-\frac{1-\alpha}{B(\alpha)}A\). Then the following lemma guarantees the invertibility of E.

Lemma 4.1

Let A be a constant matrix, let \(0<\alpha<1\) be such that \(1-\frac {B(\alpha)}{\|A\|}<\alpha\), and let I denote the identity matrix. Then the matrix

$$E:=\mathbf{I}-\frac{1-\alpha}{B(\alpha)}A $$

is invertible.


The proof is a direct result of the geometric series theorem. □
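The geometric (Neumann) series argument behind the lemma can be illustrated numerically; the matrix, the value of α, and the choice \(B(\alpha)=1\) below are ours:

```python
import numpy as np

# If ||(1-alpha)/B(alpha) * A|| < 1, the Neumann series
# sum_k ((1-alpha)/B(alpha) * A)^k converges to E^{-1}.
alpha = 0.8
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
K = (1 - alpha) / 1.0 * A          # B(alpha) = 1 for illustration
assert np.linalg.norm(K, 2) < 1    # hypothesis of the lemma holds
E = np.eye(2) - K

S = np.zeros_like(A)
term = np.eye(2)
for _ in range(200):               # partial sums of the geometric series
    S += term
    term = term @ K
print(np.max(np.abs(S - np.linalg.inv(E))))  # ~ 0 (machine precision)
```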

Now, multiplying both sides of (21) by \(E^{-1}\), we obtain a system of weakly singular integral equations of the second kind:

$$ \begin{aligned}[b] \mathbf{y}(t)&=\frac{\alpha}{B(\alpha)}E^{-1}A I^{\alpha}\mathbf {y}(t)+E^{-1}\mathbf{y}_{0} \\ &\quad{}+\frac{1}{B(\alpha)}E^{-1} \bigl((1-\alpha)\mathbf {f}(t)+ \alpha I^{\alpha} \mathbf{f}(t) \bigr). \end{aligned} $$

In order to obtain a numerical method, we suppose

$$ \mathbf{y}_{N}(t)=Y^{T}\Psi(t) $$

to be the approximate solution, with \(\mathbf{f}(t)\simeq F^{T}\Psi(t)\) and \(Y_{0}=[y_{0},0,\ldots,0]^{T}\). Substituting these into (22) and using the operational matrix of Theorem 3.2, we obtain

$$ \biggl(Y^{T}-\frac{\alpha}{B(\alpha)}E^{-1}AY^{T}P_{\alpha} \biggr)=H, $$
where


$$H:=E^{-1}Y_{0}+\frac{1}{B(\alpha)}E^{-1} \bigl((1- \alpha)F^{T}+\alpha F^{T} P_{\alpha} \bigr). $$

Solving the linear system (24), we obtain \(Y^{T}\), and then (23) yields the approximate solution.

To solve (24), we can use the vectorization operator to obtain a system of linear algebraic equations in standard form. We denote by vec the vectorization of a matrix:

$$\operatorname{vec}(A):=(a_{1,1}\ldots,a_{m,1},\ldots,a_{1,n}, \ldots,a_{m,n})^{T}. $$

We note that

$$\operatorname{vec}(ABC)=\bigl(C^{T} \otimes A\bigr)\operatorname{vec}(B), $$

where the notation ⊗ stands for the Kronecker product and \(I_{\nu\times\nu}\) is the identity matrix. Using the vec notation, the system (24) can be transformed into the standard form

$$\biggl(\mathbf{I}\otimes\mathbf{I}-P_{\alpha}^{T} \otimes \frac {\alpha}{B(\alpha)}E^{-1}A \biggr)\operatorname{vec}\bigl(Y^{T} \bigr)=\operatorname{vec}(H), $$

which can be solved by mathematical software such as MATLAB.
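A small sketch of this vec/Kronecker manipulation (the matrices are random illustrative data and the names are ours): first the identity itself, then a Sylvester-like system \(X-GXP=H\), which mirrors how (24) is put into standard form:

```python
import numpy as np

def vec(A):
    # Column-stacking vectorization, as in the text's definition.
    return A.reshape(-1, order='F')

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 4))
# The identity vec(ABC) = (C^T kron A) vec(B):
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.max(np.abs(lhs - rhs)))  # ~ 0 (machine precision)

# Solving X - G X P = H via the standard form (I - P^T kron G) vec(X) = vec(H):
G = 0.1 * rng.standard_normal((3, 3))
P = 0.1 * rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
H = X_true - G @ X_true @ P
Mmat = np.eye(9) - np.kron(P.T, G)
X = np.linalg.solve(Mmat, vec(H)).reshape((3, 3), order='F')
print(np.max(np.abs(X - X_true)))  # ~ 0
```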

5 Regularity of the solution

For simplicity of notation and analysis, we write the system (22) as follows:

$$ \mathbf{y}=\mathcal{T}\mathbf{y}+\tilde{\mathbf{f}}, $$
where


$$\mathcal{T}\mathbf{y}=\frac{\alpha}{B(\alpha)}E^{-1}A I^{\alpha } \mathbf{y} $$
and


$$\tilde{\mathbf{f}}(t)=E^{-1}\mathbf{y}_{0}+ \frac{1}{B(\alpha )}E^{-1} \bigl((1-\alpha)\mathbf{f}(t)+\alpha I^{\alpha} \mathbf {f}(t) \bigr). $$

The system (25) is a system of Volterra integral equations of the second kind with a weakly singular kernel, but the regularity of its solution differs because of the presence of \(I^{\alpha} \mathbf{f}(t)\). Therefore, we introduce the space \(\mathcal{C}^{m,\lambda}(0,T]\), \(\lambda>0\), with \(m\in\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). A continuously differentiable function \(g:(0,T]\mapsto\mathbb{R}\) belongs to \(\mathcal{C}^{m,\lambda}(0,T]\) if there exist \(g_{i}\in C[0,T]\), \(i=0,\ldots,m\), and a real number \(c\in\mathbb{R}\) such that \(g=t^{\lambda}g_{0}(t)+c\) and

$$g^{(i)}(t)=t^{\lambda-i}g_{i}(t), \quad i=1,\ldots,m. $$

It is straightforward to show that the space \(\mathcal{C}^{m,\lambda }(0,T]\) equipped with the norm

$$\Vert g \Vert _{m,\lambda}= \vert c \vert +\sum_{i=0}^{m} \sup_{t\in(0,T]}t^{i-\lambda} \bigl\vert g^{(i)}(t) \bigr\vert $$

is a Banach space. We note that, for \(0<\lambda_{1}<\lambda_{2}\leq1\), we have

$$\mathcal{C}^{m}[0,T]\subset\mathcal{C}^{m,1}(0,T]\subset \mathcal {C}^{m,\lambda_{2}}(0,T]\subset\mathcal{C}^{m,\lambda_{1}}(0,T] \subset C[0,T]. $$

Remark 5.1

We note that for \(f\in\mathcal{C}^{m,\lambda}(0,T]\) there exists a constant \(c>0\) such that

$$ \Vert f \Vert _{\infty}< c \Vert f \Vert _{m,\lambda} $$

and for \(f\in\mathcal{C}^{0,\lambda}(0,T]\) the norms \(\|f\|_{\infty }\) and \(\|f\|_{0,\lambda}\) are equivalent.

Lemma 5.2

Suppose \(0< \alpha\leq1\).

  • Let \(f \in\mathbb{C}^{m}[0,T]\), for \(m\in\mathbb{N}\). Then \(I^{\alpha}f\in\mathcal{C}^{m,\alpha}(0,T]\).

  • Let \(f \in\mathbb{C}^{m,\lambda}(0,T]\), for \(m\in\mathbb {N}\). Then \(I^{\alpha}f\in\mathcal{C}^{m,\alpha+\lambda }(0,T]\subset\mathcal{C}^{m,\min(\alpha,\lambda)}(0,T]\).


Using the substitution \(\tau=tz\), we obtain

$$I^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)} \int_{0}^{t}f(\tau) (t-\tau )^{\alpha-1}\,d \tau=t^{\alpha}g(t), $$
where


$$g(t)=\frac{1}{\Gamma(\alpha)} \int_{0}^{1}f(tz) (1-z)^{\alpha-1}\,dz. $$

It is obvious that \(g\in\mathbb{C}^{m}[0,T]\) if \(f\in\mathbb{C}^{m}[0,T]\), and that \(g\in\mathbb{C}^{m,\lambda}(0,T]\) if \(f\in\mathbb{C}^{m,\lambda}(0,T]\), which completes the proof. □

We should bear in mind that the systems we investigate are of dimension \(\nu\geq1\), and we use the norm

$$\Vert \mathbf{f} \Vert _{m,\lambda,\nu}=\max_{i=1,\ldots,\nu} \Vert f_{i} \Vert _{m,\lambda} $$

with \(\mathbf{f}=[f_{1},\ldots,f_{\nu}]^{T}\in(\mathbb{C}^{m,\lambda}(0,T])^{\nu}\), \(\lambda>0\), and \(m\in\mathbb{N}_{0}\).

Theorem 5.3

Assume that \(\mathbf{f} \in(\mathbb{C}^{m}[0,T])^{\nu}\) or \(\mathbf {f} \in(\mathbb{C}^{m,\alpha}(0,T])^{\nu}\), for \(\alpha\in(0,1)\). Then the system (25) has a unique solution \(\mathbf{y}\in (\mathbb{C}^{m,\alpha}(0,T])^{\nu}\). Furthermore, \((I-\mathcal{T})^{-1}\) is a bounded operator.


By Lemma 5.2, \(\tilde{\mathbf{f}}(t)\in (\mathbb{C}^{m,\alpha}(0,T])^{\nu}\). Consider the Picard iteration corresponding to the system (25):

$$ \mathbf{y}_{n+1}=\tilde{\mathbf{f}}+\frac{\alpha E^{-1}A}{B(\alpha )\Gamma(\alpha)} \int_{0}^{t} \frac{\mathbf{y}_{n}(s)}{(t-s)^{1-\alpha}}\,ds $$

with \(\mathbf{y}_{0}=\tilde{\mathbf{f}}\). The first iteration can be written in the form

$$ \mathbf{y}_{1}=\tilde{\mathbf{f}}+ \int_{0}^{t} \frac{Q_{1}(t,s;\alpha )\tilde{\mathbf{f}}(s)}{(t-s)^{1-\alpha}}\,ds,\quad Q_{1}(t,s;\alpha ):=\frac{\alpha E^{-1}A}{B(\alpha)\Gamma(\alpha)}, $$

and the second iteration can be written in the form

$$ \begin{aligned}[b] \mathbf{y}_{2}&=\tilde{\mathbf{f}}+ \int_{0}^{t} \frac {Q_{1}(t,s;\alpha)\tilde{\mathbf{f}}(s)}{(t-s)^{1-\alpha}}\,ds+ \frac {\alpha E^{-1}A}{B(\alpha)\Gamma(\alpha)} \int_{0}^{t} \int_{0}^{s} \frac{Q_{1}(s,\tau;\alpha)\tilde{\mathbf{f}}(\tau)}{(s-\tau )^{1-\alpha}(t-s)^{1-\alpha}}\,d\tau \,ds \\ &=\tilde{\mathbf{f}}+ \int_{0}^{t} \frac{Q_{1}(t,s;\alpha)\tilde {\mathbf{f}}(s)}{(t-s)^{1-\alpha}}\,ds+ \frac{\alpha E^{-1}A}{B(\alpha )\Gamma(\alpha)} \int_{0}^{t} \int_{\tau}^{t} \frac{Q_{1}(s,\tau ;\alpha)\tilde{\mathbf{f}}(\tau)}{(s-\tau)^{1-\alpha }(t-s)^{1-\alpha}} \,ds \,d\tau. \end{aligned} $$

Using the variable transformation \(s=\tau+(t-\tau)z\), we obtain

$$ \mathbf{y}_{2}=\tilde{\mathbf{f}}+ \int_{0}^{t} \frac{Q_{1}(t,s;\alpha )\tilde{\mathbf{f}}(s)}{(t-s)^{1-\alpha}}\,ds+ \frac{\alpha E^{-1}A}{B(\alpha)\Gamma(\alpha)} \int_{0}^{t}\frac{(t-\tau )^{\alpha}Q_{2}(t,\tau;\alpha)}{(t-\tau)^{1-\alpha}} \,d\tau, $$
where


$$Q_{2}(t,\tau;\alpha):= \int_{0}^{1} \frac{Q_{1}(\tau+(t-\tau)z,\tau ;\alpha)\tilde{\mathbf{f}}(\tau)}{z^{1-\alpha}(1-z)^{1-\alpha}} \,dz. $$

Proceeding in this way and arguing as in [44], Chap. 6 (note that we have used \(1-\alpha\) instead of α), one can show that

$$(I-\mathcal{T})^{-1}\tilde{\mathbf{f}}(t)=\tilde{\mathbf {f}}(t)+ \int_{0}^{t} \frac{Q(t,s;\alpha)\tilde{\mathbf {f}}(s)}{(t-s)^{1-\alpha}}\,ds, $$
where


$$Q(t,s;\alpha)=\sum_{n=1}^{\infty}(t-s)^{\alpha(n-1)}Q_{n}(t,s; \alpha) $$
and


$$Q_{n}(t,s;\alpha):=\frac{\alpha E^{-1}A}{B(\alpha)\Gamma(\alpha )} \int_{0}^{1}(1-z)^{\alpha-1}z^{(n-1)(\alpha )-1}Q_{n-1} \bigl(s+(t-s)z,s;\alpha\bigr)\,dz. $$

Therefore, \((I-\mathcal{T})^{-1}\) is a bounded operator on \((\mathbb {C}^{m,\alpha}(0,T])^{\nu}\). The remaining parts of the proof are straightforward. □

6 Convergence analysis

The orthogonal projection \(\mathbf{p}_{N}:(C[0,T])^{\nu}\mapsto(\pi _{N})^{\nu}\) (\(N\in\mathbb{N}\)) can be defined by

$$\mathbf{p}_{N}\bigl([f_{1},\ldots,f_{\nu}]^{T} \bigr):=\bigl[{p}_{N}(f_{1}),\ldots ,{p}_{N}(f_{\nu}) \bigr]^{T}, $$

where \({p}_{N}\) is defined by (13). The introduced method can be written in the form

$$ \mathbf{y}_{N}=\mathbf{p}_{N} \mathcal{T} \mathbf{y}_{N}+\mathbf {p}_{N}\widetilde{ \mathbf{f}}_{N}, $$

where \(\widetilde{\mathbf{f}}_{N}=E^{-1}\mathbf{y}_{0}+\frac {1}{B(\alpha)}E^{-1} ((1-\alpha)\mathbf{f}(t)+\alpha I^{\alpha } \mathbf{p}_{N}\mathbf{f}(t) )\) and \(\mathbf{y}_{N}\in(\pi _{N})^{\nu}\).

It is well known that the operator \(I^{\alpha}\) is compact on \(\mathbb{C}[0,T]\) (see [45]) and hence compact on \((\mathbb{C}^{m,\alpha}[0,T])^{\nu}\subset(\mathbb{C}[0,T])^{\nu}\). Briefly, consider a bounded sequence \((\mathbf{f}_{n})\), \(\mathbf{f}_{n}=[f_{n1},\ldots,f_{n\nu}]^{T}\), in \((\mathbb{C}^{m,\alpha}[0,T])^{\nu}\); by (26), each of its components is bounded in \(\mathbb{C}[0,T]\). By the compactness of \(I^{\alpha}\) on \(\mathbb{C}[0,T]\), each sequence \(I^{\alpha}f_{nj}\) (\(j=1,\ldots,\nu\)) contains a subsequence convergent in \(\mathbb{C}[0,T]\). This subsequence lies in \(\mathbb{C}^{m,\alpha}[0,T]\), since \(f_{nj}\in\mathbb{C}^{m,\alpha}[0,T]\), and it converges to an element of \(\mathbb{C}^{m,\alpha}[0,T]\). This means that \(I^{\alpha}\mathbf{f}_{n}=[I^{\alpha}f_{n1},\ldots,I^{\alpha}f_{n\nu}]^{T}\) contains a convergent subsequence in \((\mathbb{C}^{m,\alpha}[0,T])^{\nu}\). Therefore, \(I^{\alpha}\) is compact on \((\mathbb{C}^{m,\alpha}[0,T])^{\nu}\).

Lemma 6.1

Let \(\mathbf{f}\in(\mathbb{C}^{0,\alpha}[0,T])^{\nu}\) with \(0<\alpha<1\). Then \(\|\mathbf{f}-\mathbf{p}_{N}\mathbf{f}\|_{\infty}\rightarrow0\) and \({\|\mathbf{f}-\mathbf{p}_{N}\mathbf{f}\|}_{0,\alpha,\nu}\rightarrow0\) as \(N\rightarrow\infty\).


Since \(t^{\alpha}\) satisfies the Dini–Lipschitz condition, f satisfies the Dini–Lipschitz condition and, by Theorem 3.1, \(\|\mathbf{f}-\mathbf{p}_{N}\mathbf{f}\|_{\infty}\rightarrow 0\) as \(N\rightarrow\infty\). The second claim follows from the equivalence of the norms. □

Theorem 6.2


Let X be a Banach space, and let \(\{\mathbf{p}_{N}\}\) be a family of bounded projections on X with \(\mathbf{p}_{N}x\rightarrow x\), as \(N\rightarrow \infty\), for \(x\in X\). Let \(\mathbf{T}:X\mapsto X\) be compact. Then \(\|{\mathbf{T}}-\mathbf{p}_{N}{\mathbf{T}}\|\rightarrow0\), as \(N\rightarrow\infty\).

Setting \(X=(\mathbb{C}^{0,\alpha}[0,T])^{\nu}\), \(\mathbf {T}=\mathcal{T}\) and using Theorem 6.2, we have

$$\Vert {\mathcal{T}}-\mathbf{p}_{N}{\mathcal{T}} \Vert _{L((\mathbb {C}^{0,\alpha}[0,T])^{\nu})}\rightarrow0, $$

as \(N\rightarrow\infty\). Here, \(L((\mathbb{C}^{0,\alpha}[0,T])^{\nu})\) denotes the space of bounded linear operators on \((\mathbb{C}^{0,\alpha}[0,T])^{\nu}\), equipped with the induced operator norm. The operator \(\mathcal{T}\) is compact and, since compact linear operators are bounded (see [45]), it is also bounded. In order to obtain the convergence of the proposed method, we use the following theorem to show that the operator \((I-\mathbf{p}_{N}\mathcal{T})^{-1}\) exists and is bounded for all sufficiently large N.

Theorem 6.3

([46], Theorem 3.1.1 with \(\lambda=1\))

Assume \(\mathcal{T}:X\mapsto X\) is bounded, with X a Banach space, and assume \(I-\mathcal{T}:X\mapsto X\) to be a bijective operator. Further assume

$$\Vert \mathcal{T}-\mathbf{p}_{N}\mathcal{T} \Vert \rightarrow0, \quad \textit{as } N\rightarrow\infty. $$

Then, for all sufficiently large N, say \(N > N_{0}\), the operator \((I-p_{N}\mathcal{T})^{-1}\) exists as a bounded operator from X to X. Moreover, it is uniformly bounded:

$$\sup_{N>N_{0}} \bigl\Vert (I-\mathbf{p}_{N} \mathcal{T})^{-1} \bigr\Vert < \infty. $$

Remark 6.4

By Theorem 6.3, the operator \((I-\mathbf{p}_{N}\mathcal{T})^{-1}\) exists for sufficiently large N. This guarantees the existence of the numerical solution for sufficiently large N, since

$$\mathbf{y}_{N}=(I-\mathbf{p}_{N} \mathcal{T})^{-1} \mathbf {p}_{N}\widetilde{\mathbf{\mathbf{f}}}_{N} $$

by using (29).

Taking into account

$$ \begin{aligned}[b] (I-\mathbf{p}_{N} \mathcal{T}) (\mathbf{y}-\mathbf{y}_{N})&=\mathbf {y}- \mathbf{p}_{N}\mathcal{T}\mathbf{y}-\mathbf{p}_{N}\widetilde {\mathbf{f}}_{N} \\ &=\mathbf{y}-\mathbf{p}_{N}\mathcal{T}\mathbf{y}-\mathbf {p}_{N}\widetilde{\mathbf{f}}+\mathbf{p}_{N}\widetilde{ \mathbf {f}}-\mathbf{p}_{N}\widetilde{\mathbf{f}}_{N} \\ &=\mathbf{y}-\mathbf{p}_{N}\mathbf{y}+\mathbf{p}_{N}( \widetilde {\mathbf{f}}-\widetilde{\mathbf{f}}_{N}), \end{aligned} $$

we can write

$$ (\mathbf{y}-\mathbf{y}_{N})=(I-\mathbf{p}_{N} \mathcal{T})^{-1}(\mathbf {y}-\mathbf{p}_{N}\mathbf{y}+ \mathbf{p}_{N}\widetilde{\mathbf {f}}-\mathbf{p}_{N} \widetilde{\mathbf{f}}_{N}). $$

Now, taking norms on both sides of Eq. (31), we obtain

$$ \bigl\Vert (\mathbf{y}-\mathbf{y}_{N}) \bigr\Vert _{0,\alpha,\nu}\leq \bigl\Vert (I-\mathbf {p}_{N} \mathcal{T})^{-1} \bigr\Vert _{L((\mathbb{C}^{0,\alpha}[0,T])^{\nu})}\bigl( \Vert \mathbf{y}-\mathbf{p}_{N}\mathbf{y} \Vert _{0,\alpha,\nu}+ \Vert \mathbf {p}_{N}\widetilde{\mathbf{f}}-\mathbf{p}_{N} \widetilde{\mathbf{f}}_{N} \Vert _{0,\alpha,\nu}\bigr). $$

Finally, we note that

$$\Vert \mathbf{p}_{N}\widetilde{\mathbf{f}}-\mathbf{p}_{N} \widetilde {\mathbf{f}}_{N} \Vert _{0,\alpha,\nu}\leq c \Vert \mathbf{f}-\mathbf{p}_{N} \mathbf{f} \Vert _{0,\alpha,\nu}, $$

where c is a constant, and we can state the following theorem.

Theorem 6.5

Assume that \(\mathbf{f} \in(\mathbb{C}^{0,\alpha}(0,T])^{\nu}\) for some \(\alpha\in(0,1)\). Then, for sufficiently large N, the approximate solution of system (1), say \(\mathbf{y}_{N}\), obtained from (23) and (24), exists and converges to the exact solution y. Furthermore, we have

$$ \Vert \mathbf{y}-\mathbf{y}_{N} \Vert _{0,\alpha,\nu}\leq c \bigl( \Vert \mathbf {y}-\mathbf{p}_{N}\mathbf{y} \Vert _{0,\alpha,\nu}+ \Vert \mathbf{f}-\mathbf {p}_{N} \mathbf{f} \Vert _{0,\alpha,\nu}\bigr), $$

where c is a constant.

The result of Theorem 6.5 also holds in the norm \(\|\cdot\|_{\infty}\); this follows from the equivalence of the norms \(\|\cdot\|_{0,\alpha,\nu}\) and \(\|\cdot\|_{\infty}\).

7 Existence and convergence results for discretized version

Often, we cannot compute the infinite series defining \(P_{\alpha}\), and instead we use \(P_{\alpha,M}\) for some \(M\in\mathbb{N}\). We call the resulting method the discretized version, because the corresponding integrals are discretized by the Clenshaw–Curtis formula. For \(\mathbf{y}_{N}(t)=Y^{T}\Psi(t)\), we can define \(\mathcal{T}_{M}\mathbf{y}_{N}\) by

$$\mathcal{T}_{M}\mathbf{y}_{N}:=\frac{\alpha}{B(\alpha )}E^{-1}AY^{T}P_{\alpha,M} \Psi, $$

and we have

$$\mathbf{p}_{N}\mathcal{T}_{M}\mathbf{y}_{N}= \frac{\alpha}{B(\alpha )}E^{-1}AY^{T}P_{\alpha,M}\Psi. $$

The numerical approximation by the discretized version can now be obtained by solving

$$ \mathbf{y}_{N,M}=\mathbf{p}_{N} \mathcal{T}_{M}\mathbf{y}_{N,M}+\mathbf {p}_{N} \widetilde{\mathbf{f}}_{N}, $$

where \(\mathbf{y}_{N,M}(t)=Y_{M}^{T}\Psi(t)\). According to Remark 3.3, we have

$$\bigl\Vert (\mathbf{p}_{N}\mathcal{T}_{M}- \mathbf{p}_{N}\mathcal{T})\mathbf{y}_{N} \bigr\Vert _{\infty}= \biggl\Vert \frac{\alpha}{B(\alpha)}E^{-1}AY^{T}(P_{\alpha ,M}-P_{\alpha}) \Psi \biggr\Vert _{\infty}\rightarrow0 $$

as \(M\rightarrow\infty\) on \(\mathbb{C}^{0,\lambda}\) with \(\lambda >0\). Hence, by Theorem 6.2,

$$\Vert \mathbf{p}_{N}\mathcal{T}_{M}- \mathbf{p}_{N}\mathcal{T} \Vert _{\infty }\rightarrow0 $$

as \(M\rightarrow\infty\) on the Banach space \(\mathbb{C}^{0,\lambda}\). In order to show that \(I-\mathbf{p}_{N}\mathcal{T}_{M}\) is invertible, we note that

$$I-\mathbf{p}_{N}\mathcal{T}_{M}=(I-\mathbf{p}_{N} \mathcal {T}) \bigl(I+(I-\mathbf{p}_{N}\mathcal{T})^{-1}( \mathbf{p}_{N}\mathcal {T}- \mathbf{p}_{N}\mathcal{T}_{M})\bigr). $$

By the arguments of the previous section, \((I-\mathbf{p}_{N}\mathcal {T})\) is invertible for sufficiently large N. Also, \((I+(I-\mathbf {p}_{N}\mathcal{T})^{-1}(\mathbf{p}_{N}\mathcal{T}-\mathbf {p}_{N}\mathcal{T}_{M}))\) is invertible by the geometric series theorem for sufficiently large M, since \(\|(I-\mathbf{p}_{N}\mathcal {T})^{-1}(\mathbf{p}_{N}\mathcal{T}-\mathbf{p}_{N}\mathcal {T}_{M})\|_{\infty}\rightarrow0\) as \(M\rightarrow\infty\). Thus, we conclude that, for all sufficiently large M and N, say \(M>M_{0}\) and \(N>N_{0}\), the operator \(I-\mathbf{p}_{N}\mathcal{T}_{M}\) is invertible. This fact guarantees the existence of the numerical solution, and we can write

$$\mathbf{y}_{N,M}=(I-\mathbf{p}_{N}\mathcal{T}_{M})^{-1} \mathbf {p}_{N}\widetilde{\mathbf{f}}_{N}. $$

In order to provide the convergence analysis, we note that

$$\Vert \mathbf{y}_{N,M}-\mathbf{y} \Vert _{\infty}\leq \Vert \mathbf {y}_{N,M}-\mathbf{y}_{N} \Vert _{\infty}+ \Vert \mathbf{y}_{N}-\mathbf{y} \Vert _{\infty}. $$

Therefore, it remains to establish the convergence of \(\|\mathbf {y}_{N,M}-\mathbf{y}_{N}\|_{\infty}\). Since

$$\mathbf{y}_{N,M}-\mathbf{y}_{N}=\mathbf{p}_{N} \mathcal{T}_{M}\mathbf {y}_{N,M}-\mathbf{p}_{N} \mathcal{T}\mathbf{y}_{N}=\mathbf{p}_{N} \mathcal{T}_{M}(\mathbf{y}_{N,M}-\mathbf{y}_{N})+ \mathbf{p}_{N} (\mathcal {T}_{M}-\mathcal{T}) \mathbf{y}_{N}, $$

we conclude that, for sufficiently large M and N, we have

$$\mathbf{y}_{N,M}-\mathbf{y}_{N}=(I-\mathbf{p}_{N} \mathcal {T}_{M})^{-1}\mathbf{p}_{N} ( \mathcal{T}-\mathcal{T}_{M})\mathbf{y}_{N} $$

and hence

$$\Vert \mathbf{y}_{N,M}-\mathbf{y}_{N} \Vert _{\infty}\rightarrow0 $$

as \(M,N\rightarrow\infty\).

8 Numerical examples

To show the effectiveness and efficiency of the method, some examples are presented for illustration. In the rest of the paper, we assume that \(B(\alpha)=1\), and we measure the maximum error by

$$E_{i}(N)=\max\biggl\{ \bigl\vert y_{iN}(t)-y_{i}(t) \bigr\vert : t=jh, h=\frac{T}{100}, j=0,\ldots ,100\biggr\} , $$

for \(i=1,\ldots,n\). Here, \(y_{iN}\) and \(y_{i}\) denote the ith components of the approximate and the exact solutions, respectively. In all the following numerical experiments, we set \(M=N\), unless otherwise stated.
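As a concrete illustration of this error measure, the grid maximum can be computed as follows (a sketch; `y_approx` and `y_exact` stand for callable approximate and exact solution components, which are assumptions of this snippet):

```python
import numpy as np

def max_error(y_approx, y_exact, T=1.0, m=100):
    """E_i(N): maximum of |y_iN(t) - y_i(t)| over the grid t_j = j*T/m, j = 0..m."""
    t = np.linspace(0.0, T, m + 1)
    return float(np.max(np.abs(y_approx(t) - y_exact(t))))
```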

Example 8.1


Let us consider the initial value problem described by (1) with \(A=0\), \(f(t)=t\) and \(y_{0}=0\). The exact solution of this system is

$$y(t)=y_{0}+\frac{1-\alpha}{B(\alpha)}t+\frac{\alpha}{\Gamma(\alpha +2)B(\alpha)}t^{1+\alpha}. $$
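This exact solution is straightforward to evaluate directly; a sketch, with \(B(\alpha)=1\) as assumed throughout the numerical section:

```python
from math import gamma

def y_exact(t, alpha, B=1.0, y0=0.0):
    """Exact solution of Example 8.1:
    y(t) = y0 + (1 - alpha)/B * t + alpha/(Gamma(alpha + 2) * B) * t**(1 + alpha)."""
    return y0 + (1.0 - alpha) / B * t + alpha / (gamma(alpha + 2.0) * B) * t ** (1.0 + alpha)
```

Note that for \(\alpha=1\) the expression reduces to \(t^{2}/2\), the classical antiderivative of \(f(t)=t\), which is a convenient sanity check.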

Table 1 shows the approximate and exact solutions at \(t=0,0.2,\ldots,1\), for different values of N. The table shows that, as N increases, the approximate solution converges to the exact solution.

Table 1 The approximation and exact solution of Example 8.1 for \(\alpha=0.5\)

Example 8.2

Consider the non-homogeneous system (1) with \(A=\mathbf{I}\), \(\mathbf{y}_{0}=[1,0,\ldots ,0]^{T}\), and

$$\mathbf{f}(t)= \begin{pmatrix} -1 \\ \frac{B(\alpha)}{1-\alpha}\,t\,E_{\alpha,2} \bigl(-\frac{\alpha}{1-\alpha}t^{\alpha} \bigr)-t \\ \frac{B(\alpha)\,2!}{1-\alpha}\,t^{2}E_{\alpha,3} \bigl(-\frac{\alpha}{1-\alpha}t^{\alpha} \bigr)-t^{2} \\ \vdots \\ \frac{B(\alpha)\,(n-1)!}{1-\alpha}\,t^{n-1}E_{\alpha,n} \bigl(-\frac{\alpha}{1-\alpha}t^{\alpha} \bigr)-t^{n-1} \end{pmatrix}, $$

with exact solution \(\mathbf{y}_{od}(t)=[1,t,t^{2},\ldots,t^{n-1}]^{T}\) on \([0,1]\). Tables 2 and 3 show the maximum error for \(\alpha=0.5\) and \(\alpha=0.9\) and various N. As these tables show, the maximum error decreases rapidly as N increases, and the proposed method converges to the exact solution.
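Evaluating f above requires the two-parameter Mittag-Leffler function; for moderate arguments it can be computed directly from its power series. The following is a sketch (specialized algorithms are needed for large \(|z|\), which this naive truncation does not handle):

```python
from math import gamma

def mittag_leffler(z, alpha, beta, terms=60):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z), computed as the
    truncated power series sum_{k=0}^{terms-1} z**k / Gamma(alpha*k + beta)."""
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))
```

Useful checks: \(E_{1,1}(z)=e^{z}\) and \(E_{1,2}(z)=(e^{z}-1)/z\), both recovered by the series.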

Table 2 The max error for Example 8.2 for \(\alpha =0.5\)
Table 3 The max error for Example 8.2 for \(\alpha =0.9\)

By Theorem 6.5, \(\|\mathbf{y}-\mathbf{y}_{N}\|_{\infty}\) is bounded by \(c(\|\mathbf{y}-\mathbf{p}_{N}\mathbf{y}\|_{\infty}+\| \mathbf{f}-\mathbf{p}_{N} \mathbf{f}\|_{\infty})\). Therefore, we set \(c=e^{3}\) by experiment and plot the natural logarithms of both quantities versus N. Figures 1 and 2 show that \(\|\mathbf{y}-\mathbf {y}_{N}\|_{\infty}\) and \(c(\|\mathbf{y}-\mathbf{p}_{N}\mathbf{y}\| _{\infty}+\|\mathbf{f}-\mathbf{p}_{N} \mathbf{f}\|_{\infty})\) behave similarly, which confirms the theoretical analysis. These numerical experiments are obtained by setting \(M=N+30\) and \(\alpha=0.9\). For the first component, the two functions f and y are polynomials, so we expect the approximate solution to be exact up to floating point error. Tables 2 and 3 confirm this fact.

Figure 1
figure 1

The natural logarithm of the \(\|y-y_{N}\|_{\infty}\) versus N, in Example 8.2

Figure 2
figure 2

The natural logarithm of the \(e^{3}(\|y-p_{N}y\|_{\infty}+\| f-p_{N} f\|_{\infty})\) versus N, in Example 8.2

Example 8.3

Consider the system (1) with

$$A= \begin{pmatrix} -0.09 & 0.038 \\ 0.66 & -0.038 \end{pmatrix}, $$

\(\mathbf{y}_{0}=[0,1]^{T}\), and \(\mathbf{f}(t)=\mathbf{f}_{1}+\mathbf {f}_{2}t\) with

$$\mathbf{f}_{1}= \begin{pmatrix} 0.038 \\ 0.038 \end{pmatrix},\qquad \mathbf{f}_{2}= \begin{pmatrix} 1 \\ 1 \end{pmatrix}. $$

We can obtain the exact solution using the Laplace transform as follows:

$$ \begin{aligned}[b] \mathbf{y}_{AB}(t)&=E_{\alpha}\biggl( \frac{\alpha}{B(\alpha )}E^{-1}At^{\alpha}\biggr)E^{-1} \mathbf{y}_{0} \\ &\quad{}+\frac{1-\alpha}{B(\alpha)}E_{\alpha,1}\biggl(\frac{\alpha }{B(\alpha)}E^{-1}At^{\alpha} \biggr)E^{-1}\mathbf{f}_{1} \\ &\quad{}+\frac{1-\alpha}{B(\alpha)}tE_{\alpha,2}\biggl(\frac{\alpha }{B(\alpha)}E^{-1}At^{\alpha} \biggr)E^{-1}\mathbf{f}_{2} \\ &\quad{}+\frac{\alpha}{B(\alpha)}t^{\alpha}E_{\alpha,1+\alpha }\biggl( \frac{\alpha}{B(\alpha)}E^{-1}At^{\alpha}\biggr)E^{-1} \mathbf{f}_{1} \\ &\quad{}+\frac{\alpha}{B(\alpha)}t^{\alpha+1} E_{\alpha,2+\alpha }\biggl( \frac{\alpha}{B(\alpha)}E^{-1}At^{\alpha}\biggr)E^{-1} \mathbf{f}_{2}. \end{aligned} $$
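The matrix Mittag-Leffler functions appearing in this expression can be approximated, for moderate t, by truncating the defining power series. The following is a sketch, where `M` stands for a matrix argument such as \(\frac{\alpha}{B(\alpha)}E^{-1}At^{\alpha}\):

```python
import numpy as np
from math import gamma

def ml_matrix(M, alpha, beta, terms=60):
    """Matrix Mittag-Leffler E_{alpha,beta}(M) = sum_k M^k / Gamma(alpha*k + beta),
    truncated after `terms` terms; adequate while the powers of M stay moderate."""
    S = np.zeros_like(M, dtype=float)
    P = np.eye(M.shape[0])  # running power M^k, starting from M^0 = I
    for k in range(terms):
        S += P / gamma(alpha * k + beta)
        P = P @ M
    return S
```

For \(\alpha=\beta=1\) this reduces to the matrix exponential, which gives an easy correctness check.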

Table 4 shows the maximum error for \(\alpha=0.9\) and various N. Figures 3 and 4 show the numerical solution for various α on \([0,15]\). We observe that, as α approaches 0, the solution of this system approaches the solution of the algebraic linear system

$$ \mathbf{y}-\mathbf{y}_{0}=A\mathbf{y}+\mathbf{f}(t). $$
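This limiting system is a plain linear solve, \((I-A)\mathbf{y}=\mathbf{y}_{0}+\mathbf{f}(t)\). A sketch with the data of Example 8.3 (the signs of A follow the drug-therapy system of Sect. 8.1; the values of \(\mathbf{f}_{1}\), \(\mathbf{f}_{2}\) are as printed above):

```python
import numpy as np

# Data of Example 8.3 (signs of A as in the governing system of Sect. 8.1)
A = np.array([[-0.09, 0.038],
              [0.66, -0.038]])
y0 = np.array([0.0, 1.0])
f1 = np.array([0.038, 0.038])
f2 = np.array([1.0, 1.0])

def limiting_solution(t):
    """Solve (I - A) y = y0 + f(t), the formal alpha -> 0 limit of the system."""
    return np.linalg.solve(np.eye(2) - A, y0 + f1 + f2 * t)
```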

Figure 5 shows the error behavior of this example for \(\alpha =0.5\). Theoretically, we expect \(\|y-y_{N}\|_{\infty}\) to be bounded by \(c(\| y-p_{N}y\|_{\infty}+\|f-p_{N} f\|_{\infty})\) for some constant c. We observe a similar behavior for both components of the error with \(c=e^{0.5}\). This is in complete agreement with the error analysis.

Figure 3
figure 3

Numerical solution of Example 8.3 for the first component with \(N=10\)

Figure 4
figure 4

Numerical solution of Example 8.3 for the second component with \(N=10\)

Figure 5
figure 5

The natural logarithm of the \(e^{0.5}(\|y-p_{N}y\|_{\infty}+\| f-p_{N} f\|_{\infty})\) and \(\|y-y_{N}\|_{\infty}\) versus N, in Example 8.3

Table 4 The max error for Example 8.3 for \(\alpha=0.9\), on \([0, 15]\)

8.1 Application

Since dynamical systems governed by ordinary or fractional differential equations are abundant in natural phenomena, we expect that, like other types of FDEs, AB type FDEs will also be successful in modeling natural phenomena. This was confirmed by the comparison in the previous section. Although the AB type derivative has been introduced only recently, many applications of it can be found in the literature discussed in the introduction. One example of modeling a natural phenomenon using AB type FDEs is the amount of the drug lidocaine in the bloodstream and body tissue [47]. The human disease of ventricular arrhythmia, or irregular heartbeat, is treated clinically using the drug lidocaine. Let \(X(t)\) be the amount of lidocaine in the bloodstream and \(Y(t)\) be the amount of lidocaine in body tissue. Then the dynamics of the drug therapy obeys the following system:

$$ \begin{aligned} &{_{0}D^{\alpha}_{t}}X(t)=-0.09X(t)+0.038Y(t), \\ &{_{0}D^{\alpha}_{t}}Y(t)=0.66X(t)-0.038Y(t). \end{aligned} $$

This system is equivalent to the system of Example 8.3 with \(\mathbf{f}=[0,0]^{T}\), and it has been solved with ordinary and Liouville–Caputo fractional derivatives in the literature. The solution of this system with the AB type fractional derivative is illustrated in Figs. 6 and 7 for various values of α.
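As a point of comparison with the figures, the classical case \(\alpha=1\) of this system has the closed form \(\mathbf{y}(t)=e^{At}\mathbf{y}_{0}\), which can be sketched via an eigendecomposition of A. This is only a baseline check under the stated coefficients, not the AB solver of this paper:

```python
import numpy as np

A = np.array([[-0.09, 0.038],
              [0.66, -0.038]])
y0 = np.array([0.0, 1.0])   # initial amounts in bloodstream and body tissue

def lidocaine_classical(t):
    """alpha = 1 baseline: y(t) = expm(A*t) @ y0 via eigendecomposition of A."""
    lam, V = np.linalg.eig(A)
    return np.real(V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ y0)
```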

Figure 6
figure 6

Amount of drug lidocaine in the bloodstream for various values of α

Figure 7
figure 7

Amount of drug lidocaine in the body tissue for various values of α

8.2 A comparison of the proposed method with other methods

Due to the novelty of the subject, few numerical methods are available in the literature for solving the system (1). We found only one, a predictor–corrector method introduced in [29] for solving such systems numerically. We use the numerical values reported in that paper to compare our proposed method with it. Consider Example 4.1 of [29]; in the notation of this paper, it corresponds to \(A=0\) and \(f(t)=0\). As we see in Table 5, the result of our method, even with small \(N=1,2,3\), is much better than that of [29].

Table 5 A comparison between the proposed method of this paper and the method proposed in [29], (Method 1), with \(N=1,2,3\) for \(\alpha=0.5\)

9 Conclusion

We used the Clenshaw–Curtis quadrature to obtain an operational matrix of the fractional integral based on Chebyshev polynomials. Then, by taking the fractional integral of both sides of the system of FDEs involving the Mittag-Leffler kernel, we obtained a system of weakly singular integral equations of the second kind involving the fractional integral of the non-homogeneous part. This system was transformed into a system of linear algebraic equations, and using the vectorization operator we obtained a standard linear algebraic system. Solving it, we obtained an approximate solution of the system of AB type FDEs. Numerical examples showed that the proposed method solves the system of FDEs effectively and accurately. The success of the new definition of the fractional derivative and integral involving the Mittag-Leffler kernel makes it important to improve numerical methods for nonlinear systems of FDEs and to implement numerical methods in other bases. Future studies with this new type of calculus should therefore gather more experimental data and compare the related results with those obtained using the Liouville–Caputo and Riemann–Liouville derivatives. Also, the stability of the related fractional differential equations needs more investigation.