1 Introduction

Many computational problems in research areas such as mathematics, fluid dynamics, chemistry, biology, viscoelasticity, engineering and physics arise on semi-infinite domains [19]. Accordingly, many researchers have applied various transformations to orthogonal polynomials to map \([ -1, 1]\) onto \([0, \infty)\) while preserving their orthogonality [10–15].

Spectral methods provide a computational approach that has become increasingly well known over the last decade and has been the topic of study for many researchers [16–26], especially in connection with fractional calculus [9, 27–38], an important branch of applied mathematics. Fractional differentiation and integration can be viewed as generalizations of the usual definitions of differentiation and integration to non-integer order.

In this paper, we study coupled Burgers equations with time-fractional derivatives given by

$$\begin{aligned}& \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}=\frac{\partial^{2} u}{\partial x^{2}}+2 u \frac{\partial u}{\partial x}- \frac{\partial (uv)}{\partial x},\quad 0 < \alpha< 1, \end{aligned}$$
(1)
$$\begin{aligned}& \frac{\partial^{\beta} v(x,t)}{\partial t^{\beta}}=\frac{\partial^{2} v}{\partial x^{2}}+2 v \frac{\partial v}{\partial x}- \frac{\partial (uv)}{\partial x}, \quad 0 < \beta< 1 \end{aligned}$$
(2)

on the semi-infinite domain \([0,\infty)\).

The coupled Burgers equations have recently been applied to different areas of science, in particular in physical problems such as the phenomena of turbulence flow through a shock wave traveling in a viscous fluid (see [39, 40]).

The study of coupled Burgers equations is important because the system is a basic model of sedimentation, i.e., the evolution of scaled volume concentrations of two kinds of particles in liquid suspensions or colloids under the effect of gravity [40]. It has been studied by many authors using various techniques [41–46].

In this paper, we introduce a collocation method based on exponential Chebyshev functions, built from the orthogonal Chebyshev polynomials, to solve time-fractional coupled Burgers equations. The fractional derivative is taken in the Caputo sense in the time variable, which is discretized using either a trapezoidal quadrature formula (TQF) or a finite difference method (FDM).

The motivation of this paper is that the exponential Chebyshev method is efficiently applicable on unbounded domains to problems with the steady-state property \(u(\infty)=\mathit{constant}\), i.e., to solutions that are regular at ∞. In fact, many problems in mathematical physics and astrophysics posed on a semi-infinite interval are related to diffusion-type equations such as the Burgers, KdV and heat equations. Furthermore, many methods based on polynomial bases, such as the Legendre, Chebyshev and Laguerre spectral methods, and also semi-analytic methods such as the Adomian decomposition, variational iteration and differential transform methods, cannot capture the steady-state property \(u(\infty )=\mathit{constant}\). In this study we show that this difficulty is surmounted by the proposed method.

The error analysis of exponential Chebyshev functions expansion has also been investigated, which confirms the efficiency of the method.

2 Definitions and basic properties

In this section, we give some definitions and basic properties of fractional calculus and Chebyshev polynomials which are required for our subsequent development.

2.1 Definition of fractional calculus

Here we recall definition and basic results of fractional calculus; for more details, we refer to [32].

Definition 1

A real function \(u(t)\), \(t>0\) is said to be in the space \(C_{\mu}\), \(\mu\in\mathbb{R}\) if there exists a real number \(p>\mu\) such that \(u(t)=t^{p}u_{1}(t)\), where \(u_{1}(t)\in C(0,\infty)\), and it is said to be in the space \(C_{\mu}^{n}\) if and only if \(u^{(n)}\in C_{\mu}\), \(n\in\mathbb{N}\).

Definition 2

The Riemann-Liouville fractional integral operator of order \(\alpha>0\), of a function \(u\in C_{\mu}\), \(\mu\geq-1\), is defined as

$$\begin{aligned}& I^{\alpha}u(t)=\frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1}u(s)\,ds,\quad \alpha>0, \\& I^{0} u(t)=u(t), \end{aligned}$$

where \(\Gamma(\cdot)\) is the well-known gamma function.

Definition 3

The fractional derivative of \(u(t)\) in the Caputo sense is defined as

$$\begin{aligned} D^{\alpha}u(t)=I^{m-\alpha}D^{m} u(t), \end{aligned}$$

for \(m-1<\alpha\leq m\), \(m\in\mathbb{N}\), \(t>0\) and \(u\in C_{-1}^{m}\). Also it can be rewritten in the following form:

$$D^{\alpha}u(t) =\frac{1}{\Gamma(m-\alpha)} \int_{0}^{t}(t-x)^{m-\alpha-1}u^{(m)}(x)\,dx. $$

Similar to integer-order differentiation, Caputo fractional differential has the linear property:

$$D^{\alpha} \bigl( c_{1}f_{1}(t)+c_{2}f_{2}(t) \bigr) =c_{1}D^{\alpha}f_{1}(t)+c_{2}D^{\alpha}f_{2}(t), $$

where \(c_{1}\) and \(c_{2}\) are constants. Moreover, the Caputo derivative has the following basic properties:

$$\begin{aligned} & \mbox{(i)}\quad D^{\alpha}t^{\gamma}= \textstyle\begin{cases} \frac{\Gamma(\gamma+1)}{\Gamma(\gamma-\alpha+1)} t^{\gamma-\alpha}, & \mbox{for } \gamma\in\mathbb{N}_{0}\mbox{ and } \gamma\geq\lceil \alpha\rceil \mbox{ or } \gamma\notin\mathbb{N}\mbox{ and } \gamma> \lfloor \alpha\rfloor, \\ 0, & \mbox{for } \gamma\in\mathbb{N}_{0}\mbox{ and } \gamma< \lceil\alpha\rceil, \end{cases}\displaystyle \end{aligned}$$
(3)
$$\begin{aligned} & \mbox{(ii)}\quad D^{\alpha}(c)=0, \\ & \mbox{(iii)}\quad I^{\alpha}D^{\alpha} u(t) = u(t)- \sum_{i=0}^{m-1}\frac{u^{(i)}(0)}{i!}t^{i}, \end{aligned}$$
(4)

where c is constant, \(\lfloor\alpha\rfloor\) and \(\lceil\alpha\rceil\) are floor and ceiling functions, respectively, \(\mathbb{N}_{0}= \lbrace0,1,2,\ldots \rbrace\) and \(\mathbb{N}= \lbrace1,2,\ldots \rbrace\).
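As a quick numerical sanity check of property (i), the closed form can be compared against direct quadrature of the Caputo integral in Definition 3. The following Python sketch (the function names are ours, and the crude midpoint rule is only meant for illustration) does this for \(D^{0.5}t^{2}\):

```python
import math

def caputo_power_rule(gamma, alpha, t):
    """Closed form D^alpha t^gamma from property (i), for gamma > alpha."""
    return math.gamma(gamma + 1) / math.gamma(gamma - alpha + 1) * t ** (gamma - alpha)

def caputo_quadrature(gamma, alpha, t, n=200_000):
    """Midpoint-rule evaluation of Definition 3 for u(t) = t^gamma, 0 < alpha < 1.

    Here m = 1, so u^(1)(x) = gamma * x^(gamma - 1), and the kernel
    (t - x)^(-alpha) has an integrable singularity at x = t.
    """
    h = t / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h          # midpoint of the k-th subinterval
        total += (t - x) ** (-alpha) * gamma * x ** (gamma - 1)
    return total * h / math.gamma(1 - alpha)

# D^0.5 t^2 at t = 1: the closed form gives 2 / Gamma(2.5) ~ 1.5045
print(caputo_power_rule(2, 0.5, 1.0))
print(caputo_quadrature(2, 0.5, 1.0))
```

The two values agree up to the quadrature error of the midpoint rule near the weak singularity at \(x=t\).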

2.2 Exponential Chebyshev functions

The well-known first kind Chebyshev polynomials of degree n, defined on the interval \([-1,1]\), are given by

$$T_{n}(s)=\cos \bigl(n \cos^{-1}(s) \bigr), $$

where \(s=\cos(\theta)\), and thus the following property is immediately obtained:

$$\begin{aligned} T_{n}(s)=\cos(n \theta), \qquad \bigl\vert T_{n}(s)\bigr\vert \leq1. \end{aligned}$$
(5)

Also, we have the relation

$$\begin{aligned} T_{n+1}(s)=2sT_{n}(s)-T_{n-1}(s), \quad n=1,2,3, \ldots, \end{aligned}$$
(6)

where \(T_{0}(s)=1\) and \(T_{1}(s)=s\). \(T_{n}(s)\) is the eigenfunction of the singular Sturm-Liouville problem

$$\begin{aligned} \sqrt{1-s^{2}} \partial_{s} \bigl(\sqrt{1-s^{2}} \partial_{s} T_{n}(s) \bigr)+n^{2}T_{n}(s)=0. \end{aligned}$$
(7)

The first kind Chebyshev polynomials are orthogonal in the interval \([-1,1]\) with respect to the weight function

$$w(s) =\frac{ 1}{\sqrt{1-s^{2}}}. $$

The analytic form of the first kind Chebyshev polynomials of degree n is given by

$$\begin{aligned} T_{n}(s)=n\sum_{k=0}^{n} (-2)^{k}\frac{(n+k-1)!}{(n-k)!(2k)!}(1-s)^{k}, \quad n>0. \end{aligned}$$
(8)

From Eq. (5), the zeroes of \(T_{n}(s)\) are obtained as follows:

$$\begin{aligned} s_{k}=\cos \biggl(\pi\frac{2k+1}{2n} \biggr), \quad k=0,1, \ldots, n-1. \end{aligned}$$
(9)

Definition 4

Exponential Chebyshev functions

We use an exponential transformation to obtain new functions defined on the semi-infinite interval. The nth exponential Chebyshev function is defined through the one-to-one transformation

$$s=1-2e^{-\frac{x}{L}},\quad L>0, $$

as

$$\begin{aligned} E_{n}(x) = T_{n}(s)=T_{n} \bigl(1-2e^{-\frac{x}{L}}\bigr). \end{aligned}$$
(10)

According to (6) and (10), we deduce the recurrence relation for \(E_{n}(x)\) in the form

$$E_{n+1}{(x)}=2\bigl(1-2e^{-\frac{x}{L}}\bigr)E_{n}(x)-E_{n-1}(x), $$

with starting values \(E_{0}(x)=1\), \(E_{1}(x)=1-2e^{-\frac{x}{L}}\). The first few exponential Chebyshev functions of the first kind are as follows:

$$\begin{aligned} \textstyle\begin{cases} E_{0}(x)=1, \\ E_{1}(x)=1-2e^{-\frac{x}{L}}, \\ E_{2}(x)=1-8e^{-\frac{x}{L}}+8e^{-\frac{2x}{L}},\\ E_{3}(x)=1-18e^{-\frac{x}{L}}+48e^{-\frac{2x}{L}}-32e^{-\frac{3x}{L}}. \end{cases}\displaystyle \end{aligned}$$
(11)

According to (7), \(E_{n}(x)\) is the nth eigenfunction of the singular Sturm-Liouville problem

$$\begin{aligned} L^{2}\sqrt{\exp(x/L)-1} \partial_{x} \bigl(\sqrt{\exp(x/L)-1} \partial_{x} E_{n}(x) \bigr)+n^{2}E_{n}(x)=0. \end{aligned}$$
(12)

Also, from formula (8), we can directly construct the nth exponential Chebyshev functions as

$$\begin{aligned} E_{n}(x) = n \sum_{k=0}^{n} (-4)^{k} \frac{(n+k-1)!}{(n-k)!(2k)!} \exp \biggl( {-k\frac{x}{L}} \biggr) , \quad n>0. \end{aligned}$$
(13)

The roots of \(E_{n}(x)\) are immediately obtained from (9) as follows:

$$\begin{aligned} x_{k}=-L\ln \biggl(\frac{1-s_{k}}{2} \biggr), \quad k=0,1, \ldots, n-1. \end{aligned}$$
(14)

3 Function approximation

Let

$$\rho(x)=\frac{1}{L\sqrt{\exp(\frac{x}{L})-1}}, $$

which denotes a non-negative, integrable, real-valued function over the semi-infinite interval \(\Lambda=[0,\infty)\). Subsequently, we define

$$L^{2}_{\rho}(\Lambda)= \bigl\{ v:\Lambda\longrightarrow \mathbb{R} \vert v\text{ is measurable and }\Vert v\Vert _{\rho}< \infty \bigr\} , $$

where

$$\Vert v\Vert ^{2}_{\rho}= \int _{0}^{+\infty}{v^{2}(x) \rho (x) \,dx} $$

is the norm induced by the inner product of the space \(L^{2}_{\rho}(\Lambda)\),

$$\begin{aligned} \langle u,v\rangle_{\rho}= \int _{0}^{+\infty}{u(x) v(x) \rho(x) \,dx}. \end{aligned}$$
(15)

It is easily seen that \(\{E_{j}(x) \}_{j\geq0}\) is a system of mutually orthogonal functions under (15), i.e.,

$$\bigl\langle E_{n}(x),E_{m}(x)\bigr\rangle _{\rho}=c_{n} \delta_{nm}, \quad c_{0}=\pi, c_{n}= \frac{\pi}{2}, n\geq1. $$
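Under the substitution \(s=1-2e^{-x/L}\), i.e., \(s=\cos\theta\), the inner product (15) reduces to the standard Chebyshev inner product, so the orthogonality relation can be verified with Gauss-Chebyshev quadrature, which is exact for polynomial integrands of degree below \(2N\). A small Python check (the function name is ours):

```python
import math

def inner_product(n, m, N=32):
    """<E_n, E_m>_rho evaluated in the mapped variable s = 1 - 2 e^{-x/L}.

    Gauss-Chebyshev quadrature: int_{-1}^{1} f(s)/sqrt(1-s^2) ds
    ~ (pi/N) * sum f(cos(theta_k)), theta_k = pi(2k+1)/(2N),
    exact when f is a polynomial of degree < 2N.
    """
    total = 0.0
    for k in range(N):
        theta = math.pi * (2 * k + 1) / (2 * N)
        # T_n(cos theta) = cos(n theta), so E_n(x(s)) = cos(n theta)
        total += math.cos(n * theta) * math.cos(m * theta)
    return math.pi / N * total

assert abs(inner_product(0, 0) - math.pi) < 1e-12        # c_0 = pi
assert abs(inner_product(3, 3) - math.pi / 2) < 1e-12    # c_n = pi/2, n >= 1
assert abs(inner_product(2, 5)) < 1e-12                  # orthogonality
```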

The classical Weierstrass theorem implies that such a system is complete in the space \(L^{2}_{\rho}(\Lambda)\). Thus, for any function \(u(x)\in L^{2}_{\rho}(\Lambda)\), the following expansion holds:

$$\begin{aligned} u(x)=\sum_{j=0}^{+\infty}{a_{j}E_{j}(x)}, \end{aligned}$$
(16)

where

$$\begin{aligned} a_{j}=c_{j}^{-1} \int _{0}^{+\infty}{u(x)E_{j}(x) \rho(x)\,dx}=c_{j}^{-1}\bigl\langle u(x),E_{j}(x)\bigr\rangle _{\rho}. \end{aligned}$$
(17)

If the expansion (16) is truncated after the first \(m+1\) terms, then it can be written as

$$\begin{aligned} u(x)\simeq u_{m}(x)=\sum_{j=0}^{m}{a_{j}E_{j}(x)}. \end{aligned}$$
(18)
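The coefficients (17) can be computed in practice by evaluating the inner product in the mapped variable with Gauss-Chebyshev quadrature. The sketch below (our own helper names; \(L=1\) and the sample function \(u(x)=\sin(e^{-x})\), borrowed from the exact solution of Example 1, are illustrative choices) builds the truncated expansion (18); note also that \(\sum_{j} a_{j}\) and \(\sum_{j}(-1)^{j}a_{j}\) reproduce \(u(\infty)\) and \(u(0)\), exactly the structure exploited later in the boundary rows of (35)-(36):

```python
import math

L = 1.0  # map parameter for this illustration

def u(x):
    return math.sin(math.exp(-x))   # sample function, regular at infinity

def chebyshev_coeffs(m, N=64):
    """Coefficients a_j of (17) by Gauss-Chebyshev quadrature in s."""
    coeffs = []
    for j in range(m + 1):
        total = 0.0
        for k in range(N):
            theta = math.pi * (2 * k + 1) / (2 * N)
            s = math.cos(theta)
            x = -L * math.log((1.0 - s) / 2.0)   # inverse of s = 1 - 2e^{-x/L}
            total += u(x) * math.cos(j * theta)
        c_j = math.pi if j == 0 else math.pi / 2
        coeffs.append(math.pi / N * total / c_j)
    return coeffs

def u_m(x, coeffs):
    """Truncated expansion (18), with E_j evaluated through T_j(s)."""
    theta = math.acos(1.0 - 2.0 * math.exp(-x / L))
    return sum(a * math.cos(j * theta) for j, a in enumerate(coeffs))

a = chebyshev_coeffs(10)
err = max(abs(u(x) - u_m(x, a)) for x in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0))
print(err)   # small: the coefficients decay rapidly for this analytic map
```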

Now, we can estimate an upper bound for function approximation in a special case. Firstly, the error function \(e_{m}(x)\) can be defined in the following form:

$$\begin{aligned} e_{m}(x)=u(x)-u_{m}(x),\qquad x\in\Lambda. \end{aligned}$$
(19)

The completeness of the system \(\{E_{i}(x) \}_{i\geq0}\) is equivalent to the following property as m tends to infinity:

$$u_{m}(x)\longrightarrow u(x),\qquad e_{m,w}=\bigl\Vert e_{m}(x)\bigr\Vert _{\rho}\longrightarrow0. $$

Accordingly, the \(L_{\infty}\) bound for \(e_{m}(x)\) will be

$$\begin{aligned} e_{m,\infty}&=\bigl\Vert e_{m}(x)\bigr\Vert _{\infty} =\max_{x\in\Lambda} \Biggl\vert \sum _{j=m+1}^{\infty}{a_{j}E_{j}(x)} \Biggr\vert = \max_{s\in [-1,1]} \Biggl\vert \sum_{j=m+1}^{\infty}{a_{j}T_{j}(s)} \Biggr\vert \\ &=\max_{0\leq\theta\leq2\pi} \Biggl\vert \sum_{j=m+1}^{\infty}{a_{j} \cos(j\theta)} \Biggr\vert \leq\sum_{j=m+1}^{\infty} \vert a_{j} \vert. \end{aligned}$$
(20)

Lemma 1

The \(L_{\infty}\) and \(L_{\rho}\) errors for a function \(u\in L^{2}_{\rho}(\Lambda)\), defined by (19), satisfy the following relations:

$$\begin{aligned} &e_{m,\rho}^{2}=\frac{2}{\pi} \sum _{i=m+1}^{\infty}\bigl\langle u(x),E_{i}(x)\bigr\rangle _{\rho}^{2}, \end{aligned}$$
(21)
$$\begin{aligned} &e_{m,\infty}=\bigl\Vert e_{m}(x)\bigr\Vert _{\infty} \leq\frac{2}{\pi}\sum_{i=m+1}^{\infty}\bigl\vert \bigl\langle u(x),E_{i}(x)\bigr\rangle _{\rho}\bigr\vert . \end{aligned}$$
(22)

Proof

The completeness of the system \(\{E_{i}(x) \}_{i\geq0}\) allows us to write the error as

$$e_{m,\rho}^{2}=\Biggl\Vert \sum_{i=m+1}^{\infty}a_{i}E_{i}(x) \Biggr\Vert _{\rho}^{2}. $$

Using the definition of \(\Vert \cdot \Vert _{\rho}\), one has

$$e_{m,\rho}^{2}=\sum_{i=m+1}^{\infty} \sum_{j=m+1}^{\infty}a_{i}a_{j} \bigl\langle E_{i}(x),E_{j}(x)\bigr\rangle _{\rho}= \frac{\pi}{2}\sum_{i=m+1}^{\infty} \sum _{j=m+1}^{\infty}a_{i}a_{j} \delta_{ij}= \frac{\pi}{2}\sum_{i=m+1}^{\infty}{a_{i}^{2}}. $$

Now by using Eq. (17) the first relation can be proved. Also,

$$\begin{aligned} e_{m,\infty}&=\bigl\Vert e_{m}(x)\bigr\Vert _{\infty} =\max_{x\in\Lambda} \Biggl\vert \sum _{i=m+1}^{\infty}{a_{i}E_{i}(x)} \Biggr\vert = \max_{s\in [-1,1]} \Biggl\vert \sum_{i=m+1}^{\infty}{a_{i}T_{i}(s)} \Biggr\vert \\ &=\max_{0\leq\theta\leq2\pi} \Biggl\vert \sum _{i=m+1}^{\infty}{a_{i}\cos(i\theta)} \Biggr\vert \leq\sum _{i=m+1}^{\infty} \vert a_{i} \vert. \end{aligned}$$
(23)

Consequently, (17) completes the proof. □

This lemma shows that the convergence of the exponential Chebyshev function approximation depends on the function \(u(x)\). Now, if the function \(u(x)\in L^{2}_{\rho}(\Lambda)\) has suitable properties, we can present an upper bound for the error of function approximation by these basis functions.

Theorem 1

Let \(u_{m}(x)\) be function approximation of \(u(x)\in L^{2}_{\rho}(\Lambda)\), obtained by (18), and \(\mathcal{U}(s)=u (-L\ln(\frac{1-s}{2}) )\) be analytic on \((-1,1)\), then an error bound for this approximation can be presented as follows:

$$e_{m,\infty}\leq M_{\infty}\frac{1}{(m+1)!} \biggl(\frac{1}{2} \biggr)^{m}, \qquad e_{m,\rho}\leq\sqrt{\frac{\pi}{3}}M_{\infty} \frac{1}{(m+1)!} \biggl(\frac{1}{2} \biggr)^{m+\frac{1}{2}}, $$

where \(M_{\infty}\geq2\max_{i}\sup_{s\in(-1,1)} \vert \mathcal {U}^{(i)}(s)\vert \).

Proof

Defining \(x= -L\ln(\frac{1-s}{2})\) gives

$$\bigl\langle u(x),E_{i}(x)\bigr\rangle _{\rho}= \int_{-1}^{1} \frac{\mathcal{U}(s) T_{i}(s)}{\sqrt{1-s^{2}}} \,ds. $$

Knowing that \(\mathcal{U}(s)\) is analytic, we have

$$\begin{aligned} \bigl\langle u(x),E_{i}(x)\bigr\rangle _{\rho}=\sum _{j=0}^{i-1}\frac{\mathcal {U}^{(j)}(0)}{j!} \int _{-1}^{1}s^{j}T_{i}(s)w(s)\,ds +\frac{\mathcal{U}^{(i)}(\eta_{i})}{i!} \int _{-1}^{1}s^{i}T_{i}(s)w(s)\,ds, \quad \eta_{i}\in(-1,1). \end{aligned}$$

Using the following properties of Chebyshev polynomials

$$\int _{-1}^{1}s^{j}T_{i}(s)w(s)\,ds=0, \quad j< i, \qquad \int _{-1}^{1}s^{i}T_{i}(s)w(s)\,ds= \frac{\pi}{2^{i}}, $$

yields

$$\bigl\langle u(x),E_{i}(x)\bigr\rangle _{\rho}= \frac{\pi\mathcal{U}^{(i)}(\eta_{i})}{i!2^{i}}. $$

Now, assuming \(M_{\infty}\geq2\max_{i}\sup_{s\in(-1,1)} \vert \mathcal {U}^{(i)}(s)\vert \) and using (23), we get

$$e_{m,\infty}\leq M_{\infty}\sum_{i=m+1}^{\infty} \frac{1}{i!2^{i}} \leq M_{\infty}\frac{1}{(m+1)!2^{m}}. $$

Now, according to Lemma 1, we can prove the theorem as follows:

$$\begin{gathered} e_{m,\rho}^{2}\leq\frac{\pi}{2}M_{\infty}^{2} \sum_{i=m+1}^{\infty} \frac{1}{(i!)^{2}2^{2i}}\leq\pi M_{\infty}^{2}\frac{1}{((m+1)!)^{2}} \sum _{i=m+1}^{\infty}\frac{1}{2^{2i+1}}, \\ e_{m,\rho}\leq\sqrt{\frac{\pi}{3}}M_{\infty} \frac{1}{(m+1)!} \biggl(\frac{1}{2} \biggr)^{m+\frac{1}{2}}. \end{gathered} $$

 □

By the previous theorem, any real function in \(L^{2}_{\rho}(\Lambda)\) whose image under the transformation \(x=-L\ln(\frac{1-s}{2})\) is analytic has a convergent series expansion of the form (18). Furthermore, we can show that the error defined in (19) exhibits the superlinear convergence defined below.

Definition 5

\(x_{m}\) tends to \(\bar{x}\) with superlinear convergence if there exist a positive sequence \(\lambda_{m}\longrightarrow0\) and an integer number N such that

$$\begin{aligned} \vert x_{m+1}-\bar{x}\vert \leq\lambda_{m} \vert x_{m}-\bar {x}\vert ,\quad m\geq N. \end{aligned}$$
(24)

Theorem 2

Under the assumptions of Theorem 1, the error converges superlinearly to zero.

Proof

Choosing the positive sequence

$$\lambda_{m}=\frac{1}{2m} $$

for Theorem 1 gives \(e_{m+1}\leq\lambda_{m} e_{m}\), and consequently, Definition 5 completes the proof. □

According to Theorem 2, any function \(u(x)\in L^{2}_{\rho}(\Lambda )\) that is analytic under the transformation \(x=-L\ln(\frac{1-s}{2})\) has a superlinearly convergent series expansion of the form (16).

4 Spectral collocation method to solve TFCBEs

In this section, we discuss the spectral collocation method to solve the following time-fractional coupled Burgers equations (TFCBEs):

$$ \begin{gathered} \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} = L_{1}\bigl[u(x,t),v(x,t)\bigr], \quad 0 < \alpha< 1, \\ \frac{\partial^{\beta} v(x,t)}{\partial t^{\beta}} =L_{2}\bigl[u(x,t),v(x,t)\bigr], \quad 0 < \beta< 1, \end{gathered} $$
(25)

where \(L_{1}\) and \(L_{2}\) are some derivative operators. The initial and boundary conditions are

$$\begin{aligned} &u(x,0)=I_{u}(x), \qquad v(x,0)=I_{v}(x), \\ &u(0,t)= B_{1}(t), \qquad u(\infty,t)=B_{2}(t), \\ &v(0,t)=B_{3}(t), \qquad v(\infty,t)=B_{4}(t). \end{aligned}$$

The functions \(u(x,t)\) and \(v(x,t)\) are discretized in time \(t=t_{n}\), and then they can be expanded by the exponential Chebyshev functions as follows:

$$\begin{aligned} &u(x,t_{n})\simeq u_{m}(x,t_{n})= \sum_{i=0}^{m}a_{i}^{n}E_{i}(x), \qquad v(x,t_{n})\simeq v_{m}(x,t_{n})=\sum _{i=0}^{m}b_{i}^{n}E_{i}(x). \end{aligned}$$
(26)

Also, the time-fractional derivative can be discretized by either TQF or FDM.

4.1 Trapezoidal quadrature formula

Now we consider the following fractional differential equation:

$$\begin{aligned} D_{*}^{\alpha}u(t) = f\bigl(u(t),t\bigr),\qquad u(0) = {u_{0}}, 0< \alpha< 1, \end{aligned}$$
(27)

which, by applying (4), converts to the Volterra integral equation

$$\begin{aligned} u(t) = u(0) + \frac{1}{{\Gamma(\alpha)}} \int _{0}^{t} {{{(t - s )}^{\alpha - 1}}f\bigl(u(s ),s \bigr)} \,ds. \end{aligned}$$
(28)

For the numerical computation of (28), the integral is replaced by TQF at point \(t_{n}\)

$$\begin{aligned} \int _{0}^{{t_{n} }} {{{({t_{n}} - s )}^{\alpha - 1}} g(s )} \,ds \approx \int _{0}^{{t_{n}}} {{{({t_{n}} - s )}^{\alpha - 1}} {{\widetilde{g}}_{n}}(s )} \,ds, \end{aligned}$$
(29)

where \(g(s)=f(s,u(s))\) and \({\widetilde{g}_{n} }(s )\) is the piecewise linear interpolation of g with nodes and knots chosen at \(t_{j}\), \(j=0,1,2,\ldots,n\). After some elementary calculations, the right-hand side of (29) gives [47]

$$\begin{aligned} \int _{0}^{{t_{n}}} {{{({t_{n} } - s )}^{\alpha- 1}} {{\widetilde{g}}_{n}}(s )} \,ds = \frac{{{\tau^{\alpha}} }}{{\alpha(\alpha + 1)}} \sum_{j = 0}^{n} {{{\gamma_{j,n} ^{(\alpha)}}}g({t_{j}})}, \end{aligned}$$
(30)

where

$$\begin{aligned} {\gamma_{j,n} ^{(\alpha)}} = \textstyle\begin{cases} {(n-1)^{\alpha+1}} - (n-1- \alpha){n ^{\alpha}},&\mbox{if }j = 0, \\ {(n-j+1)^{\alpha+1}} +{(n -j- 1)^{\alpha+ 1}} - 2 {(n-j)^{\alpha+1}},&\mbox{if }1 \le j \le n-1, \\ 1,& \mbox{if }j = n \end{cases}\displaystyle \end{aligned}$$
(31)

and each \({\gamma_{j,n} ^{(\alpha)}}\) is positive and uniformly bounded.

From (29) we immediately get

$$\begin{aligned}& \biggl\vert \int _{0}^{{t_{n} }} {{{({t_{n} } - s )}^{\alpha - 1}}g(s )} \,ds - \int _{0}^{{t_{n} }} {{{({t_{n} } - s )}^{\alpha - 1}} {{\widetilde{g}}_{n} }(s )} \,ds \biggr\vert \\& \quad \leq\max _{0 \le t \le{t_{n} }} \bigl\vert g(t) - {{\widetilde{g}}_{n} }(t) \bigr\vert \int _{0}^{{t_{n}}} \bigl\vert {{{({t_{n}} - s )}^{\alpha - 1}}} \bigr\vert \,ds, \end{aligned}$$
(32)

so that error bounds and orders of convergence for product integration follow from standard results of approximation theory. For a piecewise linear approximation to a smooth function \(g(t)\), the produced TQF is of second order [48].
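Since the product trapezoidal rule interpolates g piecewise linearly, it reproduces (29) exactly whenever g itself is linear, which gives a convenient test of the weights (31). A Python sketch (the function names are ours):

```python
import math

def tqf_weights(n, alpha):
    """Product-trapezoidal weights gamma_{j,n} of (31)."""
    w = []
    for j in range(n + 1):
        if j == 0:
            w.append((n - 1) ** (alpha + 1) - (n - 1 - alpha) * n ** alpha)
        elif j < n:
            w.append((n - j + 1) ** (alpha + 1) + (n - j - 1) ** (alpha + 1)
                     - 2 * (n - j) ** (alpha + 1))
        else:
            w.append(1.0)
    return w

def tqf_integral(g, n, tau, alpha):
    """Right-hand side of (30): int_0^{t_n} (t_n - s)^{alpha-1} g(s) ds."""
    w = tqf_weights(n, alpha)
    return tau ** alpha / (alpha * (alpha + 1)) * sum(
        w[j] * g(j * tau) for j in range(n + 1))

# piecewise-linear interpolation is exact for g(s) = s, so TQF reproduces
# int_0^t (t - s)^(alpha-1) s ds = t^(alpha+1) / (alpha (alpha + 1)) exactly
alpha, tau, n = 0.5, 0.25, 4
t = n * tau
approx = tqf_integral(lambda s: s, n, tau, alpha)
exact = t ** (alpha + 1) / (alpha * (alpha + 1))
print(approx, exact)
```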

Accordingly, the time-fractional derivative for Eqs. (25) can be converted to the following singular integro-partial differential equations:

$$\begin{aligned}& u(x,t)=u(x,0) + \frac{1}{{\Gamma(\alpha)}} \int _{0}^{t} {{{(t - s )}^{\alpha - 1}}L_{1} \bigl[u(x,s ),v(x,s )\bigr]} \,ds, \\& v(x,t)=v(x,0) + \frac{1}{{\Gamma(\beta)}} \int _{0}^{t} {{{(t - s )}^{\beta - 1}}L_{2} \bigl[u(x,s ),v(x,s )\bigr]} \,ds. \end{aligned}$$

Then TQF (30) together with (26) gives

$$\begin{aligned} &u_{m}(x,t_{n})=I_{u}(x) + s_{\alpha}\sum_{j = 0}^{n} {{\gamma_{j,n}^{(\alpha)}}L_{1}\bigl[u_{m}(x,t_{j} ),v_{m}(x,t_{j} )\bigr]}, \end{aligned}$$
(33)
$$\begin{aligned} &v_{m}(x,t_{n})=I_{v}(x) + s_{\beta}\sum_{j = 0}^{n} {{\gamma_{j,n} ^{(\beta)}}L_{2}\bigl[u_{m}(x,t_{j} ),v_{m}(x,t_{j} )\bigr]}, \end{aligned}$$
(34)

where \(s_{\alpha}={\tau^{\alpha}}/{{\Gamma(\alpha+2 )}}\) and \(s_{\beta}={\tau^{\beta}}/{{\Gamma(\beta+2 )}}\). From the above equations, the unknown coefficients \(a_{i}^{n}\) and \(b_{i}^{n}\), \(i=0,1,\ldots,m\), must be determined at every time step. To do so, we use the \(m-1\) collocation nodes \(x_{k}\), the roots of \(E_{m-1}(x)\), together with the boundary conditions as follows:

$$\begin{aligned}& u_{m}(x_{k},t_{n})=I_{u}(x_{k}) + s_{\alpha}\sum_{j = 0}^{n} {{\gamma_{j,n}^{(\alpha)}}L_{1}\bigl[u_{m}(x_{k},t_{j} ),v_{m}(x_{k},t_{j} )\bigr]}, \\& v_{m}(x_{k},t_{n})=I_{v}(x_{k}) + s_{\beta} \sum_{j = 0}^{n} {{\gamma_{j,n} ^{(\beta)}}L_{2}\bigl[u_{m}(x_{k},t_{j} ),v_{m}(x_{k},t_{j} )\bigr]}, \\& \sum_{i=0}^{m}(-1)^{i}a_{i}^{n}=B_{1}(t_{n}), \qquad\sum_{i=0}^{m}a_{i}^{n}=B_{2}(t_{n}), \end{aligned}$$
(35)
$$\begin{aligned}& \sum_{i=0}^{m}(-1)^{i}b_{i}^{n}=B_{3}(t_{n}), \qquad\sum_{i=0}^{m}b_{i}^{n}=B_{4}(t_{n}). \end{aligned}$$
(36)

For any time step, the above equations form an algebraic system of nonlinear equations with \(2m+2\) unknowns which can be solved by the fixed point iterative method.
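The structure of such a time-stepping loop with a fixed-point solve at every level can be illustrated on the scalar toy problem \(D^{\alpha}u=-u\), \(u(0)=1\), where the collocation system degenerates to a single equation per step. This is only a sketch of the iteration, not the full method (all names are ours); for \(\alpha=1\) the scheme reduces to the classical trapezoidal rule, so it can be checked against \(e^{-t}\):

```python
import math

def gamma_weights(n, alpha):
    """TQF weights (31), restated so this sketch is self-contained."""
    def g(j):
        if j == 0:
            return (n - 1) ** (alpha + 1) - (n - 1 - alpha) * n ** alpha
        if j == n:
            return 1.0
        return ((n - j + 1) ** (alpha + 1) + (n - j - 1) ** (alpha + 1)
                - 2 * (n - j) ** (alpha + 1))
    return [g(j) for j in range(n + 1)]

def solve_fractional_ode(alpha, f, u0, T, steps, iters=50):
    """March D^alpha u = f(u), u(0) = u0, with TQF and fixed-point iteration.

    A scalar stand-in for the collocation systems (33)-(34): at every time
    level the implicit equation is solved by fixed-point iteration.
    """
    tau = T / steps
    s_a = tau ** alpha / math.gamma(alpha + 2)
    u = [u0]
    for n in range(1, steps + 1):
        w = gamma_weights(n, alpha)
        history = u0 + s_a * sum(w[j] * f(u[j]) for j in range(n))
        un = u[-1]                      # warm start from the previous level
        for _ in range(iters):          # solve u_n = history + s_a * w_n * f(u_n)
            un = history + s_a * w[n] * f(un)
        u.append(un)
    return u

# for alpha = 1 the scheme is the classical trapezoidal rule for u' = -u,
# so u(1) should be close to e^{-1}
u = solve_fractional_ode(1.0, lambda v: -v, 1.0, 1.0, 64)
print(u[-1], math.exp(-1.0))
```

The fixed-point map contracts because \(s_{\alpha}\gamma_{n,n}^{(\alpha)}\) is small for small τ; in practice a Newton-type solver (as used in Section 5) converges faster.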

4.2 Finite difference approximations for time-fractional derivative

In this section, a fractional order finite difference approximation [27, 49] for the time-fractional partial differential equations is proposed.

Define \(t_{j}=j\tau\), \(j=0,1,2,\ldots,n\), where \(\tau={T}/{n}\). The time-fractional derivative term of order \(0< \alpha\leq1 \) with respect to time at \(t=t_{n}\) is approximated by the following scheme:

$$ \begin{aligned}[b] \frac{\partial^{\alpha}u(x,t_{n})}{\partial t^{\alpha}} &= \frac{1}{\Gamma(1-\alpha)} \int_{t_{0}}^{t_{n}}(t_{n}-s)^{-\alpha} \frac{\partial u(x,s)}{\partial s}\,ds \\ &=\frac{1}{\Gamma(1-\alpha)} \sum_{j=0}^{n-1} \int_{t_{j}}^{t_{j+1}} (t_{n}-s)^{-\alpha} \frac{\partial u(x,s)}{\partial s}\,ds \\ &\simeq\frac{1}{\Gamma(1-\alpha)} \sum_{j=0}^{n-1} \int_{t_{j}}^{t_{j+1}} (t_{n}-s)^{-\alpha} \frac{u(x,t_{j+1})-u(x,t_{j})}{\tau}\,ds \\ &= \sum_{j=0}^{n-1}w_{n-j-1}^{(\alpha)} \bigl(u(x,t_{j+1})-u(x,t_{j}) \bigr) \\ &=w_{0}^{(\alpha)}u(x,t_{n})-w_{n-1}^{(\alpha)}u(x,t_{0}) +\sum_{j=1}^{n-1}\bigl(w_{n-j}^{(\alpha)}-w_{n-j-1}^{(\alpha)} \bigr)u(x,t_{j}). \end{aligned} $$
(37)

Similarly,

$$\begin{aligned} \frac{\partial^{\beta}v(x,t_{n})}{\partial t^{\beta}}\simeq w_{0}^{(\beta)}v(x,t_{n})-w_{n-1}^{(\beta)}v(x,t_{0}) +\sum_{j=1}^{n-1}\bigl(w_{n-j}^{(\beta)}-w_{n-j-1}^{(\beta)} \bigr)v(x,t_{j}), \end{aligned}$$
(38)

where

$$\begin{gathered} w_{j}^{(\alpha)}=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \bigl((j+1)^{1-\alpha}-j^{1-\alpha}\bigr), \\ w_{j}^{(\beta)}=\frac{\tau^{-\beta}}{\Gamma(2-\beta)} \bigl((j+1)^{1-\beta }-j^{1-\beta} \bigr). \end{gathered} $$

We apply this formula to discretize the time variable. The rate of convergence of this formula is \(O(\tau^{2-\alpha}) \).
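This finite difference (L1-type) approximation is exact for \(u(t)=t\), since the difference quotient then coincides with \(\partial u/\partial s\), which gives a direct test of the weights \(w_{j}^{(\alpha)}\). A Python sketch (the function names are ours):

```python
import math

def l1_weights(n, alpha, tau):
    """Weights w_j^(alpha) of the scheme (37)."""
    return [tau ** (-alpha) / math.gamma(2 - alpha)
            * ((j + 1) ** (1 - alpha) - j ** (1 - alpha)) for j in range(n)]

def caputo_l1(u_vals, alpha, tau):
    """Approximate d^alpha u / dt^alpha at t_n from u(t_0), ..., u(t_n)."""
    n = len(u_vals) - 1
    w = l1_weights(n, alpha, tau)
    return sum(w[n - j - 1] * (u_vals[j + 1] - u_vals[j]) for j in range(n))

# the scheme is exact for u(t) = t, whose Caputo derivative is
# t^(1 - alpha) / Gamma(2 - alpha)
alpha, tau, n = 0.5, 0.1, 10
t = n * tau
u_vals = [j * tau for j in range(n + 1)]
approx = caputo_l1(u_vals, alpha, tau)
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
print(approx, exact)
```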

Accordingly, Eqs. (25), using the initial conditions, are converted to

$$\begin{aligned} & w_{0}^{(\alpha)}u(x,t_{n})-L_{1} \bigl[u(x,t_{n}),v(x,t_{n})\bigr] =w_{n-1}^{(\alpha)}I_{u}(x) -\sum_{j=1}^{n-1}\bigl(w_{n-j}^{(\alpha)}-w_{n-j-1}^{(\alpha)} \bigr)u(x,t_{j}), \end{aligned}$$
(39)
$$\begin{aligned} & w_{0}^{(\beta)}v(x,t_{n})-L_{2} \bigl[u(x,t_{n}),v(x,t_{n})\bigr] =w_{n-1}^{(\beta)}I_{v}(x) -\sum _{j=1}^{n-1}\bigl(w_{n-j}^{(\beta)}-w_{n-j-1}^{(\beta)} \bigr)v(x,t_{j}). \end{aligned}$$
(40)

Again, similar to the last subsection, we use \(m-1\) collocation nodes \(x_{k}\), which are the roots of \(E_{m-1}(x)\), together with the boundary conditions (35) and (36) to obtain the unknown coefficients \(a_{i}^{n}\) and \(b_{i}^{n}\) at any time step.

5 Numerical experiments

In this section, we present five examples to illustrate the numerical results.

Example 1

Consider the following time-fractional coupled Burgers equations:

$$\begin{aligned}& \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} =\frac{\partial^{2} u}{\partial x^{2}}+2 u \frac{\partial u}{\partial x} - \frac{\partial(uv)}{\partial x}+f(x,t),\quad 0 < \alpha< 1, \\& \frac{\partial^{\beta} v(x,t)}{\partial t^{\beta}} =\frac{\partial^{2} v}{\partial x^{2}}+2 v \frac{\partial v}{\partial x} - \frac{\partial(uv)}{\partial x}+g(x,t),\quad 0 < \beta< 1 \end{aligned}$$

with the initial conditions

$$\begin{aligned} u(x,0)=1,\qquad v(x,0)=1, \end{aligned}$$

and the boundary conditions

$$\begin{aligned} &u(0,t)=0.841 t^{3} +1, \qquad v(0,t)=0.841 t^{3} +1, \\ &u(x,t),v(x,t)\rightarrow1 \quad\text{when } x\rightarrow \infty. \end{aligned}$$

Also, \(f(x,t)\) and \(g(x,t)\) are given by

$$\begin{aligned} &f(x,t)=\frac{3! \sin(e^{-x})t^{3-\alpha}}{\Gamma(4-\alpha)} +t^{3} e^{-2x} \sin \bigl(e^{-x}\bigr)-t^{3} e^{-x} \cos \bigl(e^{-x}\bigr), \\ &g(x,t)=\frac{3! \sin(e^{-x})t^{3-\beta}}{\Gamma(4-\beta)} +t^{3} e^{-2x} \sin \bigl(e^{-x}\bigr)-t^{3} e^{-x} \cos \bigl(e^{-x}\bigr). \end{aligned}$$

The exact solution of this problem is \(u(x,t)=v(x,t)=t^{3} \sin(e^{-x})+1\).

In the first problem, we explain the proposed method in more detail. Firstly, we approximate \(u(x,t_{n})\), \(v(x,t_{n})\) and their derivatives by ECFs as follows:

$$\begin{aligned} & u_{m}(x,t_{n}) =\sum_{i=0}^{m}a_{i}^{n} E_{i}(x), \qquad v_{m}(x,t_{n}) =\sum _{i=0}^{m}b_{i}^{n} E_{i}(x), \\ &\frac{\partial u_{m}(x,t_{n})}{\partial x} =\sum_{i=0}^{m}a_{i}^{n} E_{i}^{\prime}(x), \qquad \frac{\partial v_{m}(x,t_{n})}{\partial x} =\sum _{i=0}^{m}b_{i}^{n} E_{i}'(x), \\ &\frac{\partial^{2} u_{m}(x,t_{n})}{\partial x^{2}} =\sum_{i=0}^{m}a_{i}^{n} E_{i}''(x), \qquad \frac{\partial^{2} v_{m}(x,t_{n})}{\partial x^{2}} = \sum_{i=0}^{m}b_{i}^{n}E_{i}''(x). \end{aligned}$$

For this problem, the operators \(L_{1}\) and \(L_{2}\) defined in (25) after substituting the collocation nodes \(x_{k}\) are obtained as follows:

$$\begin{aligned}& L_{1k}^{j} =L_{1}\bigl[u_{m}(x_{k},t_{j}),v_{m}(x_{k},t_{j}) \bigr]\\& \hphantom{L_{1k}^{j} }=\sum_{i=0}^{m}a_{i}^{j}E_{i}^{\prime\prime}(x_{k}) +2\sum_{i=0}^{m}a_{i}^{j}E_{i}(x_{k}) \sum_{i=0}^{m}a_{i}^{j} E_{i}^{\prime}(x_{k}) \\& \hphantom{L_{1k}^{j} =} {}-\sum_{i=0}^{m}a_{i}^{j}E_{i}(x_{k}) \sum_{i=0}^{m}b_{i}^{j} E_{i}^{\prime}(x_{k}) -\sum _{i=0}^{m}b_{i}^{j}E_{i}(x_{k}) \sum_{i=0}^{m}a_{i}^{j} E_{i}^{\prime}(x_{k}) +f(x_{k},t_{j}), \\& L_{2k}^{j}=L_{2}\bigl[u_{m}(x_{k},t_{j}),v_{m}(x_{k},t_{j}) \bigr]\\& \hphantom{L_{2k}^{j}}=\sum_{i=0}^{m}b_{i}^{j}E_{i}^{\prime\prime}(x_{k}) +2\sum_{i=0}^{m}b_{i}^{j}E_{i}(x_{k}) \sum_{i=0}^{m}b_{i}^{j} E_{i}^{\prime}(x_{k}) \\& \hphantom{L_{2k}^{j}=} {}-\sum_{i=0}^{m}a_{i}^{j}E_{i}(x_{k}) \sum_{i=0}^{m}b_{i}^{j} E_{i}^{\prime}(x_{k}) -\sum _{i=0}^{m}b_{i}^{j}E_{i}(x_{k}) \sum_{i=0}^{m}a_{i}^{j} E_{i}^{\prime}(x_{k}) +g(x_{k},t_{j}). \end{aligned}$$

Note that the values of \(E_{i}(x_{k})\) and its derivatives can be obtained from Eq. (13) as well.

TQF implementation

Now TQF gives the following \(2m-2\) equations at any time step \(t_{n}\):

$$\begin{aligned}& \sum_{i=0}^{m}a_{i}^{n}E_{i}(x_{k})-s_{\alpha}L_{1k}^{n}=I_{u}(x_{k}) +s_{\alpha}\sum_{j=0}^{n-1} \gamma_{j,n}^{(\alpha)}L_{1k}^{j},\quad k=1, \ldots,m-1, \\& \sum_{i=0}^{m}b_{i}^{n}E_{i}(x_{k})-s_{\beta}L_{2k}^{n}=I_{v}(x_{k}) +s_{\beta}\sum_{j=0}^{n-1} \gamma_{j,n}^{(\beta)}L_{2k}^{j},\quad k=1, \ldots,m-1, \end{aligned}$$

where for this problem \(I_{u}(x_{k})=I_{v}(x_{k})=1\). Note that the right-hand sides of the above equations are known, since they were obtained at the previous time steps. The above equations together with the boundary conditions (35) and (36)

$$\begin{aligned} &\sum_{i=0}^{m}(-1)^{i}a_{i}^{n}=0.841 t^{3}_{n} +1, \quad\sum_{i=0}^{m} a_{i}^{n}=1, \end{aligned}$$
(41)
$$\begin{aligned} &\sum_{i=0}^{m}(-1)^{i}b_{i}^{n}=0.841 t^{3}_{n} +1,\quad \sum_{i=0}^{m} b_{i}^{n}=1, \end{aligned}$$
(42)

construct a system of nonlinear equations which can be solved by the Newton method (or fsolve command) to find the coefficients \(a_{j}^{n}\) and \(b_{j}^{n}\) at any step of time.

FDM implementation

After substituting the collocation nodes \(x_{k}\) into Eqs. (39) and (40), and using \(I_{u}(x_{k})=I_{v}(x_{k})=1\), the following \(2m-2\) equations are generated at any time step \(t_{n}\):

$$\begin{aligned} & w_{0}^{(\alpha)}u(x_{k},t_{n})-L_{1k}^{n} =w_{n-1}^{(\alpha)}-\sum_{j=1}^{n-1} \bigl(w_{n-j}^{(\alpha)} -w_{n-j-1}^{(\alpha)}\bigr)\sum _{i=0}^{m}a_{i}^{j}E_{i}(x_{k}), \end{aligned}$$
(43)
$$\begin{aligned} & w_{0}^{(\beta)}v(x_{k},t_{n})-L_{2k}^{n} =w_{n-1}^{(\beta)}-\sum_{j=1}^{n-1} \bigl(w_{n-j}^{(\beta)} -w_{n-j-1}^{(\beta)}\bigr)\sum _{i=0}^{m}b_{i}^{j}E_{i}(x_{k}). \end{aligned}$$
(44)

Now these equations along with four boundary conditions that appear in Eqs. (41) and (42) give a nonlinear system of equations which can be solved by the Newton method (or fsolve command) to find the coefficients \(a_{j}^{n}\) and \(b_{j}^{n}\) at any step of time.

The maximum errors \(e_{m,\infty}(u)\) and \(e_{m,\infty}(v)\) obtained via the proposed methods are shown in Figure 1 with parameter \(L=3\). A comparison between TQF and FDM reveals that the TQF approach is superior to FDM.

Figure 1

Example 1: Comparison of the maximum absolute errors \(\pmb{e_{m,\infty}}\) (with \(\pmb{m=6}\) , \(\pmb{\alpha=\beta=0.6 }\) , \(\pmb{L=3}\) ) between the spectral collocation method with TQF and with FDM.

Example 2

We consider the time-fractional coupled Burgers equations of Example 1 with the initial conditions

$$\begin{aligned} u(x,0)=0,\qquad v(x,0)=0, \end{aligned}$$

and the boundary conditions

$$\begin{aligned} &u(0,t)=\frac{1}{3}t^{3}, \qquad u\rightarrow \frac{1}{2}t^{3}\quad \mbox{as } x\rightarrow\infty, \\ &v(0,t)=\frac{1}{3}t^{3}, \qquad v\rightarrow \frac{1}{2}t^{3} \quad \mbox{as } x\rightarrow\infty, \end{aligned}$$

where \(f(x,t)\) and \(g(x,t)\) are given by

$$ \begin{gathered} f(x,t)=\frac{3! t^{3-\alpha}}{(e^{-x}+2)\Gamma(4-\alpha)} -\frac{2t^{3} e^{-2x}}{(e^{-x}+2)^{3}} + \frac{t^{3} e^{-x}}{(e^{-x}+2)^{2}}, \\ g(x,t)=\frac{3! t^{3-\beta}}{(e^{-x}+2)\Gamma(4-\beta)} -\frac{2t^{3} e^{-2x}}{(e^{-x}+2)^{3}} +\frac{t^{3} e^{-x}}{(e^{-x}+2)^{2}}. \end{gathered} $$

The exact solution of this problem is \(u(x,t)=v(x,t)=\frac{t^{3}}{e^{-x}+2} \).

The maximum absolute errors for this problem with (\(\alpha=0.4 \), \(\beta=0.4\), \(L=3\)) are reported in Tables 1 and 2.

Table 1 Example 2: Maximum absolute errors \(\pmb{e_{m,\infty}}\) with \(\pmb{m=5 }\) , \(\pmb{\alpha=0.4 }\) , \(\pmb{\beta =0.4}\) and \(\pmb{L=3}\)
Table 2 Example 2: Maximum absolute errors \(\pmb{e_{m,\infty}}\) with \(\pmb{\tau=1/128}\) , \(\pmb{\alpha=0.4 }\) , \(\pmb{\beta=0.4}\) and \(\pmb{L=3}\)

Example 3

We consider the time-fractional coupled Burgers equations of Example 1 with the initial conditions

$$\begin{aligned} u(x,0)=1,\qquad v(x,0)=1, \end{aligned}$$

and the boundary conditions

$$\begin{aligned} &u(0,t)=t^{6} +1, \qquad u\rightarrow1 \quad \mbox{as } x \rightarrow \infty, \\ &v(0,t)=t^{6} +1, \qquad v\rightarrow1 \quad \mbox{as } x \rightarrow \infty, \end{aligned}$$

where \(f(x,t)\) and \(g(x,t)\) are given by

$$ \begin{gathered} f(x,t)=\frac{6! e^{-x} t^{6-\alpha}}{\Gamma(7-\alpha)}-t^{6} e^{-x}, \\ g(x,t)=\frac{6! e^{-x}t^{6-\beta}}{\Gamma(7-\beta)}-t^{6} e^{-x}. \end{gathered} $$

Exact solution of this problem is \(u(x,t)=v(x,t)=t^{6} e^{-x}+1\).

The maximum absolute errors for this problem are reported in Tables 3 and 4.

Table 3 Example 3: Maximum absolute errors \(\pmb{e_{m,\infty}}\) with \(\pmb{m=5 }\) , \(\pmb{\alpha=0.5 }\) , \(\pmb{\beta =0.5}\) and \(\pmb{L=3}\)
Table 4 Example 3: Maximum absolute errors \(\pmb{e_{m,\infty}}\) with \(\pmb{\tau=1/128}\) , \(\pmb{\alpha=0.5 }\) , \(\pmb{\beta=0.5}\) and \(\pmb{L=3}\)

Example 4

Consider the following homogeneous TFCBEs:

$$\begin{aligned}& \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} =\frac{\partial^{2} u}{\partial x^{2}}+ v \frac{\partial u}{\partial x},\quad 0 < \alpha< 1, \end{aligned}$$
(45)
$$\begin{aligned}& \frac{\partial^{\beta} v(x,t)}{\partial t^{\beta}} =\frac{\partial^{2} v}{\partial x^{2}}+u\frac{\partial v}{\partial x},\quad 0 < \beta< 1 \end{aligned}$$
(46)

with the initial and boundary conditions

$$\begin{aligned} &u(x,0)=\frac{2}{1+e^{-x}}, \qquad v(x,0)=\frac{2}{1+e^{-x}}, \\ &u(0,t)=\frac{2}{1+e^{-t}}, \qquad u(\infty,t)=2, \\ &v(0,t)=\frac{2}{1+e^{-t}}, \qquad v(\infty,t)=2. \end{aligned}$$

For this problem, the exact solution is known only in the case \(\alpha=\beta=1\): \(u(x,t)=v(x,t)=\frac{2}{1+e^{-(x+t)}}\). Table 5 illustrates the maximum error for this case when \(\tau=1/64\). When \(\alpha=\beta=0.5\), we report the difference between the values of \(u_{7}\) and \(u_{6}\) at the final time \(T=1\) in Table 6. Also, the graph of \(u(x,t_{n})=v(x,t_{n})\) at different time steps for this case is displayed in Figure 2.

Figure 2

Graph of \(\pmb{u(x,t_{n})=v(x,t_{n})}\) at different time steps with \(\pmb{\alpha=\beta=0.5}\) .

Table 5 Example 4: Maximum absolute errors \(\pmb{e_{m,\infty}}\) with \(\pmb{\tau=1/64}\) , \(\pmb{\alpha=\beta=1}\) and \(\pmb{L=3}\)
Table 6 Example 4: Absolute errors \(\pmb{\vert u_{7}-u_{6}\vert}\) and \(\pmb{\vert v_{7}-v_{6}\vert}\) with \(\pmb{\alpha=\beta=0.5}\) , in the final time

Example 5

Consider the following homogeneous time-fractional coupled Burgers equations [50, 51]:

$$\begin{aligned}& \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} =\nu\frac{\partial^{2} u}{\partial x^{2}}- v \frac{\partial u}{\partial x}, \quad 0 < \alpha< 1 \end{aligned}$$
(47)
$$\begin{aligned}& \frac{\partial^{\beta} v(x,t)}{\partial t^{\beta}} =\nu\frac{\partial^{2} v}{\partial x^{2}}- u\frac{\partial v}{\partial x}, \quad 0 < \beta< 1 \end{aligned}$$
(48)

with the following initial and boundary conditions:

$$\begin{aligned} &u(x,0)=v(x,0)=\frac{\mu+\sigma+(\sigma-\mu)e^{\frac{\mu}{\nu }(x-\lambda)}}{1+e^{\frac{\mu}{\nu}(x-\lambda)}}, \\ &u(0,t)=v(0,t)=\frac{\mu+\sigma+(\sigma-\mu)e^{\frac{\mu}{\nu }(-\sigma t-\lambda)}}{1+e^{\frac{\mu}{\nu}(-\sigma t-\lambda)}}, \\ &u(\infty,t)=v(\infty,t)=0, \end{aligned}$$

where μ, σ, λ and ν are arbitrary constants. For this problem, the exact solution exists only in the case \(\alpha=\beta=1\), as follows:

$$u(x,t)=v(x,t)=\frac{\mu+\sigma+(\sigma-\mu) e^{\frac{\mu}{\nu}(x-\sigma t-\lambda)}}{1+e^{\frac{\mu}{\nu}(x-\sigma t-\lambda)}}. $$

We compare the results obtained by the proposed method with the three-term solution of the differential transform method (DTM) [50] for \(\alpha=\beta=1\). Figure 3 (left) displays the maximum error for these methods with \(\nu=1 \), \(\mu=-1\), \(\lambda=0\) and \(\sigma=-1\).

Figure 3

Example 5: Maximum absolute errors for the function \(\pmb{u(x,1)}\) (left) and the comparison between methods for the function \(\pmb{u(x,1)}\) with \(\pmb{\tau=1/10}\) and \(\pmb{m=5}\) (right).

Also, we compare our results with the variational iteration method (VIM) [52] for different α and β. Figure 3 (right) reports the results obtained by the proposed method and VIM [52] for \(u(x,t)\) at the final time \(T=1\) with \(\alpha=\beta=0.5\).

6 Conclusion

In this paper we presented a numerical method for solving time-fractional coupled Burgers equations by combining exponential Chebyshev functions in space with either TQF or FDM in time. Numerical results illustrate the validity and efficiency of the method and compare the maximum absolute errors of the spectral collocation method with TQF against those with FDM. The technique can be applied to other time-fractional partial differential equations.