1 Introduction

Many phenomena in engineering and applied sciences, such as anomalous diffusion, materials and mechanics, signal processing, biological systems, and finance, can be modeled successfully using fractional calculus (see, for instance, [1–7]). There is tremendous interest in fractional differential equations, as the theory of fractional derivatives and its applications have been developed intensively. The theory of fractional differential equations describes real-life phenomena more powerfully and systematically. In recent years, several researchers have studied differential equations of fractional order through diverse techniques [8–11].

The time-fractional Burgers’ equation is important since it is a kind of sub-diffusion convection equation. Several different methods have been developed for solving this equation, such as the local fractional Riccati differential equation method [12], the homotopy analysis transform method [13], the finite difference method [14], and the variational iteration method [15]. The study of the coupled Burgers’ equations is also significant: the system serves as a simple model of sedimentation or of the evolution of scaled volume concentrations of two kinds of particles in fluid suspensions or colloids under the effect of gravity. Many powerful methods have been developed to find analytic or numerical solutions of the coupled Burgers’ equations, such as the homotopy perturbation method [16], the differential transformation method [17], the non-polynomial spline method [18], the septic B-spline collocation method [19], the Galerkin quadratic B-spline method [20], the Adomian decomposition method [21], and the meshless radial point interpolation method [22].

Several vital analytical and numerical techniques have been proposed to solve the coupled nonlinear time-fractional Burgers’ equations (NLTFBEs). Prakash et al. [23] suggested an analytical algorithm based on the homotopy perturbation Sumudu transform method to investigate the coupled NLTFBEs. Hoda et al. [24] introduced the Laplace–Adomian decomposition method, the Laplace variational iteration method, and the reduced differential transformation method for solving the one-dimensional and two-dimensional fractional coupled Burgers’ equations. In [25] Liu and Hou explicitly applied the generalized two-dimensional differential transform method to solve the coupled space- and time-fractional Burgers’ equations (STFBEs). Heydari and Avazzadeh [26] proposed an effective numerical method based on Hahn polynomials to solve the nonsingular variable-order time-fractional coupled Burgers’ equations. The authors in [27] suggested a hybrid spectral exponential Chebyshev approach based on a spectral collocation method to solve the coupled TFBEs. Veeresha and Prakasha [28] and Singh et al. [29] presented the q-homotopy analysis transform method to solve the coupled TFBEs and STFBEs, respectively. The coupled STFBEs have also been solved using the Adomian decomposition method by Chan and An [30]. Islam and Akbar [31] obtained exact general solutions of the coupled STFBEs by using the generalized \((G'/G)\)-expansion method with the assistance of the complex fractional transformation. Prakash et al. [32] proposed the fractional variational iteration method to solve the same problem. Hussein [33] proposed a continuous and discrete-time weak Galerkin finite element approach for solving two-dimensional time-fractional coupled Burgers’ equations. Hussain et al. [34] obtained numerical solutions of the coupled TFBEs using the meshfree spectral method.

In comparison with local methods, spectral methods are often more efficient and highly accurate. Their convergence speed is one of their great advantages, and they typically attain a very high level of precision (so-called “exponential convergence”). The primary idea of all versions of spectral methods is to express the solution of the problem as a finite sum of certain basis functions and then to choose the coefficients so as to minimize the difference between the exact and approximate solutions. Recently, classical spectral methods have been developed to obtain accurate solutions of linear and nonlinear FDEs. Spectral approaches with the assistance of operational matrices of orthogonal polynomials have been used to approximate the solutions of FDEs (see, for example, [35–39]).

One disadvantage of the non-polynomial spline method is that the time step size must be small enough. The main advantage of the proposed methods is that they are easy to implement; moreover, solutions of high accuracy can be obtained with relatively few spatial grid nodes compared with other numerical techniques. For this reason, and because the present methods can be applied directly to other problems, we are motivated to apply these techniques to the coupled Burgers’ equations.

In this paper we develop two accurate numerical methods to approximate the solutions of the coupled TFBEs. The first is a non-polynomial B-spline method [8, 40–42] based on the L1-approximation of the time-fractional derivative and finite difference approximations of the spatial derivatives. The second is the shifted Jacobi spectral collocation method [43–45] with the assistance of operational matrices of fractional- and integer-order derivatives. The collocation approach proposed in this paper is somewhat different from the collocation methods commonly discussed in the literature. We consider the time-fractional coupled Burgers’ equations of the form

$$\begin{aligned} \frac{\partial ^{\alpha _{1}}u}{\partial t^{\alpha _{1}}} & = \frac{\partial ^{2}u}{\partial x^{2}}+2u\frac{\partial u}{\partial x}- \frac{\partial (uv)}{\partial x}+f (x,t ), \quad 0< \alpha _{1}\leq 1, \end{aligned}$$
(1)
$$\begin{aligned} \frac{\partial ^{\alpha _{2}}v}{\partial t^{\alpha _{2}}} & = \frac{\partial ^{2}v}{\partial x^{2}}+2v\frac{\partial v}{\partial x}- \frac{\partial (uv)}{\partial x}+g (x,t ),\quad 0< \alpha _{2}\leq 1, \end{aligned}$$
(2)

subject to the initial and boundary conditions

$$\begin{aligned}& u(x,0)=p(x), \qquad v(x,0)=q(x),\qquad a\leq x\leq b \\& u(a,t)=f_{1}(t),\qquad u(b,t)=f_{2}(t),\qquad v(a,t)=g_{1}(t),\qquad v(b,t)=g_{2}(t),\quad t>0, \end{aligned}$$

where \(\alpha _{1}\) and \(\alpha _{2}\) are the orders of the fractional derivatives, and x and t are the space and time variables, respectively. u and v are the velocity components to be determined, \(f (x,t )\) and \(g (x,t )\) are continuous functions on their domains, and the functions \(p(x)\), \(q(x)\), \(f_{1}(t)\), \(f_{2}(t)\), \(g_{1}(t)\), \(g_{2}(t)\) are sufficiently smooth. The fractional derivatives of orders \(\alpha _{1}\) and \(\alpha _{2}\) in Eqs. (1) and (2) are treated in the sense of Liouville–Caputo defined by Jerome and Oldham [6]. In the case \(\alpha _{1}=\alpha _{2}=1\), Eqs. (1) and (2) reduce to the classical coupled Burgers’ equations.

Definition 1

([1])

A real function \(u(t)\), \(t > 0\), is said to be in the space \(C_{\varOmega }\), \(\varOmega \in \mathbb{R}\) if there exists a real number \(p >\varOmega \) such that \(u(t) = t^{p} u_{1}(t)\), where \(u_{1}(t) \in C(0,\infty )\), and it is said to be in the space \(C^{n}_{\varOmega }\) if and only if \(u^{(n)}\in C_{\varOmega }\), \(n\in \mathbb{N}\).

Definition 2

([21])

The Liouville–Caputo fractional derivative of \(u\in C_{\varOmega }^{n}\) (\(\varOmega \geq -1\)) is defined as

$$ \frac{\partial ^{\alpha }u(x,t)}{\partial t^{\alpha }}= \frac{1}{\Gamma (n-\alpha )} { \int _{0}^{t}} \frac{\partial ^{n}u(x,s)}{\partial t^{n}}(t-s)^{n-\alpha -1} \,ds,\quad n-1< \alpha \leq n, n=1,2,\ldots. $$
(3)
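For illustration, the following short Python sketch evaluates Definition 2 numerically for the hypothetical choice \(u(t)=t^{3}\), whose Liouville–Caputo derivative of order \(0<\alpha <1\) is \(\frac{3!}{\Gamma (4-\alpha )}t^{3-\alpha }\); the weakly singular factor \((t-s)^{-\alpha }\) is handled through the algebraic-weight option of SciPy's quadrature routine. The numbers used are illustrative only.

```python
# A minimal numerical check of Eq. (3) (a sketch, not part of the proposed schemes):
# for u(t) = t**3 the Caputo derivative of order 0 < alpha < 1 is 3!/Gamma(4-alpha)*t**(3-alpha).
import math
from scipy.integrate import quad

def caputo(u_prime, t, alpha):
    """Direct quadrature of Eq. (3) for 0 < alpha < 1; the weight (t-s)**(-alpha)
    is passed to quad through its algebraic-singularity option."""
    value, _ = quad(u_prime, 0.0, t, weight='alg', wvar=(0.0, -alpha))
    return value / math.gamma(1.0 - alpha)

alpha, t = 0.4, 0.8
numeric = caputo(lambda s: 3.0 * s**2, t, alpha)
exact = math.gamma(4.0) / math.gamma(4.0 - alpha) * t**(3.0 - alpha)
print(numeric, exact)   # the two values agree to quadrature accuracy
```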

2 Non-polynomial B-spline method

In this section, we use a non-polynomial spline from the space \(H_{3}=\operatorname{span}\{1,x,\sinh (\omega x),\cosh (\omega x)\}\), where ω is the frequency of the hyperbolic part of the spline functions and will be used to raise the accuracy of the method.

2.1 Derivation of the numerical method

Consider \(x\in [a,b]\) and \(t\in [0,\tau ]\). Let \(a=x_{0}< x_{1}<\cdots<x_{N}<x_{N+1}=b\) and \(0=t_{0}< t_{1}<\cdots<t_{M}=\tau \) be uniform meshes of the intervals \([a,b]\) and \([0,\tau ]\), where \(x_{i}=a+ih\), \(h=(b-a)/(N+1)\), and \(t_{n}=nk\), \(k=\tau /M\) for \(n=0,1,\ldots,M\) and \(i=0,1,\ldots,N+1\). Let \(U_{i}^{n}\) and \(V_{i}^{n}\) be approximations to \(u(x_{i},t_{n})\) and \(v(x_{i},t_{n})\), respectively, obtained by the segment \(P_{i}(x,t_{n})\) of the mixed spline function passing through the points \((x_{i},U_{i}^{n})\) and \((x_{i+1},U_{i+1}^{n})\) (respectively \((x_{i},V_{i}^{n})\) and \((x_{i+1},V_{i+1}^{n})\)). Each segment has the form [8, 42]

$$ P_{i}(x,t_{n})=a_{i}(t_{n}) \cosh \bigl(\omega (x-x_{i})\bigr)+b_{i}(t_{n}) \sinh \bigl(\omega (x-x_{i})\bigr)+c_{i}(t_{n}) (x-x_{i})+d_{i}(t_{n}) $$
(4)

for each \(i=0,1,\ldots,N\). The coefficients of Eq. (4) are expressed in terms of \(U_{i}^{n}\), \(U_{i+1}^{n}\), \(S_{i}^{n}\), and \(S_{i+1}^{n}\) (and analogously in terms of the V-values for the second component) by imposing the following conditions:

$$ U_{i}^{n}=P_{i}(x_{i},t_{n}),\qquad U_{i+1}^{n}=P_{i}(x_{i+1},t_{n}),\qquad S_{i}^{n}=P_{i}^{(2)}(x_{i},t_{n}),\qquad S_{i+1}^{n}=P_{i}^{(2)}(x_{i+1},t_{n}), $$
(5)

where \(P_{i}^{(2)}(x,t)=\frac{\partial ^{2}}{\partial x^{2}}P_{i}(x,t)\). Using Eqs. (4) and (5), we get

$$\begin{aligned}& a_{i}+d_{i}=U_{i}^{n} \\& a_{i}\cosh \theta +b_{i}\sinh \theta +c_{i}h+d_{i}=U_{i+1}^{n}, \\& a_{i}\omega ^{2}=S_{i}^{n}, \\& a_{i}\omega ^{2}\cosh \theta +\omega ^{2}b_{i}\sinh \theta =S_{i+1}^{n}, \end{aligned}$$

where \(a_{i}\equiv a_{i}(t_{n})\), \(b_{i}\equiv b_{i}(t_{n})\), \(c_{i}\equiv c_{i}(t_{n})\), \(d_{i}\equiv d_{i}(t_{n})\), and \(\theta =\omega h\).

Solving the last four equations, we obtain the following expressions:

$$\begin{aligned}& a_{i} =\frac{h^{2}}{\theta ^{2}}S_{i}^{n},\qquad b_{i} = \frac{h^{2} ( S_{i+1}^{n}-S_{i}^{n} \cosh \theta ) }{\theta ^{2}\sinh \theta }, \\& c_{i} =\frac{ ( U_{i+1}^{n}-U_{i}^{n} ) }{h}- \frac{h ( S_{i+1}^{n}-S_{i}^{n} ) }{\theta ^{2}},\qquad d_{i}=U_{i}^{n}- \frac{h^{2}}{\theta ^{2}} S_{i}^{n}. \end{aligned}$$
(6)
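As a quick sanity check, the following Python sketch (with hypothetical nodal data) verifies that the coefficients (6) make the segment (4) reproduce the prescribed values and second derivatives at the two knots.

```python
# A quick numerical check (sketch) of the coefficient formulas (6): with these coefficients the
# segment P_i in Eq. (4) interpolates U_i, U_{i+1} and matches S_i, S_{i+1} at the two knots.
import math

h, omega = 0.2, 3.0
theta = omega * h
Ui, Ui1, Si, Si1 = 1.3, 0.9, -0.4, 0.7     # hypothetical nodal data

a = h**2 / theta**2 * Si
b = h**2 * (Si1 - Si * math.cosh(theta)) / (theta**2 * math.sinh(theta))
c = (Ui1 - Ui) / h - h * (Si1 - Si) / theta**2
d = Ui - h**2 / theta**2 * Si

P  = lambda s: a * math.cosh(omega * s) + b * math.sinh(omega * s) + c * s + d   # s = x - x_i
P2 = lambda s: omega**2 * (a * math.cosh(omega * s) + b * math.sinh(omega * s))  # second derivative
print(P(0.0) - Ui, P(h) - Ui1, P2(0.0) - Si, P2(h) - Si1)   # all (numerically) zero
```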

Using the continuity condition of the first derivative at \(x=x_{i}\), that is, \(P_{i}^{\prime }(x_{i},t_{n})=P_{i-1}^{\prime }(x_{i},t_{n})\), we get the following equation:

$$ b_{i}\omega +c_{i}=a_{i-1}\omega \sinh \theta +b_{i-1}\omega \cosh \theta +c_{i-1}. $$
(7)

Using Eq. (6), and after slight rearrangements, Eq. (7) becomes

$$ U_{i+1}^{n}-2U_{i}^{n}+U_{i-1}^{n}= \gamma S_{i+1}^{n}+\beta S_{i}^{n}+ \gamma S_{i-1}^{n} ,\quad i=1,2,\ldots,N. $$
(8)

Similarly, we get

$$ V_{i+1}^{n}-2V_{i}^{n}+V_{i-1}^{n}= \gamma \rho _{i+1}^{n}+\beta \rho _{i}^{n}+ \gamma \rho _{i-1}^{n} ,\quad i=1,2,\ldots,N, $$
(9)

where \(\gamma =\frac{h^{2}}{\theta ^{2} }-\frac{h^{2}}{\theta \sinh \theta }\), \(\beta = \frac{2h^{2}\cosh \theta }{\theta \sinh \theta }- \frac{2h^{2}}{\theta ^{2}}\), and \(\theta =\omega h \).

Remark 1

As \(\omega \longrightarrow 0\), that is, \(\theta \longrightarrow 0\), \(( \gamma ,\beta ) \longrightarrow ( \frac{h^{2}}{6},\frac{4h^{2}}{6} )\), and Eqs. (8) and (9) become as follows:

$$\begin{aligned}& U_{i+1}^{n}-2U_{i}^{n}+U_{i-1}^{n}= \frac{h^{2}}{6} \bigl( S_{i+1}^{n}+4S_{i}^{n}+S_{i-1}^{n} \bigr) ,\quad i=1,2,\ldots,N, \\& V_{i+1}^{n}-2V_{i}^{n}+V_{i-1}^{n}= \frac{h^{2}}{6} \bigl( \rho _{i+1}^{n} +4\rho _{i}^{n}+\rho _{i-1}^{n} \bigr) ,\quad i=1,2,\ldots,N. \end{aligned}$$
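The limit in Remark 1 can be checked numerically; the sketch below evaluates γ and β from the relations following Eq. (9) for a fixed h and decreasing ω (the values of h and ω are illustrative only).

```python
# A quick numerical check (sketch) of Remark 1: as omega -> 0 the non-polynomial weights
# (gamma, beta) approach the cubic-spline weights (h**2/6, 4*h**2/6).
import math

def spline_weights(h, omega):
    theta = omega * h
    gamma = h**2 / theta**2 - h**2 / (theta * math.sinh(theta))
    beta = 2 * h**2 * math.cosh(theta) / (theta * math.sinh(theta)) - 2 * h**2 / theta**2
    return gamma, beta

h = 0.1
for omega in (1.0, 0.1, 0.01, 0.001):
    print(omega, spline_weights(h, omega), (h**2 / 6, 4 * h**2 / 6))
```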

From Eqs. (1) and (2), we write \(S_{i}^{n}\) and \(\rho _{i}^{n}\) in the form

$$\begin{aligned}& S_{i}^{n} =\frac{\partial ^{2}U_{i}^{n}}{\partial x^{2}}= \frac{\partial ^{\alpha _{1}}U_{i}^{n}}{\partial t^{\alpha _{1}}}+ \bigl( V_{i}^{n}-2U_{i}^{n} \bigr) \frac{\partial U_{i}^{n}}{\partial x}+U_{i}^{n}\frac{\partial V_{i}^{n}}{\partial x}-f (x_{i},t_{n} ), \end{aligned}$$
(10)
$$\begin{aligned}& \rho _{i}^{n} =\frac{\partial ^{2}V_{i}^{n}}{\partial x^{2}}= \frac{\partial ^{\alpha _{2}}V_{i}^{n}}{\partial t^{\alpha _{2}}}+ \bigl( U_{i}^{n}-2V_{i}^{n} \bigr) \frac{\partial V_{i}^{n}}{\partial x}+V_{i} ^{n} \frac{\partial U_{i}^{n}}{\partial x}-g (x_{i},t_{n} ). \end{aligned}$$
(11)

The time-fractional partial derivatives of orders \(\alpha _{1}\) and \(\alpha _{2}\) in Eqs. (1) and (2) are considered in the Liouville–Caputo sense and can be approximated by the following lemma.

Lemma 1

([46])

Suppose \(0<\alpha <1\) and \(g(t)\in C^{2}[0,t_{n}]\). Then it holds that

$$\begin{aligned} &\Biggl\vert \frac{1}{\Gamma (1-\alpha )} \int _{0}^{t_{n}} \frac{g^{\prime }(t)}{(t_{n}-t)^{\alpha }}\,dt - \frac{k^{-\alpha }}{\Gamma (2-\alpha )} \\ &\qquad {} \times\Biggl[ g(t_{n})-\varphi _{n-1}^{\alpha }g(t_{0})- \sum_{q=1}^{n-1}\bigl( \varphi _{n-q-1}^{\alpha }-\varphi _{n-q}^{\alpha } \bigr)g(t_{q}) \Biggr] \Biggr\vert \\ &\quad \leq \frac{1}{\Gamma (2-\alpha )} \biggl[ \frac{1-\alpha }{12}+ \frac{2^{2-\alpha }}{2-\alpha }- \bigl( 1+2^{-\alpha } \bigr) \biggr] \max _{0\leq t\leq t_{n}} \bigl\vert g^{\prime \prime }(t) \bigr\vert k^{2-\alpha }, \end{aligned}$$
(12)

where \(\varphi _{q}^{\alpha }=(q+1)^{1-\alpha }-q^{1-\alpha }\), \(q\geq 0\).

Lemma 2

([47])

Let \(0<\alpha <1\) and \(\varphi _{q}^{\alpha }=(q+1)^{1-\alpha }-q^{1-\alpha }\), \(q= 0, 1, \ldots \) . Then \(1=\varphi _{0}^{\alpha }>\varphi _{1}^{\alpha }>\cdots >\varphi _{q}^{\alpha }\rightarrow 0\) as \(q\rightarrow \infty \).

Using Lemma 1, the Liouville–Caputo fractional derivative can be approximated as follows:

$$\begin{aligned}& \frac{\partial ^{\alpha _{1}}U_{i}^{n}}{\partial t^{\alpha _{1}}} = \sigma _{1} \Biggl[ U_{i}^{n}- \varphi _{n-1}^{\alpha _{1}}U_{i}^{0}- \sum _{q=1}^{n-1} \bigl( \varphi _{n-q-1}^{\alpha _{1}}- \varphi _{n-q}^{\alpha _{1}} \bigr) U_{i}^{q} \Biggr] +O \bigl( k^{2- \alpha _{1}} \bigr), \end{aligned}$$
(13)
$$\begin{aligned}& \frac{\partial ^{\alpha _{2}}V_{i}^{n}}{\partial t^{\alpha _{2}}} = \sigma _{2} \Biggl[ V_{i}^{n}- \varphi _{n-1}^{\alpha _{2}}V_{i}^{0}- \sum _{q=1}^{n-1} \bigl( \varphi _{n-q-1}^{\alpha _{2}}- \varphi _{n-q}^{\alpha _{2}} \bigr) V_{i}^{q} \Biggr] +O \bigl( k^{2- \alpha _{2}} \bigr), \end{aligned}$$
(14)

where \(\sigma _{1}=\frac{k^{-\alpha _{1}}}{\Gamma (2-\alpha _{1})}\), \(\sigma _{2}=\frac{k^{-\alpha _{2}}}{\Gamma (2-\alpha _{2})}\).
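The L1 approximation (13) is straightforward to implement; the following sketch applies it to the hypothetical function \(g(t)=t^{3}\), for which the exact Caputo derivative is known, and exhibits the \(O(k^{2-\alpha })\) behaviour predicted by Lemma 1.

```python
# A minimal sketch of the L1 approximation in Eq. (13), applied to g(t) = t**3,
# whose exact Caputo derivative of order alpha is Gamma(4)/Gamma(4-alpha)*t**(3-alpha).
import math

def l1_caputo(g_values, k, alpha):
    """L1 approximation of the Caputo derivative (0 < alpha < 1) at t_n = n*k,
    given the history g_values = [g(t_0), ..., g(t_n)]."""
    n = len(g_values) - 1
    phi = lambda q: (q + 1) ** (1 - alpha) - q ** (1 - alpha)
    sigma = k ** (-alpha) / math.gamma(2 - alpha)
    total = g_values[n] - phi(n - 1) * g_values[0]
    total -= sum((phi(n - q - 1) - phi(n - q)) * g_values[q] for q in range(1, n))
    return sigma * total

alpha, tau = 0.5, 1.0
for M in (32, 64, 128):
    k = tau / M
    g = [(m * k) ** 3 for m in range(M + 1)]
    approx = l1_caputo(g, k, alpha)
    exact = math.gamma(4) / math.gamma(4 - alpha) * tau ** (3 - alpha)
    print(M, abs(approx - exact))   # errors decrease roughly like k**(2 - alpha)
```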

Substituting Eqs. (13) and (14) into Eqs. (10) and (11), the spatial derivatives \(S_{r}^{n}\) and \(\rho _{r} ^{n}\), \(r=i-1, i, i+1\), are discretized for \(n=1\) and \(n\geq 2\) as follows:

$$\begin{aligned}& S_{i-1}^{1} \simeq \sigma _{1} \bigl( U_{i-1}^{1}-U_{i-1}^{0} \bigr) + \frac{\delta _{i-1}^{1}}{2h} \bigl(4U_{i}^{1}-3U_{i-1}^{1}-U_{i+1}^{1} \bigr) \\& \hphantom{S_{i-1}^{1} \simeq{}}{} +\frac{\eta _{i-1}^{1}}{2h} \bigl(4V_{i}^{1}-3V_{i-1}^{1}-V_{i+1}^{1} \bigr)-f_{i-1}^{1} , \\& S_{i}^{1} \simeq \sigma _{1} \bigl( U_{i}^{1}-U_{i}^{0} \bigr) + \frac{\delta _{i}^{1}}{2h} \bigl(U_{i+1}^{1}-U_{i-1}^{1} \bigr) + \frac{\eta _{i}^{1}}{2h} \bigl(V_{i+1}^{1}-V_{i-1}^{1} \bigr)-f_{i}^{1} , \end{aligned}$$
(15)
$$\begin{aligned}& S_{i+1}^{1} \simeq \sigma _{1} \bigl(U_{i+1}^{1}-U_{i+1}^{0} \bigr) + \frac{\delta _{i+1}^{1}}{2h} \bigl(U_{i-1}^{1}-4U_{i}^{1}+3U_{i+1}^{1} \bigr) \\& \hphantom{S_{i+1}^{1} \simeq {}}{} +\frac{\eta _{i+1}^{1}}{2h} \bigl(V_{i-1}^{1}-4V_{i}^{1}+3V_{i+1}^{1} \bigr)-f_{i+1}^{1} , \\ & \rho _{i-1}^{1} \simeq \sigma _{2} \bigl(V_{i-1}^{1}-V_{i-1} ^{0} \bigr) +\frac{\zeta _{i-1}^{1}}{2h} \bigl(4V_{i}^{1}-3V_{i-1}^{1}-V_{i+1}^{1} \bigr) \\ & \hphantom{\rho _{i-1}^{1} \simeq{}}{} +\frac{\xi _{i-1}^{1}}{2h} \bigl(4U_{i}^{1}-3U_{i-1}^{1}-U_{i+1}^{1} \bigr)-g_{i-1}^{1} , \\ & \rho _{i}^{1} \simeq \sigma _{2} \bigl(V_{i}^{1}-V_{i}^{0} \bigr) + \frac{\zeta _{i}^{1}}{2h} \bigl(V_{i+1}^{1}-V_{i-1}^{1} \bigr) + \frac{\xi _{i}^{1}}{2h} \bigl( U_{i+1}^{1}-U_{i-1}^{1} \bigr)-g_{i}^{1} , \end{aligned}$$
(16)
$$\begin{aligned}& \rho _{i+1}^{1} \simeq \sigma _{2} \bigl(V_{i+1}^{1}-V_{i+1} ^{0} \bigr) +\frac{\zeta _{i+1}^{1}}{2h} \bigl(V_{i-1}^{1}-4V_{i}^{1}+3V_{i+1}^{1} \bigr) \\ & \hphantom{\rho _{i+1}^{1} \simeq{}}{} +\frac{\xi _{i+1}^{1}}{2h} \bigl(U_{i-1}^{1}-4U_{i}^{1}+3U_{i+1}^{1} \bigr)-g_{i+1}^{1} , \\ & S_{i-1}^{n} \simeq \sigma _{1} \bigl(U_{i-1}^{n}-\varphi _{n-1}^{ \alpha _{1}}U_{i-1}^{0} \bigr) +\sigma _{1}{ \sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{1}}-\varphi _{n-q}^{ \alpha _{1}} \bigr)U_{i-1}^{q} +\frac{\delta _{i-1}^{n}}{2h} \\ & \hphantom{ S_{i-1}^{n} \simeq{}}{}\times \bigl(-3U_{i-1}^{n}+4U_{i}^{n}-U_{i+1}^{n} \bigr) + \frac{\eta _{i-1}^{n}}{2h} \bigl(-3V_{i-1}^{n}+4V_{i}^{n}-V_{i+1}^{n} \bigr)-f_{i-1}^{n} , \\ & S_{i}^{n} \simeq \sigma _{1} \bigl(U_{i}^{n}-\varphi _{n-1}^{\alpha _{1}}U_{i}^{0} \bigr) +\sigma _{1}{ \sum_{q=1}^{n-1}} \bigl( \varphi _{n-q-1}^{\alpha _{1}}-\varphi _{n-q}^{\alpha _{1}} \bigr)U_{i}^{q} \\ & \hphantom{S_{i}^{n} \simeq{}}{} +\frac{\delta _{i}^{n}}{2h} \bigl(U_{i+1}^{n}-U_{i-1}^{n} \bigr) + \frac{\eta _{i}^{n}}{2h} \bigl(V_{i+1}^{n}-V_{i-1}^{n} \bigr)-f_{i}^{n} , \end{aligned}$$
(17)
$$\begin{aligned}& S_{i+1}^{n} \simeq \sigma _{1} \bigl(U_{i+1}^{n}-\varphi _{n-1}^{ \alpha _{1}}U_{i+1}^{0} \bigr) +\sigma _{1} { \sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{1}}-\varphi _{n-q}^{ \alpha _{1}} \bigr)U_{i+1}^{q} \\ & \hphantom{S_{i+1}^{n} \simeq{}}{} +\frac{\delta _{i+1}^{n}}{2h} \bigl(U_{i-1}^{n}-4U_{i}^{n}+3U_{i+1}^{n} \bigr) +\frac{\eta _{i+1}^{n}}{2h} \bigl(V_{i-1}^{n}-4V_{i}^{n}+3V_{i+1}^{n} \bigr)-f_{i+1}^{n}, \\ & \rho _{i-1}^{n}\simeq \sigma _{2} \bigl(V_{i-1}^{n}-\varphi _{n-1}^{ \alpha _{2}}V_{i-1} ^{0} \bigr) +\sigma _{2} { \sum _{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{2}}- \varphi _{n-q}^{ \alpha _{2}} \bigr)V_{i-1}^{q}+ \frac{\zeta _{i-1}^{n}}{2h} \\ & \hphantom{\rho _{i-1}^{n}\simeq{}}{}\times \bigl(-3V_{i-1}^{n}+4V_{i}^{n}-V_{i+1}^{n} \bigr) + \frac{\xi _{i-1}^{n}}{2h} \bigl(-3U_{i-1}^{n}+4U_{i}^{n}-U_{i+1}^{n} \bigr)-g_{i-1}^{n}, \\ & \rho _{i}^{n} \simeq \sigma _{2} \bigl(V_{i}^{n}-\varphi _{n-1}^{ \alpha _{2}}V_{i}^{0} \bigr) +\sigma _{2} { \sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{2}}-\varphi _{n-q}^{ \alpha _{2}} \bigr)V_{i}^{q} \\ & \hphantom{\rho _{i}^{n} \simeq{}}{} +\frac{\zeta _{i}^{n}}{2h} \bigl(V_{i+1}^{n}-V_{i-1}^{n} \bigr) + \frac{\xi _{i}^{n}}{2h} \bigl( U_{i+1}^{n}-U_{i-1}^{n} \bigr)-g_{i}^{n}, \\ & \rho _{i+1}^{n} \simeq \sigma _{2} \bigl(V_{i+1}^{n}-\varphi _{n-1}^{ \alpha _{2}}V_{i+1}^{0} \bigr) +\sigma _{2}{ \sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{2}}-\varphi _{n-q}^{ \alpha _{2}} \bigr)V_{i+1}^{q} \\ & \hphantom{\rho _{i+1}^{n} \simeq{}}{} +\frac{\zeta _{i+1}^{n}}{2h} \bigl(V_{i-1}^{n}-4V_{i}^{n}+3V_{i+1}^{n} \bigr) +\frac{\xi _{i+1}^{n}}{2h} \bigl(U_{i-1}^{n}-4U_{i}^{n}+3U_{i+1}^{n} \bigr)-g_{i+1}^{n}, \end{aligned}$$
(18)

where \(\delta _{i}^{n}= ( V_{i}^{n}-2U_{i}^{n} ) \), \(\zeta _{i}^{n}= ( U_{i}^{n}-2V_{i}^{n} ) \), \(\eta _{i}^{n}=U_{i}^{n}\), and \(\xi _{i}^{n}=V_{i}^{n}\), \(i=1,2,\ldots,N\).

Substituting Eqs. (15) to (18) into Eqs. (8) and (9), after slight rearrangement, yields the following systems:

$$\begin{aligned}& A_{1}U_{i-1}^{1} +A_{2}U_{i}^{1}+A_{3}U_{i+1}^{1}+A_{4}V_{i-1}^{1}+A_{5} V_{i}^{1}+A_{6}V_{i+1}^{1} \\& \quad =A_{7}U_{i-1}^{0}+A_{8}U_{i}^{0}+A_{7}U_{i+1}^{0}- \gamma f_{i-1}^{1}- \beta f_{i}^{1} -\gamma f_{i+1}^{1}, \end{aligned}$$
(19)
$$\begin{aligned}& B_{1}V_{i-1}^{1} +B_{2}V_{i}^{1}+B_{3}V_{i+1}^{1}+B_{4}U_{i-1}^{1}+B_{5}U_{i}^{1}+B_{6}U_{i+1}^{1} \\& \quad =B_{7}V_{i-1}^{0}+B_{8}V_{i}^{0}+B_{7}V_{i+1}^{0}- \gamma g_{i-1}^{1}- \beta g_{i}^{1} -\gamma g_{i+1}^{1}, \end{aligned}$$
(20)
$$\begin{aligned}& A_{1}U_{i-1}^{n} +A_{2}U_{i}^{n}+A_{3}U_{i+1}^{n}+A_{4}V_{i-1}^{n}+A_{5} V_{i}^{n}+A_{6}V_{i+1}^{n} \\& \quad =A_{7}U_{i-1}^{0}+A_{8} U_{i}^{0}+A_{7}U_{i+1}^{0}- (\lambda _{1} )_{i}^{n}-\gamma f_{i-1}^{n}-\beta f_{i}^{n} - \gamma f_{i+1}^{n}, \end{aligned}$$
(21)
$$\begin{aligned}& B_{1}V_{i-1}^{n} +B_{2}V_{i}^{n}+B_{3}V_{i+1}^{n}+B_{4}U_{i-1}^{n}+B_{5} U_{i}^{n}+B_{6}U_{i+1}^{n} \\& \quad =B_{7}V_{i-1}^{0}+B_{8}V_{i}^{0}+B_{7}V_{i+1}^{0}- (\lambda _{2} )_{i}^{n}-\gamma g_{i-1}^{n}-\beta g_{i}^{n} - \gamma g_{i+1}^{n}. \end{aligned}$$
(22)

where \(i=1,2,\ldots,N\), \(n\geq 2\) and

$$\begin{aligned}& A_{1}=1-\gamma \sigma _{1}+\frac{3\gamma \delta _{i-1}^{n}}{2h}+ \frac{\beta \delta _{i}^{n}}{2h}-\frac{\gamma \delta _{i+1}^{n}}{2h}, \qquad A_{2}=-2-\beta \sigma _{1}-\frac{2\gamma \delta _{i-1}^{n}}{h}+ \frac{2\gamma \delta _{i+1}^{n}}{h}, \\& A_{3}=1-\gamma \sigma _{1}+\frac{\gamma \delta _{i-1}^{n}}{2h}- \frac{\beta \delta _{i}^{n}}{2h}-\frac{3\gamma \delta _{i+1}^{n}}{2h}, \qquad A_{4}= \frac{3\gamma \eta _{i-1}^{n}}{2h}+ \frac{\beta \eta _{i}^{n}}{2h}-\frac{\gamma \eta _{i+1}^{n}}{2h}, \\& A_{5}=\frac{2\gamma }{h} \bigl(\eta _{i+1}^{n}- \eta _{i-1}^{n} \bigr) , \qquad A_{6}= \frac{\gamma \eta _{i-1}^{n}}{2h}- \frac{\beta \eta _{i}^{n}}{2h}-\frac{3\gamma \eta _{i+1}^{n}}{2h} \\& A_{7}=-\gamma \sigma _{1}\varphi _{n-1}^{\alpha _{1}},\qquad A_{8}=- \beta \sigma _{1}\varphi _{n-1}^{\alpha _{1}},\qquad B_{1}=1-\gamma \sigma _{2}+\frac{3\gamma \zeta _{i-1}^{n}}{2h}+ \frac{\beta \zeta _{i}^{n}}{2h}-\frac{\gamma \zeta _{i+1}^{n}}{2h}, \\& B_{2}=-2-\beta \sigma _{2}-\frac{2\gamma \zeta _{i-1}^{n}}{h}+ \frac{2\gamma \zeta _{i+1}^{n}}{h},\qquad B_{3}=1-\gamma \sigma _{2}+ \frac{\gamma \zeta _{i-1}^{n}}{2h}-\frac{\beta \zeta _{i}^{n}}{2h}- \frac{3\gamma \zeta _{i+1}^{n}}{2h}, \\& B_{4}=\frac{3\gamma \xi _{i-1}^{n}}{2h}+ \frac{\beta \xi _{i}^{n}}{2h}- \frac{\gamma \xi _{i+1}^{n}}{2h},\qquad B_{5}= \frac{2\gamma }{h} \bigl(\xi _{i+1}^{n}-\xi _{i-1}^{n} \bigr) , \\& B_{6}=\frac{\gamma \xi _{i-1}^{n}}{2h}-\frac{\beta \xi _{i}^{n}}{2h}- \frac{3\gamma \xi _{i+1}^{n}}{2h},\qquad B_{7}=-\gamma \sigma _{2} \varphi _{n-1}^{\alpha _{2}}, \qquad B_{8}=-\beta \sigma _{2}\varphi _{n-1}^{ \alpha _{2}}, \\& (\lambda _{1} )_{i}^{n}=\sigma _{1} {\sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{1}}-\varphi _{n-q}^{\alpha _{1}} \bigr) \bigl[ \gamma \bigl(U_{i+1}^{q}+U_{i-1}^{q} \bigr) +\beta U_{i}^{q} \bigr], \\& (\lambda _{2} )_{i}^{n}=\sigma _{2} {\sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{2}}-\varphi _{n-q}^{\alpha _{2}} \bigr) \bigl[ \gamma \bigl(V_{i+1}^{q}+V_{i-1}^{q} \bigr) +\beta V_{i}^{q} \bigr]. \end{aligned}$$

On simplifying Eqs. (19) to (22), we obtain a system of \(2N\) linear equations in the \((2N+4)\) unknowns \(( U_{0},U_{1},\ldots,U_{N+1} ) \) and \(( V_{0},V_{1},\ldots,V_{N+1} ) \). Four additional constraints are necessary to obtain a unique solution of the resulting scheme; these are supplied by the boundary conditions:

$$ U_{0}^{n}=f_{1}(t_{n}),\qquad U_{N+1}^{n}=f_{2}(t_{n}),\qquad V_{0}^{n}=g_{1}(t_{n}),\qquad V_{N+1}^{n}=g_{2}(t_{n}). $$

Eliminating \(U_{0}\), \(U_{N+1}\), \(V_{0}\), and \(V_{N+1}\), the system reduces to a matrix system of dimension \((2N)\times (2N)\); the starting values are obtained from the initial conditions.

Remark 2

The local truncation errors (see [46]) \(T=[T_{1i},T_{2i}]\), \(i=1,2,\dots ,N\), can be written as follows:

$$\begin{aligned}& T_{1i}= \bigl(h^{2}-(2\gamma +\beta ) \bigr) \frac{\partial ^{2} u_{i}^{n}}{\partial x^{2}} + \biggl( \frac{h^{2}}{12}-\gamma \biggr)h^{2} \frac{\partial ^{4} u_{i}^{n}}{\partial x^{4}}+O\bigl(k^{2-\alpha _{1}}+h^{6}\bigr), \end{aligned}$$
(23)
$$\begin{aligned}& T_{2i}= \bigl(h^{2}-(2\gamma +\beta ) \bigr) \frac{\partial ^{2} v_{i}^{n}}{\partial x^{2}} + \biggl( \frac{h^{2}}{12}-\gamma \biggr)h^{2} \frac{\partial ^{4} v_{i}^{n}}{\partial x^{4}}+O\bigl(k^{2-\alpha _{2}}+h^{6}\bigr). \end{aligned}$$
(24)

Equations (23) and (24) suggest two methods, depending on the choice of the parameters β and γ, as follows:

  1. If \(2\gamma +\beta =h^{2}\) and \(\gamma \neq \frac{h^{2}}{12}\), then \(T_{ji}=O(k^{2-\alpha _{j}}+h^{4})\), \(j=1,2\).

  2. If \(2\gamma +\beta =h^{2}\) and \(\gamma = \frac{h^{2}}{12}\), then \(T_{ji}=O(k^{2-\alpha _{j}}+h^{6})\), \(j=1,2\).

2.2 Convergence analysis

According to Remark 2, we have chosen \(2\gamma +\beta =h^{2}\), where \(\gamma =\frac{h^{2}}{12}\) and \(\beta =\frac{5h^{2}}{6}\). Let us rewrite Eqs. (21) and (22) as follows:

$$\begin{aligned} QR=P, \end{aligned}$$
(25)

where \(R=[U,V]^{T}\), \(U=[U_{1}^{n},U_{2}^{n},\dots ,U_{N}^{n}]^{T}\), \(V=[V_{1}^{n},V_{2}^{n}, \dots ,V_{N}^{n}]^{T}\), and the matrix Q is given as the block matrix

$$ \mathrm{Q} = \left [ \textstyle\begin{array}{c|c} Q_{11} & Q_{12} \\ \hline Q_{21} & Q_{22} \end{array}\displaystyle \right ], $$

where

$$ \left . \begin{aligned} &Q_{11}=Q_{0}+h^{2} \sigma _{1} Q_{1}+\frac{h}{2} Z_{\delta },\qquad Q_{12} = \frac{h}{2} Z_{\eta }, \\ &Q_{21}=\frac{h}{2} Z_{\xi }, \qquad Q_{22}=Q_{0}+h^{2}\sigma _{2} Q_{1}+ \frac{h}{2} Z_{\zeta }, \end{aligned} \right \} , $$
(26)

and square matrices \(Q_{0}\), \(Q_{1}\), and \(Z_{x}, x=\delta ,\eta ,\xi ,\zeta \) are given by

$$\begin{aligned}& Q_{0}= \begin{bmatrix} -2&1&0&0&\cdots &0 \\ 1&-2&1&0&\cdots &0 \\ 0&1&-2&1&\cdots &0 \\ \vdots &\vdots &\ddots &\ddots &\ddots &\vdots \\ 0&0&\cdots &1&-2&1 \\ 0&0&\cdots &0&1&-2 \end{bmatrix}, \\& Q_{1}= \begin{bmatrix} -5/6&-1/12&0&0&\cdots &0 \\ -1/12&-5/6&-1/12&0&\cdots &0 \\ 0&-1/12&-5/6&-1/12&\cdots &0 \\ \vdots &\vdots &\ddots &\ddots &\ddots &\vdots \\ 0&0&\cdots &-1/12&-5/6&-1/12 \\ 0&0&\cdots &0&-1/12&-5/6 \end{bmatrix}, \\& Z_{x}= \begin{bmatrix} \frac{x_{i+1}^{n}-x_{i-1}^{n}}{3}&\frac{3x_{i-1}^{n}-10 x_{i}^{n}-x_{i+1}^{n}}{12}&0&\cdots &0 \\ \frac{3x_{i-1}^{n}+10 x_{i}^{n}-x_{i+1}^{n}}{12}&\frac{x_{i+1}^{n}-x_{i-1}^{n}}{3}&\frac{3x_{i-1}^{n}-10 x_{i}^{n}-x_{i+1}^{n}}{12}&\cdots &0 \\ 0&\ddots &\ddots &\ddots &\vdots \\ \vdots &\cdots &\frac{3x_{i-1}^{n}+10 x_{i}^{n}-x_{i+1}^{n}}{12}&\frac{x_{i+1}^{n}-x_{i-1}^{n}}{3}&\frac{3x_{i-1}^{n}-10 x_{i}^{n}-x_{i+1}^{n}}{12} \\ 0&\cdots &0&\frac{3x_{i-1}^{n}+10 x_{i}^{n}-x_{i+1}^{n}}{12}&\frac{x_{i+1}^{n}-x_{i-1}^{n}}{3} \end{bmatrix},\quad x=\delta ,\eta ,\xi , \zeta . \end{aligned}$$

A matrix \(P=[P_{1},P_{2}]^{T}\) where

$$\begin{aligned}& P_{1}= \begin{cases} A_{7}U_{0}^{0}+A_{8} U_{1}^{0}+A_{7}U_{2}^{0}- (\lambda _{1} )_{1}^{n} -\gamma f_{0}^{n}-\beta f_{1}^{n} -\gamma f_{2}^{n}-A_{1} U_{0}^{n}, & i=1, \\ A_{7}U_{i-1}^{0}+A_{8} U_{i}^{0}+A_{7}U_{i+1}^{0}- (\lambda _{1} )_{i}^{n}-\gamma f_{i-1}^{n}-\beta f_{i}^{n} -\gamma f_{i+1}^{n}, & 1< i< N, \\ A_{7}U_{N-1}^{0}+A_{8} U_{N}^{0}+A_{7}U_{N+1}^{0}- (\lambda _{1} )_{N}^{n} -\gamma f_{N-1}^{n}-\beta f_{N}^{n} -\gamma f_{N+1}^{n}-A_{3} U_{N+1}^{n}, & i=N, \end{cases} \\& P_{2}= \begin{cases} B_{7}V_{0}^{0}+B_{8} V_{1}^{0}+B_{7}V_{2}^{0}- (\lambda _{2} )_{1}^{n} -\gamma g_{0}^{n}-\beta g_{1}^{n} -\gamma g_{2}^{n}-B_{1} V_{0}^{n}, & i=1, \\ B_{7}V_{i-1}^{0}+B_{8} V_{i}^{0}+B_{7}V_{i+1}^{0}- (\lambda _{2} )_{i}^{n}-\gamma g_{i-1}^{n}-\beta g_{i}^{n} -\gamma g_{i+1}^{n}, & 1< i< N, \\ B_{7}V_{N-1}^{0}+B_{8} V_{N}^{0}+B_{7}V_{N+1}^{0}- (\lambda _{2} )_{N}^{n} -\gamma g_{N-1}^{n}-\beta g_{N}^{n} -\gamma g_{N+1}^{n}-B_{3} V_{N+1}^{n}, & i=N. \end{cases} \end{aligned}$$

Let \(\bar{U}=[u,v]^{T}\), \(u=[u_{1},u_{2},\dots ,u_{N}]^{T}\), and \(v=[v_{1},v_{2},\dots ,v_{N}]^{T}\) denote the exact solutions of Eqs. (1) and (2) at the nodal points \(x_{i}\), \(i=1,2,\dots ,N\), at time level \(t_{n}\). Then we have

$$ Q\bar{U}=P+T, $$
(27)

where \(T=[T_{1i},T_{2i}]^{T}\) is the local truncation error described in Remark 2. From Eqs. (25) and (27), we can write the error equation as follows:

$$ Q (\bar{U}-R )=QE=T, $$
(28)

where \(E=[E_{1i},E_{2i}]^{T}\) is the error of discretization with \(E_{1i}=u_{i}-U_{i}^{n}\) and \(E_{2i}=v_{i}-V_{i}^{n}\).

For sufficiently small step h, the diagonal blocks \(Q_{11}\) and \(Q_{22}\) are invertible and the following condition holds:

$$ \bigl\Vert Q_{12}Q_{22}^{-1} \bigr\Vert _{\infty } \bigl\Vert Q_{21}Q_{11}^{-1} \bigr\Vert _{\infty }< 1. $$

According to [48], matrix Q is invertible. Moreover,

$$ \bigl\Vert Q^{-1} \bigr\Vert _{\infty }\leq \frac{\max \{ \Vert Q_{11}^{-1} \Vert _{\infty }, \Vert Q_{22}^{-1} \Vert _{\infty } \} (1+ \Vert Q_{12}Q_{22}^{-1} \Vert _{\infty } ) (1+ \Vert Q_{21}Q_{11}^{-1} \Vert _{\infty } )}{1- \Vert Q_{12}Q_{22}^{-1} \Vert _{\infty } \Vert Q_{21}Q_{11}^{-1} \Vert _{\infty }}. $$
(29)

From Eq. (28) and norm inequalities, we have

$$ \Vert E \Vert _{\infty }\leq \bigl\Vert Q^{-1} \bigr\Vert _{\infty } \Vert T \Vert _{\infty }. $$
(30)

Since

$$ \Vert T \Vert _{\infty }\leq \textstyle\begin{cases} O(k^{2-\alpha }+h^{4}),& 2\gamma +\beta =h^{2}, \gamma \neq h^{2}/12, \\ O(k^{2-\alpha }+h^{6}),& 2\gamma +\beta =h^{2}, \gamma = h^{2}/12, \end{cases} $$

and from the properties of the matrices \(Q_{11}\), \(Q_{12}\), \(Q_{21}\), and \(Q_{22}\) defined in Eq. (26), we have

$$ \Vert E \Vert _{\infty }\leq \textstyle\begin{cases} O(k^{2-\alpha }+h^{2}),& 2\gamma +\beta =h^{2}, \gamma \neq h^{2}/12, \\ O(k^{2-\alpha }+h^{4}),& 2\gamma +\beta =h^{2}, \gamma = h^{2}/12. \end{cases} $$
(31)

This shows that the scheme (25) is second-order convergent in space in the case \(2\gamma +\beta =h^{2}\), \(\gamma \neq h^{2}/12\), and fourth-order convergent in space in the case \(2\gamma +\beta =h^{2}\), \(\gamma = h^{2}/12\).
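In practice, the convergence orders in Eq. (31) can be confirmed by computing the observed order from two successive refinements; the helper below (with hypothetical error values) illustrates the computation.

```python
# A small helper (sketch) for estimating the observed order of accuracy from two runs,
# which is how rates such as those in Eq. (31) can be checked against the tables in Section 4.
import math

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# hypothetical errors from two spatial resolutions
print(observed_order(3.2e-6, 2.0e-7, 0.1, 0.05))   # approximately 4 for the fourth-order choice
```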

2.3 Stability analysis of the method

The stability analysis of the difference schemes listed in Eqs. (19) to (22) is discussed by assuming the nonlinear terms \(\delta _{r}^{n}\) and \(\eta _{r}^{n}\), \(r=i-1, i, i+1\), as local constants D and E respectively.

Let \(\tilde{U}_{i}^{n}\) and \(\tilde{V}_{i}^{n}\) be the approximate solutions of Eqs. (19) to (22) and define

$$ P_{i}^{n}=U_{i}^{n}- \tilde{U}_{i}^{n},\qquad Q_{i}^{n}=V_{i}^{n}- \tilde{V}_{i}^{n},\quad i=0, 1,\ldots,N+1, n=0,1,\ldots,M. $$

With the above definition and regarding Eqs. (19) and (21), we can get the following round-off error equations:

$$\begin{aligned}& a_{1}P_{i-1}^{1}+a_{2}P_{i}^{1}+a_{3}P_{i+1}^{1}+ a_{4}Q_{i-1}^{1}+a_{5}Q_{i}^{1}+a_{6}Q_{i+1}^{1} \\& \quad =a_{7}P _{i-1}^{0}+a_{8}P_{i}^{0}+a_{7}P_{i+1}^{0}, \end{aligned}$$
(32)
$$\begin{aligned}& a_{1}P_{i-1}^{n}+a_{2}P_{i}^{n}+a_{3}P_{i+1}^{n}+ a_{4}Q_{i-1}^{n}+a_{5}Q_{i}^{n}+a_{6}Q_{i+1}^{n} \\& \quad =a_{7}P_{i-1}^{0} +a_{8}P_{i}^{0}+a_{7}P_{i+1}^{0}- \sigma _{1} {\sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{\alpha _{1}}-\varphi _{n-q}^{\alpha _{1}} \bigr) \bigl[ \gamma \bigl(P_{i+1}^{q}+P_{i-1}^{q} \bigr) +\beta P_{i}^{q} \bigr], \end{aligned}$$
(33)

where \(a_{1}=1-\gamma \sigma _{1}+\frac{D}{2h} ( \beta +2\gamma )\), \(a_{2}=-2-\beta \sigma _{1}\), \(a_{3}=1-\gamma \sigma _{1}- \frac{D}{2h} ( \beta +2\gamma )\), \(a_{4}=\frac{E}{2h} ( \beta +2\gamma )\), \(a_{5}=0\), \(a_{6}=-\frac{E}{2h} ( \beta +2\gamma ) \), \(a_{7}=-\gamma \sigma _{1} \varphi _{n-1}^{\alpha _{1}}\), and \(a_{8}=-\beta \sigma _{1}\varphi _{n-1}^{\alpha _{1}}\).

The von Neumann method assumes that

$$\begin{aligned}& P_{i}^{n}=\varsigma _{n}e^{(Ii\phi h)}, \end{aligned}$$
(34)
$$\begin{aligned}& Q_{i}^{n}=\varsigma _{n}e^{(Ii\phi h)}, \end{aligned}$$
(35)

where \(I=\sqrt{-1}\).

Substituting Eqs. (34) and (35) into Eq. (32), we get

$$\begin{aligned}& \bigl( a_{1}e^{I (i-1 ) \phi h}+a_{2}e^{I i \phi h}+a_{3} e^{I ( i+1 ) \phi h}+a_{4}e^{I (i-1 ) \phi h}+a_{6}e^{I (i+1 ) \phi h} \bigr) \varsigma _{1} \\& \quad = \bigl( a_{7}e^{I ( i-1 ) \phi h}+ a_{8}e^{I i \phi h}+ a_{7}e^{I ( i+1 ) \phi h} \bigr) \varsigma _{0}, \end{aligned}$$

after some algebraic manipulation, we have

$$ \varsigma _{1}=\frac{\varphi }{\eta +I \psi }\varsigma _{0}, $$
(36)

where \(\varphi =\sigma _{1} ( \beta +2\gamma \cos ( \phi h ) ) \), \(\eta =2 ( 1-\cos (\phi h) ) +\varphi \), and \(\psi =\frac{D+E}{h} (\beta +2\gamma ) \sin ( \phi h ) \).

Since \(\beta >2\gamma >0\) for the chosen parameters, we have \(\varphi >0\) and \(\eta \geq \varphi \); hence

$$ \vert \varsigma _{1} \vert =\sqrt{ \frac{\varphi ^{2}}{\eta ^{2}+ \psi ^{2}}} \vert \varsigma _{0} \vert \leq \vert \varsigma _{0} \vert . $$
(37)
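A direct numerical evaluation of the amplification factor in Eq. (37) confirms this bound; the sketch below samples \(\phi h\) over \((0,\pi )\) for the parameter choice \(\gamma =h^{2}/12\), \(\beta =5h^{2}/6\) and illustrative frozen constants D and E.

```python
# A short numerical check (sketch) of the amplification factor in Eq. (37).
import math

h, k, alpha1 = 0.05, 0.01, 0.5
gamma, beta = h**2 / 12, 5 * h**2 / 6
sigma1 = k ** (-alpha1) / math.gamma(2 - alpha1)
D, E = 1.0, 1.0   # hypothetical local constants for the frozen nonlinear terms

worst = 0.0
for m in range(1, 2000):
    ph = m * math.pi / 2000          # sample the wavenumber phi*h over (0, pi)
    var_phi = sigma1 * (beta + 2 * gamma * math.cos(ph))
    eta = 2 * (1 - math.cos(ph)) + var_phi
    psi = (D + E) / h * (beta + 2 * gamma) * math.sin(ph)
    worst = max(worst, var_phi / math.hypot(eta, psi))
print(worst)   # stays below 1, consistent with |varsigma_1| <= |varsigma_0|
```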

Substituting Eqs. (34) and (35) into Eq. (33) results in

$$\begin{aligned}& \bigl(a_{1}e^{I (i-1 ) \phi h}+a_{2}e^{I i \phi h}+a_{3} e^{I (i+1 ) \phi h}+a_{4}e^{I (i-1 )\phi h}+a_{6}e^{I (i+1 ) \phi h} \bigr) \varsigma _{n} \\& \quad = \bigl( a_{7} \bigl(e^{I (i-1 ) \phi h}+e^{I (i+1 ) \phi h} \bigr)+ a_{8}e^{I i \phi h} \bigr) \varsigma _{0} \\& \qquad {} -\sigma _{1}{\sum_{q=1}^{n-1}} \bigl(\varphi _{n-q-1}^{ \alpha _{1}}-\varphi _{n-q}^{\alpha _{1}} \bigr) \bigl[ \gamma \bigl(e^{I (i-1 ) \phi h}+e^{I (i+1 ) \phi h} \bigr)+\beta e^{Ii \phi h} \bigr]\varsigma _{q}. \end{aligned}$$

After some rearrangement we get

$$ \varsigma _{n}=\frac{\varphi }{\eta +I \psi } \Biggl(\varphi _{n-1}^{ \alpha _{1}}\varsigma _{0}+\sum _{q=1}^{n-1} \bigl(\varphi _{n-q-1}^{ \alpha _{1}}- \varphi _{n-q}^{\alpha _{1}} \bigr)\varsigma _{q} \Biggr). $$

Using mathematical induction, we can prove that \(\vert \varsigma _{n} \vert \leq \vert \varsigma _{0} \vert \) as follows:

For \(n=2\),

$$ \vert \varsigma _{2} \vert = \biggl\vert \frac{\varphi }{\eta +I \psi } \biggr\vert \bigl\vert \varphi _{1}^{\alpha _{1}}\varsigma _{0}+ \bigl(\varphi _{0}^{\alpha _{1}}-\varphi _{1}^{\alpha _{1}} \bigr)\varsigma _{1} \bigr\vert \leq \biggl\vert \frac{\varphi }{\eta +I \psi } \biggr\vert \bigl(\varphi _{1}^{\alpha _{1}}+\varphi _{0}^{\alpha _{1}}-\varphi _{1}^{\alpha _{1}} \bigr) \vert \varsigma _{0} \vert = \biggl\vert \frac{\varphi }{\eta +I \psi } \biggr\vert \varphi _{0}^{\alpha _{1}} \vert \varsigma _{0} \vert \leq \vert \varsigma _{0} \vert , $$

where Eq. (37) and Lemma 2 have been used.

Let \(k\in \mathbb{Z_{+}} \) be given and suppose \(\vert \varsigma _{n} \vert \leq \vert \varsigma _{0} \vert \) is true for \(n = k\). Then

$$ \varsigma _{k+1}=\frac{\varphi }{\eta +I \psi } \Biggl(\varphi _{k}^{ \alpha _{1}}\varsigma _{0}+\sum _{q=1}^{k} \bigl(\varphi _{k-q}^{ \alpha _{1}}- \varphi _{k -q+1}^{\alpha _{1}} \bigr)\varsigma _{q} \Biggr). $$

By Lemma 2, we have \(0<\varphi _{q}^{\alpha _{1}}<\varphi _{q-1}^{\alpha _{1}}\), \(q=1,2,\ldots \) , and consequently \((\varphi _{k-q}^{\alpha _{1}}-\varphi _{k -q+1}^{\alpha _{1}} )>0\). Thus, by the triangle inequality,

$$\begin{aligned} \vert \varsigma _{k+1} \vert &\leq \sqrt{ \frac{\varphi ^{2}}{\eta ^{2}+ \psi ^{2}}} \Biggl(\varphi _{k}^{ \alpha _{1}} \vert \varsigma _{0} \vert +\sum_{q=1}^{k} \bigl(\varphi _{k-q}^{\alpha _{1}}-\varphi _{k -q+1}^{\alpha _{1}} \bigr) \vert \varsigma _{q} \vert \Biggr) \\ &\leq \varphi _{k}^{\alpha _{1}} \vert \varsigma _{0} \vert +\sum_{q=1}^{k} \bigl(\varphi _{k-q}^{\alpha _{1}}- \varphi _{k -q+1}^{\alpha _{1}} \bigr) \vert \varsigma _{0} \vert \quad \text{(by the induction hypothesis)} \\ & = \Biggl(\varphi _{k}^{\alpha _{1}}+\sum _{q=1}^{k} \bigl( \varphi _{k-q}^{\alpha _{1}}- \varphi _{k -q+1}^{\alpha _{1}} \bigr) \Biggr) \vert \varsigma _{0} \vert =\varphi _{0}^{ \alpha _{1}} \vert \varsigma _{0} \vert = \vert \varsigma _{0} \vert . \end{aligned}$$

Expanding the summation in the last line, the intermediate terms cancel (the sum telescopes), and we are left with \(\varphi _{0}^{\alpha _{1}} \vert \varsigma _{0} \vert = \vert \varsigma _{0} \vert \). Thus \(\vert \varsigma _{n} \vert \leq \vert \varsigma _{0} \vert \) holds for all \(n\geq 1\), and the scheme is stable for \(\beta ,\gamma > 0\).

We can obtain similar results for Eqs. (20) and (22).

3 Shifted Jacobi spectral collocation method

This section introduces a numerical scheme based on the shifted Jacobi spectral collocation method (SJSCM) to obtain the approximate solution of the coupled Burgers system Eqs. (1) and (2).

The shifted Jacobi polynomials of degree j, denoted by \(P_{L,j}^{(\mu ,\eta )}(x)\), \(\mu ,\eta >-1\), \(x\in [0,L]\), constitute an orthogonal system with respect to the weight function \(\omega _{L}^{(\mu ,\eta )}(x)=x^{\eta }(L-x)^{\mu }\):

$$ \int _{0}^{L}P_{L,i}^{(\mu ,\eta )}(x)P_{L,j}^{(\mu ,\eta )}(x) \omega _{L}^{(\mu ,\eta )}(x)\,dx=\delta _{ij}h_{L,j}^{(\mu ,\eta )}, $$

where \(\delta _{ij}\) is the Kronecker function and

$$ h_{L,j}^{(\mu ,\eta )}= \frac{L^{\mu +\eta +1}\varGamma (j+\mu +1) \varGamma (j+\eta +1)}{(2j+\mu +\eta +1)\varGamma (j+\mu +\eta +1)j!}. $$
(38)

The shifted Jacobi polynomial can be obtained with the following three-term recurrence relation:

$$ P_{L,j+1}^{(\mu ,\eta )}(x)=(a_{j}x-b_{j})P_{L,j}^{(\mu ,\eta )}(x)-c_{j}P_{L,j-1}^{(\mu ,\eta )}(x),\quad j\geq 1, $$

with \(P_{L,0}^{(\mu ,\eta )}(x)=1\) and \(P_{L,1}^{(\mu ,\eta )}(x)=\frac{1}{L}(\mu +\eta +2)x-(\eta +1)\), where

$$\begin{aligned}& a_{j} = \frac{(2j+\mu +\eta +1)(2j+\mu +\eta +2)}{(j+1)(j+\mu +\eta +1)L}, \\& b_{j} = \frac{(2j+\mu +\eta +1)(2j^{2}+(1+\eta )(\mu +\eta )+2j(\mu +\eta +1))}{(j+1)(j+\mu +\eta +1)(2j+\mu +\eta )}, \\& c_{j} = \frac{(2j+\mu +\eta +2)(j+\mu )(j+\eta )}{(j+1)(j+\mu +\eta +1)(2j+\mu +\eta )}. \end{aligned}$$

The analytic form of shifted Jacobi polynomial \(P_{L,j}^{(\mu ,\eta )}(x)\) is given by

$$ P_{L,j}^{(\mu ,\eta )}(x)=\sum_{k=0}^{j}(-1)^{j-k} \frac{\varGamma (j+\eta +1)\varGamma (j+k+\mu +\eta +1)}{\varGamma (k+\eta +1)\varGamma (j+\mu +\eta +1)(j-k)!k!L^{k}}x^{k}. $$

The values of the shifted Jacobi polynomials at the boundary points are given by

$$ P_{L,j}^{(\mu ,\eta )}(0)=(-1)^{j} \frac{\varGamma (j+\eta +1)}{\varGamma (\eta +1)j!},\qquad P_{L,j}^{( \mu ,\eta )}(L)= \frac{\varGamma (j+\mu +1)}{\varGamma (\mu +1)j!}. $$

Suppose that \(f(x)\) is a square-integrable function with respect to the shifted Jacobi weight function \(\omega _{L}^{(\mu ,\eta )}\) on the interval \((0,L)\). Then \(f(x)\) can be written in terms of \(P_{L,j}^{(\mu ,\eta )}\) as follows:

$$ f(x)=\sum_{j=0}^{\infty }a_{j}P_{L,j}^{(\mu ,\eta )}(x), $$
(39)

where

$$ a_{j}=\frac{1}{h_{L,j}^{(\mu ,\eta )}} \int _{0}^{L}f(x)P_{L,j}^{( \mu ,\eta )}(x) \omega _{L}^{(\mu ,\eta )}(x)\,dx,\quad j=0,1,\ldots. $$

The shifted Jacobi–Gauss quadrature is used to approximate the previous integral as follows:

$$ \int _{0}^{L}f(x)P_{L,j}^{(\mu ,\eta )}(x) \omega _{L}^{(\mu ,\eta )}(x)\,dx\simeq \sum_{k=0}^{N}f \bigl(x_{G,L,k}^{(\mu ,\eta )} \bigr) P_{L,j}^{( \mu ,\eta )} \bigl(x_{G,L,k}^{(\mu ,\eta )} \bigr)\omega _{G,L,k}^{( \mu ,\eta )}, $$
(40)

where \(x_{G,L,k}^{(\mu ,\eta )}, k=0,1,\ldots ,N\), are the roots of the shifted Jacobi polynomial \(P_{L,N+1}^{(\mu ,\eta )}(x)\) of degree \(N+1\) and \(\omega _{G,L,k}^{(\mu ,\eta )}, k=0,1,\ldots ,N \), are the corresponding Christoffel numbers

$$ \omega _{G,L,k}^{(\mu ,\eta )}= \frac{L^{\mu +\eta +1}\varGamma (N+\mu +2)\varGamma (N+\eta +2)}{(N+1)!\varGamma (N+\mu +\eta +2) (L-x_{G,L,k}^{(\mu ,\eta )} ) x_{G,L,k}^{(\mu ,\eta )} [\partial _{x}P_{L,N+1}^{(\mu ,\eta )} (x_{G,L,k}^{(\mu ,\eta )} ) ]^{2}}. $$
(41)
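The orthogonality relation and the quadrature rule above can be checked with a few lines of Python; rather than the closed-form Christoffel numbers (41), the sketch below maps SciPy's Gauss–Jacobi rule from \([-1,1]\) to \([0,L]\), which yields an equivalent quadrature for the shifted weight \(x^{\eta }(L-x)^{\mu }\) (the parameter values are illustrative).

```python
# A minimal check (sketch) of the orthogonality relation (38) for the shifted Jacobi polynomials.
import math
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

L, mu, eta, N = 2.0, 0.0, 0.0, 8   # hypothetical parameter choices

def shifted_jacobi(j, x):
    return eval_jacobi(j, mu, eta, 2 * x / L - 1)

def h_norm(j):   # Eq. (38)
    return (L ** (mu + eta + 1) * math.gamma(j + mu + 1) * math.gamma(j + eta + 1)
            / ((2 * j + mu + eta + 1) * math.gamma(j + mu + eta + 1) * math.factorial(j)))

s, w = roots_jacobi(N + 1, mu, eta)          # weight (1-s)**mu * (1+s)**eta on [-1, 1]
x = L * (s + 1) / 2                          # mapped nodes on [0, L]
w_shift = (L / 2) ** (mu + eta + 1) * w      # mapped weights

for i in range(3):
    for j in range(3):
        val = np.sum(w_shift * shifted_jacobi(i, x) * shifted_jacobi(j, x))
        print(i, j, round(val, 10), h_norm(j) if i == j else 0.0)
```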

Now, if we approximate \(f(x)\) in Eq. (39) by the first \((N+1)\)-terms, then

$$ f(x)\simeq f_{N}(x)\equiv \sum_{j=0}^{N}a_{j} P_{L,j}^{(\mu ,\eta )}(x)=A^{T} \Psi _{L,N}(x), $$
(42)

with \(A^{T}= [ a_{0}\enskip a_{1}\enskip \cdots \enskip a_{N}] \) and \(\Psi _{L,N}(x)= [ P_{L,0}^{(\mu ,\eta )}(x)\enskip P_{L,1}^{(\mu ,\eta )}(x)\enskip \cdots \enskip P_{L,N}^{( \mu ,\eta )}(x) ] ^{T}\).

Similarly, a function of two independent variables \(f(x, t)\) that is infinitely differentiable on \([0, L] \times [0, \tau ]\) can be expanded in terms of the double shifted Jacobi polynomials as follows:

$$ f(x,t)=\sum_{i=0}^{\infty }\sum _{j=0}^{\infty }a_{ij}P_{\tau ,i}^{( \mu ,\eta )}(t) P_{L,j}^{(\mu ,\eta )}(x), $$

which can be approximated by the first \((M+1)\times (N+1)\) terms with the truncation error \(\sum_{i=M+1}^{\infty }\sum_{j=N+1}^{\infty }a_{ij}P_{\tau ,i}^{(\mu , \eta )}(t) P_{L,j}^{(\mu ,\eta )}(x)\) as follows:

$$ f(x,t)\simeq f_{M,N}(x,t)\equiv \sum_{i=0}^{M} \sum_{j=0}^{N}a_{ij}P_{ \tau ,i}^{(\mu ,\eta )}(t) P_{L,j}^{(\mu ,\eta )}(x)= \Psi _{\tau ,M}^{T}(t) A \Psi _{L,N}(x), $$
(43)

with the coefficient matrix A given by

$$ A= \begin{bmatrix} a_{00} &a_{01} &\cdots &a_{0N} \\ a_{10} &a_{11} &\cdots &a_{1N} \\ \vdots &\vdots &\cdots &\vdots \\ a_{M0} &a_{M1} &\cdots &a_{MN} \end{bmatrix} , $$

where

$$\begin{aligned}& a_{ij}=\frac{1}{h_{\tau ,i}^{(\mu ,\eta )}} \frac{1}{h_{L,j}^{(\mu ,\eta )}} \int _{0}^{\tau } \int _{0}^{L}f(x,t)P_{ \tau ,i}^{(\mu ,\eta )}(t)P_{L,j}^{(\mu ,\eta )}(x) \omega _{\tau }^{( \mu ,\eta )}(t)\omega _{L}^{(\mu ,\eta )}(x)\,dx \,dt, \\& \quad i=0,1,\ldots ,M, j=0,1,\ldots ,N. \end{aligned}$$

Using the shifted Jacobi–Gauss quadrature formula [36, 49], we can approximate the coefficients \(a_{ij}\) as follows:

$$\begin{aligned} a_{ij}&=\frac{1}{h_{\tau ,i}^{(\mu ,\eta )}} \frac{1}{h_{L,j}^{(\mu ,\eta )}} \\ &\quad \times {} \sum_{\kappa =0}^{M}\sum _{\zeta =0}^{N}f \bigl(x_{G,L,\zeta }^{(\mu , \eta )},t_{G,\tau ,\kappa }^{(\mu ,\eta )} \bigr) P_{\tau ,i}^{(\mu , \eta )} \bigl(t_{G,\tau ,\kappa }^{(\mu ,\eta )} \bigr)P_{L,j}^{(\mu , \eta )} \bigl(x_{G,L,\zeta }^{(\mu ,\eta )} \bigr)\omega _{G,\tau , \kappa }^{(\mu ,\eta )}\omega _{G,L,\zeta }^{(\mu ,\eta )}, \end{aligned}$$
(44)

where \(x_{G,L,\zeta }^{(\mu ,\eta )}\), \(t_{G,\tau ,\kappa }^{(\mu ,\eta )}\), \(\omega _{G,\tau ,\kappa }^{(\mu ,\eta )}\), and \(\omega _{G,L,\zeta }^{(\mu ,\eta )}\) are defined by Eqs. (40) and (41).

Theorem 1

([39])

Let \(\Psi _{\tau ,M}(t)\) be the shifted Jacobi vector defined in Eq. (42), and let \(\alpha > 0\). Then

$$ D_{t}^{\alpha }\Psi _{\tau ,M}(t)\simeq D_{\alpha }\Psi _{\tau ,M}(t), $$
(45)

where \(D_{\alpha }\) is the \((M + 1) \times (M + 1)\) operational matrix of derivatives of order α in the Liouville–Caputo sense and is defined by

$$ D_{\alpha }= \begin{bmatrix} 0 &0 &\cdots &0 \\ \vdots &\vdots &\cdots &\vdots \\ 0 &0 &\cdots &0 \\ d_{\alpha } ( \lfloor \alpha \rfloor , 0 ) &d_{ \alpha } ( \lfloor \alpha \rfloor , 1 ) &\cdots &d_{ \alpha } ( \lfloor \alpha \rfloor , M ) \\ \vdots &\vdots &\cdots &\vdots \\ d_{\alpha } (i, 0 ) &d_{\alpha } (i, 1 ) &\cdots &d_{ \alpha } (i, M ) \\ \vdots &\vdots &\cdots &\vdots \\ d_{\alpha } (M, 0 ) &d_{\alpha } (M, 1 ) &\cdots &d_{ \alpha } (M, M ) \end{bmatrix}, $$

where \(\lfloor \alpha \rfloor \) is the floor function and

$$\begin{aligned} d_{\alpha } (i, j ) =&\sum_{k= \lceil \alpha \rceil }^{i} \frac{(-1)^{i-k}\tau ^{\mu +\eta -\alpha +1}\varGamma (j+\eta +1) \varGamma (i+\eta +1)\varGamma (i+k+\mu +\eta +1)}{h_{\tau ,j}^{(\mu ,\eta )}\varGamma (k+\eta +1)\varGamma (j+\mu +\eta +1)\varGamma (i+\mu +\eta +1)\varGamma (k-\alpha +1)(i-k)!} \\ &{} \times \sum_{l=0}^{j} \frac{(-1)^{j-l}\varGamma (j+l+\mu +\eta +1) \varGamma (\mu +1)\varGamma (l+k+\eta -\alpha +1)}{\varGamma (l+\eta +1)\varGamma (l+k+\mu +\eta -\alpha +2)(j-l)! l!}. \end{aligned}$$

Similarly, the fractional derivative of order α of \(\Psi _{L,N}(x)\) can be expressed as in Eq. (45):

$$ D_{x}^{\alpha }\Psi _{L,N}(x)\simeq D_{\alpha }\Psi _{L,N}(x), $$
(46)

where \(D_{\alpha }\) is the \((N + 1) \times (N + 1)\) operational matrix of derivatives of order α.
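To illustrate the role of the operational matrix in Eq. (45), the sketch below constructs \(D_{1}\) (the integer case \(\alpha =1\)) for a shifted Legendre basis (\(\mu =\eta =0\)) by projecting each derivative \(\partial _{t}P_{\tau ,i}^{(\mu ,\eta )}\) onto the basis with Gauss–Jacobi quadrature, rather than by using the closed-form entries \(d_{\alpha }(i,j)\) above; for fractional α the closed form is required. The parameter values are illustrative only.

```python
# A small sketch of the operational-matrix idea in Eq. (45) for the integer case alpha = 1.
import math
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

tau, mu, eta, M = 1.0, 0.0, 0.0, 5   # hypothetical parameter choices

def P(i, t):                          # shifted Jacobi polynomial on [0, tau]
    return eval_jacobi(i, mu, eta, 2 * t / tau - 1)

def dP(i, t):                         # its first derivative (standard Jacobi identity)
    if i == 0:
        return np.zeros_like(np.asarray(t, dtype=float))
    return (i + mu + eta + 1) / tau * eval_jacobi(i - 1, mu + 1, eta + 1, 2 * t / tau - 1)

def h_norm(j):                        # Eq. (38) with L replaced by tau
    return (tau ** (mu + eta + 1) * math.gamma(j + mu + 1) * math.gamma(j + eta + 1)
            / ((2 * j + mu + eta + 1) * math.gamma(j + mu + eta + 1) * math.factorial(j)))

s, w = roots_jacobi(M + 1, mu, eta)
t_nodes = tau * (s + 1) / 2
w_shift = (tau / 2) ** (mu + eta + 1) * w

D1 = np.zeros((M + 1, M + 1))
for i in range(M + 1):
    for j in range(M + 1):
        D1[i, j] = np.sum(w_shift * dP(i, t_nodes) * P(j, t_nodes)) / h_norm(j)

t0 = 0.37
psi = np.array([P(i, t0) for i in range(M + 1)])
dpsi = np.array([dP(i, t0) for i in range(M + 1)])
print(np.max(np.abs(D1 @ psi - dpsi)))   # close to machine precision
```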

3.1 Time-fractional coupled Burgers’ equation

We are going to consider the time-fractional coupled Burgers’ Eqs. (1) and (2), which may be written as follows:

$$\begin{aligned}& \frac{\partial ^{\alpha _{1}}u}{\partial t^{\alpha _{1}}} - \frac{\partial ^{2}u}{\partial x^{2}}+(v-2u) \frac{\partial u}{\partial x}+u \frac{\partial v}{\partial x}-f (x,t )=0, \quad 0< \alpha _{1}\leq 1, \end{aligned}$$
(47)
$$\begin{aligned}& \frac{\partial ^{\alpha _{2}}v}{\partial t^{\alpha _{2}}} - \frac{\partial ^{2}v}{\partial x^{2}}+(u-2v) \frac{\partial v}{\partial x}+v \frac{\partial u}{\partial x}-g (x,t )=0,\quad 0< \alpha _{2}\leq 1 \end{aligned}$$
(48)

with the initial-boundary conditions

$$\begin{aligned}& u(x,0)=p(x),\qquad v(x,0)=q(x),\quad a\leq x\leq b \\& u(a,t)=f_{1}(t),\qquad u(b,t)=f_{2}(t), \qquad v(a,t)=g_{1}(t),\qquad v(b,t)=g_{2} (t),\quad t>0. \end{aligned}$$

Using Eq. (43), we approximate \(u(x,t)\), \(v(x,t)\), \(f(x,t)\), and \(g(x,t)\) as follows:

$$\begin{aligned} \left . \begin{aligned} & u(x,t)\simeq u_{M,N}(x,t)= \Psi _{\tau ,M}^{T}(t) U \Psi _{L,N}(x), \\ & v(x,t)\simeq v_{M,N}(x,t)= \Psi _{\tau ,M}^{T}(t) V \Psi _{L,N}(x), \\ & f(x,t)\simeq f_{M,N}(x,t)= \Psi _{\tau ,M}^{T}(t) A \Psi _{L,N}(x), \\ &g(x,t)\simeq g_{M,N}(x,t)= \Psi _{\tau ,M}^{T}(t) B \Psi _{L,N}(x), \end{aligned} \right \} \end{aligned}$$
(49)

where U and V are unknown \((M+1)\times (N+1)\) coefficient matrices, while A and B are defined as in Eq. (43), with

$$\begin{aligned}& a_{ij}=\frac{1}{h_{\tau ,i}^{(\mu ,\eta )}} \frac{1}{h_{L,j}^{(\mu ,\eta )}}\sum _{\kappa =0}^{M}\sum_{\zeta =0}^{N}f \bigl(x_{G,L,\zeta }^{(\mu ,\eta )},t_{G,\tau ,\kappa }^{(\mu ,\eta )} \bigr) P_{\tau ,i}^{(\mu ,\eta )} \bigl(t_{G,\tau ,\kappa }^{(\mu , \eta )} \bigr)P_{L,j}^{(\mu ,\eta )} \bigl(x_{G,L,\zeta }^{(\mu , \eta )} \bigr)\omega _{G,\tau ,\kappa }^{(\mu ,\eta )}\omega _{G,L, \zeta }^{(\mu ,\eta )}, \\& b_{ij}=\frac{1}{h_{\tau ,i}^{(\mu ,\eta )}} \frac{1}{h_{L,j}^{(\mu ,\eta )}}\sum _{\kappa =0}^{M}\sum_{\zeta =0}^{N}g \bigl(x_{G,L,\zeta }^{(\mu ,\eta )},t_{G,\tau ,\kappa }^{(\mu ,\eta )} \bigr) P_{\tau ,i}^{(\mu ,\eta )} \bigl(t_{G,\tau ,\kappa }^{(\mu , \eta )} \bigr)P_{L,j}^{(\mu ,\eta )} \bigl(x_{G,L,\zeta }^{(\mu , \eta )} \bigr)\omega _{G,\tau ,\kappa }^{(\mu ,\eta )}\omega _{G,L, \zeta }^{(\mu ,\eta )}. \end{aligned}$$

Using Theorem 1, we have

$$ \left . \begin{aligned} & \frac{\partial ^{\alpha _{1}}u(x,t)}{\partial t^{\alpha _{1}}}\simeq \Psi _{\tau ,M}^{T}(t) D_{\alpha _{1}}^{T} U \Psi _{L,N}(x), \\ & \frac{\partial u(x,t)}{\partial x}\simeq \Psi _{\tau ,M}^{T}(t) U D_{1} \Psi _{L,N}(x), \\ & \frac{\partial ^{2} u(x,t)}{\partial x^{2}}\simeq \Psi _{\tau ,M}^{T}(t) U D_{2} \Psi _{L,N}(x), \\ & \frac{\partial ^{\alpha _{2}}v(x,t)}{\partial t^{\alpha _{2}}}\simeq \Psi _{\tau ,M}^{T}(t) D_{\alpha _{2}}^{T} V \Psi _{L,N}(x), \\ & \frac{\partial v(x,t)}{\partial x}\simeq \Psi _{\tau ,M}^{T}(t) V D_{1} \Psi _{L,N}(x), \\ &\frac{\partial ^{2} v(x,t)}{\partial x^{2}}\simeq \Psi _{\tau ,M}^{T}(t) V D_{2} \Psi _{L,N}(x). \end{aligned} \right \} $$
(50)

Substituting Eqs. (49) and (50) in Eqs. (47) and (48) and collocating at \((M+1)\times (N-1)\) points, we have

$$ \left . \begin{aligned} & \Psi _{\tau ,M}^{T} (t_{i} ) D_{\alpha _{1}}^{T} U \Psi _{L,N} (x_{j} )-\Psi _{\tau ,M}^{T} (t_{i} ) U D_{2} \Psi _{L,N} (x_{j} ) \\ &\quad {} + \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) V \Psi _{L,N} (x_{j} )-2 \Psi _{\tau ,M}^{T} (t_{i} ) U \Psi _{L,N} (x_{j} ) \bigr) \\ &\quad {} \times \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) U D_{1} \Psi _{L,N} (x_{j} ) \bigr)-\Psi _{\tau ,M}^{T} (t_{i} ) A \Psi _{L,N} (x_{j} ) \\ &\quad {} + \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) U \Psi _{L,N} (x_{j} ) \bigr) \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) V D_{1} \Psi _{L,N} (x_{j} ) \bigr)\simeq 0, \\ & \Psi _{\tau ,M}^{T} (t_{i} ) D_{\alpha _{2}}^{T} V \Psi _{L,N} (x_{j} )-\Psi _{\tau ,M}^{T} (t_{i} ) V D_{2} \Psi _{L,N} (x_{j} ) \\ &\quad {} + \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) U \Psi _{L,N} (x_{j} )-2 \Psi _{\tau ,M}^{T} (t_{i} ) V \Psi _{L,N} (x_{j} ) \bigr) \\ &\quad {} \times \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) V D_{1} \Psi _{L,N} (x_{j} ) \bigr)-\Psi _{\tau ,M}^{T} (t_{i} ) B \Psi _{L,N} (x_{j} ) \\ &\quad {} + \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) V \Psi _{L,N} (x_{j} ) \bigr) \bigl(\Psi _{\tau ,M}^{T} (t_{i} ) U D_{1} \Psi _{L,N} (x_{j} ) \bigr)\simeq 0, \end{aligned} \right \} $$
(51)

where \(x_{j}\), \(j=0,1,\ldots ,N-2\), are the roots of the shifted Jacobi polynomial \(P_{L,N-1}^{(\mu ,\eta )} (x )\) of degree \(N-1\), and \(t_{i}\), \(i=0,1,\ldots ,M\), are the roots of the shifted Jacobi polynomial \(P_{\tau ,M+1}^{(\mu ,\eta )} (t )\) of degree \(M+1\). System (51) consists of \(2(M+1) (N-1)\) algebraic equations in the \(2(M+1) (N+1)\) unknown coefficients \(u_{ij}\) and \(v_{ij}\), \(i=0, 1,\ldots ,M\), \(j=0,1,\ldots ,N\). The remaining \(4(M+1)\) equations needed to obtain a unique solution are supplied by the boundary conditions

$$ \left . \begin{aligned} & \Psi _{\tau ,M}^{T} (t_{i} ) U \Psi _{L,N}(0)=f_{1} (t_{i} ), \\ & \Psi _{\tau ,M}^{T} (t_{i} ) U \Psi _{L,N}(L)=f_{2} (t_{i} ), \\ & \Psi _{\tau ,M}^{T} (t_{i} ) V \Psi _{L,N}(0)=g_{1} (t_{i} ), \\ &\Psi _{\tau ,M}^{T} (t_{i} ) V \Psi _{L,N}(L)=g_{2} (t_{i} ), \end{aligned} \right \} \quad i=0,1,\ldots ,M. $$
(52)

System (51) combined with Eq. (52) forms a system of \(2(M + 1)(N +1)\) nonlinear algebraic equations in the \(2(M + 1)(N +1)\) unknown coefficients, which can be solved using Newton's iterative method. Consequently, \(u_{M, N}(x, t)\) and \(v_{M,N}(x, t)\) can be calculated by the formulae given in Eq. (49).
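Schematically, this final step can be organized as in the sketch below; the placeholder residual stands in for the actual collocation and boundary residuals of Eqs. (51) and (52), which would be assembled from the quantities in Eqs. (49) and (50).

```python
# A schematic sketch of the final solve: the unknown coefficient matrices U and V are flattened
# into one vector, a residual is evaluated, and the system is passed to a Newton-type solver.
import numpy as np
from scipy.optimize import root

M, N = 4, 6

def unpack(z):
    U = z[: (M + 1) * (N + 1)].reshape(M + 1, N + 1)
    V = z[(M + 1) * (N + 1):].reshape(M + 1, N + 1)
    return U, V

def residual(z):
    U, V = unpack(z)
    # placeholder: in the actual method these are the 2*(M+1)*(N+1) residuals of Eqs. (51)-(52)
    return np.concatenate([(U - 1.0).ravel(), (V + 2.0).ravel()])

sol = root(residual, np.zeros(2 * (M + 1) * (N + 1)), method='hybr')
U, V = unpack(sol.x)
print(sol.success, U[0, 0], V[0, 0])   # True, 1.0, -2.0 for the placeholder residual
```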

The convergence and error analysis of the shifted Jacobi polynomial has been considered in [50, 51]. The Caputo fractional derivative of the shifted Jacobi polynomials satisfies the following estimate:

$$ \bigl\vert {}^{C}D^{\alpha } P_{L,j}^{(\mu ,\eta )}(x) \bigr\vert \leq \mathcal{C} j^{\alpha +q}, $$

where \(\mathcal{C}\) is a generic positive constant and \(q=\max \{\mu ,\eta ,-1/2\}\). It is proved in [50, Theorem 3] that the expansion coefficients \(a_{i,j}\) in Eq. (43) satisfy the following estimate:

$$ \vert a_{i,j} \vert =\mathcal{O}\bigl(i^{-5/2}j^{-5/2} \bigr)\quad \text{for all } i,j>3. $$

Finally, by [51, Theorem 3], we find that the truncation error of solutions of Eqs. (47) and (48) obtained by shifted Jacobi polynomial satisfies the following estimates:

$$ \Vert u-u_{M,N} \Vert _{2}=\mathcal{O} \bigl(M^{-3/4}N^{-3/4}\bigr),\qquad \Vert v-v_{M,N} \Vert _{2}=\mathcal{O}\bigl(M^{-3/4}N^{-3/4} \bigr). $$

4 Numerical solutions

In this section, we discuss three numerical examples of the time-fractional coupled Burgers’ equations to demonstrate the high accuracy and applicability of the suggested methods. We compute the \(L_{2}\) and \(L_{\infty }\) error norms by the following formulae:

$$ L_{\infty }=\max_{0\leq j\leq N} \bigl\vert U(x_{j},t)-u(x_{j},t) \bigr\vert , \qquad L_{2} =\sqrt{h \sum_{j=0}^{N} \bigl\vert U(x_{j},t)-u(x_{j},t) \bigr\vert ^{2}}, $$

where

$$ h=\max_{0\leq j\leq N} \{ h_{j}: h_{j}=x_{j+1}-x_{j} \} . $$
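For completeness, a direct transcription of these error norms in Python (used here only as a sketch) is:

```python
# A direct transcription (sketch) of the L2 and Linf error norms used in this section.
import numpy as np

def error_norms(U, u, x):
    """U: approximate values, u: exact values, x: grid nodes (1-D arrays of equal length)."""
    err = np.abs(U - u)
    h = np.max(np.diff(x))
    return np.sqrt(h * np.sum(err ** 2)), np.max(err)   # (L2, Linf)

x = np.linspace(0.0, 1.0, 11)
L2, Linf = error_norms(np.sin(x) + 1e-4 * x, np.sin(x), x)   # hypothetical data
print(L2, Linf)
```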

The proposed methods are examined up to \(\tau =1\), where the time step-size of the first algorithm is \(k=1/512\). In all figures, sub-figures (a) and (b) represent the approximate solutions and absolute errors of the SJSCM, respectively, and sub-figures (c) and (d) represent the approximate solutions and absolute errors of the non-polynomial B-spline method, respectively.

Example 1

Consider the system introduced in Eqs. (1) and (2) with initial-boundary conditions:

$$\begin{aligned}& u ( x,0 ) =v ( x,0 ) =0 , \quad a\leq x\leq b \\& u(a,t)=v(a,t)=t^{3} \sin \bigl(e^{-a}\bigr),\qquad u(b,t)=v(b,t)=t^{3} \sin \bigl(e^{-b}\bigr),\quad t>0, \end{aligned}$$

where

$$\begin{aligned}& f(x,t) = \frac{3! t^{3-\alpha _{1}}\sin (e^{-x})}{\varGamma (4-\alpha _{1})}+t^{3} e^{-2x} \sin \bigl(e^{-x} \bigr)-t^{3} e^{-x} \cos \bigl(e^{-x} \bigr), \\& g(x,t) = \frac{3! t^{3-\alpha _{2}}\sin (e^{-x})}{\varGamma (4-\alpha _{2})}+t^{3} e^{-2x} \sin \bigl(e^{-x} \bigr)-t^{3} e^{-x} \cos \bigl(e^{-x} \bigr). \end{aligned}$$

The exact solution of this problem is \(u ( x,t ) =v ( x,t ) =t^{3} \sin ( e^{- x} ) \).
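As a consistency check (a sketch, using SymPy), one can verify symbolically that the forcing term f above matches the stated exact solution: since \(u=v\), the nonlinear terms \(2uu_{x}-(uv)_{x}\) cancel, and the Caputo derivative of \(t^{3}\) is \(\frac{3!}{\varGamma (4-\alpha _{1})}t^{3-\alpha _{1}}\).

```python
# A symbolic check (sketch) that f(x, t) of Example 1 is consistent with u = v = t**3*sin(exp(-x)).
import sympy as sp

x, t, a1 = sp.symbols('x t alpha_1', positive=True)
u = t**3 * sp.sin(sp.exp(-x))
caputo_t = sp.factorial(3) * t**(3 - a1) * sp.sin(sp.exp(-x)) / sp.gamma(4 - a1)
f_stated = (sp.factorial(3) * t**(3 - a1) * sp.sin(sp.exp(-x)) / sp.gamma(4 - a1)
            + t**3 * sp.exp(-2*x) * sp.sin(sp.exp(-x)) - t**3 * sp.exp(-x) * sp.cos(sp.exp(-x)))
residual = caputo_t - sp.diff(u, x, 2) - f_stated    # Eq. (1) with the nonlinear terms cancelled
print(sp.simplify(residual))   # 0
```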

The \(L_{2}\) and \(L_{\infty }\) error norms, the approximate solutions, and the absolute error distributions of Example 1 are given in Tables 1 to 3 and Figs. 1 to 3, respectively, for the suggested methods with \(\alpha _{1}=\alpha _{2}=0.1, 0.4, 0.7\) and different values of N on \([0,1]\times [0,1]\). Tables 4 to 6 present the \(L_{2}\) and \(L_{\infty }\) error norms obtained while varying \(N \in \{6,8,10,12 \}\), \(\alpha _{1}=\alpha _{2}=0.2, 0.4, 0.6\) and \(x \in [0,3]\). Figures 4 to 6 display the approximate solution and absolute error for \(u(x,t)=v(x,t)\) with \(\alpha _{1}=\alpha _{2}=0.2, 0.4, 0.6\), \(N=12\) on \([0,3]\times [0,1]\).

Figure 1

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 1 at \(\alpha _{1}=\alpha _{2}=0.1\), \(N=12\), \(0\leq x \leq 1\)

Figure 2

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 1 at \(\alpha _{1}=\alpha _{2}=0.4\), \(N=12\), \(0\leq x \leq 1\)

Figure 3

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using non-polynomial B-spline method of Example 1 at \(\alpha _{1}=\alpha _{2}=0.7\), \(N=12\), \(0\leq x \leq 1\)

Figure 4

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 1 at \(\alpha _{1}=\alpha _{2}=0.2\), \(N=12\), \(0\leq x \leq 3\)

Figure 5

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 1 at \(\alpha _{1}=\alpha _{2}=0.4\), \(N=12\), \(0\leq x \leq 3\)

Figure 6

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 1 at \(\alpha _{1}=\alpha _{2}=0.6\), \(N=12\), \(0\leq x \leq 3\)

Table 1 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 1 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.1\)
Table 2 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 1 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.4\)
Table 3 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 1 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.7\)
Table 4 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 1 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.2\)
Table 5 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 1 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.4\)
Table 6 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 1 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.6\)

Example 2

Consider Eqs. (1) and (2) with initial-boundary conditions:

$$\begin{aligned}& u ( x,0 ) =v ( x,0 )=0 ,\quad a\leq x\leq b \\& u(a,t)=v(a,t)=\frac{t^{3}}{e^{-a}+2},\qquad u(b,t)= \frac{t^{3}}{e^{-b}+2}, \quad t>0, \end{aligned}$$

where

$$\begin{aligned}& f(x,t) = \frac{3!t^{3-\alpha _{1}}}{ (e^{-x}+2 )\varGamma (4-\alpha _{1})}- \frac{2t^{3} e^{-2x}}{ (e^{-x}+2 )^{3}}+ \frac{t^{3} e^{-x}}{ (e^{-x}+2 )^{2}}, \\& g(x,t) = \frac{3!t^{3-\alpha _{2}}}{ (e^{-x}+2 )\varGamma (4-\alpha _{2})}- \frac{2t^{3} e^{-2x}}{ (e^{-x}+2 )^{3}}+ \frac{t^{3} e^{-x}}{ (e^{-x}+2 )^{2}}. \end{aligned}$$

The exact solutions of this problem are \(u ( x,t ) =v ( x,t ) =\frac{t^{3}}{e^{-x}+2}\).

Example 2 is solved by the presented methods for two sets of parameters: \(\alpha _{1}=\alpha _{2}=0.01, 0.1, 0.5 \) on \([0, 1]\times [0,1]\) and \(\alpha _{1}=\alpha _{2}=0.05, 0.2, 0.4 \) on \([0, 3]\times [0,1]\). Tables 7 to 9 list the \(L_{2}\) and \(L_{\infty }\) error norms, and Figs. 7 to 9 illustrate the approximate solutions and the absolute errors of the suggested methods for the first set of parameters. The results achieved by the proposed methods for the second set of parameters are shown in Tables 10 to 12 and Figs. 10 to 12. It is remarkable that in Example 2 the approximate solutions obtained using the SJSCM and the non-polynomial B-spline method are more accurate than those obtained using the method in [27].

Figure 7

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 2 at \(\alpha _{1}=\alpha _{2}=0.01\), \(N=11\), \(0\leq x \leq 1\)

Figure 8

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 2 at \(\alpha _{1}=\alpha _{2}=0.1\), \(N=11\), \(0\leq x \leq 1\)

Figure 9

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 2 at \(\alpha _{1}=\alpha _{2}=0.5\), \(N=11\), \(0\leq x \leq 1\)

Figure 10

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 2 at \(\alpha _{1}=\alpha _{2}=0.05\), \(N=11\), \(0\leq x \leq 3\)

Figure 11

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 2 at \(\alpha _{1}=\alpha _{2}=0.2\), \(N=11\), \(0\leq x \leq 3\)

Figure 12

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 2 at \(\alpha _{1}=\alpha _{2}=0.4\), \(N=11\), \(0\leq x \leq 3\)

Table 7 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 2 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.01\)
Table 8 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 2 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.1\)
Table 9 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 2 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.5\)
Table 10 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 2 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.05\)
Table 11 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 2 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.2\)
Table 12 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 2 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.4\)

A comparison of the maximum absolute errors \((L_{\infty })\) obtained via the proposed methods with the corresponding results of [27] is displayed in Tables 13 and 14.

Table 13 \(L_{\infty }\) error norm for \(u(x,t)=v(x,t)\) of Example 2 with \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.4\), \(N=5\) and \(\mu =\eta =0\)
Table 14 \(L_{\infty }\) error norm for \(u(x,t)=v(x,t)\) of Example 2 with \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.4\) and \(\mu =\eta =0\)

Example 3

Finally, consider Eqs. (1) and (2) with the initial-boundary conditions:

$$\begin{aligned}& u(x,0) = v(x,0) = 0,\quad a\leq x\leq b, \\& u(a,t)=v(a,t)=t^{6} e^{-a},\qquad u(b,t)=v(b,t)=t^{6} e^{-b},\quad t>0, \end{aligned}$$

where

$$\begin{aligned}& f(x,t) =\frac{6! t^{6-\alpha _{1}}e^{-x}}{\varGamma (7-\alpha _{1})}-t^{6} e^{-x}, \\& g(x,t) =\frac{6! t^{6-\alpha _{2}}e^{-x}}{\varGamma (7-\alpha _{2})}-t^{6} e^{-x}. \end{aligned}$$

The exact solutions of Eqs. (1) and (2) are \(u(x,t)=v(x,t)=t^{6} e^{-x}\).
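The first term in \(f\) and \(g\) is the Liouville–Caputo time-fractional derivative of \(t^{6}\), since \(D_{t}^{\alpha }t^{6}=\frac{6!}{\varGamma (7-\alpha )}t^{6-\alpha }\) for \(0<\alpha <1\). This identity also gives a convenient consistency check for the L1 formula on which the non-polynomial B-spline scheme is based. The following Python sketch (an illustration only, not the implementation used in this paper) compares the standard L1 approximation of the Caputo derivative of \(t^{6}\) with the closed form above:

import numpy as np
from math import gamma

def caputo_L1(y, dt, alpha):
    # standard L1 approximation of the Caputo derivative of order alpha in (0, 1)
    # for samples y[0], ..., y[n] on a uniform grid with step dt, evaluated at the last node
    n = len(y) - 1
    k = np.arange(n)
    w = (n - k)**(1 - alpha) - (n - k - 1)**(1 - alpha)
    return dt**(-alpha) / gamma(2 - alpha) * np.sum(w * np.diff(y))

alpha, T = 0.5, 1.0
t = np.linspace(0.0, T, 2001)
approx = caputo_L1(t**6, t[1] - t[0], alpha)
exact = gamma(7) / gamma(7 - alpha) * T**(6 - alpha)   # 6!/Gamma(7 - alpha) * t^(6 - alpha) at t = T
print(approx, exact)                                   # the two values agree to several digits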

Example 3 is solved by both suggested algorithms on two space intervals. First, for \(x\in [0,3]\) and \(t=1\), Tables 15 to 17 exhibit the \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) at \(\alpha _{1}=\alpha _{2}=0.01, 0.1\), and 0.5 with various choices of N. We can see that the numerical results of the proposed methods achieve greater precision as the number of grid points in both the space and time directions increases. Figures 13 to 15 show the numerical solutions and absolute error distributions for \(u(x,t)=v(x,t)\) at \(\alpha _{1}=\alpha _{2}=0.01, 0.1, 0.5\). Second, for \(x\in [0,1]\) and \(t=1\), the \(L_{2}\) and \(L_{\infty }\) error norms at \(\alpha _{1}=\alpha _{2}=0.2, 0.4\), and 0.6 are reported in Tables 18 to 20, and the corresponding graphical solutions and absolute error distributions are shown in Figs. 16 to 18.

Figure 13

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 3 at \(\alpha _{1}=\alpha _{2}=0.01\), \(0\leq x \leq 3\)

Figure 14

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 3 at \(\alpha _{1}=\alpha _{2}=0.1\), \(0\leq x \leq 3\)

Figure 15

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 3 at \(\alpha _{1}=\alpha _{2}=0.5\), \(0\leq x \leq 3\)

Figure 16

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 3 at \(\alpha _{1}=\alpha _{2}=0.2\), \(0\leq x \leq 1\)

Figure 17

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 3 at \(\alpha _{1}=\alpha _{2}=0.4\), \(0\leq x \leq 1\)

Figure 18

(a) and (b) represent approximate solution and absolute error distribution, respectively, using SJSCM. (c) and (d) represent approximate solution and absolute error distribution, respectively, using the non-polynomial B-spline method of Example 3 at \(\alpha _{1}=\alpha _{2}=0.6\), \(0\leq x \leq 1\)

Table 15 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 3 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.01\)
Table 16 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 3 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.1\)
Table 17 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 3 when \(0\leq x\leq 3\), \(\alpha _{1}=\alpha _{2}=0.5\)
Table 18 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 3 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.2\)
Table 19 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 3 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.4\)
Table 20 \(L_{2}\) and \(L_{\infty }\) error norms for \(u(x,t)=v(x,t)\) of Example 3 when \(0\leq x\leq 1\), \(\alpha _{1}=\alpha _{2}=0.6\)

Note

The computations associated with the experiments discussed above were performed in Wolfram Mathematica 12.2 on a PC running 64-bit Windows with an Intel Core i7 processor (∼2.4 GHz). The execution time of the non-polynomial B-spline algorithm is 14.3906 s, 29.2656 s, and 29.7344 s at N = 5, N = 9, and N = 11, respectively, while for SJSCM the time is 1.1875 s, 24.75 s, and 77.2813 s at the same values of N.

5 Conclusion

In this paper, we solved the coupled TFBEs by two different methods. First, we developed the non-polynomial B-spline method based on the L1 formula to approximate the Liouville–Caputo time-fractional derivative; a stability analysis by the Von Neumann method showed that the scheme is unconditionally stable. Second, we applied the shifted Jacobi spectral collocation method based on the operational matrix of fractional derivatives in the Liouville–Caputo sense with the aid of Jacobi–Gauss quadrature. From the tables and figures introduced in Sect. 4, it is clear that SJSCM is more accurate and stable than the non-polynomial B-spline method for all tested values of \(\alpha _{1}\), \(\alpha _{2}\), and N. We also note that the accuracy of the non-polynomial B-spline method increases as the values of \(\alpha _{1}\) and \(\alpha _{2}\) decrease. The validity of the presented methods was tested by solving three problems, and the elicited results confirm their high precision.
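For readers who wish to experiment with the collocation grid underlying SJSCM, the shifted Jacobi–Gauss nodes can be generated with standard scientific libraries. The short Python sketch below is illustrative only (the interval, the number of nodes, and the Jacobi parameters \(\mu \) and \(\eta \) are placeholders rather than the exact configuration used above); with \(\mu =\eta =0\), as in Tables 13 and 14, the nodes reduce to shifted Legendre–Gauss points:

import numpy as np
from scipy.special import roots_jacobi

def shifted_jacobi_gauss_nodes(N, mu, eta, a, b):
    # Gauss-Jacobi nodes on [-1, 1] for the weight (1 - s)^mu (1 + s)^eta,
    # mapped affinely onto the computational interval [a, b]
    s, _ = roots_jacobi(N, mu, eta)
    return a + (b - a) * (s + 1.0) / 2.0

# e.g. N = 5 collocation points on [0, 3] with mu = eta = 0 (shifted Legendre case)
print(shifted_jacobi_gauss_nodes(5, 0.0, 0.0, 0.0, 3.0))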