1 Introduction

The interest in fractional calculus has increased in recent years because of its applicability in various fields of science and engineering [1–10]. Many physical phenomena in fluid mechanics, viscoelasticity, physics, and other sciences can be modeled mathematically by fractional differential equations (FDEs), which represent such phenomena more faithfully than ordinary differential equations. In most cases it is difficult to find the analytical solution of an FDE because of its complexity, so researchers have resorted to numerical methods with a variety of stability and convergence properties to solve various FDEs [11–25]. The Rayleigh–Stokes problem (RSP) is one of the most extensively studied FDEs of the last few years. This equation plays an important role in describing the behavior of some non-Newtonian fluids, and the fractional derivative in the model captures the viscoelastic behavior of the flow [26, 27]. Since fractional calculus describes the viscoelastic model more accurately than the non-fractional model, we consider the Rayleigh–Stokes problem with a fractional derivative.

In this paper, we consider the two-dimensional (2D) RSP with fractional derivative of the form:

$$ \begin{aligned}[b] \frac{\partial w(x,y,t)}{\partial t}&= {}_{0} \mathrm{D}_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}}+ \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ &\quad {} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}}+f(x,y,t) \end{aligned} $$
(1)

having initial and boundary conditions

$$ \begin{aligned} &w(x,y,t)=g(x,y,t), \quad (x,y) \in \partial \varOmega , \\ &w(x,y,0)=h(x,y), \quad (x,y) \in \varOmega , \end{aligned} $$
(2)

where \(0< \gamma < 1 \), \(\varOmega = \{ (x,y) \mid 0\leq x \leq L, 0\leq y \leq L \} \) and \({}_{0}\mathrm{D}_{t}^{1-\gamma } \) represents the Riemann–Liouville fractional derivative of order \(1-\gamma \), which is defined as

$$ {}_{0}\mathrm{D}_{t}^{1-\gamma }w(x,y,t)= \frac{1}{\varGamma (\gamma )} \frac{\partial }{\partial t} \int _{0}^{t} \frac{w(x,y,\xi )}{(t-\xi )^{1-\gamma }}\,d\xi. $$
(3)

The 2D RSP defined in (1) has been solved by several researchers. Chen et al. [28] solved the RSP using implicit and explicit finite difference methods and used Fourier analysis to establish the convergence and stability of the proposed schemes; the convergence of both schemes was found to be of order \(O(\tau +\Delta x^{2} +\Delta y^{2}) \). Zhuang and Liu [29] solved the same problem by an implicit numerical approximation scheme, whose stability and convergence were established with order of convergence \(O(\tau +\Delta x^{2} +\Delta y^{2}) \). Mohebbi et al. [30] used a higher-order implicit finite difference scheme for (1) and discussed its convergence and stability by Fourier analysis; the convergence order of their scheme was shown to be \(O(\tau +\Delta x^{4} +\Delta y^{4}) \). Nevertheless, higher-order schemes that solve the two-dimensional Rayleigh–Stokes problem with good accuracy and simplicity are still very limited. Motivated by this, we formulate another high-order stable numerical method for the Rayleigh–Stokes problem for a heated generalized second-grade fluid with fractional derivative (1), subject to the initial and boundary conditions (2), with better accuracy than the schemes in [30]. This paper proposes an unconditionally stable compact iterative scheme for solving (1) with order of convergence \(O(\tau + \Delta x^{4}+\Delta y^{4})\). A fourth-order compact Crank–Nicolson difference scheme is applied for the discretization of the space derivatives, and the relationship between the Grunwald–Letnikov and the Riemann–Liouville fractional derivatives is used for the fractional time derivative. The stability and convergence of the proposed method are established by Fourier analysis.

The paper is organized as follows: in Sect. 2, we discuss the formulation of the proposed scheme, followed by its stability analysis in Sect. 3 and its convergence analysis in Sect. 4. Sect. 5 discusses the solvability of the scheme. Finally, numerical examples are presented with discussion, and a conclusion closes the paper.

2 Formulation of proposed method

First, we define the following notations:

$$\begin{aligned}& \delta _{x}^{2}w_{i,j}^{k}=w_{i+1,j}^{k}-2w_{i,j}^{k}+w_{i-1,j}^{k}, \qquad \delta _{y}^{2}w_{i,j}^{k}=w_{i,j+1}^{k}-2w_{i,j}^{k}+w_{i,j-1}^{k}, \\& x_{i}=i\Delta x, \quad i=0,1,2,\ldots,M , \qquad y_{j}=j \Delta y, \quad j=0,1,2,\ldots,M, \\& t_{k}=k\tau ,\quad k=0,1,2,\ldots,N, \end{aligned}$$

where \(\Delta x=\Delta y=h=\frac{L}{M} \) are the space steps and \(\tau =\frac{T}{N}\) is the time step.

The compact fourth-order approximation of the second derivative is defined as [3]

$$ \frac{\delta _{x}^{2}}{\Delta x^{2}(1+\frac{1}{12}\delta _{x}^{2})}w_{i,j}^{k} = \frac{\partial ^{2} w}{\partial x^{2}} \bigg|_{i,j}^{k}-\frac{\Delta x^{4}}{240} \frac{\partial ^{6} w}{\partial x^{6}} \bigg|_{i,j}^{k} +O \bigl(\Delta x^{6} \bigr) $$
(4)

and similarly

$$ \frac{\delta _{y}^{2}}{\Delta y^{2}(1+\frac{1}{12}\delta _{y}^{2})}w_{i,j}^{k}= \frac{\partial ^{2} w}{\partial y^{2}} \bigg|_{i,j}^{k}-\frac{\Delta y^{4}}{240} \frac{\partial ^{6} w}{\partial y^{6}} \bigg|_{i,j}^{k} +O \bigl(\Delta y^{6} \bigr). $$
(5)
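As a quick sanity check of the fourth-order accuracy in (4), one can evaluate the residual of the compact operator on a smooth test function. The sketch below (our own illustration, not part of the scheme; the test function \(u=\sin x\) and the evaluation point are chosen arbitrarily) confirms that the residual shrinks by a factor of \(2^{4}\) when \(h\) is halved.

```python
import math

def compact_residual(h, x=1.0):
    """Residual δ²u/h² − (1 + δ²/12)u'' for u = sin at the point x.
    By (4) this equals −(h⁴/240) u⁽⁶⁾(x) + O(h⁶), i.e. it is fourth order in h."""
    u = math.sin
    d2u = u(x + h) - 2 * u(x) + u(x - h)       # δ² applied to u
    d2upp = -u(x + h) + 2 * u(x) - u(x - h)    # δ² applied to u'' = −sin
    return d2u / h**2 - (-u(x) + d2upp / 12)

r1, r2 = compact_residual(0.1), compact_residual(0.05)
print(r1 / r2)  # ≈ 16: halving h divides the residual by 2⁴
```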

We have the relationship between the Grunwald–Letnikov and the Riemann–Liouville fractional derivative [30]

$$ {}_{0}D_{t}^{1-\gamma }f(t)=\frac{1}{\tau ^{1-\gamma }} \sum_{k=0}^{[ \frac{t}{\tau }]}\omega _{k}^{1-\gamma }f(t-k \tau )+O \bigl(\tau ^{p} \bigr), $$
(6)

where \(\omega _{k}^{1-\gamma } \) are the coefficients of the generating function, that is, \(\omega (z, \gamma )= \sum_{k=0}^{\infty }\omega _{k}^{\gamma }z^{k} \). We take \(\omega (z, \gamma )= (1-z)^{\gamma }\), for which \(p=1\), so the coefficients are \(\omega _{0}^{\gamma }=1\) and

$$ \begin{aligned}[b] \omega _{k}^{\gamma }&=(-1)^{k} \begin{pmatrix} \gamma \\ k \end{pmatrix} =(-1)^{k}\frac{\gamma (\gamma -1)\cdots (\gamma -k+1)}{k!} \\ & = \biggl( 1-\frac{\gamma +1}{k} \biggr)\omega _{k-1}^{\gamma },\quad k\geqslant 1. \end{aligned} $$
(7)
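As a standalone illustration of (6) with \(p=1\) (a sketch only; the test function and step sizes are chosen purely for illustration), the snippet below applies the Grunwald–Letnikov sum to \(f(t)=t\), whose Riemann–Liouville derivative of order \(\alpha \) is \(t^{1-\alpha }/\varGamma (2-\alpha )\), and checks that the error decreases with \(\tau \).

```python
import math

def gl_derivative(f, t, tau, alpha):
    """Grunwald–Letnikov approximation of the order-α Riemann–Liouville
    derivative of f at time t, using the recursive weights of (1 − z)^α."""
    n = int(round(t / tau))
    w, total = 1.0, f(t)                # ω₀ = 1
    for k in range(1, n + 1):
        w *= 1 - (alpha + 1) / k        # ω_k = (1 − (α+1)/k) ω_{k−1}
        total += w * f(t - k * tau)
    return total / tau**alpha

# Test function f(t) = t: its RL derivative of order α is t^(1−α) / Γ(2−α).
alpha = 0.5
exact = 1.0 / math.gamma(2 - alpha)     # exact value at t = 1
err_coarse = abs(gl_derivative(lambda t: t, 1.0, 1e-2, alpha) - exact)
err_fine = abs(gl_derivative(lambda t: t, 1.0, 1e-3, alpha) - exact)
print(err_fine < err_coarse)  # True: the error decreases with τ, consistent with p = 1
```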

Let \(\eta _{l}=\omega _{l}^{1-\gamma } \); then

$$ \eta _{0}=1 \quad \mbox{and}\quad \eta _{l}=(-1)^{l} \begin{pmatrix} 1-\gamma \\ l \end{pmatrix} = \biggl( 1-\frac{2-\gamma }{l} \biggr)\eta _{l-1},\quad l\geqslant 1 . $$

From (6), we can obtain the following:

$$\begin{aligned}& {}_{0}D_{t}^{1-\gamma }\frac{\partial ^{2} w(x,y,t)}{\partial x^{2}}= \tau ^{\gamma -1}\sum_{l=0}^{[\frac{t}{\tau }]}\eta _{l} \frac{\partial ^{2} w(x,y,t-l\tau )}{\partial x^{2}}+O(\tau ), \end{aligned}$$
(8)
$$\begin{aligned}& {}_{0}D_{t}^{1-\gamma }\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}}= \tau ^{\gamma -1}\sum_{l=0}^{[\frac{t}{\tau }]}\eta _{l} \frac{\partial ^{2} w(x,y,t-l \tau )}{\partial y^{2}}+O(\tau ). \end{aligned}$$
(9)

The Crank–Nicolson scheme for (1) is

$$ \begin{aligned}[b] \frac{\partial w(x,y,t)}{\partial t} \bigg|_{i,j}^{k+\frac{1}{2}}&= {}_{0}D_{t}^{1-\gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) \bigg|_{i,j}^{k+\frac{1}{2}} \\ &\quad + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \bigg|_{i,j}^{k+\frac{1}{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \bigg|_{i,j}^{k+\frac{1}{2}}+ f(x,y,t) \big|_{i,j}^{k+\frac{1}{2}}, \end{aligned} $$
(10)

or simply we can write (10) as

$$ \frac{\partial w}{\partial t} \bigg|_{i,j}^{k+\frac{1}{2}}= {}_{0}D_{t}^{1-\gamma } \biggl( \frac{\partial ^{2} }{\partial x^{2}} + \frac{\partial ^{2} }{\partial y^{2}} \biggr) w \bigg|_{i,j}^{k+\frac{1}{2}}+ \frac{\partial ^{2} w}{\partial x^{2}} \bigg|_{i,j}^{k+\frac{1}{2}} + \frac{\partial ^{2} w}{\partial y^{2}} \bigg|_{i,j}^{k+\frac{1}{2}}+ f \big|_{i,j}^{k+\frac{1}{2}}. $$
(11)

By using (4), (5), (8) and (9), we get

$$\begin{aligned}& \begin{aligned} \frac{w_{i,j}^{k+1}-w_{i,j}^{k}}{\tau }&=\frac{\tau ^{\gamma -1}}{2h^{2}} \sum _{l=0}^{k+1}\eta _{l} \biggl( \frac{\delta _{x}^{2}}{(1+\frac{1}{12}\delta _{x}^{2})} + \frac{\delta _{y}^{2}}{(1+\frac{1}{12}\delta _{y}^{2})} \biggr)w_{i,j}^{k-l+1} \\ &\quad {} +\frac{\tau ^{\gamma -1}}{2h^{2}}\sum_{l=0}^{k} \eta _{l} \biggl( \frac{\delta _{x}^{2}}{(1+\frac{1}{12}\delta _{x}^{2})} + \frac{\delta _{y}^{2}}{(1+\frac{1}{12}\delta _{y}^{2})} \biggr)w_{i,j}^{k-l} \\ &\quad {} +\frac{\delta _{x}^{2}}{h^{2}(1+\frac{1}{12}\delta _{x}^{2})}w_{i,j}^{k+ \frac{1}{2}} + \frac{\delta _{y}^{2}}{h^{2}(1+\frac{1}{12}\delta _{y}^{2})} w_{i,j}^{k+ \frac{1}{2}}+f_{i,j}^{k+\frac{1}{2}}, \end{aligned} \\& \begin{aligned} &\biggl(1+\frac{1}{12}\delta _{x}^{2} \biggr) \biggl(1+\frac{1}{12}\delta _{y}^{2} \biggr) \bigl(w_{i,j}^{k+1}-w_{i,j}^{k} \bigr) \\ &\quad = \frac{\tau ^{\gamma }}{2h^{2}} \Biggl( \sum_{l=0}^{k+1} \eta _{l} \biggl(\delta _{y}^{2} \biggl(1+ \frac{1}{12}\delta _{x}^{2} \biggr) +\delta _{x}^{2} \biggl(1+ \frac{1}{12}\delta _{y}^{2} \biggr) \biggr) w_{i,j}^{k+1-l} \Biggr) \\ &\qquad {} + \frac{\tau ^{\gamma }}{2h^{2}} \Biggl(\sum_{l=0}^{k} \eta _{l} \biggl(\delta _{y}^{2} \biggl(1+ \frac{1}{12}\delta _{x}^{2} \biggr) +\delta _{x}^{2} \biggl(1+\frac{1}{12}\delta _{y}^{2} \biggr) \biggr)w_{i,j}^{k-l} \Biggr) \\ &\qquad {} + \frac{\tau }{2h^{2}} \biggl(\delta _{y}^{2} \biggl(1+\frac{1}{12}\delta _{x}^{2} \biggr)+ \delta _{x}^{2} \biggl(1+\frac{1}{12}\delta _{y}^{2} \biggr) \biggr) \bigl(w_{i,j}^{k+1}+w_{i,j}^{k} \bigr) \\ &\qquad {} + \tau \biggl(1+\frac{1}{12}\delta _{x}^{2} \biggr) \biggl(1+\frac{1}{12}\delta _{y}^{2} \biggr) f_{i,j}^{k+\frac{1}{2}}. \end{aligned} \end{aligned}$$

After simplification and rearranging

$$ \begin{aligned}[b] Aw_{i,j}^{k+1} &=B \bigl(w_{i+1,j}^{k+1}+w_{i-1,j}^{k+1}+w_{i,j+1}^{k+1}+w_{i,j-1}^{k+1} \bigr)+C \bigl(w_{i+1,j+1}^{k+1}+w_{i-1,j+1}^{k+1} \\ &\quad {} +w_{i+1,j-1}^{k+1}+w_{i-1,j-1}^{k+1} \bigr)+Dw_{i,j}^{k} +E \bigl(w_{i+1,j}^{k}+w_{i-1,j}^{k}+w_{i,j+1}^{k} +w_{i,j-1}^{k} \bigr) \\ &\quad {} +F \bigl(w_{i+1,j+1}^{k}+w_{i-1,j+1}^{k}+w_{i+1,j-1}^{k}+w_{i-1,j-1}^{k} \bigr) + \frac{25\tau }{36}f_{i,j}^{k+\frac{1}{2}}+ \frac{5\tau }{72} \bigl(f_{i+1,j}^{k+ \frac{1}{2}} \\ &\quad {} +f_{i-1,j}^{k+\frac{1}{2}}+f_{i,j+1}^{k+\frac{1}{2}}+f_{i,j-1}^{k+ \frac{1}{2}} \bigr) +\frac{\tau }{144} \bigl(f_{i+1,j+1}^{k+\frac{1}{2}} +f_{i-1,j+1}^{k+ \frac{1}{2}}+f_{i+1,j-1}^{k+\frac{1}{2}}+f_{i-1,j-1}^{k+\frac{1}{2}} \bigr) \\ &\quad {} + S_{1} \sum_{l=2}^{k+1} \eta _{l} \biggl(\frac{-10}{3}w_{i,j}^{k+1-l}+ \frac{2}{3} \bigl(w_{i+1,j}^{k+1-l}+w_{i-1,j}^{k+1-l}+w_{i,j+1}^{k+1-l}+w_{i,j-1}^{k+1-l} \bigr) \\ &\quad {} +\frac{1}{6} \bigl(w_{i+1,j+1}^{k+1-l}+w_{i-1,j+1}^{k+1-l}+w_{i+1,j-1}^{k+1-l} +w_{i-1,j-1}^{k+1-l} \bigr) \biggr) \\ &\quad {} + S_{1} \sum_{l=1}^{k}\eta _{l} \biggl( \frac{-10}{3}w_{i,j}^{k-l}+ \frac{2}{3} \bigl(w_{i+1,j}^{k-l}+w_{i-1,j}^{k-l}+w_{i,j+1}^{k-l} +w_{i,j-1}^{k-l} \bigr) \\ &\quad {}+\frac{1}{6} \bigl(w_{i+1,j+1}^{k-l}+w_{i-1,j+1}^{k-l} +w_{i+1,j-1}^{k-l}+w_{i-1,j-1}^{k-l} \bigr) \biggr), \end{aligned} $$
(12)

where

$$ \begin{aligned} & S_{1}=\frac{\tau ^{\gamma }}{2h^{2}},\qquad S_{2}= \frac{\tau }{2h^{2}},\qquad H=S_{1}+S_{2}, \\ & A=\frac{5}{36}(5+24H),\qquad B= \frac{1}{144}(-10+96H),\qquad C= \frac{1}{144}(-1+24H), \\ & D=\frac{1}{144} \bigl(100-480(H+S_{1}\eta _{1}) \bigr),\qquad E=\frac{1}{144} \bigl(10+96(H+S_{1} \eta _{1}) \bigr), \\ & F=\frac{1}{144} \bigl(1+24(H+S_{1}\eta _{1}) \bigr). \end{aligned} $$
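A quick consistency check on these coefficients (a sketch of our own, with arbitrarily chosen parameter values): the history-stencil weights \(-\frac{10}{3}+4\cdot \frac{2}{3}+4\cdot \frac{1}{6}\) sum to zero, so with \(f=0\) a constant field must be reproduced exactly, which forces \(A-4B-4C=D+4E+4F=1\).

```python
def coefficients(gamma, tau, h):
    """Coefficients A–F of scheme (12) for given γ, time step τ, and space step h."""
    S1 = tau**gamma / (2 * h**2)
    S2 = tau / (2 * h**2)
    H = S1 + S2
    eta1 = gamma - 1
    A = 5 * (5 + 24 * H) / 36
    B = (-10 + 96 * H) / 144
    C = (-1 + 24 * H) / 144
    D = (100 - 480 * (H + S1 * eta1)) / 144
    E = (10 + 96 * (H + S1 * eta1)) / 144
    F = (1 + 24 * (H + S1 * eta1)) / 144
    return A, B, C, D, E, F

A, B, C, D, E, F = coefficients(0.5, 0.01, 0.1)
print(A - 4 * B - 4 * C)   # ≈ 1.0: a constant field is reproduced exactly
print(D + 4 * E + 4 * F)   # ≈ 1.0
```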

Lemma 1

([31])

The coefficients \(\eta _{l}\) satisfy the following:

$$\begin{aligned} (1)&\quad \eta _{0}=1 ,\qquad \eta _{1}=\gamma -1, \quad \eta _{l}< 0,\quad l=1,2,\ldots, \\ (2)&\quad \sum_{l=0}^{\infty } \eta _{l}=0, \qquad -\sum_{l=1}^{n} \eta _{l}< 1,\quad \forall n\in N. \end{aligned}$$
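The properties in Lemma 1 are easy to verify numerically from the recurrence for \(\eta _{l}\); the sketch below does so for one \(\gamma \) (the value 0.3 and the cutoff 1000 are illustrative only).

```python
def eta_weights(gamma, n):
    """η_l = (−1)^l C(1−γ, l), generated by the recurrence η_l = (1 − (2−γ)/l) η_{l−1}."""
    eta = [1.0]                                  # η₀ = 1
    for l in range(1, n + 1):
        eta.append((1 - (2 - gamma) / l) * eta[-1])
    return eta

eta = eta_weights(0.3, 1000)
print(eta[1])                        # γ − 1 = −0.7
print(all(e < 0 for e in eta[1:]))   # True: η_l < 0 for l ≥ 1
print(0 < sum(eta) < 1)              # True: partial sums stay in (0, 1], tending to 0
```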

3 Stability of the proposed scheme

Let \(w_{i,j}^{k}\) (\(i,j=0,1,2,\ldots,M\), \(k=0,1,2,\ldots,N\)) be the approximate solution and \(W_{i,j}^{k}\) the exact solution of (1); then the error \(\varepsilon _{i,j}^{k}= W_{i,j}^{k}-w_{i,j}^{k} \) also satisfies (12), so

$$ \begin{aligned}[b] A \varepsilon _{i,j}^{k+1} &=B \bigl(\varepsilon _{i+1,j}^{k+1}+ \varepsilon _{i-1,j}^{k+1}+\varepsilon _{i,j+1}^{k+1}+ \varepsilon _{i,j-1}^{k+1} \bigr)+C \bigl( \varepsilon _{i+1,j+1}^{k+1}+\varepsilon _{i-1,j+1}^{k+1} \\ &\quad {} +\varepsilon _{i+1,j-1}^{k+1}+\varepsilon _{i-1,j-1}^{k+1} \bigr)+D \varepsilon _{i,j}^{k} +E \bigl(\varepsilon _{i+1,j}^{k}+\varepsilon _{i-1,j}^{k}+ \varepsilon _{i,j+1}^{k} + \varepsilon _{i,j-1}^{k} \bigr) \\ &\quad {} +F \bigl(\varepsilon _{i+1,j+1}^{k}+\varepsilon _{i-1,j+1}^{k}+ \varepsilon _{i+1,j-1}^{k}+ \varepsilon _{i-1,j-1}^{k} \bigr) \\ &\quad {} + S_{1} \sum_{l=2}^{k+1}\eta _{l} \biggl(\frac{-10}{3}\varepsilon _{i,j}^{k+1-l}+ \frac{2}{3} \bigl(\varepsilon _{i+1,j}^{k+1-l}+ \varepsilon _{i-1,j}^{k+1-l}+ \varepsilon _{i,j+1}^{k+1-l}+ \varepsilon _{i,j-1}^{k+1-l} \bigr) \\ &\quad {} + \frac{1}{6} \bigl( \varepsilon _{i+1,j+1}^{k+1-l}+ \varepsilon _{i-1,j+1}^{k+1-l}+ \varepsilon _{i+1,j-1}^{k+1-l} + \varepsilon _{i-1,j-1}^{k+1-l} \bigr) \biggr) \\ &\quad {}+ S_{1} \sum _{l=1}^{k}\eta _{l} \biggl( \frac{-10}{3}\varepsilon _{i,j}^{k-l} + \frac{2}{3} \bigl(\varepsilon _{i+1,j}^{k-l}+ \varepsilon _{i-1,j}^{k-l}+\varepsilon _{i,j+1}^{k-l} +\varepsilon _{i,j-1}^{k-l} \bigr) \\ &\quad {} + \frac{1}{6} \bigl(\varepsilon _{i+1,j+1}^{k-l}+\varepsilon _{i-1,j+1}^{k-l} +\varepsilon _{i+1,j-1}^{k-l}+ \varepsilon _{i-1,j-1}^{k-l} \bigr) \biggr), \end{aligned} $$
(13)

where \(\eta _{l}=(-1)^{l} \binom{1-\gamma}{l} = ( 1-\frac{2-\gamma }{l} )\eta _{l-1}\), \(l=1,2,\ldots,k+1\).

Assume that the error function vanishes at the initial and boundary values, that is,

$$ \varepsilon _{0,j}^{k}=\varepsilon _{M,j}^{k}=\varepsilon _{i,0}^{k}=\varepsilon _{i,M}^{k}= \varepsilon _{i,j}^{0}=0 $$
(14)

and the error function for the grid function when \(k=0,1,2,\ldots,N-1\) is defined as

$$ \varepsilon ^{k}(x,y)= \textstyle\begin{cases} \varepsilon _{i,j}^{k} & \mbox{when } x_{i}-\frac{\Delta x}{2}< x\leq x_{i}+ \frac{\Delta x}{2}, y_{j}-\frac{\Delta y}{2}< y\leq y_{j}+ \frac{\Delta y}{2}, \\ 0 & \mbox{when } 0 \leq x\leq \frac{\Delta x}{2}, L-\frac{\Delta x}{2} \leq x \leq L, \\ 0 & \mbox{when } 0 \leq y\leq \frac{\Delta y}{2}, L-\frac{\Delta y}{2} \leq y \leq L. \end{cases} $$
(15)

Then we can express \(\varepsilon ^{k}(x,y)\) by a Fourier series as

$$ \varepsilon ^{k}(x,y)=\sum_{l_{1},l_{2}=-\infty }^{\infty } \upsilon ^{k}(l_{1},l_{2}) e^{2\sqrt{-1}\pi (\frac{l_{1}x}{L}+ \frac{l_{2}y}{L})}, $$
(16)

where

$$ \upsilon ^{k}(l_{1},l_{2})= \frac{1}{L} \int _{0}^{L} \int _{0}^{L} \varepsilon ^{k}(x,y) e^{-2\sqrt{-1}\pi (\frac{l_{1}x}{L}+ \frac{l_{2}y}{L})} \,dx \,dy. $$

By the Parseval equality, the discrete \(L_{2}\) norm satisfies

$$ \bigl\Vert \varepsilon ^{k} \bigr\Vert _{l^{2}}^{2}= \sum_{i=1}^{M-1} \sum _{j=1}^{M-1} \Delta x \Delta y \bigl\vert \varepsilon _{i,j}^{k} \bigr\vert ^{2}= \sum _{l_{1},l_{2}=-\infty }^{\infty } \bigl\vert \upsilon ^{k}(l_{1},l_{2}) \bigr\vert ^{2}. $$
(17)

Now suppose that

$$ \varepsilon _{i,j}^{k}= \upsilon ^{k} e^{ I( \beta i \Delta x + \alpha j \Delta y)},\quad I=\sqrt{-1} ,$$
(18)

where \(\beta =\frac{2\pi l_{1}}{L}\), \(\alpha = \frac{2\pi l_{2}}{L}\).

Substituting (18) into (13), and after simplification we get

$$ \begin{aligned}[b] \upsilon ^{k+1}&= \biggl( \frac{D+2E \lambda _{1}+ 4F \lambda _{2} }{A+2 B \lambda _{1} +4 C \lambda _{2} } \biggr) \upsilon ^{k} + S_{1} \sum _{l=2}^{k+1} \eta _{l} \biggl( \frac{\frac{-10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3}\lambda _{2}}{A+2 B \lambda _{1} +4 C\lambda _{2}} \biggr) \upsilon ^{k+1-l} \\ &\quad {} + S_{1} \sum_{l=1}^{k} \eta _{l} \biggl( \frac{-\frac{10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3}\lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \upsilon ^{k-l}, \end{aligned} $$
(19)

where \(\lambda _{1}= \frac{1}{2} (\cos (\beta \Delta x)+ \cos (\alpha \Delta y) )\) and \(\lambda _{2}= \cos (\beta \Delta x) \cos (\alpha \Delta y)\).

Proposition 1

If \(\upsilon ^{k+1}\) (\(k=0,1,2,\ldots, N-1 \)) satisfies (19), then

$$ \bigl\vert \upsilon ^{k+1} \bigr\vert \leq \bigl\vert \upsilon ^{0} \bigr\vert . $$

Proof

By using mathematical induction, when \(k=0\)

$$ \bigl\vert \upsilon ^{1} \bigr\vert = \biggl( \frac{ \vert D+2E \lambda _{1} +4F \lambda _{2} \vert }{|A+2 B \lambda _{1} +4 C\lambda _{2}|} \biggr) \bigl\vert \upsilon ^{0} \bigr\vert . $$

Let \(\phi _{1}=\sin ^{2}{(\frac{\beta \Delta x}{2})} + \sin ^{2}{( \frac{\alpha \Delta y}{2})}\) and \(\phi _{2}=\sin ^{2}{(\frac{\beta \Delta x}{2})} \sin ^{2}{( \frac{\alpha \Delta y}{2})}\), then \(\lambda _{1}= 1-\phi _{1}\) and \(\lambda _{2}= 1-2\phi _{1} +4\phi _{2}\), therefore

$$ \bigl\vert \upsilon ^{1} \bigr\vert = \biggl( \frac{ \vert D+2E (1-\phi _{1}) +4F (1-2\phi _{1} +4\phi _{2}) \vert }{ \vert A+2 B (1-\phi _{1}) +4 C(1-2\phi _{1} +4\phi _{2}) \vert } \biggr) \bigl\vert \upsilon ^{0} \bigr\vert , $$
(20)

where \(\phi _{1} \in (0,2)\) and \(\phi _{2} \in (0,1)\).
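The substitution \(\lambda _{1}=1-\phi _{1}\), \(\lambda _{2}=1-2\phi _{1}+4\phi _{2}\) rests on the half-angle identity \(\cos \theta =1-2\sin ^{2}(\theta /2)\); in particular, it is the average \(\frac{1}{2}(\cos \beta \Delta x + \cos \alpha \Delta y)\) that equals \(1-\phi _{1}\). A minimal numerical check of these identities (the grid of angles is arbitrary):

```python
import math

def identities_hold(a, b, tol=1e-12):
    """Check λ₁ = 1 − φ₁ and λ₂ = 1 − 2φ₁ + 4φ₂ for angles a = βΔx, b = αΔy."""
    phi1 = math.sin(a / 2) ** 2 + math.sin(b / 2) ** 2
    phi2 = math.sin(a / 2) ** 2 * math.sin(b / 2) ** 2
    lam1 = (math.cos(a) + math.cos(b)) / 2      # average of the two cosines
    lam2 = math.cos(a) * math.cos(b)
    return abs(lam1 - (1 - phi1)) < tol and abs(lam2 - (1 - 2 * phi1 + 4 * phi2)) < tol

print(all(identities_hold(i * 0.31, j * 0.47) for i in range(20) for j in range(20)))  # True
```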

After simplifying (20), we have

$$\begin{aligned} \bigl\vert \upsilon ^{1} \bigr\vert &= \biggl\vert \frac{49-28\phi _{1}+16\phi _{2}-24H(16\phi _{1}-(16\phi _{2}-7))-24S_{1}(16\phi _{1}-(16\phi _{2}-7))(1-\gamma )}{49-28\phi _{1}+16\phi _{2}+24H(16\phi _{1}-(16\phi _{2}-7))} \biggr\vert \\ &\quad {} \times \bigl\vert \upsilon ^{0} \bigr\vert . \end{aligned}$$
(21)

Comparing the numerator and denominator in (21) we find

$$ \begin{aligned} &\mbox{numerator}=49-28\phi _{1}+16\phi _{2}-24H \bigl(16\phi _{1}-(16 \phi _{2}-7) \bigr) \\ &\hphantom{\mbox{numerator}={}}{}-24S_{1} \bigl(16\phi _{1}-(16\phi _{2}-7) \bigr) (1-\gamma ), \\ & \mbox{denominator}= 49-28\phi _{1}+16\phi _{2}+24H \bigl(16\phi _{1}-(16\phi _{2}-7) \bigr), \end{aligned} $$

Since \(S_{1}>0\) and \(\phi _{1} \geq \phi _{2}\), the numerator is smaller than the denominator, so

$$ \bigl\vert \upsilon ^{1} \bigr\vert \leq \bigl\vert \upsilon ^{0} \bigr\vert . $$
(22)

Now assume that

$$ \bigl\vert \upsilon ^{s} \bigr\vert \leq \bigl\vert \upsilon ^{0} \bigr\vert , \quad \mbox{for }s=2,3,\ldots,k $$
(23)

and we need to prove it for \(s=k+1\):

$$ \begin{aligned}[b] \upsilon ^{k+1}&= \biggl( \frac{D+2E \lambda _{1}+ 4F \lambda _{2} }{A+2 B \lambda _{1} +4 C \lambda _{2} } \biggr) \upsilon ^{k} + S_{1} \sum _{l=2}^{k+1} \eta _{l} \biggl( \frac{\frac{-10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3}\lambda _{2}}{A+2 B \lambda _{1} +4 C\lambda _{2}} \biggr) \upsilon ^{k+1-l} \\ & \quad {}+ S_{1} \sum_{l=1}^{k} \eta _{l} \biggl( \frac{-\frac{10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3}\lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \upsilon ^{k-l}, \end{aligned} $$
(24)

using (23), we have

$$\begin{aligned}& \begin{aligned} \bigl\vert \upsilon ^{k+1} \bigr\vert &\leq \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \bigl\vert \upsilon ^{0} \bigr\vert \\ &\quad {}+ s_{1} \biggl\vert \frac{\frac{-10}{3}+\frac{16}{3} [1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \\ &\quad {} \times \sum_{l=2}^{k+1} \eta _{l} \bigl\vert \upsilon ^{0} \bigr\vert + s_{1} \biggl\vert \frac{\frac{-10}{3} + \frac{16}{3}[1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \sum _{l=1}^{k} \eta _{l} \bigl\vert \upsilon ^{0} \bigr\vert , \end{aligned} \\& \begin{aligned} \bigl\vert \upsilon ^{k+1} \bigr\vert &\leq \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \bigl\vert \upsilon ^{0} \bigr\vert \\ &\quad {}+ s_{1} \biggl\vert \frac{\frac{-10}{3}+\frac{16}{3} [1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \\ &\quad {} \times \Biggl(\sum_{l=1}^{k+1}\eta _{l} -\eta _{1} +\sum_{l=1}^{k} \eta _{l} \Biggr) \bigl\vert \upsilon ^{0} \bigr\vert , \end{aligned} \\& \begin{aligned} \bigl\vert \upsilon ^{k+1} \bigr\vert &\leq \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \bigl\vert \upsilon ^{0} \bigr\vert \\ &\quad {}+ s_{1} \biggl\vert \frac{\frac{-10}{3}+\frac{16}{3} [1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \\ &\quad {} \times ( -\eta _{1} -2) \bigl\vert \upsilon ^{0} \bigr\vert \quad \because \text{using Lemma~1}, \end{aligned} 
\\& \begin{aligned} \bigl\vert \upsilon ^{k+1} \bigr\vert &\leq \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \bigl\vert \upsilon ^{0} \bigr\vert \\ &\quad {}+ s_{1} \biggl\vert \frac{\frac{-10}{3} +\frac{16}{3} [1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \\ &\quad {} \times ( -\gamma -1) \bigl\vert \upsilon ^{0} \bigr\vert , \end{aligned} \\& \begin{aligned} \bigl\vert \upsilon ^{k+1} \bigr\vert &\leq \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \bigl\vert \upsilon ^{0} \bigr\vert \\ &\quad {} + s_{1} \biggl\vert \frac{\frac{10}{3} (\gamma +1)- \frac{16}{3}(\gamma +1) [1-\phi _{1}] -\frac{8}{3}(\gamma +1) [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \\ &\quad {}\times \bigl\vert \upsilon ^{0} \bigr\vert \quad \because \text{using Lemma~1}, \end{aligned} \\& \bigl\vert \upsilon ^{k+1} \bigr\vert \leq \biggl\vert \frac{g_{0} - 24 H g_{1} - 24 s_{1} g_{1}(1-\gamma )-24(1+\gamma )g_{1}}{g_{0} + 24 H g_{1}} \biggr\vert \bigl\vert \upsilon ^{0} \bigr\vert , \end{aligned}$$
(25)

where \(g_{0}=49 - 28 \phi _{1} + 16 \phi _{2}\), \(g_{1}= 16 \phi _{1} -( 16 \phi _{2}-7)\), \(\gamma \in (0,1)\) and \(s_{1}>0\).

By comparing the numerator and denominator in (25)

$$ \begin{aligned} &\text{numerator} =g_{0} - 24 H g_{1} - 24 s_{1} g_{1}(1- \gamma )-24(1+ \gamma )g_{1}, \\ & \text{denominator} =g_{0} + 24 H g_{1}, \end{aligned} $$

numerator < denominator, therefore

$$ \bigl\vert \upsilon ^{k+1} \bigr\vert \leq \bigl\vert \upsilon ^{0} \bigr\vert . $$

 □

Theorem 1

The C–N high-order compact finite difference scheme (12) is unconditionally stable.

Proof

Using (17) and Proposition 1, we have

$$ \bigl\Vert \varepsilon ^{k+1} \bigr\Vert \leq \bigl\Vert \varepsilon ^{0} \bigr\Vert ,\quad k=0,1,2,3,\ldots,N-1. $$

This shows that the C–N high-order compact finite difference scheme (12) is unconditionally stable. □
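As an informal numerical cross-check of Theorem 1 (our own sketch, not part of the proof), one can evaluate the first-step amplification factor of the scheme directly from the Fourier symbols of the compact operators; the symbol expressions and the parameter grid below are our own choices for illustration.

```python
def first_step_amplification(gamma, tau, h, sx, sy):
    """First-step amplification factor, from the Fourier symbols
    P = (1 − sx/3)(1 − sy/3) of the compact factors and
    L = −4(sx + sy) + (8/3) sx sy of δx² + δy² + (1/6) δx² δy²,
    where sx = sin²(βΔx/2) and sy = sin²(αΔy/2)."""
    P = (1 - sx / 3) * (1 - sy / 3)
    L = -4 * (sx + sy) + (8 / 3) * sx * sy
    S1 = tau**gamma / (2 * h**2)
    S2 = tau / (2 * h**2)
    num = P + (S1 * gamma + S2) * L    # coefficient of the previous time level
    den = P - (S1 + S2) * L            # coefficient of the new time level
    return abs(num / den)

worst = max(
    first_step_amplification(gamma, tau, h, i / 20, j / 20)
    for gamma in (0.1, 0.5, 0.9)
    for tau in (0.1, 0.01)
    for h in (0.1, 0.05)
    for i in range(21)
    for j in range(21)
)
print(worst)  # 1.0: attained only at the constant mode; no Fourier mode is amplified
```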

4 Convergence of the proposed scheme

Let \(R_{i,j}^{k+\frac{1}{2}}\) denote the truncation error at the point \((x_{i},y_{j},t_{k+\frac{1}{2}})\); then, from the proposed scheme, we know

$$ \bigl\vert R_{i,j}^{k+\frac{1}{2}} \bigr\vert \leq O \bigl(\tau + h^{4} \bigr). $$
(26)

Suppose that \(w_{i,j}^{k}\) and \(W_{i,j}^{k}\) (\(i,j=0,1,2,\ldots,M\); \(k=0,1,2,\ldots,N\)) are the approximate and exact solutions of (1), respectively; then the error \(\psi _{i,j}^{k}= W_{i,j}^{k}- w_{i,j}^{k}\) satisfies, by (12),

$$ \begin{aligned}[b] A \psi _{i,j}^{k+1} &=B \bigl(\psi _{i+1,j}^{k+1}+\psi _{i-1,j}^{k+1}+ \psi _{i,j+1}^{k+1}+\psi _{i,j-1}^{k+1} \bigr)+C \bigl(\psi _{i+1,j+1}^{k+1}+ \psi _{i-1,j+1}^{k+1} \\ &\quad {} +\psi _{i+1,j-1}^{k+1}+\psi _{i-1,j-1}^{k+1} \bigr)+D \psi _{i,j}^{k} +E \bigl( \psi _{i+1,j}^{k}+ \psi _{i-1,j}^{k}+\psi _{i,j+1}^{k} + \psi _{i,j-1}^{k} \bigr) \\ &\quad {} +F \bigl(\psi _{i+1,j+1}^{k}+\psi _{i-1,j+1}^{k}+ \psi _{i+1,j-1}^{k}+ \psi _{i-1,j-1}^{k} \bigr) \\ &\quad {} + S_{1} \sum_{l=2}^{k+1} \eta _{l} \biggl( \frac{-10}{3}\psi _{i,j}^{k+1-l}+ \frac{2}{3} \bigl(\psi _{i+1,j}^{k+1-l}+ \psi _{i-1,j}^{k+1-l}+ \psi _{i,j+1}^{k+1-l}+ \psi _{i,j-1}^{k+1-l} \bigr) \\ &\quad {} +\frac{1}{6} \bigl( \psi _{i+1,j+1}^{k+1-l}+ \psi _{i-1,j+1}^{k+1-l}+ \psi _{i+1,j-1}^{k+1-l} + \psi _{i-1,j-1}^{k+1-l} \bigr) \biggr) \\ &\quad {} + S_{1} \sum_{l=1}^{k} \eta _{l} \biggl(\frac{-10}{3}\psi _{i,j}^{k-l} + \frac{2}{3} \bigl( \psi _{i+1,j}^{k-l}+\psi _{i-1,j}^{k-l}+\psi _{i,j+1}^{k-l} +\psi _{i,j-1}^{k-l} \bigr) \\ &\quad {}+\frac{1}{6} \bigl(\psi _{i+1,j+1}^{k-l}+ \psi _{i-1,j+1}^{k-l} +\psi _{i+1,j-1}^{k-l}+ \psi _{i-1,j-1}^{k-l} \bigr) \biggr) + \tau R_{i,j}^{k+\frac{1}{2}} \end{aligned} $$
(27)

with error initial and boundary conditions

$$ \begin{aligned} &\psi _{i,j}^{0}=0,\quad i=1,2, \ldots, M, j=1,2,\ldots, M, \quad \mbox{and} \\ & \psi _{0,j}^{k}=\psi _{M,j}^{k}=\psi _{i,0}^{k}=\psi _{i,M}^{k}=0,\quad k=1,2,\ldots, N-1. \end{aligned} $$
(28)

Also define the grid functions, for \(k=0,1,2,\ldots, N\),

$$ \psi ^{k}(x,y)= \textstyle\begin{cases} \psi _{i,j}^{k} & \mbox{when } x_{i}-\frac{\Delta x}{2}< x\leq x_{i}+ \frac{\Delta x}{2}, y_{j}-\frac{\Delta y}{2}< y\leq y_{j}+ \frac{\Delta y}{2}, \\ 0 & \mbox{when } 0\leq x\leq \frac{\Delta x}{2}, L-\frac{\Delta x}{2}\leq x \leq L, \\ 0 & \mbox{when } 0\leq y\leq \frac{\Delta y}{2}, L-\frac{\Delta y}{2} \leq y \leq L, \end{cases} $$

and

$$ R^{k}(x,y)= \textstyle\begin{cases} R_{i,j}^{k} & \mbox{when } x_{i}-\frac{\Delta x}{2}< x\leq x_{i}+ \frac{\Delta x}{2}, y_{j}-\frac{\Delta y}{2}< y\leq y_{j}+ \frac{\Delta y}{2}, \\ 0 & \mbox{when } 0 \leq x\leq \frac{\Delta x}{2}, L-\frac{\Delta x}{2} \leq x \leq L, \\ 0 & \mbox{when } 0 \leq y\leq \frac{\Delta y}{2}, L-\frac{\Delta y}{2} \leq y \leq L. \end{cases} $$

The grid functions \(\psi ^{k}(x,y)\) and \(R^{k}(x,y)\) can be expanded in Fourier series:

$$\begin{aligned}& \psi ^{k}(x,y)=\sum_{l_{1},l_{2}=-\infty }^{\infty } \xi ^{k}(l_{1},l_{2})e^{2 \sqrt{-1}\pi (\frac{l_{1}x}{L}+\frac{l_{2}y}{L})},\quad k=0,1,2,\ldots,N, \end{aligned}$$
(29)
$$\begin{aligned}& R^{k}(x,y)=\sum_{l_{1},l_{2}=-\infty }^{\infty } \mu ^{k}(l_{1},l_{2})e^{2 \sqrt{-1}\pi (\frac{l_{1}x}{L}+\frac{l_{2}y}{L})},\quad k=0,1,2,\ldots,N, \end{aligned}$$
(30)

where

$$\begin{aligned}& \xi ^{k}(l_{1},l_{2})=\frac{1}{L} \int _{0}^{L} \int _{0}^{L} \psi ^{k}(x,y)e^{-2 \sqrt{-1}\pi (\frac{l_{1}x}{L}+\frac{l_{2}y}{L})} \,dx \,dy, \\& \mu ^{k}(l_{1},l_{2})=\frac{1}{L} \int _{0}^{L} \int _{0}^{L} R^{k}(x,y)e^{-2 \sqrt{-1}\pi (\frac{l_{1}x}{L}+\frac{l_{2}y}{L})} \,dx \,dy. \end{aligned}$$

Now by the Parseval equality

$$\begin{aligned}& \bigl\Vert \psi ^{k} \bigr\Vert _{l^{2}}^{2}= \sum_{i=1}^{M-1} \sum _{j=1}^{M-1} \Delta x \Delta y \bigl\vert \psi _{i,j}^{k} \bigr\vert ^{2}=\sum _{l_{1},l_{2}=-\infty }^{ \infty } \bigl\vert \xi ^{k}(l_{1},l_{2}) \bigr\vert ^{2} , \end{aligned}$$
(31)
$$\begin{aligned}& \bigl\Vert R^{k} \bigr\Vert _{l^{2}}^{2}= \sum_{i=1}^{M-1} \sum _{j=1}^{M-1} \Delta x \Delta y \bigl\vert R_{i,j}^{k} \bigr\vert ^{2}=\sum _{l_{1},l_{2}=-\infty }^{ \infty } \bigl\vert \mu ^{k}(l_{1},l_{2}) \bigr\vert ^{2} . \end{aligned}$$
(32)

Based on the above, suppose that

$$\begin{aligned}& \psi _{i,j}^{k}=\xi ^{k}e^{I(\beta i \Delta x + \alpha j \Delta y)},\quad I=\sqrt{-1}, \end{aligned}$$
(33)
$$\begin{aligned}& R_{i,j}^{k}=\mu ^{k}e^{I(\beta i \Delta x + \alpha j \Delta y)},\quad I= \sqrt{-1}, \end{aligned}$$
(34)

where \(\beta =\frac{2\pi l_{1}}{L}\), \(\alpha =\frac{2\pi l_{2}}{L}\).

Substituting (33) and (34) into (27), we get

$$ \begin{aligned}[b] \xi ^{k+1}&= \biggl( \frac{D+2E\lambda _{1} + 4F \lambda _{2} }{A+2 B \lambda _{1} +4 C \lambda _{2} } \biggr) \xi ^{k} + S_{1} \sum _{l=2}^{k+1} \eta _{l} \biggl( \frac{\frac{-10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3}\lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \xi ^{k+1-l} \\ &\quad {}+ S_{1} \sum_{l=1}^{k} \eta _{l} \biggl( \frac{-\frac{10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3} \lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \xi ^{k-l} + \frac{\tau }{A+2 B \lambda _{1} +4 C \lambda _{2}}\mu ^{k+\frac{1}{2}}. \end{aligned} $$
(35)

Proposition 2

Let \(\xi ^{k}\) (\(k=1,2,\ldots,N \)) satisfy (35); then

$$ \bigl\vert \xi ^{k} \bigr\vert \leq k \tau \bigl\vert \mu ^{\frac{1}{2}} \bigr\vert . $$
(36)

Proof

We know from (28) that \(\psi _{i,j}^{0}=0\), which implies that \(\xi ^{0}=0\).

Using mathematical induction when \(k=0\):

$$ \xi ^{1}= \biggl( \frac{D+2E \lambda _{1} + 4F \lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \xi ^{0} +\frac{\tau }{A+2 B \lambda _{1} +4 C \lambda _{2}} \mu ^{\frac{1}{2}}. $$
(37)

After simplifying (37)

$$ \bigl\vert \xi ^{1} \bigr\vert = \biggl\vert \frac{\tau }{A+2 B (1-\phi _{1}) +4 C (1-2\phi _{1}+4\phi _{2})} \biggr\vert \bigl\vert \mu ^{\frac{1}{2}} \bigr\vert $$
(38)

since \(\phi _{2} \in (0,1)\), \(\phi _{1} \in (0,2)\), and \(A,B,C >0\), we have

$$ \bigl\vert \xi ^{1} \bigr\vert \leq \tau \bigl\vert \mu ^{\frac{1}{2}} \bigr\vert . $$
(39)

Assume that

$$ \bigl\vert \xi ^{s} \bigr\vert \leq s \tau \bigl\vert \mu ^{\frac{1}{2}} \bigr\vert ,\quad s=2,3,\ldots,k, $$
(40)

and we will prove it for \(s=k+1\):

$$\begin{aligned} \begin{aligned} &\begin{aligned} \xi ^{k+1}&= \biggl( \frac{D+2E\lambda _{1} + 4F \lambda _{2} }{A+2 B \lambda _{1} +4 C \lambda _{2} } \biggr) \xi ^{k} + S_{1} \sum _{l=2}^{k+1} \eta _{l} \biggl( \frac{\frac{-10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3}\lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \xi ^{k+1-l} \\ &\quad {}+ S_{1} \sum_{l=1}^{k} \eta _{l} \biggl( \frac{-\frac{10}{3}+\frac{4}{3}\lambda _{1}+ \frac{2}{3} \lambda _{2}}{A+2 B \lambda _{1} +4 C \lambda _{2}} \biggr) \xi ^{k-l} + \frac{\tau }{A+2 B \lambda _{1} +4 C \lambda _{2}}\mu ^{k+\frac{1}{2}}, \end{aligned} \\ &\begin{aligned} \bigl\vert \xi ^{k+1} \bigr\vert &\leq \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \bigl\vert \xi ^{k} \bigr\vert \\ &\quad {} + s_{1} \biggl\vert \frac{\frac{-10}{3}+\frac{16}{3} [1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \sum _{l=2}^{k+1} \eta _{l} \bigl\vert \xi ^{k+1-l} \bigr\vert \\ &\quad {} + s_{1} \biggl\vert \frac{\frac{-10}{3} + \frac{16}{3}[1-\phi _{1}] +\frac{8}{3} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \sum _{l=1}^{k} \eta _{l} \bigl\vert \xi ^{k-l} \bigr\vert \\ &\quad {} + \frac{\tau }{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \bigl\vert \mu ^{k+\frac{1}{2}} \bigr\vert , \end{aligned} \end{aligned} \end{aligned}$$
(41)

using (40)

$$\begin{aligned} \begin{aligned} &\begin{aligned} \bigl\vert \xi ^{k+1} \bigr\vert &\leq \biggl( \biggl\vert \frac{d_{2}+8 e_{2} [1-\phi _{1}] +16f_{2} [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \\ &\quad {} + s_{1} \frac{\frac{10}{3} (\gamma +1)- \frac{16}{3}(\gamma +1) [1-\phi _{1}] -\frac{8}{3}(\gamma +1) [1-2\phi _{1}+4\phi _{2}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr\vert \biggr)\tau k \bigl\vert \mu ^{(k-1)+\frac{1}{2}} \bigr\vert \\ &\quad {} + \frac{\tau }{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \bigl\vert \mu ^{k+\frac{1}{2}} \bigr\vert , \end{aligned} \\ &\begin{aligned} \bigl\vert \xi ^{k+1} \bigr\vert &\leq \biggl\vert \biggl( \frac{(d_{2}+\frac{10}{3} s_{1}(\gamma +1))+ (8e_{2}-\frac{16}{3} s_{1}(\gamma +1)) [1-\phi _{1}]}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \\ &\quad {} + \frac{ (16f_{2} - \frac{8}{3} s_{1}(\gamma +1)) [1-2\phi _{1}+4\phi _{2}]+ c_{k}}{a_{2}+8 b_{2} [1-\phi _{1}] +16 c_{2} [1-2\phi _{1}+4\phi _{2}]} \biggr) \biggr\vert k\tau \bigl\vert \mu ^{k+\frac{1}{2}} \bigr\vert . \end{aligned} \end{aligned} \end{aligned}$$
(42)

After simplifying (42)

$$ \bigl\vert \upsilon ^{k+1} \bigr\vert \leq \biggl\vert \frac{g_{0} - 24 H g_{1} - 24 s_{1} g_{1}(1-\gamma )-24(1+\gamma )g_{1}+c_{k}}{g_{0} + 24 H g_{1}} \biggr\vert k\tau \bigl\vert \mu ^{k+\frac{1}{2}} \bigr\vert , $$
(43)

where \(c_{k}=\frac{1}{k}\).

Comparing the numerator and denominator in (43), we find

$$ \begin{aligned} &\text{numerator} =g_{0} - 24 H g_{1} - 24 s_{1} g_{1}(1- \gamma )-24(1+ \gamma )g_{1}+c_{k}, \\ & \text{denominator} =g_{0} + 24 H g_{1}, \end{aligned} $$

Since \(\mbox{numerator} < \mbox{denominator}\), it follows that

$$ \bigl\vert \upsilon ^{k+1} \bigr\vert \leq k\tau \bigl\vert \mu ^{k+\frac{1}{2}} \bigr\vert . $$

This proves the proposition. □

Theorem 2

The compact high-order Crank–Nicolson scheme is \(l^{2}\) convergent and its order of convergence is \(O(\tau +h^{4})\).

Proof

From Proposition 2, (22), (27) and (28),

$$ \bigl\Vert \psi ^{k+1} \bigr\Vert _{l^{2}}\leq k \tau \bigl\Vert R^{ \frac{1}{2}} \bigr\Vert _{l^{2}} \leq k\tau Q \bigl( \tau +h^{4} \bigr), $$

where Q is the constant of proportionality. Since \(k \tau \leq T \), we obtain

$$ \bigl\Vert \psi ^{k+1} \bigr\Vert _{l^{2}}\leq C_{1} \bigl(\tau +h^{4} \bigr),\quad C_{1}=TQ. $$

 □

5 Solvability of the proposed scheme

The C–N high-order compact finite difference scheme (23) can be written in matrix form:

$$ \begin{aligned} & G_{1} W^{1}=G_{2} W^{0} +G_{3} \psi ^{\frac{1}{2}},\quad k=0, \\ & G_{1} W^{k+1}=G_{2} W^{k} +G_{3} \psi ^{k+\frac{1}{2}} +S_{1} \sum _{l=2}^{k+1} \eta _{l} G_{4} W^{k+1-l } + S_{1} \sum _{l=1}^{k}\eta _{l} G_{4} W^{k-l },\quad k \geq 1, \\ & W_{i,j}^{0}=a_{0}(x_{i},y_{j}),\quad 1\leq i \leq M, 1\leq j \leq M, \\ & W_{0,j}^{k}=b_{1}(0,y_{j}), \quad 1 \leq j \leq M, 0\leq k \leq N, \\ & W_{L,j}^{k}=b_{2}(L,y_{j}), \quad 1 \leq j \leq M, 0\leq k \leq N, \\ & W_{i,0}^{k}=b_{3}(x_{i},0),\quad 1 \leq i \leq M, 0\leq k \leq N, \\ & W_{i,L}^{k}=b_{4}(x_{i},L),\quad 1 \leq i \leq M, 0\leq k \leq N, \end{aligned} $$
(44)

where \(\psi ^{k}=[\psi _{0}^{k},\psi _{1}^{k},\psi _{2}^{k},\ldots,\psi _{n}^{k}]^{T}\), \(\psi ^{k+\frac{1}{2}}=f(x_{i},y_{j},t_{k+\frac{1}{2}})\), and \(G_{1}\), \(G_{2}\), \(G_{3}\), \(G_{4}\) are of the form

$$\begin{aligned}& G_{1}= \begin{bmatrix} A &-B & & \cdots & -B &-C & \cdots & & 0 \\ -B & A &-B & & -C & -B & -C & &\vdots \\ &-B & A & -B & \cdots & -C & -B & -C & \\ \vdots &\cdots & -B & A&-B & & -C & -B &-C \\ & & & -B & A & -B & & -C &-B \\ \vdots &\cdots & & & -B & A & -B & & \\ & & & & & -B & A & -B & \\ & & & & & & -B & A &-B \\ 0& & \cdots & &\cdots & & & -B & A \end{bmatrix} , \\& G_{2}= \begin{bmatrix} D &E & & \cdots & E &F & \cdots & & 0 \\ E & D &E & & F & E& F & &\vdots \\ &E & D& E & \cdots & F & E & F & \\ \vdots &\cdots & E & D &E & & F & E &F \\ & & & E & D & E & & F &E \\ \vdots &\cdots & & & E & D & E & & \\ & & & & & E & D & E & \\ & & & & & & E & D &E \\ 0& & \cdots & &\cdots & & & E & D \end{bmatrix} , \\& G_{3}= \begin{bmatrix} r_{1} &r_{2} & & \cdots & r_{2} &r_{3} & \cdots & & 0 \\ r_{2} & r_{1} &r_{2} & & r_{3} & r_{2}& r_{3} & &\vdots \\ &r_{2} & r_{1}& r_{2} & \cdots & r_{3} & r_{2} & r_{3} & \\ \vdots &\cdots & r_{2} & r_{1} &r_{2} & & r_{3} & r_{2} &r_{3} \\ & & & r_{2} & r_{1} & r_{2} & & r_{3}&r_{2} \\ \vdots &\cdots & & & r_{2} & r_{1} & r_{2} & & \\ & & & & & r_{2} & r_{1} & r_{2} & \\ & & & & & & r_{2} & r_{1} &r_{2} \\ 0& & \cdots & &\cdots & & & r_{2} & r_{1} \end{bmatrix} , \\& G_{4}= \begin{bmatrix} \frac{-10}{3} &\frac{2}{3} & & \cdots & \frac{2}{3} &\frac{1}{6} & \cdots & & 0 \\ \frac{2}{3} & \frac{-10}{3} &\frac{2}{3} & & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} & &\vdots \\ &\frac{2}{3} & \frac{-10}{3} & \frac{2}{3} & \cdots & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} & \\ \vdots &\cdots & \frac{2}{3} & \frac{-10}{3} &\frac{2}{3} & & \frac{1}{6} & \frac{2}{3} &\frac{1}{6} \\ & & & \frac{2}{3} & \frac{-10}{3} & \frac{2}{3} & & \frac{1}{6} &\frac{2}{3} \\ \vdots &\cdots & & & \frac{2}{3} &\frac{-10}{3} & \frac{2}{3} & & \\ & & & & & \frac{2}{3} &\frac{-10}{3} & \frac{2}{3} & \\ & & & & & & \frac{2}{3}& \frac{-10}{3} &\frac{2}{3} \\ 0& & \cdots & &\cdots & & & \frac{2}{3} & \frac{-10}{3} \end{bmatrix} , \end{aligned}$$

where \(r_{1}=\frac{35\tau }{36}\), \(r_{2}=\frac{5\tau }{72}\) and \(r_{3}=\frac{\tau }{144}\).
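The matrix \(G_{4}\) above is the nine-point compact stencil (centre \(-\frac{10}{3}\), edge \(\frac{2}{3}\), corner \(\frac{1}{6}\)) applied over the interior grid; \(G_{1}\), \(G_{2}\) and \(G_{3}\) share the same sparsity pattern with the coefficient triples \((A,-B,-C)\), \((D,E,F)\) and \((r_{1},r_{2},r_{3})\). As an illustrative sketch (not the authors' Mathematica code), \(G_{4}\) can be assembled from one-dimensional second-difference matrices via Kronecker products; the number of interior nodes per direction `M` is an assumed parameter:

```python
import numpy as np

def compact_laplacian(M):
    """Assemble the nine-point compact stencil matrix G4
    (centre -10/3, edge 2/3, corner 1/6) on an M x M grid
    of interior nodes, using Kronecker products."""
    # 1D second-difference matrix tridiag(1, -2, 1)
    T = (np.diag(-2.0 * np.ones(M))
         + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1))
    I = np.eye(M)
    # five-point part plus (1/6) of the cross term yields the compact stencil:
    # centre -4 + 4/6 = -10/3, edge 1 - 2/6 = 2/3, corner 1/6
    return np.kron(T, I) + np.kron(I, T) + np.kron(T, T) / 6.0

G4 = compact_laplacian(3)
# the centre node (row 4 of the 9x9 matrix) carries the full stencil,
# and its row sum vanishes: the stencil annihilates constants
```

The decomposition \(G_{4}=T\otimes I+I\otimes T+\frac{1}{6}\,T\otimes T\) also makes the eigenvalues of all four matrices available in closed form, which is what the stability analysis exploits.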

Theorem 3

The difference scheme (44) is uniquely solvable.

Proof

Since \(A=\frac{5}{36}(5+24H)\), \(B=\frac{1}{144}(-10+96H)\) and \(C=\frac{1}{144}(-1+24H)\), we have

$$ \vert A \vert =\frac{25}{36}+\frac{10 H}{3} $$

and

$$ 3 \vert -B \vert +2 \vert -C \vert \leq \frac{2}{9} + \frac{7 H}{3} < \frac{25}{36}+ \frac{10 H}{3} = \vert A \vert . $$

Thus \(|A|> 3|-B|+2|-C|\), so the matrix \(G_{1}\) is strictly diagonally dominant and hence nonsingular. Therefore the system (44) has a unique solution, which proves the result. □
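The diagonal-dominance inequality can also be sanity-checked numerically. The sketch below (an illustration, assuming only \(H>0\), which holds for the scheme's parameters) evaluates the margin \(|A|-(3|{-}B|+2|{-}C|)\) over a range of H:

```python
def dominance_margin(H):
    """Return |A| - (3|B| + 2|C|) for the coefficients of G1.
    Strict positivity means every row of G1 is strictly
    diagonally dominant, so G1 is nonsingular."""
    A = (5.0 / 36.0) * (5.0 + 24.0 * H)
    B = (-10.0 + 96.0 * H) / 144.0
    C = (-1.0 + 24.0 * H) / 144.0
    return abs(A) - (3.0 * abs(B) + 2.0 * abs(C))

# the margin stays positive across many orders of magnitude of H > 0,
# mirroring the bound 2/9 + 7H/3 < 25/36 + 10H/3 used in the proof
margins = [dominance_margin(H) for H in (1e-6, 0.01, 0.1, 1.0, 10.0, 1e4)]
```

Note that the proof's bound \(3|{-}B|+2|{-}C|\leq \frac{2}{9}+\frac{7H}{3}\) uses \(|{-}10+96H|\leq 10+96H\), so the check above covers both signs of \(B\) and \(C\).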

6 Numerical results

In this section, we demonstrate the effectiveness of the proposed method by performing three numerical experiments on the two-dimensional fractional Rayleigh–Stokes problem, using Mathematica on a Core i7 Duo 3.40 GHz machine with 4 GB RAM running Windows 7. Successive over-relaxation (SOR) was used as the acceleration technique, with a tolerance \(\zeta =10^{-5}\) on the maximum error \((L_{\infty })\) as the convergence criterion. The orders of convergence in space and time are calculated by means of the \(C_{2}\)-order and \(C_{1}\)-order of convergence, respectively [31],

$$\begin{aligned}& \mbox{$C_{2}$-order} = \log _{2} \biggl( \frac{ \Vert L_{\infty }(16\tau ,2h) \Vert }{ \Vert L_{\infty }(\tau ,h) \Vert } \biggr), \end{aligned}$$
(45)
$$\begin{aligned}& \mbox{$C_{1}$-order} = \log _{2} \biggl( \frac{ \Vert L_{\infty }(2\tau ,h) \Vert }{ \Vert L_{\infty }(\tau ,h) \Vert } \biggr), \end{aligned}$$
(46)

where h is the space step, τ is the time step, and \(L_{\infty }\) is the maximum error \(\| e \| _{l_{\infty }}=\max_{1\leq i,j \leq N-1}|W_{i,j}^{k}-w_{i,j}^{k}|\).
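Formulas (45)–(46) reduce to a base-2 logarithm of an error ratio. A minimal sketch (with synthetic, assumed error values rather than data from the tables) shows how the observed orders are computed:

```python
from math import log2

def c1_order(err_2tau, err_tau):
    """C1 (temporal) order: halve tau at fixed h, take log2 of the ratio."""
    return log2(err_2tau / err_tau)

def c2_order(err_coarse, err_fine):
    """C2 (spatial) order for an O(tau + h^4) scheme: halving h while
    cutting tau by 16 keeps both error terms balanced, so the ratio
    of errors should approach 2^4 = 16, i.e. order 4."""
    return log2(err_coarse / err_fine)

# synthetic errors of a first-order-in-time method, e(tau) ~ C * tau
errs = [0.08, 0.04, 0.02]   # at tau, tau/2, tau/4
orders = [c1_order(errs[i], errs[i + 1]) for i in range(2)]
# both observed C1-orders are ~1, matching the O(tau) temporal accuracy
```

In the tables, the same ratios computed from the measured \(L_{\infty }\) errors approach 1 in time and 4 in space, in line with Theorem 2.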

The following test problems were used for the numerical experiments.

Test Problem 1

([30])

$$ \begin{aligned} \frac{\partial w(x,y,t)}{\partial t}&= {}_{0}D_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ &\quad {} +\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} +\exp {(x+y)} \biggl((1+ \gamma )t^{\gamma }-2 \frac{\varGamma (2+\gamma )}{\varGamma (1 + 2\gamma )} t^{2 \gamma } -2 t^{1+\gamma } \biggr), \end{aligned} $$

where \(0< x, y<1\), and having initial and boundary conditions

$$ \begin{aligned} &w(x,y,0)=0, \\ & w(0,y,t)=\exp {(y)}t^{1+\gamma },\qquad w(x,0,t)=\exp {(x)}t^{1+ \gamma }, \\ & w(1,y,t)= \exp {(1+y)} t^{1+\gamma },\qquad w(x,1,t)=\exp {(1+x)}t^{1+ \gamma }, \end{aligned} $$

with exact solution

$$ w(x,y,t)=\exp {(x+y)} t^{1+\gamma }. $$
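The source term of Test Problem 1 can be cross-checked against the exact solution with the Riemann–Liouville power rule \({}_{0}D_{t}^{1-\gamma } t^{\mu }= \frac{\varGamma (\mu +1)}{\varGamma (\mu +\gamma )} t^{\mu +\gamma -1}\). The sketch below (an illustration, not part of the original computation) assembles \(f=w_{t}-{}_{0}D_{t}^{1-\gamma }\Delta w-\Delta w\) and compares it with the stated forcing term:

```python
from math import gamma, exp

def f_stated(x, y, t, g):
    """Forcing term of Test Problem 1 as given in the text (g = gamma)."""
    return exp(x + y) * ((1 + g) * t**g
                         - 2 * gamma(2 + g) / gamma(1 + 2 * g) * t**(2 * g)
                         - 2 * t**(1 + g))

def f_from_exact(x, y, t, g):
    """f = w_t - D_t^{1-g}(Lap w) - Lap w for w = exp(x+y) t^(1+g),
    using the RL power rule D_t^{1-g} t^(1+g) = Gamma(2+g)/Gamma(1+2g) t^(2g)."""
    w_t = (1 + g) * exp(x + y) * t**g
    lap = 2 * exp(x + y) * t**(1 + g)                    # w_xx + w_yy
    frac_lap = 2 * exp(x + y) * gamma(2 + g) / gamma(1 + 2 * g) * t**(2 * g)
    return w_t - frac_lap - lap

# the two expressions agree at an (assumed) sample point
assert abs(f_stated(0.3, 0.7, 0.5, 0.25) - f_from_exact(0.3, 0.7, 0.5, 0.25)) < 1e-10
```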

Test Problem 2

([30])

$$ \begin{aligned} \frac{\partial w(x,y,t)}{\partial t}&= {}_{0}D_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ &\quad {} +\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} + \exp { \biggl(- \frac{(x-0.5)^{2}}{\nu }- \frac{(y-0.5)^{2}}{\nu } \biggr)} (1+\gamma )t^{ \gamma } \\ & \quad {}+ \biggl( \frac{(\varGamma (2+\gamma ))}{\varGamma (1+2\gamma )}t^{2 \gamma }+ t^{1+\gamma } \biggr) \biggl( \frac{4}{\nu }- \frac{4(x-0.5)^{2}}{{\nu }^{2}}-\frac{4(y-0.5)^{2}}{{\nu }^{2}} \biggr), \end{aligned} $$

where \(0< x, y<1\), and having initial and boundary conditions

$$ \begin{aligned} &w(x,y,0)=0, \\ & w(0,y,t)= \exp { \biggl( - \biggl(\frac{0.25}{\nu }+\frac{(y-0.5)^{2}}{\nu } \biggr) \biggr)} t^{1+\gamma }, \\ & w(x,0,t)= \exp { \biggl( - \biggl(\frac{(x-0.5)^{2}}{\nu }+\frac{0.25}{\nu } \biggr) \biggr)} t^{1+\gamma }, \\ & w(1,y,t)= \exp { \biggl( - \biggl(\frac{0.25}{\nu }+\frac{(y-0.5)^{2}}{\nu } \biggr) \biggr)} t^{1+\gamma }, \\ & w(x,1,t)= \exp { \biggl( - \biggl(\frac{(x-0.5)^{2}}{\nu } + \frac{0.25}{\nu } \biggr) \biggr)} t^{1+\gamma }, \end{aligned} $$

with exact solution

$$ w(x,y,t)= \exp { \biggl( - \biggl(\frac{(x-0.5)^{2}}{\nu }+ \frac{(y-0.5)^{2}}{\nu } \biggr) \biggr)} t^{1+\gamma }. $$

Test Problem 3

([28])

$$ \begin{aligned} \frac{\partial w(x,y,t)}{\partial t}&={}_{0}D_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ &\quad {} +\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} + 2\exp {({x+y})} \biggl(t-t^{2}-2 \frac{t^{1+\gamma }}{\varGamma (2+ \gamma )} \biggr), \end{aligned} $$

where \(0< x, y<1\), and having initial and boundary conditions

$$ \begin{aligned} &w(x,y,0)=0, \\ & w(0,y,t)= \exp {(y)} t^{2},\qquad w(x,0,t)=\exp {(x)}t^{2}, \\ & w(1,y,t)= \exp {(1+y)} t^{2},\qquad w(x,1,t)=\exp {(1+x)}t^{2}, \end{aligned} $$

with exact solution

$$ w(x,y,t)=\exp {(x+y)} t^{2}. $$
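The same cross-check applies to Test Problem 3, where the power rule gives \({}_{0}D_{t}^{1-\gamma } t^{2}= \frac{2}{\varGamma (2+\gamma )}t^{1+\gamma }\) (again an illustrative sketch, not the authors' code):

```python
from math import gamma, exp

def f_stated_tp3(x, y, t, g):
    """Forcing term of Test Problem 3 as given in the text (g = gamma)."""
    return 2 * exp(x + y) * (t - t**2 - 2 * t**(1 + g) / gamma(2 + g))

def f_from_exact_tp3(x, y, t, g):
    """f = w_t - D_t^{1-g}(Lap w) - Lap w for w = exp(x+y) t^2,
    using D_t^{1-g} t^2 = Gamma(3)/Gamma(2+g) t^(1+g) = 2 t^(1+g)/Gamma(2+g)."""
    w_t = 2 * exp(x + y) * t
    lap = 2 * exp(x + y) * t**2                          # w_xx + w_yy
    frac_lap = 2 * exp(x + y) * 2 * t**(1 + g) / gamma(2 + g)
    return w_t - frac_lap - lap

# agreement at an (assumed) sample point
assert abs(f_stated_tp3(0.4, 0.6, 0.5, 0.5) - f_from_exact_tp3(0.4, 0.6, 0.5, 0.5)) < 1e-10
```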

We solved Test problems 1, 2 and 3 using the proposed method for different values of γ, τ and h, with \(L=T=1\). Tables 1 and 2 show the \(C_{1}\)-order of convergence for Test problems 1 and 3, respectively; similarly, Tables 3 and 4 show the \(C_{2}\)-order of convergence for Test problems 1 and 3, where Max error denotes the \(L_{\infty }\) error. From Tables 1–4 it can be observed that the computational order of convergence agrees with the theoretical order, i.e. \(O(\tau + h^{4})\). Tables 5, 6 and 7 compare the implicit compact method, the radial basis function meshless method (RMM) and the proposed method for (\(\tau = \frac{1}{50}\), \(\gamma =0.25\), \(\nu =0.1 \)), (\(h=\frac{1}{20}\), \(\tau = \frac{1}{50}\), \(\nu =\frac{1}{30}\)) and (\(\gamma =0.25\), \(\nu = \frac{1}{30}\)), respectively; the proposed method gives better results than both the implicit compact method and the RMM, as can also be seen in Fig. 1 and Fig. 2. Similarly, Tables 8–11 report the number of iterations, execution times and errors (maximum and average) for Test problems 1 and 2.

Figure 1
figure 1

The convergence of the proposed method compared with the implicit method and RMM for Test problem 2

Figure 2
figure 2

The convergence of the proposed method compared with the implicit method and RMM for Test problem 1

Table 1 \(C_{1}\)-order of convergence for Test problem 1, when \(h=\frac{1}{8}\)
Table 2 \(C_{1}\)-order of convergence for Test problem 3, when \(h=\frac{1}{8}\)
Table 3 \(C_{2}\)-order of convergence for Test problem 1
Table 4 \(C_{2}\)-order of convergence for Test problem 3
Table 5 Comparison between proposed method when \(\tau =\frac{1}{50}\), \(\gamma =0.25\) with implicit compact and radial basis functions method for Test problem 2
Table 6 Comparison between proposed method when \(h=\frac{1}{20}\), \(\tau =\frac{1}{50}\) with implicit compact and radial basis functions method for Test problem 2
Table 7 Comparison between Proposed method when \(h=\frac{1}{16}\), \(\gamma =0.25\) with Implicit compact and radial basis functions method for Test problem 2
Table 8 The number of iterations, execution times (in sec) and errors of C–N high-order finite difference method for Test problem 1, when \(\gamma =0.1\)
Table 9 The number of iterations, execution times (in sec) and errors of C–N high-order finite difference method for Test problem 1, when \(\gamma =0.75\)
Table 10 The number of iterations, execution times (in sec) and errors of C–N high-order finite difference method for Test problem 2, when \(\gamma =0.75\) and \(\nu =0.5\)
Table 11 The number of iterations, execution times (in sec) and errors of C–N high-order finite difference method for Test problem 2, when \(\gamma =0.5\) and \(\nu =0.25\)

Figures 3–5 show the 3-D graphs of the \(L_{\infty } \) error, the exact solution and the approximate solution for Test problem 1, respectively, when \(h=\tau =\frac{1}{30}\), \(\gamma =0.5 \); similarly, Figs. 6–8 show the 3-D graphs of the \(L_{\infty } \) error, the exact solution and the approximate solution for Test problem 2, respectively, when \(h=\tau =\frac{1}{25} \), \(\gamma =0.5\), \(\nu =0.1 \). Comparing Fig. 4 and Fig. 7 (the exact solutions) with Fig. 5 and Fig. 8 (the approximate solutions), it can be seen that the approximate solutions agree closely with the exact ones.

Figure 3
figure 3

Absolute maximum error for Test problem 1 when \(h=\frac{1}{30}\), \(\tau=\frac{1}{30}\)

Figure 4
figure 4

Exact solution for the Test problem 1 when h=\(\frac{1}{30}\), \(\tau=\frac{1}{30}\)

Figure 5
figure 5

Approximate solution for the Test problem 1 when h=\(\frac{1}{30}\), \(\tau=\frac{1}{30}\)

Figure 6
figure 6

Absolute maximum error for Test problem 2 when \(h=\frac{1}{25}\), \(\tau=\frac{1}{25}\)

Figure 7
figure 7

Exact solution for the Test problem 2 when h=\(\frac{1}{25}\), \(\tau=\frac{1}{25}\)

Figure 8
figure 8

Approximate solution for the Test problem 2 when h=\(\frac{1}{25}\), \(\tau=\frac{1}{25}\)

7 Conclusion

In this article, we solved the two-dimensional fractional RSP using a compact high-order finite difference method. The Crank–Nicolson compact method was shown to be more accurate than both the implicit compact method and the radial basis function meshless method. The proposed method is unconditionally stable and convergent, and the computed \(C_{1}\)- and \(C_{2}\)-orders of convergence confirm that the experimental orders agree with the theoretical ones in time and space, respectively, i.e. \(O(\tau + h^{4})\).