1 Introduction

Fractional calculus has gained attention because of its applications in engineering, physics, and chemistry [1–5]. Fractional differential equations can represent more complex models, but in most cases it is difficult to solve them analytically. Therefore, researchers have turned to numerical methods, e.g., the finite element method, the spectral method, and the finite difference method, to solve these fractional differential equations [6–22]. The finite difference method is relatively simple and easy to implement, which is why it appears most frequently in the literature on the solution of fractional differential equations.

In this paper, we consider the two-dimensional (2D) Rayleigh–Stokes problem for a heated generalized second-grade fluid with a fractional derivative and a nonhomogeneous term of the form:

$$\begin{aligned} \frac{\partial w(x,y,t)}{\partial t}={}& {}_{0} \mathrm{D}_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}}+ \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) \\ &{}+\frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}}+f(x,y,t) \end{aligned}$$
(1)

with initial and boundary conditions

$$\begin{aligned} \begin{aligned} &w(x,y,t)=g(x,y,t), \quad (x,y) \in \partial \Omega, \\ &w(x,y,0)=h(x,y), \quad (x,y) \in \Omega, \end{aligned} \end{aligned}$$
(2)

where \(0< \gamma < 1 \), \(t\in [0,T]\), and \(\Omega = \{ (x,y) | 0\leq x \leq L, 0\leq y \leq L \} \) with boundary \(\partial \Omega \).

The Rayleigh–Stokes problem has gained attention in recent years. It plays a vital role in describing the dynamic behavior of some non-Newtonian fluids, and the fractional derivative in this model captures the viscoelastic behavior of the flow [23, 24].

Several numerical methods have been presented in the literature for the solution of the fractional Rayleigh–Stokes problem. For example, Chen et al. [25] solved the problem using explicit and implicit finite difference methods and established their stability and convergence by Fourier analysis; the convergence order of both schemes is \(O(\tau +\Delta x^{2} +\Delta y^{2})\). Ramy et al. [26] solved the Rayleigh–Stokes problem using a Jacobi spectral Galerkin method; their method is efficient, generalizes easily to multiple dimensions, and offers reasonable accuracy with relatively few degrees of freedom. Mohebbi et al. [27] used a higher-order implicit finite difference scheme for the two-dimensional Rayleigh–Stokes problem and discussed its convergence and stability by Fourier analysis; the convergence order of their scheme is \(O(\tau +\Delta x^{4} +\Delta y^{4}) \).

High-order schemes produce more accurate results, but their algorithms are more complex and therefore more expensive to run. Since explicit group methods reduce algorithmic complexity [28–31], we propose an explicit group method for the solution of the two-dimensional Rayleigh–Stokes problem for a heated generalized second-grade fluid. The main purpose of this article is to solve the two-dimensional Rayleigh–Stokes problem with a high-order explicit group method (HEGM).

The paper is arranged as follows: in Sect. 2, we give the formulation of the high-order compact explicit group scheme; its stability is discussed in Sect. 3; the convergence of the proposed scheme is discussed in Sect. 4; in Sect. 5, some numerical examples are presented and discussed; finally, the conclusion is presented in Sect. 6.

2 The group explicit scheme

First, let us define the following notations:

$$\begin{aligned} &\delta _{x}^{2} w_{i,j}^{k}=w_{i+1,j}^{k}- 2 w_{i,j}^{k} + w_{i-1,j}^{k},\qquad \delta _{y}^{2} w_{i,j}^{k}=w_{i,j+1}^{k}- 2 w_{i,j}^{k} + w_{i,j-1}^{k}, \\ & w_{i,j}^{k + \frac{1}{2} }= \frac{w_{i,j}^{k+1} + w_{i,j}^{k} }{2},\qquad x_{i}=i \Delta x, y_{j}= j \Delta y,\quad \{ i,j=0,1,2,3,\dots, M\}, \\ & t_{k}= k \tau, \quad \{ k=0,1,2,3,\dots, N \}, \end{aligned}$$

where \(\Delta x = \Delta y= h = \frac{L}{M}\) represents the space step and \(\tau =\Delta t= \frac{T}{N}\) represents the time step. The operators \(\delta _{x}^{2}\) and \(\delta _{y}^{2}\), which are built on the three-point stencil [32], satisfy

$$\begin{aligned} \frac{\delta _{x}^{2}}{h^{2}(1+\frac{1}{12}\delta _{x}^{2})} w_{i,j}^{k}= \frac{\partial ^{2}w}{\partial x^{2}} \bigg|_{i,j}^{k}- \frac{h^{4}}{240} \frac{\partial ^{6 }w}{\partial x^{6}}\bigg|_{i,j}^{k} + O \bigl(h^{6} \bigr) \end{aligned}$$
(3)

and

$$\begin{aligned} \begin{aligned} \frac{\delta _{y}^{2}}{h^{2}(1+\frac{1}{12}\delta _{y}^{2})} w_{i,j}^{k}= \frac{\partial ^{2}w}{\partial y^{2}} \bigg|_{i,j}^{k}- \frac{h^{4}}{240} \frac{\partial ^{6 }w}{\partial y^{6}} \bigg|_{i,j}^{k} + O \bigl(h^{6} \bigr). \end{aligned} \end{aligned}$$
(4)
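Relations (3) and (4) can be sanity-checked numerically by rearranging (3) as \(\delta _{x}^{2} w = h^{2}(1+\frac{1}{12}\delta _{x}^{2})\frac{\partial ^{2}w}{\partial x^{2}} + O(h^{6})\). The short sketch below (our own test setup: test function \(\sin x\), an arbitrary sample point, and two step sizes) checks that halving \(h\) reduces the residual by roughly \(2^{6}=64\):

```python
import math

def d2(f, x, h):
    """Second central difference: f(x+h) - 2 f(x) + f(x-h)."""
    return f(x + h) - 2.0 * f(x) + f(x - h)

def residual(x, h):
    """delta_x^2 w - h^2 (1 + delta_x^2 / 12) w'' for w = sin, which
    should shrink like O(h^6) if relation (3) holds."""
    w = math.sin
    wpp = lambda t: -math.sin(t)  # exact second derivative of sin
    return d2(w, x, h) - h * h * (wpp(x) + d2(wpp, x, h) / 12.0)

r1 = residual(1.0, 0.1)
r2 = residual(1.0, 0.05)
print(r1 / r2)  # close to 64, i.e. sixth-order decay of the residual
```

The residual also agrees with the leading error term \(-\frac{h^{6}}{240} w^{(6)}\) implied by (3).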

The relationship between the Grünwald–Letnikov and Riemann–Liouville fractional derivatives is given by [27, 33]

$$\begin{aligned} {}_{0}D_{t}^{1-\gamma }f(t)= \frac{1}{\tau ^{1-\gamma }}\sum_{k=0}^{[ \frac{t}{\tau }]}\omega _{k}^{1-\gamma }f(t-k\tau )+O \bigl(\tau ^{p} \bigr), \end{aligned}$$
(5)

where \(\omega _{k}^{1-\gamma } \) are the coefficients of the generating function \(\omega (z, \gamma )= \sum_{k=0}^{\infty }\omega _{k}^{\gamma }z^{k} \). We take \(\omega (z, \gamma )= (1-z)^{\gamma }\), which corresponds to \(p=1\); the coefficients are then \(\omega _{0}^{\gamma }=1\) and

$$\begin{aligned} \omega _{k}^{\gamma }&=(-1)^{k} \begin{pmatrix} \gamma \\ k \end{pmatrix} =(-1)^{k} \frac{\gamma (\gamma -1)\cdots (\gamma -k+1)}{k!} \\ & = \biggl( 1-\frac{\gamma +1}{k} \biggr)\omega _{k-1}^{\gamma }, \quad k \geqslant 1. \end{aligned}$$
(6)

Let \(\eta _{l}=\omega _{l}^{1-\gamma } \), then

$$\begin{aligned} \eta _{0}=1 \quad\text{and}\quad \eta _{l}=(-1)^{l} \begin{pmatrix} 1-\gamma \\ l \end{pmatrix} = \biggl( 1- \frac{2-\gamma }{l} \biggr)\eta _{l-1}, \quad l\geqslant 1. \end{aligned}$$
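These recurrences make the weights cheap to generate. The sketch below (function names are ours, not from the paper) cross-checks the recurrence for \(\omega _{k}^{\alpha }\) against the defining product \((-1)^{k}\binom{\alpha }{k}\) and generates \(\eta _{l}=\omega _{l}^{1-\gamma }\) for a sample \(\gamma \):

```python
from math import factorial

def gl_weights(alpha, n):
    """omega_k^alpha for k = 0..n via the recurrence
    omega_k = (1 - (alpha + 1)/k) * omega_{k-1}, with omega_0 = 1."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append((1.0 - (alpha + 1.0) / k) * w[-1])
    return w

def gl_weight_direct(alpha, k):
    """Same weight from the definition (-1)^k * binom(alpha, k)."""
    prod = 1.0
    for j in range(k):
        prod *= alpha - j
    return (-1) ** k * prod / factorial(k)

gamma_ = 0.7                        # sample fractional order in (0, 1)
eta = gl_weights(1.0 - gamma_, 8)   # eta_l = omega_l^{1-gamma}
```

Here \(\eta _{1}=\gamma -1\) and \(\eta _{l}<0\) for \(l\geq 1\), consistent with Lemma 1 in Sect. 3.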

From (5) we can obtain the following:

$$\begin{aligned} & {}_{0}D_{t}^{1-\gamma } \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}}= \tau ^{\gamma -1}\sum_{l=0}^{[\frac{t}{\tau }]} \eta _{l} \frac{\partial ^{2} w(x,y,t-l\tau )}{\partial x^{2}}+O \bigl(\tau ^{p} \bigr), \end{aligned}$$
(7)
$$\begin{aligned} &{}_{0}D_{t}^{1-\gamma } \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}}= \tau ^{\gamma -1}\sum_{l=0}^{[\frac{t}{\tau }]} \eta _{l} \frac{\partial ^{2} w(x,y,t-l \tau )}{\partial y^{2}}+O \bigl(\tau ^{p} \bigr). \end{aligned}$$
(8)

Using (3), (4), (7), (8), and (1), we have

$$\begin{aligned} \frac{w_{i,j}^{k+1}-w_{i,j}^{k}}{\tau }={}&\tau ^{\gamma -1} \Biggl( \sum_{l=0}^{k}\eta _{l} \frac{\delta _{x}^{2}}{h^{2}(1+\frac{1}{12}\delta _{x}^{2})}w_{i,j}^{k-l+\frac{1}{2}} + \sum_{l=0}^{k} \eta _{l} \frac{\delta _{y}^{2}}{h^{2}(1+\frac{1}{12}\delta _{y}^{2})}w_{i,j}^{k-l+\frac{1}{2}} \Biggr) \\ &{} +\frac{\delta _{x}^{2}}{h^{2}(1+\frac{1}{12}\delta _{x}^{2})}w_{i,j}^{k+ \frac{1}{2}} + \frac{\delta _{y}^{2}}{h^{2}(1+\frac{1}{12}\delta _{y}^{2})} w_{i,j}^{k+ \frac{1}{2}}+f_{i,j}^{k+\frac{1}{2}}. \end{aligned}$$
(9)

Multiplying both sides by \(\tau (1+\frac{1}{12}\delta _{x}^{2}) (1+\frac{1}{12}\delta _{y}^{2})\), we have

$$\begin{aligned} \biggl(1+\frac{1}{12}\delta _{x}^{2} \biggr) \biggl(1+ \frac{1}{12}\delta _{y}^{2} \biggr) \bigl(w_{i,j}^{k+1}-w_{i,j}^{k} \bigr)={}& \frac{\tau ^{\gamma }}{2h^{2}}\sum_{l=0}^{k+1}\eta _{l} \biggl( \delta _{x}^{2}+\delta _{y}^{2}+ \frac{\delta _{x}^{2}\delta _{y}^{2}}{6} \biggr) w_{i,j}^{k+1-l} \\ &{} + \frac{\tau ^{\gamma }}{2h^{2}}\sum_{l=0}^{k}\eta _{l} \biggl( \delta _{x}^{2}+\delta _{y}^{2}+ \frac{\delta _{x}^{2}\delta _{y}^{2}}{6} \biggr)w_{i,j}^{k-l} \\ &{} + \frac{\tau }{2h^{2}} \biggl(\delta _{x}^{2}+\delta _{y}^{2}+ \frac{\delta _{x}^{2}\delta _{y}^{2}}{6} \biggr) \bigl(w_{i,j}^{k+1}+w_{i,j}^{k} \bigr) \\ &{} + \tau \biggl(1+\frac{1}{12}\delta _{x}^{2} \biggr) \biggl(1+\frac{1}{12}\delta _{y}^{2} \biggr) f_{i,j}^{k+\frac{1}{2}}. \end{aligned}$$

After simplifying and rearranging, we get the Crank–Nicolson (C–N) high-order compact scheme

$$\begin{aligned} a_{1}w_{i,j}^{k+1} ={}&a_{2} \bigl(w_{i+1,j}^{k+1}+w_{i-1,j}^{k+1}+w_{i,j+1}^{k+1}+w_{i,j-1}^{k+1} \bigr)+a_{3} \bigl(w_{i+1,j+1}^{k+1}+w_{i-1,j+1}^{k+1} \\ & {}+w_{i+1,j-1}^{k+1}+w_{i-1,j-1}^{k+1} \bigr)+a_{4}w_{i,j}^{k} +a_{5} \bigl(w_{i+1,j}^{k}+w_{i-1,j}^{k}+w_{i,j+1}^{k} +w_{i,j-1}^{k} \bigr) \\ &{} +a_{6} \bigl(w_{i+1,j+1}^{k}+w_{i-1,j+1}^{k}+w_{i+1,j-1}^{k}+w_{i-1,j-1}^{k} \bigr) +\frac{25\tau }{36}f_{i,j}^{k+\frac{1}{2}}+\frac{5\tau }{72} \bigl(f_{i+1,j}^{k+ \frac{1}{2}} \\ &{} +f_{i-1,j}^{k+\frac{1}{2}}+f_{i,j+1}^{k+\frac{1}{2}}+f_{i,j-1}^{k+ \frac{1}{2}} \bigr) +\frac{\tau }{144} \bigl(f_{i+1,j+1}^{k+\frac{1}{2}} +f_{i-1,j+1}^{k+ \frac{1}{2}}+f_{i+1,j-1}^{k+\frac{1}{2}}+f_{i-1,j-1}^{k+\frac{1}{2}} \bigr) \\ &{} + S_{1} \sum_{l=2}^{k+1} \eta _{l} \biggl(\frac{-10}{3}w_{i,j}^{k+1-l}+ \frac{2}{3} \bigl(w_{i+1,j}^{k+1-l}+w_{i-1,j}^{k+1-l}+w_{i,j+1}^{k+1-l}+w_{i,j-1}^{k+1-l} \bigr) \\ &{} +\frac{1}{6} \bigl(w_{i+1,j+1}^{k+1-l}+w_{i-1,j+1}^{k+1-l}+w_{i+1,j-1}^{k+1-l} +w_{i-1,j-1}^{k+1-l} \bigr) \biggr) \\ &{} + S_{1} \sum _{l=1}^{k}\eta _{l} \biggl( \frac{-10}{3}w_{i,j}^{k-l} + \frac{2}{3} \bigl(w_{i+1,j}^{k-l}+w_{i-1,j}^{k-l}+w_{i,j+1}^{k-l} +w_{i,j-1}^{k-l} \bigr) \\ &{} +\frac{1}{6} \bigl(w_{i+1,j+1}^{k-l}+w_{i-1,j+1}^{k-l} +w_{i+1,j-1}^{k-l}+w_{i-1,j-1}^{k-l} \bigr) \biggr) + O \bigl(\tau +h^{4} \bigr), \end{aligned}$$
(10)

where

$$\begin{aligned} & S_{1}=\frac{\tau ^{\gamma }}{2h^{2}},\qquad S_{2}= \frac{\tau }{2h^{2}},\qquad H=S_{1}+S_{2}, \\ & a_{1}=\frac{5}{36}(5+24H),\qquad a_{2}= \frac{1}{144}(-10+96H),\qquad a_{3}= \frac{1}{144}(-1+24H), \\ & a_{4}=\frac{1}{144} \bigl(100-480(H+S_{1}\eta _{1}) \bigr),\qquad a_{5}= \frac{1}{144} \bigl(10+96(H+S_{1} \eta _{1}) \bigr), \\ & a_{6}=\frac{1}{144} \bigl(1+24(H+S_{1}\eta _{1}) \bigr). \end{aligned}$$

Applying (10) to the group of four points (as shown in Fig. 1) will result in the following \(4\times 4\) system:

$$\begin{aligned} \begin{bmatrix} a_{1} & -a_{2}& -a_{3} & -a_{2} \\ -a_{2}&a_{1} & -a_{2} & -a_{3} \\ -a_{3}& -a_{2} & a_{1} & -a_{2} \\ -a_{2}& -a_{3} & -a_{2} & a_{1} \end{bmatrix} \begin{bmatrix} w_{i,j} \\ w_{i+1,j} \\ w_{i+1,j+1} \\ w_{i,j+1} \end{bmatrix} = \begin{bmatrix} rhs_{i,j} \\ rhs_{i+1,j} \\ rhs_{i+1,j+1} \\ rhs_{i,j+1} \end{bmatrix}, \end{aligned}$$
(11)

where

$$\begin{aligned} & rhs_{i,j}= a_{2} \bigl(w_{i-1,j}^{k+1}+w_{i,j-1}^{k+1} \bigr)+a_{3} \bigl(w_{i-1,j+1}^{k+1}+w_{i+1,j-1}^{k+1}+w_{i-1,j-1}^{k+1} \bigr)+a_{4} w_{i,j}^{k} \\ &\phantom{rhs_{i,j}=}{} +a_{5} \bigl(w_{i+1,j}^{k}+w_{i-1,j}^{k}+w_{i,j+1}^{k}+w_{i,j-1}^{k} \bigr)+a_{6} \bigl(w_{i+1,j+1}^{k}+w_{i-1,j+1}^{k} \\ &\phantom{rhs_{i,j}=}{} +w_{i+1,j-1}^{k}+w_{i-1,j-1}^{k} \bigr)+ \frac{25}{36}\tau f_{i,j}^{k+ \frac{1}{2}}+\frac{5}{72} \tau \bigl(f_{i+1,j}^{k+\frac{1}{2}}+f_{i-1,j}^{k+ \frac{1}{2}}+f_{i,j+1}^{k+\frac{1}{2}} \\ &\phantom{rhs_{i,j}=}{} +f_{i,j-1}^{k+\frac{1}{2}} \bigr)+ \frac{\tau }{144} \bigl(f_{i+1,j+1}^{k+ \frac{1}{2}}+f_{i-1,j+1}^{k+\frac{1}{2}}+f_{i+1,j-1}^{k+\frac{1}{2}}+f_{i-1,j-1}^{k+ \frac{1}{2}} \bigr) \\ &\phantom{rhs_{i,j}=}{} +s_{1}\sum_{l=2}^{k+1} \lambda _{l} \biggl(\frac{-10}{3}w_{i,j}^{k+1-l}+ \frac{2}{3} \bigl(w_{i+1,j}^{k+1-l}+w_{i-1,j}^{k+1-l}+w_{i,j+1}^{k+1-l} \\ &\phantom{rhs_{i,j}=}{} +w_{i,j-1}^{k+1-l} \bigr)+\frac{1}{6} \bigl(w_{i+1,j+1}^{k+1-l}+w_{i-1,j+1}^{k+1-l}+w_{i+1,j-1}^{k+1-l}+w_{i-1,j-1}^{k+1-l} \bigr) \biggr) \\ &\phantom{rhs_{i,j}=}{}+s_{1}\sum_{l=1}^{k} \lambda _{l} \biggl(\frac{-10}{3}w_{i,j}^{k-l}+ \frac{2}{3} \bigl(w_{i+1,j}^{k-l}+w_{i-1,j}^{k-l}+w_{i,j+1}^{k-l} +w_{i,j-1}^{k-l} \bigr) \\ &\phantom{rhs_{i,j}=}{} +\frac{1}{6} \bigl(w_{i+1,j+1}^{k-l}+w_{i-1,j+1}^{k-l}+w_{i+1,j-1}^{k-l}+w_{i-1,j-1}^{k-l} \bigr) \biggr), \\ &rhs_{i+1,j}= a_{2} \bigl(w_{i+2,j}^{k+1}+w_{i+1,j-1}^{k+1} \bigr)+a_{3} \bigl(w_{i+2,j+1}^{k+1}+w_{i+2,j-1}^{k+1}+w_{i,j-1}^{k+1} \bigr)+a_{4} w_{i+1,j}^{k} \\ &\phantom{rhs_{i+1,j}=}{} +a_{5} \bigl(w_{i+2,j}^{k}+w_{i,j}^{k}+w_{i+1,j+1}^{k}+w_{i+1,j-1}^{k} \bigr)+a_{6} \bigl(w_{i+2,j+1}^{k}+w_{i,j+1}^{k} \\ &\phantom{rhs_{i+1,j}=}{} +w_{i+2,j-1}^{k}+w_{i,j-1}^{k} \bigr)+ \frac{25}{36}\tau f_{i+1,j}^{k+ \frac{1}{2}}+\frac{5}{72} \tau \bigl(f_{i+2,j}^{k+\frac{1}{2}}+f_{i,j}^{k+ \frac{1}{2}}+f_{i+1,j+1}^{k+\frac{1}{2}} \\ &\phantom{rhs_{i+1,j}=}{} +f_{i+1,j-1}^{k+\frac{1}{2}} \bigr)+ 
\frac{\tau }{144} \bigl(f_{i+2,j+1}^{k+ \frac{1}{2}}+f_{i,j+1}^{k+\frac{1}{2}}+f_{i+2,j-1}^{k+\frac{1}{2}}+f_{i,j-1}^{k+ \frac{1}{2}} \bigr) \\ &\phantom{rhs_{i+1,j}=}{} +s_{1}\sum_{l=2}^{k+1} \lambda _{l} \biggl(\frac{-10}{3}w_{i+1,j}^{k+1-l}+ \frac{2}{3} \bigl(w_{i+2,j}^{k+1-l}+w_{i,j}^{k+1-l}+w_{i+1,j+1}^{k+1-l} \\ &\phantom{rhs_{i+1,j}=}{} +w_{i+1,j-1}^{k+1-l} \bigr)+\frac{1}{6} \bigl(w_{i+2,j+1}^{k+1-l}+w_{i,j+1}^{k+1-l}+w_{i+2,j-1}^{k+1-l}+w_{i,j-1}^{k+1-l} \bigr) \biggr) \\ &\phantom{rhs_{i+1,j}=}{} +s_{1}\sum_{l=1}^{k} \lambda _{l} \biggl(\frac{-10}{3}w_{i+1,j}^{k-l}+ \frac{2}{3} \bigl(w_{i+2,j}^{k-l}+w_{i,j}^{k-l}+w_{i+1,j+1}^{k-l} +w_{i+1,j-1}^{k-l} \bigr) \\ &\phantom{rhs_{i+1,j}=}{} +\frac{1}{6} \bigl(w_{i+2,j+1}^{k-l}+w_{i,j+1}^{k-l}+w_{i+2,j-1}^{k-l}+w_{i,j-1}^{k-l} \bigr) \biggr), \\ &rhs_{i+1,j+1}= a_{2} \bigl(w_{i+2,j+1}^{k+1}+w_{i+1,j+2}^{k+1} \bigr)+a_{3} \bigl(w_{i+2,j+2}^{k+1}+w_{i,j+2}^{k+1}+w_{i+2,j}^{k+1} \bigr)+a_{4} w_{i+1,j+1}^{k} \\ &\phantom{rhs_{i+1,j+1}=}{} +a_{5} \bigl(w_{i+2,j+1}^{k}+w_{i,j+1}^{k}+w_{i+1,j+2}^{k}+w_{i+1,j}^{k} \bigr)+a_{6} \bigl(w_{i+2,j+2}^{k}+w_{i,j+2}^{k} \\ &\phantom{rhs_{i+1,j+1}=}{}+w_{i+2,j}^{k}+w_{i,j}^{k} \bigr)+ \frac{25}{36}\tau f_{i+1,j+1}^{k+ \frac{1}{2}}+\frac{5}{72} \tau \bigl(f_{i+2,j+1}^{k+\frac{1}{2}}+f_{i,j+1}^{k+ \frac{1}{2}}+f_{i+1,j+2}^{k+\frac{1}{2}} \\ &\phantom{rhs_{i+1,j+1}=}{} +f_{i+1,j}^{k+\frac{1}{2}} \bigr)+ \frac{\tau }{144} \bigl(f_{i+2,j+2}^{k+ \frac{1}{2}}+f_{i,j+2}^{k+\frac{1}{2}}+f_{i+2,j}^{k+\frac{1}{2}}+f_{i,j}^{k+ \frac{1}{2}} \bigr) \\ &\phantom{rhs_{i+1,j+1}=}{} +s_{1}\sum_{l=2}^{k+1} \lambda _{l} \biggl(\frac{-10}{3}w_{i+1,j+1}^{k+1-l}+ \frac{2}{3} \bigl(w_{i+2,j+1}^{k+1-l}+w_{i,j+1}^{k+1-l}+w_{i+1,j+2}^{k+1-l} \\ &\phantom{rhs_{i+1,j+1}=}{} +w_{i+1,j}^{k+1-l} \bigr)+\frac{1}{6} \bigl(w_{i+2,j+2}^{k+1-l}+w_{i,j+2}^{k+1-l}+w_{i+2,j}^{k+1-l}+w_{i,j}^{k+1-l} \bigr) \biggr) \\ &\phantom{rhs_{i+1,j+1}=}{} +s_{1}\sum_{l=1}^{k} \lambda _{l} \biggl(\frac{-10}{3}w_{i+1,j+1}^{k-l}+ 
\frac{2}{3} \bigl(w_{i+2,j+1}^{k-l}+w_{i,j+1}^{k-l}+w_{i+1,j+2}^{k-l} +w_{i+1,j}^{k-l} \bigr) \\ &\phantom{rhs_{i+1,j+1}=}{} +\frac{1}{6} \bigl(w_{i+2,j+2}^{k-l}+w_{i,j+2}^{k-l}+w_{i+2,j}^{k-l}+w_{i,j}^{k-l} \bigr) \biggr), \\ &rhs_{i,j+1}= a_{2} \bigl(w_{i-1,j+1}^{k+1}+w_{i,j+2}^{k+1} \bigr)+a_{3} \bigl(w_{i-1,j+2}^{k+1}+w_{i+1,j+2}^{k+1}+w_{i-1,j}^{k+1} \bigr)+a_{4} w_{i,j+1}^{k} \\ &\phantom{rhs_{i,j+1}=}{} +a_{5} \bigl(w_{i+1,j+1}^{k}+w_{i-1,j+1}^{k}+w_{i,j+2}^{k}+w_{i,j}^{k} \bigr)+a_{6} \bigl(w_{i+1,j+2}^{k}+w_{i-1,j+2}^{k} \\ &\phantom{rhs_{i,j+1}=}{} +w_{i+1,j}^{k}+w_{i-1,j}^{k} \bigr)+ \frac{25}{36}\tau f_{i,j+1}^{k+ \frac{1}{2}}+\frac{5}{72} \tau \bigl(f_{i+1,j+1}^{k+\frac{1}{2}}+f_{i-1,j+1}^{k+ \frac{1}{2}}+f_{i,j+2}^{k+\frac{1}{2}} \\ &\phantom{rhs_{i,j+1}=}{} +f_{i,j}^{k+\frac{1}{2}} \bigr)+ \frac{\tau }{144} \bigl(f_{i+1,j+2}^{k+ \frac{1}{2}}+f_{i-1,j+2}^{k+\frac{1}{2}}+f_{i+1,j}^{k+\frac{1}{2}}+f_{i-1,j}^{k+ \frac{1}{2}} \bigr) \\ &\phantom{rhs_{i,j+1}=}{} +s_{1}\sum_{l=2}^{k+1} \lambda _{l} \biggl(\frac{-10}{3}w_{i,j+1}^{k+1-l}+ \frac{2}{3} \bigl(w_{i+1,j+1}^{k+1-l}+w_{i-1,j+1}^{k+1-l}+w_{i,j+2}^{k+1-l} \\ &\phantom{rhs_{i,j+1}=}{} +w_{i,j}^{k+1-l} \bigr)+\frac{1}{6} \bigl(w_{i+1,j+2}^{k+1-l}+w_{i-1,j+2}^{k+1-l}+w_{i+1,j}^{k+1-l}+w_{i-1,j}^{k+1-l} \bigr) \biggr) \\ &\phantom{rhs_{i,j+1}=}{}+s_{1}\sum_{l=1}^{k} \lambda _{l} \biggl(\frac{-10}{3}w_{i,j+1}^{k-l}+ \frac{2}{3} \bigl(w_{i+1,j+1}^{k-l}+w_{i-1,j+1}^{k-l}+w_{i,j+2}^{k-l} +w_{i,j}^{k-l} \bigr) \\ &\phantom{rhs_{i,j+1}=}{} +\frac{1}{6} \bigl(w_{i+1,j+2}^{k-l}+w_{i-1,j+2}^{k-l}+w_{i+1,j}^{k-l}+w_{i-1,j}^{k-l} \bigr) \biggr). \end{aligned}$$

The matrix in (11) is inverted to obtain the high-order compact explicit group equation

$$\begin{aligned} \begin{bmatrix} w_{i,j} \\ w_{i+1,j} \\ w_{i+1,j+1} \\ w_{i,j+1} \end{bmatrix} =\frac{1}{d} \begin{bmatrix} \phi _{1} & \phi _{2} & \phi _{3} & \phi _{2} \\ \phi _{2} & \phi _{1} & \phi _{2} & \phi _{3} \\ \phi _{3} & \phi _{2} & \phi _{1} & \phi _{2} \\ \phi _{2} & \phi _{3} & \phi _{2} & \phi _{1} \end{bmatrix} \begin{bmatrix} rhs_{i,j} \\ rhs_{i+1,j} \\ rhs_{i+1,j+1} \\ rhs_{i,j+1} \end{bmatrix}, \end{aligned}$$
(12)

where

$$\begin{aligned} &\phi _{1}=a_{1}^{3}-2a_{1}a_{2}^{2}-2a_{2}^{2}a_{3}-a_{1}a_{3}^{2}, \qquad \phi _{2}=a_{1}^{2}a_{2}+2a_{1}a_{2}a_{3}+a_{2}a_{3}^{2}, \\ & \phi _{3}=2a_{1}a_{2}^{2}+a_{1}^{2}a_{3}+2a_{2}^{2}a_{3}-a_{3}^{3}, \qquad d= \bigl(-4a_{2}^{2}+(a_{1}-a_{3})^{2} \bigr) (a_{1}+a_{3})^{2}. \end{aligned}$$

Figure 1 shows the grid points on the \(xy\) plane with mesh size \(m=10\), where the groups of four points are computed using (12) and the remaining points are computed using (10).

Figure 1
Groups of four points for HEGM

3 Stability of the proposed method

First we recall the following lemma.

Lemma 1

([34])

The coefficients \(\eta _{l}\) satisfy the following relations:

$$\begin{aligned} & (1)\quad \eta _{0}=1, \qquad\eta _{1}=\gamma -1,\qquad \eta _{l}< 0,\quad l=1,2,\dots, \\ & (2)\quad \sum_{l=0}^{\infty } \eta _{l}=0, \qquad - \sum _{l=1}^{n} \eta _{l}< 1, \quad\forall n\in \mathbb{N}. \end{aligned}$$
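Both statements of the lemma are easy to observe numerically. The sketch below (the helper name is ours) generates \(\eta _{l}\) for a sample \(\gamma \) and tracks the partial sums \(-\sum_{l=1}^{n}\eta _{l}\), which increase toward 1 without reaching it:

```python
def eta_coeffs(gamma, n):
    """eta_l = omega_l^{1-gamma} via eta_l = (1 - (2 - gamma)/l) * eta_{l-1}."""
    eta = [1.0]
    for l in range(1, n + 1):
        eta.append((1.0 - (2.0 - gamma) / l) * eta[-1])
    return eta

gamma_ = 0.4
eta = eta_coeffs(gamma_, 200)
# -sum_{l=1}^{n} eta_l for n = 1..200
partial = [-sum(eta[1:n + 1]) for n in range(1, 201)]
print(partial[0], partial[-1])  # starts at 1 - gamma, stays below 1
```

The partial sums are strictly increasing because every \(\eta _{l}\) with \(l\geq 1\) is negative; this is a check for one \(\gamma \), not a proof of the lemma.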

The stability of the proposed method is analyzed by the matrix method. From (10), we obtain

$$\begin{aligned} \begin{aligned} & M_{1} w^{1}=N_{1} w^{0}+\tau P_{1} \bigl(F^{\frac{1}{2}} \bigr),\quad k=0, \\ & M_{1} w^{k+1}=N_{1} w^{k}+s_{1} \sum_{l=2}^{k+1}\lambda _{l}P_{1} \bigl(w^{k+1-l} \bigr) \\ &\phantom{M_{1} w^{k+1}=}{}+s_{1}\sum_{l=1}^{k} \lambda _{l}P_{1} \bigl(w^{k-l} \bigr)+\tau P_{1} \bigl(F^{k+ \frac{1}{2}} \bigr),\quad k>0, \end{aligned} \\ &M_{1}= \begin{bmatrix} R_{1} & R_{3} & & & 0 \\ R_{2}& R_{1} & R_{3} & & \\ & R_{2} & R_{1} & & \\ & & & \ddots & R_{3} \\ 0 & & &R_{2} & R_{1} \end{bmatrix}, \qquad N_{1}= \begin{bmatrix} B_{1} & B_{3} & & & 0 \\ B_{2}& B_{1} & B_{3} & & \\ & B_{2} & B_{1} & & \\ & & & \ddots & B_{3} \\ 0 & & &B_{2} & B_{1} \end{bmatrix}, \\ & P_{1}= \begin{bmatrix} Q_{1} & Q_{3} & & & 0 \\ Q_{2}& Q_{1} & Q_{3} & & \\ & Q_{2} & Q_{1} & & \\ & & & \ddots & Q_{3} \\ 0 & & &Q_{2} & Q_{1} \end{bmatrix},\qquad R_{1}= \begin{bmatrix} G_{1} & G_{3} & & & \\ G_{2}& G_{1} & G_{3} & & \\ & G_{2} & G_{1} & & \\ & & & \ddots & G_{3} \\ & & &G_{2} & G_{1} \end{bmatrix}, \\ & R_{2}= \begin{bmatrix} G_{6} & G_{4} & & & \\ G_{8}& G_{6} & G_{4} & & \\ & G_{8} & G_{6} & & \\ & & & \ddots & G_{4} \\ & & &G_{8} & G_{6} \end{bmatrix}, \qquad R_{3}= \begin{bmatrix} G_{7} & G_{9} & & & \\ G_{5}& G_{7} & G_{9} & & \\ & G_{5} & G_{7} & & \\ & & & \ddots & G_{9} \\ & & &G_{5} & G_{7} \end{bmatrix}, \\ & B_{1}= \begin{bmatrix} H_{1} & H_{3} & & & \\ H_{2}& H_{1} & H_{3} & & \\ & H_{2} & H_{1} & & \\ & & & \ddots & H_{3} \\ & & &H_{2} & H_{1} \end{bmatrix}, \qquad B_{2}= \begin{bmatrix} H_{6} & H_{4} & & & \\ H_{8}& H_{6} & H_{4} & & \\ & H_{8} & H_{6} & & \\ & & & \ddots & H_{4} \\ & & &H_{8} & H_{6} \end{bmatrix}, \\ & B_{3}= \begin{bmatrix} H_{7} & H_{9} & & & \\ H_{5}& H_{7} & H_{9} & & \\ & H_{5} & H_{7} & & \\ & & & \ddots & H_{9} \\ & & &H_{5} & H_{7} \end{bmatrix}, \qquad Q_{1}= \begin{bmatrix} L_{1} & L_{3} & & & \\ L_{2}& L_{1} & L_{3} & & \\ & L_{2} & L_{1} & & \\ & & & \ddots & L_{3} \\ & & &L_{2} & L_{1} \end{bmatrix}, \\ & Q_{2}= 
\begin{bmatrix} L_{6} & L_{4} & & & \\ L_{8}& L_{6} & L_{4} & & \\ & L_{8} & L_{6} & & \\ & & & \ddots & L_{4} \\ & & &L_{8} & L_{6} \end{bmatrix}, \qquad Q_{3}= \begin{bmatrix} L_{7} & L_{9} & & & \\ L_{5}& L_{7} & L_{9} & & \\ & L_{5} & L_{7} & & \\ & & & \ddots & L_{9} \\ & & &L_{5} & L_{7} \end{bmatrix}, \\ & G_{1}= \begin{bmatrix} a_{1} & -a_{2} & -a_{3} & -a_{2} \\ -a_{2} &a_{1} & -a_{2} & -a_{3} \\ -a_{3}& -a_{2} & a_{1} & -a_{2} \\ -a_{2} & -a_{3} & -a_{2} & a_{1} \end{bmatrix}, \qquad G_{2}= \begin{bmatrix} 0 & 0 & -a_{3} & -a_{2} \\ 0&0 & -a_{2} & -a_{3} \\ 0& 0 &0 & 0 \\ 0& 0 &0 & 0 \end{bmatrix}, \\ & G_{3}= \begin{bmatrix} 0 & 0 & 0& 0 \\ 0&0 & 0 & 0 \\ -a_{3}& -a_{2} &0 & 0 \\ -a_{2}& -a_{3} &0 & 0 \end{bmatrix}, \qquad G_{4}= \begin{bmatrix} 0 & 0 & 0& 0 \\ 0&0 & 0 & 0 \\ 0& 0 &0 & 0 \\ 0& -a_{3} &0 & 0 \end{bmatrix}, \\ & G_{5}= \begin{bmatrix} 0 & 0 & 0& 0 \\ 0&0 & 0 & -a_{3} \\ 0& 0 &0 & 0 \\ 0& 0 &0 & 0 \end{bmatrix},\qquad G_{6}= \begin{bmatrix} 0 & -a_{2}& -a_{3}& 0 \\ 0&0 & 0 & 0 \\ 0& 0 &0 & 0 \\ 0& -a_{3} &-a_{2} & 0 \end{bmatrix}, \\ & G_{7}= \begin{bmatrix} 0 & 0& 0& 0 \\ -a_{2}&0 & 0 & -a_{3} \\ -a_{3}& 0 &0 & -a_{2} \\ 0&0 &0 & 0 \end{bmatrix},\qquad G_{8}= \begin{bmatrix} 0 & 0& -a_{3}& 0 \\ 0&0 & 0 & 0 \\ 0& 0 &0 & 0 \\ 0&0 &0 & 0 \end{bmatrix}, \\ & G_{9}= \begin{bmatrix} 0 & 0& 0& 0 \\ 0&0 & 0 & 0 \\ -a_{3}& 0 &0 & 0 \\ 0&0 &0 & 0 \end{bmatrix}, \\ & H_{1}= \begin{bmatrix} a_{4} & a_{5} & a_{6} & a_{5} \\ a_{5} &a_{4} & a_{5} & a_{6} \\ a_{6} & a_{5} & a_{4} & a_{5} \\ a_{5} & a_{6} & a_{5} & a_{4} \end{bmatrix}, \qquad H_{2}= \begin{bmatrix} 0 & 0 & a_{6} & a_{5} \\ 0&0 & a_{5} & a_{6} \\ 0& 0 &0 & 0 \\ 0& 0 &0 & 0 \end{bmatrix}, \\ & H_{3}= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0&0 & 0 & 0 \\ a_{6} & a_{5}&0 & 0 \\ a_{5} & a_{6} &0 & 0 \end{bmatrix},\qquad H_{4}= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0&0 & 0 & 0 \\ 0& 0 &0 & 0 \\ 0& a_{6} &0 & 0 \end{bmatrix}, \\ & H_{5}= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0&0 & 0 & a_{6} \\ 0& 0 &0 & 0 \\ 0& 
0 &0 & 0 \end{bmatrix},\qquad H_{6}= \begin{bmatrix} 0 & a_{5} & a_{6} & 0 \\ 0&0 & 0 & 0 \\ 0& 0 &0 & 0 \\ 0& a_{6} &a_{5} & 0 \end{bmatrix}, \\ & H_{7}= \begin{bmatrix} 0 & 0 & 0& 0 \\ a_{5} &0 & 0 & a_{6} \\ a_{6} & 0 &0 & a_{5} \\ 0& 0&0 & 0 \end{bmatrix}, \qquad H_{8}= \begin{bmatrix} 0 & 0 & a_{6}& 0 \\ 0 &0 & 0 & 0 \\ 0 & 0 &0 & 0 \\ 0& 0&0 & 0 \end{bmatrix},\qquad H_{9}= \begin{bmatrix} 0 & 0 & 0& 0 \\ 0 &0 & 0 & 0 \\ a_{6} & 0 &0 & 0 \\ 0& 0&0 & 0 \end{bmatrix}, \\ & L_{1}=\frac{1}{3} \begin{bmatrix} -10 & 2 & \frac{1}{2} & 2 \\ 2 & -10 & 2 & \frac{1}{2} \\ \frac{1}{2}& 2 & -10 & 2 \\ 2 &\frac{1}{2} & 2 & -10 \end{bmatrix},\qquad L_{2}=\frac{1}{3} \begin{bmatrix} 0 & 0 & \frac{1}{2} & 2 \\ 0 & 0 & 2& \frac{1}{2} \\ 0& 0 & 0 & 0 \\ 0&0 & 0 &0 \end{bmatrix}, \\ & L_{3}=\frac{1}{3} \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0& 0 \\ \frac{1}{2}& 2 & 0 & 0 \\ 2&\frac{1}{2} & 0 &0 \end{bmatrix}, \qquad L_{4}= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0& 0 \\ 0& 0 & 0 & 0 \\ 0&\frac{1}{6} & 0 &0 \end{bmatrix}, \\ & L_{5}= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0& \frac{1}{6} \\ 0& 0 & 0 & 0 \\ 0& 0& 0 &0 \end{bmatrix},\qquad L_{6}= \frac{1}{3} \begin{bmatrix} 0 & 2 & \frac{1}{2} & 0 \\ 0 & 0 & 0& 0 \\ 0& 0 & 0 & 0 \\ 0& \frac{1}{2}& 2&0 \end{bmatrix}, \\ & L_{7}=\frac{1}{3} \begin{bmatrix} 0 & 0 & 0 & 0 \\ 2 & 0 & 0&\frac{1}{2} \\ \frac{1}{2}& 0 & 0 & 2 \\ 0& 0& 0&0 \end{bmatrix},\qquad L_{8}= \begin{bmatrix} 0 & 0 & \frac{1}{6} & 0 \\ 0 & 0 & 0&0 \\ 0& 0 & 0 &0 \\ 0& 0& 0&0 \end{bmatrix},\qquad L_{9}= \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0&0 \\ \frac{1}{6}& 0 & 0 &0 \\ 0& 0& 0&0 \end{bmatrix}. \end{aligned}$$
(13)

Proposition 1

The high-order explicit group scheme (12) is unconditionally stable.

Proof

Let \(w_{i,j}^{k}\) and \(W_{i,j}^{k}\) be the approximate and exact solutions, respectively, for (1), and let \(\epsilon _{i,j}^{k}=W_{i,j}^{k}-w_{i,j}^{k}\) denote the error at time level k. Then from (11),

$$\begin{aligned} \begin{aligned} & M_{1} E^{1}=N_{1} E^{0},\quad k=0, \\ & M_{1} E^{k+1}=N_{1} E^{k}+s_{1}\sum _{l=2}^{k+1}\lambda _{l}P_{1} \bigl(E^{k+1-l} \bigr) +s_{1}\sum_{l=1}^{k} \lambda _{l}P_{1} \bigl(E^{k-l} \bigr),\quad k>0, \end{aligned} \end{aligned}$$
(14)

where

$$\begin{aligned} \begin{aligned} &E^{k+1}= \begin{bmatrix} E_{1}^{k+1} \\ E_{2}^{k+1} \\ \vdots \\ E_{m-2}^{k+1} \\ E_{m-1}^{k+1} \end{bmatrix},\qquad E_{j}^{k+1}= \begin{bmatrix} \epsilon _{1}^{k+1} \\ \epsilon _{2}^{k+1} \\ \vdots \\ \epsilon _{m-2}^{k+1} \\ \epsilon _{m-1}^{k+1} \end{bmatrix},\qquad \epsilon _{i}^{k+1}= \begin{bmatrix} \epsilon _{i,j}^{k+1} \\ \epsilon _{i+1,j}^{k+1} \\ \epsilon _{i+1,j+1}^{k+1} \\ \epsilon _{i,j+1}^{k+1} \end{bmatrix},\quad i=1,2,\dots,m-1, \\ & \quad j=1,2,\dots,m-1. \end{aligned} \end{aligned}$$

From (13) we know

$$\begin{aligned} \begin{aligned} &M_{1}=G_{1}I+(G_{2}+G_{3})E+G_{6}I+ (G_{4}+G_{8})E+G_{7}I+(G_{5}+G_{9})E, \\ & N_{1}=H_{1}I+(H_{2}+H_{3})E+H_{6}I+ (H_{4}+H_{8})E+H_{7}I+(H_{5}+H_{9})E, \\ & P_{1}=L_{1}I+(L_{2}+L_{3})E+L_{6}I+ (L_{4}+L_{8})E+L_{7}I+(L_{5}+L_{9})E, \end{aligned} \end{aligned}$$
(15)

where I is the identity matrix and E is the matrix with ones on the diagonals immediately below and above the main diagonal. Let \(\rho _{1},\rho _{2} \), and \(\rho _{3} \) denote the maximum eigenvalues of \(M_{1}\), \(N_{1}\), and \(P_{1}\), respectively; then

$$\begin{aligned} \rho _{1} = a_{1}-a_{3}-2a_{2}, \qquad \rho _{2} = a_{4}-2a_{6}+a_{5}, \qquad\rho _{3} =\frac{9}{2}. \end{aligned}$$
(16)

From (14), when \(k=0\),

$$\begin{aligned} &E^{1}=M_{1}^{-1}N_{1} E^{0}, \\ & \bigl\Vert E^{1} \bigr\Vert \leq \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert \bigl\Vert E^{0} \bigr\Vert \leq \frac{ \vert a_{4}+2a_{6}+a_{5} \vert }{ \vert a_{1}-a_{3}-2a_{2} \vert } \bigl\Vert E^{0} \bigr\Vert , \\ & \bigl\Vert E^{1} \bigr\Vert \leq \frac{ \vert 121h^{2}-132(\tau +\gamma \tau ^{\gamma }) \vert }{ \vert 81(h^{2}+4(\tau + \tau ^{\gamma })) \vert } \bigl\Vert E^{0} \bigr\Vert , \\ & \bigl\Vert E^{1} \bigr\Vert \leq \bigl\Vert E^{0} \bigr\Vert \quad\because \text{ denominator $>$ numerator}. \end{aligned}$$
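The contraction factor \(\vert a_{4}+2a_{6}+a_{5} \vert / \vert a_{1}-a_{3}-2a_{2} \vert \) used above can be spot-checked by plugging the coefficient formulas of Sect. 2 into a few lines of code. The sketch below does this for one sample parameter set (\(\gamma =0.5\), \(\tau =0.1\), \(h=0.25\)); it is a numerical check, not a proof:

```python
def coeffs(gamma, tau, h):
    """Scheme coefficients a1..a6, with S1, S2, H and eta_1 = gamma - 1
    as defined after scheme (10)."""
    S1 = tau**gamma / (2.0 * h**2)
    S2 = tau / (2.0 * h**2)
    H = S1 + S2
    G = H + S1 * (gamma - 1.0)          # H + S1 * eta_1
    a1 = 5.0 / 36.0 * (5.0 + 24.0 * H)
    a2 = (-10.0 + 96.0 * H) / 144.0
    a3 = (-1.0 + 24.0 * H) / 144.0
    a4 = (100.0 - 480.0 * G) / 144.0
    a5 = (10.0 + 96.0 * G) / 144.0
    a6 = (1.0 + 24.0 * G) / 144.0
    return a1, a2, a3, a4, a5, a6

a1, a2, a3, a4, a5, a6 = coeffs(0.5, 0.1, 0.25)
ratio = abs(a4 + 2.0 * a6 + a5) / abs(a1 - a3 - 2.0 * a2)
print(ratio)  # about 0.58 for this parameter set, i.e. below 1
```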

Supposing

$$\begin{aligned} \bigl\Vert E^{s} \bigr\Vert \leq \bigl\Vert E^{0} \bigr\Vert ,\quad s=2,3, \dots,k, \end{aligned}$$
(17)

we will prove it for \(s=k+1\). Indeed, from (14)

$$\begin{aligned} \bigl\Vert E^{k+1} \bigr\Vert ={}& \Biggl\Vert M_{1}^{-1} \Biggl( N_{1}E^{k}+s_{1} \sum_{l=2}^{k+1}\lambda _{l}P_{1} E^{k+1-l}+s_{1}\sum_{l=1}^{k} \lambda _{l}P_{1} E^{k-l} \Biggr) \Biggr\Vert \\ \leq{}& \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert \bigl\Vert E^{k} \bigr\Vert +s_{1} \sum _{l=2}^{k+1}\lambda _{l} \bigl\Vert M_{1}^{-1}P_{1} \bigr\Vert \bigl\Vert E^{k+1-l} \bigr\Vert \\ &{} +s_{1}\sum_{l=1}^{k}\lambda _{l} \bigl\Vert M_{1}^{-1}P_{1} \bigr\Vert \bigl\Vert E^{k-l} \bigr\Vert \\ \leq{}& \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert \bigl\Vert E^{k} \bigr\Vert +s_{1} \bigl\Vert M_{1}^{-1} P_{1} \bigr\Vert \Biggl( \sum _{l=2}^{k+1} \lambda _{l} \bigl\Vert E^{k+1-l} \bigr\Vert + \sum_{l=1}^{k} \lambda _{l} \bigl\Vert E^{k-l} \bigr\Vert \Biggr) \\ \leq {}& \Biggl( \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert +s_{1} \bigl\Vert M_{1}^{-1} P_{1} \bigr\Vert \Biggl( \sum_{l=2}^{k+1} \lambda _{l} + \sum_{l=1}^{k} \lambda _{l} \Biggr) \Biggr) \bigl\Vert E^{0} \bigr\Vert \quad\because \text{ using (17)} \\ = {}& \Biggl( \frac{a_{4}-2a_{6}+a_{5}}{a_{1}-a_{3}-2a_{2}} + \frac{9s_{1}}{2(a_{1}-a_{3}-2a_{2})} \Biggl( \sum _{l=2}^{k+1}\lambda _{l} + \sum _{l=1}^{k}\lambda _{l} \Biggr) \Biggr) \bigl\Vert E^{0} \bigr\Vert \\ \leq{}& \Biggl( \frac{a_{4}-2a_{6}+a_{5}}{a_{1}-a_{3}-2a_{2}} + \frac{9s_{1}}{2(a_{1}-a_{3}-2a_{2})} \Biggl( \sum _{l=2}^{k+1}\lambda _{l} -1 \Biggr) \Biggr) \bigl\Vert E^{0} \bigr\Vert \quad\because \text{ using Lemma 1} \\ ={}& \Biggl( \frac{a_{4}-2a_{6}+a_{5}}{a_{1}-a_{3}-2a_{2}} + \frac{9s_{1}}{2(a_{1}-a_{3}-2a_{2})} \Biggl( \sum _{l=1}^{k+1}\lambda _{l} -1-\lambda _{1} \Biggr) \Biggr) \bigl\Vert E^{0} \bigr\Vert \\ \leq{}& \biggl( \frac{a_{4}-2a_{6}+a_{5}}{a_{1}-a_{3}-2a_{2}} + \frac{9s_{1}}{2(a_{1}-a_{3}-2a_{2})} ( -1 -1-\lambda _{1} ) \biggr) \bigl\Vert E^{0} \bigr\Vert \quad \because \text{ using Lemma 1} \\ ={}& \biggl( \frac{81 h^{2}-324 (\gamma \tau ^{\gamma }+\tau )-4.5 s_{1}(\gamma +1)}{79 h^{2}+348 (\tau ^{\gamma }+\tau )} \biggr) \bigl\Vert E^{0} \bigr\Vert \\ = {}& \biggl(\frac{81 h^{2}-(324 d_{0} +4.5d_{1})}{79 h^{2}+348 d_{0}} \biggr) \bigl\Vert E^{0} \bigr\Vert \quad\text{where } d_{0}= \bigl( \gamma \tau ^{\gamma }+ \tau \bigr) \text{ and } d_{1}=s_{1}(\gamma +1) \\ \bigl\Vert E^{k+1} \bigr\Vert \leq{}& \bigl\Vert E^{0} \bigr\Vert \quad\because \text{ denominator $>$ numerator, because $d_{0}, d_{1} >0$ and $h \in (0,1)$}. \end{aligned}$$

Thus, by matrix analysis and mathematical induction, the proposed method is shown to be unconditionally stable. □

4 Convergence of the proposed method

Let us denote the truncation errors at the group of four points \(w_{i,j}^{k+\frac{1}{2}}, w_{i+1,j}^{k+\frac{1}{2}}, w_{i+1,j+1}^{k+ \frac{1}{2}}, w_{i,j+1}^{k+\frac{1}{2}}\) by \(e_{i,j}^{k+\frac{1}{2}},e_{i+1,j}^{k+\frac{1}{2}},e_{i+1,j+1}^{k+ \frac{1}{2}}\), \(e_{i,j+1}^{k+\frac{1}{2}}\). Let \(R^{k+\frac{1}{2}}=\{R_{1,1}^{k+\frac{1}{2}}, R_{1,2}^{k+\frac{1}{2}}, \dots,R_{1,\frac{m-1}{4}}^{k+\frac{1}{2}},R_{2,1}^{k+\frac{1}{2}},R_{2,2}^{k+ \frac{1}{2}},\dots, R_{\frac{m-1}{4},\frac{m-1}{4}}^{k+\frac{1}{2}} \}\), where \(R_{i,j}^{k+\frac{1}{2}}=\{e_{i,j}^{k+\frac{1}{2}},e_{i+1,j}^{k+ \frac{1}{2}},e_{i+1,j+1}^{k+\frac{1}{2}},e_{i,j+1}^{k+\frac{1}{2}}\}\) and \(i,j=1,2,\dots,\frac{m-1}{4}\). Then from (10) we have

$$\begin{aligned} \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert \leq \varphi \bigl( \tau +h^{4} \bigr), \quad k= 0,1,2,\dots, N-1, \end{aligned}$$
(18)

where φ is a constant.

Define the error as

$$\begin{aligned} &E^{k+1}= \begin{bmatrix} E_{1}^{k+1} \\ E_{2}^{k+1} \\ \vdots \\ E_{m-2}^{k+1} \\ E_{m-1}^{k+1} \end{bmatrix},\qquad E_{j}^{k+1}= \begin{bmatrix} \epsilon _{1}^{k+1} \\ \epsilon _{2}^{k+1} \\ \vdots \\ \epsilon _{m-2}^{k+1} \\ \epsilon _{m-1}^{k+1} \end{bmatrix}, \qquad\epsilon _{i}^{k+1}= \begin{bmatrix} \epsilon _{i,j}^{k+1} \\ \epsilon _{i+1,j}^{k+1} \\ \epsilon _{i+1,j+1}^{k+1} \\ \epsilon _{i,j+1}^{k+1} \end{bmatrix},\quad i=1,2,\dots,m-1, \\ & \quad j=1,2,\dots,m-1. \end{aligned}$$
(19)

By substituting (19) into (11) and using \(E^{0}=0\), we get

$$\begin{aligned} \begin{aligned} &M_{1}E^{1} = R^{\frac{1}{2}}, \quad k=0, \\ & M_{1} E^{k+1}=N_{1} E^{k}+s_{1} \sum_{l=2}^{k+1}\lambda _{l}P_{1} \bigl(E^{k+1-l} \bigr) +s_{1}\sum_{l=1}^{k} \lambda _{l}P_{1} \bigl(E^{k-l} \bigr)+ \bigl(R^{k+\frac{1}{2}} \bigr),\quad k>0. \end{aligned} \end{aligned}$$
(20)

Proposition 2

Suppose \(E^{k+1}\) \((k=0,1,\dots,N-1)\) satisfies (20); then

$$\begin{aligned} \bigl\Vert E^{k+1} \bigr\Vert \leq \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert . \end{aligned}$$

Proof

We will use mathematical induction. When \(k=0\),

$$\begin{aligned} &M_{1}E^{1} = R^{\frac{1}{2}}, \\ & \bigl\Vert E^{1} \bigr\Vert \leq \bigl\Vert M_{1}^{-1} \bigr\Vert \bigl\Vert R^{ \frac{1}{2}} \bigr\Vert =\frac{1}{ \vert a_{1}-a_{3}-2a_{2} \vert } \bigl\Vert R^{ \frac{1}{2}} \bigr\Vert \leq \frac{1}{81(h^{2}+4(\tau +\tau ^{\gamma }))} \bigl\Vert R^{\frac{1}{2}} \bigr\Vert , \\ & \bigl\Vert E^{1} \bigr\Vert \leq \mu _{0} \bigl\Vert R^{\frac{1}{2}} \bigr\Vert ,\quad \text{where } \mu _{0}= \frac{1}{81(h^{2}+4(\tau +\tau ^{\gamma }))}, \\ & \bigl\Vert E^{1} \bigr\Vert \leq \bigl\Vert R^{\frac{1}{2}} \bigr\Vert . \end{aligned}$$

Assume that

$$\begin{aligned} \bigl\Vert E^{s} \bigr\Vert \leq \bigl\Vert R^{(s-1)+\frac{1}{2}} \bigr\Vert ,\quad s=2,3,\dots,k, \end{aligned}$$
(21)

then for \(s=k+1\),

$$\begin{aligned} &M_{1}E^{k+1}= N_{1}E^{k}+s_{1} \sum_{l=2}^{k+1}\lambda _{l}P_{1} \bigl(E^{k+1-l} \bigr)+ s_{1}\sum_{l=1}^{k}\lambda _{l}P_{1} \bigl(E^{k-l} \bigr)+R^{k+\frac{1}{2}}, \\ & \bigl\Vert E^{k+1} \bigr\Vert = \Biggl\Vert M_{1}^{-1} \Biggl( N_{1}E^{k}+s_{1} \sum _{l=2}^{k+1} \lambda _{l}P_{1} \bigl(E^{k+1-l} \bigr)+ s_{1}\sum_{l=1}^{k} \lambda _{l}P_{1} \bigl(E^{k-l} \bigr)+R^{k+\frac{1}{2}} \Biggr) \Biggr\Vert , \\ & \bigl\Vert E^{k+1} \bigr\Vert \leq \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert \bigl\Vert E^{k} \bigr\Vert +s_{1} \bigl\Vert M_{1}^{-1} P_{1} \bigr\Vert \Biggl( \sum _{l=2}^{k+1}\lambda _{l} \bigl\Vert E^{k+1-l} \bigr\Vert + \sum_{l=1}^{k} \lambda _{l} \bigl\Vert E^{k-l} \bigr\Vert \Biggr) \\ & \phantom{ \Vert E^{k+1} \Vert \leq}{}+ \bigl\Vert M_{1}^{-1} \bigr\Vert \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert , \\ & \bigl\Vert E^{k+1} \bigr\Vert \leq \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert \bigl\Vert R^{(k-1)+\frac{1}{2}} \bigr\Vert +s_{1} \bigl\Vert M_{1}^{-1} P_{1} \bigr\Vert \Biggl( \sum _{l=2}^{k+1}\lambda _{l} + \sum _{l=1}^{k} \lambda _{l} \Biggr) \bigl\Vert R^{(k-1)+\frac{1}{2}} \bigr\Vert \\ &\phantom{ \Vert E^{k+1} \Vert \leq}{} + \bigl\Vert M_{1}^{-1} \bigr\Vert \bigl\Vert R^{k+ \frac{1}{2}} \bigr\Vert , \\ & \bigl\Vert E^{k+1} \bigr\Vert \leq \Biggl( \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert +s_{1} \bigl\Vert M_{1}^{-1} P_{1} \bigr\Vert \Biggl( \sum_{l=2}^{k+1} \lambda _{l} + \sum_{l=1}^{k}\lambda _{l} \Biggr) + \bigl\Vert M_{1}^{-1} \bigr\Vert \Biggr) \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert , \\ & \bigl\Vert E^{k+1} \bigr\Vert \leq \bigl( \bigl\Vert M_{1}^{-1}N_{1} \bigr\Vert + s_{1} \bigl\Vert M_{1}^{-1} P_{1} \bigr\Vert ( -2-\lambda _{1} ) + \bigl\Vert M_{1}^{-1} \bigr\Vert \bigr) \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert \\ &\phantom{ \Vert E^{k+1} \Vert } = \biggl( \frac{a_{4}-2a_{6}+a_{5}}{a_{1}-a_{3}-2a_{2}} + \frac{9s_{1}(-2-\lambda _{1})}{2(a_{1}-a_{3}-2a_{2})}+ \frac{1}{a_{1}-a_{3}-2a_{2}} \biggr) \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert \\ &\phantom{ \Vert E^{k+1} \Vert } = \biggl( \frac{a_{4}-2a_{6}+a_{5}-9s_{1}(2+\lambda _{1})+2}{2(a_{1}-a_{3}-2a_{2})} \biggr) \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert \\ & \phantom{ \Vert E^{k+1} \Vert }= \biggl( \frac{81 h^{2}-324 (\gamma \tau ^{\gamma }+\tau )-4.5 s_{1}(\gamma +1) +2}{79 h^{2}+348 (\tau ^{\gamma }+\tau )} \biggr) \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert , \\ & \bigl\Vert E^{k+1} \bigr\Vert \leq \phi \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert , \end{aligned}$$

where \(\phi = \frac{81 h^{2}-324 (\gamma \tau ^{\gamma }+\tau )-4.5 s_{1}(\gamma +1) +2}{79 h^{2}+348 (\tau ^{\gamma }+\tau )} \) and \(\phi \in (0,1)\), so

$$\begin{aligned} \bigl\Vert E^{k+1} \bigr\Vert \leq \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert . \end{aligned}$$

 □
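As a quick numerical sanity sketch of the bound above, the stability factor φ can be evaluated for sample parameter values. Note that \(s_{1}\) is defined earlier in the scheme's derivation and is treated here as a free, hypothetical input, so this only illustrates the formula for one plausible parameter choice; it is not a proof that \(\phi \in (0,1)\) for all admissible parameters.

```python
def phi(h, tau, gamma, s1):
    """Stability factor from the lemma; s1 is taken as a free parameter here."""
    num = 81 * h**2 - 324 * (gamma * tau**gamma + tau) - 4.5 * s1 * (gamma + 1) + 2
    den = 79 * h**2 + 348 * (tau**gamma + tau)
    return num / den

# Illustrative values (s1 = 0.1 is a hypothetical choice)
p = phi(h=0.25, tau=0.001, gamma=0.5, s1=0.1)   # ≈ 0.0577, inside (0, 1)
```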

Theorem 1

The high-order explicit group scheme (10) is convergent with the order of convergence \(O(\tau + h^{4})\).

Proof

From (18), we have

$$\begin{aligned} & \bigl\Vert E^{k+1} \bigr\Vert \leq \bigl\Vert R^{k+\frac{1}{2}} \bigr\Vert \leq \varphi \bigl(\tau +h^{4} \bigr), \\ & \bigl\Vert E^{k+1} \bigr\Vert \leq \varphi \bigl(\tau +h^{4} \bigr),\quad \forall k=0,1,2,\dots,N-1. \end{aligned}$$

Hence, the high-order explicit group scheme (10) is convergent with order of convergence \(O(\tau + h^{4}) \). □

5 Numerical experiments and discussion

In this section, three numerical experiments were carried out in Mathematica on a Core i7 machine (3.40 GHz, 4 GB RAM) running Windows 7. The successive over-relaxation (SOR) acceleration technique is used with relaxation factor \(\omega =1.8\) and convergence tolerance \(\zeta =10^{-5}\) on the maximum error \((L_{\infty })\). The \(C_{1}\)- and \(C_{2}\)-orders of convergence, for the time and space variables respectively, are calculated using [34]

$$\begin{aligned} &C_{1}\text{-order} = \log _{2} \biggl( \frac{ \Vert L_{\infty }(2\tau,h) \Vert }{ \Vert L_{\infty }(\tau,h) \Vert } \biggr), \end{aligned}$$
(22)
$$\begin{aligned} &C_{2}\text{-order} = \log _{2} \biggl( \frac{ \Vert L_{\infty }(16\tau,2h) \Vert }{ \Vert L_{\infty }(\tau,h) \Vert } \biggr), \end{aligned}$$
(23)

where h, τ and \(L_{\infty }\) represent the space-step, the time-step, and the infinity norm, respectively.
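Formulas (22)–(23) can be computed directly from tabulated maximum errors. The sketch below (plain Python, with hypothetical helper names) reproduces both estimates and checks them on the model error \(E(\tau,h)=\tau +h^{4}\), for which the scheme's theoretical orders are \(C_{1}=1\) (temporal) and \(C_{2}=4\) (spatial), since the coupled refinement \(\tau \to \tau /16\), \(h\to h/2\) reduces such an error by exactly a factor of 16.

```python
from math import log2

def c1_order(err_2tau, err_tau):
    # (22): temporal order from halving the time step at fixed h
    return log2(err_2tau / err_tau)

def c2_order(err_16tau_2h, err_tau_h):
    # (23): spatial order from coupled refinement (tau -> tau/16, h -> h/2)
    return log2(err_16tau_2h / err_tau_h)

# Model error consistent with O(tau + h^4)
E = lambda tau, h: tau + h**4
tau, h = 1e-3, 0.1

# Pure temporal model error (h-term negligible) gives C1 = 1:
# c1_order(2e-3, 1e-3) -> 1.0
# The combined model error reproduces the theoretical spatial order:
# c2_order(E(16*tau, 2*h), E(tau, h))  # ≈ 4.0
```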

The following three numerical experiments are considered:

Example 1

([27])

$$\begin{aligned} \frac{\partial w(x,y,t)}{\partial t}= {}&{}_{0}D_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ &{} +\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} +e^{x+y} \biggl((1+ \gamma )t^{\gamma }-2 \frac{\Gamma (2+\gamma )}{\Gamma (1 + 2\gamma )} t^{2 \gamma } -2 t^{1+\gamma } \biggr), \end{aligned}$$

where \(0< x,y<1\), with initial and boundary conditions

$$\begin{aligned} &w(x,y,0)=0, \\ & w(0,y,t)= e^{y} t^{1+\gamma },\qquad w(x,0,t)=e^{x}t^{1+\gamma }, \\ & w(1,y,t)= e^{1+y} t^{1+\gamma },\qquad w(x,1,t)=e^{1+x}t^{1+\gamma }, \end{aligned}$$

and with the exact solution

$$\begin{aligned} w(x,y,t)=e^{x+y} t^{1+\gamma }. \end{aligned}$$
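The forcing term in Example 1 follows from the Riemann–Liouville power rule \({}_{0}D_{t}^{1-\gamma } t^{1+\gamma }= \frac{\Gamma (2+\gamma )}{\Gamma (1+2\gamma )} t^{2\gamma }\) applied to \(\Delta w = 2w\). As an independent sketch (not part of the paper's scheme), this rule can be confirmed numerically with a first-order Grünwald–Letnikov approximation:

```python
from math import gamma as G

def gl_frac_deriv(f, t, alpha, n=2000):
    """First-order Grunwald-Letnikov approximation of the order-alpha
    Riemann-Liouville derivative of f at time t (lower limit 0)."""
    h = t / n
    w, acc = 1.0, f(t)                    # w_0 = 1
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j      # recurrence for (-1)^j * binom(alpha, j)
        acc += w * f(max(t - j * h, 0.0)) # clamp guards against float round-off
    return acc / h**alpha

g, t = 0.5, 1.0
approx = gl_frac_deriv(lambda s: s**(1.0 + g), t, alpha=1.0 - g)
exact = G(2 + g) / G(1 + 2 * g) * t**(2 * g)   # Gamma(2.5)/Gamma(2) ≈ 1.3293
```

The first-order approximation agrees with the closed-form value to roughly \(O(h)\), i.e. well within 1% for \(n=2000\) steps.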

Example 2

([27])

$$\begin{aligned} \frac{\partial w(x,y,t)}{\partial t}={}& {}_{0}D_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ & {}+\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} + \exp \biggl(- \frac{(x-0.5)^{2}}{\nu }-\frac{(y-0.5)^{2}}{\nu } \biggr) \biggl[ (1+\gamma )t^{ \gamma } \\ &{} + \biggl( \frac{\Gamma (2+\gamma )}{\Gamma (1+2\gamma )}t^{2 \gamma }+ t^{1+\gamma } \biggr) \biggl( \frac{4}{\nu }- \frac{4(x-0.5)^{2}}{{\nu }^{2}}-\frac{4(y-0.5)^{2}}{{\nu }^{2}} \biggr) \biggr], \end{aligned}$$

where \(0< x,y<1\), with initial and boundary conditions

$$\begin{aligned} &w(x,y,0)=0, \\ & w(0,y,t)= \exp \biggl( - \biggl(\frac{0.25}{\nu }+\frac{(y-0.5)^{2}}{\nu } \biggr) \biggr)t^{1+\gamma }, \\ & w(x,0,t)= \exp \biggl( - \biggl(\frac{(x-0.5)^{2}}{\nu }+\frac{0.25}{\nu } \biggr) \biggr)t^{1+\gamma }, \\ & w(1,y,t)= \exp \biggl( - \biggl(\frac{0.25}{\nu }+\frac{(y-0.5)^{2}}{\nu } \biggr) \biggr)t^{1+\gamma }, \\ & w(x,1,t)= \exp \biggl( - \biggl(\frac{(x-0.5)^{2}}{\nu } + \frac{0.25}{\nu } \biggr) \biggr)t^{1+\gamma }, \end{aligned}$$

and with the exact solution

$$\begin{aligned} w(x,y,t)= \exp \biggl( - \biggl(\frac{(x-0.5)^{2}}{\nu }+ \frac{(y-0.5)^{2}}{\nu } \biggr) \biggr)t^{1+\gamma }. \end{aligned}$$
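The spatial factor multiplying the time powers in Example 2's forcing term is the (negated) Laplacian of the Gaussian profile divided by \(t^{1+\gamma }\): \(\Delta e^{-((x-0.5)^{2}+(y-0.5)^{2})/\nu } = (\frac{4(x-0.5)^{2}}{\nu ^{2}}+\frac{4(y-0.5)^{2}}{\nu ^{2}}-\frac{4}{\nu })\,e^{-(\cdots )}\). A quick finite-difference sketch verifies this identity; the sample point and the value \(\nu =0.5\) are assumptions for illustration, since the text fixes ν elsewhere.

```python
from math import exp

nu = 0.5   # assumed value for this check only
E = lambda x, y: exp(-((x - 0.5)**2 + (y - 0.5)**2) / nu)

def laplacian_fd(f, x, y, d=1e-3):
    # Standard 5-point central-difference Laplacian, O(d^2) accurate
    return (f(x + d, y) + f(x - d, y) + f(x, y + d) + f(x, y - d)
            - 4 * f(x, y)) / d**2

x, y = 0.2, 0.7   # arbitrary interior sample point
analytic = (4 * (x - 0.5)**2 / nu**2 + 4 * (y - 0.5)**2 / nu**2 - 4 / nu) * E(x, y)
```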

Example 3

$$\begin{aligned} \frac{\partial w(x,y,t)}{\partial t}= {}& {}_{0}D_{t}^{1- \gamma } \biggl( \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} + \frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} \biggr) + \frac{\partial ^{2} w(x,y,t)}{\partial x^{2}} \\ &{} +\frac{\partial ^{2} w(x,y,t)}{\partial y^{2}} + e^{t} \sin (x+y) + \frac{3t^{\gamma -1}\sin (x+y)}{\Gamma (\gamma )} \\ &{} + \frac{3e^{t}\Gamma (\gamma )-\Gamma (\gamma ) \sin (x+y)}{\Gamma (\gamma )}, \end{aligned}$$

where \(0< x,y<1\), with initial and boundary conditions

$$\begin{aligned} &w(x,y,0)=\sin (x+y), \\ & w(0,y,t)= e^{t}\sin (y),\qquad w(x,0,t)=e^{t}\sin (x), \\ & w(1,y,t)= e^{t}\sin (1+y),\qquad w(x,1,t)=e^{t}\sin (1+x), \end{aligned}$$

and with the exact solution

$$\begin{aligned} w(x,y,t)=e^{t}\sin (x+y). \end{aligned}$$

The execution time, error, and number of iterations for the standard-point Crank–Nicolson (C–N) method and the HEGM are compared in Tables 1 to 6. The execution time of the HEGM is reduced by 5–35%, 7–35%, 10–25%, 8–18%, 12.5–28.48%, and 21.29–42.24% compared with the C–N point method in Tables 1 to 6, respectively, as also illustrated in Figs. 6 and 7. Tables 7 and 8 show the maximum errors and CPU times at different values of γ for Examples 1 and 2, respectively. Table 9 shows the maximum error at different values of the relaxation factor ω. Tables 10 to 14 give the orders of convergence in the space and time variables for the HEGM, which show that the theoretical order of convergence is in agreement with the experimental order. Figures 2 to 5 show 3D graphs of the exact and approximate solutions of Examples 1 and 2, which indicate that the proposed method is effective and reliable. The execution times of the HEGM and the standard-point C–N method for Examples 1 and 2 are compared in Figs. 6 and 7, respectively, which show that the HEGM requires less execution time than the C–N method. Figures 8 and 9 show the maximum errors of the HEGM when \(\gamma =0.5\) and \(\tau =\frac{1}{20}\) for Examples 1 and 2, respectively. The computational effort is reported in Tables 16 and 17; the HEGM requires fewer operations than the high-order Crank–Nicolson finite difference method.

Figure 2
figure 2

Exact solution for Example 1

Figure 3
figure 3

Approximate solution for Example 1 when \(h = \tau = \frac{1}{30}\)

Figure 4
figure 4

Exact solution for Example 2

Figure 5
figure 5

Approximate solution for Example 2 when \(h = \tau = \frac{1}{30}\)

Figure 6
figure 6

Execution time (in s) for different mesh sizes when \(\gamma =0.75\) for Example 1

Figure 7
figure 7

Execution time (in s) for different mesh sizes when \(\gamma =0.75\) for Example 2

Figure 8
figure 8

Graphs of maximum errors using HEGM with \(\gamma =0.5\), \(\tau = 1/20\) and \(h=\frac{1}{10}\) (a), \(h = \frac{1}{20}\) (b), for Example 1

Figure 9
figure 9

Graphs of maximum errors using HEGM with \(\gamma =0.5\), \(\tau = 1/20\) and \(h=\frac{1}{10}\) (a), \(h = \frac{1}{20}\) (b), for Example 2

Table 1 Comparison between Crank–Nicolson (C–N) high-order finite difference method and HEGM for Example 1 when \(\gamma =0.75\)
Table 2 Comparison between Crank–Nicolson (C–N) high-order finite difference method and HEGM for Example 2 when \(\gamma =0.5\)
Table 3 Comparison between Crank–Nicolson (C–N) method and HEGM for Example 2 when \(\gamma =0.5\)
Table 4 Comparison between C–N high-order finite difference method and HEGM for Example 2 when \(\gamma =0.75\)
Table 5 Comparison between C–N high-order finite difference method and HEGM for Example 3 when \(\gamma =0.75\)
Table 6 Comparison between C–N high-order finite difference method and HEGM for Example 3 when \(\gamma =0.1\)
Table 7 Errors and CPU time with \(\tau =\frac{1}{20}\) for Example 1
Table 8 Errors and CPU time with \(\tau =\frac{1}{20}\) for Example 2
Table 9 Errors and relaxation factor (ω) with \(\tau =\frac{1}{20}\) and \(\gamma =0.1\) for Example 2
Table 10 \(C_{2}\)-order of convergence for Example 1 and different γ’s
Table 11 \(C_{2}\)-order of convergence of HEGM for Example 2 and different γ’s
Table 12 \(C_{1}\)-order of convergence for Example 1, when \(h=\frac{1}{8}\)
Table 13 \(C_{1}\)-order of convergence for Example 2, when \(h=\frac{1}{8}\)
Table 14 \(C_{1}\)-order of convergence for Example 3, when \(h=\frac{1}{8}\)
Table 15 Computational complexity for the HEGM and C–N high-order finite difference method
Table 16 The total computational effort for different mesh sizes for Example 1 when \(\gamma =0.75\)
Table 17 The total computational effort for different mesh sizes for Example 2 when \(\gamma =0.5\)

6 Conclusion

In this paper, we have solved the two-dimensional fractional Rayleigh–Stokes problem for a heated generalized second-grade fluid using the HEGM. The \(C_{2}\)-order of convergence shows that the theoretical order of convergence agrees with the experimental one. The proposed method reduces the execution time and computational complexity compared with the high-order compact Crank–Nicolson finite difference scheme. We proved unconditional stability using the matrix analysis method, and the proposed method is convergent.