1 Introduction

Let \(\varOmega =(0,a)\times (0,b)\). We consider the two-dimensional fourth-order hyperbolic equation with initial and boundary conditions:

$$ \textstyle\begin{cases} (\mathrm{a})\quad u_{tt}+\rho \Delta ^{2} u=f(x,y,t),\quad (x,y,t)\in \varOmega \times (0,T], \\ (\mathrm{b})\quad u(x,y,0)= f_{1}(x,y),\qquad \frac{\partial u}{\partial t}|_{(x,y,0)}=f _{2}(x,y),\quad (x,y)\in \varOmega , \\ (\mathrm{c})\quad u|_{x=0}=h_{1}(y,t),\qquad u|_{x=a}=h_{2}(y,t), \\ \hphantom{(\mathrm{c})\quad } u|_{y=0}=h_{3}(x,t),\qquad u|_{y=b}=h_{4}(x,t),\quad t\in [0,T], \\ (\mathrm{d})\quad \Delta u|_{x=0}=g_{1}(y,t),\qquad \Delta u|_{x=a}=g_{2}(y,t), \\ \hphantom{(\mathrm{d})\quad} \Delta u|_{y=0}=g_{3}(x,t),\qquad \Delta u|_{y=b}=g_{4}(x,t),\quad t\in [0,T], \end{cases} $$
(1)

where \(u_{tt}=\frac{\partial ^{2}u}{\partial t^{2}}\), \(\Delta ^{2} u=\frac{ \partial ^{4}u}{\partial x^{4}}+2\frac{\partial ^{4}u}{\partial x^{2} \partial y^{2}}+\frac{\partial ^{4}u}{\partial y^{4}}\). \(f(x,y,t)\) is the given source term. \(f_{1}(x,y)\) and \(f_{2}(x,y)\) are initial value functions. \(h_{1}(y,t)\), \(h_{2}(y,t)\), \(h_{3}(x,t)\), \(h_{4}(x,t)\) and \(g_{1}(y,t)\), \(g_{2}(y,t)\), \(g_{3}(x,t)\), \(g_{4}(x,t)\) are boundary value functions. ρ is a given positive constant.

Two-dimensional fourth-order hyperbolic equations have an important physical background and a wide range of applications. For example, they can be used to describe the vibration of plates and arise in large-scale civil engineering, spaceflight, and active noise control (see [1,2,3,4,5]). Compared with second-order equations [6,7,8,9,10,11], computing a numerical solution of a two-dimensional fourth-order equation usually requires higher-order finite element methods or thirteen-point difference schemes. The former are expensive to compute; the latter are difficult to handle near the boundary and achieve only second-order accuracy.

Compared with the traditional finite difference method, the compact finite difference method uses a narrower stencil and achieves higher accuracy. Hence, compact methods have long been studied, for example, in [12, 13]. In recent years, high-order computational methods for various kinds of differential equations have been investigated (see [6,7,8, 12,13,14,15,16,17]). In [14,15,16] a fourth-order equation is written as a system of two second-order equations by introducing two new variables; the spatial derivatives are then discretized by the compact finite difference method or the compact volume method to obtain a high-order scheme.

In this paper, we apply a similar idea to the two-dimensional fourth-order hyperbolic equation (1). Firstly, the fourth-order equation is written as a system of two second-order equations by introducing two new variables. Next, we use compact operators to approximate the second-order spatial derivatives and rewrite the problem as an initial value problem for a coupled system of ordinary differential equations. We then develop a two-level (in time) compact finite difference scheme. The stability of the scheme is proved by the Fourier method, and its convergence is established by the energy method.

The rest of the paper is arranged as follows. In Sect. 2 we formulate the fourth-order compact finite difference scheme for problem (1). A stability analysis is given by the Fourier method in Sect. 3, and a convergence analysis is given by the energy method in Sect. 4. Numerical experiments are performed in Sect. 5 to test the accuracy and efficiency of the proposed compact finite difference scheme. Conclusions are given in Sect. 6.

2 Compact finite difference scheme

To design a proper finite difference scheme, we set \(v=-\rho \Delta u\), \(w=\frac{ \partial u}{\partial t}\) and reformulate problem (1) as the coupled system of second-order equations

$$ \textstyle\begin{cases} (\mathrm{a})\quad \frac{\partial w}{\partial t}-\Delta v=f(x,y,t), \quad (x,y,t)\in \varOmega \times (0,T], \\ (\mathrm{b})\quad \rho \Delta w+\frac{\partial v}{\partial t}=0,\quad (x,y,t)\in \varOmega \times (0,T], \\ (\mathrm{c})\quad w(x,y,0)= f_{2}(x,y),\qquad v(x,y,0)=- \rho \Delta f_{1}(x,y),\quad (x,y) \in \varOmega , \\ (\mathrm{d})\quad w|_{x=0}=\frac{\partial h_{1}}{\partial t}(y,t),\qquad w|_{x=a}=\frac{ \partial h_{2}}{\partial t}(y,t), \\ \hphantom{(\mathrm{d})\quad} w|_{y=0}=\frac{\partial h_{3}}{\partial t}(x,t),\qquad w|_{y=b}=\frac{ \partial h_{4}}{\partial t}(x,t),\quad t\in [0,T], \\ (\mathrm{e})\quad v|_{x=0}=-\rho g_{1}(y,t),\qquad v|_{x=a}=-\rho g_{2}(y,t), \\ \hphantom{(\mathrm{e})\quad} v|_{y=0}=-\rho g_{3}(x,t),\qquad v|_{y=b}=-\rho g_{4}(x,t),\quad t\in [0,T]. \end{cases} $$
(2)

Obviously, u is a solution to (1) if and only if \((v,w)\) is a solution to (2).

Let \(h_{x}=\frac{a}{N_{x}+1}\) and \(h_{y}=\frac{b}{N_{y}+1}\) be the spatial steps in the x and y directions, \(\tau =\frac{T}{N}\) be the time step, and set \(x_{i}=ih_{x}\), \(0\leq i\leq N_{x}+1\), \(y_{j}=jh_{y}\), \(0\leq j \leq N_{y}+1\), \(t_{k}=k\tau \), \(0\leq k\leq N\), \(h=\max \{h_{x},h_{y}\}\). The values of the exact solutions u, v, w at the point \((x_{i},y_{j},t_{k})\) are denoted by \(u_{ij}^{k}\), \(v_{ij}^{k}\), \(w_{ij}^{k}\), and the numerical solutions at the same mesh point are denoted by \(U_{ij}^{k}\), \(V_{ij}^{k}\), \(W_{ij}^{k}\). At each time level the number of unknowns is \(N_{xy}=N_{x}\times N_{y}\). Besides, we set \(t_{k+\frac{1}{2}}= \frac{1}{2}(t_{k}+t_{k+1})\).

Our compact method for (1) is based on the system (2). We introduce the following notation:

$$\begin{aligned}& v_{ij}^{k+\frac{1}{2}}=\frac{1}{2}\bigl(v_{ij}^{k}+v_{ij}^{k+1} \bigr),\qquad w_{ij}^{k+\frac{1}{2}}=\frac{1}{2} \bigl(w_{ij}^{k}+w_{ij}^{k+1}\bigr), \\& \delta _{t} v_{ij}^{k+\frac{1}{2}}=\frac{1}{\tau } \bigl(v_{ij} ^{k+1}-v_{ij}^{k}\bigr),\qquad \delta _{t} w_{ij}^{k+\frac{1}{2}}=\frac{1}{\tau } \bigl(w_{ij}^{k+1}-w_{ij} ^{k}\bigr), \\& \delta _{x} v_{i-\frac{1}{2},j}^{k}=\frac{1}{h_{x}} \bigl(v_{ij} ^{k}-v_{i-1,j}^{k}\bigr),\qquad \delta _{x} w_{i-\frac{1}{2},j}^{k}=\frac{1}{h_{x}} \bigl(w_{ij}^{k}-w_{i-1,j} ^{k}\bigr), \\& \delta _{y} v_{i,j-\frac{1}{2}}^{k}=\frac{1}{h_{y}} \bigl(v_{ij} ^{k}-v_{i,j-1}^{k}\bigr),\qquad \delta _{y} w_{i,j-\frac{1}{2}}^{k}=\frac{1}{h_{y}} \bigl(w_{ij}^{k}-w_{i,j-1} ^{k}\bigr), \\& \delta _{x}^{2}v_{ij}^{k}= \frac{1}{h_{x}^{2}}\bigl(v_{i+1,j} ^{k}-2v_{ij}^{k}+v_{i-1,j}^{k} \bigr),\qquad \delta _{x}^{2}w_{ij}^{k}= \frac{1}{h_{x}^{2}}\bigl(w_{i+1,j}^{k}-2w_{ij} ^{k}+w_{i-1,j}^{k}\bigr), \\& \delta _{y}^{2}v_{ij}^{k}= \frac{1}{h_{y}^{2}}\bigl(v_{i,j+1} ^{k}-2v_{ij}^{k}+v_{i,j-1}^{k} \bigr),\qquad \delta _{y}^{2}w_{ij}^{k}= \frac{1}{h_{y}^{2}}\bigl(w_{i,j+1}^{k}-2w_{ij} ^{k}+w_{i,j-1}^{k}\bigr). \end{aligned}$$
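For readers who wish to experiment with the scheme, the one-dimensional operators above translate directly into sparse matrices. The following is a minimal sketch in Python/NumPy (the experiments in Sect. 5 were carried out in Matlab); the function names are ours, and homogeneous Dirichlet data are assumed so that boundary contributions are not included.

```python
# Sketch (not from the paper): sparse matrices for delta^2 and the compact
# operator A = I + (h^2/12) delta^2 on the interior nodes of a 1-D grid.
import numpy as np
import scipy.sparse as sp

def second_difference(n, h):
    """n-by-n matrix of the centered second difference delta^2 on interior
    points; boundary values would enter through the right-hand side."""
    e = np.ones(n)
    return sp.diags([e[:-1], -2.0*e, e[:-1]], offsets=[-1, 0, 1]) / h**2

def compact_operator(n, h):
    """A = I + (h^2/12) * delta^2, the fourth-order compact operator."""
    return sp.identity(n) + (h**2/12.0) * second_difference(n, h)
```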

Using the Crank–Nicolson method to approximate (2a) and (2b) at the point \((x_{i},y_{j},t_{k+\frac{1}{2}})\), we get

$$ \textstyle\begin{cases} (\mathrm{a})\quad \delta _{t}w_{ij}^{k+\frac{1}{2}}-\Delta v_{ij}^{k+\frac{1}{2}}=f _{ij}^{k+\frac{1}{2}}+g_{ij}^{k+\frac{1}{2}}, \\ (\mathrm{b})\quad \rho \Delta w_{ij}^{k+\frac{1}{2}}+\delta _{t}v_{ij}^{k+\frac{1}{2}}= \widetilde{g}_{ij}^{k+\frac{1}{2}}, \end{cases} $$
(3)

where

$$\begin{aligned}& f_{i,j}^{k+\frac{1}{2}}=f(x_{i},y_{j},t_{k+\frac{1}{2}}), \\& g_{ij}^{k+\frac{1}{2}}=\biggl\{ \frac{\tau ^{2}}{24}\frac{\partial ^{3} w}{\partial t^{3}}- \frac{\tau ^{2}}{8}\biggl[\frac{\partial ^{4} v}{ \partial x^{2}\partial t^{2}}+\frac{\partial ^{4} v}{\partial y^{2} \partial t^{2}}\biggr]\biggr\} \bigg|_{(x_{i},y_{j},t_{k+\frac{1}{2}})}+O\bigl(\tau ^{4}\bigr), \\& \widetilde{g}_{ij}^{k+\frac{1}{2}}=\biggl\{ \frac{\rho \tau ^{2}}{8}\biggl[ \frac{\partial ^{4} w}{\partial x^{2}\partial t^{2}}+\frac{\partial ^{4} w}{\partial y^{2}\partial t^{2}}\biggr]+\frac{\tau ^{2}}{24}\frac{\partial ^{3} v}{\partial t^{3}} \biggr\} \bigg|_{(x_{i},y_{j},t_{k+ \frac{1}{2}})}+O\bigl(\tau ^{4}\bigr). \end{aligned}$$

Setting \(v_{xx}=\theta \), \(v_{yy}=\vartheta \) and \(w_{xx}=\varphi \), \(w_{yy}= \psi \), (3a) and (3b) can be rewritten as

$$ \textstyle\begin{cases} (\mathrm{a})\quad \delta _{t}w_{ij}^{k+\frac{1}{2}}-(\theta _{ij}^{k+\frac{1}{2}}+ \vartheta _{ij}^{k+\frac{1}{2}})=f_{ij}^{k+\frac{1}{2}}+g_{ij}^{k+ \frac{1}{2}}, \\ (\mathrm{b})\quad \rho (\varphi _{ij}^{k+\frac{1}{2}}+\psi _{ij}^{k+\frac{1}{2}})+ \delta _{t}v_{ij}^{k+\frac{1}{2}}=\widetilde{g}_{ij}^{k+\frac{1}{2}}. \end{cases} $$
(4)

Defining the difference operators

$$ \mathbf{A}_{x}=1+\frac{h_{x}^{2}}{12}\delta _{x}^{2}, \qquad \mathbf{A}_{y}=1+\frac{h_{y}^{2}}{12}\delta _{y}^{2}, $$

and applying a Taylor expansion, we get

$$\begin{aligned}& \delta _{x}^{2} v_{ij}^{k}= \mathbf{A}_{x}\theta _{ij}^{k}+(R_{x})_{ij} ^{k},\qquad \delta _{y}^{2} v_{ij}^{k}= \mathbf{A}_{y}\vartheta _{ij}^{k}+(R_{y})_{ij} ^{k}, \\& \delta _{x}^{2} w_{ij}^{k}= \mathbf{A}_{x}\varphi _{ij}^{k}+(r_{x})_{ij} ^{k},\qquad \delta _{y}^{2} w_{ij}^{k}= \mathbf{A}_{y}\psi _{ij}^{k}+(r_{y})_{ij} ^{k}, \end{aligned}$$

where

$$\begin{aligned}& (R_{x})_{ij}^{k}=-\frac{h_{x}^{4}}{240} \frac{\partial ^{6}v_{ij}^{k}}{ \partial x^{6}}+O\bigl(h_{x}^{6}\bigr),\qquad (R_{y})_{ij}^{k}=-\frac{h_{y}^{4}}{240} \frac{\partial ^{6}v_{ij}^{k}}{ \partial y^{6}}+O\bigl(h_{y}^{6}\bigr). \\& (r_{x})_{ij}^{k}=-\frac{h_{x}^{4}}{240} \frac{\partial ^{6}w_{ij}^{k}}{ \partial x^{6}}+O\bigl(h_{x}^{6}\bigr),\qquad (r_{y})_{ij}^{k}=-\frac{h_{y}^{4}}{240} \frac{\partial ^{6}w_{ij}^{k}}{ \partial y^{6}}+O\bigl(h_{y}^{6}\bigr). \end{aligned}$$

Denote \(\mathbf{A}_{h}=\mathbf{A}_{x}\mathbf{A}_{y}\) and \(\mathbf{B}_{h}=\mathbf{A}_{y}\delta ^{2}_{x}+\mathbf{A}_{x}\delta ^{2}_{y}\). Applying \(\mathbf{A}_{h}\) to both sides of (4) and using the above relations, we get

$$ \textstyle\begin{cases} (\mathrm{a})\quad \mathbf{A}_{h}(\delta _{t}w_{ij}^{k+\frac{1}{2}})-\mathbf{B}_{h}v _{ij}^{k+\frac{1}{2}}=\mathbf{A}_{h}f_{ij}^{k+\frac{1}{2}}+ \overline{g}_{ij}^{k+\frac{1}{2}}, \\ (\mathrm{b})\quad \rho \mathbf{B}_{h}w_{ij}^{k+\frac{1}{2}}+\mathbf{A}_{h}(\delta _{t}v_{ij}^{k+\frac{1}{2}})=\widehat{g}_{ij}^{k+\frac{1}{2}}, \end{cases} $$
(5)

where

$$\begin{aligned}& \overline{g}_{ij}^{k+\frac{1}{2}}=\mathbf{A}_{h}g_{ij}^{k+\frac{1}{2}}- \mathbf{A}_{y}(R_{x})_{i,j}^{k+\frac{1}{2}}- \mathbf{A}_{x}(R_{y})_{i,j} ^{k+\frac{1}{2}} =O\bigl( \tau ^{2}+h_{x}^{4}+h_{y}^{4} \bigr), \\& \widehat{g}_{ij}^{k+\frac{1}{2}}=\mathbf{A}_{h} \widetilde{g}_{ij}^{k+ \frac{1}{2}}-\mathbf{A}_{y}(r_{x})_{i,j}^{k+\frac{1}{2}}- \mathbf{A} _{x}(r_{y})_{i,j}^{k+\frac{1}{2}} =O\bigl( \tau ^{2}+h_{x}^{4}+h_{y}^{4} \bigr). \end{aligned}$$

Replacing \(v_{i,j}^{k}\), \(w_{i,j}^{k}\) by their approximations \(V_{i,j}^{k}\), \(W_{i,j}^{k}\) and neglecting the small terms \(\overline{g}_{ij}^{k+\frac{1}{2}}\), \(\widehat{g}_{ij}^{k+\frac{1}{2}}\), we obtain the following finite difference scheme:

$$ \textstyle\begin{cases} \mathbf{A}_{h}(\delta _{t}W_{ij}^{k+\frac{1}{2}})-\mathbf{B}_{h}V_{ij} ^{k+\frac{1}{2}}=\mathbf{A}_{h}f_{ij}^{k+\frac{1}{2}},\quad 1 \leq i \leq N_{x},1 \leq j \leq N_{y},0 \leq k \leq N-1, \\ \rho \mathbf{B}_{h}W_{ij}^{k+\frac{1}{2}}+\mathbf{A}_{h}(\delta _{t}V _{ij}^{k+\frac{1}{2}})=0, \quad 1 \leq i \leq N_{x},1 \leq j \leq N _{y},0 \leq k \leq N-1, \end{cases} $$
(6)

where the discretized boundary values and initial values are denoted by

$$\begin{aligned}& W_{0,j}^{k}=\frac{\partial {h_{1}}}{\partial t}(y_{j},t_{k}), \qquad W_{N_{x}+1,j}^{k}=\frac{\partial {h_{2}}}{\partial t}(y_{j},t _{k}),\quad 0 \leq j \leq N_{y}+1, 0 \leq k \leq N,\\& W_{i,0}^{k}=\frac{\partial {h_{3}}}{\partial t}(x_{i},t_{k}), \qquad W_{i,N_{y}+1}^{k}=\frac{\partial {h_{4}}}{\partial t}(x_{i},t _{k}),\quad 0 \leq i \leq N_{x}+1, 0 \leq k \leq N,\\& V_{0,j}^{k}=-\rho g_{1}(y_{j},t_{k}), \qquad V_{N_{x}+1,j}^{k}=- \rho g_{2}(y_{j},t_{k}), \quad 0 \leq j \leq N_{y}+1, 0 \leq k \leq N,\\& V_{i,0}^{k}=-\rho g_{3}(x_{i},t_{k}), \qquad V_{i,N_{y}+1}^{k}=- \rho g_{4}(x_{i},t_{k}), \quad 0 \leq i \leq N_{x}+1, 0 \leq k \leq N,\\& W_{i,j}^{0}=f_{2}(x_{i},y_{j}), \quad 0 \leq i \leq N_{x}+1, 0 \leq j \leq N_{y}+1,\\& V_{i,j}^{0}=- \rho \Delta f_{1}(x_{i},y_{j}), \quad 0 \leq i \leq N_{x}+1, 0 \leq j \leq N_{y}+1. \end{aligned}$$
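As an illustration of how the scheme (6) can be implemented, the sketch below (again in Python/NumPy rather than the Matlab used in Sect. 5) advances the coupled Crank–Nicolson system one time level at a time by solving a single sparse block system. It assumes homogeneous boundary data for W and V (as in Example 1 of Sect. 5), so that no boundary corrections enter the right-hand side, and reuses the helper functions from the sketch above; the function and parameter names are ours.

```python
# Sketch of the compact scheme (6) under homogeneous boundary data.
# f(x, y, t) is the source term; W0 and V0 return the initial data on the grid.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def solve_compact(f, W0, V0, a, b, T, Nx, Ny, N, rho=1.0):
    hx, hy, tau = a/(Nx + 1), b/(Ny + 1), T/N
    x = np.linspace(hx, a - hx, Nx)                # interior nodes only
    y = np.linspace(hy, b - hy, Ny)
    X, Y = np.meshgrid(x, y, indexing='ij')

    Dxx, Dyy = second_difference(Nx, hx), second_difference(Ny, hy)
    Ax, Ay = compact_operator(Nx, hx), compact_operator(Ny, hy)
    Ah = sp.kron(Ax, Ay)                           # A_h = A_x A_y
    Bh = sp.kron(Dxx, Ay) + sp.kron(Ax, Dyy)       # B_h = A_y d_x^2 + A_x d_y^2

    # Constant coefficient matrix of the 2x2 block system solved at each step.
    M = sp.bmat([[Ah, -(tau/2)*Bh],
                 [(rho*tau/2)*Bh, Ah]]).tocsc()
    lu = splu(M)                                   # factorize once, reuse

    W, V = W0(X, Y).ravel(), V0(X, Y).ravel()
    for k in range(N):                             # march from t_k to t_{k+1}
        t_half = (k + 0.5)*tau
        rhs = np.concatenate([
            Ah @ W + (tau/2)*(Bh @ V) + tau*(Ah @ f(X, Y, t_half).ravel()),
            Ah @ V - (rho*tau/2)*(Bh @ W)])
        sol = lu.solve(rhs)
        W, V = sol[:Nx*Ny], sol[Nx*Ny:]
    return W.reshape(Nx, Ny), V.reshape(Nx, Ny)
```

For Example 1 in Sect. 5 one would pass the source term f together with \(W^{0}=f_{2}\) and \(V^{0}=-\rho \Delta f_{1}\), all of which vanish on the boundary there; for non-homogeneous boundary data the boundary contributions of \(\mathbf{A}_{h}\) and \(\mathbf{B}_{h}\) would have to be added to the right-hand side.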

Remark 2.1

From (5) it is easy to see that the local truncation error for this scheme is \(O(\tau ^{2}+h^{4})\).

3 Stability analysis

In this section, we adopt the Fourier method to analyze the stability of the scheme (6). Assume that \(h=g(\tau )\), where \(g(\tau )\) is a continuous function with \(g(0)=0\). In order to prove the stability of the scheme (6), we consider a difference scheme of the form

$$ \sum_{m\in N_{1}} A_{m} \mathbf{U}_{j+m}^{n+1}=\sum_{m\in N _{0}} B_{m} \mathbf{U}_{j+m}^{n}, $$
(7)

where \(A_{m}\) and \(B_{m}\) are \(2\times 2\) matrices, \(N_{0}\) and \(N_{1}\) are finite sets of integers containing 0, and \(\mathbf{U}_{j}^{n}\) is a two-dimensional column vector. Using the Fourier method we obtain the growth matrix \(G(x_{i},y_{j})\). Then the scheme (7) is stable if and only if the family of matrices

$$\begin{aligned} & \bigl\{ G^{n}(x_{i},y_{j});x_{0}=0< x_{1}< \cdots < x_{N_{x}+1}=a, \\ &\quad y_{0}=0< y_{1}< \cdots < y_{N_{y}+1}=b, n=1,2, \ldots ,N\bigr\} \end{aligned}$$
(8)

is uniformly bounded. We introduce the following two lemmas.

Lemma 3.1

([18])

The family of matrices (8) is uniformly bounded if and only if the family of matrices

$$ \bigl\{ G^{n}(x,y);0< x< a,0< y< b ,n=1,2,\ldots \bigr\} $$
(9)

is uniformly bounded.

Proof

Sufficiency is obvious; we now prove necessity. We use meshes with \(N_{x}=2^{m}\), \(N_{y}=2^{k}\), \(m=1, 2,\ldots \) , \(k=1,2,\ldots \) . Denote by \((x_{p},y_{q})\) the grid points of the mesh for given m and k, where \(x_{p}=\frac{a}{2^{p}}\), \(y_{q}=\frac{b}{2^{q}}\), \(p=1,2,\ldots ,m\), \(q=1,2,\ldots ,k\). Assume

$$ \bigl\Vert G^{n}(x_{p},y_{q})\bigr\Vert \leq M, \quad 0\leq n\tau \leq T, $$

where M is a constant independent of the partition. Letting \(\tau \rightarrow 0\) (and hence \(h\rightarrow 0\)), we get

$$ \bigl\Vert G^{n}(x_{p},y_{q}) \bigr\Vert \leq M,\quad n=1,2,\ldots . $$

Noting that the bisecting points \(\{(x_{p},y_{q})\}\) are dense on \([0,a]\times [0,b]\), and \(G(x,y)\) is a continuous function, we get

$$ \bigl\Vert G^{n}(x,y) \bigr\Vert \leq M,\quad 0\leq x\leq a,0\leq y\leq b, n=1,2,\ldots . $$

 □

Lemma 3.2

([18])

Assume \(G(x,y)\) is a \(2\times 2\) matrix and let \(g_{ij}\) denote its entry in the ith row and jth column. Let \(\lambda _{1}\) and \(\lambda _{2}\) be the eigenvalues of G. The family of matrices \(\{G^{n}(x,y)\}\) is uniformly bounded if and only if

$$ \textstyle\begin{cases} (\alpha )\quad |\lambda _{i}(x,y)|\leq 1,\quad i=1,2, 0\leq x\leq a,0\leq y\leq b, \\ (\beta )\quad \|G(x,y)-\frac{1}{2}(g_{11}(x,y)+g_{22}(x,y))I\| \\ \hphantom{(\beta )\quad}\quad \leq M(|1-|\lambda _{1}(x,y)||+|\lambda _{1}(x,y)-\lambda _{2}(x,y)|),\quad 0 \leq x\leq a,0\leq y\leq b. \end{cases} $$
(10)

Remark 3.1

  1. (1)

    From the relationship between the roots and coefficients of the quadratic equation \(\lambda ^{2}-b \lambda -c=0\), the moduli of both roots are at most one if and only if

    $$ |b|\leq 1-c \leq 2. $$
    (11)
  2. (2)

    In condition \((\beta )\) we need to calculate the norm of a \(2\times 2\) matrix. We use the Frobenius norm, which is defined as

    $$ \|\mathbf{K}\|_{F}=\Biggl(\sum _{i,j=1}^{2}|k_{ij}|^{2} \Biggr)^{\frac{1}{2}}, $$
    (12)

    for a matrix \(\mathbf{K}=(k_{ij})\).

Theorem 3.1

The scheme (6) is unconditionally stable.

Proof

We use the Fourier method to prove the stability of the scheme (6). Using the definitions of \({A}_{h}\) and \({B}_{h}\), and dropping the source term (which does not affect stability), the scheme (6) can be written as

$$ \textstyle\begin{cases} (1)\quad \frac{1}{144}[c_{1}+10(c_{2}+c_{3})+100c_{4}]-\frac{1}{24}(r_{1}d _{1} +2r_{2} d_{2}+2r_{3} d_{3}-20r_{1} d_{4}) \\ \hphantom{(1)\quad}\quad =\frac{1}{144}[\widehat{c_{1}}+10(\widehat{c_{2}}+\widehat{c_{3}})+100 \widehat{c_{4}}] +\frac{1}{24}(r_{1}\widehat{d_{1}}+2r_{2} \widehat{d_{2}}+2r_{3}\widehat{d_{3}} -20r_{1}\widehat{d_{4}}), \\ (2)\quad \frac{\rho }{24}(r_{1} c_{1}+2r_{2} c_{2}+2r_{3} c_{3} -20r_{1} c _{4})+\frac{1}{144}[d_{1}+10(d_{2}+d_{3})+100d_{4}] \\ \hphantom{(2)\quad}\quad =\frac{-\rho }{24}(r_{1}\widehat{c_{1}}+2r_{2}\widehat{c_{2}}+2r_{3} \widehat{c_{3}} -20r_{1}\widehat{c_{4}})+\frac{1}{144}[ \widehat{d_{1}}+10(\widehat{d_{2}}+\widehat{d_{3}})+100 \widehat{d_{4}}], \end{cases} $$
(13)

where

$$\begin{aligned}& r_{x}=\frac{\tau }{h_{x}^{2}}, \qquad r_{y}=\frac{\tau }{h_{y}^{2}}, \qquad r_{1}=r _{x}+r_{y},\qquad r_{2}=5r_{x}-r_{y},\qquad r_{3}=5r_{y}-r_{x}, \\& c_{1}= W_{i-1,j-1}^{k+1}+W_{i-1,j+1}^{k+1}+W_{i+1,j-1}^{k+1}+W_{i+1,j+1} ^{k+1}, \\& c_{2}= W_{i-1,j}^{k+1}+W_{i+1,j}^{k+1}, \qquad c_{3}=W_{i,j-1}^{k+1}+W_{i,j+1}^{k+1}, \qquad c_{4}=W_{i,j}^{k+1}, \\& d_{1}=V_{i-1,j-1}^{k+1}+V_{i-1,j+1}^{k+1}+V_{i+1,j-1}^{k+1}+V _{i+1,j+1}^{k+1}, \\& d_{2}=V_{i-1,j}^{k+1}+V_{i+1,j}^{k+1}, \qquad d_{3}=V_{i,j-1}^{k+1}+V_{i,j+1}^{k+1}, \qquad d_{4}=V_{i,j}^{k+1}, \\& \widehat{c_{1}}= W_{i-1,j-1}^{k}+W_{i-1,j+1}^{k}+W_{i+1,j-1} ^{k}+W_{i+1,j+1}^{k}, \\& \widehat{c_{2}}= W_{i-1,j}^{k}+W_{i+1,j}^{k}, \qquad \widehat{c_{3}}=W_{i,j-1}^{k}+W_{i,j+1}^{k}, \qquad \widehat{c_{4}}=W_{i,j}^{k}, \\& \widehat{d_{1}}=V_{i-1,j-1}^{k}+V_{i-1,j+1}^{k}+V_{i+1,j-1} ^{k}+V_{i+1,j+1}^{k}, \\& \widehat{d_{2}}=V_{i-1,j}^{k}+V_{i+1,j}^{k}, \qquad \widehat{d_{3}}=V_{i,j-1}^{k}+V_{i,j+1}^{k}, \qquad \widehat{d_{4}}=V_{i,j}^{k}. \end{aligned}$$

Let \(W_{jm}^{k}=v_{1}^{k}e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}}\), \(V_{jm}^{k}=v_{2}^{k}e^{i\sigma _{1}jh_{x}}e^{i\sigma _{2}mh_{y}}\), where \(v_{1}^{k}\) and \(v_{2}^{k}\) are the amplitudes at time level k, and \(\sigma _{1}\) and \(\sigma _{2}\) are the wave numbers in the x and y directions. Inserting these expressions into the coupled scheme (13), we have

$$\begin{aligned}& \biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e ^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} (m-1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h_{x}}e ^{i\sigma _{2} (m+1)h_{y}}\bigr)+\frac{10}{144} \bigl(e^{i\sigma _{1} jh_{x}}e^{i \sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m+1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}}+e^{i \sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} mh_{y}} \bigr)+\frac{100}{144}e^{i \sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}}\biggr]v_{1}^{k+1} \\& \qquad {}-\biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h _{y}}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} (m+1)h_{y}} \bigr)+\frac{1}{12}(5r_{x}-r_{y}) \bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} mh_{y}}\bigr)+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} jh_{x}}e^{i \sigma _{2} (m+1)h_{y}}\bigr) \\& \qquad {}-\frac{10}{12}(r_{x}+r_{y})e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} mh _{y}} \biggr]v_{2}^{k+1} \\& \quad =\biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e ^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} (m-1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h_{x}}e ^{i\sigma _{2} (m+1)h_{y}}\bigr)+\frac{10}{144} \bigl(e^{i\sigma _{1} jh_{x}}e^{i \sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m+1)h _{y}} \\& \qquad {}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}}+e^{i \sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} mh_{y}} \bigr)+\frac{100}{144}e^{i \sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}}\biggr]v_{1}^{k} \\& \qquad {}+\biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h _{y}}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} (m+1)h_{y}} \bigr)+\frac{1}{12}(5r_{x}-r_{y}) \bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} mh_{y}}\bigr)+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} jh_{x}}e^{i \sigma _{2} (m+1)h_{y}}\bigr) \\& \qquad {}-\frac{10}{12}(r_{x}+r_{y})e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}} \biggr]v_{2}^{k} \end{aligned}$$

and

$$\begin{aligned}& \rho \biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}}+e ^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}} \\& \qquad {}+e^{i \sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}}\bigr)+\frac{1}{12}(5r_{x}-r _{y}) \bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} mh_{y}} \bigr) \\& \qquad {}+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e ^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} jh_{x}}e ^{i\sigma _{2} (m+1)h_{y}}\bigr)- \frac{10}{12}(r_{x}+r_{y})e^{i\sigma _{1} jh _{x}}e^{i\sigma _{2} mh_{y}} \biggr]v_{1}^{k+1} \\& \qquad {}+\biggl[\frac{1}{144}\bigl(e^{i \sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} (j-1)h _{x}}e^{i\sigma _{2} (m+1)h_{y}}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m+1)h _{y}}\bigr)+\frac{10}{144} \bigl(e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e ^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m+1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}}+e^{i\sigma _{1} (j+1)h_{x}}e ^{i\sigma _{2} mh_{y}} \bigr)+\frac{100}{144}e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}}\biggr]v_{2}^{k+1} \\& \quad =-\rho \biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} (j-1)h _{x}}e^{i\sigma _{2} (m+1)h_{y}}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m+1)h _{y}}\bigr)+\frac{1}{12}(5r_{x}-r_{y}) \bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} mh_{y}}\bigr) \\& \qquad {}+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m-1)h _{y}}+e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m+1)h_{y}}\bigr)- \frac{10}{12}(r _{x}+r_{y})e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}} \biggr]v_{1}^{k} \\& \qquad {}+\biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m-1)h _{y}}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} (m+1)h_{y}}+e^{i\sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} (m-1)h_{y}} \\& \qquad {}+e^{i\sigma _{1} (j+1)h _{x}}e^{i\sigma _{2} (m+1)h_{y}}\bigr)+\frac{10}{144} \bigl(e^{i\sigma _{1} jh_{x}}e ^{i\sigma _{2} (m-1)h_{y}}+e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} (m+1)h _{y}} \\& \qquad {}+e^{i\sigma _{1} (j-1)h_{x}}e^{i\sigma _{2} mh_{y}}+e^{i \sigma _{1} (j+1)h_{x}}e^{i\sigma _{2} mh_{y}} \bigr)+\frac{100}{144}e^{i \sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}}\biggr]v_{2}^{k}. \end{aligned}$$

Dividing the above equations by \(e^{i\sigma _{1} jh_{x}}e^{i\sigma _{2} mh_{y}}\), we get

$$\begin{aligned} &\biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} h_{x}}e ^{i\sigma _{2} (-h_{y})} \\ &\qquad {}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2}h_{y}}\bigr)+\frac{10}{144} \bigl(e^{i \sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} (-h_{x})}+e ^{i\sigma _{1} h_{x}}\bigr)+ \frac{100}{144}\biggr]v_{1}^{n+1} \\ &\qquad {}-\biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} (-h_{y})} \\ &\qquad {}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} h_{y}}\bigr)+\frac{1}{12}(5r_{x}-r _{y}) \bigl(e^{i\sigma _{1} (-h_{x})}+e^{i\sigma _{1} h_{x}}\bigr) \\ &\qquad {}+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}\bigr)-\frac{10}{12}(r_{x}+r_{y}) \biggr]v_{2}^{n+1} \\ &\quad =\biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} h_{x}}e ^{i\sigma _{2} (-h_{y})} \\ &\qquad {}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} h_{y}}\bigr)+\frac{10}{144} \bigl(e^{i \sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} (-h_{x})}+e ^{i\sigma _{1} h_{x}}\bigr)+ \frac{100}{144}\biggr]v_{1}^{n} \\ &\qquad {}+\biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} (-h_{y})} \\ &\qquad {}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} h_{y}}\bigr)+\frac{1}{12}(5r_{x}-r _{y}) \bigl(e^{i\sigma _{1} (-h_{x})}+e^{i\sigma _{1} h_{x}}\bigr) \\ &\qquad {}+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}\bigr)-\frac{10}{12}(r_{x}+r_{y}) \biggr]v_{2}^{n} \end{aligned}$$
(14)

and

$$\begin{aligned} &\rho \biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i \sigma _{1} h_{x}}e^{i\sigma _{2} (-h_{y})} \\ &\qquad {}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} h_{y}}\bigr)+\frac{1}{12}(5r_{x}-r _{y}) \bigl(e^{i\sigma _{1} (-h_{x})}+e^{i\sigma _{1} h_{x}}\bigr) \\ &\qquad {}+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}\bigr)-\frac{10}{12}(r_{x}+r_{y}) \biggr]v_{1}^{n+1} \\ &\qquad {}+\biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} h_{x}}e ^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2}h_{y}} \bigr) \\ &\qquad {}+\frac{10}{144}\bigl(e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}+e ^{i\sigma _{1} (-h_{x})}+e^{i\sigma _{1} h_{x}}\bigr)+\frac{100}{144}\biggr]v_{2} ^{n+1} \\ &\quad =-\rho \biggl[\frac{1}{24}(r_{x}+r_{y}) \bigl(e ^{i\sigma _{1} (-h_{x})}e^{i \sigma _{2} (-h_{y})}+e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e ^{i\sigma _{1} h_{x}}e^{i\sigma _{2} (-h_{y})} \\ &\qquad {}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2} h_{y}}\bigr)+\frac{1}{12}(5r_{x}-r _{y}) \bigl(e^{i\sigma _{1} (-h_{x})}+e^{i\sigma _{1} h_{x}}\bigr) \\ &\qquad {}+\frac{1}{12}(5r_{y}-r_{x}) \bigl(e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}\bigr)-\frac{10}{12}(r_{x}+r_{y}) \biggr]v_{1}^{n} \\ &\qquad {}+\biggl[\frac{1}{144}\bigl(e^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} (-h_{y})}+e ^{i\sigma _{1} (-h_{x})}e^{i\sigma _{2} h_{y}}+e^{i\sigma _{1} h_{x}}e ^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{1} h_{x}}e^{i\sigma _{2}h_{y}} \bigr) \\ &\qquad {}+\frac{10}{144}\bigl(e^{i\sigma _{2} (-h_{y})}+e^{i\sigma _{2} h_{y}}+e ^{i\sigma _{1} (-h_{x})}+e^{i\sigma _{1} h_{x}}\bigr)+\frac{100}{144}\biggr]v_{2} ^{n}. \end{aligned}$$
(15)

Equations (14) and (15) can be written as

$$ \begin{pmatrix} a & b \\ -\rho b & a \end{pmatrix} \begin{pmatrix} v_{1}^{k+1} \\ v_{2}^{k+1} \end{pmatrix} = \begin{pmatrix} a & -b \\ \rho b & a \end{pmatrix} \begin{pmatrix} v_{1}^{k} \\ v_{2}^{k} \end{pmatrix} , $$
(16)

where

$$\begin{aligned}& \begin{aligned} a&=\frac{1}{144}\bigl[4\cos \sigma _{1} h_{x}\cos \sigma _{2} h_{y}+20(\cos \sigma _{1} h_{x}+\cos \sigma _{2} h_{y})+100 \bigr] \\ &=\frac{1}{36}\biggl[2\biggl(\cos ^{2}\frac{\sigma _{1} h_{x}}{2}\cos ^{2}\frac{\sigma _{2} h_{y}}{2}+\sin ^{2}\frac{\sigma _{1} h_{x}}{2}\sin ^{2}\frac{\sigma _{2} h_{y}}{2}\biggr) \\ &\quad {}+10\biggl(\cos ^{2}\frac{\sigma _{1} h_{x}}{2}+ \cos ^{2} \frac{\sigma _{2} h_{y}}{2}\biggr)+14\biggr] \\ &>0, \end{aligned} \\& \begin{aligned} b&=-\frac{1}{24}\bigl[4(r_{x}+r_{y}) \cos \sigma _{1} h_{x}\cos \sigma _{2} h _{y}+4(5r_{x}-r_{y})\cos \sigma _{1} h_{x} \\ &\quad {}+4(5r_{y}-r_{x})\cos \sigma _{2} h_{y}-20(r_{x}+r_{y})\bigr] \\ &=-\frac{r_{x}}{6}(\cos \sigma _{1} h_{x}\cos \sigma _{2} h _{y}+5\cos \sigma _{1} h_{x}- \cos \sigma _{2} h_{y}-5) \\ &\quad {}-\frac{r_{y}}{6}( \cos \sigma _{1} h_{x}\cos \sigma _{2} h_{y}-\cos \sigma _{1} h_{x}+5 \cos \sigma _{2} h_{y}-5) \\ &=-\frac{2r_{x}}{3}\biggl(\cos ^{2}\frac{\sigma _{1} h_{x}}{2} \cos ^{2}\frac{\sigma _{2} h_{y}}{2}-\cos ^{2}\frac{\sigma _{2} h_{y}}{2}+2 \cos ^{2}\frac{\sigma _{1} h_{x}}{2}-2\biggr) \\ &\quad {}-\frac{2r_{y}}{3}\biggl(\cos ^{2}\frac{\sigma _{1} h_{x}}{2}\cos ^{2}\frac{\sigma _{2} h_{y}}{2}-\cos ^{2}\frac{\sigma _{1} h_{x}}{2}+2 \cos ^{2}\frac{\sigma _{2} h_{y}}{2}-2\biggr) \\ &\geq 0. \end{aligned} \end{aligned}$$

Then from (16) we immediately obtain the growth matrix of the scheme (13),

$$ G(\sigma _{1} h_{x},\sigma _{2} h_{y})= \begin{pmatrix} a & b \\ -\rho b & a \end{pmatrix} ^{-1} \begin{pmatrix} a & -b \\ \rho b & a \end{pmatrix} = \frac{1}{a^{2}+b^{2}\rho } \begin{pmatrix} a^{2}-b^{2}\rho & -2ab \\ 2\rho ab & a^{2}-b^{2}\rho \end{pmatrix} . $$
(17)

By calculation, we obtain the following quadratic equation for the eigenvalues of the growth matrix \(G(\sigma _{1} h_{x},\sigma _{2} h_{y})\):

$$ \lambda ^{2}-2\frac{a^{2}-b^{2}\rho }{a^{2}+b^{2}\rho }\lambda + \frac{[a ^{2}-b^{2}\rho ]^{2}}{[a^{2}+b^{2}\rho ]^{2}}+4\frac{a^{2}b^{2}\rho }{[a ^{2}+b^{2}\rho ]^{2}}=0. $$
(18)

Obviously, we have

$$\begin{aligned}& \lambda _{1,2}=\frac{a^{2}-b^{2}\rho }{a^{2}+b^{2}\rho }\pm 2i \frac{ab\sqrt{ \rho }}{a^{2}+b^{2}\rho }, \end{aligned}$$
(19)
$$\begin{aligned}& |\lambda _{1,2}|^{2}=\frac{(a^{2}-b^{2} \rho )^{2}+(2ab\sqrt{\rho })^{2}}{(a ^{2}+b^{2}\rho )^{2}}= \frac{(a^{2}+b^{2}\rho )^{2}}{(a^{2}+b^{2} \rho )^{2}}=1. \end{aligned}$$
(20)

That is, the condition (10α) in Lemma 3.2 is satisfied.

Next, we have

$$\begin{aligned}& \biggl\Vert G(\sigma _{1} h_{x},\sigma _{2} h_{y})-\frac{1}{2}\bigl(g_{11}(\sigma _{1} h_{x},\sigma _{2} h_{y})+g_{22}(\sigma _{1} h_{x},\sigma _{2} h_{y})\bigr)I \biggr\Vert \\& \quad =2\sqrt{1+\rho ^{2}}\frac{ab}{a^{2}+b^{2}\rho }, \end{aligned}$$
(21)
$$\begin{aligned}& \bigl\vert 1- \bigl\vert \lambda _{1}(\sigma _{1} h_{x},\sigma _{2} h_{y}) \bigr\vert \bigr\vert + \bigl\vert \lambda _{1}( \sigma _{1} h_{x},\sigma _{2} h_{y})-\lambda _{2}( \sigma _{1} h_{x},\sigma _{2} h_{y}) \bigr\vert =4\sqrt{\rho }\frac{ab}{a^{2}+b^{2}\rho }. \end{aligned}$$
(22)

Hence, there exists a constant \(M \geq \frac{1}{2}\frac{\sqrt{1+ \rho ^{2}}}{\sqrt{\rho }}\) such that (10β) holds for any \(r_{x}>0\), \(r_{y}>0\). Then from Lemma 3.2 we conclude that the difference scheme (6) is unconditionally stable. □
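As an informal numerical check of (16)–(20), independent of the proof above, one can form the growth matrix for arbitrary positive \(\rho \), \(r_{x}\), \(r_{y}\) and wave numbers and verify that its eigenvalues have modulus one; a small Python/NumPy sketch (our own, not part of the paper):

```python
# Numerical sanity check of (17)-(20): the eigenvalues of the growth matrix G
# have modulus one for any positive rho, r_x, r_y and any wave numbers.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    rho = rng.uniform(0.1, 10.0)
    rx, ry = rng.uniform(0.01, 100.0, size=2)   # r_x = tau/h_x^2, r_y = tau/h_y^2
    s1, s2 = rng.uniform(0.0, np.pi, size=2)    # sigma_1*h_x, sigma_2*h_y
    a = (4*np.cos(s1)*np.cos(s2) + 20*(np.cos(s1) + np.cos(s2)) + 100) / 144
    b = -(4*(rx + ry)*np.cos(s1)*np.cos(s2) + 4*(5*rx - ry)*np.cos(s1)
          + 4*(5*ry - rx)*np.cos(s2) - 20*(rx + ry)) / 24
    G = np.linalg.solve(np.array([[a, b], [-rho*b, a]]),
                        np.array([[a, -b], [rho*b, a]]))
    assert np.allclose(np.abs(np.linalg.eigvals(G)), 1.0)
```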

4 Error analysis

In this section we give the convergence analysis by the energy method. We introduce the spaces \(S_{h}=\{u|u\in R^{(N_{x}+2)\times (N_{y}+2)} \}\) and \(S_{h}^{0}=\{u|u\in R^{(N_{x}+2)\times (N_{y}+2)},u_{0,j}=u_{N _{x}+1,j}=u_{i,0}=u_{i,N_{y}+1}=0,0\leq i\leq N_{x}+1,0\leq j\leq N _{y}+1\}\). For any \(u, v\in S_{h}^{0}\), we define inner products and norms as follows:

$$\begin{aligned}& (u,v)=\sum_{i=1}^{N_{x}}\sum _{j=1}^{N_{y}}u_{ij}v_{ij}h_{x}h_{y}, \\& (\delta _{x}u,\delta _{x}v)=\sum _{i=1}^{N_{x}+1}\sum_{j=1}^{N_{y}}( \delta _{x}u_{i-\frac{1}{2},j}) (\delta _{x}v_{i-\frac{1}{2},j})h_{x}h _{y}, \\& \bigl(\delta _{x}^{2}u,v\bigr)=\sum _{i=1}^{N_{x}}\sum_{j=1}^{N_{y}} \bigl(\delta _{x} ^{2}u_{ij}\bigr)v_{ij}h_{x}h_{y}, \\& \bigl(\delta _{y}^{2}\delta _{x}u,\delta _{x}v\bigr)=\sum_{i=1}^{N_{x}+1}\sum _{j=1} ^{N_{y}}\bigl(\delta _{y}^{2}\delta _{x}u_{i-\frac{1}{2},j}\bigr) (\delta _{x}v_{i- \frac{1}{2},j})h_{x}h_{y}, \\& \bigl(\delta _{x}^{2}\delta _{y}u,\delta _{y}v\bigr)=\sum_{i=1}^{N_{x}}\sum _{j=1} ^{N_{y}+1}\bigl(\delta _{x}^{2}\delta _{y}u_{i,j-\frac{1}{2}}\bigr) (\delta _{y}v _{i,j-\frac{1}{2}})h_{x}h_{y}, \\& \bigl( \delta _{x}^{2}\delta _{y}^{2}u,v\bigr)= \sum_{i=1}^{N_{x}}\sum _{j=1}^{N_{y}}\bigl( \delta _{x}^{2} \delta _{y}^{2}u_{ij}\bigr)v_{ij}h_{x}h_{y}, \\& \|u\|^{2}=(u,u),\qquad \|\delta _{x}u\|^{2}=( \delta _{x}u,\delta _{x}u), \\& \|\delta _{x} \delta _{y}u\|^{2}=\sum_{i=1}^{N_{x}+1} \sum_{j=1}^{N_{y}+1}( \delta _{x} \delta _{y}u_{i-\frac{1}{2},j-\frac{1}{2}})^{2}h_{x}h_{y}. \end{aligned}$$

Similarly, \((\delta _{y}^{2}u,v)\), \((\delta _{y}u,\delta _{y}v)\), \(\|\delta _{y}u\|^{2}\) and \(\|\delta _{y}\delta _{x}u\|^{2}\) can be defined.

For the error analysis, we first note that the exact solution of (2) satisfies (5), which can be written as

$$\begin{aligned} &\mathbf{A}_{h}\biggl(\frac{w_{ij}^{k+1}-w_{ij}^{k}}{\tau }\biggr)-\frac{1}{2} \mathbf{B}_{h}\bigl(v_{ij}^{k+1} +v_{ij}^{k} \bigr)=\mathbf{A}_{h}f_{ij}^{k+ \frac{1}{2}}+ \overline{g}_{ij}^{k+\frac{1}{2}}, \\ &\quad 1\leq i\leq N_{x},1\leq j\leq N_{y},0\leq k\leq N-1, \\ &\frac{\rho }{2}\mathbf{B}_{h}\bigl(w_{ij}^{k+1}+w_{ij}^{k} \bigr)+\mathbf{A} _{h}\biggl(\frac{v_{ij}^{k+1}-v_{ij}^{k}}{\tau }\biggr) = \widehat{g}_{ij}^{k+ \frac{1}{2}}, \\ &\quad 1\leq i\leq N_{x},1\leq j\leq N_{y},0\leq k\leq N-1, \end{aligned}$$

where

$$ \bigl\Vert \overline{g}^{k+\frac{1}{2}} \bigr\Vert \leq C_{1}\bigl(\tau ^{2}+h^{4}\bigr),\qquad \bigl\Vert \widehat{g}^{k+\frac{1}{2}} \bigr\Vert \leq C_{2}\bigl(\tau ^{2}+h^{4}\bigr), $$
(23)

where \(C_{1}\) and \(C_{2}\) are positive constants. Our numerical scheme (6) is equivalent to

$$\begin{aligned} &\mathbf{A}_{h}\biggl(\frac{W_{ij}^{k+1}-W_{ij}^{k}}{\tau }\biggr)-\frac{1}{2} \mathbf{B}_{h}\bigl(V_{ij}^{k+1}+V_{ij}^{k} \bigr)=\mathbf{A}_{h}f_{ij}^{k+ \frac{1}{2}}, \\ &\quad 1\leq i\leq N_{x},1\leq j\leq N_{y},0\leq k\leq N-1, \\ &\frac{\rho }{2}\mathbf{B}_{h}\bigl(W_{ij}^{k+1}+W_{ij}^{k} \bigr)+\mathbf{A} _{h}\biggl(\frac{V_{ij}^{k+1}-V_{ij}^{k}}{\tau }\biggr)=0, \\ &\quad 1\leq i\leq N_{x},1\leq j\leq N_{y},0\leq k\leq N-1. \end{aligned}$$

Letting \(\xi _{i,j}^{k}=w_{i,j}^{k}-W_{i,j}^{k}\) and \(\eta _{i,j}^{k}=v_{i,j}^{k}-V_{i,j}^{k}\) denote the approximation errors and subtracting the two systems, we obtain the error equations

$$\begin{aligned} &\mathbf{A}_{h}\bigl(\xi _{ij}^{k+1}-\xi _{ij}^{k}\bigr)-\frac{\tau }{2} \mathbf{B}_{h} \bigl(\eta _{ij}^{k+1}+\eta _{ij}^{k}\bigr)= \tau \overline{g}_{ij} ^{k+\frac{1}{2}}, \\ &\quad 1\leq i\leq N_{x},1\leq j\leq N_{y},0\leq k\leq N-1, \end{aligned}$$
(24)
$$\begin{aligned} &\frac{\rho \tau }{2}\mathbf{B}_{h}\bigl(\xi _{ij}^{k+1}+ \xi _{ij}^{k}\bigr)+ \mathbf{A}_{h}\bigl(\eta _{ij}^{k+1}-\eta _{ij}^{k}\bigr)=\tau \widehat{g}_{ij} ^{k+\frac{1}{2}}, \\ &\quad 1\leq i\leq N_{x},1\leq j\leq N_{y},0\leq k\leq N-1. \end{aligned}$$
(25)

Using the discrete Green formula, we see that the difference operators \(\delta _{x}^{2}\) and \(\delta _{y}^{2}\) are self-adjoint on \(S_{h}^{0}\); consequently, the operators \(\mathbf{A}_{h}\) and \(\mathbf{B}_{h}\) are self-adjoint as well, and \(\mathbf{A}_{h}\) is positive definite (see Lemma 4.2). To derive the error estimate, we first state the following lemmas.

Lemma 4.1

([7])

For any grid function \(u, v\in S_{h}^{0}\), we have

  1. (1)

    \((\mathbf{A}_{h}u,v)=(u,\mathbf{A}_{h}v)\), \((\mathbf{B} _{h}u,v)=(u,\mathbf{B}_{h}v)\),

  2. (2)

    \((\delta _{x}^{2}u,v)=(u,\delta _{x}^{2}v)\), \((\delta _{y} ^{2}u,v)=(u,\delta _{y}^{2}v)\).

Lemma 4.2

([19, 20])

For any grid function \(u\in S_{h}^{0}\), we have

  1. (1)

    \(\frac{2}{3}\|u\|^{2}\leq (\mathbf{A}_{x}u,u)\leq \|u\|^{2}\), \(\frac{2}{3}\|u\|^{2}\leq (\mathbf{A}_{y}u,u)\leq \|u\|^{2}\);

  2. (2)

    \(\frac{4}{9}\|u\|^{2}\leq (\mathbf{A}_{h}u,u)\leq \|u\|^{2}\);

  3. (3)

    \(h_{x}^{2}(\mathbf{A}_{y}\delta _{x}u,\delta _{x}u)\leq 4( \mathbf{A}_{y}u,u)\), \(h_{y}^{2}(\mathbf{A}_{x}\delta _{y}u,\delta _{y}u) \leq 4(\mathbf{A}_{x}u,u)\).

Theorem 4.1

Let \(\{w^{k},v^{k}\}\) denote the values at the grid points of the solution of Eq. (2) and \(\{W^{k},V^{k}\}\) be the solution of the scheme (6). Assuming that both \(r_{x}\) and \(r_{y}\) are bounded, we have

$$ \max_{0\leq k\tau \leq T}\bigl\{ \bigl\Vert w^{k}-W^{k} \bigr\Vert + \bigl\Vert v^{k}-V^{k} \bigr\Vert \bigr\} \leq C\bigl( \tau ^{2}+h^{4}\bigr). $$
(26)

Proof

Taking the inner product with \(\xi ^{k+1}+\xi ^{k}\) on both sides of (24), we have

$$ \bigl(\mathbf{A}_{h}\bigl(\xi ^{k+1}-\xi ^{k}\bigr),\xi ^{k+1}+\xi ^{k}\bigr)- \frac{\tau }{2}\bigl( \mathbf{B}_{h}\bigl(\eta ^{k+1}+\eta ^{k}\bigr),\xi ^{k+1}+\xi ^{k}\bigr) =\tau \bigl( \overline{g}^{k+\frac{1}{2}},\xi ^{k+1}+\xi ^{k}\bigr). $$
(27)

Taking the inner product with \(\eta ^{k+1}+\eta ^{k}\) on both sides of (25), we have

$$ \frac{\rho \tau }{2}\bigl(\mathbf{B}_{h}\bigl(\xi ^{k+1}+\xi ^{k}\bigr),\eta ^{k+1}+\eta ^{k} \bigr)+\bigl(\mathbf{A}_{h}\bigl(\eta ^{k+1}-\eta ^{k}\bigr),\eta ^{k+1}+\eta ^{k}\bigr) =\tau \bigl( \widehat{g}^{k+\frac{1}{2}},\eta ^{k+1}+\eta ^{k}\bigr). $$
(28)

From Lemma 4.1 we have

$$ \bigl(\mathbf{B}_{h}\bigl(\eta ^{k+1}+\eta ^{k}\bigr),\xi ^{k+1}+\xi ^{k}\bigr)=\bigl(\mathbf{B} _{h}\bigl(\xi ^{k+1}+\xi ^{k}\bigr),\eta ^{k+1}+\eta ^{k}\bigr). $$
(29)

Multiplying both sides of (27) by ρ, we have

$$\begin{aligned} &\rho \bigl(\mathbf{A}_{h}\bigl(\xi ^{k+1}-\xi ^{k} \bigr),\xi ^{k+1}+\xi ^{k}\bigr)-\frac{ \rho \tau }{2}\bigl( \mathbf{B}_{h}\bigl(\eta ^{k+1}+\eta ^{k}\bigr),\xi ^{k+1}+\xi ^{k}\bigr) \\ &\quad =\rho \tau \bigl(\overline{g}^{k+\frac{1}{2}},\xi ^{k+1}+\xi ^{k}\bigr). \end{aligned}$$
(30)

Adding (28) and (30) and using (29), we obtain

$$\begin{aligned} &\rho \bigl(\mathbf{A}_{h}\bigl(\xi ^{k+1}-\xi ^{k} \bigr),\xi ^{k+1}+\xi ^{k}\bigr)+\bigl( \mathbf{A}_{h} \bigl(\eta ^{k+1}-\eta ^{k}\bigr),\eta ^{k+1}+\eta ^{k}\bigr) \\ &\quad =\rho \tau \bigl(\overline{g}^{k+\frac{1}{2}},\xi ^{k+1}+\xi ^{k}\bigr)+\tau \bigl( \widehat{g}^{k+\frac{1}{2}},\eta ^{k+1}+ \eta ^{k}\bigr). \end{aligned}$$

Using the self-adjointness of \(\mathbf{A}_{h}\) (Lemma 4.1), we obtain

$$\begin{aligned} &\rho \bigl(\mathbf{A}_{h}\xi ^{k+1},\xi ^{k+1} \bigr)-\rho \bigl(\mathbf{A}_{h}\xi ^{k}, \xi ^{k} \bigr)+ \bigl(\mathbf{A}_{h}\eta ^{k+1},\eta ^{k+1} \bigr)-\bigl(\mathbf{A}_{h}\eta ^{k},\eta ^{k} \bigr) \\ &\quad =\rho \tau \bigl(\overline{g}^{k+\frac{1}{2}},\xi ^{k+1}+\xi ^{k}\bigr)+\tau \bigl( \widehat{g}^{k+\frac{1}{2}},\eta ^{k+1}+ \eta ^{k}\bigr). \end{aligned}$$

By the Cauchy–Schwarz inequality together with \(ab\leq \frac{1}{2}(a^{2}+b^{2})\) and \((a+b)^{2} \leq 2(a^{2}+b^{2})\), we get

$$\begin{aligned} &\rho \bigl(\mathbf{A}_{h}\xi ^{k+1},\xi ^{k+1} \bigr)-\rho \bigl(\mathbf{A}_{h}\xi ^{k}, \xi ^{k} \bigr)+ \bigl(\mathbf{A}_{h}\eta ^{k+1},\eta ^{k+1} \bigr)-\bigl(\mathbf{A}_{h}\eta ^{k},\eta ^{k} \bigr) \\ &\quad \leq \frac{\rho \tau }{2} \bigl\Vert \overline{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}+ \rho \tau \bigl( \bigl\Vert \xi ^{k+1} \bigr\Vert ^{2}+ \bigl\Vert \xi ^{k} \bigr\Vert ^{2}\bigr)+\frac{\tau }{2} \bigl\Vert \widehat{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}+\tau \bigl( \bigl\Vert \eta ^{k+1} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{k} \bigr\Vert ^{2}\bigr). \end{aligned}$$

Summing over k from 0 to n, we obtain

$$\begin{aligned} &\rho \bigl(\mathbf{A}_{h}\xi ^{n+1},\xi ^{n+1} \bigr)-\rho \bigl(\mathbf{A}_{h}\xi ^{0}, \xi ^{0} \bigr)+ \bigl(\mathbf{A}_{h}\eta ^{n+1},\eta ^{n+1} \bigr)-\bigl(\mathbf{A}_{h}\eta ^{0},\eta ^{0} \bigr) \\ &\quad \leq \frac{\tau }{2}\sum_{k=0}^{n} \bigl(\rho \bigl\Vert \overline{g}^{k+ \frac{1}{2}} \bigr\Vert ^{2}+ \bigl\Vert \widehat{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}\bigr)+\rho \tau \sum_{k=0}^{n}\bigl( \bigl\Vert \xi ^{k+1} \bigr\Vert ^{2}+ \bigl\Vert \xi ^{k} \bigr\Vert ^{2}\bigr) \\ &\qquad {}+\tau \sum_{k=0} ^{n}\bigl( \bigl\Vert \eta ^{k+1} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{k} \bigr\Vert ^{2} \bigr), \end{aligned}$$

which implies that

$$\begin{aligned} &\rho \bigl(\mathbf{A}_{h}\xi ^{n+1},\xi ^{n+1} \bigr)+ \bigl(\mathbf{A}_{h}\eta ^{n+1}, \eta ^{n+1} \bigr) \\ &\quad \leq \rho \bigl(\mathbf{A}_{h}\xi ^{0},\xi ^{0}\bigr)+\bigl(\mathbf{A}_{h}\eta ^{0}, \eta ^{0}\bigr) +\frac{\tau }{2}\sum_{k=0}^{n} \bigl(\rho \bigl\Vert \overline{g}^{k+ \frac{1}{2}} \bigr\Vert ^{2}+ \bigl\Vert \widehat{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}\bigr) \\ &\qquad {}+2\rho \tau \sum_{k=0}^{n+1} \bigl\Vert \xi ^{k} \bigr\Vert ^{2}+ 2\tau \sum _{k=0}^{n+1} \bigl\Vert \eta ^{k} \bigr\Vert ^{2}. \end{aligned}$$

From Lemma 4.2 and the fact that \(\xi _{i,j}^{0}=0\), \(\eta _{i,j}^{0}=0\), we have

$$ \frac{4}{9}\bigl(\rho \bigl\Vert \xi ^{n+1} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{n+1} \bigr\Vert ^{2}\bigr) \leq \frac{ \tau }{2}\sum_{k=0}^{n} \bigl(\rho \bigl\Vert \overline{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}+ \bigl\Vert \widehat{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}\bigr) +2\tau \sum_{k=0}^{n+1}\bigl(\rho \bigl\Vert \xi ^{k} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{k} \bigr\Vert ^{2}\bigr), $$
(31)

which implies that

$$ \rho \bigl\Vert \xi ^{n+1} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{n+1} \bigr\Vert ^{2} \leq \frac{9\tau }{8}\sum_{k=0}^{n}\bigl(\rho \bigl\Vert \overline{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}+ \bigl\Vert \widehat{g} ^{k+\frac{1}{2}} \bigr\Vert ^{2}\bigr)+\frac{9\tau }{2} \sum_{k=0}^{n+1}\bigl(\rho \bigl\Vert \xi ^{k} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{k} \bigr\Vert ^{2}\bigr). $$
(32)

Applying the discrete Gronwall lemma to (32) (for sufficiently small τ), we get

$$ \rho \bigl\Vert \xi ^{n+1} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{n+1} \bigr\Vert ^{2} \leq C\tau \sum _{k=0}^{n}\bigl( \rho \bigl\Vert \overline{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}+ \bigl\Vert \widehat{g}^{k+ \frac{1}{2}} \bigr\Vert ^{2}\bigr). $$
(33)

From (23) we obtain

$$ \rho \tau \sum_{k=0}^{n} \bigl\Vert \overline{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}\leq C\bigl( \tau ^{2}+h^{4}\bigr)^{2},\qquad \tau \sum _{k=0}^{n} \bigl\Vert \widehat{g}^{k+\frac{1}{2}} \bigr\Vert ^{2}\leq C\bigl(\tau ^{2}+h^{4} \bigr)^{2}. $$
(34)

Hence

$$ \rho \bigl\Vert \xi ^{n+1} \bigr\Vert ^{2}+ \bigl\Vert \eta ^{n+1} \bigr\Vert ^{2}\leq C\bigl(\tau ^{2}+h^{4}\bigr)^{2}. $$
(35)

Taking square roots in (35) yields (26). This completes the proof. □

5 Numerical experiments

In this section we give some numerical results for the two-dimensional model problems described below. All results were obtained using Matlab.

Example 1

We seek the numerical solution for the following problem:

$$ \textstyle\begin{cases} (\mathrm{a})\quad u_{tt}+\Delta ^{2} u=f(x,y,t),\quad 0< x,y< 1,t>0, \\ (\mathrm{b})\quad u(x,y,0)= f_{1}(x,y),\qquad \frac{\partial u}{\partial t}|_{(x,y,0)}=f _{2}(x,y),\quad 0\leq x,y \leq 1, \\ (\mathrm{c})\quad u|_{x=0}=h_{1}(y,t),\qquad u|_{x=1}=h_{2}(y,t), \\ \hphantom{(\mathrm{c})\quad} u|_{y=0}=h_{3}(x,t), \qquad u|_{y=1}=h_{4}(x,t),\quad t\geq 0, \\ (\mathrm{d})\quad \Delta u|_{x=0}=g_{1}(y,t),\qquad \Delta u|_{x=1}=g_{2}(y,t), \\ \hphantom{(\mathrm{d})\quad}\Delta u|_{y=0}=g_{3}(x,t),\qquad \Delta u|_{y=1}=g_{4}(x,t),\quad t\geq 0. \end{cases} $$
(36)

The exact solution is taken as \(u(x,y,t)=e^{-\pi t}\sin (\pi x) \sin (\pi y)\). The source term \(f(x,y,t)\) and the initial and boundary value functions in (36) are obtained from \(u(x,y,t)\); see the sketch below. We have \(v(x,y,t)=2\pi ^{2}e ^{-\pi t}\sin (\pi x)\sin (\pi y)\) and \(w(x,y,t)=-\pi e^{-\pi t} \sin (\pi x)\sin (\pi y)\). The compact difference scheme (6) is used to solve the problem (36). For comparison with our method, the standard central difference scheme is also used to solve this problem.
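A convenient way to generate \(f\), \(v\) and \(w\) from a manufactured solution \(u\) is symbolic differentiation. The following SymPy sketch (ours, not the authors' code) does this for Example 1 with \(\rho =1\):

```python
# Sketch: derive f = u_tt + rho*Delta^2 u, v = -rho*Delta u and w = u_t
# symbolically from the manufactured solution of Example 1.
import sympy as sym

x, y, t = sym.symbols('x y t')
rho = 1
u = sym.exp(-sym.pi*t) * sym.sin(sym.pi*x) * sym.sin(sym.pi*y)

lap = lambda g: sym.diff(g, x, 2) + sym.diff(g, y, 2)   # Laplacian
f = sym.simplify(sym.diff(u, t, 2) + rho*lap(lap(u)))   # equals (pi^2 + 4*pi^4)*u
v = sym.simplify(-rho*lap(u))                            # equals 2*pi^2*u
w = sym.diff(u, t)                                       # equals -pi*u
print(f, v, w, sep='\n')
```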

In our numerical results, the errors and computational orders in the \(L^{2}\)-norm and \(L^{\infty }\)-norm of the compact difference scheme and the central difference scheme are given in Tables 1–4. From these tables we find that the compact difference scheme achieves higher accuracy and efficiency than the central difference scheme on the same mesh. The exact solutions \(v(x,y,t)\) and \(w(x,y,t)\) on a mesh with \(h_{x}=h_{y}=0.05\) are plotted in Figs. 1 and 2 for \(t=1\), respectively. The numerical solutions \(\{V_{ij}^{n+1}\}\) and \(\{W_{ij}^{n+1}\}\) on the same mesh are plotted in Figs. 3 and 4 for \(t=1\).
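The error norms and computational orders reported in the tables can be computed as in the sketch below (with names of our own choosing). Since the bound in Theorem 4.1 is \(O(\tau ^{2}+h^{4})\), the time step is typically refined proportionally to \(h^{2}\) so that the fourth-order spatial accuracy is visible when h is halved.

```python
# Sketch: discrete L2 and maximum norms of the error (cf. Sect. 4) and the
# computational order when the spatial mesh is refined by a factor of two.
import numpy as np

def error_norms(W, w_exact, hx, hy):
    e = W - w_exact                      # error on the interior grid
    l2 = np.sqrt(hx*hy*np.sum(e**2))     # discrete L2 norm: ||e||^2 = sum e^2 * hx*hy
    linf = np.max(np.abs(e))             # maximum norm
    return l2, linf

def order(err_coarse, err_fine):
    """Computational order: log2 of the error ratio under mesh halving."""
    return np.log2(err_coarse / err_fine)
```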

Figure 1 Exact solution v

Figure 2 Exact solution w

Figure 3 Numerical solution V

Figure 4 Numerical solution W

Table 1 Errors and computational orders of compact difference scheme for W
Table 2 Errors and computational orders of central difference scheme for W
Table 3 Errors and computational orders of compact difference scheme for V
Table 4 Errors and computational orders of central difference scheme for V

Example 2

We consider the numerical solution for the problem (36) with the exact solution

$$ u(x,y,t)=t^{2}e^{\frac{-(x-0.5)^{2}-(y-0.5)^{2}}{\beta }}, \quad 0\leq x,y\leq 1,0\leq t\leq 1. $$

Then \(f(x,y,t)\) and the initial and boundary value functions in (36) can be obtained from \(u(x,y,t)\), and we get the functions

$$\begin{aligned}& v(x,y,t)=\biggl(\frac{4}{\beta }-\frac{(2x-1)^{2}}{\beta ^{2}}-\frac{(2y-1)^{2}}{ \beta ^{2}} \biggr)t^{2}e^{\frac{-(x-0.5)^{2}-(y-0.5)^{2}}{\beta }}, \quad 0\leq x,y\leq 1,0\leq t\leq 1, \\& w(x,y,t)=2te^{\frac{-(x-0.5)^{2}-(y-0.5)^{2}}{\beta }}, \quad 0\leq x,y\leq 1,0\leq t\leq 1. \end{aligned}$$

The compact difference scheme (6) is used to solve the non-homogeneous problem with \(\beta =10\) and \(\beta =\frac{1}{10}\).

Errors and computational orders in the \(L^{2}\)-norm and \(L^{\infty }\)-norm of the compact difference scheme and the central difference scheme with \(\beta =10\) are given in Tables 5–8. Tables 9–12 show the corresponding errors and computational orders with \(\beta =\frac{1}{10}\). From these tables we see that the compact difference scheme achieves higher accuracy than the central difference scheme on the same mesh when applied to this problem based on a Gaussian pulse.

Table 5 Errors and computational orders of compact difference scheme with \(\beta =10\) for W
Table 6 Errors and computational orders of central difference scheme with \(\beta =10\) for W
Table 7 Errors and computational orders of compact difference scheme with \(\beta =10\) for V
Table 8 Errors and computational orders of central difference scheme with \(\beta =10\) for V
Table 9 Errors and computational orders of compact difference scheme with \(\beta =\frac{1}{10}\) for W
Table 10 Errors and computational orders of central difference scheme with \(\beta =\frac{1}{10}\) for W
Table 11 Errors and computational orders of compact difference scheme with \(\beta =\frac{1}{10}\) for V
Table 12 Errors and computational orders of central difference scheme with \(\beta =\frac{1}{10}\) for V

6 Conclusions

In this article, we have developed a compact finite difference scheme for the two-dimensional fourth-order hyperbolic equation (1). The stability of the scheme is proved by Fourier analysis, and its convergence is established by the energy method. The numerical results show that the scheme has high-order accuracy and is efficient.