1 Introduction

Recently, differential equations with piecewise constant arguments (EPCA) have received much attention from a number of investigators [1–5] in various fields such as population dynamics, physics, mechanical systems, control science, and economics. The theory of EPCA was initiated in 1983 and 1984 with the contributions of Cooke and Wiener [6], Shah and Wiener [7], and Wiener [8], and it has been developed by many authors [9–13]. In 1993, Wiener, a pioneer of EPCA, collected in the book [14] the investigations of EPCA up to that time. Since then, continuous efforts have been devoted to studying various properties of EPCA [15–18].

Generally speaking, analytic solutions of EPCA are hard to obtain in many cases, and we are forced to use numerical methods to approximate them. Nevertheless, compared with the qualitative investigation of EPCA, the numerical study of EPCA started much later and remains scarce. The original work in this field should be attributed to Liu et al. [19]; we regard it as the key step toward solving EPCA by numerical methods. Subsequently, several results on the convergence, stability, and dissipativity of numerical solutions for EPCA have been reported [20–24]. However, all of them concern ordinary differential equations (ODEs). To the best of the author’s knowledge, only a few results have been presented on the numerical treatment of partial differential equations with piecewise constant arguments (PEPCA) [25, 26]. In these two articles, the authors investigated the numerical stability of θ-methods and Galerkin methods, respectively, for a simple PEPCA. In contrast to [25, 26], in the present paper we study a more complicated model and analyze the numerical stability.

In this paper, we consider the following initial boundary value problem (IBVP):

$$ \textstyle\begin{cases} u_{t}(x,t)=a^{2}u_{xx}(x,t)+bu_{xx}(x,[t])+cu_{xx} (x,2 [\frac {t+1}{2} ] ), \quad t> 0, \\ u(0,t)=u(1,t)=0, \\ u(x,0)=v(x), \end{cases} $$
(1)

where \(a,b,c \in\mathbb{R}\) and \(a\neq0\), \(u: \Omega=[0,1] \times [0,\infty)\rightarrow\mathbb{R}\), \(v: [0,1]\rightarrow\mathbb{R}\), and \([\cdot]\) denotes the greatest integer function.

For the sake of the coming discussion, we derive the following stability conditions for (1) by using a method similar to that in [27, 28].

Lemma 1

If the following conditions are satisfied:

$$ \textstyle\begin{cases} (a^{2}+b+c)((a^{2}+b-c)e^{-a^{2}\pi^{2}j^{2}}-(b-a^{2}-c))>0, \\ (a^{2}+b+c)((a^{2}+b+c)e^{-a^{2}\pi^{2}j^{2}}-(b-a^{2}+c))>0, \end{cases} $$
(2)

where

$$c\neq\frac{a^{2}}{e^{-a^{2}\pi^{2}j^{2}}-1},\quad a\neq0, $$

then the zero solution of the equation in (1) is asymptotically stable.
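As an illustration of Lemma 1, the following minimal Python sketch evaluates the two inequalities in (2). It assumes that j plays the role of a spatial mode index and that checking the first j_max values of j is an adequate surrogate for checking all of them; the function name and the truncation parameter are ours and are not part of the lemma.

```python
import math

def satisfies_condition_2(a, b, c, j_max=50):
    """Check the two inequalities of condition (2) for j = 1, ..., j_max.

    Assumption of this sketch: (2) is required for every mode index j,
    and checking the first j_max modes is taken as a practical surrogate.
    """
    if a == 0:
        raise ValueError("Lemma 1 requires a != 0")
    for j in range(1, j_max + 1):
        e = math.exp(-a**2 * math.pi**2 * j**2)
        if abs(c - a**2 / (e - 1.0)) < 1e-14:
            return False  # excluded case c = a^2 / (e^{-a^2 pi^2 j^2} - 1)
        cond1 = (a**2 + b + c) * ((a**2 + b - c) * e - (b - a**2 - c)) > 0
        cond2 = (a**2 + b + c) * ((a**2 + b + c) * e - (b - a**2 + c)) > 0
        if not (cond1 and cond2):
            return False
    return True

# Example: the coefficients of problem (31), a = 1, b = 1/2, c = 1/4
print(satisfies_condition_2(1.0, 0.5, 0.25))
```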

2 The stability of the numerical solution

In this section, we consider the numerical asymptotic stability of θ-methods for (1).

2.1 The difference equation

Let \(\Delta t>0\) and \(\Delta x>0\) be time and spatial stepsizes, respectively. We also assume that Δt satisfies \(\Delta t=1/m\), where \(m \geq1\) is an integer, and Δx satisfies \(\Delta x=1/p\) for \(p\in\mathbb{N}\). Define the mesh points

$$t_{n}=n\Delta t,\quad n=0,1,2,\ldots, $$

and

$$x_{i}=i\Delta x, \quad i=0,1,2,\ldots,p. $$

Applying the θ-methods to (1), we have

$$ \textstyle\begin{cases} \frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t} \\ \quad = \theta\{ a^{2}\frac{u_{i+1}^{n+1}-2u_{i}^{n+1}+u_{i-1}^{n+1}}{\Delta x^{2}}+b\frac {u^{h}(x_{i+1},[t_{n+1}])-2u^{h}(x_{i},[t_{n+1}])+u^{h}(x_{i-1},[t_{n+1}])}{\Delta x^{2}} \\ \qquad {}+c\frac{u^{h}(x_{i+1},2[\frac{t_{n+1}+1}{2}])-2u^{h}(x_{i},2[\frac {t_{n+1}+1}{2}])+u^{h}(x_{i-1},2[\frac{t_{n+1}+1}{2}])}{\Delta x^{2}}\} \\ \qquad {}+(1-\theta)\{a^{2}\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^{2}}+b\frac {u^{h}(x_{i+1},[t_{n}])-2u^{h}(x_{i},[t_{n}])+u^{h}(x_{i-1},[t_{n}])}{\Delta x^{2}} \\ \qquad {}+c\frac{u^{h}(x_{i+1},2[\frac{t_{n}+1}{2}])-2u^{h}(x_{i},2[\frac {t_{n}+1}{2}])+u^{h}(x_{i-1},2[\frac{t_{n}+1}{2}])}{\Delta x^{2}}\}, \\ u_{0}^{n}=u_{p}^{n}=0, \quad n=0,1,2,\ldots, \\ u_{i}^{0}=v(x_{i}), \quad i=0,1,2,\ldots,p, \end{cases} $$
(3)

where \(u_{i}^{n}\), \(u^{h}(x_{i},2[(t_{n}+1)/2])\) and \(u^{h}(x_{i},[t_{n}])\) are approximations to \(u(x_{i},t_{n})\), \(u(x_{i},2[(t_{n}+1)/2])\) and \(u(x_{i},[t_{n}])\), respectively.

Write \(n=km+l\), \(k=0,1,2,\ldots\) , \(l=0,1,\ldots, m-1\). By the same technique as in [29], we define \(u^{h}(x_{i},[t_{n}+\eta h])\triangleq u_{i}^{km}\), \(u^{h}(x_{i},2[(2k-1+lh+\eta h+1)/2])\triangleq u_{i}^{2km}\) and \(u^{h}(x_{i},2[(2k+lh+\eta h+1)/2])\triangleq u_{i}^{2km}\), where \(\eta\in[0,1]\). The equation in (3) then reduces to the following two recurrence relations:

$$\begin{aligned}& \frac{u_{i}^{km+l+1}-u_{i}^{km+l}}{\Delta t} \\& \quad =a^{2}\theta \biggl(\frac {u_{i+1}^{km+l+1}-2u_{i}^{km+l+1}+u_{i-1}^{km+l+1}}{\Delta x^{2}} \biggr) +a^{2}(1-\theta) \biggl(\frac {u_{i+1}^{km+l}-2u_{i}^{km+l}+u_{i-1}^{km+l}}{\Delta x^{2}} \biggr) \\& \qquad {}+(b+c) \biggl(\frac{u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km}}{\Delta x^{2}} \biggr), \end{aligned}$$
(4)

when k is even and

$$\begin{aligned}& \frac{u_{i}^{km+l+1}-u_{i}^{km+l}}{\Delta t} \\& \quad =a^{2}\theta \biggl(\frac {u_{i+1}^{km+l+1}-2u_{i}^{km+l+1}+u_{i-1}^{km+l+1}}{\Delta x^{2}} \biggr) +a^{2}(1-\theta) \biggl(\frac {u_{i+1}^{km+l}-2u_{i}^{km+l}+u_{i-1}^{km+l}}{\Delta x^{2}} \biggr) \\& \qquad {}+b \biggl(\frac{u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km}}{\Delta x^{2}} \biggr)+c \biggl(\frac{u_{i+1}^{(k+1)m}-2u_{i}^{(k+1)m}+u_{i-1}^{(k+1)m}}{\Delta x^{2}} \biggr), \end{aligned}$$
(5)

when k is odd.

On each interval \([n,n+1)\), the equation in (1) can be regarded as an ordinary PDE, so the θ-methods for (1) are convergent of order \(O(\Delta t+\Delta x^{2})\) if \(\theta\neq 1/2\) and of order \(O(\Delta t^{2}+\Delta x^{2})\) if \(\theta= 1/2\). A more detailed convergence analysis of the θ-methods can be found in [30, 31].

Let \(r=\Delta t/\Delta x^{2}\); then (4) and (5) become

$$\begin{aligned}& -a^{2}\theta ru_{i+1}^{km+l+1}+\bigl(1+2a^{2} \theta r\bigr)u_{i}^{km+l+1}-a^{2}\theta ru_{i-1}^{km+l+1} \\& \quad =a^{2}(1-\theta)ru_{i+1}^{km+l}+ \bigl(1-2a^{2}(1-\theta)r\bigr)u_{i}^{km+l}+a^{2}(1- \theta )ru_{i-1}^{km+l} \\& \qquad {}+(b+c)r\bigl(u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km} \bigr) \end{aligned}$$
(6)

and

$$\begin{aligned}& -a^{2}\theta ru_{i+1}^{km+l+1}+\bigl(1+2a^{2} \theta r\bigr)u_{i}^{km+l+1}-a^{2}\theta ru_{i-1}^{km+l+1} \\& \quad =a^{2}(1-\theta)ru_{i+1}^{km+l}+ \bigl(1-2a^{2}(1-\theta)r\bigr)u_{i}^{km+l}+a^{2}(1- \theta )ru_{i-1}^{km+l} \\& \qquad {} +br\bigl(u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km} \bigr)+cr\bigl(u_{i+1}^{(k+1)m}-2u_{i}^{(k+1)m}+u_{i-1}^{(k+1)m} \bigr), \end{aligned}$$
(7)

respectively. Moreover, letting \(i=1,2,\ldots,p-1\), (6) and (7) yield

$$\begin{aligned}& \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1+2a^{2}\theta r & -a^{2}\theta r & \cdots& 0 & 0\\ -a^{2}\theta r & 1+2a^{2}\theta r & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& 1+2a^{2}\theta r & -a^{2}\theta r\\ 0 & 0 & \cdots& -a^{2}\theta r & 1+2a^{2}\theta r \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l+1}\\ u_{2}^{km+l+1}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l+1} \end{array}\displaystyle \right ) \\& \quad =\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \omega& a^{2}(1-\theta)r & \cdots& 0 & 0\\ a^{2}(1-\theta)r & \omega& \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& \omega& a^{2}(1-\theta)r\\ 0 & 0 & \cdots& a^{2}(1-\theta)r & \omega \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l}\\ u_{2}^{km+l}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l} \end{array}\displaystyle \right ) \\& \qquad {}+(b+c)r \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 1 & \cdots& 0 & 0\\ 1 & -2 & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& -2 & 1\\ 0 & 0 & \cdots& 1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km}\\ u_{2}^{km}\\ \vdots\\ \vdots\\ u_{p-1}^{km} \end{array}\displaystyle \right ) \end{aligned}$$

and

$$\begin{aligned}& \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1+2a^{2}\theta r & -a^{2}\theta r & \cdots& 0 & 0\\ -a^{2}\theta r & 1+2a^{2}\theta r & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& 1+2a^{2}\theta r & -a^{2}\theta r\\ 0 & 0 & \cdots& -a^{2}\theta r & 1+2a^{2}\theta r \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l+1}\\ u_{2}^{km+l+1}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l+1} \end{array}\displaystyle \right ) \\& \quad =\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \omega& a^{2}(1-\theta)r & \cdots& 0 & 0\\ a^{2}(1-\theta)r & \omega& \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& \omega& a^{2}(1-\theta)r\\ 0 & 0 & \cdots& a^{2}(1-\theta)r & \omega \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l}\\ u_{2}^{km+l}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l} \end{array}\displaystyle \right ) \\& \qquad {}+br \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 1 & \cdots& 0 & 0\\ 1 & -2 & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& -2 & 1\\ 0 & 0 & \cdots& 1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km}\\ u_{2}^{km}\\ \vdots\\ \vdots\\ u_{p-1}^{km} \end{array}\displaystyle \right ) \\& \qquad {}+cr \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 1 & \cdots& 0 & 0\\ 1 & -2 & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& -2 & 1\\ 0 & 0 & \cdots& 1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{(k+1)m}\\ u_{2}^{(k+1)m}\\ \vdots\\ \vdots\\ u_{p-1}^{(k+1)m} \end{array}\displaystyle \right ), \end{aligned}$$

respectively, where \(\omega=1-2a^{2}(1-\theta)r\).

Introducing \(\mathbf{u}^{n}=(u_{1}^{n},u_{2}^{n},\ldots,u_{p-1}^{n})^{T}\), \(n=0,1,2,\ldots \) , \(\mathbf{v}(x)=(v(x_{1}),v(x_{2}),\ldots,v(x_{p-1}))^{T}\) and the \((p-1)\times(p-1)\) tridiagonal matrix \(\mathbf{F}=\operatorname{tridiag}(-1,2,-1)\), (3) becomes

$$ \textstyle\begin{cases} (\mathbf{I}+a^{2}\theta r\mathbf{F})\mathbf{u}^{km+l+1}=(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F})\mathbf{u}^{km+l}-(b+c)r\mathbf{F}\mathbf {u}^{km}, \\ \mathbf{u}^{0}=\mathbf{v}(x), \end{cases} $$
(8)

when k is even, and

$$ \textstyle\begin{cases} (\mathbf{I}+a^{2}\theta r\mathbf{F})\mathbf{u}^{km+l+1}=(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F})\mathbf{u}^{km+l}-br\mathbf{F}\mathbf {u}^{km}-cr\mathbf{F}\mathbf{u}^{(k+1)m}, \\ \mathbf{u}^{0}=\mathbf{v}(x), \end{cases} $$
(9)

when k is odd.
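The even-interval recursion (8) requires solving one tridiagonal linear system per time step, whereas the odd-interval recursion (9) also involves the not-yet-computed value \(\mathbf{u}^{(k+1)m}\) and is resolved through the closed form derived in Section 2.2. The following minimal sketch (illustrative only, not the authors' code) performs a single step of (8) for problem (31):

```python
import numpy as np

def theta_step_even(u_cur, u_km, a, b, c, theta, r):
    """One step of recursion (8) (k even): both deviating arguments use u^{km}."""
    n = len(u_cur)
    # F = tridiag(-1, 2, -1) of size (p-1) x (p-1)
    F = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    lhs = I + a**2 * theta * r * F
    rhs = (I - a**2 * (1 - theta) * r * F) @ u_cur - (b + c) * r * F @ u_km
    return np.linalg.solve(lhs, rhs)

# One step for problem (31) (a=1, b=1/2, c=1/4) with theta=1/2, p=16, m=128 (r=2)
p, m = 16, 128
r = (1.0 / m) / (1.0 / p) ** 2
x = np.linspace(0.0, 1.0, p + 1)[1:-1]          # interior grid points
u0 = np.sin(np.pi * x)                          # v(x) = sin(pi x)
u1 = theta_step_even(u0, u0, 1.0, 0.5, 0.25, 0.5, r)
print(u1[p // 2 - 1])                           # approximation to u(1/2, dt)
```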

2.2 Stability analysis

From (8), we obtain

$$ \mathbf{u}^{km+l+1}=\mathbf{R}\mathbf{u}^{km+l}+ \mathbf{S}\mathbf{u}^{km}, $$
(10)

where

$$\begin{aligned}& \mathbf{R}=\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\bigl(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F}\bigr), \\& \mathbf{S}=-(b+c)r\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\mathbf{F}. \end{aligned}$$

By (9), we also obtain

$$ \mathbf{u}^{km+l+1}=\mathbf{R}\mathbf{u}^{km+l}+ \mathbf{S}_{1}\mathbf {u}^{km}+\mathbf{S}_{2} \mathbf{u}^{(k+1)m}, $$
(11)

where

$$\begin{aligned}& \mathbf{R}=\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\bigl(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F}\bigr), \\& \mathbf{S}_{1}=-br\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\mathbf{F}, \\& \mathbf{S}_{2}=-cr\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\mathbf{F}. \end{aligned}$$

Iteration of (10) gives

$$ \mathbf{u}^{km+l+1}=\bigl(\mathbf{R}^{l+1}+\bigl( \mathbf{R}^{l+1}-\mathbf {I}\bigr) (\mathbf{R}-\mathbf{I})^{-1} \mathbf{S}\bigr)\mathbf{u}^{km}, $$
(12)

in the same way, from (11) we have

$$ \mathbf{u}^{km+l+1}=\bigl(\mathbf{R}^{l+1}+\bigl( \mathbf{R}^{l+1}-\mathbf {I}\bigr) (\mathbf{R}-\mathbf{I})^{-1} \mathbf{S}_{1}\bigr)\mathbf{u}^{km}+\bigl(\mathbf {R}^{l+1}-\mathbf{I}\bigr) (\mathbf{R}-\mathbf{I})^{-1} \mathbf{S}_{2}\mathbf {u}^{(k+1)m}. $$
(13)

Thus we get

$$ \mathbf{u}^{n}= \textstyle\begin{cases} (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S})\mathbf{u}^{km}, & k \mbox{ is even}, \\ (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S}_{1})\mathbf{u}^{km} \\ \quad {} +(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}_{2}\mathbf{u}^{(k+1)m}, & k \mbox{ is odd}. \end{cases} $$
(14)

So

$$ \mathbf{u}^{n}= \textstyle\begin{cases} (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S})\mathbf{u}^{2jm}, & n=2jm+l, \\ (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S}_{1})\mathbf{u}^{(2j-1)m} \\ \quad {}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}_{2}\mathbf{u}^{2jm}, & n=(2j-1)m+l. \end{cases} $$
(15)

Letting \(l=m-1\) in (12) and (13) gives

$$\textstyle\begin{cases} \mathbf{u}^{(2j+1)m}=(\mathbf{R}^{m}+(\mathbf{R}^{m}-\mathbf{I})(\mathbf {R}-\mathbf{I})^{-1}\mathbf{S})\mathbf{u}^{2jm}, \quad j=0,1,\ldots, \\ \mathbf{u}^{2jm}=(\mathbf{I}-(\mathbf{R}^{m}-\mathbf{I})(\mathbf {R}-\mathbf{I})^{-1}\mathbf{S}_{2})^{-1}(\mathbf{R}^{m}+(\mathbf {R}^{m}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf{S}_{1})\mathbf {u}^{(2j-1)m}, \quad j=1,2,\ldots. \end{cases} $$

Hence we have \(\mathbf{u}^{(2j+1)m}=\mathbf{M}\mathbf{u}^{(2j-1)m}\), where

$$\mathbf{M}=\bigl(\mathbf{R}^{m}+\bigl(\mathbf{R}^{m}- \mathbf{I}\bigr) (\mathbf {R}-\mathbf{I})^{-1}\mathbf{S}\bigr) \bigl( \mathbf{R}^{m}+\bigl(\mathbf{R}^{m}-\mathbf {I}\bigr) ( \mathbf{R}-\mathbf{I})^{-1}\mathbf{S}_{1}\bigr) \bigl( \mathbf{I}-\bigl(\mathbf {R}^{m}-\mathbf{I}\bigr) (\mathbf{R}- \mathbf{I})^{-1}\mathbf{S}_{2}\bigr)^{-1}. $$

Therefore

$$ \mathbf{u}^{n}= \textstyle\begin{cases} (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S})\mathbf{M}^{j}\mathbf{u}^{0}, & n=2jm+l, j=0,1,\ldots, \\ (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S}_{1})\mathbf{M}^{j-1}\mathbf{u}^{1} \\ \quad {}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}_{2}\mathbf{N}\mathbf{M}^{j}\mathbf{u}^{0},& n=(2j-1)m+l, j=1,2,\ldots, \end{cases} $$
(16)

where \(\mathbf{u}^{1}=\mathbf{N}\mathbf{u}^{0}\), \(\mathbf{N}=\mathbf {R}^{m}+(\mathbf{R}^{m}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}\) and \(l=0,1,\ldots,m-1\).
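To connect the closed form (16) with an actual computation, the sketch below (a minimal illustration under our own naming, not the authors' code) assembles R, S, \(\mathbf{S}_{1}\), \(\mathbf{S}_{2}\) of (10)-(11) and N, M of (16) for given data and reports the spectral radius of M, which governs the growth of the sequence \(\mathbf{u}^{(2j+1)m}=\mathbf{M}\mathbf{u}^{(2j-1)m}\):

```python
import numpy as np

def transition_matrices(a, b, c, theta, m, p):
    """Assemble R, S, S1, S2 of (10)-(11) and N, M of (16), with r = dt/dx^2."""
    r = (1.0 / m) / (1.0 / p) ** 2
    n = p - 1
    F = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    A_inv = np.linalg.inv(I + a**2 * theta * r * F)
    R = A_inv @ (I - a**2 * (1 - theta) * r * F)
    S = -(b + c) * r * A_inv @ F
    S1 = -b * r * A_inv @ F
    S2 = -c * r * A_inv @ F
    Rm = np.linalg.matrix_power(R, m)
    # (R - I)^{-1}(R^m - I); all these matrices are rational functions of F and commute
    G = np.linalg.solve(R - I, Rm - I)
    N = Rm + G @ S                                  # u^{(2j+1)m} = N u^{2jm}
    M = (Rm + G @ S) @ (Rm + G @ S1) @ np.linalg.inv(I - G @ S2)
    return N, M

# Coefficients of problem (31) with the Crank-Nicolson choice theta = 1/2
N, M = transition_matrices(1.0, 0.5, 0.25, 0.5, m=128, p=16)
rho = max(abs(np.linalg.eigvals(M)))
print("spectral radius of M:", rho)                 # < 1 indicates asymptotic stability
```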

Lemma 2

If the coefficients a, b and c satisfy

$$ \biggl\vert \frac{\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1)}{1-\frac{c}{a^{2}}(\beta ^{m}-1)} \biggr\vert < 1 $$
(17)

and

$$ \biggl\vert \beta^{m}+\frac{b+c}{a^{2}}\bigl( \beta^{m}-1\bigr) \biggr\vert < 1, $$
(18)

then the zero solution of the equation in (3) is asymptotically stable, where \(\lambda_{\mathbf{F}}\) denotes an eigenvalue of the matrix F and

$$ \beta=\frac{1-a^{2}(1-\theta)r\lambda_{\mathbf{F}}}{1+a^{2}\theta r\lambda _{\mathbf{F}}}. $$
(19)

Proof

From (16) and [25], we know that the largest eigenvalue (in modulus) of the matrix M is

$$\lambda_{\mathbf{M}}=\frac{ (\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1) ) (\beta^{m}+\frac{b+c}{a^{2}}(\beta^{m}-1) )}{1-\frac{c}{a^{2}}(\beta^{m}-1)}, $$

where β is defined in (19). The zero solution of the equation in (3) is asymptotically stable if and only if \(|\lambda_{\mathbf{M}}|<1\), which yields (17) and (18). □
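Since R, \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\) are rational functions of F, conditions (17) and (18) can be tested eigenvalue by eigenvalue. The sketch below assumes that \(\lambda_{\mathbf{F}}\) ranges over the eigenvalues \(4\sin^{2}(i\pi/(2p))\), \(i=1,\ldots,p-1\), of the tridiagonal matrix F and that (17)-(18) are required for each of them; under this reading it checks the conditions of Lemma 2 numerically:

```python
import math

def lemma2_conditions(a, b, c, theta, m, p):
    """Check (17) and (18) for each eigenvalue lambda_F of F (an assumption of
    this sketch), with beta given by (19) and r = dt/dx^2 = p^2/m."""
    r = p**2 / m
    for i in range(1, p):
        lam = 4.0 * math.sin(i * math.pi / (2 * p)) ** 2   # eigenvalues of tridiag(-1,2,-1)
        beta = (1 - a**2 * (1 - theta) * r * lam) / (1 + a**2 * theta * r * lam)
        bm = beta**m
        cond17 = abs((bm + (b / a**2) * (bm - 1)) / (1 - (c / a**2) * (bm - 1))) < 1
        cond18 = abs(bm + ((b + c) / a**2) * (bm - 1)) < 1
        if not (cond17 and cond18):
            return False
    return True

# Problems (31) and (32) with theta = 1/2, m = 128, p = 16
print(lemma2_conditions(1.0, 0.5, 0.25, 0.5, 128, 16))   # expected: True
print(lemma2_conditions(1.0, -2.0, 2.0, 0.5, 128, 16))   # expected: True
```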

Theorem 1

Under the conditions of Lemma 2, if the conditions

$$ \bigl(a^{2}+b+c\bigr) \bigl(\beta^{m}-1\bigr) \bigl(\bigl(a^{2}+b-c\bigr)\beta^{m}-\bigl(b-a^{2}-c \bigr)\bigr)< 0 $$
(20)

and

$$ \bigl(a^{2}+b+c\bigr) \bigl(\beta^{m}-1\bigr) \bigl(\bigl(a^{2}+b+c\bigr)\beta^{m}-\bigl(b-a^{2}+c \bigr)\bigr)< 0 $$
(21)

are satisfied, where \(c\neq a^{2}/(\beta^{m}-1)\), \(a\neq0\), then the zero solution of the equation in (3) is asymptotically stable.

Proof

If \(a\neq0\), (17) and (18) are equivalent to

$$\biggl(\frac{\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1)}{1-\frac{c}{a^{2}}(\beta ^{m}-1)}+1 \biggr) \biggl(\frac{\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1)}{1-\frac {c}{a^{2}}(\beta^{m}-1)}-1 \biggr)< 0 $$

and

$$\biggl(\beta^{m}+\frac{b+c}{a^{2}}\bigl(\beta^{m}-1\bigr)+1 \biggr) \biggl(\beta^{m}+\frac {b+c}{a^{2}}\bigl(\beta^{m}-1 \bigr)-1 \biggr)< 0. $$

After some manipulation we obtain (20) and (21). The proof is completed. □

Definition 1

The set of all points \((a,b,c)\) which satisfy (2) is called the asymptotic stability region and is denoted by H.

Definition 2

The set of all points \((a,b,c)\) which satisfy (20) and (21), that is, for which the θ-methods for (1) are asymptotically stable, is called the asymptotic stability region of the numerical method and is denoted by S.

For convenience, we divide the region H into three parts:

$$\begin{aligned}& H_{0}=\bigl\{ (a,b,c)\in H: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)=0\bigr\} , \\& H_{1}=\bigl\{ (a,b,c)\in H: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)>0\bigr\} , \\& H_{2}=\bigl\{ (a,b,c)\in H: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)< 0\bigr\} . \end{aligned}$$

In a similar way, we denote

$$\begin{aligned}& S_{0}=\bigl\{ (a,b,c)\in S: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)=0\bigr\} , \\& S_{1}=\bigl\{ (a,b,c)\in S: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)>0\bigr\} , \\& S_{2}=\bigl\{ (a,b,c)\in S: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)< 0\bigr\} . \end{aligned}$$

It is easy to see that \(H=H_{0}\cup H_{1}\cup H_{2}\), \(S=S_{0}\cup S_{1}\cup S_{2}\), and \(H_{i}\cap H_{j}=\emptyset\), \(S_{i}\cap S_{j}=\emptyset\), \(H_{i}\cap S_{j}=\emptyset\) for \(i \neq j\), \(i,j=0,1,2\).

Theorem 2

Under the constraints

$$ \frac{b-a^{2}-c}{b+a^{2}-c}\leq0 $$
(22)

and

$$ \frac{b-a^{2}+c}{b+a^{2}+c}\leq0, $$
(23)

if the following conditions are satisfied:

\(r\neq1/(a^{2} \lambda_{\mathbf{F}} (1-\theta))\) and

$$ \textstyle\begin{cases} r< \min\frac{2}{a^{2} \lambda_{\mathbf{F}} (1-2\theta)},\quad 0 \leq \theta< 1/2, \\ r>0,\quad 1/2 \leq\theta\leq1, \end{cases} $$
(24)

for m even, and

$$ r< \min\frac{1}{a^{2} \lambda_{\mathbf{F}} (1-\theta)}, $$
(25)

for m odd, then \(H_{1}\subseteq S_{1}\).

Proof

By the definition of \(H_{1}\), condition (2) is satisfied when (22) and (23) hold. In the same way, by the definition of \(S_{1}\), conditions (20) and (21) are satisfied when (22) and (23) hold and \(0<\beta ^{m}<1\), where β is defined in (19); hence \(H_{1}\subseteq S_{1}\). Finally, (24) and (25) are exactly the restrictions on r that guarantee \(0<\beta^{m}<1\). The proof is completed. □
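Reading (24) and (25) with \(\lambda_{\mathbf{F}}\) running over the eigenvalues of F, so that the minimum is attained at the largest eigenvalue, gives an explicit upper bound on r. This interpretation is an assumption of the following helper sketch, not a statement of the theorem:

```python
import math

def max_ratio_r(a, theta, p, m_even):
    """Largest admissible r = dt/dx^2 ensuring 0 < beta^m < 1 in the proof of
    Theorem 2, assuming lambda_F runs over the eigenvalues of F."""
    lam_max = 4.0 * math.sin((p - 1) * math.pi / (2 * p)) ** 2
    if m_even:
        if theta >= 0.5:
            return math.inf                      # no upper bound from (24)
        return 2.0 / (a**2 * lam_max * (1 - 2 * theta))
    # m odd: condition (25); theta = 1 gives no finite bound (cf. Remark 1)
    if theta >= 1.0:
        return math.inf
    return 1.0 / (a**2 * lam_max * (1 - theta))

# Explicit scheme (theta = 0) for a = 1 on a grid with p = 20, m even:
print(max_ratio_r(1.0, 0.0, 20, m_even=True))    # approximately 0.5
```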

Theorem 3

Under the constraints

$$ \frac{b-a^{2}-c}{b+a^{2}-c}\geq1 $$
(26)

and

$$ \frac{b-a^{2}+c}{b+a^{2}+c}\leq0, $$
(27)

if the following conditions are satisfied:

\(r\neq1/(a^{2} \lambda_{\mathbf{F}} (1-\theta))\) and

$$\textstyle\begin{cases} r< \min\frac{2}{a^{2} \lambda_{\mathbf{F}} (1-2\theta)},\quad 0 \leq \theta< 1/2, \\ r>0, \quad 1/2 \leq\theta\leq1, \end{cases} $$

for m even, and

$$r< \min\frac{1}{a^{2} \lambda_{\mathbf{F}} (1-\theta)}, $$

for m odd, then \(H_{2}\subseteq S_{2}\).

Proof

The proof is similar to that of Theorem 2 and is therefore omitted. □

Theorem 4

Under the constraints

$$\begin{aligned}& \frac{b+a^{2}-c}{b+a^{2}+c}=0, \end{aligned}$$
(28)
$$\begin{aligned}& \frac{b-a^{2}-c}{b+a^{2}+c}< 0, \end{aligned}$$
(29)

and

$$ \frac{b-a^{2}+c}{b+a^{2}+c}\leq0, $$
(30)

if the following conditions are satisfied:

\(r\neq1/(a^{2} \lambda_{\mathbf{F}} (1-\theta))\) and

$$\textstyle\begin{cases} r< \min\frac{2}{a^{2} \lambda_{\mathbf{F}} (1-2\theta)},\quad 0 \leq \theta< 1/2, \\ r>0,\quad 1/2 \leq\theta\leq1, \end{cases} $$

for m even, and

$$r< \min\frac{1}{a^{2} \lambda_{\mathbf{F}} (1-\theta)}, $$

for m odd, then \(H_{0}\subseteq S_{0}\).

Proof

This follows directly from the proof of Theorem 2. □

Remark 1

If \(\theta=1\), then the corresponding fully implicit finite difference scheme is unconditionally asymptotically stable.

3 Numerical experiments

To illustrate the theoretical results, we present some numerical examples in this section. Consider the following two problems:

$$\begin{aligned}& \textstyle\begin{cases} u_{t}(x,t)=u_{xx}(x,t)+\frac{1}{2} u_{xx}(x,[t])+\frac{1}{4} u_{xx} (x,2 [\frac{t+1}{2} ] ), \quad t> 0, \\ u(0,t)=u(1,t)=0, \\ u(x,0)=\sin(\pi x), \end{cases}\displaystyle \end{aligned}$$
(31)
$$\begin{aligned}& \textstyle\begin{cases} u_{t}(x,t)=u_{xx}(x,t)-2u_{xx}(x,[t])+2u_{xx} (x,2 [\frac {t+1}{2} ] ),\quad t> 0, \\ u(0,t)=u(1,t)=0, \\ u(x,0)=\sin(\pi x). \end{cases}\displaystyle \end{aligned}$$
(32)

In Tables 1–4 we list the absolute errors \(\operatorname{AE}(1/m,1/p)\), \(\operatorname{AE}(1/4m,1/2p)\) and \(\operatorname{AE}(1/2m, 1/2p)\) at \(x=1/2\), \(t=1\) of the θ-methods for (31) and (32); Tables 1 and 3 also report the ratio of \(\operatorname{AE}(1/m,1/p)\) to \(\operatorname{AE}(1/4m,1/2p)\), and Tables 2 and 4 the ratio of \(\operatorname{AE}(1/m,1/p)\) to \(\operatorname{AE}(1/2m,1/2p)\). These tables show that the numerical methods preserve their orders of convergence.

Table 1 Errors of (31) with \(\theta= 0\)
Table 2 Errors of (31) with \(\theta= 1/2\)
Table 3 Errors of (32) with \(\theta= 0\)
Table 4 Errors of (32) with \(\theta= 1/2\)
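The error behaviour reported in Tables 1–4 can be reproduced with a short script. The sketch below (not the authors' code) integrates (31) over [0,1] with recursion (8) and compares the value at \(x=1/2\), \(t=1\) with the exact solution on [0,1), which for \(v(x)=\sin(\pi x)\) is obtained by separation of variables as \(u(x,t)=((1+(b+c)/a^{2})e^{-a^{2}\pi^{2}t}-(b+c)/a^{2})\sin(\pi x)\); this closed form is our own short computation and is not taken from the paper. For \(\theta=1/2\), halving both Δt and Δx should roughly quarter the error, as in Table 2.

```python
import numpy as np

A, B, C, THETA = 1.0, 0.5, 0.25, 0.5   # problem (31), Crank-Nicolson choice

def solve_31(m, p):
    """Advance (31) from t = 0 to t = 1 with recursion (8) (k = 0 is even)."""
    r = p**2 / m
    n = p - 1
    F = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    x = np.linspace(0.0, 1.0, p + 1)[1:-1]
    u0 = np.sin(np.pi * x)
    lhs = I + A**2 * THETA * r * F
    u = u0.copy()
    for _ in range(m):
        rhs = (I - A**2 * (1 - THETA) * r * F) @ u - (B + C) * r * F @ u0
        u = np.linalg.solve(lhs, rhs)
    return x, u

def exact_31(x, t):
    """Exact solution of (31) on [0, 1) by separation of variables
    (an assumption of this sketch, not stated in the paper)."""
    s = (B + C) / A**2
    return ((1 + s) * np.exp(-A**2 * np.pi**2 * t) - s) * np.sin(np.pi * x)

for m, p in [(128, 16), (256, 32)]:
    x, u = solve_31(m, p)
    err = abs(u[p // 2 - 1] - exact_31(0.5, 1.0))
    print(f"m={m:4d}, p={p:3d}, AE(1/m,1/p) at (1/2,1) = {err:.3e}")
```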

In Figures 1–4 we plot the numerical solutions of the θ-methods; it is easy to see that they are asymptotically stable. In Figures 5 and 6 we plot the errors of the numerical solutions with \(\theta=1\); it can be seen that the numerical method is highly accurate.

Figure 1

The numerical solution of (31) with \(\theta= 0\), \(m = 6400\), \(p = 20\) and \(r = 1/16\)

Figure 2

The numerical solution of (31) with \(\theta= 0.5\), \(m = 128\), \(p = 16\) and \(r = 2\)

Figure 3

The numerical solution of (32) with \(\theta= 0\), \(m = 6400\), \(p = 20\) and \(r = 1/16\)

Figure 4

The numerical solution of (32) with \(\theta= 0.5\), \(m = 128\), \(p = 16\) and \(r = 2\)

Figure 5

Errors of (31) with \(\theta= 1\), \(m = 1024\), \(p = 32\) and \(r = 1\)

Figure 6

Errors of (32) with \(\theta= 1\), \(m = 1024\), \(p = 32\) and \(r = 1\)