1 Introduction

Impulsive differential equations (IDEs) are widely applied in numerous fields of science and technology: theoretical physics, mechanics, population dynamics, pharmacokinetics, industrial robotics, chemical technology, biotechnology, economics, etc. Recently, the theory of IDEs has been an object of active research; in particular, the stability of the exact solutions of IDEs has been widely studied (see [1, 2, 9, 16, 18] and the references therein). However, many IDEs cannot be solved analytically, or their analytical solution is complicated, so numerical methods are a natural choice.

In recent years, the stability of numerical methods for IDEs has attracted more and more attention (see [11, 12, 15, 17, 22, 29], etc.). The stability of Runge–Kutta methods with constant stepsize for scalar linear IDEs was studied in [17]. Runge–Kutta methods with variable stepsizes for multidimensional linear IDEs were investigated in [12]. Collocation methods for linear nonautonomous IDEs were considered in [29]. An improved linear multistep method for linear IDEs was investigated in [13]. The stability of the exact and numerical solutions of nonlinear IDEs was studied by the Lyapunov method in [11]. The stability of Runge–Kutta methods for a special kind of nonlinear IDEs was investigated via the properties of the corresponding differential equations without impulsive perturbations in [15]. Stability and asymptotic stability of the implicit Euler method for stiff IDEs in Banach spaces were studied in [22]. There is much further significant work on the numerical solution of impulsive differential equations, for example [6, 7, 10, 14, 23–27]. However, these works did not investigate the stability of numerical methods for non-stiff nonlinear IDEs under Lipschitz conditions. Consider the equation of the form

$$ \left \{ \textstyle\begin{array}{l} x'(t)=f(t,x(t)),\quad t> t_{0}, t\neq\tau_{k}, k=1,2,\ldots, \\ x(\tau_{k}^{+})=I_{k}(x(\tau_{k})),\quad k=1,2,\ldots,\\ x(t_{0}^{+})=x_{0}, \end{array}\displaystyle \right . $$
(1)

where \(x(t^{+})\) is the right limit of \(x(t)\), \(t_{0}=\tau_{0}<\tau_{1}<\tau _{2}<\cdots\), \(\lim_{k\rightarrow\infty}\tau_{k}=\infty\), the function \(f:[t_{0},+\infty)\times\mathbb{C}^{d}\rightarrow\mathbb{C}^{d}\) is continuous in t and Lipschitz continuous with respect to the second variable in the following sense: there is a positive real constant α such that

$$ \bigl\Vert f(t,x_{1})-f(t,x_{2}) \bigr\Vert \leq\alpha \Vert x_{1}-x_{2} \Vert $$
(2)

for arbitrary \(t\in[t_{0},\infty)\), \(x_{1},x_{2}\in\mathbb{C}^{d}\), where \(\| \cdot\|\) is any convenient norm on \(\mathbb{C}^{d}\). We also assume that each function \(I_{k}\), \(k=1,2,\ldots\), is Lipschitz continuous, i.e. there is a positive constant \(\beta_{k}\) such that

$$ \bigl\Vert I_{k}(x)-I_{k}(y) \bigr\Vert \leq\beta_{k} \Vert x-y \Vert \quad\text{for all } x, y\in \mathbb{C}^{d}. $$
(3)

Definition 1.1

(See [1])

A function \(x:[t_{0},\infty)\rightarrow\mathbb{C}^{d}\) is said to be a solution of (1), if

  1. (i)

    \(\lim_{t\rightarrow t_{0}^{+}}x(t)=x_{0}\),

  2. (ii)

    for \(t\in(t_{0},+\infty)\), \(t\neq\tau_{k}\), \(k=1,2,\dots\), \(x(t)\) is differentiable and \(x'(t)=f(t,x(t))\),

  3. (iii)

    \(x(t)\) is left continuous in \((t_{0},+\infty)\) and \(x(\tau _{k}^{+})=I_{k}(x(\tau_{k}))\), \(k=1,2,\dots\).

2 Asymptotic stability of the exact solution

In this section, we study the asymptotic stability of the exact solution of (1). In order to investigate the asymptotic stability of \(x(t)\), consider Eq. (1) with different initial data:

$$ \left \{ \textstyle\begin{array}{l} y'(t)=f(t,y(t)),\quad t>t_{0}, t\neq\tau_{k}, k\in\mathbb{Z}^{+}, \\ y(\tau_{k}^{+})=I_{k}(y(\tau_{k})),\quad k\in\mathbb{Z}^{+},\\ y(t_{0}^{+})=y_{0}, \end{array}\displaystyle \right . $$
(4)

where \(\mathbb{Z}^{+}=\{1,2,\ldots\}\).

Definition 2.1

([1, 18])

The exact solution \(x(t)\) of (1) is said to be

  1. 1

    stable if, for an arbitrary \(\epsilon>0\), there exists a positive number \(\delta=\delta(\epsilon)\) such that, for any other solution \(y(t)\) of (4), \(\|x_{0}-y_{0}\|<\delta\) implies

    $$ \bigl\Vert x(t)-y(t) \bigr\Vert < \epsilon,\quad \forall t>t_{0}; $$
  2. 2

    asymptotically stable, if it is stable and \(\lim_{t\rightarrow\infty}\|x(t)-y(t)\|=0\).

Theorem 2.2

Assume that there exists a positive constant γ such that \(\tau _{k}-\tau_{k-1}\leq\gamma\), \(k\in\mathbb{Z}^{+}\). The exact solution of (1) is asymptotically stable if there is a positive constant C such that

$$ \beta_{k} \mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})}\leq C< 1 $$
(5)

for arbitrary \(k\in\mathbb{Z}^{+}\).

Proof

For arbitrary \(t\in(\tau_{k}, \tau_{k+1}]\), \(k=0,1,2,\dots \), we obtain

$$\begin{aligned} \bigl\Vert x(t)-y(t) \bigr\Vert &= \biggl\Vert x\bigl( \tau_{k}^{+}\bigr)-y\bigl(\tau_{k}^{+}\bigr)+ \int_{\tau _{k}}^{t}\bigl(f\bigl(s,x(s)\bigr)-f\bigl(s,y(s) \bigr)\bigr)\,ds \biggr\Vert \\ &\leq \bigl\Vert x\bigl(\tau_{k}^{+}\bigr)-y\bigl( \tau_{k}^{+}\bigr) \bigr\Vert + \int_{\tau_{k}}^{t} \bigl\Vert f\bigl(s,x(s)\bigr)-f \bigl(s,y(s)\bigr) \bigr\Vert \,ds \\ &\leq \bigl\Vert x\bigl(\tau_{k}^{+}\bigr)-y\bigl( \tau_{k}^{+}\bigr) \bigr\Vert +\alpha \int_{\tau _{k}}^{t} \bigl\Vert x(s)-y(s) \bigr\Vert \,ds. \end{aligned}$$

By the Gronwall theorem, for arbitrary \(t\in(\tau_{k}, \tau_{k+1}]\), \(k=0,1,2,\dots\), we have

$$ \bigl\Vert x(t)-y(t) \bigr\Vert \leq \bigl\Vert x\bigl( \tau_{k}^{+}\bigr)-y\bigl(\tau_{k}^{+}\bigr) \bigr\Vert \mathrm{e}^{\alpha(t-\tau_{k})}, $$

which implies, by Definition 1.1(iii),

$$ \bigl\Vert x(\tau_{k+1})-y(\tau_{k+1}) \bigr\Vert =\lim _{t\rightarrow\tau ^{-}_{k+1}} \bigl\Vert x(t)-y(t) \bigr\Vert \leq \bigl\Vert x \bigl(\tau_{k}^{+}\bigr)-y\bigl(\tau _{k}^{+}\bigr) \bigr\Vert \mathrm{e}^{\alpha(\tau_{k+1}-\tau_{k})}, $$

which also implies

$$\begin{aligned} & \bigl\Vert x\bigl(\tau^{+}_{k+1}\bigr)-y\bigl(\tau^{+}_{k+1} \bigr) \bigr\Vert \\ &\quad= \bigl\Vert I_{k+1}\bigl(x(\tau_{k+1})\bigr)-I_{k+1} \bigl(y(\tau_{k+1})\bigr) \bigr\Vert \\ &\quad\leq\beta_{k+1} \bigl\Vert x(\tau_{k+1})-y( \tau_{k+1}) \bigr\Vert \\ &\quad\leq \bigl\Vert x\bigl(\tau_{k}^{+}\bigr)-y\bigl( \tau_{k}^{+}\bigr) \bigr\Vert \beta_{k+1}\mathrm {e}^{\alpha(\tau_{k+1}-\tau_{k})}. \end{aligned}$$

Therefore, by the method of induction and the conditions (3) and (5), for arbitrary \(t\in(\tau_{k}, \tau _{k+1}]\), \(k=0,1,2,\dots\), we obtain

$$\begin{aligned} & \bigl\Vert x(t)-y(t) \bigr\Vert \\ &\quad\leq \Vert x_{0}-y_{0} \Vert \bigl(\beta_{1} \mathrm {e}^{\alpha(\tau_{1}-\tau_{0})}\bigr) \bigl(\beta_{2}\mathrm{e}^{\alpha(\tau_{2}-\tau _{1})} \bigr)\cdots \bigl(\beta_{k} \mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})}\bigr) \mathrm {e}^{\alpha(t-\tau_{k})} \\ &\quad\leq C^{k} \Vert x_{0}-y_{0} \Vert \mathrm{e}^{\alpha(t-\tau_{k})} \\ &\quad\leq C^{k} \Vert x_{0}-y_{0} \Vert \mathrm{e}^{\alpha(\tau_{k+1}-\tau_{k})} \\ &\quad\leq C^{k} \Vert x_{0}-y_{0} \Vert \mathrm{e}^{\alpha\gamma}, \end{aligned}$$

which implies \(\|x(\tau_{k+1})-y(\tau_{k+1})\| \leq C^{k}\|x_{0}-y_{0}\| \mathrm{e}^{\alpha\gamma}\) and \(\|x(\tau^{+}_{k+1})-y(\tau^{+}_{k+1})\| \leq\|x_{0}-y_{0}\| C^{k+1}\). Hence for an arbitrary \(\epsilon>0\), there exists \(\delta=\mathrm {e}^{-\alpha\gamma}\epsilon\) such that \(\|x_{0}-y_{0}\|<\delta\) implies

$$ \bigl\Vert x(t)-y(t) \bigr\Vert \leq C^{k} \Vert x_{0}-y_{0} \Vert \mathrm {e}^{\alpha\gamma}\leq \Vert x_{0}-y_{0} \Vert \mathrm{e}^{\alpha\gamma } < \epsilon $$

for arbitrary \(t\in(\tau_{k}, \tau_{k+1}]\), \(k=0,1,2,\dots\), i.e.

$$ \bigl\Vert x(t)-y(t) \bigr\Vert < \epsilon,\quad \forall t>t_{0}. $$

So the exact solution of (1) is stable. Obviously, for arbitrary \(t\in(\tau_{k}, \tau_{k+1}]\), \(k=0,1,2,\dots\),

$$ \bigl\Vert x(t)-y(t) \bigr\Vert \leq C^{k} \Vert x_{0}-y_{0} \Vert \mathrm {e}^{\alpha\gamma}\rightarrow0,\quad k\rightarrow\infty. $$

Similarly, we also obtain

$$ \bigl\Vert x(\tau_{k+1})-y(\tau_{k+1}) \bigr\Vert \leq C^{k} \Vert x_{0}-y_{0} \Vert \mathrm{e}^{\alpha\gamma}\rightarrow0,\quad k\rightarrow \infty, $$

and

$$ \bigl\Vert x\bigl(\tau^{+}_{k+1}\bigr)-y\bigl(\tau^{+}_{k+1} \bigr) \bigr\Vert \leq C^{k+1} \Vert x_{0}-y_{0} \Vert \rightarrow0,\quad k\rightarrow\infty. $$

Consequently, the exact solution of (1) is asymptotically stable. □

From the proof of Theorem 2.2, we can obtain the following result.

Remark 2.3

If the condition (5) of Theorem 2.2 is replaced by the weaker condition

$$ \beta_{k} \mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})}\leq1,\quad \forall k\in \mathbb{Z}^{+}, $$
(6)

then the exact solution of (1) is stable.
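Conditions (5) and (6) are easy to check numerically for concrete problem data. The following Python sketch is only an illustration of this check (the names alpha, beta and tau are our own, not notation from the references); it evaluates the factors \(\beta_{k}\mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})}\) for the data of Example 4.2 below, where every factor equals \(\frac{\sqrt{\mathrm{e}}}{2}\approx0.824\).

```python
import math

def stability_factors(alpha, beta, tau):
    """Factors beta_k * exp(alpha * (tau_k - tau_{k-1})) of conditions (5)/(6).

    alpha : Lipschitz constant of f in (2)
    beta  : Lipschitz constants beta_1, beta_2, ... of the impulse maps in (3)
    tau   : impulse points tau_0 < tau_1 < ... (tau_0 = t_0)
    """
    return [beta[k - 1] * math.exp(alpha * (tau[k] - tau[k - 1]))
            for k in range(1, len(tau))]

# Data of Example 4.2: alpha = 1/2, beta_k = 1/2, tau_k = k.
print(stability_factors(0.5, [0.5] * 10, list(range(11))))
```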

3 Runge–Kutta methods

In this section, we construct Runge–Kutta methods for (1) as follows:

$$ \textstyle\begin{cases} X_{k,l}^{i}=x_{k,l}+h_{k} \sum_{j=1}^{s} a_{ij} f(t_{k,l}^{j},X_{k,l}^{j}),\quad \mbox{$ k\in\mathbb{N}$, $i=1,2,\ldots,s$},\\ x_{k,l+1}=x_{k,l}+h_{k} \sum_{i=1}^{s} b_{i} f(t_{k,l}^{i},X_{k,l}^{i}),\quad \mbox{$l=0,1,2,\ldots,m-1$},\\ x_{k+1,0}=I_{k+1}(x_{k,m}),\\ x_{0,0}=x_{0}, \end{cases} $$
(7)

where \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(t_{k,l}=\tau_{k}+l h_{k}\), \(t_{k,l}^{i}=t_{k,l}+c_{i}h_{k}\), \(x_{k,l}\) is an approximation to the exact solution \(x(t_{k,l})\) and \(X_{k,l}^{i}\) is an approximation to \(x(t_{k,l}^{i})\), \(k\in\mathbb{N}=\{0,1,2,\ldots\}\), \(l=0,1,\ldots,m-1\), \(i=1,2,\ldots,s\), and s is the number of stages. The method with weights \(b_{i}\), abscissae \(c_{i}=\sum_{j=1}^{s} a_{ij}\) and coefficient matrix \(A=[a_{ij}]_{i,j=1}^{s}\) is denoted by \((A, b, c)\). Similarly, the Runge–Kutta methods for (4) are constructed as follows:

$$ \textstyle\begin{cases} Y_{k,l}^{i}=y_{k,l}+h_{k} \sum_{j=1}^{s} a_{ij} f(t_{k,l}^{j},Y_{k,l}^{j}), \quad\mbox{$ k\in\mathbb{N}$, $i=1,2,\ldots,s$},\\ y_{k,l+1}=y_{k,l}+h_{k} \sum_{i=1}^{s} b_{i} f(t_{k,l}^{i},Y_{k,l}^{i}), \quad\mbox{$l=0,1,2,\ldots,m-1$},\\ y_{k+1,0}=I_{k+1}(y_{k,m}),\\ y_{0,0}=y_{0}. \end{cases} $$
(8)
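The scheme (7) can be implemented directly once a Butcher tableau \((A,b,c)\) is fixed. The following Python sketch is a minimal illustration of (7) for explicit tableaux only (A strictly lower triangular); the function name rk_impulsive and its calling convention are our own. It advances the solution with m steps of size \(h_{k}\) on each interval \((\tau_{k},\tau_{k+1}]\) and then applies the impulse map \(I_{k+1}\).

```python
import numpy as np

def rk_impulsive(f, impulses, taus, x0, A, b, m):
    """Sketch of scheme (7) for an explicit tableau (A strictly lower triangular).

    f        : right-hand side f(t, x) of (1)
    impulses : impulse maps [I_1, I_2, ...]
    taus     : impulse points [tau_0, tau_1, ..., tau_K]
    x0       : initial value x(t_0^+)
    A, b     : Butcher tableau as NumPy arrays (A is s x s, b has length s)
    m        : steps per impulse interval, h_k = (tau_{k+1} - tau_k) / m
    """
    s = len(b)
    c = A.sum(axis=1)                       # abscissae c_i = sum_j a_ij
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    ts, xs = [taus[0]], [x.copy()]
    for k in range(len(taus) - 1):
        h = (taus[k + 1] - taus[k]) / m     # stepsize h_k
        for l in range(m):
            t = taus[k] + l * h
            X = np.zeros((s, x.size))       # stage values X_{k,l}^i
            for i in range(s):
                X[i] = x + h * sum(A[i, j] * f(t + c[j] * h, X[j])
                                   for j in range(i))
            x = x + h * sum(b[i] * f(t + c[i] * h, X[i]) for i in range(s))
            ts.append(t + h)
            xs.append(x.copy())
        x = np.atleast_1d(impulses[k](x))   # x_{k+1,0} = I_{k+1}(x_{k,m})
    return np.array(ts), np.array(xs)
```

With A = np.array([[0.0]]) and b = np.array([1.0]) this reduces to the explicit Euler method used in Sect. 4; an implicit tableau would additionally require solving the stage equations, e.g. by an iteration as sketched for the θ-method below.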

Definition 3.1

The Runge–Kutta method (7) for impulsive differential equation (1) is said to be

  1. 1

    stable, if there exists \(M>0\) such that, for any \(m\geq M\) with \(h_{k}=\frac{\tau_{k+1}-\tau _{k}}{m}\), \(k\in\mathbb{N}\),

    1. (i)

      \(I-zA\) is invertible for all \(z=\alpha h_{k}\),

    2. (ii)

      for an arbitrary \(\epsilon>0\), there exists a positive number \(\delta=\delta(\epsilon)\) such that, for any numerical solution of (8), \(\|x_{0}-y_{0}\|<\delta\) implies

      $$ \bigl\Vert \vert X_{k}-Y_{k} \vert \bigr\Vert < \epsilon, \quad\forall k\in\mathbb{N}, $$

      where \(X_{k}=(x_{k,0},x_{k,1},\ldots,x_{k,m})^{T}\), \(Y_{k}=(y_{k,0},y_{k,1},\ldots,y_{k,m})^{T}\) and

      $$\bigl\Vert \vert X_{k}-Y_{k} \vert \bigr\Vert = \max_{0\leq l \leq m}\bigl\{ \Vert x_{k,l}-y_{k,l} \Vert \bigr\} ; $$
  2. 2

    asymptotically stable, if it is stable and there exists \(M_{1}>0\) such that, for any \(m\geq M_{1}\) with \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(k\in\mathbb {N}\), the following holds:

    $$ \lim_{k\rightarrow\infty} \bigl\Vert \vert X_{k}-Y_{k} \vert \bigr\Vert =0. $$

Lemma 3.2

([3, 5, 8, 21])

The \((\mathbf{j},\mathbf{k})\)-Padé approximation to \(\mathrm{e}^{z}\) is given by

$$ \mathbf{R}(z)=\frac{P_{\mathbf{j}}(z)}{Q_{\mathbf{k}}(z)}, $$
(9)

where

$$\begin{gathered} P_{\mathbf{j}}(z)=1+\frac{\mathbf{j}}{\mathbf{j}+\mathbf{k}}\cdot z+\frac{\mathbf{j}(\mathbf{j}-1)}{(\mathbf{j}+\mathbf{k})(\mathbf {j}+\mathbf{k}-1)}\cdot \frac{z^{2}}{2!}+\cdots+\frac{\mathbf{j} !\mathbf {k}!}{(\mathbf{j}+\mathbf{k})!}\cdot\frac{z^{\mathbf{j}}}{\mathbf{j} !}, \\ Q_{\mathbf{k}}(z)=1-\frac{\mathbf{k}}{\mathbf{j}+\mathbf{k}}\cdot z+\frac{\mathbf{k}(\mathbf{k}-1)}{(\mathbf{j}+\mathbf{k})(\mathbf {j}+\mathbf{k}-1)}\cdot \frac{z^{2}}{2!}+\cdots+(-1)^{\mathbf{k}}\cdot \frac{\mathbf{k}!\mathbf{j} !}{(\mathbf{j}+\mathbf{k})!}\cdot \frac {z^{\mathbf{k}}}{\mathbf{k}!},\end{gathered} $$

with error

$$\mathrm{e}^{z}-\mathbf{R}(z)=(-1)^{\mathbf{k}}\cdot \frac{\mathbf {j}!\mathbf{k}!}{(\mathbf{j}+\mathbf{k})!(\mathbf{j}+\mathbf {k}+1)!}\cdot z^{\mathbf{j}+\mathbf{k}+1}+O\bigl(z^{\mathbf{j}+\mathbf{k}+2}\bigr). $$

It is the unique rational approximation to \(\mathrm{e}^{z}\) of order \(\mathbf{j}+\mathbf{k}\) such that the degrees of the numerator and denominator are j and k, respectively.

Lemma 3.3

([19, 20, 28])

Assume that \(\mathbf{R}(z)\) is the \((\mathbf{j},\mathbf{k})\)-Padé approximation to \(\mathrm{e}^{z}\). Then \(\mathbf{R}(z)<\mathrm{e}^{z}\) for all \(z>0\) if and only if k is even.
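Lemmas 3.2 and 3.3 can be checked numerically. The sketch below is an illustration only; it evaluates \(P_{\mathbf{j}}\) and \(Q_{\mathbf{k}}\) from the explicit formulas of Lemma 3.2 and compares \(\mathbf{R}(z)\) with \(\mathrm{e}^{z}\) for a few index pairs.

```python
from math import exp, factorial

def pade_exp(j, k, z):
    """(j,k)-Pade approximation R(z) = P_j(z)/Q_k(z) to exp(z) (Lemma 3.2)."""
    P = sum(factorial(j) * factorial(j + k - i)
            / (factorial(j + k) * factorial(i) * factorial(j - i)) * z ** i
            for i in range(j + 1))
    Q = sum((-1) ** i * factorial(k) * factorial(j + k - i)
            / (factorial(j + k) * factorial(i) * factorial(k - i)) * z ** i
            for i in range(k + 1))
    return P / Q

# Lemma 3.3: R(z) < e^z for z > 0 exactly when k is even.
for j, k in [(4, 0), (2, 2), (2, 1)]:
    print(j, k, pade_exp(j, k, 0.3) < exp(0.3))   # expected: True, True, False
```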

Theorem 3.4

Assume that \(\mathbf{R}(z)\) is the stability function of the Runge–Kutta method (7), i.e.

$$\mathbf{R}(z)=1+zb^{T}(I-zA)^{-1}e=\frac{P_{\mathbf{j}}(z)}{Q_{\mathbf{k}}(z)}. $$

Under the conditions of Theorem 2.2, the Runge–Kutta method (7) with nonnegative coefficients (\(a_{ij}\geq0\) and \(b_{i}\geq 0\), \(1\leq i\leq s\), \(1\leq j\leq s\)) for (1) is asymptotically stable for \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(k\in \mathbb{N}\), \(m\in\mathbb{Z}^{+}\) and \(m\geq M\), if \(\mathbf{k}\) is even, where \(M=\inf\{m: I-zA \text{ is invertible and } (I-zA)^{-1}e\geq0, z=\alpha h_{k}, k\in\mathbb{N}, m\in\mathbb{Z}^{+}\}\). (The last inequality is to be interpreted entrywise.)

Proof

Because \(a_{ij}\geq0\) and \(b_{i}\geq0\), \(1\leq i\leq s\), \(1\leq j\leq s\), we obtain

$$\begin{aligned} & \bigl\Vert X_{k,l}^{i}-Y_{k,l}^{i} \bigr\Vert \\ &\quad= \Biggl\Vert x_{k,l}-y_{k,l}+h_{k} \sum _{j=1}^{s}a_{ij}\bigl(f \bigl(t_{k,l}^{j},X_{k,l}^{j}\bigr)-f \bigl(t_{k,l}^{j},Y_{k,l}^{j}\bigr)\bigr) \Biggr\Vert \\ &\quad\leq \Vert x_{k,l}-y_{k,l} \Vert +h_{k} \sum _{j=1}^{s}a_{ij} \bigl\Vert f \bigl(t_{k,l}^{j},X_{k,l}^{j}\bigr)-f \bigl(t_{k,l}^{j},Y_{k,l}^{j}\bigr) \bigr\Vert \\ &\quad\leq \Vert x_{k,l}-y_{k,l} \Vert +\alpha h_{k} \sum_{j=1}^{s}a_{ij} \bigl\Vert X_{k,l}^{j}-Y_{k,l}^{j} \bigr\Vert . \end{aligned}$$

Moreover, when \(m\geq M\), we have \((I-zA)^{-1}e\geq0\) for \(z=\alpha h_{k}\), \(k\in\mathbb{N}\), so

$$ \bigl[ \bigl\Vert X_{k,l}^{i}-Y_{k,l}^{i} \bigr\Vert \bigr]\leq(I-\alpha h_{k} A)^{-1} e \Vert x_{k,l}-y_{k,l} \Vert , $$

where \([\|X_{k,l}^{i}-Y_{k,l}^{i}\|]=(\|X_{k,l}^{1}-Y_{k,l}^{1}\|,\| X_{k,l}^{2}-Y_{k,l}^{2}\|,\ldots, \|X_{k,l}^{s}-Y_{k,l}^{s}\|)^{T}\). By Lemma 3.2 and Lemma 3.3, we can obtain

$$\begin{aligned} & \Vert x_{k,l+1}-y_{k,l+1} \Vert \\ &\quad= \Biggl\Vert x_{k,l}-y_{k,l}+h_{k} \sum _{j=1}^{s} b_{j} \bigl(f \bigl(t_{k,l}^{j},X_{k,l}^{j}\bigr)- f \bigl(t_{k,l}^{j},Y_{k,l}^{j}\bigr)\bigr) \Biggr\Vert \\ &\quad\leq \Vert x_{k,l}-y_{k,l} \Vert +h_{k} \sum _{j=1}^{s} b_{j} \bigl\Vert f \bigl(t_{k,l}^{j},X_{k,l}^{j}\bigr)- f \bigl(t_{k,l}^{j},Y_{k,l}^{j}\bigr) \bigr\Vert \\ &\quad\leq \Vert x_{k,l}-y_{k,l} \Vert +\alpha h_{k} \sum_{j=1}^{s} b_{j} \bigl\Vert X_{k,l}^{j}-Y_{k,l}^{j} \bigr\Vert \\ &\quad= \Vert x_{k,l}-y_{k,l} \Vert +\alpha h_{k} b^{T} \bigl[ \bigl\Vert X_{k,l}^{i}-Y_{k,l}^{i} \bigr\Vert \bigr] \\ &\quad\leq\bigl(1+\alpha h_{k} b^{T}(I-\alpha h_{k} A)^{-1}e\bigr) \Vert x_{k,l}-y_{k,l} \Vert \\ &\quad=\mathbf{R}(\alpha h_{k}) \Vert x_{k,l}-y_{k,l} \Vert \\ &\quad\leq\mathrm{e}^{\alpha h_{k}} \Vert x_{k,l}-y_{k,l} \Vert . \end{aligned}$$

Hence for arbitrary \(k=0,1,2,\ldots\) and \(l=0,1,\ldots,m\), we have

$$ \Vert x_{k,l}-y_{k,l} \Vert \leq \Vert x_{k,0}-y_{k,0} \Vert \mathrm{e}^{\alpha lh_{k}}. $$

Therefore, by the method of induction and the condition (5), we obtain

$$\begin{aligned} & \Vert x_{k,l}-y_{k,l} \Vert \\ &\quad\leq \Vert x_{0}-y_{0} \Vert \bigl(\beta _{1}\mathrm{e}^{\alpha(\tau_{1}-\tau_{0})}\bigr) \bigl(\beta_{2} \mathrm{e}^{\alpha(\tau _{2}-\tau_{1})}\bigr)\cdots \bigl(\beta_{k}\mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})} \bigr) \mathrm {e}^{\alpha lh_{k}} \\ &\quad\leq \Vert x_{0}-y_{0} \Vert C^{k} \mathrm{e}^{\alpha\gamma}, \end{aligned}$$

which implies that Runge–Kutta method for (1) is asymptotically stable for \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(k\in \mathbb{N}\), \(m\in\mathbb{Z}^{+}\) and \(m\geq M\). □
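Both ingredients of Theorem 3.4, the stability function \(\mathbf{R}(z)=1+zb^{T}(I-zA)^{-1}e\) and the entrywise condition \((I-zA)^{-1}e\geq0\) entering the definition of M, are easy to evaluate numerically. A minimal sketch follows (the function names are our own):

```python
import numpy as np

def stability_function(A, b, z):
    """R(z) = 1 + z * b^T (I - zA)^{-1} e for a Runge-Kutta method (A, b, c)."""
    s = len(b)
    return 1.0 + z * b @ np.linalg.solve(np.eye(s) - z * A, np.ones(s))

def m_condition_holds(A, z):
    """Check that I - zA is invertible and (I - zA)^{-1} e >= 0 entrywise,
    the requirement in the definition of M in Theorem 3.4 and Remark 3.5."""
    M = np.eye(len(A)) - z * A
    if abs(np.linalg.det(M)) < 1e-14:          # (numerically) singular
        return False
    return bool(np.all(np.linalg.solve(M, np.ones(len(A))) >= 0.0))

# Implicit Euler: A = [[1]], b = [1].  Its stability function 1/(1 - z) is the
# (0,1)-Pade approximation, so k = 1 is odd and Theorem 3.4 does not apply.
A, b = np.array([[1.0]]), np.array([1.0])
print(stability_function(A, b, 0.1), m_condition_holds(A, 0.1))
```

For an explicit tableau with nonnegative entries, \(I-zA\) is unit lower triangular and \((I-zA)^{-1}e\geq0\) for every \(z\geq0\), which is consistent with Corollary 3.6 imposing no lower bound on m.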

Remark 3.5

  1. (1)

    For z sufficiently close to zero, the matrix \(I-zA\) is invertible and \((I-zA)^{-1}e\geq0\). Therefore, taking stepsizes \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(k\in\mathbb{N}\), \(m\in\mathbb {Z}^{+}\), \(m\geq M\), with \(M=\inf\{m: I-zA\text{ is invertible}, (I-zA)^{-1}e\geq0, z=\alpha h_{k}, k\in\mathbb{N}, m\in\mathbb{Z}^{+}\}\), in Theorem 3.4 is reasonable.

  2. (2)

    Under the conditions of Remark 2.3, the Runge–Kutta method (7) with nonnegative coefficients (\(a_{ij}\geq0\) and \(b_{i}\geq0\), \(1\leq i\leq s\), \(1\leq j\leq s\)) for (1) is stable for \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(k\in\mathbb{N}\), \(m\in \mathbb{Z}^{+}\) and \(m\geq M\), if \(\mathbf{k}\) is even, where \(M=\inf\{m: I-zA \text{ is invertible and } (I-zA)^{-1}e\geq0, z=\alpha h_{k}, k\in\mathbb {N}, m\in\mathbb{Z}^{+}\}\).

Applying Theorem 3.4 with \(\mathbf{k}=0\), we obtain the following corollary.

Corollary 3.6

Under the conditions of Theorem 2.2, the following p-stage pth order explicit Runge–Kutta methods with nonnegative coefficients (\(a_{ij}\geq0\) and \(b_{i}\geq0\), \(1\leq j< i\), \(1\leq i\leq p\)) for (1) are asymptotically stable for \(h_{k}=\frac{\tau_{k+1}-\tau _{k}}{m}\), \(k\in\mathbb{N}\), \(m\in\mathbb{Z}^{+}\), when \(p\leq4\).

  1. (1)

    Explicit Euler method;

  2. (2)

    2-stage second order explicit Runge–Kutta methods

    $$ \textstyle\begin{array}{c|cc} 0 & 0 & 0\\ \frac{1}{2} & \frac{1}{2} & 0\\ \hline & 0 & 1 \end{array} \qquad \textstyle\begin{array}{c|cc} 0 & 0 & 0\\ 1 & 1 & 0\\ \hline & \frac{1}{2} & \frac{1}{2} \end{array} \qquad \textstyle\begin{array}{c|cc} 0 & 0 & 0\\ \frac{3}{4} & \frac{3}{4} & 0\\ \hline & \frac{1}{3} & \frac{2}{3} \end{array} $$
    (the modified Euler method, Heun's method of order 2 and Ralston's method, respectively);
  3. (3)

    3-stage third order explicit Runge–Kutta methods

    $$ \textstyle\begin{array}{c|ccc} 0 & 0 & 0 & 0\\ \frac{1}{3} & \frac{1}{3} & 0 & 0\\ \frac{2}{3} & 0 & \frac{2}{3} & 0\\ \hline & \frac{1}{4} & 0 & \frac{3}{4} \end{array} \qquad \textstyle\begin{array}{c|ccc} 0 & 0 & 0 & 0\\ \frac{2}{3} & \frac{2}{3} & 0 & 0\\ \frac{2}{3} & \frac{1}{3} & \frac{1}{3} & 0\\ \hline & \frac{1}{4} & 0 & \frac{3}{4} \end{array} $$
    (Heun's method of order 3 and the Runge–Kutta method of order 3, respectively);
  4. (4)

    The classical 4-stage fourth order explicit Runge–Kutta method

    $$ \textstyle\begin{array}{c|cccc} 0 & 0 &0 &0 &0\\ \frac{1}{2} & \frac{1}{2} &0 &0 &0\\ \frac{1}{2} &0 &\frac{1}{2} &0 &0\\ \vphantom{\displaystyle\sum_{a}}1 &0 &0 &1 &0\\ \hline &\frac{1}{6}&\frac{1}{3}&\frac{1}{3}&\frac{1}{6} \end{array} $$

Unfortunately, we cannot obtain p-stage explicit Runge–Kutta methods of order p for \(p\geq5\), because of the Butcher barriers (see [4, Theorem 370B, p. 259] or [8, Theorem 5.1, p. 173]).
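For completeness, the tableaux of Corollary 3.6 can be encoded as coefficient arrays, for instance to be passed to a generic solver such as the rk_impulsive sketch in Sect. 3 (the variable names below are our own):

```python
import numpy as np

# Butcher tableaux (A, b) from Corollary 3.6; the abscissae are c_i = sum_j a_ij.
explicit_euler = (np.array([[0.0]]), np.array([1.0]))
modified_euler = (np.array([[0.0, 0.0], [0.5, 0.0]]), np.array([0.0, 1.0]))
heun2 = (np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([0.5, 0.5]))
ralston2 = (np.array([[0.0, 0.0], [0.75, 0.0]]), np.array([1 / 3, 2 / 3]))
heun3 = (np.array([[0, 0, 0], [1 / 3, 0, 0], [0, 2 / 3, 0]], dtype=float),
         np.array([0.25, 0.0, 0.75]))
rk3 = (np.array([[0, 0, 0], [2 / 3, 0, 0], [1 / 3, 1 / 3, 0]], dtype=float),
       np.array([0.25, 0.0, 0.75]))
rk4_classical = (np.array([[0, 0, 0, 0], [0.5, 0, 0, 0],
                           [0, 0.5, 0, 0], [0, 0, 1, 0]], dtype=float),
                 np.array([1 / 6, 1 / 3, 1 / 3, 1 / 6]))
```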

In the remainder of this section, we consider the θ-method for (1):

$$ \left \{ \textstyle\begin{array}{lll} x_{k,l+1}=x_{k,l}+h_{k}(1-\theta)f(t_{k,l},x_{k,l})+h_{k}\theta f(t_{k,l+1},x_{k,l+1}),\\ x_{k+1,0}=I_{k+1}(x_{k,m}), \\ x_{0,0}=x_{0}, \end{array}\displaystyle \right . $$
(10)

where \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(m\geq1\), m is an integer, \(k=0,1,2,\dots\).
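A minimal Python sketch of the θ-method (10) is given below; the implicit relation for \(x_{k,l+1}\) is resolved here by simple fixed-point iteration, which converges when \(\theta\alpha h_{k}<1\). The function name and the choice of iteration are our own and are not part of the original scheme.

```python
import numpy as np

def theta_method_impulsive(f, impulses, taus, x0, theta, m, fp_iters=50):
    """Sketch of the theta-method (10); the implicit stage is resolved by
    fixed-point iteration (convergent when theta * alpha * h_k < 1)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    ts, xs = [taus[0]], [x.copy()]
    for k in range(len(taus) - 1):
        h = (taus[k + 1] - taus[k]) / m           # stepsize h_k
        for l in range(m):
            t = taus[k] + l * h
            explicit_part = x + h * (1 - theta) * f(t, x)
            x_new = x.copy()                      # initial guess for x_{k,l+1}
            for _ in range(fp_iters):
                x_new = explicit_part + h * theta * f(t + h, x_new)
            x = x_new
            ts.append(t + h)
            xs.append(x.copy())
        x = np.atleast_1d(impulses[k](x))         # apply I_{k+1} at tau_{k+1}
    return np.array(ts), np.array(xs)
```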

Lemma 3.7

(See [19])

Let \(z_{k}=\alpha h_{k}\) with \(h_{k}=\frac{\tau_{k}-\tau_{k-1}}{m}\), \(m, k\in\mathbb{Z}^{+}\). For \(m>\sup_{k}\{\alpha(\tau_{k}-\tau_{k-1})\}\),

$$\biggl(1+\frac{z_{k}}{1-z_{k}\theta}\biggr)^{m}\leq\mathrm{e}^{m h_{k} \alpha}=\mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})} $$

if and only if \(0\leq\theta\leq\varphi(1)\), where \(\varphi(x)=\frac{1}{x}-\frac{1}{\mathrm{e}^{x}-1}\).
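The threshold \(\varphi(1)=1-\frac{1}{\mathrm{e}-1}\approx0.418\) of Lemma 3.7 and the inequality itself are easy to verify numerically; the following spot-check is an illustration only (the sample values of θ, z and m are our own):

```python
import math

def phi(x):
    """phi(x) = 1/x - 1/(e^x - 1), as in Lemma 3.7."""
    return 1.0 / x - 1.0 / (math.exp(x) - 1.0)

theta, z, m = 0.4, 0.05, 20               # theta <= phi(1) and z = alpha * h_k < 1
lhs = (1.0 + z / (1.0 - z * theta)) ** m
print(phi(1.0), lhs <= math.exp(m * z))   # approx 0.418, True
```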

Theorem 3.8

Under the conditions of Theorem 2.2, if \(0\leq \theta\leq\varphi(1)\), there is a positive M such that the θ-method for (1) is asymptotically stable for \(h_{k}=\frac{\tau _{k+1}-\tau_{k}}{m}\), \(k\in\mathbb{N}\), \(m\in\mathbb{Z}^{+}\) and \(m\geq M\).

Proof

Obviously, we can obtain

$$\begin{aligned} & \Vert x_{k,l+1}-y_{k,l+1} \Vert \\ &\quad\leq \bigl\Vert x_{k,l}-y_{k,l}+(1-\theta)h_{k} \bigl(f(t_{k,l},x_{k,l})- f(t_{k,l},y_{k,l}) \bigr) \bigr\Vert \\ &\qquad{}+\theta h_{k} \bigl\Vert f(t_{k,l+1},x_{k,l+1})- f(t_{k,l+1},y_{k,l+1}) \bigr\Vert \\ &\quad\leq\bigl(1+(1-\theta)\alpha h_{k}\bigr) \Vert x_{k,l}-y_{k,l} \Vert +\theta \alpha h_{k} \Vert x_{k,l+1}-y_{k,l+1} \Vert , \end{aligned}$$

which implies

$$\Vert x_{k,l+1}-y_{k,l+1} \Vert \leq\frac{1+(1-\theta)\alpha h_{k}}{1-\theta\alpha h_{k}} \cdot \Vert x_{k,l}-y_{k,l} \Vert . $$

Therefore, by Lemma 3.7 and the method of induction, we obtain

$$\Vert x_{k,l+1}-y_{k,l+1} \Vert \leq\mathrm{e}^{\alpha h_{k}} \Vert x_{k,l}-y_{k,l} \Vert . $$

Combining this bound with the conditions (3) and (5), exactly as in the proof of Theorem 3.4, we conclude that the θ-method for (1) is asymptotically stable for \(h_{k}=\frac{\tau_{k+1}-\tau_{k}}{m}\), \(k\in\mathbb{N}\), \(m\in\mathbb {Z}^{+}\) and \(m> \sup_{k}\{\alpha(\tau_{k+1}-\tau_{k})\}\), if \(0\leq\theta \leq\varphi(1)\). □

4 Numerical experiments

In this section, two simple numerical examples on the real line are given.

Example 4.1

Consider the following scalar impulsive differential equation:

$$ \left \{ \textstyle\begin{array}{l} x'(t)=\sin( x(t)),\quad t>0, t\neq\tau_{k}, \tau_{k}=k+2^{-k}, k=1,2,\ldots, \\ x(\tau_{k}^{+})=\frac{x(\tau_{k})}{3} ,\quad k=1,2,\ldots,\\ x(0^{+})=x_{0}. \end{array}\displaystyle \right . $$
(11)

Obviously, for arbitrary \(x,y\in\mathbb{R}\), we obtain

$$ \bigl\vert \sin(x)-\sin(y) \bigr\vert = \biggl\vert 2\cos\biggl( \frac{x+y}{2}\biggr)\sin\biggl(\frac {x-y}{2}\biggr) \biggr\vert \leq 2 \biggl\vert \frac{x-y}{2} \biggr\vert = \vert x-y \vert , $$

which implies that the Lipschitz condition (2) holds with \(\alpha=1\). Hence, for \(k\geq2\),

$$\beta_{k} \mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})}=\frac{\mathrm {e}^{k+2^{-k}-(k-1)-2^{-(k-1)}} }{3} < \frac{\mathrm{e}}{3}< 1. $$

Therefore, by Theorem 2.2 (applied from \(\tau_{1}\) onwards; the single factor corresponding to \(k=1\) is finite and therefore does not affect the asymptotic behaviour), the exact solution of (11) is asymptotically stable.

By Corollary 3.6, the explicit Euler method (see Fig. 1) and classical 4-stage fourth order explicit Runge–Kutta method (see Fig. 2) for (11) are asymptotically stable for \(h_{0}=\frac{3}{2m}\) and \(h_{k}=\frac{1+2^{-(k+1)}-2^{-k}}{m}\), \(k\in\mathbb {Z}^{+}\), \(m\in\mathbb{Z}^{+}\) and \(m\geq2\).

Figure 1: Explicit Euler method for (11)

Figure 2: The classical 4-stage fourth order Runge–Kutta method for (11)
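The qualitative behaviour shown in Figs. 1 and 2 can be reproduced with a few lines of code. A self-contained explicit Euler driver for (11), tracking two solutions with nearby initial data, is sketched below; the initial values 2.0 and 1.5, the horizon K = 8 and the choice m = 10 are arbitrary choices of ours, and the figures themselves are not reproduced.

```python
import math

# Explicit Euler for (11): x' = sin(x), impulse x(tau_k^+) = x(tau_k)/3,
# tau_k = k + 2^{-k}.  |x - y| at the impulse points illustrates asymptotic stability.
m, K = 10, 8
taus = [0.0] + [k + 2.0 ** (-k) for k in range(1, K + 1)]
x, y = 2.0, 1.5
for k in range(K):
    h = (taus[k + 1] - taus[k]) / m          # h_k = (tau_{k+1} - tau_k) / m
    for l in range(m):
        x += h * math.sin(x)                 # Euler step for x
        y += h * math.sin(y)                 # Euler step for y
    x, y = x / 3.0, y / 3.0                  # impulse I_{k+1}(u) = u/3
    print(f"k = {k + 1}: |x - y| = {abs(x - y):.3e}")
```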

Example 4.2

Consider the following scalar nonlinear impulsive differential equation:

$$ \left \{ \textstyle\begin{array}{l} x'(t)=\frac{\sqrt{1+x^{2}(t)}}{2},\quad t\geq0, t\neq k, k=1,2,\ldots, \\ x(t^{+})=\frac{\sin(x(t))}{2},\quad t=k, k=1,2,\ldots,\\ x(0^{+})=x_{0}. \end{array}\displaystyle \right . $$
(12)

Obviously, for arbitrary \(x,y\in\mathbb{R}\), we have

$$ \biggl\vert \frac{\sqrt{1+x^{2}}}{2}- \frac{\sqrt {1+y^{2}}}{2} \biggr\vert \leq \frac{1}{2} \vert x-y \vert , $$

which implies the Lipschitz constant \(\alpha=\frac{1}{2}\). So

$$\beta_{k} \mathrm{e}^{\alpha(\tau_{k}-\tau_{k-1})}=\biggl(\frac{1}{2}\biggr) \mathrm {e}^{\frac{1}{2} (k-(k-1))}=\frac{\sqrt{\mathrm{e}}}{2}< 1. $$

Therefore, by Theorem 2.2, the exact solution of (12) is asymptotically stable.

By Corollary 3.6, the explicit Euler method (see Fig. 3) and the classical 4-stage fourth order explicit Runge–Kutta method (see Fig. 4) for (12) are asymptotically stable for \(h_{k}=\frac{1}{m}\), \(k\in\mathbb{N}\), where m is an arbitrary positive integer.

Figure 3: Explicit Euler method for (12) as \(h=\frac{1}{10}\)

Figure 4: The classical 4-stage fourth order Runge–Kutta method for (12) as \(h=\frac{1}{10}\)
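An analogous driver for (12), with impulses at the integers and stepsize \(h=\frac{1}{m}\), is sketched below (again with arbitrary initial values of our own):

```python
import math

# Explicit Euler for (12): x' = sqrt(1 + x^2)/2, impulse x(k^+) = sin(x(k))/2 at t = k.
m, K = 10, 8
x, y = 1.0, -0.5
for k in range(K):
    for l in range(m):
        x += (1.0 / m) * 0.5 * math.sqrt(1.0 + x * x)   # Euler step for x
        y += (1.0 / m) * 0.5 * math.sqrt(1.0 + y * y)   # Euler step for y
    x, y = 0.5 * math.sin(x), 0.5 * math.sin(y)         # impulse at t = k + 1
    print(f"k = {k + 1}: |x - y| = {abs(x - y):.3e}")
```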

From Tables 1 and 2, we can see that the Runge–Kutta methods preserve their orders of convergence.

Table 1: The errors of the Runge–Kutta methods for (11)
Table 2: The errors of the Runge–Kutta methods for (12)