1 Introduction

During the past few decades, the stability theory of stochastic differential equations and impulsive differential equations has developed rapidly; see, for instance, [1-15]. Many stability criteria for impulsive stochastic differential equations have also been reported (see [16-23] and the references therein). Almost all of them focus on the stability of the zero solution, whereas very little research addresses the stability of sets.

The concept of stability of sets of nonlinear systems includes as special cases stability in the sense of Lyapunov (see Krasovskii [24]; Rouche et al. [25]), such as stability of the trivial solution, stability of a given solution, and stability with respect to part of the variables, and it has become one of the most important issues in the stability theory of nonlinear systems [26-28]. Theoretical work on the stability of sets for nonlinear ordinary differential equations may be traced back to Yoshizawa [29-31] in the last century. Research on the stability of sets for impulsive differential equations can be found in [15, 32-35]. For stochastic differential equations and impulsive stochastic differential equations, we refer the reader to [11, 36-39] and the references therein.

In this paper, we extend the Razumikhin method developed in [7, 14, 40] to investigate the stability of sets for a class of impulsive stochastic functional differential equations. Our results also show that impulsive effects play an important part in the stability of stochastic functional differential equations; that is, an unstable stochastic delay system can be successfully stabilized by impulses.

The rest of this paper is organized as follows. Some preliminary notes are given in Section 2. Several theorems on the stability of sets for impulsive stochastic functional differential equations are established in Section 3. In Section 4, three examples are presented to illustrate the applications of the results obtained.

2 Preliminaries

Throughout this paper, we use the following notations.

Let \((\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq0}, P)\) be a complete probability space with a filtration \(\{\mathcal {F}_{t}\}_{t\geq0}\) satisfying the usual conditions (i.e. it is right continuous and \(\mathcal{F}_{0}\) contains all P-null sets), and let \(E[\cdot]\) denote the expectation operator with respect to the probability measure P. Let \(W(t)=(W_{1}(t), \ldots, W_{m}(t))^{T}\) be an m-dimensional Wiener process defined on this probability space. Let \(|\cdot|\) denote the Euclidean norm in \(\mathbb{R}^{n}\).

Let \(\tau>0\) and \(PC([-\tau,0]; \mathbb{R}^{n})\) = {\(\phi:[-\tau ,0]\rightarrow\mathbb{R}^{n}\mid\phi(t)\) is continuous everywhere except at the points \(t=t_{k}\in[t_{0},\infty)\), \(\phi(t_{k}^{+})\) and \(\phi(t_{k}^{-})\) exist with \(\phi(t_{k}^{+})=\phi(t_{k})\)} with the norm \(\|\phi\|=\sup_{-\tau\leq\theta\leq0}|\phi(\theta)|\), where \(\phi(t^{+})\) and \(\phi(t^{-})\) denote the right-hand and left-hand limits of function \(\phi(t)\) at t.

Denote by \(PC_{\mathcal{F}_{0}}^{b}([-\tau,0]; \mathbb{R}^{n})\) the family of all bounded, \(\mathcal{F}_{0}\)-measurable, \(PC([-\tau,0]; \mathbb{R}^{n})\)-valued random variables. For \(p>0\), denote by \(PC_{\mathcal{F}_{t}}^{p}([-\tau,0]; \mathbb{R}^{n})\) the family of all \(\mathcal{F}_{t}\)-measurable, \(PC([-\tau,0]; \mathbb{R}^{n})\)-valued random variables ϕ such that \(E\|\phi\|^{p}<\infty\).

In this paper, we shall consider the following impulsive stochastic functional differential equation:

$$ \left \{ \textstyle\begin{array}{l} dx(t)=f(t,x_{t})\, dt+g(t,x_{t})\, dW(t),\quad t\geq t_{0}, t\neq t_{k}, \\ \Delta x(t_{k})=I_{k}(t_{k},x(t_{k}^{-})), \quad t= t_{k}, k\in \mathbb{Z}^{+}, \\ x_{t_{0}}(s)=\xi(s), \quad s\in[-\tau,0], \end{array}\displaystyle \right . $$
(2.1)

where \(\mathbb{Z}^{+}\) is the set of all positive integers, \(\xi=\{\xi(s): -\tau\leq s\leq0\}\in PC_{\mathcal{F}_{0}}^{b}([-\tau ,0]; \mathbb{R}^{n})\), \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{T}\), and \(x_{t}=\{x(t+\theta): -\tau\leq\theta\leq0\}\), \(x(t_{k}^{-})=\lim_{h\rightarrow0^{-}}x(t_{k}+h)\), \(x(t_{k})=\lim_{h\rightarrow 0^{+}}x(t_{k}+h)\), \(t_{k}\) (\(k=1,2,\ldots\)) are impulsive moments satisfying \(0\leq t_{0}< t_{1}<\cdots<t_{k}<t_{k+1}<\cdots\) with \(\lim_{k\rightarrow+\infty}t_{k}=+\infty\), \(\Delta x(t_{k})=x(t_{k}^{+})-x(t_{k}^{-})=x(t_{k})-x(t_{k}^{-})\) represents the jump in the state x at \(t_{k}\) with \(I_{k}\) determining the size of the jump. \(f: [t_{0},\infty)\times PC([-\tau,0]; \mathbb{R}^{n}) \rightarrow \mathbb{R}^{n}\) and \(g: [t_{0},\infty)\times PC([-\tau,0]; \mathbb {R}^{n}) \rightarrow\mathbb{R}^{n\times m}\) are Borel measurable, and \(I_{k} \in C(\mathbb{R}^{+}\times\mathbb{R}^{n}, \mathbb{R}^{n})\).
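
Although the analysis below is purely theoretical, the structure of (2.1) can be made concrete with a simple Euler-Maruyama sketch for the scalar case with one discrete delay; the integrator, its step size, and its arguments are our own illustrative choices, not part of the paper.

```python
# A minimal Euler-Maruyama sketch for (2.1) in the scalar case with a
# single discrete delay tau, i.e.
#   dx(t) = f(t, x(t), x(t - tau)) dt + g(t, x(t), x(t - tau)) dW(t),
#   x(t_k) = x(t_k^-) + I_k(t_k, x(t_k^-)).
# f, g, jump, xi, dt, and the impulse times are caller-supplied illustrations.
import math
import random

def simulate_impulsive_sdde(f, g, jump, xi, tau, t0, T, dt, impulse_times, seed=0):
    """Return the time grid and one sample path on [t0 - tau, T]."""
    rng = random.Random(seed)
    nd = int(round(tau / dt))                 # delay measured in grid steps
    n_steps = int(round((T - t0) / dt))
    # history on [t0 - tau, t0], sampled from the initial function xi
    path = [xi(t0 - tau + i * dt) for i in range(nd + 1)]
    impulses = sorted(impulse_times)
    j = 0                                     # index of the next impulse
    for i in range(n_steps):
        t = t0 + i * dt
        x, x_del = path[-1], path[-1 - nd]    # x(t) and x(t - tau)
        dW = rng.gauss(0.0, math.sqrt(dt))
        x_new = x + f(t, x, x_del) * dt + g(t, x, x_del) * dW
        # apply every impulse falling in (t, t + dt]
        while j < len(impulses) and t < impulses[j] <= t + dt:
            x_new = x_new + jump(impulses[j], x_new)
            j += 1
        path.append(x_new)
    times = [t0 - tau + i * dt for i in range(len(path))]
    return times, path
```

With f = g = 0, a constant history, and a single impulse I(t, x) = -0.5x, the path simply halves at the impulse time, which is a convenient sanity check of the scheme.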

Definition 2.1

An \(\mathbb{R}^{n}\)-valued stochastic process \(x(t)\) is called a solution of problem (2.1) with initial time σ and initial value ξ if

  1. (i)

    x: \([\sigma-\tau, \sigma+\beta)\rightarrow\mathbb{R}^{n}\) for some β (\(0<\beta\leq \infty\)) is continuous for \(t\in[\sigma-\tau, \sigma+\beta)\backslash\{ t_{k}: k=1,2,\ldots\}\), \(x(t^{+}_{k})\) and \(x(t^{-}_{k})\) exist with \(x(t^{+}_{k})=x(t_{k})\) for \(t_{k}\in[\sigma-\tau, \sigma+\beta)\), and \(\{x_{t}\}_{t\geq t_{0}}\) is \(\mathcal{F}_{t}\)-adapted;

  2. (ii)

    \(\{f(t,x_{t})\}\in L^{1}([t_{0},\infty);\mathbb{R}^{n})\) and \(\{ g(t,x_{t})\}\in L^{2}([t_{0},\infty);\mathbb{R}^{n\times m})\);

  3. (iii)

    \(x(t)\) satisfies (2.1).

We denote the solution of the initial problem (2.1) by \(x(t;\sigma,\xi)\), and we denote by \([\sigma-\tau, \sigma+\beta)\) the maximal right interval in which the solution \(x(t;\sigma,\xi)\) is defined.

Let \(M\subset[t_{0}-\tau, \infty)\times\mathbb{R}^{n}\). We introduce the following notations:

$$\begin{aligned} \begin{aligned} &M(t)=\bigl\{ x\in\mathbb{R}^{n}:(t,x)\in M\bigr\} , \quad t \in[t_{0}-\tau, \infty); \\ &M(t,\epsilon)=\bigl\{ x\in\mathbb{R}^{n}:d\bigl(x,M(t)\bigr)< \epsilon, \epsilon>0\bigr\} , \end{aligned} \end{aligned}$$

where

$$d\bigl(x,M(t)\bigr)=\inf_{y\in M(t)}E|x-y| $$

is the distance between x and the set \(M(t)\);

$$M_{0}(t,\epsilon)=\bigl\{ \varphi\in PC\bigl([-\tau,0]; \mathbb {R}^{n}\bigr):d_{0}\bigl(\varphi,M(t)\bigr)< \epsilon, \epsilon>0\bigr\} , $$

where

$$d_{0}\bigl(\varphi,M(t)\bigr)=\max_{s\in[-\tau, 0]} d\bigl( \varphi(s), M(t+s)\bigr) \quad \mbox{and} \quad \varphi\in PC\bigl([-\tau,0]; \mathbb{R}^{n}\bigr). $$
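
For intuition, the two distances above can be computed numerically when each section \(M(t)\) is represented by a finite sample of points; for deterministic points the expectation in the definition of d reduces to the Euclidean distance. The sampling and the grid over \([-\tau,0]\) below are illustrative assumptions.

```python
# Numerical sketch of d and d_0 when each section M(t) is a finite
# sample of points in R^n (an assumption made for illustration); for
# deterministic points E|x - y| is just the Euclidean norm |x - y|.
import math

def dist_to_set(x, M_t):
    """d(x, M(t)): infimum of |x - y| over the sampled points y of M(t)."""
    return min(math.dist(x, y) for y in M_t)

def dist0(phi, M, s_grid):
    """d_0(phi, M(t)): max over a finite grid of s in [-tau, 0] of
    d(phi(s), M(t + s)); M is a callable s -> sample of M(t + s)."""
    return max(dist_to_set(phi(s), M(s)) for s in s_grid)
```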

We assume that the following conditions (H1)-(H4) are satisfied, so that the initial value problem (2.1) has a unique solution.

(H1):

For all \(\psi\in PC([-\tau,0]; \mathbb{R}^{n})\) and \(k\in\mathbb{Z}^{+}\), the limits

$$\lim_{(t,\varphi)\rightarrow(t^{-}_{k},\psi)}f(t,\varphi )=f\bigl(t^{-}_{k}, \psi\bigr), \qquad \lim_{(t,\varphi)\rightarrow(t^{-}_{k},\psi )}g(t,\varphi)=g\bigl(t^{-}_{k}, \psi\bigr) $$

exist.

(H2):

f and g satisfy a local Lipschitz condition in ϕ on each compact set in \(PC([-\tau,0]; \mathbb{R}^{n})\). More precisely, for every \(a\in[t_{0}, \sigma+\beta)\) and every compact set \(G\subset PC([-\tau,0]; \mathbb{R}^{n})\), there exists a constant \(L=L(a,G)\) such that

$$\bigl\vert f(t,\varphi)-f(t,\psi)\bigr\vert \vee\bigl\vert g(t, \varphi)-g(t,\psi)\bigr\vert \leq L\|\varphi -\psi\|, $$

whenever \(t\in[t_{0}, a)\) and \(\varphi, \psi\in G\).

(H3):

For any \(\rho>0\) there exists \(0<\rho_{1}\leq\rho\), such that

$$x\in M(t,\rho_{1}) \quad \mbox{implies that} \quad x+I_{k}(t_{k},x)\in M(t,\rho) $$

for all \(k\in\mathbb{Z}^{+}\).

(H4):

\(f(t,x_{t})\in PC([t_{0}, \infty), \mathbb{R}^{n})\) and \(g(t,x_{t})\in PC([t_{0}, \infty), \mathbb{R}^{n\times m})\) for \(x_{t}\in PC([\sigma-\tau, \infty), \mathbb{R}^{n})\).

For any \(t\geq t_{0}\) and \(\kappa\geq0\), let \(PC_{\kappa}=\{\phi\in PC([-\tau,0]; \mathbb{R}^{n}): \|\phi\|\leq\kappa\}\).

We shall say that condition (A) is fulfilled if the following conditions hold:

(A1):

for each \(t\in[t_{0}, \infty)\) the set \(M(t)\) is not empty;

(A2):

for any compact subset F of \([t_{0}, \infty)\times \mathbb{R}^{n}\) there exists a constant \(K>0\) depending on F such that if \((t,x), (t',x)\in F\), then the following inequality holds:

$$\bigl\vert d\bigl(x,M(t)\bigr)-d\bigl(x,M\bigl(t'\bigr)\bigr) \bigr\vert \leq K\bigl\vert t-t'\bigr\vert ; $$
(A3):

if for solution \(x(t;\sigma,\xi)\) there exists \(h>0\) satisfying

$$d\bigl(x(t;\sigma,\xi),M(t,\rho)\bigr)\leq h< \infty\quad \mbox{for } t\in [ \sigma,\sigma+\beta), $$

where ρ is a constant, then \(x(t;\sigma,\xi)\) is defined in the interval \([\sigma,\infty)\).

Definition 2.2

A function \(V(t,x):[t_{0}-\tau, \infty)\times M(t,\rho)\rightarrow \mathbb{R}^{+}\) belongs to the class \(\nu_{0}\) if

(B1):

V is continuous on each of the sets \(([t_{0}-\tau ,t_{0}]\cup[t_{k-1},t_{k}))\times M(t,\rho)\), \(k\in\mathbb{Z}^{+}\), and for all \(x\in M(t,\rho)\) and \(k\in\mathbb{Z}^{+}\) the limit \(\lim_{(t,y)\rightarrow (t^{-}_{k},x)}V(t,y)=V(t^{-}_{k},x)\) exists;

(B2):

V is locally Lipschitz in \(x\in M(t,\rho)\), \(V(t,0)=0\) for \((t,x)\in M\) and \(V(t,x)>0\) for \((t,x)\notin M\).

Definition 2.3

For each \(V\in\nu_{0}\), we define the operator LV from \([t_{0},\infty)\times PC([-\tau,0]; \mathbb{R}^{n})\) to \(\mathbb{R}\), with \(x=\phi(0)\), by

$$\begin{aligned} L V(t,\phi) =&V_{t}(t,x)+V_{x}(t,x)f(t,\phi) \\ &{} + \frac{1}{2}\operatorname{trace}\bigl[g^{T}(t,\phi )V_{xx}(t,x)g(t,\phi)\bigr], \end{aligned}$$

where

$$\begin{aligned}& V_{t}(t,x)= \frac{\partial V(t,x)}{\partial t}, \\& V_{x}(t,x)= \biggl(\frac{\partial V(t,x)}{\partial x_{1}},\ldots,\frac{\partial V(t,x)}{\partial x_{n}} \biggr), \\& V_{xx}(t,x)= \biggl(\frac{\partial^{2} V(t,x)}{\partial x_{i}\, \partial x_{j}} \biggr)_{n\times n}. \end{aligned}$$
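
In the scalar case \(n=m=1\) with \(V(t,x)=x^{2}/2\) (the Lyapunov function used in Section 4), \(V_{t}=0\), \(V_{x}=x\), and \(V_{xx}=1\), so the operator reduces to \(LV(t,\phi)=xf(t,\phi)+\frac{1}{2}g^{2}(t,\phi)\) with \(x=\phi(0)\). A direct transcription, as a sketch:

```python
# LV for the scalar case n = m = 1 with V(t, x) = x**2 / 2, where
# V_t = 0, V_x = x, V_xx = 1, so LV(t, phi) = x f(t, phi) + g(t, phi)**2 / 2
# with x = phi(0).  f and g take the segment phi, a callable on [-tau, 0].
def LV_quadratic(f, g, t, phi):
    """Evaluate LV(t, phi) for V(x) = x^2 / 2."""
    x = phi(0.0)
    return x * f(t, phi) + 0.5 * g(t, phi) ** 2
```

For instance, for the drift f(t, phi) = -phi(0) and diffusion g(t, phi) = phi(-tau) with tau = 1 and the constant segment phi ≡ 2, this gives 2·(-2) + 0.5·4 = -2.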

We shall give the definitions of stability of the set M with respect to system (2.1).

Definition 2.4

The set M with respect to the solution of system (2.1) is said to be:

(S1):

stable, if for any \(\sigma\geq t_{0}\), \(\alpha>0\), and \(\epsilon>0\), there is a \(\delta(\sigma,\epsilon,\alpha)>0\) such that \(\xi\in PC_{\alpha}\cap M_{0}(\sigma,\delta)\) implies that \(x(t,\sigma ,\xi)\in M(t,\epsilon)\) for \(t\geq\sigma\);

(S2):

uniformly stable, if the δ in (S1) is independent of σ;

(S3):

asymptotically stable, if it is stable and for any \(\sigma \geq t_{0}\) and \(\alpha>0\), there exists a \(\delta=\delta(\sigma,\alpha )\) such that \(\xi\in PC_{\alpha}\cap M_{0}(\sigma,\delta)\) implies that \(x(t,\sigma,\xi)\rightarrow M(t)\) as \(t\rightarrow\infty\);

(S4):

uniformly asymptotically stable, if it is uniformly stable, and for any \(\alpha>0\) there exists a \(\delta(\alpha)>0\), such that for any \(\epsilon>0\) there is a \(T(\epsilon,\alpha,\delta)>0\) such that \(\sigma\geq t_{0}\) and \(\xi\in PC_{\alpha}\cap M_{0}(\sigma,\delta )\) implies that \(x(t,\sigma,\xi)\in M(t,\epsilon)\) for \(t\geq\sigma+T\).

In order to obtain our results, we will use the following function classes:

$$\begin{aligned}& K_{1}=\bigl\{ u\in C\bigl(\mathbb{R}^{+}, \mathbb{R}^{+}\bigr):u(0)=0,u(s) \mbox{ is strictly increasing in } s \bigr\} ; \\& K_{2}=\bigl\{ u\in C\bigl(\mathbb{R}^{+}, \mathbb{R}^{+}\bigr):u(0)=0,u(s)>0 \mbox{ for } s>0 \bigr\} ; \\& K_{3}=\bigl\{ u\in C\bigl(\mathbb{R}^{+}, \mathbb{R}^{+}\bigr):u(0)=0,u(s)>s\mbox{ for }s>0, u(s) \mbox{ is strictly increasing in } s \bigr\} . \end{aligned}$$

3 Main results

In this section, we present and prove our main results on the uniform stability and asymptotic stability of sets for system (2.1), using piecewise continuous Lyapunov functions together with the Razumikhin method.

Theorem 3.1

Let conditions (A) and (H1)-(H4) be satisfied and suppose that there exist functions \(V\in\nu_{0}\), \(a, b\in K_{1}\), \(c\in K_{2}\), \(P\in K_{3}\), and the following conditions are fulfilled:

  1. (i)

    \(a(d(x,M(t)))\leq EV(t,x)\leq b(d(x,M(t)))\) for all \((t,x)\in[t_{0}-\tau, \infty)\times M(t,\rho)\);

  2. (ii)

    \(ELV(t,x(t))\leq\eta(t)c(EV(t,x(t)))\), \(t\neq t_{k}\), whenever \(EV(t+s,x(t+s))\leq P(EV(t,x(t)))\) for \(-\tau\leq s\leq0\), where \(x(t)\) is any solution of system (2.1), and \(\eta:[t_{0},\infty)\rightarrow\mathbb{R}^{+}\) is locally integrable;

  3. (iii)

    \(EV(t_{k},x+I_{k}(t_{k},x))\leq P^{-1}(EV(t^{-}_{k},x))\) for each \(k\in\mathbb{Z}^{+}\), and all \(x\in M(t, \rho_{1})\), where \(P^{-1}\) is the inverse of the function P;

  4. (iv)

    \(\sup_{k\in\mathbb{Z}^{+}}\{t_{k}-t_{k-1}\}<\infty \), and \(\int_{P^{-1}(\mu)}^{\mu}\frac{ds}{c(s)}-\int_{t_{k-1}}^{t_{k}}\eta (s)\, ds>0\) for all \(\mu\in(0,\infty)\), \(k\in\mathbb{Z}^{+}\).

Then the set M is uniformly stable with respect to the solution of system (2.1).

Proof

For any given \(\epsilon>0\) and \(\alpha>0\), without loss of generality we assume that \(\epsilon\leq\rho_{1}\). We can choose \(\delta=\delta(\epsilon,\alpha)>0\) such that \(P(b(\delta ))<a(\epsilon)\) and \(\delta<\alpha\). From \(b(\delta)< P(b(\delta))<a(\epsilon)\leq b(\epsilon)\) we know that \(\delta<\epsilon\).

For \(\sigma\geq t_{0}\), \(\xi\in PC_{\alpha}\cap M_{0}(\sigma,\delta )\), let \(x(t)=x(t;\sigma,\xi)\) be the solution of system (2.1), where \(\sigma\in[t_{m-1},t_{m})\) for some \(m\in\mathbb {Z}^{+}\). Then, for \(\sigma-\tau\leq t\leq\sigma\), from condition (i) we have

$$ a\bigl(d\bigl(x(t),M(t)\bigr)\bigr)\leq EV\bigl(t,x(t)\bigr)\leq b \bigl(d\bigl(x(t),M(t)\bigr)\bigr)\leq b(\delta)\leq P\bigl(b(\delta)\bigr)< a( \epsilon). $$
(3.1)

From the above inequality, we obtain \(d(x(t),M(t))<\epsilon\) for \(\sigma -\tau\leq t\leq\sigma\).

Next, we prove that \(d(x(t),M(t))<\epsilon\) for \(t\in[\sigma,\sigma +\beta)\). Suppose, on the contrary, that \(d(x(t),M(t))>\epsilon\) for some \(t\in [\sigma,\sigma+\beta)\), and let \(\hat{t}=\inf\{\sigma\leq t\leq\sigma+\beta \mid d(x(t),M(t))>\epsilon\}\). Noting that \(d(x(\sigma),M(\sigma))<\epsilon\), we see that \(\hat{t}>\sigma\), that \(d(x(t),M(t))\leq\epsilon\leq\rho_{1}\) for \(t\in[\sigma-\tau,\hat{t})\), and that either \(d(x(\hat{t}),M(\hat {t}))=\epsilon\), or \(d(x(\hat{t}),M(\hat{t}))>\epsilon\) and \(\hat {t}=t_{k}\) for some k.

In the latter case, \(d(x(\hat{t}),M(\hat{t}))\leq\rho\). From condition (H3) we have

$$d\bigl(x(\hat{t}),M(\hat {t})\bigr)=d\bigl(x(t_{k}),M(t_{k}) \bigr)=d\bigl(x\bigl(t^{-}_{k}\bigr)+I_{k} \bigl(t_{k},x\bigl(t^{-}_{k}\bigr) \bigr),M(t_{k})\bigr)\leq \rho, $$

and thus, in either case, \(EV(t,x(t))\) is well defined for \(t\in[\sigma -\tau,\hat{t}]\).

For \(t\in[\sigma,\hat{t}]\) define

$$ EV(t)=EV\bigl(t,x(t)\bigr). $$
(3.2)

Then for \(t\in[\sigma-\tau,\hat{t}]\), by condition (i), we get

$$a\bigl(d\bigl(x(t),M(t)\bigr)\bigr)\leq EV(t)\leq b\bigl(d\bigl(x(t),M(t)\bigr) \bigr). $$

Let \(\tilde{t}=\inf\{t\in[\sigma,\hat{t}] \mid EV(t)>a(\epsilon)\}\). Since \(EV(\sigma)< a(\epsilon)\) and \(EV(\tilde{t})\geq a(\epsilon)\), it follows that \(\tilde{t}\in(\sigma,\hat{t}]\) and \(EV(t)< a(\epsilon)\) for \(t\in[\sigma-\tau,\tilde{t})\). We claim that \(EV(\tilde{t})=a(\epsilon )\) and that \(\tilde{t}\neq t_{k}\) for any k. In fact, if \(\tilde{t}=t_{k}\) for some k, then by condition (iii) we have

$$a(\epsilon)\leq EV(\tilde{t})\leq P^{-1}\bigl(EV\bigl( \tilde{t}^{-}\bigr)\bigr)< EV\bigl(\tilde {t}^{-}\bigr)\leq a( \epsilon), $$

which is a contradiction. Thus \(\tilde{t}\neq t_{k}\) for any k, and this in turn implies \(EV(\tilde{t})=a(\epsilon)\), since \(EV(t)\) is continuous at \(\tilde{t}\) whenever \(\tilde{t}\neq t_{k}\).

Now let us first consider the case \(t_{m-1}\leq\tilde{t}< t_{m}\). Let \(\bar{t}=\sup\{t\in[\sigma,\tilde{t}] \mid EV(t)\leq P^{-1}(a(\epsilon))\}\). Since \(EV(\sigma)< P^{-1}(a(\epsilon))\), \(EV(\tilde {t})=a(\epsilon)>P^{-1}(a(\epsilon))\), and \(EV(t)\) is continuous on \([\sigma,\tilde{t}]\), we have \(\bar{t}\in(\sigma,\tilde{t})\), \(EV(\bar {t})=P^{-1}(a(\epsilon))\), and \(EV(t)\geq P^{-1}(a(\epsilon))\) for \(t\in [\bar{t},\tilde{t}]\).

For \(t\in[\bar{t},\tilde{t}]\) and \(-\tau\leq s\leq0\), we have

$$EV(t+s)\leq a(\epsilon)=P\bigl(P^{-1}\bigl(a(\epsilon)\bigr)\bigr)\leq P\bigl(EV(t)\bigr). $$

From condition (ii), we obtain

$$ELV(t)\leq\eta(t)c\bigl(EV(t)\bigr) $$

for all \(t\in[\bar{t},\tilde{t}]\). Integrating the above differential inequality yields

$$ \int_{EV(\bar{t})}^{EV(\tilde{t})}\frac{ds}{c(s)} \leq \int_{\bar{t}}^{\tilde{t}}\eta(s)\, ds\leq\int _{t_{m-1}}^{t_{m}}\eta(s)\, ds. $$
(3.3)

On the other hand, by condition (iv), we obtain

$$\int_{EV(\bar{t})}^{EV(\tilde{t})}\frac{ds}{c(s)}= \int _{P^{-1}(a(\epsilon))}^{a(\epsilon)}\frac{ds}{c(s)}>\int _{t_{m-1}}^{t_{m}}\eta(s)\, ds, $$

which is in contradiction with (3.3).

Now, assume that \(t_{k}<\tilde{t}<t_{k+1}\) for some \(k\in\mathbb {Z}^{+}\) and \(k\geq m\). Then by condition (iii) we have

$$EV(t_{k}) \leq P^{-1}\bigl(EV\bigl(t^{-}_{k} \bigr)\bigr)< P^{-1}\bigl(a(\epsilon)\bigr). $$

Let \(\bar{t}=\sup\{t\in[t_{k},\tilde{t}] \mid EV(t)\leq P^{-1}(a(\epsilon))\}\). Then \(\bar{t}\in(t_{k},\tilde{t})\), \(EV(\bar {t})=P^{-1}(a(\epsilon))\), and \(EV(t)\geq P^{-1}(a(\epsilon))\) for \(t\in [\bar{t},\tilde{t}]\). Therefore, for \(t\in[\bar{t},\tilde{t}]\) and \(-\tau\leq s\leq0\), we have

$$EV(t+s)\leq a(\epsilon)=P\bigl(P^{-1}\bigl(a(\epsilon)\bigr)\bigr)\leq P\bigl(EV(t)\bigr). $$

Then, by condition (ii), we have

$$ELV(t)\leq\eta(t)c\bigl(EV(t)\bigr)\quad \mbox{for all } t\in[\bar{t}, \tilde{t}]. $$

Integrating the above differential inequality yields

$$ \int_{EV(\bar{t})}^{EV(\tilde{t})}\frac{ds}{c(s)} \leq \int_{\bar{t}}^{\tilde{t}}\eta(s)\, ds\leq\int _{t_{k}}^{t_{k+1}}\eta(s)\, ds. $$
(3.4)

On the other hand, by condition (iv), we have

$$\int_{EV(\bar{t})}^{EV(\tilde{t})}\frac{ds}{c(s)}= \int _{P^{-1}(a(\epsilon))}^{a(\epsilon)}\frac{ds}{c(s)}>\int _{t_{k}}^{t_{k+1}}\eta(s)\, ds, $$

which contradicts (3.4). So in either case we reach a contradiction, and therefore

$$d\bigl(x(t,\sigma,\xi),M(t)\bigr)< \epsilon\quad \mbox{for } t\in[\sigma,\sigma + \beta). $$

From condition (A3) we know that \([\sigma,\sigma+\beta)=[\sigma ,\infty)\), hence \(x(t)\in M(t,\epsilon)\), for all \(t\geq\sigma\), which implies that the set M is uniformly stable with respect to the solution of system (2.1). The proof of Theorem 3.1 is complete. □

Remark 3.1

From Theorem 3.1, we know that impulsive perturbations may cause uniform stability even if the unperturbed system is unstable.

The following result on the asymptotic stability of sets reveals that impulsive perturbations can make stable systems asymptotically stable.

Theorem 3.2

Let conditions (A) and (H1)-(H4) be satisfied and suppose that there exist functions \(V\in\nu_{0}\), \(a, b\in K_{1}\), \(h_{k}\in C(\mathbb{R}^{+}, \mathbb {R}^{+})\) for \(k\in\mathbb{Z}^{+}\), and the following conditions are fulfilled:

  1. (i)

    \(a(d(x,M(t)))\leq EV(t,x)\leq b(d(x,M(t)))\) for all \((t,x)\in[t_{0}-\tau, \infty)\times M(t,\rho)\);

  2. (ii)

    \(EV(t_{k},x+I_{k}(t_{k},x))-EV(t^{-}_{k},x)\leq -h_{k}(EV(t^{-}_{k},x))\) for all \(k\in\mathbb{Z}^{+}\) and \(x\in M(t,\rho_{1})\);

  3. (iii)

    for any solution \(x(t)\) of system (2.1), \(ELV(t,x)\leq0\); moreover, for any \(\sigma\geq t_{0}\) and \(r>0\), there exists a sequence \(\{r_{k}\}\) with \(r_{k}\geq0\) and \(\sum_{k=1}^{\infty}r_{k}=\infty\) such that \(EV(t,x)\geq r\) for \(t\geq \sigma\) implies \(h_{k}(EV(t^{-}_{k},x))\geq r_{k}\).

Then the set M with respect to the solution of system (2.1) is uniformly stable and asymptotically stable.

Proof

First, we show that the set M is uniformly stable.

For given \(\epsilon>0 \) (\(\epsilon\leq\rho_{1}\)), \(\alpha>0\), we choose a \(\delta(\epsilon,\alpha)>0\) such that \(b(\delta)\leq a(\epsilon )\) and \(\delta<\alpha\). For any \(\sigma\geq t_{0}\) and \(\xi\in PC_{\alpha}\cap M_{0}(\sigma ,\delta)\), let \(x(t)=x(t;\sigma,\xi)\) be the solution of system (2.1). We will show that \(x(t)\in M(t,\epsilon)\) for \(t\in[\sigma ,\sigma+\beta)\).

Set \(EV(t)=EV(t,x(t))\), where \(\sigma\in[t_{m-1},t_{m})\) for some \(m\in\mathbb{Z}^{+}\). Then condition (iii) implies that \(ELV(t)\leq0\) for \(t\in[\sigma,\sigma +\beta)\cap([\sigma,t_{m})\cup(\bigcup_{k=m}^{\infty}[t_{k-1},t_{k})))\), \(k\in\mathbb{Z}^{+}\).

By condition (ii) we have \(EV(t_{i})-EV(t^{-}_{i})\leq0 \) for all \(t_{i}\in[\sigma, \sigma+\beta)\). Thus \(EV(t)\) is non-increasing on \([\sigma,\sigma+\beta)\). From condition (i) it follows that

$$a\bigl(d\bigl(x(t),M(t)\bigr)\bigr)\leq EV(t)\leq EV(\sigma)\leq b(\delta)\leq a(\epsilon) $$

for \(\sigma\leq t<\sigma+\beta\). From condition (A3) we obtain \([\sigma,\sigma+\beta)=[\sigma,\infty )\). Since \(d(x(t),M(t))\leq\epsilon\) for all \(t\geq\sigma\), it follows that \(x(t)\in M(t,\epsilon)\) for \(t\geq\sigma\). That is, the set M is uniformly stable with respect to the solution of system (2.1).

Next we shall prove that the set M is asymptotically stable.

From conditions (ii) and (iii), together with \(EV(t)\geq0\), we see that \(EV(t)\) is non-increasing and bounded below on the interval \([\sigma,\infty)\). So the limit \(\lim_{t\rightarrow\infty}EV(t)\) exists.

Assume \(\sigma\in[t_{m-1},t_{m})\) for some \(m\in\mathbb{Z}^{+}\), and set \(\lim_{t\rightarrow\infty}EV(t)=r\geq0\); then \(EV(t)\geq r\) for \(t\geq\sigma\). Suppose, for contradiction, that \(r>0\). Then by condition (iii) there is a sequence \(\{r_{k}\}\) with \(r_{k}\geq0\) and \(\sum_{k=1}^{\infty}r_{k}=\infty\) such that \(h_{k}(EV(t^{-}_{k},x))\geq r_{k}\).

By conditions (ii) and (iii) we get

$$\begin{aligned} \begin{aligned} EV(t)&\leq EV(\sigma)+\sum_{\sigma\leq t_{k}\leq t} \bigl(EV(t_{k})-EV\bigl(t^{-}_{k}\bigr)\bigr) \\ &\leq EV(\sigma)-\sum_{\sigma\leq t_{k}\leq t}h_{k}\bigl(EV \bigl(t^{-}_{k}\bigr)\bigr) \\ &\leq EV(\sigma)-\sum_{\sigma\leq t_{k}\leq t}r_{k} \rightarrow -\infty \quad (t\rightarrow\infty), \end{aligned} \end{aligned}$$

which is a contradiction. Hence we have \(r=0\), which implies that \(a(d(x,M(t)))\rightarrow0\) as \(t\rightarrow\infty\). That is, \(x(t)\rightarrow M(t)\) as \(t\rightarrow\infty\). The proof of Theorem 3.2 is complete. □

Theorem 3.3

Let conditions (A) and (H1)-(H4) be satisfied and suppose that there exist functions \(V\in\nu_{0}\), \(a, b\in K_{1}\), \(\psi_{k}\), \(C\in K_{2}\), and the following conditions are fulfilled:

  1. (i)

    \(a(d(x,M(t)))\leq EV(t,x)\leq b(d(x,M(t)))\) for all \((t,x)\in[t_{0}-\tau, \infty)\times M(t,\rho)\);

  2. (ii)

    \(EV(t_{k},x+I_{k}(t_{k},x))\leq\psi _{k}(EV(t^{-}_{k},x))\) for all \(k\in\mathbb{Z}^{+}\) and \(x\in M(t,\rho)\);

  3. (iii)

    for any solution \(x(t)\) of system (2.1), \(ELV(t,x)\leq-\theta(t)C(EV(t,x))\) for \(t\neq t_{k}\), where \(\theta:[t_{0},\infty)\rightarrow\mathbb{R}^{+}\) is locally integrable, and there exists \(\mu_{0}>0\) such that for any \(\mu\in(0,\mu _{0})\),

    $$\int_{\mu}^{\psi_{k}(\mu)}\frac{ds}{C(s)}-\int _{t_{k-1}}^{t_{k}}\theta (s)\, ds\leq-\gamma_{k}, $$

    where \(\gamma_{k}\geq0\) with \(\sum_{k=1}^{\infty}\gamma_{k}=\infty\).

Then the set M with respect to the solution of system (2.1) is uniformly stable and asymptotically stable.

Proof

Without loss of generality, for any given \(\epsilon >0\) and \(\alpha>0\), we can assume that \(\epsilon\leq\rho_{1}\). We choose β with \(0<\beta<\min\{a(\epsilon),\mu_{0}\}\) such that \(\psi _{k}(s)< a(\epsilon)\) for \(0\leq s\leq\beta\) and all \(k\in\mathbb{Z}^{+}\).

Let \(\delta=\delta(\epsilon,\alpha)>0\) be such that \(b(\delta )<\beta\) and \(\delta<\alpha\). Let \(x(t)=x(t;\sigma,\xi)\) be the solution of system (2.1), where \(\sigma\geq t_{0}\) and \(\xi \in PC_{\alpha}\cap M_{0}(\sigma,\delta)\). At first, we show that

$$ x(t)\in M(t,\epsilon)\quad \mbox{for } t\in [\sigma,\sigma+\beta). $$
(3.5)

Set \(EV(t)=EV(t,x(t))\) and \(\sigma\in[t_{m-1},t_{m})\) for some \(m\in \mathbb{Z}^{+}\).

By condition (iii), we get \(ELV(t,x)\leq0\) for \(\sigma\leq t< t_{m}\). It follows that

$$EV(t)\leq EV(\sigma)\leq b(\delta)< \beta< a(\epsilon) $$

for \(\sigma\leq t< t_{m}\). So for \(\sigma\leq t< t_{m}\), we have \(x(t)\in M(t,\epsilon)\). Thus if (3.5) is not true, then there exists a \(\bar{t}\in[t_{k},t_{k+1})\) for some \(k\in\mathbb{Z}^{+}\), \(k\geq m\) such that \(x(t)\in M(t,\epsilon)\) for \(\sigma\leq t<\bar{t}\), and \(x(\bar{t})\notin M(\bar{t},\epsilon)\). Using conditions (ii) and (iii), we have, for \(i=m,m+1,\ldots,k-1\),

$$ ELV(t)\leq-\theta(t)C\bigl(EV(t)\bigr), \quad t_{i}\leq t< t_{i+1} $$
(3.6)

and

$$ EV(t_{i})\leq\psi_{i}\bigl(EV \bigl(t^{-}_{i}\bigr)\bigr). $$
(3.7)

So by (3.7), we have

$$ EV(t_{m})\leq\psi_{m}\bigl(EV \bigl(t^{-}_{m}\bigr)\bigr)\leq \psi_{m}\bigl(b( \delta)\bigr)< a(\epsilon). $$
(3.8)

From (3.6) and (3.7), for \(i=m,m+1,\ldots,k-1\), we have

$$ \int_{EV(t_{i})}^{EV(t_{i+1}^{-})}\frac{ds}{C(s)}\leq- \int_{t_{i}}^{t_{i+1}}\theta(s)\, ds $$
(3.9)

and

$$ \int_{EV(t^{-}_{i+1})}^{EV(t_{i+1})}\frac{ds}{C(s)}\leq \int_{EV(t^{-}_{i+1})}^{\psi_{i+1}(EV(t^{-}_{i+1}))}\frac{ds}{C(s)}. $$
(3.10)

Thus by (3.9), (3.10), and condition (iii), we obtain

$$ \int_{EV(t_{i})}^{EV(t_{i+1})}\frac{ds}{C(s)} \leq \int_{EV(t^{-}_{i+1})}^{\psi_{i+1}(EV(t^{-}_{i+1}))}\frac{ds}{C(s)} -\int _{t_{i}}^{t_{i+1}}\theta(s)\, ds\leq-\gamma_{i+1}, $$
(3.11)

which implies \(EV(t_{i+1})\leq EV(t_{i})\) for \(i=m,m+1,\ldots,k-1\). From this and (3.8) we have

$$ EV(t_{k})\leq\cdots\leq EV(t_{m})< a(\epsilon). $$
(3.12)

But by condition (i), we have \(a(\epsilon)\leq a(d(x(\bar{t}),M(\bar {t})))\leq EV(\bar{t})\leq EV(t_{k})< a(\epsilon)\), which is a contradiction. Thus (3.5) holds, and from condition (A3) it follows that \([\sigma,\sigma+\beta)=[\sigma,\infty)\); hence \(x(t)\in M(t,\epsilon)\) for all \(t\geq\sigma\). So the set M is uniformly stable with respect to the solution of system (2.1).

To prove the asymptotic stability, we observe from the proof of (3.11) that \(EV(t_{i+1})\leq EV(t_{i})\) holds for all \(i\geq m\). Thus the limit \(\lim_{i\rightarrow\infty }EV(t_{i})=\alpha\geq0\) exists. If \(\alpha>0\), (3.11) yields

$$ \int_{EV(t_{i})}^{EV(t_{i+1})}\frac {ds}{C(s)}\leq- \gamma_{i+1},\quad i=m,m+1,\ldots. $$
(3.13)

Let \(\bar{c}=\inf_{\alpha\leq s< a(\epsilon)}C(s)\). From (3.13), we get

$$ EV(t_{i+1})\leq EV(t_{i})-\bar{c}\gamma _{i+1},\quad i=m,m+1,\ldots, $$
(3.14)

which implies

$$EV(t_{k})\leq EV(t_{m})-\bar{c}\sum _{i=m}^{k-1}\gamma_{i+1}\rightarrow -\infty $$

as \(k\rightarrow\infty\). It is a contradiction and so \(\alpha=0\).

Since \(EV(t)\leq EV(t_{k})\) for \(t_{k}\leq t< t_{k+1}\), it follows that \(\lim_{t\rightarrow\infty}EV(t)=0\), which yields \(\lim_{t\rightarrow \infty}d(x(t),M(t))=0\). The proof of Theorem 3.3 is complete. □

4 Illustrative examples

As an application, we consider the following examples.

Example 4.1

Consider the scalar impulsive stochastic delay differential equation:

$$ \left \{ \textstyle\begin{array}{l} dx(t)=(-x(t)+1.2x(t-\tau))\, dt+\frac{1}{\sqrt{10}}x(t-\tau)\, dW(t), \quad t\neq t_{k}, \\ x(t_{k})=0.5x(t_{k}^{-}),\quad k=1,2,\ldots, \end{array}\displaystyle \right . $$
(4.1)

where \(\tau>0\), \(t_{0}< t_{1}< t_{2}<\cdots<t_{k}\rightarrow\infty\) as \(k\rightarrow\infty\). Assume that the following condition is satisfied:

  • \(t_{k}-t_{k-1}<-\frac{\ln0.5}{1.6}\), for \(k\in \mathbb{Z}^{+}\), where \(t_{0}\geq0\).
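
This gap condition can be checked numerically: with \(P(s)=4s\), \(c(s)=s\), and \(\eta=3.2\) as chosen below, condition (iv) of Theorem 3.1 requires \(\int_{\mu/4}^{\mu}\frac{ds}{s}=\ln4>3.2(t_{k}-t_{k-1})\), which is exactly the bound above. A small sketch (the trial gaps are illustrative):

```python
# Margin of condition (iv) of Theorem 3.1 for the data of this example:
# P(s) = 4s, c(s) = s, eta(t) = 3.2, so the integral term is ln 4 and
# the margin is ln 4 - 3.2 * (t_k - t_{k-1}).
import math

def condition_iv_margin(gap, eta=3.2):
    """Positive iff condition (iv) holds for an impulse gap of this size."""
    return math.log(4.0) - eta * gap

# the bound assumed above; note ln 4 / 3.2 = -ln 0.5 / 1.6
max_gap = -math.log(0.5) / 1.6
```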

Let \(M(t)=\{(t,0):t\in[t_{0}-\tau,\infty)\}\), \(V(t,x)=V(x)=0.5x^{2}\), \(P(s)=4s\), \(c(s)=s\), then

$$EV\bigl(x+I_{k}(t_{k},x)\bigr)=EV(0.5x)=E \bigl(0.125x^{2} \bigr)=P^{-1}\bigl(EV(x)\bigr), $$

and, whenever a solution \(x(t)\) of system (4.1) satisfies the Razumikhin condition

$$EV\bigl(t+s,x(t+s)\bigr)\leq P\bigl(EV\bigl(x(t)\bigr)\bigr),\quad -\tau\leq s \leq0, t\geq t_{0}, $$

we have \(Ex^{2}(t-\tau)\leq4Ex^{2}(t)\), \(t\geq t_{0}\). Hence,

$$\begin{aligned} ELV\bigl(x(t)\bigr) =&-Ex^{2}(t)+1.2Ex(t)x(t-\tau) +0.5 \times0.1Ex^{2}(t-\tau) \\ \leq& -Ex^{2}(t)+2.4Ex^{2}(t)+0.2Ex^{2}(t) \\ =&\eta(t)c\bigl(EV\bigl(x(t)\bigr)\bigr), \end{aligned}$$

where \(\eta(t)=3.2>0\).

We have

$$t_{k}-t_{k-1}< - \frac{\ln0.5}{1.6} $$

and for any \(\mu>0\), \(k\in\mathbb{Z}^{+}\),

$$\begin{aligned} \int^{\mu}_{P^{-1}(\mu)}\frac{ds}{c(s)}-\int _{t_{k-1}}^{t_{k}}\eta(s)\, ds =& \int^{\mu}_{\mu/4} \frac{ds}{c(s)}- \int_{t_{k-1}}^{t_{k}}\eta(s)\, ds \\ >& -2\ln0.5- \biggl(-\frac{\ln0.5}{1.6} \biggr)\times3.2 \\ =&0. \end{aligned}$$

Thus all of the conditions of Theorem 3.1 are satisfied. Therefore, it follows from Theorem 3.1 that the set M is uniformly stable with respect to the solution of system (4.1). The simulation result of system (4.1) is shown in Figure 1, and the simulation of system (4.1) without impulses is shown in Figure 2. From Figures 1 and 2 we find that, although a stochastic delay differential equation without impulses may be unstable, adding impulses may lead to stability. That is, impulsive perturbations play an important role in the stability behavior of nonlinear systems.
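
A minimal Euler-Maruyama simulation of (4.1), in the spirit of Figures 1 and 2, can be sketched as follows; the delay, step size, horizon, impulse spacing, constant initial history, and seed are illustrative assumptions (the spacing 0.4 satisfies the gap condition \(t_{k}-t_{k-1}<-\ln0.5/1.6\approx0.433\)).

```python
# One Euler-Maruyama sample path of (4.1):
#   dx = (-x(t) + 1.2 x(t - tau)) dt + x(t - tau) / sqrt(10) dW,
#   x(t_k) = 0.5 x(t_k^-),
# with equally spaced impulses.  tau, dt, T, the gap 0.4, the constant
# history, and the seed are illustrative assumptions.
import math
import random

def simulate_41(with_impulses, T=20.0, tau=1.0, dt=0.01, gap=0.4, seed=1):
    rng = random.Random(seed)
    nd = int(round(tau / dt))            # delay in grid steps
    path = [1.0] * (nd + 1)              # history x(t) = 1 on [-tau, 0]
    next_imp = gap
    for i in range(int(round(T / dt))):
        t = i * dt
        x, xd = path[-1], path[-1 - nd]
        dW = rng.gauss(0.0, math.sqrt(dt))
        x_new = x + (-x + 1.2 * xd) * dt + xd / math.sqrt(10.0) * dW
        if with_impulses and t + dt >= next_imp:
            x_new *= 0.5                 # the impulse x(t_k) = 0.5 x(t_k^-)
            next_imp += gap
        path.append(x_new)
    return path
```

Plotting `simulate_41(True)` against `simulate_41(False)` reproduces the qualitative picture of Figures 1 and 2: in our runs the impulsive path stays near zero while the impulse-free path tends to grow.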

Figure 1
figure 1

State trajectory \(\pmb{x(t) }\) of system ( 4.1 ) in \(\pmb{(t,x)}\) plane.

Figure 2
figure 2

State trajectory \(\pmb{x(t) }\) of system ( 4.1 ) without impulses in \(\pmb{(t,x)}\) plane.

Example 4.2

Consider the scalar impulsive stochastic delay differential equation:

$$ \left \{ \textstyle\begin{array}{l} dx(t)=(mx(t)+nx(t-\tau))\, dt+(px(t)+qx(t-\tau))\, dW(t),\quad t\neq t_{k}, \\ x(t_{k})=ux(t_{k}^{-}), \quad k=1,2,\ldots, \end{array}\displaystyle \right . $$
(4.2)

where \(\tau>0\), \(t_{0}< t_{1}< t_{2}<\cdots<t_{k}\rightarrow\infty\) as \(k\rightarrow\infty\). Assume that the following condition is satisfied:

  • \(0< u<1\) and \(m+|n|u^{-1}+\frac {1}{2}p^{2}+|pq|u^{-1}+\frac{1}{2}q^{2}u^{-2}<0\).

Let \(M(t)=\{(t,0):t\in[t_{0}-\tau,\infty)\}\), \(V(t,x)=V(x)=\frac {1}{2}x^{2}\), \(h_{k}(s)=(1-u^{2})s\), then

$$\begin{aligned}& EV\bigl(x+I_{k}(t_{k},x)\bigr)=EV(ux)=E \biggl( \frac{1}{2}u^{2}x^{2} \biggr), \\& EV\bigl(t_{k},x+I_{k}(t_{k},x)\bigr)-EV \bigl(t^{-}_{k},x\bigr)=-\bigl(1-u^{2}\bigr)E\biggl( \frac {1}{2}x^{2}\biggr)=-h_{k}\bigl(EV\bigl(t^{-}_{k},x \bigr)\bigr). \end{aligned}$$

Clearly, we have \(Ex^{2}(t-\tau)\leq u^{-2}Ex^{2}(t)\), \(t\geq t_{0}\). Hence,

$$\begin{aligned} ELV\bigl(x(t)\bigr) =&mEx^{2}(t)+nEx(t)x(t-\tau)+ \frac {1}{2}p^{2}Ex^{2}(t)+pqEx(t)x(t-\tau) + \frac{1}{2}q^{2}Ex^{2}(t-\tau) \\ \leq& mEx^{2}(t)+|n|u^{-1}Ex^{2}(t)+ \frac {1}{2}p^{2}Ex^{2}(t)+|pq|u^{-1}Ex^{2}(t)+ \frac {1}{2}q^{2}u^{-2}Ex^{2}(t) \\ =&2\biggl(m+|n|u^{-1}+\frac{1}{2}p^{2}+|pq|u^{-1}+ \frac{1}{2}q^{2}u^{-2}\biggr)EV\bigl(x(t)\bigr)< 0. \end{aligned}$$

We now check condition (iii): for any \(\sigma\geq t_{0}\) and \(r>0\), we must find \(\{r_{k}\}\) with \(r_{k}\geq0\) and \(\sum_{k=1}^{\infty}r_{k}=\infty\) such that \(EV(t,x)\geq r\) for \(t\geq\sigma\) implies \(h_{k}(EV(t^{-}_{k},x))\geq r_{k}\). Since \(h_{k}(EV(t^{-}_{k},x))=(1-u^{2})EV(t^{-}_{k},x)\), if \(EV(t,x)\geq r\) for \(t\geq\sigma\), then \(h_{k}(EV(t^{-}_{k},x))\geq(1-u^{2})r\). Taking \(r_{k}=(1-u^{2})r\), we have \(\sum_{k=1}^{\infty}r_{k}=\infty\). Thus all of the conditions in Theorem 3.2 are satisfied. Therefore, it follows from Theorem 3.2 that the set M is uniformly stable and asymptotically stable with respect to the solution of system (4.2).
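
The structural assumption of Example 4.2 is easy to test numerically for candidate parameters; the parameter values used in the sketch below are illustrative assumptions, not taken from the paper.

```python
# The quantity m + |n|/u + p^2/2 + |pq|/u + q^2/(2u^2) from the
# assumption of Example 4.2; the example requires 0 < u < 1 and this
# quantity to be negative.
def example42_condition(m, n, p, q, u):
    return m + abs(n) / u + 0.5 * p * p + abs(p * q) / u + 0.5 * q * q / (u * u)
```

For instance, (m, n, p, q, u) = (-10, 0.5, 0.5, 0.5, 0.5) satisfies the inequality, while replacing m by 0 violates it.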

Example 4.3

Consider the scalar impulsive stochastic delay differential equation:

$$ \left \{ \textstyle\begin{array}{l} dx(t)=(ax(t)+bx(t-\tau))\, dt+(cx(t)+rx(t-\tau))\, dW(t), \quad t\neq t_{k}, \\ x(t_{k})=hx(t_{k}^{-}), \quad k=1,2,\ldots, \end{array}\displaystyle \right . $$
(4.3)

where \(\tau>0\), \(t_{0}< t_{1}< t_{2}<\cdots<t_{k}\rightarrow\infty\) as \(k\rightarrow\infty\). Assume that the following conditions are satisfied:

  1. (i)

    \(0< h<1\) and \(a+|b|h^{-1}+\frac {1}{2}c^{2}+|cr|h^{-1}+\frac{1}{2}r^{2}h^{-2}<0\);

  2. (ii)

    \(t_{k}-t_{k-1}>\frac{\ln h}{a+|b|h^{-1}+\frac {1}{2}c^{2}+|cr|h^{-1}+\frac{1}{2}r^{2}h^{-2}}\), for \(k\in\mathbb {Z}^{+}\), where \(t_{0}\geq0\).

Let \(M(t)=\{(t,0):t\in[t_{0}-\tau,\infty)\}\), \(V(t,x)=V(x)=\frac {1}{2}x^{2}\), \(\psi_{k}(s)=h^{2}s\), \(C(s)=s\), then

$$EV\bigl(x+I_{k}(t_{k},x)\bigr)=EV(hx)=E \biggl( \frac{1}{2}h^{2}x^{2} \biggr)=\psi_{k} \bigl(EV(x)\bigr), $$

and we assume that any solution \(x(t)\) of system (4.3) satisfies

$$EV\bigl(t+s,x(t+s)\bigr)\leq h^{-2}EV\bigl(x(t)\bigr), \quad -\tau\leq s \leq0, t\geq t_{0}. $$

Clearly, we have \(Ex^{2}(t-\tau)\leq h^{-2}Ex^{2}(t)\), \(t\geq t_{0}\). Hence,

$$\begin{aligned} ELV\bigl(x(t)\bigr) =&aEx^{2}(t)+bEx(t)x(t-\tau)+ \frac {1}{2}c^{2}Ex^{2}(t)+crEx(t)x(t-\tau) + \frac{1}{2}r^{2}Ex^{2}(t-\tau) \\ \leq& aEx^{2}(t)+|b|h^{-1}Ex^{2}(t)+ \frac {1}{2}c^{2}Ex^{2}(t)+|cr|h^{-1}Ex^{2}(t)+ \frac {1}{2}r^{2}h^{-2}Ex^{2}(t) \\ =&-\theta(t)C\bigl(EV\bigl(x(t)\bigr)\bigr)< 0, \end{aligned}$$

where \(\theta(t)=-2(a+|b|h^{-1}+\frac{1}{2}c^{2}+|cr|h^{-1}+\frac {1}{2}r^{2}h^{-2})>0\).

We have

$$t_{k}-t_{k-1}> \frac{\ln h}{a+|b|h^{-1}+\frac {1}{2}c^{2}+|cr|h^{-1}+\frac{1}{2}r^{2}h^{-2}} $$

and for any \(\mu>0\), \(k\in\mathbb{Z}^{+}\),

$$\begin{aligned} \int^{\psi_{k}(\mu)}_{\mu}\frac{ds}{C(s)}-\int _{t_{k-1}}^{t_{k}}\theta(s)\, ds =& \int ^{h^{2}\mu}_{\mu}\frac{ds}{C(s)}- \int _{t_{k-1}}^{t_{k}}\theta(s)\, ds \\ < & 2\ln h-\frac{\ln h}{a+|b|h^{-1}+\frac {1}{2}c^{2}+|cr|h^{-1}+\frac{1}{2}r^{2}h^{-2}} \\ &{} \times (-2) \biggl(a+|b|h^{-1}+\frac{1}{2}c^{2}+|cr|h^{-1}+ \frac {1}{2}r^{2}h^{-2} \biggr) \\ =&4\ln h. \end{aligned}$$

Setting \(\gamma_{k}=-4\ln h\), we have \(\gamma_{k}>0\) and \(\sum_{k=1}^{\infty}\gamma_{k}=\infty\). Thus all of the conditions in Theorem 3.3 are satisfied. Therefore, it follows from Theorem 3.3 that the set M is uniformly stable and asymptotically stable with respect to the solution of system (4.3).
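
As with the previous examples, conditions (i) and (ii) of Example 4.3 can be checked numerically for candidate parameters; the parameter values used in the sketch below are illustrative assumptions.

```python
# Checks for Example 4.3: the sign condition (i), the minimal impulse
# gap from condition (ii), and the constant gamma_k = -4 ln h derived
# above.  The parameters passed in are illustrative assumptions.
import math

def example43_checks(a, b, c, r, h):
    s = a + abs(b) / h + 0.5 * c * c + abs(c * r) / h + 0.5 * r * r / (h * h)
    min_gap = math.log(h) / s        # condition (ii): t_k - t_{k-1} > min_gap
    gamma_k = -4.0 * math.log(h)     # positive since 0 < h < 1
    return s, min_gap, gamma_k
```

For instance, (a, b, c, r, h) = (-10, 0.5, 0.5, 0.5, 0.5) gives a negative sign condition, a positive minimal gap, and a positive gamma_k.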