1 Introduction

Stochastic differential equations form an emerging field that draws attention from both theoretical and applied disciplines and has been successfully applied to problems in mechanics, electrical engineering, economics, physics, and several other branches of engineering. For details, see [16] and the references therein. Recently, the stability of stochastic differential equations with Markovian switching has received a lot of attention [7-12]. For example, Ji and Chizeck [7] and Mariton [8] studied the stability of the jump linear equation

$$dx(t) = A(r(t))x(t)\,dt,$$

where $x(t)$ takes values in $R^n$ and $r(t)$ is a Markov chain taking values in $S = \{1, 2, \ldots, N\}$. Mao [9] discussed the stability of nonlinear stochastic differential equations with Markovian switching of the form

$$dx(t) = f(x(t), t, r(t))\,dt + g(x(t), t, r(t))\,d\omega(t).$$

In [10], Mao studied the stability of stochastic functional differential equations with Markovian switching of the form

$$dx(t) = f(x_t, t, r(t))\,dt + g(x_t, t, r(t))\,d\omega(t).$$

Impulsive effects are common phenomena caused by instantaneous perturbations at certain moments; such phenomena are described by impulsive differential equations, which have been used efficiently in modelling many practical problems arising in engineering, physics, and science. The theory of impulsive differential equations has therefore also attracted much attention in recent years [13-19]. Correspondingly, many stability results for impulsive stochastic functional differential equations have been obtained [20-26]. However, there are few results on the stability of impulsive stochastic differential equations with Markovian switching. In [27], Wu and Sun established some criteria of p-moment stability for stochastic differential equations with impulsive jump and Markovian switching.

In this article, we extend the Razumikhin method [10, 12] to investigate the p th moment exponential stability of the following stochastic functional differential equations with Markovian switching and delayed impulses:

$$\begin{cases} dx(t) = f(x_t, t, r(t))\,dt + g(x_t, t, r(t))\,d\omega(t), & t \ge 0,\ t \ne t_k,\\ \Delta x(t_k) = I_k(x(t_k), x_{t_k}, t_k, r(t_k)), & k = 1, 2, \ldots,\\ x(t) = \xi, & t \in [-\tau, 0]. \end{cases}$$

The state variables in the impulses depend on a finite delay, which makes these impulsive effects more general than those considered in [20, 22, 23]. Theorems on the p th moment exponential stability are derived both in the case where the impulsive gains satisfy $d_{ik} + \bar{d}_{ik} < 1$ and in the case where $d_{ik} + \bar{d}_{ik} \ge 1$. These new results are then applied to n-dimensional impulsive hybrid stochastic systems with bounded time-varying delay, and useful criteria in terms of an M-matrix (see Berman and Plemmons [28]), which can be verified much more easily, are established. Meanwhile, examples and simulations are provided to show that impulsive effects play an important role in the stability of hybrid stochastic systems. The rest of this article is organized as follows. In Section 2, stochastic functional differential equations with Markovian switching and delayed impulses are introduced, together with the definition of p th moment exponential stability. In Section 3, the Razumikhin-type theorems on p th moment exponential stability for these equations are established. In Section 4, the results are applied to n-dimensional hybrid stochastic delay systems, and the M-matrix method is introduced to verify the stability conditions easily. Finally, examples are given in Section 5 to demonstrate the effectiveness of our results.

2 Preliminaries

Let $R = (-\infty, +\infty)$, $R_+ = [0, +\infty)$, and let $R^n$ denote the n-dimensional Euclidean space with the Euclidean norm $|\cdot|$. If $A$ is a vector or matrix, its transpose is denoted by $A^T$ and its norm by $\|A\| = \sqrt{\lambda_{\max}(A^TA)}$, where $\lambda_{\max}(\cdot)$ is the maximum eigenvalue of a matrix. $\omega(t) = (\omega_1(t), \omega_2(t), \ldots, \omega_m(t))^T$ is an m-dimensional Brownian motion on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., $\mathcal{F}_t = \sigma\{\omega(s): 0 \le s \le t\}$).

Let $\tau > 0$ and $PC([-\tau, 0], R^n) = \{\psi: [-\tau, 0] \to R^n \mid \psi(t^+), \psi(t^-) \text{ exist and } \psi(t^-) = \psi(t)\}$ with the norm $\|\psi\| = \sup_{-\tau\le\theta\le 0}|\psi(\theta)|$, where $\psi(t^+)$ and $\psi(t^-)$ denote the right-hand and left-hand limits of the function $\psi$ at $t$.

Denote by $PC^b_{\mathcal{F}_0}([-\tau, 0]; R^n)$ the family of all bounded, $\mathcal{F}_0$-measurable, $PC([-\tau, 0]; R^n)$-valued random variables. For $p > 0$, denote by $PC^p_{\mathcal{F}_t}([-\tau, 0], R^n)$ the family of all $\mathcal{F}_t$-measurable $PC([-\tau, 0], R^n)$-valued random variables $\psi$ such that $\int_{-\tau}^{0}\mathbb{E}|\psi(\theta)|^p\,d\theta < \infty$. Let $r(t)$, $t \ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N\times N}$ given by

$$P\{r(t+\Delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta), & \text{if } i \ne j,\\ 1 + \gamma_{ii}\Delta + o(\Delta), & \text{if } i = j, \end{cases}$$

where $\Delta > 0$. Here $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$ for $i \ne j$, while

$$\gamma_{ii} = -\sum_{j\ne i}\gamma_{ij}.$$

We assume that the Markov chain r(t) is independent of the Brownian motion ω(t). It is known that almost every sample path of r(t) is a right-continuous step function with a finite number of simple jumps in any finite subinterval of $R_+$.
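For intuition, such a sample path can be generated directly from the generator $\Gamma$ by drawing exponential holding times. The sketch below is ours (not part of the original analysis) and uses 0-indexed states:

```python
import numpy as np

def simulate_chain(Gamma, i0, T, rng=np.random.default_rng(0)):
    """Sample path of a right-continuous Markov chain with generator Gamma on [0, T]."""
    Gamma = np.asarray(Gamma, dtype=float)
    t, i, path = 0.0, i0, [(0.0, i0)]
    while True:
        rate = -Gamma[i, i]                     # holding time in state i is Exp(rate)
        t += np.inf if rate == 0.0 else rng.exponential(1.0 / rate)
        if t >= T:
            return path                         # list of (jump time, new state)
        p = Gamma[i].copy(); p[i] = 0.0         # jump to j != i with probability gamma_ij / rate
        i = int(rng.choice(len(p), p=p / rate))
        path.append((t, i))

# Example: the two-state generator used later in Example 5.1.
print(simulate_chain([[-3.0, 3.0], [1.0, -1.0]], i0=0, T=2.0))
```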

Consider the following impulsive hybrid stochastic functional differential equation of the form

$$\begin{cases} dx(t) = f(x_t, t, r(t))\,dt + g(x_t, t, r(t))\,d\omega(t), & t \ge 0,\ t \ne t_k,\\ \Delta x(t_k) = I_k(x(t_k), x_{t_k}, t_k, r(t_k)), & k = 1, 2, \ldots,\\ x(t) = \xi, & t \in [-\tau, 0], \end{cases}$$
(2.1)

where $\xi \in PC^b_{\mathcal{F}_0}([-\tau, 0]; R^n)$, $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$, $x_t = \{x(t+\theta): -\tau \le \theta \le 0\}$, $x(t_k^+) = \lim_{h\to 0^+}x(t_k+h)$, $x(t_k^-) = \lim_{h\to 0^-}x(t_k+h)$, the $t_k \ge 0$ are impulsive moments satisfying $t_k < t_{k+1}$ and $\lim_{k\to+\infty}t_k = +\infty$, $\Delta x(t_k) = x(t_k^+) - x(t_k)$ represents the jump in the state $x$ at $t_k$ with $I_k$ determining the size of the jump, and $f: PC([-\tau, 0]; R^n)\times R_+\times S \to R^n$, $g: PC([-\tau, 0]; R^n)\times R_+\times S \to R^{n\times m}$, $I_k: R^n\times PC([-\tau, 0]; R^n)\times R_+\times S \to R^n$.

Throughout this article, we assume that $f$, $g$ and $I_k$ satisfy the necessary conditions for the global existence and uniqueness of solutions for all $t \ge 0$. For any $\xi \in PC^b_{\mathcal{F}_0}([-\tau, 0]; R^n)$, there exists a unique stochastic process satisfying Equation (2.1), denoted by $x(t;\xi)$, which is left-continuous with right-hand limits. We also assume that $f(0, t, i) \equiv 0$, $g(0, t, i) \equiv 0$ and $I_k(0, 0, t, i) \equiv 0$, $k = 1, 2, \ldots$, which implies that $x(t) \equiv 0$ is an equilibrium solution.

Let $C^{2,1}(R^n\times[-\tau,\infty)\times S; R_+)$ denote the family of all nonnegative functions $V(x, t, i)$ on $R^n\times[-\tau,\infty)\times S$ which are continuous on $R^n\times(t_{k-1}, t_k]\times S$ and whose derivatives $V_t$, $V_x$, $V_{xx}$ are continuous on $R^n\times(t_{k-1}, t_k]\times S$. For each $V \in C^{2,1}(R^n\times[-\tau,\infty)\times S; R_+)$, we define an operator $\mathcal{L}V: PC^b_{\mathcal{F}_t}([-\tau, 0]; R^n)\times(t_{k-1}, t_k]\times S \to R$ associated with Equation (2.1) as follows:

$$\mathcal{L}V(\phi, t, i) = V_t(\phi(0), t, i) + V_x(\phi(0), t, i)f(\phi, t, i) + \frac{1}{2}\operatorname{trace}\big[g^T(\phi, t, i)V_{xx}(\phi(0), t, i)g(\phi, t, i)\big] + \sum_{j=1}^{N}\gamma_{ij}V(\phi(0), t, j),$$

where

$$V_t(x, t, i) = \frac{\partial V(x, t, i)}{\partial t}, \quad V_x(x, t, i) = \left(\frac{\partial V(x, t, i)}{\partial x_1}, \ldots, \frac{\partial V(x, t, i)}{\partial x_n}\right), \quad V_{xx}(x, t, i) = \left(\frac{\partial^2 V(x, t, i)}{\partial x_i\,\partial x_j}\right)_{n\times n}.$$

Definition 2.1. The zero solution of Equation (2.1) is said to be p th moment exponentially stable if there exists $\eta > 0$ such that for any initial value $\xi \in PC^b_{\mathcal{F}_0}([-\tau, 0]; R^n)$

$$\limsup_{t\to\infty}\frac{1}{t}\log\big(\mathbb{E}|x(t;\xi)|^p\big) \le -\eta.$$

Remark 2.1. When $p = 2$, the zero solution is often said to be exponentially stable in mean square.

3 Stability analysis

In the following, we shall establish some criteria on p th moment exponential stability for Equation (2.1).

Theorem 3.1. Assume that there exist a function $V \in C^{2,1}(R^n\times[-\tau,\infty)\times S; R_+)$ and constants $p > 0$, $c_1 > 0$, $c_2 > 0$, $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik}^2 + \bar{d}_{ik}^2 \ne 0$, $\delta > 0$, $\lambda > 0$, $\eta_i \ge 0$, $i \in S$, $k = 1, 2, \ldots$, such that

(i) $c_1|x|^p \le V(x, t, i) \le c_2|x|^p$ for all $(x, t, i) \in R^n\times[-\tau,\infty)\times S$;

(ii) for all $t \in (t_{k-1}, t_k]$ and $i \in S$, $\mathbb{E}\mathcal{L}V(\phi, t, i) \le \eta_i\,\mathbb{E}V(\phi(0), t, i)$ whenever $\mathbb{E}\big[\min_{1\le i\le N}V(\phi(\theta), t+\theta, i)\big] < qe^{\lambda\tau}\,\mathbb{E}\big[\max_{1\le i\le N}V(\phi(0), t, i)\big]$ for all $\theta \in [-\tau, 0]$;

(iii) for all $i \in S$, $\mathbb{E}V\big(\phi(0) + I_k(\phi(0), \phi(\theta), t_k, i), t_k^+, i\big) \le d_{ik}\,\mathbb{E}V(\phi(0), t_k, i) + \bar{d}_{ik}\sup_{-\tau\le\theta<0}\mathbb{E}V(\phi(\theta), t_k, i)$;

(iv) $\sup_{1\le k<+\infty}\{t_k - t_{k-1}\} \le \delta$;

(v) for any $i \in S$, $\lambda + \eta_i \le \frac{\ln q}{\delta}$,

then the zero solution of Equation (2.1) is p th moment exponentially stable with p th moment exponent $\lambda$, where $\phi = \{\phi(\theta) \mid -\tau\le\theta\le 0\} \in PC^p_{\mathcal{F}_t}([-\tau, 0]; R^n)$ and $q = \frac{1}{\max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\}} > 1$.

Proof. For any $\xi \in PC^b_{\mathcal{F}_0}([-\tau, 0]; R^n)$, we denote the solution of (2.1) by $x(t) = x(t;\xi)$ and extend $r(t) = r(0) = r_0$ for all $t \in [-\tau, 0]$. For $t \in (t_{k-1}, t_k)$, let $\varepsilon > 0$ be small enough that $t + \varepsilon \in (t_{k-1}, t_k)$. By the generalized Itô formula, we have

$$\mathbb{E}V(x(t+\varepsilon), t+\varepsilon, r(t+\varepsilon)) = \mathbb{E}V(x(t), t, r(t)) + \int_t^{t+\varepsilon}\mathbb{E}\mathcal{L}V(x(s), s, r(s))\,ds.$$
(3.1)

Letting $\varepsilon \to 0$, it follows that for $t \in (t_{k-1}, t_k]$

$$D^+\mathbb{E}V(x(t), t, r(t)) = \mathbb{E}\mathcal{L}V(x(t), t, r(t)).$$
(3.2)

Let $W(t) = e^{\lambda t}\mathbb{E}V(x(t), t, i)$; then for $t \in (t_{k-1}, t_k]$

$$D^+W(t) = \lambda e^{\lambda t}\mathbb{E}V(x(t), t, i) + e^{\lambda t}\mathbb{E}\mathcal{L}V(x(t), t, i).$$
(3.3)

From (iii), we have

$$W(t_k^+) = e^{\lambda t_k}\mathbb{E}V(x(t_k^+), t_k^+, i) \le d_{ik}W(t_k) + \bar{d}_{ik}e^{\lambda\tau}\sup_{-\tau\le\theta\le 0}W(t_k+\theta).$$
(3.4)

Taking M > 0 such that

$$\sup_{-\tau\le\theta\le 0}W(\theta) < \frac{M}{q},$$
(3.5)

we can claim that for t ≥ -τ

$$W(t) < M.$$
(3.6)

It is easy to see that W(t) < M for t ∈ [-τ, 0]. Now, we shall prove that

$$W(t) < M, \quad t \in (0, t_1].$$
(3.7)

Otherwise, there exists a t* ∈ (0, t1] such that

$$W(t^*) = M, \quad W(t) < M, \quad -\tau < t < t^*.$$
(3.8)

In view of the continuity of W(t) in [0, t1], there exists a t** ∈ (0, t*) such that

$$W(t^{**}) = \frac{M}{q}, \quad W(t) > \frac{M}{q}, \quad t \in (t^{**}, t^*].$$
(3.9)

For t ∈ [t**, t*], θ ∈ [-τ, 0], we have

$$qW(t) > W(t+\theta).$$
(3.10)

Then we obtain

$$qe^{\lambda\tau}\max_{1\le i\le N}\mathbb{E}V(x(t), t, i) \ge qe^{-\lambda(t-\tau)}W(t) > e^{-\lambda(t-\tau)}W(t+\theta) \ge \min_{1\le i\le N}\mathbb{E}V(x(t+\theta), t+\theta, i).$$
(3.11)

Together with (3.3) and (ii), for $t \in [t^{**}, t^*]$ we have

$$D^+W(t) \le \lambda e^{\lambda t}\mathbb{E}V(x(t), t, i) + \eta_i e^{\lambda t}\mathbb{E}V(x(t), t, i) = (\lambda+\eta_i)e^{\lambda t}\mathbb{E}V(x(t), t, i) = (\lambda+\eta_i)W(t).$$
(3.12)

Thus

$$M = W(t^*) \le W(t^{**})e^{(\lambda+\eta_i)(t^*-t^{**})} = \frac{M}{q}e^{(\lambda+\eta_i)(t^*-t^{**})} < \frac{M}{q}e^{(\lambda+\eta_i)t_1} \le M.$$
(3.13)

This is a contradiction. Hence (3.7) holds. From (iii), we obtain

$$W(t_1^+) \le d_{i1}W(t_1) + \bar{d}_{i1}e^{\lambda\tau}\sup_{-\tau\le\theta\le 0}W(t_1+\theta) < \frac{M}{q} < M.$$
(3.14)

Next, we shall show that

$$W(t) < M, \quad t \in (t_1, t_2].$$
(3.15)

If it does not hold, there exists a $t_1^* \in (t_1, t_2]$ such that

$$W(t_1^*) = M, \quad W(t) < M, \quad t \in [-\tau, t_1^*).$$
(3.16)

In view of the continuity of $W(t)$ in $(t_1, t_2]$, there exists a $t_1^{**} \in (t_1, t_1^*)$ such that

$$W(t_1^{**}) = \frac{M}{q}, \quad W(t) > \frac{M}{q}, \quad t \in (t_1^{**}, t_1^*].$$
(3.17)

For $t \in [t_1^{**}, t_1^*]$, $\theta \in [-\tau, 0]$, we have $qW(t) > W(t+\theta)$. It follows from (3.3) and (ii) that for $t \in [t_1^{**}, t_1^*]$

$$D^+W(t) \le (\lambda+\eta_i)W(t).$$
(3.18)

Then, we have

$$M = W(t_1^*) \le W(t_1^{**})e^{(\lambda+\eta_i)(t_1^*-t_1^{**})} = \frac{M}{q}e^{(\lambda+\eta_i)(t_1^*-t_1^{**})} < \frac{M}{q}e^{(\lambda+\eta_i)(t_2-t_1)} \le M,$$
(3.19)

which is a contradiction. Thus, (3.15) holds. By induction, we can prove that for k = 1, 2, ...

$$W(t) < M, \quad t \in (t_{k-1}, t_k].$$
(3.20)

Therefore, we have for t ≥ -τ

$$W(t) < M.$$
(3.21)

From (i) and the above inequality, we have

$$c_1\mathbb{E}|x(t)|^p \le \mathbb{E}V(x(t), t, i) < Me^{-\lambda t},$$
(3.22)

which implies that

$$\mathbb{E}|x(t)|^p \le \frac{M}{c_1}e^{-\lambda t}.$$
(3.23)

The proof of Theorem 3.1 is complete.

Remark 3.1. In Theorem 3.1, the zero solution of the hybrid stochastic functional differential equation without impulses is allowed to be unstable. In this case, the delayed impulses play the key role in stabilizing the hybrid stochastic equation. This requires that the largest impulse time interval be sufficiently small and that the maximal impulsive gain satisfy $\max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}\} < 1$.

Theorem 3.2. Assume that there exist a function $V \in C^{2,1}(R^n\times[-\tau,\infty)\times S; R_+)$ and constants $p > 0$, $c_1 > 0$, $c_2 > 0$, $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik}^2 + \bar{d}_{ik}^2 \ne 0$, $\gamma_i > 0$, $\mu > 0$, $\lambda > 0$, $i \in S$, $k = 1, 2, \ldots$, such that

(i) $c_1|x|^p \le V(x, t, i) \le c_2|x|^p$ for all $(x, t, i) \in R^n\times[-\tau,\infty)\times S$;

(ii) for all $t \in (t_{k-1}, t_k]$ and $i \in S$, $\mathbb{E}\mathcal{L}V(\phi, t, i) \le -\gamma_i\,\mathbb{E}V(\phi(0), t, i)$ whenever $\mathbb{E}\big[\min_{1\le i\le N}V(\phi(\theta), t+\theta, i)\big] \le qe^{\lambda\tau}\,\mathbb{E}\big[\max_{1\le i\le N}V(\phi(0), t, i)\big]$ for all $\theta \in [-\tau, 0]$;

(iii) for all $i \in S$, $\mathbb{E}V\big(\phi(0) + I_k(\phi(0), \phi(\theta), t_k, i), t_k^+, i\big) \le d_{ik}\,\mathbb{E}V(\phi(0), t_k, i) + \bar{d}_{ik}\sup_{-\tau\le\theta\le 0}\mathbb{E}V(\phi(\theta), t_k, i)$;

(iv) $\inf_{1\le k<+\infty}\{t_k - t_{k-1}\} \ge \mu$;

(v) for any $i \in S$, $\gamma_i - \lambda > \frac{\ln q}{\mu}$,

then the zero solution of Equation (2.1) is p th moment exponentially stable, where $\phi = \{\phi(\theta) \mid -\tau\le\theta\le 0\} \in PC^p_{\mathcal{F}_t}([-\tau, 0]; R^n)$ and $q = \max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\} \ge 1$.

Proof. Since (v) holds, we can choose a sufficiently small $\varepsilon > 0$ such that $\gamma_i - \lambda \ge \frac{\ln(q+\varepsilon)}{\mu}$ for all $i \in S$. Let $W(t) = e^{\lambda t}\mathbb{E}V(x(t), t, i)$; then for $t \in (t_{k-1}, t_k]$

$$D^+W(t) = \lambda e^{\lambda t}\mathbb{E}V(x(t), t, i) + e^{\lambda t}\mathbb{E}\mathcal{L}V(x(t), t, i).$$
(3.24)

From (iii), we have

$$W(t_k^+) = e^{\lambda t_k}\mathbb{E}V(x(t_k^+), t_k^+, i) \le d_{ik}W(t_k) + \bar{d}_{ik}e^{\lambda\tau}\sup_{-\tau\le\theta\le 0}W(t_k+\theta).$$
(3.25)

Taking M > 0 such that

$$\sup_{-\tau\le\theta\le 0}W(\theta) < \frac{M}{q+\varepsilon},$$
(3.26)

we shall show that for t ≥ -τ

$$W(t) < M.$$
(3.27)

It is easy to see that W(t) < M for t ∈ [−τ, 0]. Now, we shall prove that

$$W(t) < M, \quad t \in (0, t_1].$$
(3.28)

If it does not hold, there exists a t* ∈ (0, t1] such that

$$W(t^*) = M, \quad W(t) < M, \quad -\tau < t < t^*.$$
(3.29)

In view of the continuity of W(t) in [0, t1], there exists a t** ∈ [0, t*) such that

$$W(t^{**}) = \frac{M}{q+\varepsilon}, \quad W(t) > \frac{M}{q+\varepsilon}, \quad t \in (t^{**}, t^*].$$
(3.30)

For $t \in [t^{**}, t^*]$, $\theta \in [-\tau, 0]$,

$$(q+\varepsilon)W(t) > W(t+\theta).$$
(3.31)

Then

$$(q+\varepsilon)e^{\lambda\tau}\max_{1\le i\le N}\mathbb{E}V(x(t), t, i) \ge \min_{1\le i\le N}\mathbb{E}V(x(t+\theta), t+\theta, i).$$
(3.32)

Together with (3.24) and (ii), for $t \in [t^{**}, t^*]$ we have

$$D^+W(t) \le \lambda e^{\lambda t}\mathbb{E}V(x(t), t, i) + e^{\lambda t}\mathbb{E}\mathcal{L}V(x(t), t, i) \le (\lambda-\gamma_i)W(t) \le 0.$$
(3.33)

Thus

$$M = W(t^*) \le W(t^{**}) = \frac{M}{q+\varepsilon} < M.$$
(3.34)

This is a contradiction. Next, we shall show that

$$W(t_1) \le \frac{M}{q+\varepsilon}.$$
(3.35)

If it does not hold, we have

$$W(t_1) > \frac{M}{q+\varepsilon}.$$
(3.36)

By the continuity of $W(t)$ in $[0, t_1]$, there exists a $\bar{t} \in [0, t_1)$ such that

$$W(\bar{t}) = \frac{M}{q+\varepsilon}, \quad W(t) > \frac{M}{q+\varepsilon}, \quad t \in (\bar{t}, t_1].$$
(3.37)

For $t \in [\bar{t}, t_1]$, $\theta \in [-\tau, 0]$, we have

$$(q+\varepsilon)W(t) > W(t+\theta).$$
(3.38)

By (3.24) and (ii), for $t \in [\bar{t}, t_1]$ we have

$$D^+W(t) \le (\lambda-\gamma_i)W(t) \le 0.$$
(3.39)

Thus

$$W(t_1) \le W(\bar{t}) = \frac{M}{q+\varepsilon},$$
(3.40)

which is a contradiction. It follows from (3.25), (3.35) and (3.28) that

$$W(t_1^+) \le d_{i1}W(t_1) + \bar{d}_{i1}e^{\lambda\tau}\sup_{-\tau\le\theta\le 0}W(t_1+\theta) \le (d_{i1} + \bar{d}_{i1}e^{\lambda\tau})\frac{M}{q+\varepsilon} < M.$$
(3.41)

Furthermore, we can prove that

$$W(t) < M, \quad t \in (t_1, t_2].$$
(3.42)

If not, there exists a $\bar{t}_1 \in (t_1, t_2]$ such that

$$W(\bar{t}_1) = M, \quad W(t) < M, \quad t \in [-\tau, \bar{t}_1).$$
(3.43)

If $W(t) > \frac{M}{q+\varepsilon}$ for all $t \in (t_1, \bar{t}_1)$, then $(q+\varepsilon)W(t) > W(t+\theta)$ for $t \in (t_1, \bar{t}_1)$, $\theta \in [-\tau, 0]$. Thus, by (3.24) and (ii), we have $D^+W(t) \le (\lambda-\gamma_i)W(t) \le 0$ for $t \in (t_1, \bar{t}_1)$. It follows that

$$W(\bar{t}_1) \le W(t_1^+) < M.$$
(3.44)

This is a contradiction. If instead there exists a $\bar{\bar{t}}_1 \in (t_1, \bar{t}_1)$ such that

$$W(\bar{\bar{t}}_1) \le \frac{M}{q+\varepsilon},$$
(3.45)

then, in view of the continuity of $W(t)$ in $(t_1, t_2]$, there exists a $\hat{t}_1 \in [\bar{\bar{t}}_1, \bar{t}_1)$ such that

$$W(\hat{t}_1) = \frac{M}{q+\varepsilon}, \quad W(t) > \frac{M}{q+\varepsilon}, \quad t \in (\hat{t}_1, \bar{t}_1].$$
(3.46)

Then for $t \in [\hat{t}_1, \bar{t}_1]$, $\theta \in [-\tau, 0]$, $(q+\varepsilon)W(t) > W(t+\theta)$. Thus, by (3.24) and (ii), $D^+W(t) \le (\lambda-\gamma_i)W(t) \le 0$ for $t \in [\hat{t}_1, \bar{t}_1]$. It follows that

$$W(\bar{t}_1) \le W(\hat{t}_1) = \frac{M}{q+\varepsilon} < M,$$
(3.47)

which leads to a contradiction. Moreover, we can conclude that $W(t_2) \le \frac{M}{q+\varepsilon}$. If this does not hold, we have $W(t_2) > \frac{M}{q+\varepsilon}$. To prove the conclusion, two cases are to be considered.

Case (i). $W(t) > \frac{M}{q+\varepsilon}$ for all $t \in (t_1, t_2]$. From (3.26), (3.28) and (3.43), we have for $t \in (t_1, t_2]$, $\theta \in [-\tau, 0]$

$$(q+\varepsilon)W(t) > W(t+\theta).$$
(3.48)

Then by (3.24) and (ii), for t ∈ (t1, t2]

$$D^+W(t) \le (\lambda-\gamma_i)W(t).$$
(3.49)

Thus

$$W(t_2) \le W(t_1^+)e^{(\lambda-\gamma_i)(t_2-t_1)} < Me^{(\lambda-\gamma_i)(t_2-t_1)} \le \frac{M}{q+\varepsilon},$$
(3.50)

which leads to a contradiction.

Case (ii). There exists a $\tilde{t}_1 \in (t_1, t_2]$ such that $W(\tilde{t}_1) \le \frac{M}{q+\varepsilon}$. Since $W(t_2) > \frac{M}{q+\varepsilon}$, in view of the continuity of $W(t)$ in $(t_1, t_2]$ there exists a $\tilde{t}_1^* \in [\tilde{t}_1, t_2)$ such that

$$W(\tilde{t}_1^*) = \frac{M}{q+\varepsilon}, \quad W(t) > \frac{M}{q+\varepsilon}, \quad t \in (\tilde{t}_1^*, t_2].$$
(3.51)

For $t \in (\tilde{t}_1^*, t_2]$, $\theta \in [-\tau, 0]$, we have

$$(q+\varepsilon)W(t) > W(t+\theta).$$
(3.52)

It follows from (3.24) and (ii) that for $t \in (\tilde{t}_1^*, t_2]$

$$D^+W(t) \le (\lambda-\gamma_i)W(t) \le 0.$$
(3.53)

Thus

$$W(t_2) \le W(\tilde{t}_1^*) = \frac{M}{q+\varepsilon}.$$
(3.54)

This is a contradiction. By induction, we can prove that for k = 1, 2, ...

$$W(t) < M, \quad t \in (t_{k-1}, t_k].$$
(3.55)

Therefore, we have for t ≥ −τ

$$W(t) < M.$$
(3.56)

As in the proof of Theorem 3.1, condition (i) then yields $\mathbb{E}|x(t)|^p \le \frac{M}{c_1}e^{-\lambda t}$. The proof of Theorem 3.2 is complete.

Remark 3.2. In Theorem 3.2, the maximal impulsive gain is permitted to satisfy $\max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}\} \ge 1$. This means that a hybrid stochastic equation which is exponentially stable without impulses remains exponentially stable under delayed impulses. In this case, it is required that the minimal impulse time interval be sufficiently large so that the hybrid stochastic differential equation with delayed impulses keeps its stability property.

4 Some consequences

In the following, we shall apply the above new results to a class of linear impulsive hybrid stochastic systems by using Lyapunov functions and the M-matrix method.

Consider the following n-dimensional impulsive hybrid stochastic delay differential equation

$$\begin{cases} dx(t) = \big[A(r(t))x(t) + B(r(t))x(t-\tau(t))\big]\,dt + \big[C(r(t))x(t) + D(r(t))x(t-\tau(t))\big]\,d\omega(t), & t \ge 0,\ t \ne t_k,\\ \Delta x(t_k) = I_k\big(x(t_k), x(t_k-\tau(t_k)), t_k, r(t_k)\big), & k = 1, 2, \ldots,\\ x(t) = \xi, & t \in [-\tau, 0], \end{cases}$$
(4.1)

where $0 \le \tau(t) \le \tau$ is continuous and $\tau$ is a positive constant. For convenience, for $r(t) = i$ we write $A(r(t)) = A_i$, $B(r(t)) = B_i$, $C(r(t)) = C_i$, $D(r(t)) = D_i$, where $A_i = (a_{uv}(i))_{n\times n}$, $B_i = (b_{uv}(i))_{n\times n}$, $C_i = (c_{uv}(i))_{n\times n}$, $D_i = (d_{uv}(i))_{n\times n}$.

Theorem 4.1. Assume that there exist symmetric positive definite matrices $Q_i$ and constants $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik}^2 + \bar{d}_{ik}^2 \ne 0$, $\delta > 0$, $\lambda > 0$, $\rho_i > 0$, $\sigma_i > 0$, $\bar{\eta}_i \ge 0$, $\eta_i$, $i \in S$, $k = 1, 2, \ldots$, such that

(i) for all $i \in S$,

$$\Gamma_i = Q_iA_i + A_i^TQ_i + \rho_iQ_i^2 + C_i^TQ_iC_i + \sigma_iC_i^TC_i + \sum_{j=1}^{N}\gamma_{ij}Q_j - \eta_iQ_i \le 0$$
(4.2)

and

$$\bar{\Gamma}_i = \rho_i^{-1}B_i^TB_i + \sigma_i^{-1}D_i^TQ_i^2D_i + D_i^TQ_iD_i - \bar{\eta}_iQ_i \le 0,$$
(4.3)

where $\Gamma_i \le 0$, $\bar{\Gamma}_i \le 0$ mean that the matrices $\Gamma_i$, $\bar{\Gamma}_i$ are negative semi-definite;

(ii) for all $i \in S$,

$$\mathbb{E}\big[x(t_k) + I_k(x(t_k), x(t_k-\tau(t_k)), t_k, i)\big]^TQ_i\big[x(t_k) + I_k(x(t_k), x(t_k-\tau(t_k)), t_k, i)\big] \le d_{ik}\,\mathbb{E}\,x^T(t_k)Q_ix(t_k) + \bar{d}_{ik}\,\mathbb{E}\,x^T(t_k-\tau(t_k))Q_ix(t_k-\tau(t_k));$$
(4.4)

(iii) $\sup_{1\le k<+\infty}\{t_k - t_{k-1}\} \le \delta$;

(iv) for all $i \in S$, $\eta_i + \lambda + \bar{\eta}_i\frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2} \le \frac{\ln q}{\delta}$,

then the zero solution of Equation (4.1) is exponentially stable in the mean square, where $\alpha_1 = \min_{1\le i\le N}\lambda_{\min}(Q_i)$, $\alpha_2 = \max_{1\le i\le N}\lambda_{\max}(Q_i)$, and $q = \frac{1}{\max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\}} > 1$.

Proof. We define $V \in C^{2,1}(R^n\times[-\tau,\infty)\times S; R_+)$ by $V(x, t, i) = x^T(t)Q_ix(t)$. Clearly

$$\alpha_1|x|^2 \le V(x, t, i) \le \alpha_2|x|^2.$$
(4.5)

Then for t ∈ (tk−1, t k ]

$$\begin{aligned}\mathcal{L}V(\phi, t, i) &= 2\phi^T(0)Q_i\big[A_i\phi(0) + B_i\phi(-\tau(t))\big] + \big[C_i\phi(0) + D_i\phi(-\tau(t))\big]^TQ_i\big[C_i\phi(0) + D_i\phi(-\tau(t))\big] + \sum_{j=1}^{N}\gamma_{ij}\phi^T(0)Q_j\phi(0)\\ &= \phi^T(0)\big(Q_iA_i + A_i^TQ_i\big)\phi(0) + 2\phi^T(0)Q_iB_i\phi(-\tau(t)) + \phi^T(0)C_i^TQ_iC_i\phi(0) + \phi^T(0)C_i^TQ_iD_i\phi(-\tau(t))\\ &\quad + \phi^T(-\tau(t))D_i^TQ_iC_i\phi(0) + \phi^T(-\tau(t))D_i^TQ_iD_i\phi(-\tau(t)) + \sum_{j=1}^{N}\gamma_{ij}\phi^T(0)Q_j\phi(0).\end{aligned}$$
(4.6)

Note that for any vectors $x, y \in R^n$ and any scalar $\varepsilon > 0$, the following inequality holds:

$$2x^Ty \le \varepsilon x^Tx + \varepsilon^{-1}y^Ty,$$
(4.7)
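For completeness, (4.7) is just the completion-of-squares (Young-type) estimate:

$$0 \le \big(\sqrt{\varepsilon}\,x - \tfrac{1}{\sqrt{\varepsilon}}\,y\big)^T\big(\sqrt{\varepsilon}\,x - \tfrac{1}{\sqrt{\varepsilon}}\,y\big) = \varepsilon x^Tx - 2x^Ty + \varepsilon^{-1}y^Ty.$$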

Then it follows that

$$2\phi^T(0)Q_iB_i\phi(-\tau(t)) \le \rho_i\phi^T(0)Q_i^2\phi(0) + \rho_i^{-1}\phi^T(-\tau(t))B_i^TB_i\phi(-\tau(t))$$
(4.8)

and

$$\phi^T(0)C_i^TQ_iD_i\phi(-\tau(t)) + \phi^T(-\tau(t))D_i^TQ_iC_i\phi(0) \le \sigma_i\phi^T(0)C_i^TC_i\phi(0) + \sigma_i^{-1}\phi^T(-\tau(t))D_i^TQ_i^2D_i\phi(-\tau(t)).$$
(4.9)

Substituting (4.8) and (4.9) into (4.6) we can derive that

$$\begin{aligned}\mathcal{L}V(\phi, t, i) &\le \phi^T(0)\Big[Q_iA_i + A_i^TQ_i + \rho_iQ_i^2 + C_i^TQ_iC_i + \sigma_iC_i^TC_i + \sum_{j=1}^{N}\gamma_{ij}Q_j - \eta_iQ_i\Big]\phi(0)\\ &\quad + \phi^T(-\tau(t))\big[\rho_i^{-1}B_i^TB_i + \sigma_i^{-1}D_i^TQ_i^2D_i + D_i^TQ_iD_i - \bar{\eta}_iQ_i\big]\phi(-\tau(t))\\ &\quad + \eta_i\phi^T(0)Q_i\phi(0) + \bar{\eta}_i\phi^T(-\tau(t))Q_i\phi(-\tau(t))\\ &\le \eta_i\phi^T(0)Q_i\phi(0) + \bar{\eta}_i\phi^T(-\tau(t))Q_i\phi(-\tau(t)).\end{aligned}$$
(4.10)

Then

$$\mathbb{E}\mathcal{L}V(\phi, t, i) \le \eta_i\,\mathbb{E}V(\phi(0), t, i) + \bar{\eta}_i\alpha_2\,\mathbb{E}|\phi(-\tau(t))|^2.$$
(4.11)

Next, if $\mathbb{E}\big[\min_{1\le i\le N}V(\phi(\theta), t+\theta, i)\big] < qe^{\lambda\tau}\,\mathbb{E}\big[\max_{1\le i\le N}V(\phi(0), t, i)\big]$ for $\theta \in [-\tau, 0]$, then

$$\mathbb{E}|\phi(\theta)|^2 < \frac{q\alpha_2e^{\lambda\tau}}{\alpha_1}\mathbb{E}|\phi(0)|^2, \quad \theta \in [-\tau, 0].$$
(4.12)

Thus

$$\mathbb{E}\mathcal{L}V(\phi, t, i) \le \eta_i\,\mathbb{E}V(\phi(0), t, i) + \bar{\eta}_i\frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1}\mathbb{E}|\phi(0)|^2 \le \Big(\eta_i + \bar{\eta}_i\frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\Big)\mathbb{E}V(\phi(0), t, i).$$
(4.13)

For t = t k , it follows from (ii) that

$$\mathbb{E}V\big(\phi(0) + I_k(\phi(0), \phi(-\tau(t_k)), t_k, i), t_k^+, i\big) \le d_{ik}\,\mathbb{E}V(\phi(0), t_k, i) + \bar{d}_{ik}\,\mathbb{E}V(\phi(-\tau(t_k)), t_k - \tau(t_k), i).$$
(4.14)

Consequently, the conclusions follow from Theorem 3.1. This completes the proof.

Theorem 4.2. Assume that there exist symmetric positive definite matrices $Q_i$ and constants $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik}^2 + \bar{d}_{ik}^2 \ne 0$, $\varepsilon_i > 0$, $\kappa_i > 0$, $\zeta_i > 0$, $\bar{\zeta}_i \ge 0$, $\mu > 0$, $\lambda > 0$, $i \in S$, $k = 1, 2, \ldots$, such that (4.4) holds and the following conditions hold:

(i) for all $i \in S$,

$$\Theta_i = Q_iA_i + A_i^TQ_i + \varepsilon_iQ_i^2 + C_i^TQ_iC_i + \kappa_iC_i^TC_i + \sum_{j=1}^{N}\gamma_{ij}Q_j + \zeta_iQ_i \le 0$$
(4.15)

and

$$\bar{\Theta}_i = \varepsilon_i^{-1}B_i^TB_i + \kappa_i^{-1}D_i^TQ_i^2D_i + D_i^TQ_iD_i - \bar{\zeta}_iQ_i \le 0;$$
(4.16)

(ii) $\inf_{1\le k<+\infty}\{t_k - t_{k-1}\} \ge \mu$;

(iii) for all $i \in S$, $\zeta_i - \bar{\zeta}_i\frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2} - \lambda > \frac{\ln q}{\mu}$,

then the zero solution of Equation (4.1) is exponentially stable in the mean square, where $q = \max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\} \ge 1$.

Proof. By (iii), we can choose a sufficiently small $\varepsilon > 0$ such that $\zeta_i - \bar{\zeta}_i\frac{(q+\varepsilon)\alpha_2^2e^{\lambda\tau}}{\alpha_1^2} - \lambda > \frac{\ln q}{\mu}$ for all $i \in S$. We define $V(x, t, i) = x^T(t)Q_ix(t)$. Similar to the proof of Theorem 4.1, we get for $t \in (t_{k-1}, t_k]$

$$\mathbb{E}\mathcal{L}V(\phi, t, i) \le -\zeta_i\,\mathbb{E}V(\phi(0), t, i) + \bar{\zeta}_i\alpha_2\,\mathbb{E}|\phi(-\tau(t))|^2.$$
(4.17)

If $\mathbb{E}\big[\min_{1\le i\le N}V(\phi(\theta), t+\theta, i)\big] \le (q+\varepsilon)e^{\lambda\tau}\,\mathbb{E}\big[\max_{1\le i\le N}V(\phi(0), t, i)\big]$ for $\theta \in [-\tau, 0]$, we have

$$\mathbb{E}|\phi(\theta)|^2 \le \frac{(q+\varepsilon)\alpha_2e^{\lambda\tau}}{\alpha_1}\mathbb{E}|\phi(0)|^2.$$
(4.18)

Thus

$$\mathbb{E}\mathcal{L}V(\phi, t, i) \le -\Big(\zeta_i - \bar{\zeta}_i\frac{(q+\varepsilon)\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\Big)\mathbb{E}V(\phi(0), t, i).$$
(4.19)

Therefore, the conclusions follow from Theorem 3.2.

In the following, we shall establish tractable exponential stability conditions. To this end, we take

$$\begin{aligned}&Q_i = r_iI, \quad \rho_i = \varepsilon_i = \frac{\|B_i\|}{r_i}, \quad \sigma_i = \kappa_i = \frac{r_i\|D_i\|}{\|C_i\|},\\ &\eta_i = \frac{1}{r_i}\Big[\lambda_{\max}\Big(r_iA_i + r_iA_i^T + \sum_{j=1}^{N}\gamma_{ij}r_jI\Big) + r_i\|B_i\| + r_i\|C_i\|^2 + r_i\|C_i\|\,\|D_i\|\Big],\\ &\zeta_i = -\frac{1}{r_i}\Big[\lambda_{\max}\Big(r_iA_i + r_iA_i^T + \sum_{j=1}^{N}\gamma_{ij}r_jI\Big) + r_i\|B_i\| + r_i\|C_i\|^2 + r_i\|C_i\|\,\|D_i\|\Big],\\ &\bar{\eta}_i = \bar{\zeta}_i = \|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2.\end{aligned}$$

Corollary 4.1. Assume that there exist constants $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik} + \bar{d}_{ik} \ne 0$, $\delta > 0$, $\lambda > 0$, $r_i > 0$, $i \in S$, $k = 1, 2, \ldots$, such that

(i) for all $i \in S$,

$$\mathbb{E}\big[x(t_k) + I_k(x(t_k), x(t_k-\tau(t_k)), t_k, i)\big]^T\big[x(t_k) + I_k(x(t_k), x(t_k-\tau(t_k)), t_k, i)\big] \le d_{ik}\,\mathbb{E}\,x^T(t_k)x(t_k) + \bar{d}_{ik}\,\mathbb{E}\,x^T(t_k-\tau(t_k))x(t_k-\tau(t_k));$$
(4.20)

(ii) $\sup_{1\le k<+\infty}\{t_k - t_{k-1}\} \le \delta$;

(iii) for all $i \in S$,

$$\frac{1}{r_i}\Big[\lambda_{\max}\Big(r_iA_i + r_iA_i^T + \sum_{j=1}^{N}\gamma_{ij}r_jI\Big) + r_i\|B_i\| + r_i\|C_i\|^2 + r_i\|C_i\|\,\|D_i\|\Big] + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big) \le \frac{\ln q}{\delta},$$
(4.21)

then the zero solution of Equation (4.1) is exponentially stable in the mean square, where $\alpha_1 = \min_{1\le i\le N}r_i$, $\alpha_2 = \max_{1\le i\le N}r_i$, and $q = \frac{1}{\max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\}} > 1$.

Corollary 4.2. Assume that there exist constants $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik} + \bar{d}_{ik} \ne 0$, $\mu > 0$, $\lambda > 0$, $r_i > 0$, $i \in S$, $k = 1, 2, \ldots$, such that (4.20) holds and the following conditions are satisfied:

(i) $\inf_{1\le k<+\infty}\{t_k - t_{k-1}\} \ge \mu$;

(ii) for all $i \in S$,

$$\frac{1}{r_i}\Big[\lambda_{\max}\Big(r_iA_i + r_iA_i^T + \sum_{j=1}^{N}\gamma_{ij}r_jI\Big) + r_i\|B_i\| + r_i\|C_i\|^2 + r_i\|C_i\|\,\|D_i\|\Big] + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big) + \frac{\ln q}{\mu} < 0,$$
(4.22)

then the zero solution of Equation (4.1) is exponentially stable in the mean square, where $q = \max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\} \ge 1$.

Next, we apply Corollaries 4.1 and 4.2 to establish some very useful criteria in terms of M-matrices which can be verified much more easily. If $A$ is a vector or matrix, by $A \gg 0$ we mean that all elements of $A$ are positive. If $A_1$ and $A_2$ are vectors or matrices with the same dimensions, we write $A_1 \gg A_2$ if and only if $A_1 - A_2 \gg 0$. Moreover, we also adopt the traditional notation

$$Z^{N\times N} = \big\{A = (a_{ij})_{N\times N} \mid a_{ij} \le 0,\ i \ne j\big\}.$$
(4.23)

Definition 4.1. (see [12, 28]) A square matrix $A = (a_{ij})_{N\times N}$ is called a nonsingular M-matrix if $A$ can be expressed in the form $A = sI - B$ with $s > \rho(B)$, where all the elements of $B$ are nonnegative, $I$ is the identity matrix, and $\rho(B)$ is the spectral radius of $B$.

Remark 4.1. If $A$ is a nonsingular M-matrix, then its off-diagonal entries are nonpositive and its diagonal entries are positive, that is,

$$a_{ii} > 0, \quad \text{while } a_{ij} \le 0,\ i \ne j.$$
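In the examples below, the M-matrix property is checked numerically. One convenient equivalent test from Berman and Plemmons [28] is that a matrix with nonpositive off-diagonal entries is a nonsingular M-matrix if and only if it is invertible with an entrywise nonnegative inverse; a minimal sketch of such a check (our own helper, not part of the paper) is:

```python
import numpy as np

def is_nonsingular_M_matrix(A, tol=1e-12):
    """Nonpositive off-diagonal entries + entrywise nonnegative inverse."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):          # must be a Z-matrix
        return False
    try:
        inv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(inv >= -tol))    # inverse-positivity characterization

# Example: the matrix Xi - Gamma appearing in Example 5.2 below.
print(is_nonsingular_M_matrix([[37.1065, -2.0], [-1.0, 36.2939]]))  # True
```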

Corollary 4.3. Assume that there exist constants $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik} + \bar{d}_{ik} \ne 0$, $i \in S$, $k = 1, 2, \ldots$, such that (4.20) holds. If there exists $\lambda > 0$ such that $\Xi - \Gamma$ is a nonsingular M-matrix and for any $i \in S$

$$r_i\Big[\lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big)\Big] \le 1,$$
(4.24)

then the zero solution of Equation (4.1) is exponentially stable in the mean square, where Ξ is the diagonal matrix

$$\Xi = \operatorname{diag}\Big(-\lambda_{\max}(A_1 + A_1^T) - \|B_1\| - \|C_1\|^2 - \|C_1\|\,\|D_1\| + \frac{\ln q}{\delta},\ \ldots,\ -\lambda_{\max}(A_N + A_N^T) - \|B_N\| - \|C_N\|^2 - \|C_N\|\,\|D_N\| + \frac{\ln q}{\delta}\Big),$$
$$(r_1, \ldots, r_N)^T = (\Xi - \Gamma)^{-1}\vec{1} \gg 0, \quad \vec{1} = (1, \ldots, 1)^T, \quad \alpha_1 = \min_{1\le i\le N}r_i, \quad \alpha_2 = \max_{1\le i\le N}r_i, \quad q = \frac{1}{\max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\}} > 1.$$

Proof. Since $\Xi - \Gamma$ is a nonsingular M-matrix, all the elements of $(\Xi - \Gamma)^{-1}$ are nonnegative, so $(r_1, \ldots, r_N)^T = (\Xi - \Gamma)^{-1}\vec{1} \gg 0$, namely all $r_i$ are positive. Note that

$$(-\Xi + \Gamma)r = -\vec{1},$$
(4.25)

that is

$$\lambda_{\max}(A_i + A_i^T)r_i + \sum_{j=1}^{N}\gamma_{ij}r_j + \big(\|B_i\| + \|C_i\|^2 + \|C_i\|\,\|D_i\|\big)r_i - \frac{\ln q}{\delta}r_i = -1.$$
(4.26)

Then we have, for $i \in S$,

$$\begin{aligned}&\frac{1}{r_i}\Big[\lambda_{\max}\Big(r_iA_i + r_iA_i^T + \sum_{j=1}^{N}\gamma_{ij}r_jI\Big) + r_i\|B_i\| + r_i\|C_i\|^2 + r_i\|C_i\|\,\|D_i\| - r_i\frac{\ln q}{\delta}\Big] + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big)\\ &\quad = \frac{1}{r_i}\Big[\lambda_{\max}(A_i + A_i^T)r_i + \sum_{j=1}^{N}\gamma_{ij}r_j + r_i\|B_i\| + r_i\|C_i\|^2 + r_i\|C_i\|\,\|D_i\| - r_i\frac{\ln q}{\delta}\Big] + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big)\\ &\quad = -\frac{1}{r_i} + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big) \le 0.\end{aligned}$$
(4.27)

The required conclusion follows from Corollary 4.1.

Corollary 4.4. Assume that there exist constants $d_{ik} \ge 0$, $\bar{d}_{ik} \ge 0$, $d_{ik} + \bar{d}_{ik} \ne 0$, $i \in S$, $k = 1, 2, \ldots$, such that (4.20) holds. If there exists $\lambda > 0$ such that $\bar{\Xi} - \Gamma$ is a nonsingular M-matrix and for any $i \in S$

$$r_i\Big[\lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big)\Big] < 1,$$
(4.28)

then the zero solution of Equation (4.1) is exponentially stable in the mean square, where $\bar{\Xi}$ is the diagonal matrix denoted by

$$\bar{\Xi} = \operatorname{diag}\Big(-\lambda_{\max}(A_1 + A_1^T) - \|B_1\| - \|C_1\|^2 - \|C_1\|\,\|D_1\| - \frac{\ln q}{\mu},\ \ldots,\ -\lambda_{\max}(A_N + A_N^T) - \|B_N\| - \|C_N\|^2 - \|C_N\|\,\|D_N\| - \frac{\ln q}{\mu}\Big),$$
$$(r_1, \ldots, r_N)^T = (\bar{\Xi} - \Gamma)^{-1}\vec{1} \gg 0, \quad \alpha_1 = \min_{1\le i\le N}r_i, \quad \alpha_2 = \max_{1\le i\le N}r_i, \quad q = \max_{i\in S,\,1\le k<+\infty}\{d_{ik} + \bar{d}_{ik}e^{\lambda\tau}\} \ge 1.$$

5 Examples and numerical simulations

In this section, two examples are provided to illustrate our results.

Example 5.1. Let ω(t) be a scalar Brownian motion. Let r(t), t ≥ 0 be a right-continuous Markov chain taking values in S = {1, 2} with the generator

$$\Gamma = \begin{pmatrix} -3 & 3\\ 1 & -1 \end{pmatrix}.$$

Consider the following scalar hybrid impulsive stochastic delay system

$$\begin{cases} dx(t) = \big[A(r(t))x(t) + B(r(t))x(t-1)\big]\,dt + \big[C(r(t))x(t) + D(r(t))x(t-1)\big]\,d\omega(t), & t \ge 0,\ t \ne t_k,\\ \Delta x(t_k) = -0.6x(t_k) + 0.4x(t_k-1), & k = 1, 2, \ldots, \end{cases}$$
(5.1)

where $t_k = 0.003k$, $\tau(t) = 1$, $A_1 = -1$, $A_2 = -2$, $B_1 = 1$, $B_2 = 2$, $C_1 = 1$, $C_2 = 2$, $D_1 = 2$, $D_2 = 3$, $d_{1k} = d_{2k} = 0.32$, $\bar{d}_{1k} = \bar{d}_{2k} = 0.32$. Taking $r_1 = r_2 = 1$, we see that there exists $\lambda > 0$ such that

$$0.32 + 0.32e^{\lambda} < 1,$$
$$\frac{1}{r_1}\Big[\lambda_{\max}\Big(r_1A_1 + r_1A_1^T + \sum_{j=1}^{2}\gamma_{1j}r_jI\Big) + r_1\|B_1\| + r_1\|C_1\|^2 + r_1\|C_1\|\,\|D_1\|\Big] + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_1\| + \|C_1\|\,\|D_1\| + \|D_1\|^2\big) = 2 + \lambda + \frac{7e^{\lambda}}{0.32 + 0.32e^{\lambda}} \le \frac{-\ln(0.32 + 0.32e^{\lambda})}{0.003}$$

and

$$\frac{1}{r_2}\Big[\lambda_{\max}\Big(r_2A_2 + r_2A_2^T + \sum_{j=1}^{2}\gamma_{2j}r_jI\Big) + r_2\|B_2\| + r_2\|C_2\|^2 + r_2\|C_2\|\,\|D_2\|\Big] + \lambda + \frac{q\alpha_2^2e^{\lambda\tau}}{\alpha_1^2}\big(\|B_2\| + \|C_2\|\,\|D_2\| + \|D_2\|^2\big) = 8 + \lambda + \frac{17e^{\lambda}}{0.32 + 0.32e^{\lambda}} \le \frac{-\ln(0.32 + 0.32e^{\lambda})}{0.003}.$$

By Corollary 4.1, the zero solution of Equation (5.1) is exponentially stable in the mean square. Figure 1 depicts the state $x(t)$ of Equation (5.1), which is exponentially stable in the mean square. Figure 2 depicts $x(t)$ of Equation (5.1) without delayed impulses.
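The existence of such a $\lambda$ can also be confirmed numerically. The short script below (our own check; variable names are not from the paper) scans a grid of $\lambda$ values and tests condition (4.21) for both modes:

```python
import numpy as np

# Data of Example 5.1 (scalar system, two Markov states).
A = {1: -1.0, 2: -2.0}; B = {1: 1.0, 2: 2.0}
C = {1: 1.0, 2: 2.0};  D = {1: 2.0, 2: 3.0}
gamma = {1: (-3.0, 3.0), 2: (1.0, -1.0)}      # rows of the generator
r = {1: 1.0, 2: 1.0}; delta = 0.003; d = dbar = 0.32; tau = 1.0

def condition_421(lam):
    qinv = d + dbar * np.exp(lam * tau)       # 1/q; we need q > 1
    if qinv >= 1.0:
        return False
    q = 1.0 / qinv
    a1, a2 = min(r.values()), max(r.values())
    for i in (1, 2):
        lmax = 2.0 * r[i] * A[i] + sum(gamma[i][j - 1] * r[j] for j in (1, 2))
        lhs = (lmax + r[i] * B[i] + r[i] * C[i] ** 2 + r[i] * C[i] * D[i]) / r[i] \
              + lam + q * a2 ** 2 * np.exp(lam * tau) / a1 ** 2 * (B[i] + C[i] * D[i] + D[i] ** 2)
        if lhs > np.log(q) / delta:
            return False
    return True

print([round(lam, 2) for lam in np.arange(0.05, 0.75, 0.05) if condition_421(lam)])
```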

Figure 1. Trajectory of the state $x(t)$ of Equation (5.1).

Figure 2. Trajectory of the state $x(t)$ of Equation (5.1) without delayed impulses.

Remark 5.1. As Figures 1 and 2 show, although the hybrid stochastic delay system without impulses may be exponentially unstable in the mean square, adding delayed impulses may render it exponentially stable in the mean square, which implies that impulses may change the stability behavior of a system.
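Trajectories such as those in Figures 1 and 2 can be reproduced with a simple Euler-Maruyama scheme combined with the switching chain and the delayed impulses. The sketch below is ours (step sizes and the single-path output are illustrative choices, not prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
h, tau, T = 0.001, 1.0, 3.0
n_hist = int(tau / h)                     # delay buffer length
A = {1: -1.0, 2: -2.0}; B = {1: 1.0, 2: 2.0}
C = {1: 1.0, 2: 2.0};  D = {1: 2.0, 2: 3.0}
Gamma = np.array([[-3.0, 3.0], [1.0, -1.0]])
impulse_every = 3                         # t_k = 0.003k, i.e. every 3 steps of size h

hist = np.ones(n_hist + 1)                # constant initial segment on [-tau, 0]
x, state = [1.0], 1
for k in range(1, int(T / h) + 1):
    xd, xc = hist[0], x[-1]               # x(t - tau) and x(t)
    dw = rng.normal(0.0, np.sqrt(h))
    xn = xc + (A[state] * xc + B[state] * xd) * h + (C[state] * xc + D[state] * xd) * dw
    if k % impulse_every == 0:            # delayed impulse: x -> x - 0.6x + 0.4x(t - 1)
        xn += -0.6 * xn + 0.4 * xd
    if rng.random() < -Gamma[state - 1, state - 1] * h:
        state = 2 if state == 1 else 1    # switch to the other mode
    x.append(xn)
    hist = np.append(hist[1:], xn)

print("terminal |x(T)| along one sample path:", abs(x[-1]))
```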

Example 5.2. Let $r(t)$, $t \ge 0$, be a right-continuous Markov chain taking values in $S = \{1, 2\}$ with the generator

$$\Gamma = \begin{pmatrix} -2 & 2\\ 1 & -1 \end{pmatrix}.$$

Consider the following two-dimensional hybrid impulsive stochastic delay system

$$\begin{cases} dx(t) = \big[A(r(t))x(t) + B(r(t))x(t-0.5)\big]\,dt + \big[C(r(t))x(t) + D(r(t))x(t-0.5)\big]\,d\omega(t), & t \ge 0,\ t \ne t_k,\\ \Delta x(t_k) = -0.6x(t_k) + 0.4x(t_k-0.5), & k = 1, 2, \ldots, \end{cases}$$
(5.2)

where

$$A_1 = \begin{pmatrix} -1 & -1\\ -0.5 & 0.4 \end{pmatrix}, \quad A_2 = \begin{pmatrix} -0.3 & 0.4\\ 0.2 & 0.1 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 0.6 & 0.8\\ -1 & 0.6 \end{pmatrix}, \quad B_2 = \begin{pmatrix} -1 & 0.6\\ 0.7 & -0.2 \end{pmatrix},$$
$$C_1 = \begin{pmatrix} 0.2 & 0\\ 0 & 0.8 \end{pmatrix}, \quad C_2 = \begin{pmatrix} 1 & 0\\ 0 & 0.6 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0.7 & 0\\ 0 & 1 \end{pmatrix}, \quad D_2 = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix},$$

$t_k = 0.02k$, $\tau(t) = 0.5$, $\|B_1\| = 1.1817$, $\|B_2\| = 1.3650$, $\|C_1\| = 0.8$, $\|C_2\| = 1$, $\|D_1\| = \|D_2\| = 1$. It is easy to see that $d_{1k} = d_{2k} = 0.20$, $\bar{d}_{1k} = \bar{d}_{2k} = 0.20$. By computation, we have

$$\lambda_{\max}(A_i + A_i^T) = \begin{cases} 1.4518, & i = 1,\\ 0.5211, & i = 2. \end{cases}$$

Taking λ = 0.5, we have

$$\Xi = \operatorname{diag}(35.1065,\ 35.2939).$$

Hence

$$\Xi - \Gamma = \begin{pmatrix} 37.1065 & -2\\ -1 & 36.2939 \end{pmatrix}.$$

$\Xi - \Gamma$ is a nonsingular M-matrix. Then

$$(r_1, r_2)^T = (\Xi - \Gamma)^{-1}\vec{1} = (0.0285,\ 0.0284)^T.$$

Compute

$$r_i\Big[0.5 + \frac{q\alpha_2^2e^{1/4}}{\alpha_1^2}\big(\|B_i\| + \|C_i\|\,\|D_i\| + \|D_i\|^2\big)\Big] \le 1, \quad i = 1, 2.$$

By Corollary 4.3, the zero solution of Equation (5.2) is exponentially stable in the mean square. Figure 3 depicts the states $x_1(t)$, $x_2(t)$ of Equation (5.2), which are exponentially stable in the mean square.
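The computations above can be reproduced numerically; the following script (ours, using the rounded norms quoted in the text) rebuilds $\Xi - \Gamma$, solves for $r$, and checks condition (4.24):

```python
import numpy as np

lam, tau, delta = 0.5, 0.5, 0.02
d, dbar = 0.20, 0.20
q = 1.0 / (d + dbar * np.exp(lam * tau))            # q > 1 as required by Corollary 4.3
normB = np.array([1.1817, 1.3650])
normC = np.array([0.8, 1.0])
normD = np.array([1.0, 1.0])
lmaxA = np.array([1.4518, 0.5211])                   # lambda_max(A_i + A_i^T)
Gamma = np.array([[-2.0, 2.0], [1.0, -1.0]])

Xi = np.diag(-lmaxA - normB - normC**2 - normC * normD + np.log(q) / delta)
M = Xi - Gamma                                       # approx [[37.1, -2], [-1, 36.3]]
assert np.all(M - np.diag(np.diag(M)) <= 0) and np.all(np.linalg.inv(M) >= 0)  # M-matrix test

r = np.linalg.solve(M, np.ones(2))                   # r = (Xi - Gamma)^{-1} 1, approx (0.0285, 0.0283)
a1, a2 = r.min(), r.max()
lhs = r * (lam + q * a2**2 * np.exp(lam * tau) / a1**2 * (normB + normC * normD + normD**2))
print(M, r, lhs, bool(np.all(lhs <= 1.0)))           # condition (4.24) holds
```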

Figure 3. Trajectory of the states $x(t)$ of Equation (5.2).