1 Introduction

It is well known that Cohen and Grossberg originally proposed the Cohen-Grossberg neural networks (CGNNs) in [1]. Since then, CGNNs have found extensive applications in pattern recognition, image and signal processing, quadratic optimization, and artificial intelligence [2-6]. However, these successful applications depend heavily on the stability of the neural networks, which is also a crucial feature in their design. In practice, both time delays and impulses may cause undesirable dynamic network behaviors such as oscillation and instability [2-12]. Therefore, the stability analysis of delayed impulsive CGNNs has become a topic of great theoretical and practical importance in recent years [2-6]. Recently, CGNNs with Markovian jumping parameters have been studied extensively, because systems with Markovian jumping parameters are useful in modeling abrupt phenomena such as random failures, operation at different points of a nonlinear plant, and changes in the interconnections of subsystems [13-18]. Noise disturbance is unavoidable in real nervous systems and is a major source of instability and poor performance in neural networks: a neural network can be stabilized or destabilized by certain stochastic inputs, and synaptic transmission in real neural networks can be viewed as a noisy process caused by random fluctuations in neurotransmitter release and other probabilistic factors. Hence, noise disturbance should also be taken into consideration when discussing the stability of neural networks [14-18]. On the other hand, diffusion phenomena cannot be avoided in the real world. For simplicity, diffusion was usually modeled by the linear Laplacian in much of the previous literature [2, 19-21]; however, diffusion behavior is complicated enough that nonlinear reaction-diffusion models were considered in several papers [3, 22-25], and the nonlinear p-Laplace diffusion ($p>1$) was employed in [3] to simulate certain diffusion behaviors. In addition, aging of electronic components, external disturbances, and parameter perturbations always result in partially unknown Markovian transition rates [26, 27]. To the best of our knowledge, stochastic stability of delayed impulsive Markovian jumping p-Laplace diffusion CGNNs has rarely been considered. Besides, stochastic exponential stability remains a key concern owing to its importance in designing a neural network, and this situation motivates our present study. In this paper, we therefore investigate stochastic global exponential stability criteria for the above-mentioned CGNNs by means of the linear matrix inequality (LMI) approach.

2 Model description and preliminaries

The stability of the following Cohen-Grossberg neural networks was studied in previous literature via differential inequalities (see, e.g., [2]):

\[
\begin{cases}
\dfrac{\partial u}{\partial t}=\nabla\cdot\bigl(D\circ\nabla u(t,x)\bigr)-\tilde A(u(t,x))\bigl[\tilde B(u(t,x))-C\tilde f(u(t,x))-D\tilde g(u(t-\tau(t),x))+J\bigr],\\
\hspace{5.5cm}\text{for all } t\ge t_0,\ t\ne t_k,\ x\in\Omega,\\
u(t_k,x)=M_ku(t_k^-,x)+N\tilde h\bigl(u(t_k^--\tau(t_k),x)\bigr),\quad k=1,2,\ldots,
\end{cases}
\tag{2.1}
\]

where $u=u(t,x)=(u_1(t,x),u_2(t,x),\ldots,u_n(t,x))^T$, $\tilde f(u)=(\tilde f_1(u_1),\tilde f_2(u_2),\ldots,\tilde f_n(u_n))^T$, and $\tilde g(u)=(\tilde g_1(u_1),\tilde g_2(u_2),\ldots,\tilde g_n(u_n))^T$.

In this paper, we always assume $\tilde h\equiv 0$, which is a reasonable restriction (see [3]). According to [2, Definition 2.1], a constant vector $u^*\in\mathbb R^n$ is said to be an equilibrium point of system (2.1) if

\[
\tilde B(u^*)-C\tilde f(u^*)-D\tilde g(u^*)+J=0,\quad\text{and}\quad (M_k-I)u^*+N\tilde h(u^*)=0.
\tag{2.2}
\]

Let $v=u-u^*$; then system (2.1) with $\tilde h\equiv 0$ can be transformed into

\[
\begin{cases}
\dfrac{\partial v}{\partial t}=\nabla\cdot\bigl(D\circ\nabla v(t,x)\bigr)-A(v(t,x))\bigl[B(v(t,x))-Cf(v(t,x))-Dg(v(t-\tau(t),x))\bigr],\\
\hspace{5.5cm}\text{for all } t\ge t_0,\ t\ne t_k,\ x\in\Omega,\\
v(t_k,x)=M_kv(t_k^-,x),\quad k=1,2,\ldots,
\end{cases}
\tag{2.3}
\]

where $v=v(t,x)=(v_1(t,x),v_2(t,x),\ldots,v_n(t,x))^T$, $u^*=(u_1^*,u_2^*,\ldots,u_n^*)^T$, $A(v(t,x))=\tilde A(v(t,x)+u^*)=\tilde A(u(t,x))$,

\[
B(v(t,x))=\tilde B(u(t,x))-\tilde B(u^*),\quad
f(v(t,x))=\tilde f(u(t,x))-\tilde f(u^*),\quad
g(v(t,x))=\tilde g(u(t,x))-\tilde g(u^*),
\tag{2.4}
\]

and

\[
f(v)=\bigl(f_1(v_1),f_2(v_2),\ldots,f_n(v_n)\bigr)^T,\qquad g(v)=\bigl(g_1(v_1),g_2(v_2),\ldots,g_n(v_n)\bigr)^T.
\]
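For the reader's convenience, we sketch why the shift works (a one-line computation, under the sign conventions fixed above): since $\partial v/\partial t=\partial u/\partial t$ and $\nabla v=\nabla u$,
\[
\frac{\partial v}{\partial t}
=\nabla\cdot(D\circ\nabla v)-A(v)\Bigl[\underbrace{\tilde B(u)-C\tilde f(u)-D\tilde g(u(t-\tau(t),x))+J}_{\text{bracket of (2.1)}}
-\underbrace{\bigl(\tilde B(u^*)-C\tilde f(u^*)-D\tilde g(u^*)+J\bigr)}_{=0\ \text{by (2.2)}}\Bigr],
\]
and regrouping the terms via (2.4) yields exactly the bracket of (2.3).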

Then, according to [2, Definition 2.1], $v\equiv 0$ is an equilibrium point of system (2.3). Hence it suffices to consider the stability of the null solution of the Cohen-Grossberg neural networks. Naturally, we propose the following hypotheses on system (2.3) with $\tilde h\equiv 0$.

(A1) $A(v(t,x))$ is a bounded, positive, and continuous diagonal matrix; i.e., there exist two positive diagonal matrices $\underline A$ and $\overline A$ such that $0<\underline A\le A(v(t,x))\le\overline A$.

(A2) $B(v(t,x))=\bigl(b_1(v_1(t,x)),b_2(v_2(t,x)),\ldots,b_n(v_n(t,x))\bigr)^T$ is such that there exists a positive definite diagonal matrix $B=\operatorname{diag}(B_1,B_2,\ldots,B_n)$ satisfying

\[
\frac{b_j(r)}{r}\ge B_j,\quad j=1,2,\ldots,n,\ \forall r\in\mathbb R,\ r\ne 0.
\]

(A3) There exist constant diagonal matrices $G_k=\operatorname{diag}(G_1^{(k)},G_2^{(k)},\ldots,G_n^{(k)})$ and $F_k=\operatorname{diag}(F_1^{(k)},F_2^{(k)},\ldots,F_n^{(k)})$, $k=1,2$, with $|F_j^{(1)}|\le F_j^{(2)}$, $|G_j^{(1)}|\le G_j^{(2)}$, $j=1,2,\ldots,n$, such that

\[
F_j^{(1)}\le\frac{f_j(r)}{r}\le F_j^{(2)},\qquad G_j^{(1)}\le\frac{g_j(r)}{r}\le G_j^{(2)},\quad j=1,2,\ldots,n,\ \forall r\in\mathbb R,\ r\ne 0.
\]

Remark 2.1 In many previous works (see, e.g., [2, 3]), the authors always assumed

\[
0\le\frac{f_j(r)}{r}\le F_j,\qquad 0\le\frac{g_j(r)}{r}\le G_j,\quad j=1,2,\ldots,n.
\]

However, $F_j^{(1)}$, $G_j^{(1)}$ in (A3) need not be positive constants, and hence the functions $f$, $g$ considered here are more general.

Remark 2.2 It is obvious from (2.4) that $B(0)=f(0)=g(0)=0$, and then $B(0)-Cf(0)-Dg(0)=0$.

Since stochastic noise disturbance is unavoidable in practical neural networks, it is necessary to consider the stability of the null solution of the following Markovian jumping CGNNs:

\[
\begin{cases}
dv(t,x)=\Bigl\{\nabla\cdot\bigl(D(t,x,v)\circ\nabla_pv(t,x)\bigr)-A(v(t,x))\bigl[B(v(t,x))-C(r(t))f(v(t,x))\\
\hspace{2.2cm}-D(r(t))g(v(t-\tau(t),x))\bigr]\Bigr\}\,dt+\sigma\bigl(t,v(t,x),v(t-\tau(t),x),r(t)\bigr)\,dw(t),\\
\hspace{5.5cm}\text{for all } t\ge t_0,\ t\ne t_k,\ x\in\Omega,\\
v(t_k,x)=M_k(r(t_k))v(t_k^-,x),\quad k=1,2,\ldots.
\end{cases}
\tag{2.5}
\]

The initial conditions and the boundary conditions are given by

\[
v(\theta,x)=\phi(\theta,x),\quad(\theta,x)\in[-\tau,0]\times\Omega,
\tag{2.5b}
\]

and

\[
\mathcal B[v_i(t,x)]=0,\quad(t,x)\in[-\tau,+\infty)\times\partial\Omega,\ i=1,2,\ldots,n.
\tag{2.5c}
\]
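Here $\mathcal B$ stands for either of the two standard boundary operators (following the convention of (6a) in [13]; the explicit forms below are supplied for the reader's convenience):
\[
\text{Dirichlet: }\ \mathcal B[v_i(t,x)]=v_i(t,x);\qquad
\text{Neumann: }\ \mathcal B[v_i(t,x)]=\frac{\partial v_i(t,x)}{\partial\nu},
\]
where $\nu$ is the outward unit normal on $\partial\Omega$.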

Here $p>1$ is a given scalar, and $\Omega\subset\mathbb R^m$ is a bounded domain whose boundary $\partial\Omega$ is smooth of class $C^2$; $v(t,x)=(v_1(t,x),v_2(t,x),\ldots,v_n(t,x))^T\in\mathbb R^n$, where $v_i(t,x)$ is the state variable of the $i$th neuron at time $t$ and space point $x$. The matrix $D(t,x,v)=(D_{jk}(t,x,v))_{n\times m}$ satisfies $D_{jk}(t,x,v)\ge 0$ for all $j$, $k$, $(t,x,v)$, where the smooth functions $D_{jk}(t,x,v)$ are diffusion coefficients. The expression $D(t,x,v)\circ\nabla_pv=\bigl(D_{jk}(t,x,v)|\nabla v_j|^{p-2}\,\partial v_j/\partial x_k\bigr)_{n\times m}$ denotes the Hadamard product of the matrix $D(t,x,v)$ and $\nabla_pv$ (see [13] or [28] for details).

Denote $w(t)=(w^{(1)}(t),w^{(2)}(t),\ldots,w^{(n)}(t))^T$, where each $w^{(j)}(t)$ is a scalar standard Brownian motion defined on a complete probability space $(\Omega,\mathcal F,P)$ with a natural filtration $\{\mathcal F_t\}_{t\ge 0}$. The noise perturbation $\sigma:\mathbb R_+\times\mathbb R^n\times\mathbb R^n\times S\to\mathbb R^{n\times n}$ is a Borel measurable function. $\{r(t),\,t\ge 0\}$ is a right-continuous Markov process on the probability space, which takes values in the finite space $S=\{1,2,\ldots,s\}$ with generator $\Pi=\{\pi_{ij}\}$ given by

\[
P\bigl(r(t+\delta)=j\mid r(t)=i\bigr)=
\begin{cases}
\pi_{ij}\delta+o(\delta), & j\ne i,\\
1+\pi_{ii}\delta+o(\delta), & j=i,
\end{cases}
\]

where $\pi_{ij}\ge 0$ is the transition rate from $i$ to $j$ ($j\ne i$), $\pi_{ii}=-\sum_{j=1,j\ne i}^{s}\pi_{ij}$, $\delta>0$, and $\lim_{\delta\to 0}o(\delta)/\delta=0$. In addition, the transition rates of the Markovian chain are considered to be partially available; namely, some elements of the transition rate matrix $\Pi$ are time-invariant but unknown. For instance, a system with three operation modes may have the following transition rate matrix $\Pi$:

\[
\Pi=\begin{pmatrix}\pi_{11}&?&?\\ ?&\pi_{22}&?\\ \pi_{31}&\pi_{32}&\pi_{33}\end{pmatrix},
\]

where '?' represents an inaccessible element. For notational clarity, for a given $i\in S$ we denote $S_{kn}^i\triangleq\{j:\pi_{ij}\text{ is known}\}$ and $S_{un}^i\triangleq\{j:\pi_{ij}\text{ is unknown, and }j\ne i\}$, and we set $\tilde\alpha_i\triangleq\max_{j\in S_{un}^i}\pi_{ij}$. The time-varying delay $\tau(t)$ satisfies $0<\tau(t)\le\tau$ with $\dot\tau(t)\le\kappa<1$. Moreover, $A(v(t,x))=\operatorname{diag}\bigl(a_1(v_1(t,x)),a_2(v_2(t,x)),\ldots,a_n(v_n(t,x))\bigr)$ and $B(v(t,x))=\bigl(b_1(v_1(t,x)),b_2(v_2(t,x)),\ldots,b_n(v_n(t,x))\bigr)^T$, where $a_j(v_j(t,x))$ represents an amplification function and $b_j(v_j(t,x))$ is an appropriately behaved function. For $r(t)=i$, the matrices $C(r(t))$, $D(r(t))$ and $M_k(r(t))$ are denoted by $C_i=(c_{lk}^i)_{n\times n}$, $D_i=(d_{lk}^i)_{n\times n}$ and $M_{ki}$, respectively, where $c_{lk}^i$, $d_{lk}^i$ denote the connection strengths of the $k$th neuron on the $l$th neuron in mode $i$. $M_{ki}$ is a symmetric matrix for any given $k$, $i$. Denote the vector functions $f(v(t,x))=\bigl(f_1(v_1(t,x)),f_2(v_2(t,x)),\ldots,f_n(v_n(t,x))\bigr)^T$ and $g(v(t,x))=\bigl(g_1(v_1(t,x)),\ldots,g_n(v_n(t,x))\bigr)^T$, where $f_j(v_j(t,x))$, $g_j(v_j(t,x))$ are the neuron activation functions of the $j$th unit at time $t$ and space point $x$. A small illustration of the index sets $S_{kn}^i$, $S_{un}^i$ is sketched below.
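The following Python snippet (our sketch, not part of the original analysis) classifies the entries of a partially known transition rate matrix into $S_{kn}^i$ and $S_{un}^i$; the sparsity pattern mirrors the displayed $\Pi$, while the numerical entries are hypothetical.

```python
# A sketch (ours): classify the entries of a partially known transition-rate
# matrix into the index sets S_kn^i and S_un^i; None encodes '?'.
PI = [[-0.6, None, None],
      [None, -0.7, None],
      [ 0.5,  0.3, -0.8]]

s = len(PI)
S_kn = {i: [j for j in range(s) if PI[i][j] is not None] for i in range(s)}
S_un = {i: [j for j in range(s) if PI[i][j] is None and j != i] for i in range(s)}

# tilde_alpha_i must upper-bound the unknown off-diagonal rates in row i;
# when pi_ii happens to be known, the zero row-sum of a generator matrix
# yields the natural bound -pi_ii (an assumption of this sketch).
tilde_alpha = {i: -PI[i][i] for i in range(s) if PI[i][i] is not None}
print(S_kn)         # {0: [0], 1: [1], 2: [0, 1, 2]}
print(S_un)         # {0: [1, 2], 1: [0, 2], 2: []}
print(tilde_alpha)  # {0: 0.6, 1: 0.7, 2: 0.8}
```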

In addition, we always assume that $t_0=0$ and $v(t_k^+,x)=v(t_k,x)$ for all $k=1,2,\ldots$, where $v(t_k^-,x)$ and $v(t_k^+,x)$ denote the left-hand and right-hand limits of $v(t,x)$ at $t_k$, respectively. Each $t_k$ ($k=1,2,\ldots$) is an impulsive moment satisfying $0<t_1<t_2<\cdots<t_k<\cdots$ and $\lim_{k\to\infty}t_k=+\infty$. The boundary condition (2.5c) is either the Dirichlet or the Neumann boundary condition, defined as in (6a) of [13]. Similarly to (i)-(xii) of [13], we introduce the following standard notations:

$L^2(\mathbb R\times\Omega)$; $L^2_{\mathcal F_0}([-\tau,0]\times\Omega;\mathbb R^n)$; $Q=(q_{ij})_{n\times n}>0$ ($<0$); $Q=(q_{ij})_{n\times n}\ge 0$ ($\le 0$); $Q_1\ge Q_2$ ($Q_1\le Q_2$); $Q_1>Q_2$ ($Q_1<Q_2$); $\lambda_{\max}(\Phi)$; $\lambda_{\min}(\Phi)$; $|C|=(|c_{ij}|)_{n\times n}$ for $C=(c_{ij})_{n\times n}$; $|u(t,x)|=(|u_1(t,x)|,\ldots,|u_n(t,x)|)^T$; the identity matrix $I$; and the symmetric term $*$ in a matrix.

Throughout this paper, we assume (A1)-(A3) and the following conditions hold:

(A4) $\sigma(t,0,0,i)\equiv 0$ for all $i\in S$.

Remark 2.3 The condition $|H|=H$ is not too stringent for a positive semi-definite matrix $H=(h_{ij})_{n\times n}\ge 0$. Indeed, $|H|=H$ if all $h_{ij}\ge 0$, $i,j$.

(A5) There exist symmetric matrices $R_j\ge 0$ with $|R_j|=R_j$, $j=1,2$, such that for any mode $i\in S$,

\[
\operatorname{trace}\bigl[\sigma^T\bigl(t,v(t,x),v(t-\tau(t),x),i\bigr)\,\sigma\bigl(t,v(t,x),v(t-\tau(t),x),i\bigr)\bigr]
\le v^T(t,x)R_1v(t,x)+v^T(t-\tau(t),x)R_2v(t-\tau(t),x).
\]

Similarly to [2, Definition 2.1], we can see from (A4) that system (2.5) has the null solution as its equilibrium point.

Lemma 2.1 ([13, Lemma 6])

Let $P_i=\operatorname{diag}(p_{i1},p_{i2},\ldots,p_{in})$ be a positive definite diagonal matrix for a given $i$, and let $v$ be a solution of system (2.5). Then

\[
\int_\Omega v^TP_i\,\nabla\cdot\bigl(D(t,x,v)\circ\nabla_pv\bigr)\,dx
=-\sum_{k=1}^{m}\sum_{j=1}^{n}\int_\Omega p_{ij}D_{jk}(t,x,v)\,|\nabla v_j|^{p-2}\Bigl(\frac{\partial v_j}{\partial x_k}\Bigr)^2dx
=\int_\Omega\bigl(\nabla\cdot(D(t,x,v)\circ\nabla_pv)\bigr)^TP_iv\,dx.
\]
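The middle expression comes from integration by parts; a minimal sketch (ours, assuming enough smoothness and that the boundary integral vanishes under either boundary condition in (2.5c)): for each $j$,
\[
\int_\Omega p_{ij}v_j\sum_{k=1}^m\frac{\partial}{\partial x_k}\Bigl(D_{jk}|\nabla v_j|^{p-2}\frac{\partial v_j}{\partial x_k}\Bigr)dx
=\int_{\partial\Omega}p_{ij}v_j|\nabla v_j|^{p-2}\sum_{k=1}^mD_{jk}\frac{\partial v_j}{\partial x_k}\nu_k\,dS
-\sum_{k=1}^m\int_\Omega p_{ij}D_{jk}|\nabla v_j|^{p-2}\Bigl(\frac{\partial v_j}{\partial x_k}\Bigr)^2dx,
\]
where $\nu=(\nu_1,\ldots,\nu_m)$ is the outward unit normal; the boundary term is zero, and summing over $j$ gives the identity.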

Lemma 2.2 (see [11])

Consider the following differential inequality:

\[
\begin{cases}
D^+v(t)\le -a\,v(t)+b\,[v(t)]_\tau, & t\ne t_k,\\
v(t_k)\le a_k\,v(t_k^-)+b_k\,[v(t_k^-)]_\tau,
\end{cases}
\]

where $v(t)\ge 0$, $[v(t)]_\tau=\sup_{t-\tau\le s\le t}v(s)$, $[v(t^-)]_\tau=\sup_{t-\tau\le s<t}v(s)$, and $v(t)$ is continuous except at the points $t_k$, $k=1,2,\ldots$, where it has jump discontinuities. The sequence $\{t_k\}$ satisfies $0=t_0<t_1<t_2<\cdots<t_k<t_{k+1}<\cdots$ and $\lim_{k\to\infty}t_k=\infty$. Suppose that

(1) $a>b\ge 0$;

(2) $t_k-t_{k-1}>\delta\tau$, where $\delta>1$, and there exist constants $\gamma>0$, $M>0$ such that
\[
\rho_1\rho_2\cdots\rho_{k+1}e^{k\lambda\tau}\le Me^{\gamma t_k},
\]
where $\rho_i=\max\{1,a_i+b_ie^{\lambda\tau}\}$, and $\lambda>0$ is the unique solution of the equation $\lambda=a-be^{\lambda\tau}$;

then

\[
v(t)\le M\,[v(0)]_\tau\,e^{-(\lambda-\gamma)t}.
\]

In addition, if $\theta=\sup_{k\in\mathbb Z}\max\{1,a_k+b_ke^{\lambda\tau}\}$, then

\[
v(t)\le\theta\,[v(0)]_\tau\,e^{-\bigl(\lambda-\frac{\ln(\theta e^{\lambda\tau})}{\delta\tau}\bigr)t},\quad t\ge 0.
\]
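The scalar equation $\lambda=a-be^{\lambda\tau}$ in Lemma 2.2 is transcendental but easy to solve numerically. The snippet below is a minimal sketch (ours, not from [11]); the function name is our own choice.

```python
import numpy as np
from scipy.optimize import brentq

# For a > b > 0, the equation lambda = a - b*exp(lambda*tau) has a unique
# positive root, since h(x) = a - b*exp(x*tau) - x is strictly decreasing
# with h(0) = a - b > 0 and h(a) = -b*exp(a*tau) < 0.
def unique_lambda(a, b, tau):
    h = lambda lam: a - b * np.exp(lam * tau) - lam
    return brentq(h, 0.0, a)

# With the Case (1) data of Section 4 (a = 0.7936, b = 3.0038e-4, tau = 7.75)
# this returns lambda ~ 0.7163, matching the value reported there.
print(unique_lambda(0.7936, 3.0038e-4, 7.75))
```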

3 Main results

Theorem 3.1 Assume that p>1. If the following conditions are satisfied:

(C1) there exist a sequence of positive scalars $\underline\alpha_i$, $\bar\alpha_i$ ($i\in S$) and positive definite diagonal matrices $P_i=\operatorname{diag}(p_{i1},p_{i2},\ldots,p_{in})$ ($i\in S$), $L_1$, $L_2$ and $Q$ such that the following LMI conditions hold:

\[
\Theta_i\triangleq\begin{pmatrix}
A_{i1} & 0 & (F_1+F_2)L_1+P_i\overline A|C_i| & P_i\overline A|D_i|\\
* & A_{i2} & 0 & (G_1+G_2)L_2\\
* & * & 2L_1 & 0\\
* & * & * & 2L_2
\end{pmatrix}>0,\quad i\in S,
\tag{3.1}
\]
\[
P_i>\underline\alpha_iI,\quad i\in S,
\tag{3.2}
\]
\[
P_i<\bar\alpha_iI,\quad i\in S,
\tag{3.3}
\]

where $\dot\tau(t)\le\kappa<1$ for all $t$, and

\[
A_{i1}=2P_i\underline AB-\bar\alpha_iR_1-\sum_{j\in S_{kn}^i}\pi_{ij}P_j-\tilde\alpha_i\sum_{j\in S_{un}^i}P_j-Q+2F_1L_1F_2,\qquad
A_{i2}=(1-\kappa)Q+2G_1L_2G_2;
\]

(C2) $\min_{i\in S}\Bigl\{\dfrac{\lambda_{\min}\Theta_i}{\lambda_{\max}P_i},\dfrac{\lambda_{\min}\Theta_i}{\lambda_{\max}Q}\Bigr\}>\dfrac{\lambda_{\max}R_2}{\lambda_{\min}Q}\max_{i\in S}\bar\alpha_i\ge 0$;

(C3) there exists a constant $\delta>1$ such that $\inf_{k\in\mathbb Z}(t_k-t_{k-1})>\delta\tau$, $\delta^2\tau>\ln(\rho e^{\lambda\tau})$ and $\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}>0$, where $\rho=\max_j\{1,a_j+b_je^{\lambda\tau}\}$ with $a_j\equiv\max_{k,i}\frac{\lambda_{\max}(M_{ki}P_iM_{ki})}{\lambda_{\min}P_i}$ and $b_j\equiv 1$, and $\lambda>0$ is the unique solution of the equation $\lambda=a-be^{\lambda\tau}$ with $a=\min_{i\in S}\bigl\{\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}P_i},\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}Q}\bigr\}$ and $b=\frac{\lambda_{\max}R_2}{\lambda_{\min}Q}\max_{i\in S}\bar\alpha_i$,

then the null solution of system (2.5) is stochastically exponentially stable with convergence rate $\frac12\bigl(\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}\bigr)$.

Proof Consider the Lyapunov-Krasovskii functional

\[
V\bigl(t,v(t,x),i\bigr)=V_{1i}+V_{2i},\quad i\in S,
\]

where

\[
V_{1i}=\int_\Omega v^T(t,x)P_iv(t,x)\,dx=\int_\Omega|v^T(t,x)|\,P_i\,|v(t,x)|\,dx,
\]
\[
V_{2i}=\int_\Omega\int_{-\tau(t)}^{0}v^T(t+\theta,x)Qv(t+\theta,x)\,d\theta\,dx
=\int_\Omega\int_{-\tau(t)}^{0}|v^T(t+\theta,x)|\,Q\,|v(t+\theta,x)|\,d\theta\,dx.
\tag{3.4}
\]
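Before invoking (A3), note a quick intermediate bound (our sketch of a step used implicitly below): differentiating $V_{2i}$ along (2.5) and using $\dot\tau(t)\le\kappa<1$ together with the positive definiteness of the diagonal matrix $Q$,
\[
\mathcal LV_{2i}=\int_\Omega\Bigl[v^T(t,x)Qv(t,x)-(1-\dot\tau(t))\,v^T(t-\tau(t),x)Qv(t-\tau(t),x)\Bigr]dx
\le\int_\Omega\Bigl[|v^T(t,x)|Q|v(t,x)|-(1-\kappa)|v^T(t-\tau(t),x)|Q|v(t-\tau(t),x)|\Bigr]dx,
\]
which produces the $Q$ and $(1-\kappa)Q$ contributions to $\Theta_i$.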

From (A3), we have

\[
2|f^T(v(t,x))|\,L_1\,|f(v(t,x))|-2|v^T(t,x)|(F_1+F_2)L_1|f(v(t,x))|+2|v^T(t,x)|F_1L_1F_2|v(t,x)|\le 0,
\tag{3.5}
\]
\[
2|g^T(v(t-\tau(t),x))|\,L_2\,|g(v(t-\tau(t),x))|-2|v^T(t-\tau(t),x)|(G_1+G_2)L_2|g(v(t-\tau(t),x))|
+2|v^T(t-\tau(t),x)|G_1L_2G_2|v(t-\tau(t),x)|\le 0.
\tag{3.6}
\]

Denote $\zeta(t,x)=\bigl(|v^T(t,x)|,|v^T(t-\tau(t),x)|,|f^T(v(t,x))|,|g^T(v(t-\tau(t),x))|\bigr)^T$. Let $\mathcal L$ be the weak infinitesimal operator such that $\mathcal LV(t,v(t,x),i)=\mathcal LV_{1i}+\mathcal LV_{2i}$ for any given $i\in S$. Next, it follows from Lemma 2.1, (A1)-(A5), (3.5) and (3.6) that for $t\ne t_k$,

\[
\mathcal LV(t)\le-\int_\Omega\zeta^T(t,x)\,\Theta_i\,\zeta(t,x)\,dx+\int_\Omega|v^T(t-\tau(t),x)|\,\bar\alpha_iR_2\,|v(t-\tau(t),x)|\,dx,
\tag{3.7}
\]

where $v=v(t,x)$ is a solution of system (2.5).

Remark 3.1 Here we employ some methods different from those of [3, (2.4)-(2.6)] in the proof of [3, Theorem 2.1] and [2, Theorem 3.1]. Hence, our LMI condition (3.1) is more effective than the LMI condition (2.1) of [3, Theorem 2.1] even when system (2.5) is reduced to system (3.11) (see Remark 3.3 below).

It is not difficult to conclude from the Itô formula that, for $t\in[t_k,t_{k+1})$,

\[
D^+\,\mathbb EV(t)\le-\min_{i\in S}\Bigl\{\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}P_i},\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}Q}\Bigr\}\mathbb EV(t)
+\Bigl(\frac{\lambda_{\max}R_2}{\lambda_{\min}Q}\max_{i\in S}\bar\alpha_i\Bigr)[\mathbb EV(t)]_\tau.
\tag{3.8}
\]

Owing to $t_k-t_{k-1}>\delta\tau>\tau$, we can derive

\[
V(t_k)\le\Bigl(\max_{k,i}\frac{\lambda_{\max}(M_{ki}P_iM_{ki})}{\lambda_{\min}P_i}\Bigr)V(t_k^-)+[V(t_k^-)]_\tau.
\tag{3.9}
\]
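A one-line justification of (3.9) (our sketch): at an impulsive moment, $v(t_k,x)=M_{ki}v(t_k^-,x)$, so
\[
V_{1i}(t_k)=\int_\Omega v^T(t_k^-,x)\,M_{ki}P_iM_{ki}\,v(t_k^-,x)\,dx
\le\frac{\lambda_{\max}(M_{ki}P_iM_{ki})}{\lambda_{\min}P_i}\,V_{1i}(t_k^-),
\]
while $V_{2i}(t_k)=V_{2i}(t_k^-)\le[V(t_k^-)]_\tau$; adding the two bounds gives (3.9).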

It follows from (C3) that $\rho^{k+1}e^{k\lambda\tau}\le e^{(\delta^2-\lambda)\tau}e^{\delta t_k}$, where $\rho=\max_j\{1,a_j+b_je^{\lambda\tau}\}$ with $a_j\equiv\max_{k,i}\frac{\lambda_{\max}(M_{ki}P_iM_{ki})}{\lambda_{\min}P_i}$ and $b_j\equiv 1$, and $\lambda>0$ is the unique solution of the equation $\lambda=a-be^{\lambda\tau}$, satisfying $\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}>0$. Moreover, combining (3.8), (3.9), (C2), (C3) and Lemma 2.2 results in

\[
\mathbb E\|v(t,x)\|_2^2\le\Bigl[\frac{\max_{i\in S}(\lambda_{\max}P_i+\lambda_{\max}Q)}{\min_{i\in S}(\lambda_{\min}P_i)}\,\rho\,\sup_{-\tau\le s\le 0}\mathbb E\|\phi(s)\|_2^2\Bigr]
e^{-\bigl(\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}\bigr)t}.
\tag{3.10}
\]

Therefore, we can see by [13, Definition 2.1] that the null solution of system (2.5) is globally stochastically exponentially stable in the mean square with convergence rate $\frac12\bigl(\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}\bigr)$. □

Remark 3.2 Although the employed Lyapunov-Krasovskii functional is simple, together with condition (A3) it simplifies the proof. Moreover, the obtained LMI-based criterion is more effective and less conservative than [3, Theorem 2.1], which will be shown by the numerical example in Remark 3.3.

Moreover, if the Markovian jumping phenomena are ignored, system (2.5) reduces to the following system:

\[
\begin{cases}
dv(t,x)=\Bigl\{\nabla\cdot\bigl(D(t,x,v)\circ\nabla_pv(t,x)\bigr)-A(v(t,x))\bigl[B(v(t,x))-Cf(v(t,x))\\
\hspace{2.2cm}-Dg(v(t-\tau(t),x))\bigr]\Bigr\}\,dt+\sigma\bigl(t,v(t,x),v(t-\tau(t),x)\bigr)\,dw(t),\\
\hspace{5.5cm}\text{for all } t\ge t_0,\ t\ne t_k,\ x\in\Omega,\\
v(t_k,x)=Mv(t_k^-,x),\quad k=1,2,\ldots,
\end{cases}
\tag{3.11}
\]

where $M$ is a symmetric matrix.

In order to compare with the main result of [3], we deduce the following corollary from Theorem 3.1.

Corollary 3.2 If the following conditions are satisfied:

(C1*) there exist positive scalars $\underline\alpha$, $\bar\alpha$ and positive definite diagonal matrices $P$, $L_1$, $L_2$ and $Q$ such that the following LMI conditions hold (the block matrix is the specialization of (3.1) without the Markovian terms):

\[
\tilde\Theta\triangleq\begin{pmatrix}
A_1 & 0 & (F_1+F_2)L_1+P\overline A|C| & P\overline A|D|\\
* & A_2 & 0 & (G_1+G_2)L_2\\
* & * & 2L_1 & 0\\
* & * & * & 2L_2
\end{pmatrix}>0,\qquad \underline\alpha I<P<\bar\alpha I,
\]

where

\[
A_1=2P\underline AB-\bar\alpha R_1-Q+2F_1L_1F_2,\qquad A_2=(1-\kappa)Q+2G_1L_2G_2;
\tag{3.12}
\]

(C2*) $\min\Bigl\{\dfrac{\lambda_{\min}\tilde\Theta}{\lambda_{\max}P},\dfrac{\lambda_{\min}\tilde\Theta}{\lambda_{\max}Q}\Bigr\}>\dfrac{\lambda_{\max}R_2}{\lambda_{\min}Q}\,\bar\alpha\ge 0$;

(C3*) there exists a constant $\delta>1$ such that $\inf_{k\in\mathbb Z}(t_k-t_{k-1})>\delta\tau$, $\delta^2\tau>\ln(\rho e^{\lambda\tau})$ and $\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}>0$, where $\rho=\max\{1,a_j+b_je^{\lambda\tau}\}$ with $a_j\equiv\frac{\lambda_{\max}(MPM)}{\lambda_{\min}P}$ and $b_j\equiv 1$, and $\lambda>0$ is the unique solution of the equation $\lambda=\tilde a-\tilde be^{\lambda\tau}$ with $\tilde a=\min\bigl\{\frac{\lambda_{\min}\tilde\Theta}{\lambda_{\max}P},\frac{\lambda_{\min}\tilde\Theta}{\lambda_{\max}Q}\bigr\}$ and $\tilde b=\frac{\lambda_{\max}R_2}{\lambda_{\min}Q}\bar\alpha$,

then the null solution of system (3.11) is stochastically globally exponentially stable in the mean square with convergence rate $\frac12\bigl(\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}\bigr)$.

Remark 3.3 In [3, Theorem 2.1], $R_2$ in (A5) is assumed to be 0; in addition, $F_1=G_1=0$ is also assumed there. [3, Theorem 2.1] states that if there exist positive definite diagonal matrices $P_1$, $P_2$ such that a certain LMI, denoted ($\tilde C$1) below, holds, together with two conditions similar to (C2*) and (C3*), then the null solution of system (3.11) is stochastically globally exponentially stable in the mean square.

Note that the quadratic term $F_2^2$ in ($\tilde C$1) works against the negative definiteness of the corresponding matrix whenever $\lambda_{\min}F_2>1$.

Indeed, we may consider system (3.11) with the following parameters:

\[
\underline A=\begin{pmatrix}1.3&0\\0&1.4\end{pmatrix},\quad
\overline A=\begin{pmatrix}2&0\\0&2\end{pmatrix},\quad
B=\begin{pmatrix}1.8&0\\0&1.88\end{pmatrix},\quad
F_1=G_1=\begin{pmatrix}0&0\\0&0\end{pmatrix},
\]
\[
G_2=\begin{pmatrix}2&0\\0&3\end{pmatrix},\quad
F_2=\begin{pmatrix}7.38&0\\0&7.48\end{pmatrix},\quad
C=D=\begin{pmatrix}0.11&0.003\\0.003&0.12\end{pmatrix},\quad
R_1=\begin{pmatrix}0.01&0.0012\\0.0012&0.01\end{pmatrix},
\]

and $\tau=0.65$, $\kappa=0$.

Now we use the Matlab LMI toolbox to solve the LMI ($\tilde C$1); the result is tmin $=0.0050>0$, which implies that the LMI ($\tilde C$1) is infeasible, let alone the other two conditions (i.e., (C2) and (C3) in [3, Theorem 2.1]). However, solving the LMIs of (C1*) yields tmin $=-0.0056<0$ together with $\underline\alpha=2.0198$, $\bar\alpha=7.5439$, and

\[
P=\begin{pmatrix}4.1989&0\\0&3.9019\end{pmatrix},\quad
L_1=\begin{pmatrix}0.1515&0\\0&0.1398\end{pmatrix},\quad
L_2=\begin{pmatrix}0.8018&0\\0&0.4173\end{pmatrix},\quad
Q=\begin{pmatrix}3.5352&0\\0&3.7619\end{pmatrix}.
\]
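For readers without the Matlab LMI toolbox, a feasibility check of this kind can also be sketched in Python with CVXPY. The block structure below follows (C1*) as reconstructed above; the data are those of this remark, while the solver choice (SCS), the tolerance $10^{-6}$, and the variable names are our own and not part of the original computation.

```python
import cvxpy as cp
import numpy as np

# Data of Remark 3.3 (F1 = G1 = 0, kappa = 0); tau plays no role in the LMI.
n = 2
A_lo, A_hi = np.diag([1.3, 1.4]), np.diag([2.0, 2.0])
B = np.diag([1.8, 1.88])
C = np.array([[0.11, 0.003], [0.003, 0.12]]); D = C
F2, G2 = np.diag([7.38, 7.48]), np.diag([2.0, 3.0])
R1 = np.array([[0.01, 0.0012], [0.0012, 0.01]])
kappa = 0.0

# Decision variables: the diagonals of P, Q, L1, L2 and the scalar bar-alpha.
p, q, l1, l2 = (cp.Variable(n, pos=True) for _ in range(4))
a_bar = cp.Variable(pos=True)
P, Q, L1, L2 = cp.diag(p), cp.diag(q), cp.diag(l1), cp.diag(l2)
Z = np.zeros((n, n))

A1 = 2 * P @ A_lo @ B - a_bar * R1 - Q   # F1 = 0 kills the 2*F1*L1*F2 term
A2 = (1 - kappa) * Q                     # G1 = 0 kills the 2*G1*L2*G2 term
U13 = F2 @ L1 + P @ A_hi @ np.abs(C)     # (F1+F2)L1 + P*A_bar*|C|
U14 = P @ A_hi @ np.abs(D)
U24 = G2 @ L2
Theta = cp.bmat([[A1,    Z,     U13,    U14    ],
                 [Z,     A2,    Z,      U24    ],
                 [U13.T, Z,     2 * L1, Z      ],
                 [U14.T, U24.T, Z,      2 * L2 ]])

eps = 1e-6
constraints = [(Theta + Theta.T) / 2 >> eps * np.eye(4 * n),  # tilde-Theta > 0
               a_bar >= cp.max(p) + eps]   # P < bar-alpha*I; any positive
prob = cp.Problem(cp.Minimize(0), constraints)  # under-alpha < min(p) works
prob.solve(solver=cp.SCS)
print(prob.status, p.value, q.value, l1.value, l2.value)
```

If the reconstruction of $A_1$ above is accurate, the reported feasibility (tmin $=-0.0056<0$) should be reproduced up to solver tolerances.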

As pointed out in Remark 3.1 and Remark 3.2, Theorem 3.1 and its Corollary 3.2 are more widely feasible and less conservative than [3, Theorem 2.1] as a result of the new methods employed in this paper.

Remark 3.4 Both the conclusion and the proof methods of Theorem 3.1 differ from those of previous related results in the literature (see, e.g., [2, 3]). Below, we give a numerical example showing that Theorem 3.1 is more effective and less conservative than some existing results, owing to a significant improvement in the allowable upper bounds of the delays.

4 A numerical example

Example 1 Consider the following CGNN under the Neumann boundary condition:

\[
\begin{cases}
dv=\nabla\cdot\bigl(D(t,x,v)\circ\nabla_pv(t,x)\bigr)\,dt
-\begin{pmatrix}a_1(v_1)&0\\0&a_2(v_2)\end{pmatrix}
\Bigl[\begin{pmatrix}b_1(v_1)\\b_2(v_2)\end{pmatrix}-C_if(v(t,x))-D_ig(v(t-\tau,x))\Bigr]dt\\
\hspace{1.2cm}+\sigma\bigl(t,v(t,x),v(t-\tau,x),i\bigr)\,dw(t),\quad i\in S=\{1,2,3\},\ t\ge t_0,\ t\ne t_k,\ x\in\Omega,\\
v(t_k,x)=M_{ki}v(t_k^-,x),\quad i\in S=\{1,2,3\},\ k=1,2,\ldots,
\end{cases}
\tag{4.1}
\]

with the initial condition

\[
\phi(s,x)=\begin{pmatrix}
x_2\bigl(1-\cos(5\pi x_2)\bigr)\cos\bigl(189(x_2-0.25)\bigr)e^{100s}\\
(1-x_1)\sin^2(4\pi x_2)\cos\bigl(201(x_2-0.55)\bigr)e^{100s}
\end{pmatrix},\quad -7.75\le s\le 0,
\tag{4.2}
\]

and the Neumann boundary condition (or the Dirichlet boundary condition), where $\tau(t)\equiv\tau=7.75$, $p=2.011$, $v=(v_1(t,x),v_2(t,x))^T\in\mathbb R^2$, $x=(x_1,x_2)^T\in\Omega=\{(x_1,x_2)^T\in\mathbb R^2:|x_j|<\sqrt2,\ j=1,2\}$, $a_1(v_1)=2.3+0.6\sin^2(tx_2)$, $a_2(v_2)=2.4+0.5\cos^2(tx_2)$, $b_1(v_1)=2.8v_1+2v_1\sin^2(t^2+x_2)$, $b_2(v_2)=2.88v_2+v_2\cos^2(t^2+x_2)$, $f(v)=g(v)=\bigl(0.1v_1,\ 0.1v_2+0.1v_2\sin^2(tx_2)\bigr)^T$, and

\[
D(t,x,v)=\begin{pmatrix}0.003&0.005\\0.004&0.0006\end{pmatrix},\quad
\underline A=\begin{pmatrix}2.3&0\\0&2.4\end{pmatrix},\quad
\overline A=\begin{pmatrix}2.9&0\\0&2.9\end{pmatrix},\quad
B=\begin{pmatrix}2.8&0\\0&2.88\end{pmatrix},
\]
\[
C_1=\begin{pmatrix}0.11&0.003\\0.003&0.12\end{pmatrix}=D_1,\quad
C_2=\begin{pmatrix}0.16&0.003\\0.003&0.18\end{pmatrix}=D_2,\quad
C_3=\begin{pmatrix}0.13&0.003\\0.003&0.12\end{pmatrix}=D_3,
\]
\[
F_1=\begin{pmatrix}0&0\\0&0\end{pmatrix}=G_1,\quad
F_2=\begin{pmatrix}0.1&0\\0&0.2\end{pmatrix}=G_2,\quad
R_1=\begin{pmatrix}0.0003&0\\0&0.0003\end{pmatrix}=R_2,
\]
\[
M_k(r(t))=M=\begin{pmatrix}5.5&0.001\\0.001&5.5\end{pmatrix},\quad r(t)=i\in S=\{1,2,3\},\ k=1,2,\ldots.
\tag{4.3}
\]

Two cases of the transition rate matrix are considered:

\[
\text{Case (1): }\Pi=\begin{pmatrix}-0.6&0.4&0.2\\0.2&-0.7&0.5\\0.5&0.3&-0.8\end{pmatrix},\qquad
\text{Case (2): }\Pi=\begin{pmatrix}-0.6&?&?\\0.2&?&?\\?&0.3&?\end{pmatrix}.
\tag{4.4}
\]

In Case (1), $S_{un}^i=\varnothing$ (the empty set), and hence $\tilde\alpha_i\sum_{j\in S_{un}^i}P_j=0$ and $\sum_{j\in S_{kn}^i}\pi_{ij}P_j=\sum_{j\in S}\pi_{ij}P_j$ for all $i\in S=\{1,2,3\}$.

Now we use the Matlab LMI toolbox to solve the LMIs (3.1)-(3.3) for Case (1); the result is tmin $=-0.1099<0$, with $\bar\alpha_1=9.2330$, $\underline\alpha_1=0.7039$, $\bar\alpha_2=9.2305$, $\underline\alpha_2=0.7014$, $\bar\alpha_3=9.2321$, $\underline\alpha_3=0.7030$,

\[
P_1=\begin{pmatrix}1.4624&0\\0&1.3531\end{pmatrix},\quad
P_2=\begin{pmatrix}1.4578&0\\0&1.3476\end{pmatrix},\quad
P_3=\begin{pmatrix}1.4599&0\\0&1.3522\end{pmatrix},
\]
\[
Q=\begin{pmatrix}9.2765&0\\0&9.2212\end{pmatrix},\quad
L_1=\begin{pmatrix}4.5455&0\\0&4.4596\end{pmatrix},\quad
L_2=\begin{pmatrix}4.5714&0\\0&4.5093\end{pmatrix}.
\]

Next, we verify that the above data $P_i$, $\bar\alpha_i$, $\underline\alpha_i$ and $Q$ satisfy conditions (C2) and (C3), respectively.

Indeed, by computing, we have $\lambda_{\min}\Theta_1=7.6913$, $\lambda_{\min}\Theta_2=7.3614$, $\lambda_{\min}\Theta_3=7.6864$, $\lambda_{\max}P_1=1.4624$, $\lambda_{\max}Q=9.2765$, $\lambda_{\max}P_2=1.4578$, $\lambda_{\max}P_3=1.4599$, and then $a=\min_{i\in S}\bigl\{\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}P_i},\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}Q}\bigr\}=0.7936$ and $b=\frac{\lambda_{\max}R_2}{\lambda_{\min}Q}\max_{i\in S}\bar\alpha_i=3.0038\times10^{-4}$. Thus $a>b\ge 0$, and hence (C2) holds.

Let $\delta=2.1215$, $\tau=7.75$, and $\inf_{k\in\mathbb Z}(t_k-t_{k-1})>\delta\tau$. Solving $\lambda=a-be^{\lambda\tau}$ gives $\lambda=0.7163$. Moreover, direct computation yields $a_j\equiv\max_{k,i}\frac{\lambda_{\max}(M_{ki}P_iM_{ki})}{\lambda_{\min}P_i}=32.7237$. Owing to $b_j\equiv 1$, we have $\rho=\max_j\{1,a_j+b_je^{\lambda\tau}\}=290.3023$. Thereby, direct computation gives $\delta^2\tau-\ln(\rho e^{\lambda\tau})=23.6587>0$ and $\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}=0.0337>0$.
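These scalar checks are easy to reproduce; the following sketch (ours) takes the values of $a$, $b$ and $a_j$ reported above as given.

```python
import numpy as np
from scipy.optimize import brentq

# Scalar checks of (C2)-(C3) for Case (1); inputs are the reported values.
a, b, tau, delta, a_j, b_j = 0.7936, 3.0038e-4, 7.75, 2.1215, 32.7237, 1.0

lam = brentq(lambda x: a - b * np.exp(x * tau) - x, 0.0, a)   # ~0.7163
rho = max(1.0, a_j + b_j * np.exp(lam * tau))                 # ~290.30
gap = delta**2 * tau - np.log(rho * np.exp(lam * tau))        # ~23.66 > 0
rate = lam - np.log(rho * np.exp(lam * tau)) / (delta * tau)  # ~0.0337 > 0
print(lam, rho, gap, rate, rate / 2)                          # rate/2 ~ 0.0169
```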

Therefore, it follows from Theorem 3.1 that the null solution of system (4.1) is stochastically exponentially stable with convergence rate $0.0337/2=0.0169$ (see Figures 1-3).

Figure 1: Computer simulations of the states $v_1(t,x)$ and $v_2(t,x)$.

Figure 2: Computer simulations of the state $v_1(t,x)$.

Figure 3: Computer simulations of the state $v_2(t,x)$.

In Case (2), it is obvious that $\tilde\alpha_1=0.6$, $\tilde\alpha_2=0.8$, $\tilde\alpha_3=0.7$. Solving the LMIs (3.1)-(3.3) for Case (2), one obtains tmin $=-0.1115<0$, with $\bar\alpha_1=9.3404$, $\underline\alpha_1=0.7464$, $\bar\alpha_2=9.3608$, $\underline\alpha_2=0.7668$, $\bar\alpha_3=9.3635$, $\underline\alpha_3=0.7695$,

\[
P_1=\begin{pmatrix}1.5536&0\\0&1.4320\end{pmatrix},\quad
P_2=\begin{pmatrix}1.5984&0\\0&1.4687\end{pmatrix},\quad
P_3=\begin{pmatrix}1.6025&0\\0&1.4754\end{pmatrix},
\]
\[
Q=\begin{pmatrix}9.3823&0\\0&9.3233\end{pmatrix},\quad
L_1=\begin{pmatrix}4.5780&0\\0&4.4899\end{pmatrix},\quad
L_2=\begin{pmatrix}4.6062&0\\0&4.5436\end{pmatrix}.
\]

Next, we verify that the above data $P_i$, $\bar\alpha_i$, $\underline\alpha_i$ and $Q$ satisfy conditions (C2) and (C3), respectively.

Indeed, direct computations give $\lambda_{\min}\Theta_1=7.7208$, $\lambda_{\min}\Theta_2=7.3554$, $\lambda_{\min}\Theta_3=7.7252$, $\lambda_{\max}P_1=1.5536$, $\lambda_{\max}Q=9.3823$, $\lambda_{\max}P_2=1.5984$, $\lambda_{\max}P_3=1.6025$, and then $a=\min_{i\in S}\bigl\{\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}P_i},\frac{\lambda_{\min}\Theta_i}{\lambda_{\max}Q}\bigr\}=0.7840$ and $b=\frac{\lambda_{\max}R_2}{\lambda_{\min}Q}\max_{i\in S}\bar\alpha_i=3.0129\times10^{-4}$, and hence $a>b\ge 0$. So condition (C2) in Theorem 3.1 holds.

Similarly, let $\delta=2.1215$, $\tau=7.75$, and $\inf_{k\in\mathbb Z}(t_k-t_{k-1})>\delta\tau$. Solving $\lambda=a-be^{\lambda\tau}$ gives $\lambda=0.7101$. Moreover, direct computation yields $a_j\equiv\max_{k,i}\frac{\lambda_{\max}(M_{ki}P_iM_{ki})}{\lambda_{\min}P_i}=32.9214$. Owing to $b_j\equiv 1$, we have $\rho=\max_j\{1,a_j+b_je^{\lambda\tau}\}=278.4160$. Thereby, direct computation gives $\delta^2\tau-\ln(\rho e^{\lambda\tau})=23.7485>0$ and $\lambda-\frac{\ln(\rho e^{\lambda\tau})}{\delta\tau}=0.0330>0$.
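For what it is worth, re-running the verification sketch given after Case (1) with the Case (2) scalars ($a=0.7840$, $b=3.0129\times10^{-4}$, $a_j=32.9214$) reproduces $\lambda\approx0.7101$, $\rho\approx278.42$, and the rate $0.0330$ up to rounding.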

Therefore, it follows from Theorem 3.1 that the null solution of system (4.1) is stochastically exponentially stable with the convergence rate 0.0330/2=0.0165.

Table 1 shows that the convergence rate decreases when the number of unknown elements increases.

Table 1 Allowable upper bound of τ and the convergence rate for Theorem 3.1 in Case (1) and Case (2)

                              Case (1)    Case (2)
  Allowable upper bound τ     7.75        7.75
  Convergence rate            0.0169      0.0165

Remark 4.1 Table 1 shows that the null solution of system (4.1) (or (2.5)) is stochastically globally exponentially stable in the mean square for the maximum allowable upper bound $\tau=7.75$. Hence, as pointed out in Remark 3.3, the approach developed in Theorem 3.1 is more effective and less conservative than some existing results ([3, Theorem 2.1], [29]).

5 Conclusions

In this paper, new LMI-based stochastic global exponential stability criteria are obtained for delayed impulsive Markovian jumping reaction-diffusion Cohen-Grossberg neural networks with partially unknown transition rates and nonlinear p-Laplace diffusion; their feasibility can be easily checked with the Matlab LMI toolbox. Moreover, a numerical example illustrates the effectiveness and the reduced conservatism of the proposed methods via a significant improvement in the allowable upper bounds of the time delays. For further work, we are considering how to make the nonlinear p-Laplace diffusion term play a positive role in the stability criteria, which remains an open and challenging problem.