1 Introduction

Stochastic delayed neural networks (SDNNs) with Markovian switching play an important role in science and engineering because of their many practical applications, including image processing, pattern recognition, associative memory, and optimization problems [1, 2]. Over the past several decades, the characteristics of SDNNs with Markovian switching, such as their various stability properties [3, 4], have received much attention from scholars across the fields of nonlinear science. Wang et al. [5] considered exponential stability for delayed recurrent neural networks with Markovian jumping parameters. Zhang et al. investigated stochastic stability for Markovian jumping genetic regulatory networks with mixed time delays [6]. Huang et al. investigated robust stability for stochastic delayed additive neural networks with Markovian switching [7]. Researchers have presented a number of sufficient conditions for the global asymptotic stability and exponential stability of SDNNs with Markovian switching [8–11]. As is well known, time delays, a source of instability and oscillations, appear in many aspects of neural networks, and they have recently received much attention [12–15]. The linear matrix inequality (LMI, for short) approach is one of the most extensively used tools in recent publications [16, 17].

In recent years, it has been found that the synchronization of coupled neural networks has potential applications in many fields, such as biology and engineering [18–21]. In coupled nonlinear dynamical systems, many neural networks may experience abrupt changes in structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. Synchronization may help to protect interconnected neurons from the influence of random perturbations that affect all neurons in the system. Therefore, from the neurophysiological as well as the theoretical point of view, it is important to investigate the impact of synchronization on SDNNs. Moreover, in adaptive synchronization of neural networks, the control law must be adapted or updated in real time, so adaptive synchronization has been used in real neural network control, such as parameter-estimation adaptive control and model-reference adaptive control. Some stochastic synchronization results have already been obtained. For example, in [22], an adaptive feedback controller is designed to achieve complete synchronization for unidirectionally coupled delayed neural networks with stochastic perturbation. In [23], via adaptive feedback control techniques with suitable parameter update laws, several sufficient conditions are derived to ensure lag synchronization for unknown delayed neural networks with or without noise perturbation. In [24], a class of chaotic neural networks is discussed, and a delay-independent sufficient condition for exponential synchronization is derived based on the Lyapunov stability method and the Halanay inequality lemma. A simple adaptive feedback scheme is used for the synchronization of neural networks with or without time-varying delay in [25]. A general model of an array of N linearly coupled delayed neural networks with Markovian jumping hybrid coupling is introduced in [26], where sufficient criteria are derived to ensure mean-square synchronization in an array of jump neural networks with mixed delays and hybrid coupling.

It should be pointed out that, to the best of our knowledge, adaptive almost surely asymptotic synchronization for SDNNs with Markovian switching has seldom been addressed, although it is of practical importance. Motivated by the above discussion, in this paper we analyze the adaptive almost surely asymptotic synchronization of SDNNs with Markovian switching. M-matrix-based criteria for determining whether SDNNs with Markovian switching achieve adaptive almost surely asymptotic synchronization are developed, an adaptive feedback controller is proposed, and a numerical simulation is given to show the validity of the developed results.

The rest of this paper is organized as follows: in Section 2, the problem is formulated and some preliminaries are given; in Section 3, a sufficient condition to ensure the adaptive almost surely asymptotically synchronization for the SDNNs with Markovian switching is derived; in Section 4, an example of numerical simulation is given to illustrate the validity of the results; Section 5 gives the conclusion of the paper.

2 Problem formulation and preliminaries

Throughout this paper, $\mathbb{E}$ stands for the mathematical expectation operator; $\|x\|_2$ denotes the vector norm defined by $\|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}$; 'T' represents the transpose of a matrix or a vector; and $I_n$ is the $n \times n$ identity matrix.

Let $\{r(t)\}_{t \ge 0}$ be a right-continuous Markov chain on the probability space, taking values in a finite state space $S = \{1, 2, \ldots, N\}$, with generator $\Gamma = (\gamma_{ij})_{N \times N}$ given by

$$P\{r(t+\delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\delta + o(\delta) & \text{if } i \ne j, \\ 1 + \gamma_{ii}\delta + o(\delta) & \text{if } i = j, \end{cases}$$

where $\delta > 0$ and $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while

$$\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}.$$

We denote $r(0) = r_0$.
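The switching signal above can be sampled directly from the generator: the chain holds in state $i$ for an exponential time with rate $-\gamma_{ii}$ and then jumps to $j \ne i$ with probability $\gamma_{ij}/(-\gamma_{ii})$. The following Python sketch is for illustration only; the two-state generator used in the demonstration is a hypothetical choice, not a quantity from this paper.

```python
import numpy as np

def simulate_markov_chain(gamma, r0, t_end, rng=None):
    """Sample a path of a right-continuous Markov chain with generator gamma.

    gamma[i][j] (i != j) is the transition rate from i to j; the diagonal
    satisfies gamma[i][i] = -sum_{j != i} gamma[i][j]. Returns the jump times
    and the state held from each jump time onward.
    """
    rng = rng or np.random.default_rng(0)
    gamma = np.asarray(gamma, dtype=float)
    times, states = [0.0], [r0]
    t, i = 0.0, r0
    while True:
        rate = -gamma[i, i]                 # total exit rate of state i
        if rate <= 0.0:                     # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)    # holding time ~ Exp(rate)
        if t >= t_end:
            break
        probs = gamma[i].copy()             # jump to j != i with prob gamma_ij/rate
        probs[i] = 0.0
        i = rng.choice(len(probs), p=probs / rate)
        times.append(t)
        states.append(i)
    return times, states

# A hypothetical two-state generator: each row sums to zero.
Gamma = [[-1.2, 1.2], [0.5, -0.5]]
times, states = simulate_markov_chain(Gamma, r0=0, t_end=10.0)
```

With two states the sampled path simply alternates between 0 and 1 at the exponential jump times.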

In this paper, we consider the neural network, called the drive system, represented in the following compact form:

$$dx(t) = \bigl[-C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t-\tau(t))) + D(r(t))\bigr]\,dt,$$
(1)

where $t \ge 0$ is the time; $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ is the state vector associated with the $n$ neurons; $f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t)))^T \in \mathbb{R}^n$ denotes the activation functions of the neurons; and $\tau(t)$ is the transmission delay satisfying $0 < \tau(t) \le \bar{\tau}$ and $\dot{\tau}(t) \le \hat{\tau} < 1$, where $\bar{\tau}, \hat{\tau}$ are constants. For convenience, for $t \ge 0$ we write $r(t) = i$ and $A(r(t)) = A_i$, $B(r(t)) = B_i$, $C(r(t)) = C_i$, $D(r(t)) = D_i$, respectively. Furthermore, in model (1), for each $i \in S$, $C_i = \operatorname{diag}\{c_1^i, c_2^i, \ldots, c_n^i\}$ (i.e., $C_i$ is a diagonal matrix) has positive and unknown entries $c_k^i > 0$; $A_i = (a_{jk}^i)_{n \times n}$ and $B_i = (b_{jk}^i)_{n \times n}$ are the connection weight and the delayed connection weight matrices, respectively; and $D_i = (d_1^i, d_2^i, \ldots, d_n^i)^T \in \mathbb{R}^n$ is the constant external input vector.

For the drive system (1), a response system is constructed as follows:

$$\begin{aligned} dy(t) = {}& \bigl[-C(r(t))y(t) + A(r(t))f(y(t)) + B(r(t))f(y(t-\tau(t))) + D(r(t)) + U(t)\bigr]\,dt \\ & + \sigma\bigl(t, r(t), y(t)-x(t), y(t-\tau(t))-x(t-\tau(t))\bigr)\,d\omega(t), \end{aligned}$$
(2)

where y(t) is the state vector of the response system (2), U(t)= ( u 1 ( t ) , u 2 ( t ) , , u n ( t ) ) T R n is a control input vector with the form of

$$U(t) = K(t)\bigl(y(t)-x(t)\bigr) = \operatorname{diag}\{k_1(t), k_2(t), \ldots, k_n(t)\}\bigl(y(t)-x(t)\bigr),$$
(3)

$\omega(t) = (\omega_1(t), \omega_2(t), \ldots, \omega_n(t))^T$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$ (i.e., $\mathcal{F}_t = \sigma\{\omega(s): 0 \le s \le t\}$ is a $\sigma$-algebra) and is independent of the Markov process $\{r(t)\}_{t \ge 0}$, and $\sigma: \mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$ is the noise intensity matrix, which can be regarded as a result of external random fluctuations and other probabilistic causes.

Let $e(t) = y(t) - x(t)$. For simplicity, we write $e(t-\tau(t)) = e_\tau(t)$ and $f(x(t)+e(t)) - f(x(t)) = g(e(t))$. From the drive system (1) and the response system (2), the error system can be represented as follows:

$$de(t) = \bigl[-C(r(t))e(t) + A(r(t))g(e(t)) + B(r(t))g(e_\tau(t)) + U(t)\bigr]\,dt + \sigma\bigl(t, r(t), e(t), e_\tau(t)\bigr)\,d\omega(t).$$
(4)

The initial condition associated with system (4) is given in the following form:

$$e(s) = \xi(s), \quad s \in [-\bar{\tau}, 0]$$

for any $\xi \in L^2_{\mathcal{F}_0}([-\bar{\tau}, 0]; \mathbb{R}^n)$, where $L^2_{\mathcal{F}_0}([-\bar{\tau}, 0]; \mathbb{R}^n)$ is the family of all $\mathcal{F}_0$-measurable $C([-\bar{\tau}, 0]; \mathbb{R}^n)$-valued random variables satisfying $\sup_{-\bar{\tau} \le s \le 0} \mathbb{E}|\xi(s)|^2 < \infty$, and $C([-\bar{\tau}, 0]; \mathbb{R}^n)$ denotes the family of all continuous $\mathbb{R}^n$-valued functions $\xi(s)$ on $[-\bar{\tau}, 0]$ with the norm $\|\xi\| = \sup_{-\bar{\tau} \le s \le 0} |\xi(s)|$.

To obtain the main result, we need the following assumptions.

Assumption 1 The activation functions of the neurons f(x(t)) satisfy the Lipschitz condition. That is, there exists a constant L>0 such that

$$\bigl|f(u) - f(v)\bigr| \le L|u - v|, \quad \forall u, v \in \mathbb{R}^n.$$

Assumption 2 The noise intensity matrix $\sigma(\cdot,\cdot,\cdot,\cdot)$ satisfies the linear growth condition. That is, there exist two positive constants $H_1$ and $H_2$ such that

$$\operatorname{trace}\bigl(\sigma^T(t, r(t), u(t), v(t))\,\sigma(t, r(t), u(t), v(t))\bigr) \le H_1|u(t)|^2 + H_2|v(t)|^2$$

for all $(t, r(t), u(t), v(t)) \in \mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n$.

Assumption 3 In the drive system (1)

$$f(0) \equiv 0, \qquad \sigma(t_0, r_0, 0, 0) \equiv 0.$$

Remark 1 Under Assumptions 1∼3, the error system (4) admits an equilibrium point (or trivial solution) $e(t,\xi)$, $t \ge 0$.

The following stability concept and synchronization concept are needed in this paper.

Definition 1 The trivial solution $e(t,\xi)$ of the error system (4) is said to be almost surely asymptotically stable if

$$P\Bigl(\lim_{t \to \infty} \bigl|e(t;\xi)\bigr| = 0\Bigr) = 1$$

for any $\xi \in L^2_{\mathcal{F}_0}([-\bar{\tau}, 0]; \mathbb{R}^n)$.

The response system (2) and the drive system (1) are said to be almost surely asymptotically synchronized if the error system (4) is almost surely asymptotically stable.

The main purpose of the rest of this paper is to establish a criterion of the adaptive almost surely asymptotically synchronization of system (1) and response system (2) by using the adaptive feedback control and M-matrix techniques.

To this end, we introduce some concepts and lemmas which will be frequently used in the proofs of our main results.

Definition 2 [27]

A square matrix $M = (m_{ij})_{n \times n}$ is called a nonsingular M-matrix if $M$ can be expressed in the form $M = sI_n - G$ with some $G \ge 0$ (i.e., each element of $G$ is nonnegative) and $s > \rho(G)$, where $\rho(G)$ is the spectral radius of $G$.

Lemma 1 [8]

If $M = (m_{ij})_{n \times n} \in \mathbb{R}^{n \times n}$ with $m_{ij} \le 0$ ($i \ne j$), then the following statements are equivalent:

(1) $M$ is a nonsingular M-matrix.

(2) Every real eigenvalue of $M$ is positive.

(3) $M$ is positive stable. That is, $M^{-1}$ exists and $M^{-1} > 0$ (i.e., $M^{-1} \ge 0$ and at least one element of $M^{-1}$ is positive).
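Definition 2 and Lemma 1 can be checked numerically. The sketch below is illustrative only (not part of the paper): it tests the M-matrix property through the decomposition $M = sI_n - G$ of Definition 2, taking $s$ just above the largest diagonal entry of $M$.

```python
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-10):
    """Numerically check Definition 2: M = s*I_n - G with G >= 0 entrywise
    and s > rho(G), where rho(G) is the spectral radius of G."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    s = M.diagonal().max() + 1.0      # any s above max_i m_ii keeps diag(G) > 0
    G = s * np.eye(n) - M
    if (G < -tol).any():              # off-diagonal entries of M must be <= 0
        return False
    return bool(s > max(abs(np.linalg.eigvals(G))) + tol)   # s > rho(G)

# By Lemma 1, for a matrix with non-positive off-diagonals this is equivalent
# to every real eigenvalue of M being positive.
M_good = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3
M_bad = np.array([[1.0, -2.0], [-2.0, 1.0]])    # eigenvalues -1 and 3
```

Here `M_good` passes (its real eigenvalues are positive) while `M_bad` fails, matching statement (2) of Lemma 1.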

Lemma 2 [5] Let $x, y \in \mathbb{R}^n$. Then

$$x^T y + y^T x \le \epsilon x^T x + \epsilon^{-1} y^T y$$

for any $\epsilon > 0$.

Consider an n-dimensional stochastic delayed differential equation (SDDE, for short) with Markovian switching

$$dx(t) = f\bigl(t, r(t), x(t), x_\tau(t)\bigr)\,dt + g\bigl(t, r(t), x(t), x_\tau(t)\bigr)\,d\omega(t)$$
(5)

on t[0,) with the initial data given by

$$\bigl\{x(\theta): -\bar{\tau} \le \theta \le 0\bigr\} = \xi \in L^2_{\mathcal{F}_0}\bigl([-\bar{\tau}, 0]; \mathbb{R}^n\bigr).$$

If $V \in C^{2,1}(\mathbb{R}_+ \times S \times \mathbb{R}^n; \mathbb{R}_+)$, define an operator $\mathcal{L}$ from $\mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n$ to $\mathbb{R}$ by

$$\begin{aligned} \mathcal{L}V(t,i,x,x_\tau) = {}& V_t(t,i,x) + V_x(t,i,x)f(t,i,x,x_\tau) \\ & + \tfrac{1}{2}\operatorname{trace}\bigl(g^T(t,i,x,x_\tau)V_{xx}(t,i,x)g(t,i,x,x_\tau)\bigr) + \sum_{j=1}^N \gamma_{ij}V(t,j,x), \end{aligned}$$

where

$$V_t(t,i,x) = \frac{\partial V(t,i,x)}{\partial t}, \qquad V_x(t,i,x) = \Bigl(\frac{\partial V(t,i,x)}{\partial x_1}, \frac{\partial V(t,i,x)}{\partial x_2}, \ldots, \frac{\partial V(t,i,x)}{\partial x_n}\Bigr), \qquad V_{xx}(t,i,x) = \Bigl(\frac{\partial^2 V(t,i,x)}{\partial x_j\,\partial x_k}\Bigr)_{n \times n}.$$

For the SDDE with Markovian switching, we have the Dynkin formula as follows.

Lemma 3 (Dynkin formula) [8, 28]

Let $V \in C^{2,1}(\mathbb{R}_+ \times S \times \mathbb{R}^n; \mathbb{R}_+)$ and let $\tau_1, \tau_2$ be bounded stopping times such that $0 \le \tau_1 \le \tau_2$ a.s. (i.e., almost surely). If $V(t,r(t),x(t))$ and $\mathcal{L}V(t,r(t),x(t),x_\tau(t))$ are bounded on $t \in [\tau_1, \tau_2]$ with probability 1, then

$$\mathbb{E}V\bigl(\tau_2, r(\tau_2), x(\tau_2)\bigr) = \mathbb{E}V\bigl(\tau_1, r(\tau_1), x(\tau_1)\bigr) + \mathbb{E}\int_{\tau_1}^{\tau_2} \mathcal{L}V\bigl(s, r(s), x(s), x_\tau(s)\bigr)\,ds.$$

For the SDDE with Markovian switching again, the following hypothesis is imposed on the coefficients f and g.

Assumption 4 Both f and g satisfy the local Lipschitz condition. That is, for each $h > 0$, there is an $L_h > 0$ such that

$$\bigl|f(t,i,x,y) - f(t,i,\bar{x},\bar{y})\bigr| + \bigl|g(t,i,x,y) - g(t,i,\bar{x},\bar{y})\bigr| \le L_h\bigl(|x-\bar{x}| + |y-\bar{y}|\bigr)$$

for all $(t,i) \in \mathbb{R}_+ \times S$ and those $x, y, \bar{x}, \bar{y} \in \mathbb{R}^n$ with $|x| \vee |y| \vee |\bar{x}| \vee |\bar{y}| \le h$. Moreover,

$$\sup\bigl\{|f(t,i,0,0)| \vee |g(t,i,0,0)| : t \ge 0, i \in S\bigr\} < \infty.$$

Now we cite a useful result given by Yuan and Mao [29].

Lemma 4 [29]

Let Assumption 4 hold. Assume that there exist functions $V \in C^{2,1}(\mathbb{R}_+ \times S \times \mathbb{R}^n; \mathbb{R}_+)$, $\gamma \in L^1(\mathbb{R}_+; \mathbb{R}_+)$ and $w_1, w_2 \in C(\mathbb{R}^n; \mathbb{R}_+)$ such that

$$\mathcal{L}V(t,i,x,y) \le \gamma(t) - w_1(x) + w_2(y), \quad \forall (t,i,x,y) \in \mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n,$$
(6)

$$w_1(0) = w_2(0) = 0, \qquad w_1(x) > w_2(x), \quad \forall x \ne 0$$
(7)

and

$$\lim_{|x| \to \infty}\ \inf_{0 \le t < \infty,\, i \in S} V(t,i,x) = \infty.$$
(8)

Then the solution of Eq. (5) is almost surely asymptotically stable.

3 Main results

In this section, we give a criterion of the adaptive almost surely asymptotically synchronization for the drive system (1) and the response system (2).

Theorem 1 Assume that $\mathcal{M} := -(\eta I_N + \Gamma)$ is a nonsingular M-matrix, where

$$\eta = -2\gamma + \alpha + L^2 + \beta + H_1, \qquad \gamma = \min_{i \in S}\min_{1 \le j \le n} c_j^i, \qquad \alpha = \max_{i \in S}\bigl(\rho(A_i)\bigr)^2, \qquad \beta = \max_{i \in S}\bigl(\rho(B_i)\bigr)^2.$$

Let $m > 0$ and $\vec{m} := (m, m, \ldots, m)^T$ (in this case, $(q_1, q_2, \ldots, q_N)^T := \mathcal{M}^{-1}\vec{m} \gg 0$, i.e., all elements of $\mathcal{M}^{-1}\vec{m}$ are positive by Lemma 1). Assume also that

$$(L^2 + H_2)\bar{q} < -\Bigl(\eta q_i + \sum_{k=1}^N \gamma_{ik} q_k\Bigr), \quad \forall i \in S,$$
(9)

where $\bar{q} = \max_{i \in S} q_i$.

Under Assumptions 1∼3, the noise-perturbed response system (2) can be adaptively almost surely asymptotically synchronized with the delayed neural network (1) if the update law of the feedback control gain $K(t)$ of the controller (3) is chosen as

$$\dot{k}_j = -q_i \alpha_j e_j^2,$$
(10)

where $\alpha_j > 0$ ($j = 1, 2, \ldots, n$) are arbitrary constants.

Proof Under Assumptions 1∼3, it can be seen that the error system (4) satisfies Assumption 4.

For each iS, choose a nonnegative function as follows:

$$V(t,i,e) = q_i|e|^2 + \sum_{j=1}^n \frac{1}{\alpha_j}k_j^2.$$

Then it is obvious that condition (8) holds.

Computing $\mathcal{L}V(t,i,e,e_\tau)$ along the trajectory of the error system (4), and using (10), one can obtain that

$$\begin{aligned} \mathcal{L}V(t,i,e,e_\tau) = {}& V_t + V_e\bigl[-C_i e + A_i g(e) + B_i g(e_\tau) + U(t)\bigr] \\ & + \tfrac{1}{2}\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)V_{ee}\,\sigma(t,i,e,e_\tau)\bigr) + \sum_{k=1}^N \gamma_{ik}V(t,k,e) \\ = {}& 2\sum_{j=1}^n \frac{1}{\alpha_j}k_j\dot{k}_j + 2q_i e^T\bigl[-C_i e + A_i g(e) + B_i g(e_\tau) + U(t)\bigr] \\ & + q_i\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\sigma(t,i,e,e_\tau)\bigr) + \sum_{k=1}^N \gamma_{ik}q_k|e|^2 \\ = {}& 2q_i e^T\bigl[-C_i e + A_i g(e) + B_i g(e_\tau)\bigr] + q_i\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\sigma(t,i,e,e_\tau)\bigr) + \sum_{k=1}^N \gamma_{ik}q_k|e|^2. \end{aligned}$$
(11)

Now, using Assumptions 1∼2 together with Lemma 2 yields

$$e^T C_i e \ge \gamma|e|^2,$$
(12)
$$2e^T A_i g(e) \le e^T A_i (A_i)^T e + g^T(e)g(e) \le (\alpha + L^2)|e|^2,$$
(13)
$$2e^T B_i g(e_\tau) \le e^T B_i (B_i)^T e + g^T(e_\tau)g(e_\tau) \le \beta|e|^2 + L^2|e_\tau|^2$$
(14)

and

$$q_i\operatorname{trace}\bigl(\sigma^T(t,i,e,e_\tau)\sigma(t,i,e,e_\tau)\bigr) \le q_i\bigl(H_1|e|^2 + H_2|e_\tau|^2\bigr).$$
(15)

Substituting (12)∼(15) into (11) yields

$$\begin{aligned} \mathcal{L}V(t,i,e,e_\tau) &\le \Bigl(\eta q_i + \sum_{k=1}^N \gamma_{ik}q_k\Bigr)|e|^2 + (L^2 + H_2)q_i|e_\tau|^2 \\ &\le -m|e|^2 + (L^2 + H_2)\bar{q}|e_\tau|^2, \end{aligned}$$
(16)

where $m = -(\eta q_i + \sum_{k=1}^N \gamma_{ik}q_k)$ by $(q_1, q_2, \ldots, q_N)^T = \mathcal{M}^{-1}\vec{m}$.

Let $w_1(e) = m|e|^2$ and $w_2(e_\tau) = (L^2 + H_2)\bar{q}|e_\tau|^2$. Then inequalities (6) and (7) hold by (9), with $\gamma(t) \equiv 0$ in (6). By Lemma 4, the error system (4) is almost surely asymptotically stable, and hence the noise-perturbed response system (2) can be adaptively almost surely asymptotically synchronized with the drive delayed neural network (1). This completes the proof. □

Remark 2 In Theorem 1, condition (9) for the adaptive almost surely asymptotic synchronization of the SDNN with Markovian switching, obtained by using the M-matrix and Lyapunov functional methods, is generator-dependent and very different from conditions derived by other methods such as the linear matrix inequality approach. The condition is easy to check once the drive system and the response system are given and the positive constant m is suitably chosen. To the best of the authors' knowledge, this method is the first development of its kind in the research area of synchronization for neural networks.
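As the remark above notes, the criterion is easy to check numerically. The following Python sketch uses hypothetical scalar values for $\eta$, $L$, $H_2$ and the generator (none are taken from this paper's example): it forms $\mathcal{M} = -(\eta I_N + \Gamma)$, verifies the M-matrix property via Lemma 1(2), solves $q = \mathcal{M}^{-1}\vec{m}$, and tests condition (9).

```python
import numpy as np

def check_theorem1(eta, Gamma, L, H2, m=1.0):
    """Form M = -(eta*I_N + Gamma) and test the criterion of Theorem 1.

    M has off-diagonals -gamma_ik <= 0, so by Lemma 1 it is a nonsingular
    M-matrix iff its eigenvalues have positive real parts. Then
    q = M^{-1} m_vec and condition (9) reads
    (L^2 + H2) * max_i q_i < -(eta*q_i + (Gamma q)_i) for all i.
    """
    Gamma = np.asarray(Gamma, dtype=float)
    N = Gamma.shape[0]
    M = -(eta * np.eye(N) + Gamma)
    if np.linalg.eigvals(M).real.min() <= 0:
        return False, None
    q = np.linalg.solve(M, np.full(N, m))   # q >> 0 by Lemma 1
    lhs = (L**2 + H2) * q.max()
    rhs = -(eta * q + Gamma @ q)            # equals m for every i by construction
    return bool((lhs < rhs).all()), q

# Hypothetical values (eta would come from -2*gamma + alpha + L^2 + beta + H1):
Gamma = np.array([[-1.2, 1.2], [0.5, -0.5]])
ok, q = check_theorem1(eta=-3.0, Gamma=Gamma, L=1.0, H2=0.16)
```

Since the rows of $\Gamma$ sum to zero, each row of $\mathcal{M}$ sums to $-\eta$, so here $q = (1/3, 1/3)^T$ exactly and the right-hand side of (9) equals $m$ for every mode.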

Now, we are in a position to consider two special cases of the drive system (1) and the response system (2).

Special case 1 The Markovian jumping parameters are removed from the neural networks (1) and the response system (2). In this case, N=1 and the drive system, the response system and the error system can be represented, respectively, as follows:

$$dx(t) = \bigl[-Cx(t) + Af(x(t)) + Bf(x(t-\tau(t))) + D\bigr]\,dt,$$
(17)
$$\begin{aligned} dy(t) = {}& \bigl[-Cy(t) + Af(y(t)) + Bf(y(t-\tau(t))) + D + U(t)\bigr]\,dt \\ & + \sigma\bigl(t, y(t)-x(t), y(t-\tau(t))-x(t-\tau(t))\bigr)\,d\omega(t) \end{aligned}$$
(18)

and

$$de(t) = \bigl[-Ce(t) + Ag(e(t)) + Bg(e_\tau(t)) + U(t)\bigr]\,dt + \sigma\bigl(t, e(t), e_\tau(t)\bigr)\,d\omega(t).$$
(19)

For this case, one can get the following result that is analogous to Theorem 1.

Corollary 1 Let

$$\eta = -2\gamma + \alpha + L^2 + \beta + H_1, \qquad \gamma = \min_{1 \le j \le n} c_j, \qquad \alpha = \bigl(\rho(A)\bigr)^2, \qquad \beta = \bigl(\rho(B)\bigr)^2.$$

Assume that

$$\eta < 0$$

and

$$L^2 + H_2 < -\eta.$$
(20)

Under Assumptions 1∼3, the noise-perturbed response system (18) can be adaptive almost surely asymptotically synchronized with the delayed neural network (17) if the update law of the feedback gain K(t) of the controller (3) is chosen as

$$\dot{k}_j = -\alpha_j e_j^2,$$
(21)

where $\alpha_j > 0$ ($j = 1, 2, \ldots, n$) are arbitrary constants.

Proof Choose the following nonnegative function:

$$V(t,e) = |e|^2 + \sum_{j=1}^n \frac{1}{\alpha_j}k_j^2.$$

The rest of the proof is similar to that of Theorem 1, and hence omitted. □
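For the single-mode case, the criterion collapses to two scalar inequalities, which can be evaluated directly. The sketch below uses hypothetical parameters chosen purely for illustration (they are not from this paper).

```python
import numpy as np

def check_corollary1(c_diag, A, B, L, H1, H2):
    """Evaluate eta = -2*gamma + alpha + L^2 + beta + H1 and test the two
    conditions of Corollary 1: eta < 0 and L^2 + H2 < -eta."""
    gamma = min(c_diag)                                            # min_j c_j
    alpha = max(abs(np.linalg.eigvals(np.asarray(A, float)))) ** 2  # rho(A)^2
    beta = max(abs(np.linalg.eigvals(np.asarray(B, float)))) ** 2   # rho(B)^2
    eta = -2.0 * gamma + alpha + L**2 + beta + H1
    return bool(eta < 0 and L**2 + H2 < -eta)

# Hypothetical single-mode parameters (illustration only):
ok = check_corollary1(c_diag=[3.0, 3.5],
                      A=[[0.5, 0.1], [0.0, 0.5]],
                      B=[[0.4, 0.0], [0.1, 0.4]],
                      L=1.0, H1=0.2, H2=0.1)
```

For these values $\eta = -6 + 0.25 + 1 + 0.16 + 0.2 = -4.39 < 0$ and $L^2 + H_2 = 1.1 < 4.39$, so the check passes.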

Special case 2 The noise-perturbation is removed from the response system (2), which yields the noiseless response system

$$dy(t) = \bigl[-\hat{C}(r(t))y(t) + \hat{A}(r(t))f(y(t)) + \hat{B}(r(t))f(y(t-\tau(t))) + D(r(t)) + U(t)\bigr]\,dt$$
(22)

and the error system

$$de(t) = \bigl[-C(r(t))e(t) + A(r(t))g(e(t)) + B(r(t))g(e_\tau(t)) + U(t)\bigr]\,dt,$$
(23)

respectively.

In this case, one can get the following results.

Corollary 2 Assume that $\mathcal{M} := -(\eta I_N + \Gamma)$ is a nonsingular M-matrix, where

$$\eta = -2\gamma + \alpha + L^2 + \beta.$$

Let $m > 0$ and $\vec{m} := (m, m, \ldots, m)^T$ (in this case, $(q_1, q_2, \ldots, q_N)^T := \mathcal{M}^{-1}\vec{m} \gg 0$ by Lemma 1). Assume also that

$$L^2\bar{q} < -\Bigl(\eta q_i + \sum_{k=1}^N \gamma_{ik} q_k\Bigr), \quad \forall i \in S,$$
(24)

where $\bar{q} = \max_{i \in S} q_i$.

Under Assumptions 1∼3, the noiseless response system (22) can be adaptively almost surely asymptotically synchronized with the unknown drive delayed neural network (1) if the update law of the feedback gain $K(t)$ of the controller (3) is chosen as

$$\dot{k}_j = -q_i \alpha_j e_j^2,$$
(25)

where α j >0 are arbitrary constants.

Proof For each iS, choose a nonnegative function as follows:

$$V(t,i,e) = q_i|e|^2 + \sum_{j=1}^n \frac{1}{\alpha_j}k_j^2.$$

The rest of the proof is similar to that of Theorem 1, and hence omitted. □

4 Numerical example

In this section, an illustrative example is given to support our main results.

Example 1 Consider a delayed neural network (1), and its response system (2) with Markovian switching and the following network parameters:

$$C_1 = \begin{bmatrix} 2 & 0 \\ 0 & 2.4 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 1.5 & 0 \\ 0 & 1 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 3.2 & 1.5 \\ 2.7 & 3.2 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 2.1 & 0.6 \\ 0.8 & 3.2 \end{bmatrix},$$

$$B_1 = \begin{bmatrix} 2.7 & 3.1 \\ 0 & 2.3 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 1.4 & 2.1 \\ 0.3 & 1.5 \end{bmatrix}, \quad D_1 = \begin{bmatrix} 0.4 \\ 0.5 \end{bmatrix}, \quad D_2 = \begin{bmatrix} 0.4 \\ 0.6 \end{bmatrix}, \quad \Gamma = \begin{bmatrix} -1.2 & 1.2 \\ 0.5 & -0.5 \end{bmatrix},$$

$$\sigma\bigl(t, e(t), e(t-\tau), 1\bigr) = \bigl(0.4e_1(t-\tau),\ 0.5e_2(t)\bigr)^T, \qquad \sigma\bigl(t, e(t), e(t-\tau), 2\bigr) = \bigl(0.5e_1(t),\ 0.3e_2(t-\tau)\bigr)^T,$$

$$f(x(t)) = g(x(t)) = \tanh(x(t)), \qquad \tau = 0.12, \qquad L = 1.$$

It can be checked that Assumptions 1∼3 and inequality (9) are satisfied and that the matrix $\mathcal{M}$ is a nonsingular M-matrix. So, by Theorem 1, the noise-perturbed response system (2) can be adaptively almost surely asymptotically synchronized with the drive delayed neural network (1). The simulation results are given in Figures 1 and 2. Figure 1 shows that the state responses $e_1(t)$ and $e_2(t)$ of the error system converge to zero. Figure 2 shows the dynamic curves of the feedback gains $k_1$ and $k_2$. From the simulations, it can be seen that the stochastic delayed neural networks with Markovian switching achieve adaptive almost surely asymptotic synchronization.
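For the noise intensities printed above, one admissible choice of the constants in Assumption 2 is $H_1 = 0.5^2 = 0.25$ and $H_2 = 0.4^2 = 0.16$, since $\operatorname{trace}(\sigma^T\sigma) = |\sigma|^2$ when $\sigma$ is a single column. This can be spot-checked numerically with the following illustrative sketch (the sampling loop is ours, not part of the paper).

```python
import numpy as np

def sigma(t, e, e_tau, mode):
    """The example's noise intensity vector, with the printed coefficients."""
    if mode == 1:
        return np.array([0.4 * e_tau[0], 0.5 * e[1]])
    return np.array([0.5 * e[0], 0.3 * e_tau[1]])

# One admissible pair for Assumption 2: H1 = 0.5**2, H2 = 0.4**2.
H1, H2 = 0.25, 0.16

# Spot-check the linear growth bound on random samples for both modes.
rng = np.random.default_rng(0)
for _ in range(1000):
    e, e_tau = rng.normal(size=2), rng.normal(size=2)
    for mode in (1, 2):
        s = sigma(0.0, e, e_tau, mode)
        assert s @ s <= H1 * (e @ e) + H2 * (e_tau @ e_tau) + 1e-12
```

Mode 1 contributes $0.16\,e_1^2(t-\tau) + 0.25\,e_2^2(t)$ and mode 2 contributes $0.25\,e_1^2(t) + 0.09\,e_2^2(t-\tau)$, both of which sit below $H_1|e|^2 + H_2|e_\tau|^2$ termwise.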

Figure 1 The response curves of the state variables $e_1(t)$ and $e_2(t)$ of the error system.

Figure 2 The dynamic curves of the feedback gains $k_1$ and $k_2$.
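A simulation in the spirit of Figures 1 and 2 can be sketched with a simple Euler-Maruyama scheme for the error system (4) under the adaptive law $\dot{k}_j = -q_i\alpha_j e_j^2$. All matrices and constants below are hypothetical placeholders (they are not the values of Example 1), and `q` stands in for the vector $\mathcal{M}^{-1}\vec{m}$ of Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-neuron, two-mode parameters (placeholders, not Example 1).
C = [np.diag([2.0, 2.4]), np.diag([1.5, 1.0])]
A = [np.array([[0.3, -0.2], [0.1, 0.3]]), np.array([[0.2, 0.1], [-0.1, 0.2]])]
B = [np.array([[0.2, -0.1], [0.0, 0.2]]), np.array([[0.1, 0.2], [0.1, 0.1]])]
Gamma = np.array([[-1.2, 1.2], [0.5, -0.5]])
q = np.array([0.5, 0.5])       # stands in for M^{-1} m_vec of Theorem 1
alpha = np.array([1.0, 1.0])   # the arbitrary positive constants of the law
tau, dt, T = 0.12, 1e-3, 20.0
g = np.tanh                    # g(0) = 0 and Lipschitz with L = 1

d = int(round(tau / dt))                    # delay measured in steps
e_hist = [np.array([0.5, -0.4])] * (d + 1)  # constant initial segment xi
k = np.zeros(2)                             # adaptive gains, k_j(0) = 0
mode = 0                                    # current Markov state r(t)

for _ in range(int(round(T / dt))):
    e, e_tau = e_hist[-1], e_hist[-1 - d]
    # Euler-Maruyama step of the error system with U(t) = diag(k) e
    drift = -C[mode] @ e + A[mode] @ g(e) + B[mode] @ g(e_tau) + k * e
    diffusion = 0.1 * np.array([e_tau[0], e[1]])     # a linear-growth sigma
    e_hist.append(e + drift * dt + diffusion * rng.normal(size=2) * np.sqrt(dt))
    k += -q[mode] * alpha * e**2 * dt   # adaptive law: k_j' = -q_i alpha_j e_j^2
    if rng.random() < -Gamma[mode, mode] * dt:       # approximate mode switching
        mode = 1 - mode
```

With these placeholder values the error norm decays toward zero while the gains decrease monotonically and settle at negative constants, qualitatively matching the behavior described for Figures 1 and 2.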

5 Conclusions

In this paper, we have proposed the concept of adaptive almost surely asymptotic synchronization for stochastic delayed neural networks with Markovian switching. Making use of the M-matrix and Lyapunov functional methods, we have obtained a sufficient condition under which the response stochastic delayed neural network with Markovian switching can be adaptively almost surely asymptotically synchronized with the drive delayed neural network with Markovian switching. The method used to obtain this sufficient condition differs from the linear matrix inequality technique. The condition obtained in this paper depends on the generator of the Markovian jumping model and can be easily checked. Simulation results are provided to demonstrate the effectiveness of our theoretical results and analytical tools.