1 Introduction

In recent years, complex networks have received a great deal of research attention since the pioneering work of Watts and Strogatz [1]. The main reason is two-fold: first, complex networks can be found almost everywhere in the real world, such as the Internet, the WWW, the World Trade Web, genetic networks, and social networks; second, the dynamical behaviors of complex networks have found numerous applications in fields such as physics and technology [2–4]. A complex network is a set of interconnected nodes, in which each node is a basic unit with specific contents or dynamics. Among all dynamical behaviors of complex networks, synchronization is one of the most interesting topics and has been extensively investigated [5, 6]. Synchronization phenomena are common and important in real-world networks, for example synchronization on the Internet, synchronized transfer of digital or analog signals in communication networks, and synchronization in biological neural networks. Hence, synchronization analysis of complex networks is important both in theory and in application [7, 8].

On the one hand, actual systems may experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. In general, such systems can be modeled by Markov chains [9–11]. For example, mean-square exponential synchronization of Markovian switching stochastic complex networks with time-varying delays via pinning control was studied in [12], and synchronization of Markovian jumping stochastic complex networks with distributed time delays and probabilistic interval discrete time-varying delays was considered in [13]. Besides these, other strategies, which usually include data-driven approaches, support vector machines, and multivariate statistical methods, can also be used to handle such systems [14–18].

On the other hand, because of the limited speed at which signals travel through links and the frequently delayed couplings in complex networks, gene regulatory networks, static networks, and multi-agent networks, time delays often occur [19–22]. Therefore, synchronization problems in neural networks with mixed time delays have recently been studied extensively [23–28]. Although some literature [29–33] investigates synchronization issues of complex networks, to the best of our knowledge, global synchronization of Markovian switching complex networks (MSCNs) via the delay-partition approach has so far received little attention.

Inspired by the above discussions and by the ideas of [34, 35], in this paper we utilize the delay-partition approach to handle the mixed time-varying delays of MSCNs so that less conservative conditions for global synchronization can be achieved. Firstly, a new model for a class of MSCNs with mixed time-varying delays is proposed. Secondly, by using a novel Lyapunov-Krasovskii functional, stochastic analysis techniques, and a delay-partition approach, sufficient synchronization criteria are derived. Finally, two numerical examples are used to demonstrate the usefulness of the derived results. The main contributions of this paper are as follows:

  1. Model aspect: a new model for a class of MSCNs with mixed time-varying delays is proposed.

  2. A novel delay-partition approach is developed to solve the global synchronization problem for this new class of MSCNs with mixed time-varying delays, which makes the obtained conditions less conservative.

Notations: Throughout this paper, the following mathematical notations will be used. R^n denotes the n-dimensional Euclidean space and R^{n×m} is the set of n×m real matrices. The superscript T denotes matrix transposition. I_n ∈ R^{n×n} is the n-dimensional identity matrix. X − Y ≥ 0, where X, Y ∈ R^{n×n}, means that the matrix X − Y is real positive semi-definite. For symmetric block matrices or long matrix expressions, an asterisk ⋆ represents a term that is induced by symmetry. diag{⋯} stands for a block-diagonal matrix. The Kronecker product of matrices A ∈ R^{m×n} and B ∈ R^{M×N} is a matrix in R^{mM×nN}, denoted A⊗B. Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). E[x] denotes the expectation of the random variable x. If the dimensions of matrices are not explicitly indicated, they are assumed to be compatible with the algebraic operations involved.

2 Problem formulation and preliminaries

In this section, the problem formulation and preliminaries are briefly introduced.

Let {r(t), t≥0} be a right-continuous Markov chain on the probability space (Ω, F, {F_t}_{t≥0}, P) taking values in a finite state space S = {1, 2, …, s} with generator Π = (δ_{ij})_{s×s} (i, j ∈ S) given by

P{r(t+Δt) = j | r(t) = i} = δ_{ij}Δt + o(Δt) if i ≠ j, and 1 + δ_{ii}Δt + o(Δt) if i = j,

where Δt > 0, lim_{Δt→0} o(Δt)/Δt = 0, δ_{ij} > 0 (i ≠ j) is the transition rate from mode i to mode j, and δ_{ii} = −∑_{j≠i} δ_{ij} < 0.
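The evolution of such a Markov chain can be simulated directly from its generator by the standard hold-and-jump construction; a minimal sketch (not from the paper; function and variable names are illustrative, and modes are indexed from 0):

```python
import numpy as np

def simulate_markov_chain(Pi, r0, T, rng=None):
    """Sample a right-continuous Markov chain with generator Pi on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    times, modes = [0.0], [r0]
    t, r = 0.0, r0
    while True:
        rate = -Pi[r, r]                    # total exit rate of the current mode
        if rate <= 0:                       # absorbing mode: stay until T
            break
        t += rng.exponential(1.0 / rate)    # exponentially distributed holding time
        if t >= T:
            break
        probs = Pi[r].astype(float).copy()
        probs[r] = 0.0
        probs /= rate                       # jump probabilities delta_ij / (-delta_ii)
        r = int(rng.choice(len(probs), p=probs))
        times.append(t)
        modes.append(r)
    return np.array(times), np.array(modes)

# Example with a two-mode generator (the same Pi is used in Section 4):
Pi = np.array([[-2.0, 2.0], [3.0, -3.0]])
jump_times, visited_modes = simulate_markov_chain(Pi, r0=0, T=20.0)
```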

Since discrete and distributed time-varying delays widely exist in signal transmission among the nodes of complex networks, and since the network topology can switch according to a Markov chain, we consider the following MSCN with mixed time-varying delays:

x ˙ i ( t ) = A ( r ( t ) ) f ( x i ( t ) ) + B ( r ( t ) ) f ( x i ( t τ 1 ( t ) ) ) + c ( 1 ) ( r ( t ) ) j = 1 N a i j ( 1 ) ( r ( t ) ) Γ ( 1 ) ( r ( t ) ) x j ( t ) + c ( 2 ) ( r ( t ) ) j = 1 N a i j ( 2 ) ( r ( t ) ) Γ ( 2 ) ( r ( t ) ) x j ( t τ 2 ( t ) ) + c ( 3 ) ( r ( t ) ) j = 1 N a i j ( 3 ) ( r ( t ) ) Γ ( 3 ) ( r ( t ) ) t τ 3 ( t ) t x j ( s ) d s ,
(1)

where {r(t), t≥0} is the continuous-time Markov process that describes the evolution of the mode at time t. A(r(t)), B(r(t)) ∈ R^{n×n} are real matrices in mode r(t). τ_1(t), τ_2(t), and τ_3(t) represent the node discrete time-varying delay, the discrete time-varying coupling delay, and the distributed time-varying coupling delay, respectively. c^{(p)}(r(t)) > 0 is the coupling strength in mode r(t), and Γ^{(p)}(r(t)) is the inner-coupling matrix in mode r(t). A^{(p)}(r(t)) = (a^{(p)}_{(m)ij})_{N×N} represents the outer-coupling matrix, whose diagonal elements are defined by a^{(p)}_{(m)ii} = −∑_{j=1, j≠i}^{N} a^{(p)}_{(m)ij} (i, j = 1, 2, …, N, m ∈ S). Here p = 1, 2, 3.
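Although the results below are obtained analytically, a model of the form (1) can also be explored by direct simulation. The following is a minimal sketch (not from the paper) using a forward-Euler scheme with a history buffer for the delayed and distributed-delay terms; all sizes, matrices, delays, the nonlinearity f, and the step size are illustrative placeholders, the delays are taken constant for simplicity, and the i.i.d. per-step mode sequence merely stands in for a true sample path of r(t) (which could be generated with the sketch above).

```python
import numpy as np

rng = np.random.default_rng(7)
N, n, s_modes = 3, 2, 2                  # nodes, node dimension, number of Markov modes
dt, T = 0.005, 10.0                      # step size and horizon (placeholders)
tau1, tau2, tau3 = 0.4, 0.4, 0.1         # constant stand-ins for tau_1(t), tau_2(t), tau_3(t)
d1, d2, d3 = (int(round(tau / dt)) for tau in (tau1, tau2, tau3))

def zero_row_sum(M):
    """Impose a_ii = -sum_{j != i} a_ij on an outer-coupling matrix."""
    M = M.copy()
    np.fill_diagonal(M, 0.0)
    np.fill_diagonal(M, -M.sum(axis=1))
    return M

A = [-np.eye(n) for _ in range(s_modes)]                           # A(r), B(r): placeholders
B = [0.1 * rng.standard_normal((n, n)) for _ in range(s_modes)]
Gamma = [[np.eye(n) for _ in range(3)] for _ in range(s_modes)]    # Gamma^(p)(r)
Aout = [[zero_row_sum(rng.random((N, N))) for _ in range(3)] for _ in range(s_modes)]
c = [[1.0, 1.0, 1.0] for _ in range(s_modes)]                      # coupling strengths c^(p)(r)
f = np.tanh                                                        # a sector-bounded nonlinearity

steps = int(T / dt)
hist = np.zeros((steps + 1, N, n))       # hist[k, i] approximates x_i(k*dt)
hist[0] = rng.standard_normal((N, n))
mode = rng.integers(s_modes, size=steps) # i.i.d. stand-in for r(t)

for k in range(steps):
    m = mode[k]
    x = hist[k]
    x_d1 = hist[max(k - d1, 0)]                            # x(t - tau_1)
    x_d2 = hist[max(k - d2, 0)]                            # x(t - tau_2)
    x_int = hist[max(k - d3, 0):k + 1].sum(axis=0) * dt    # int_{t - tau_3}^{t} x(s) ds
    dx = (f(x) @ A[m].T + f(x_d1) @ B[m].T
          + c[m][0] * Aout[m][0] @ x @ Gamma[m][0].T
          + c[m][1] * Aout[m][1] @ x_d2 @ Gamma[m][1].T
          + c[m][2] * Aout[m][2] @ x_int @ Gamma[m][2].T)
    hist[k + 1] = x + dt * dx
```

The resulting array hist[:, i, :] approximates the trajectory of node i and can, for instance, be fed to the synchronization-error sketch given in Section 4.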

Remark 1 The MSCN (1) considered in this paper, which contains Markovian switching parameters and mixed time-varying delays, is more practical than the models of [29–31]. Although time delays are considered in [29–31], Markovian switching is not used to describe the addressed systems. Furthermore, the MSCN (1) of this paper is clearly different from the models of [32, 33]; the primary difference lies in the mixed time-varying delays. In [32], the mixed time-varying delays include node discrete time-varying and distributed time-varying delays. In [33], the mixed time-varying delays are comprised of node discrete stochastic time-varying, discrete stochastic time-varying coupling, and distributed time-varying delays.

With the Kronecker product, we can rewrite system (1) in the following compact form:

x ˙ ( t ) = ( I N A ( r ( t ) ) ) F ( x ( t ) ) + ( I N B ( r ( t ) ) ) F ( x ( t τ 1 ( t ) ) ) + c ( 1 ) ( r ( t ) ) ( A ( 1 ) ( r ( t ) ) Γ ( 1 ) ( r ( t ) ) ) x ( t ) + c ( 2 ) ( r ( t ) ) ( A ( 2 ) ( r ( t ) ) Γ ( 2 ) ( r ( t ) ) ) x ( t τ 2 ( t ) ) + c ( 3 ) ( r ( t ) ) ( A ( 3 ) ( r ( t ) ) Γ ( 3 ) ( r ( t ) ) ) t τ 3 ( t ) t x ( s ) d s ,
(2)

where

x ( t ) = ( x 1 T ( t ) , , x N T ( t ) ) T , x ( t τ 2 ( t ) ) = ( x 1 T ( t τ 2 ( t ) ) , , x N T ( t τ 2 ( t ) ) ) T , F ( x ( t ) ) = ( f T ( x 1 ( t ) ) , , f T ( x N ( t ) ) ) T , F ( x ( t τ 1 ( t ) ) ) = ( f T ( x 1 ( t τ 1 ( t ) ) ) , , f T ( x N ( t τ 1 ( t ) ) ) ) T , t τ 3 ( t ) t x ( s ) d s = ( t τ 3 ( t ) t x 1 T ( s ) d s , , t τ 3 ( t ) t x N T ( s ) d s ) T .

For notational simplicity, we denote matrices A(r(t)), B(r(t)), A ( 1 ) (r(t)), A ( 2 ) (r(t)), A ( 3 ) (r(t)), Γ ( 1 ) (r(t)), Γ ( 2 ) (r(t)), Γ ( 3 ) (r(t)), scalars c ( 1 ) (r(t)), c ( 2 ) (r(t)), and c ( 3 ) (r(t)) as A m , B m , A m ( 1 ) , A m ( 2 ) , A m ( 3 ) , Γ m ( 1 ) , Γ m ( 2 ) , Γ m ( 3 ) , c m ( 1 ) , c m ( 2 ) , and c m ( 3 ) (mS), respectively. Therefore, system (2) can be rewritten as follows:

x ˙ ( t ) = ( I N A m ) F ( x ( t ) ) + ( I N B m ) F ( x ( t τ 1 ( t ) ) ) + c m ( 1 ) ( A m ( 1 ) Γ m ( 1 ) ) x ( t ) + c m ( 2 ) ( A m ( 2 ) Γ m ( 2 ) ) x ( t τ 2 ( t ) ) + c m ( 3 ) ( A m ( 3 ) Γ m ( 3 ) ) t τ 3 ( t ) t x ( s ) d s .
(3)
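The Kronecker-product rewriting that leads from (1) to the compact forms (2)-(3) can be checked numerically; a minimal sketch (not from the paper; the sizes N = 3 and n = 2 are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 3, 2
A = rng.standard_normal((N, N))          # outer-coupling matrix
Gamma = rng.standard_normal((n, n))      # inner-coupling matrix
x_nodes = rng.standard_normal((N, n))    # x_i(t), i = 1, ..., N
x = x_nodes.reshape(N * n)               # stacked state (x_1^T, ..., x_N^T)^T

# Node-wise form: sum_j a_ij * Gamma @ x_j for each node i
nodewise = np.array([sum(A[i, j] * Gamma @ x_nodes[j] for j in range(N))
                     for i in range(N)]).reshape(N * n)

# Compact Kronecker form: (A (x) Gamma) @ x
compact = np.kron(A, Gamma) @ x

assert np.allclose(nodewise, compact)
```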

Definition 1 The MSCN (1) is said to achieve global asymptotic synchronization if

E‖x_i(t) − s(t)‖ → 0, as t → ∞,

where i ∈ {1, 2, …, N}, and s(t) is a solution of an isolated node satisfying ṡ(t) = A_m f(s(t)) + B_m f(s(t − τ_1(t))).

Assumption 1 Time-varying delays in the MSCN (1) satisfy

0 ≤ τ_1(t) ≤ τ_{1M},   0 ≤ τ_2(t) ≤ τ_{2M},   0 ≤ τ_3(t) ≤ τ_{3M},   0 ≤ |τ̇_1(t)| ≤ μ_1,   0 ≤ |τ̇_2(t)| ≤ μ_2,   0 ≤ |τ̇_3(t)| ≤ μ_3,

where i, j = 1, 2, …, N.

Assumption 2 (Khalil [36])

For x, y ∈ R^n, the continuous nonlinear function f satisfies the following sector-bounded condition:

[f(x) − f(y) − F_1(x − y)]^T [f(x) − f(y) − F_2(x − y)] ≤ 0,

where F_1 and F_2 are real constant matrices with F_2 − F_1 ≥ 0.

Assumption 3 There exist positive-definite matrices Z_k, W_l (k = 1, 2, …, r, l = 1, 2, …, n_1) satisfying

Z_1 ≥ Z_2 ≥ ⋯ ≥ Z_r,   W_1 ≥ W_2 ≥ ⋯ ≥ W_{n_1}.

Lemma 1 (Langville and Stewart [37])

The Kronecker product has the following properties:

  1. (αA)⊗B = A⊗(αB),

  2. (A+B)⊗C = (A⊗C) + (B⊗C),

  3. (A⊗B)(C⊗D) = (AC)⊗(BD),

  4. (A⊗B)^T = A^T⊗B^T.
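These identities are easy to confirm numerically; a minimal sketch (not from the paper), using randomly generated matrices of compatible sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 2.5
A, C = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))
B, D = rng.standard_normal((4, 5)), rng.standard_normal((5, 4))

assert np.allclose(np.kron(alpha * A, B), np.kron(A, alpha * B))            # property (1)
assert np.allclose(np.kron(A + C.T, B), np.kron(A, B) + np.kron(C.T, B))    # property (2)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))    # property (3)
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))                      # property (4)
```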

Lemma 2 (Liu et al. [32])

Let μ = (α_{ij})_{n×n}, P ∈ R^{m×m}, x = (x_1^T, x_2^T, …, x_n^T)^T, y = (y_1^T, y_2^T, …, y_n^T)^T, where x_i = (x_{i1}, x_{i2}, …, x_{im})^T ∈ R^m and y_i = (y_{i1}, y_{i2}, …, y_{im})^T ∈ R^m. If μ = μ^T and each row sum of μ is equal to zero, then

x^T(μ⊗P)y = −∑_{1≤i<j≤n} α_{ij} (x_i − x_j)^T P (y_i − y_j).
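A quick numerical check of this identity (not from the paper), with a randomly generated symmetric μ whose row sums are forced to zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
mu = rng.standard_normal((n, n))
mu = mu + mu.T                                # make mu symmetric
mu -= np.diag(mu.sum(axis=1))                 # force every row sum to zero
P = rng.standard_normal((m, m))
x_blocks, y_blocks = rng.standard_normal((n, m)), rng.standard_normal((n, m))
x, y = x_blocks.reshape(n * m), y_blocks.reshape(n * m)

lhs = x @ np.kron(mu, P) @ y
rhs = -sum(mu[i, j] * (x_blocks[i] - x_blocks[j]) @ P @ (y_blocks[i] - y_blocks[j])
           for i in range(n) for j in range(i + 1, n))
assert np.allclose(lhs, rhs)
```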

Lemma 3 (Boyd et al. [38])

Given constant matrices X, Y, Z, where X = X^T and 0 < Y = Y^T, we have X + Z^T Y^{−1} Z < 0 if and only if

[ X  Z^T ; Z  −Y ] < 0   or   [ −Y  Z ; Z^T  X ] < 0.
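The equivalence can be illustrated numerically by comparing eigenvalue tests on the three matrices; a minimal sketch (not from the paper), with X, Y, Z chosen at random subject to X = X^T and Y > 0:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = -(M1 @ M1.T) - 10.0 * np.eye(n)       # X = X^T (chosen strongly negative definite)
Y = M2 @ M2.T + n * np.eye(n)             # Y = Y^T > 0
Z = rng.standard_normal((n, n))

def neg_def(S):
    return np.max(np.linalg.eigvalsh(S)) < 0

cond1 = neg_def(X + Z.T @ np.linalg.solve(Y, Z))
cond2 = neg_def(np.block([[X, Z.T], [Z, -Y]]))
cond3 = neg_def(np.block([[-Y, Z], [Z.T, X]]))
assert cond1 == cond2 == cond3            # the three tests agree, as Lemma 3 asserts
```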

Lemma 4 (Gu [39])

For any positive-definite matrix M > 0, scalar γ > 0, and vector function ω: [0, γ] → R^n such that the integrals concerned are well defined, the following inequality holds:

( ∫_0^γ ω(s) ds )^T M ( ∫_0^γ ω(s) ds ) ≤ γ ∫_0^γ ω^T(s) M ω(s) ds.
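A minimal numerical illustration (not from the paper), approximating both integrals on a uniform grid for an arbitrary ω:

```python
import numpy as np

rng = np.random.default_rng(4)
n, gamma, steps = 3, 2.0, 4000
M0 = rng.standard_normal((n, n))
M = M0 @ M0.T + np.eye(n)                                    # M > 0
s = np.linspace(0.0, gamma, steps)
omega = np.stack([np.sin(s), np.cos(2 * s), s ** 2], axis=1) # an arbitrary omega(s)
ds = s[1] - s[0]

int_omega = omega.sum(axis=0) * ds                           # approximates int_0^gamma omega(s) ds
lhs = int_omega @ M @ int_omega
rhs = gamma * sum(w @ M @ w for w in omega) * ds             # approximates gamma * int omega^T M omega ds
assert lhs <= rhs
```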

Lemma 5 (Boyd et al. [38])

For any vectors x, y ∈ R^n and any positive-definite matrix Q > 0, the following inequality holds:

2x^T y ≤ x^T Q^{−1} x + y^T Q y.
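A one-line numerical check (not from the paper) for random x, y, and a random positive-definite Q:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
Q0 = rng.standard_normal((n, n))
Q = Q0 @ Q0.T + np.eye(n)                 # Q > 0
x, y = rng.standard_normal(n), rng.standard_normal(n)
assert 2 * (x @ y) <= x @ np.linalg.solve(Q, x) + y @ Q @ y
```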

Lemma 6 Let A = (a_{ij})_{N×N} with a_{ii} = −∑_{j=1, j≠i}^{N} a_{ij}, P ∈ R^{m×m}, x = (x_1^T, x_2^T, …, x_N^T)^T ∈ R^{Nm}, x_i = (x_{i1}, x_{i2}, …, x_{im})^T ∈ R^m, y = (y_1^T, y_2^T, …, y_N^T)^T ∈ R^{Nm}, y_i = (y_{i1}, y_{i2}, …, y_{im})^T ∈ R^m. Then

x^T(UA⊗P)y = −∑_{1≤i<j≤N} N a_{ij} (x_i − x_j)^T P (y_i − y_j),

where

U = [ N−1   −1    ⋯    −1
       −1   N−1   ⋯    −1
        ⋮     ⋮    ⋱     ⋮
       −1    −1   ⋯   N−1 ].

3 Main results

In this section, global synchronization of the MSCN (1) is investigated by utilizing the Lyapunov-Krasovskii functional method, stochastic analysis techniques, and the delay-partition approach. Furthermore, in order to show the merits of the delay-partition approach, Corollary 1 is also given as a consequence of Theorem 1.

Theorem 1 Under Assumptions 1-3 and Definition 1, for given constants τ_{1M}, τ_{2M}, τ_{3M}, μ_1, μ_2, and any integers r ≥ 1, n_1 ≥ 1, N ≥ 1, system (1) is globally asymptotically synchronized under the delay-partition approach if there exist positive-definite matrices P_m > 0 (m = 1, 2, …, s), {N_k, Z_k} > 0 (k = 1, 2, …, r), {M_l, W_l} > 0 (l = 1, 2, …, n_1), Q > 0, arbitrary matrices {R_{1k}, R_{2l}}, R_3, F_1, F_2, F_3, F_4 with appropriate dimensions, and positive scalars α_1, α_2, α_3, such that the following LMI holds for all 1 ≤ i < j ≤ N:

[ Θ_{11m}  Θ_{12β}^T ; ⋆  Θ_{22β} ] < 0   (m = 1, 2, …, s; β = 1, 2, …, r, r+1, …, r+n_1),
(4)

where

Θ 11 m = [ Ξ 11 Ξ 12 Ξ 13 Ξ 14 Ξ 22 Ξ 23 Ξ 24 Ξ 33 Ξ 34 Ξ 44 ] , Θ 12 β T = [ R 11 , R 12 , , R 1 r , R 21 , R 22 , , R 2 n 1 ] , Θ 22 β = diag { p Z 1 , p Z 2 , , p Z r , q W 1 , q W 2 , , q W n 1 } , Ξ 22 = [ Π ˆ 11 ( r ) Π ˆ 12 ( r ) Π ˆ 1 ( r 1 ) ( r ) Π ˆ 22 ( r ) Π ˆ 2 ( r 1 ) ( r ) Π ˆ ( r 1 ) ( r 1 ) ( r ) ] , Ξ 14 = [ α 3 R 3 , α 3 R 3 , α 3 R 3 , α 3 R 3 , α 3 R 3 , α 3 R 3 ] T , Ξ 11 = [ Π 11 Π 12 Π 13 Π 14 Π 15 Π 16 Π 22 Π 23 Π 24 Π 25 Π 26 Π 33 Π 34 Π 35 Π 36 Π 44 Π 45 Π 46 Π 55 Π 56 Π 66 ] , Ξ 12 = [ Π 11 ( r ) Π 1 ( r 1 ) ( r ) Π 21 ( r ) Π 2 ( r 1 ) ( r ) Π 31 ( r ) Π 3 ( r 1 ) ( r ) Π 41 ( r ) Π 4 ( r 1 ) ( r ) Π 51 ( r ) Π 5 ( r 1 ) ( r ) Π 61 ( r ) Π 6 ( r 1 ) ( r ) ] , Ξ 13 = [ Π 11 ( n 1 ) Π 1 ( n 1 1 ) ( n 1 ) Π 21 ( n 1 ) Π 2 ( n 1 1 ) ( n 1 ) Π 31 ( n 1 ) Π 3 ( n 1 1 ) ( n 1 ) Π 41 ( n 1 ) Π 4 ( n 1 1 ) ( n 1 ) Π 51 ( n 1 ) Π 5 ( n 1 1 ) ( n 1 ) Π 61 ( n 1 ) Π 6 ( n 1 1 ) ( n 1 ) ] , Ξ 23 = [ Π ˆ 11 ( n 1 ) Π ˆ 1 ( n 1 1 ) ( n 1 ) Π ˆ ( r 1 ) 1 ( n 1 ) Π ˆ ( r 1 ) ( n 1 1 ) ( n 1 ) ] , Ξ 24 = [ α 3 R 3 , α 3 R 3 , , α 3 R 3 ] T , Ξ 33 = [ H 11 ( n 1 ) H 1 ( n 1 1 ) ( n 1 ) H ( n 1 1 ) ( n 1 1 ) ( n 1 ) ] , Ξ 34 = [ α 3 R 3 , α 3 R 3 , , α 3 R 3 ] T , Ξ 44 = 2 α 3 R 3 + τ 1 M r k = 1 r Z k + τ 2 M n 1 l = 1 n 1 W l , Π 11 = N c m ( 1 ) P m Γ m ( 1 ) N c m ( 1 ) ( Γ m ( 1 ) ) T P m + m 1 = 1 s δ m m 1 P m 1 Π 11 = + N 1 + M 1 α 1 ( F 1 T F 2 + F 2 T F 1 ) + 2 R 11 + 2 R 21 Π 11 = + τ 3 M 2 Q 2 N c m ( 1 ) α 3 R 3 Γ m ( 1 ) , Π 12 = R 1 r , Π 13 = N c m ( 2 ) P m Γ m ( 2 ) N c m ( 2 ) α 3 R 3 Γ m ( 2 ) R 2 n 1 , Π 14 = N c m ( 3 ) P m Γ m ( 3 ) N c m ( 3 ) α 3 R 3 Γ m ( 3 ) , Π 15 = α 3 R 3 A m + P m A m + α 1 ( F 1 T + F 2 T ) , Π 16 = α 3 R 3 B m + P m B m , Π 22 = ( 1 μ 1 ) N r 2 R 1 r α 2 ( F 3 T F 4 + F 4 T F 3 ) , Π 23 = N c m ( 2 ) α 3 R 3 Γ m ( 2 ) R 2 n 1 , Π 24 = N c m ( 3 ) α 3 R 3 Γ m ( 3 ) , Π 25 = α 3 R 3 A m , Π 26 = α 3 R 3 B m + α 2 ( F 3 T + F 4 T ) , Π 33 = ( 1 μ 2 ) M n 1 2 N c m ( 2 ) α 3 R 3 Γ m ( 2 ) 2 R 2 n 1 , Π 34 = N c m ( 3 ) α 3 R 3 Γ m ( 3 ) , Π 35 = α 3 R 3 A m , Π 36 = α 3 R 3 B m , Π 44 = 2 N c m ( 3 ) α 3 R 3 Γ m ( 3 ) Q , Π 45 = α 3 R 3 A m , Π 46 = α 3 R 3 B m , Π 55 = 2 α 1 I + 2 α 3 R 3 A m , Π 56 = α 3 R 3 B m , Π 66 = 2 α 2 I + 2 α 3 R 3 B m , Π 11 ( r ) = R 11 + R 12 , Π 1 ( r 1 ) ( r ) = R 1 ( r 1 ) + R 1 r , Π 61 ( r ) = R 11 + R 12 , Π 6 ( r 1 ) ( r ) = R 1 ( r 1 ) + R 1 r , Π 11 ( n 1 ) = R 21 + R 22 , Π 1 ( n 1 1 ) ( n 1 ) = R 2 ( n 1 1 ) + R 2 n 1 , Π 61 ( n 1 ) = R 21 + R 22 , Π 6 ( n 1 1 ) ( n 1 ) = R 2 ( n 1 1 ) + R 2 n 1 , Π ˆ 11 ( r ) = ( 1 1 r μ 1 ) N 1 + N 2 2 R 11 + 2 R 12 , Π ˆ 12 ( r ) = R 12 + R 13 , Π ˆ 1 ( r 1 ) ( r ) = R 1 ( r 1 ) + R 1 r , Π ˆ 22 ( r ) = ( 1 2 r μ 1 ) N 2 + N 3 2 R 12 + 2 R 13 , Π ˆ 2 ( r 1 ) ( r ) = R 1 ( r 1 ) + R 1 r , Π ˆ ( r 1 ) ( r 1 ) ( r ) = ( 1 r 1 r μ 1 ) N r 1 + N r 2 R 1 ( r 1 ) + 2 R 1 r , Π ˆ 11 ( n 1 ) = R 21 + R 22 , Π ˆ 1 ( n 1 1 ) ( n 1 ) = R 2 ( n 1 1 ) + R 2 n 1 , Π ˆ ( r 1 ) 1 ( n 1 ) = R 21 + R 22 , Π ˆ ( r 1 ) ( n 1 1 ) ( n 1 ) = R 2 ( n 1 1 ) + R 2 n 1 , H 11 ( n 1 ) = ( 1 1 n 1 μ 2 ) M 1 + M 2 2 R 21 + 2 R 22 , H 1 ( n 1 1 ) ( n 1 ) = R 2 ( n 1 1 ) + R 2 n 1 , H ( n 1 1 ) ( n 1 1 ) ( n 1 ) = ( 1 n 1 1 n 1 μ 2 ) M n 1 1 + M n 1 2 R 2 ( n 1 1 ) + 2 R 2 n 1 , p = r τ 1 M , q = n 1 τ 2 M .
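Condition (4) is a linear matrix inequality in the decision matrices and can therefore be handled by standard LMI/SDP software (the MATLAB LMI toolbox is used in Section 4; YALMIP, CVX, or CVXPY are alternatives). The sketch below (not from the paper) only illustrates the final verification step: once candidate blocks have been assembled, negative definiteness of the block matrix in (4) reduces to an eigenvalue test. The Θ blocks here are random placeholders, not the actual blocks of Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(6)
nb = 6                                      # placeholder block dimension
M1, M2 = rng.standard_normal((nb, nb)), rng.standard_normal((nb, nb))
Theta11 = -(M1 @ M1.T) - np.eye(nb)         # placeholder for an assembled Theta_{11m}
Theta22 = -(M2 @ M2.T) - 5.0 * np.eye(nb)   # placeholder for Theta_{22beta}
Theta12 = 0.1 * rng.standard_normal((nb, nb))

LMI = np.block([[Theta11, Theta12.T],
                [Theta12, Theta22]])
print("max eigenvalue:", np.max(np.linalg.eigvalsh(LMI)))   # < 0 means the (4)-type condition holds
```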

Proof Construct a Lyapunov-Krasovskii functional candidate as

V ( x ( t ) , t , m ) = V 1 ( x ( t ) , t , m ) + V 2 ( x ( t ) , t , m ) + V 3 ( x ( t ) , t , m ) ,
(5)

where

V_1(x(t),t,m) = x^T(t)(U⊗P_m)x(t) + ∑_{k=1}^{r} ∫_{t−(k/r)τ_1(t)}^{t−((k−1)/r)τ_1(t)} x^T(s)(U⊗N_k)x(s) ds + ∑_{l=1}^{n_1} ∫_{t−(l/n_1)τ_2(t)}^{t−((l−1)/n_1)τ_2(t)} x^T(s)(U⊗M_l)x(s) ds,
(6)
V_2(x(t),t,m) = τ_{3M} ∫_{−τ_{3M}}^{0} ∫_{t+θ}^{t} x^T(s)(U⊗Q)x(s) ds dθ,
(7)
V_3(x(t),t,m) = ∑_{k=1}^{r} ∫_{−(k/r)τ_{1M}}^{−((k−1)/r)τ_{1M}} ∫_{t+θ}^{t} ẋ^T(s)(U⊗Z_k)ẋ(s) ds dθ + ∑_{l=1}^{n_1} ∫_{−(l/n_1)τ_{2M}}^{−((l−1)/n_1)τ_{2M}} ∫_{t+θ}^{t} ẋ^T(s)(U⊗W_l)ẋ(s) ds dθ, and U is the N×N matrix defined in Lemma 6.
(8)

Computing LV(x(t),t,m) along the trajectory of system (3) and using Assumption 1 and (6)-(8), we obtain

LV ( x ( t ) , t , m ) =L V 1 ( x ( t ) , t , m ) +L V 2 ( x ( t ) , t , m ) +L V 3 ( x ( t ) , t , m ) ,
(9)
L V 1 ( x ( t ) , t , m ) 2 x T ( t ) ( U P m ) [ ( I N A m ) F ( x ( t ) ) + ( I N B m ) F ( x ( t τ 1 ( t ) ) ) L V 1 ( x ( t ) , t , m ) + c m ( 1 ) ( A m ( 1 ) Γ m ( 1 ) ) x ( t ) + c m ( 2 ) ( A m ( 2 ) Γ m ( 2 ) ) x ( t τ 2 ( t ) ) L V 1 ( x ( t ) , t , m ) + c m ( 3 ) ( A m ( 3 ) Γ m ( 3 ) ) t τ 3 ( t ) t x ( s ) d s ] + m 1 = 1 s δ m m 1 x T ( t ) ( U P m 1 ) x ( t ) L V 1 ( x ( t ) , t , m ) + k = 1 r [ x T ( t k 1 r τ 1 ( t ) ) ( U N k ) x ( t k 1 r τ 1 ( t ) ) L V 1 ( x ( t ) , t , m ) ( 1 k r μ 1 ) x T ( t k r τ 1 ( t ) ) ( U N k ) x ( t k r τ 1 ( t ) ) ] L V 1 ( x ( t ) , t , m ) + l = 1 n 1 [ x T ( t l 1 n 1 τ 2 ( t ) ) ( U M l ) x ( t l 1 n 1 τ 2 ( t ) ) L V 1 ( x ( t ) , t , m ) ( 1 l n 1 μ 2 ) x T ( t l n 1 τ 2 ( t ) ) ( U M l ) x ( t l n 1 τ 2 ( t ) ) ] ,
(10)
L V 2 ( x ( t ) , t , m ) = τ 3 M 2 x T (t)(UQ)x(t) τ 3 M t τ 3 M t x T (s)(UQ)x(s)ds,
(11)
L V 3 ( x ( t ) , t , m ) = x ˙ T ( t ) [ ( τ 1 M r ) k = 1 r ( U Z k ) + ( τ 2 M n 1 ) l = 1 n 1 ( U W l ) ] x ˙ ( t ) L V 3 ( x ( t ) , t , m ) = k = 1 r t k r τ 1 M t k 1 r τ 1 M x ˙ T ( s ) ( U Z k ) x ˙ ( s ) d s L V 3 ( x ( t ) , t , m ) = l = 1 n 1 t l n 1 τ 2 M t l 1 n 1 τ 2 M x ˙ T ( s ) ( U W l ) x ˙ ( s ) d s .
(12)

By Assumption 3, we have

k = 1 r t k r τ 1 M t k 1 r τ 1 M x ˙ T ( s ) ( U Z k ) x ˙ ( s ) d s k = 1 r t k r τ 1 ( t ) t k 1 r τ 1 ( t ) x ˙ T ( s ) ( U Z k ) x ˙ ( s ) d s = t 1 r τ 1 M t 1 r τ 1 ( t ) x ˙ T ( s ) ( U ( Z 1 Z 2 ) ) x ˙ ( s ) d s + + t r 1 r τ 1 M t r 1 r τ 1 ( t ) x ˙ T ( s ) ( U ( Z r 1 Z r ) ) x ˙ ( s ) d s + t τ 1 M t τ 1 ( t ) x ˙ T ( s ) ( U Z r ) x ( s ) d s 0 .
(13)

Similar to inequality (13), we get inequality (14) directly from Assumption 3:

l = 1 n 1 t l n 1 τ 2 M t l 1 n 1 τ 2 M x ˙ T (s)(U W l ) x ˙ (s)ds l = 1 n 1 t l n 1 τ 2 ( t ) t l 1 n 1 τ 2 ( t ) x ˙ T (s)(U W l ) x ˙ (s)ds.
(14)

It follows from Lemma 4 that

τ 3 M t τ 3 M t x T ( s ) ( U Q ) x ( s ) d s ( t τ 3 M t x T ( s ) d s ) ( U Q ) ( t τ 3 M t x ( s ) d s ) ( t τ 3 ( t ) t x T ( s ) d s ) ( U Q ) ( t τ 3 ( t ) t x ( s ) d s ) .
(15)

Let ξ i j T (t) = [ x i j T (t), x i j T (t τ 1 (t)), x i j T (t τ 2 (t)), t τ 3 ( t ) t x i j T (s)ds, f i j T (x(t)), f i j T (x(t τ 1 (t))), x i j T (t τ 1 ( t ) r ), … , x i j T (t r 1 r τ 1 (t)), x i j T (t τ 2 ( t ) n 1 ), … , x i j T (t n 1 1 n 1 τ 2 (t)), y i j T (t)], where x i j T (t)= ( x i ( t ) x j ( t ) ) T , x i j T (t τ 1 (t))= ( x i ( t τ 1 ( t ) ) x j ( t τ 1 ( t ) ) ) T , … , y i j T (t)= ( x ˙ i ( t ) x ˙ j ( t ) ) T , then by using the Newton-Leibniz formula, the following equalities are true for any matrices R 1 k , R 2 l , R 3 (k=1,2,,r, l=1,2,, n 1 ) with appropriate dimensions:

2 k = 1 r ξ T ( t ) ( U R 1 k ) × [ x ( t k 1 r τ 1 ( t ) ) x ( t k r τ 1 ( t ) ) t k r τ 1 ( t ) t k 1 r τ 1 ( t ) x ˙ ( s ) d s ] = 0 ,
(16)
2 l = 1 n 1 ξ T ( t ) ( U R 2 l ) × [ x ( t l 1 n 1 τ 2 ( t ) ) x ( t l n 1 τ 2 ( t ) ) t l n 1 τ 2 ( t ) t l 1 n 1 τ 2 ( t ) x ˙ ( s ) d s ] = 0 .
(17)

Denote x ˙ (t)=y(t), then

2 α 3 ξ T (t)(U R 3 ) [ y ( t ) x ˙ ( t ) ] =0,
(18)

where scalar α 3 >0.

By Lemmas 2 and 5, combining (16)-(17), there exist positive-definite matrices Z_k and W_l (k = 1, 2, …, r, l = 1, 2, …, n_1) such that

2 k = 1 r ξ T ( t ) ( U R 1 k ) t k r τ 1 ( t ) t k 1 r τ 1 ( t ) x ˙ ( s ) d s τ 1 M r 1 i < j N k = 1 r ξ i j T ( t ) R 1 k Z k 1 R 1 k T ξ i j ( t ) + 1 i < j N k = 1 r t k r τ 1 ( t ) t k 1 r τ 1 ( t ) x ˙ i j T ( s ) Z k x ˙ i j ( s ) d s ,
(19)
2 l = 1 n 1 ξ T ( t ) ( U R 2 l ) t l n 1 τ 2 ( t ) t l 1 n 1 τ 2 ( t ) x ˙ ( s ) d s τ 2 M n 1 1 i < j N l = 1 n 1 ξ i j T ( t ) R 2 l W l 1 R 2 l T ξ i j ( t ) + 1 i < j N l = 1 n 1 t l n 1 τ 2 ( t ) t l 1 n 1 τ 2 ( t ) x ˙ i j T ( s ) W l x ˙ i j ( s ) d s .
(20)

According to Assumption 2, for α 1 >0 and α 2 >0, we can obtain

α 1 [ f i ( x ( t ) ) f j ( x ( t ) ) F 1 ( x i ( t ) x j ( t ) ) ] T × [ f i ( x ( t ) ) f j ( x ( t ) ) F 2 ( x i ( t ) x j ( t ) ) ] 0 ,
(21)
α 1 [ f i ( x ( t ) ) f j ( x ( t ) ) F 2 ( x i ( t ) x j ( t ) ) ] T × [ f i ( x ( t ) ) f j ( x ( t ) ) F 1 ( x i ( t ) x j ( t ) ) ] 0 ,
(22)
α 2 [ f i ( x ( t τ 1 ( t ) ) ) f j ( x ( t τ 1 ( t ) ) ) F 3 ( x i ( t τ 1 ( t ) ) x j ( t τ 1 ( t ) ) ) ] T × [ f i ( x ( t τ 1 ( t ) ) ) f j ( x ( t τ 1 ( t ) ) ) F 4 ( x i ( t τ 1 ( t ) ) x j ( t τ 1 ( t ) ) ) ] 0 ,
(23)
α 2 [ f i ( x ( t τ 1 ( t ) ) ) f j ( x ( t τ 1 ( t ) ) ) F 4 ( x i ( t τ 1 ( t ) ) x j ( t τ 1 ( t ) ) ) ] T × [ f i ( x ( t τ 1 ( t ) ) ) f j ( x ( t τ 1 ( t ) ) ) F 3 ( x i ( t τ 1 ( t ) ) x j ( t τ 1 ( t ) ) ) ] 0 .
(24)

Substituting (10)-(24) into (9), taking the expectation on both sides of (9), and using Lemmas 1, 2, and 6, we get

E [ L V ( x ( t ) , t , m ) ] E { 1 i < j N { x i j T ( t ) [ 2 P m A m f i j ( x ( t ) ) + 2 P m B m f i j ( x ( t τ 1 ( t ) ) ) 2 N c m ( 1 ) P m Γ m ( 1 ) x i j ( t ) 2 N c m ( 2 ) P m Γ m ( 2 ) x i j ( t τ 2 ( t ) ) 2 N c m ( 3 ) P m Γ m ( 3 ) t τ 3 ( t ) t x i j ( s ) d s + m 1 = 1 s δ m m 1 P m 1 x i j ( t ) ] + k = 1 r [ x i j T ( t k 1 r τ 1 ( t ) ) N k x i j ( t k 1 r τ 1 ( t ) ) ( 1 k r μ 1 ) x i j T ( t k r τ 1 ( t ) ) N k x i j ( t k r τ 1 ( t ) ) ] + l = 1 n 1 [ x i j T ( t l 1 n 1 τ 2 ( t ) ) M l x i j ( t l 1 n 1 τ 2 ( t ) ) ( 1 l n 1 μ 2 ) x i j T ( t l n 1 τ 2 ( t ) ) M l x i j ( t l n 1 τ 2 ( t ) ) ] + τ 3 M 2 x i j T ( t ) Q x i j ( t ) t τ 3 ( t ) t x i j T ( s ) d s Q t τ 3 ( t ) t x i j ( s ) d s + y i j T ( t ) [ τ 1 M r k = 1 r Z k + τ 2 M n 1 l = 1 n 1 W l ] y i j ( t ) + ξ i j T ( t ) [ τ 1 M r k = 1 r R 1 k Z k 1 R 1 k T + τ 2 M n 1 l = 1 r R 2 l W l 1 R 2 l T ] ξ i j ( t ) ξ i j T ( t ) ( 2 α 3 R 3 ) y i j ( t ) + ξ i j T ( t ) ( 2 α 3 R 3 ) [ A m f i j ( x ( t ) ) + B m f i j ( x ( t τ 1 ( t ) ) ) N c m ( 1 ) Γ m ( 1 ) x i j ( t ) N c m ( 2 ) Γ m ( 2 ) x i j ( t τ 2 ( t ) ) N c m ( 3 ) Γ m ( 3 ) t τ 3 ( t ) t x i j ( s ) d s ] + ξ i j T ( t ) ( 2 k = 1 r R 1 k ) [ x i j ( t k 1 r τ 1 ( t ) ) x i j ( t k r τ 1 ( t ) ) ] + ξ i j T ( t ) ( 2 l = 1 n 1 R 2 l ) [ x i j ( t l 1 n 1 τ 2 ( t ) ) x i j ( t l n 1 τ 2 ( t ) ) ] α 1 [ f i j T ( x ( t ) ) ( 2 I ) f i j ( x ( t ) ) f i j T ( x ( t ) ) ( F 1 + F 2 ) x i j ( t ) x i j T ( t ) ( F 1 T + F 2 T ) f i j ( x ( t ) ) + x i j T ( t ) ( F 1 T F 2 + F 2 T F 1 ) x i j ( t ) ] α 2 [ f i j T ( x ( t τ 1 ( t ) ) ) ( 2 I ) f i j ( x ( t τ 1 ( t ) ) ) f i j T ( x ( t τ 1 ( t ) ) ) ( F 1 + F 2 ) x i j ( t τ 1 ( t ) ) x i j T ( t τ 1 ( t ) ) ( F 1 T + F 2 T ) f i j ( x ( t τ 1 ( t ) ) ) + x i j T ( t τ 1 ( t ) ) ( F 1 T F 2 + F 2 T F 1 ) x i j ( t τ 1 ( t ) ) ] } } = E { 1 i < j N ξ i j T ( t ) { Θ 11 m + ( τ 1 M r k = 1 r R 1 k Z k 1 R 1 k T + τ 2 M n 1 l = 1 n 1 R 2 l W l 1 R 2 l T ) } ξ i j ( t ) } .
(25)

By Lemma 3 and the LMI condition (4), we have

E { L V ( x ( t ) , t , m ) } E { 1 i < j N ξ i j T ( t ) { Θ 11 m + Θ 12 β T Θ 22 β 1 Θ 12 β } ξ i j ( t ) } <0.
(26)

According to Definition 1, the MSCN (1) achieves global asymptotic synchronization. The proof is completed. □

Remark 2 Theorem 1 establishes a criterion under which the MSCN (1) with mixed time-varying delays achieves global asymptotic synchronization via the delay-partition approach. In the proof of Theorem 1, the time-varying delays τ_1(t) and τ_2(t) are divided into r and n_1 slices, respectively. In [34, 35], the delay-partition approach is used to solve state estimation and stability analysis problems of neural networks with time-varying delays. Although synchronization problems of complex networks with time delays were investigated in [29–32], the results obtained here via the delay-partition approach are less conservative: as the integers r and n_1 increase, the allowable upper bounds of the time-varying delays τ_1(t) and τ_2(t) become larger. This is also analyzed in Remark 3 and illustrated in the numerical examples.

Corollary 1 Under Assumptions 1-2 and Definition 1, for given constants τ_{1M}, τ_{2M}, τ_{3M}, μ_1, μ_2, system (1) is globally asymptotically synchronized if there exist positive-definite matrices P_m > 0 (m = 1, 2, …, s), N_1 > 0, Z_1 > 0, M_1 > 0, W_1 > 0, Q > 0, arbitrary matrices R_3, F_1, F_2, F_3, F_4 with appropriate dimensions, and positive scalars α_1, α_2, α_3, such that the LMI (4) holds for all 1 ≤ i < j ≤ N with r = 1 and n_1 = 1.

Remark 3 In Corollary 1, the delay-partition approach is not used to solve the synchronization problem of the MSCN (1); the upper bounds of the time-varying delays τ_1(t) and τ_2(t) are simply τ_{1M} and τ_{2M}. From the analysis in Remark 2, we know that τ_{1M} and τ_{2M} can be divided into r and n_1 slices by the delay-partition approach of Theorem 1. Therefore, the allowable upper bounds of the time-varying delays τ_1(t) and τ_2(t) in Corollary 1 are smaller than those in Theorem 1; that is, Theorem 1 is less conservative than Corollary 1.

4 Numerical examples

In this section, two numerical examples are given to illustrate the effectiveness of the derived results. The initial conditions of the numerical simulations are taken as x_1(0) = (1, 2)^T, x_2(0) = (3, 1)^T, x_3(0) = (2, 3)^T. The total synchronization error of the network is defined as e(t) = ∑_{1≤i<j≤3} ∑_{l=1}^{2} |x_i^{(l)}(t) − x_j^{(l)}(t)|. For a given transition rate matrix, a Markov chain can be generated. We consider the following transition rate matrix:

Π = [ −2  2 ; 3  −3 ].
(27)

The Markov chain r(t) is described in Figure 1.

Figure 1. The switching of the Markov chain r(t).
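Before turning to the examples, a minimal sketch (not from the paper) of how the total synchronization error e(t) defined above could be evaluated from simulated node trajectories; the array layout and the function name are assumptions:

```python
import numpy as np

def total_sync_error(traj):
    """e(t) = sum over 1 <= i < j <= N and l = 1..n of |x_i^(l)(t) - x_j^(l)(t)|."""
    num_steps, N, _ = traj.shape
    e = np.zeros(num_steps)
    for i in range(N):
        for j in range(i + 1, N):
            e += np.abs(traj[:, i, :] - traj[:, j, :]).sum(axis=1)
    return e

# Illustration with the initial states quoted above, held constant for 5 steps:
x0 = np.array([[1.0, 2.0], [3.0, 1.0], [2.0, 3.0]])
print(total_sync_error(np.tile(x0, (5, 1, 1))))
```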

Example 1 In this example, we investigate global synchronization of the MSCN (1) comprised of three coupled nodes:

x ˙ i ( t ) = A m f ( x i ( t ) ) + B m f ( x i ( t τ 1 ( t ) ) ) + c m ( 1 ) j = 1 3 a ( m ) i j ( 1 ) Γ m ( 1 ) x j ( t ) + c m ( 2 ) j = 1 3 a ( m ) i j ( 2 ) Γ m ( 2 ) x j ( t τ 2 ( t ) ) + c m ( 3 ) j = 1 3 a ( m ) i j ( 3 ) Γ m ( 3 ) t τ 3 ( t ) t x j ( s ) d s ,
(28)

where x_i(t) = (x_{i1}(t), x_{i2}(t))^T is the state variable of the i-th node (i = 1, 2, 3). All parameters are given as follows:

A 1 = [ 1 0 0 1 ] , A 2 = [ 1 0 1 0 ] , B 1 = [ 1 1 0 1 ] , B 2 = [ 1 1 1 0 ] , Γ 1 ( 1 ) = [ 1 0 0 1 ] , Γ 1 ( 2 ) = [ 1 0 1 1 ] , Γ 1 ( 3 ) = [ 1 1 1 1 ] , Γ 2 ( 1 ) = [ 1 0 1 1 ] , Γ 2 ( 2 ) = [ 1 1 1 1 ] , Γ 2 ( 3 ) = [ 1 1 0 1 ] , A 1 ( 1 ) = [ 1 1 0 1 2 1 0 1 1 ] , A 2 ( 1 ) = [ 1 0 1 0 1 1 1 1 2 ] , A 1 ( 2 ) = [ 2 1 1 1 2 1 1 1 2 ] , A 2 ( 2 ) = [ 0 0 0 0 0 0 0 0 0 ] , A 1 ( 3 ) = [ 0 0 0 0 1 1 0 1 1 ] , A 2 ( 3 ) = [ 2 1 1 1 1 0 1 0 1 ] , f ( x i ( t ) ) = [ 0.5 x i 1 ( t ) tanh ( 0.1 x i 1 ( t ) ) 0.4 x i 1 ( t ) tanh ( 0.3 x i 1 ( t ) ) ] , f ( x i ( t τ 1 ( t ) ) ) = [ 0.5 x i 1 ( t τ 1 ( t ) ) tanh ( 0.1 x i ( t τ 1 ( t ) ) ) 0.4 x i 1 ( t τ 1 ( t ) ) tanh ( 0.3 x i ( t τ 1 ( t ) ) ) ] , c 1 ( 1 ) = 1 , c 1 ( 2 ) = 1 , c 1 ( 3 ) = 1 , c 2 ( 1 ) = 1 , c 2 ( 2 ) = 1 , c 2 ( 3 ) = 1 , τ 3 ( t ) = 0.1 , t [ 0 , 20 ] .

For given μ_1 = 0, μ_2 = 0, r = 2, and n_1 = 2, combining the above parameters of system (28) and employing the LMI toolbox in MATLAB to solve the LMI of Theorem 1, a feasible solution is obtained as

P 1 = [ 0.4490 0.0105 0.0105 0.7049 ] , P 2 = [ 0.9249 0.2896 0.2896 0.4253 ] , N 1 = [ 0.9464 0.1793 0.1793 0.5468 ] , N 2 = [ 0.6475 0.0321 0.0321 0.7154 ] , Z 1 = [ 0.1263 0.0140 0.0140 0.1221 ] , Z 2 = [ 0.7166 0.0143 0.0143 0.7107 ] , M 1 = [ 0.9580 0.0040 0.0040 0.8392 ] , M 2 = [ 0.0970 0.0202 0.0202 0.0367 ] , W 1 = [ 0.2933 0.0108 0.0108 0.2822 ] , W 2 = [ 0.0970 0.0202 0.0202 0.0367 ] , Q = [ 0.4501 0.0038 0.0038 0.4041 ] , 0 < τ 1 M 0.44 , 0 < τ 2 M 0.44 , α 1 = 10.768 , α 2 = 9.387 , α 3 = 7.355 .

It is obvious that under the above feasible solution, system (28) is globally asymptotically synchronized. The simulation results of system (28) with τ 1 M = τ 2 M =0.44 are shown in Figures 2-3.

Figure 2. The synchronization trajectories of system (28) with τ_{1M} = τ_{2M} = 0.44 for Example 1.

Figure 3. The synchronization trajectories of system (28) with τ_{1M} = τ_{2M} = 0.44 for Example 1.

Example 2 In this example, in order to test Corollary 1, we set r = 1 and n_1 = 1. For given μ_1 = 0, μ_2 = 0, by using the LMI toolbox in MATLAB and Corollary 1, τ_{1M} and τ_{2M} must satisfy 0 < τ_{1M} ≤ 0.29 and 0 < τ_{2M} ≤ 0.29 if we still choose P_1, P_2, N_1, Z_1, M_1, W_1, Q, and system (28) as in Example 1. The simulation results of system (28) with τ_{1M} = τ_{2M} = 0.29 are shown in Figures 4-5.

Figure 4. The synchronization trajectories of system (28) with τ_{1M} = τ_{2M} = 0.29 for Example 2.

Figure 5. The total synchronization trajectory of system (28) with τ_{1M} = τ_{2M} = 0.29 for Example 2.

Remark 4 From Examples 1-2, it is clear that the values of τ_{1M} and τ_{2M} in Example 1 are larger than those in Example 2; that is, the allowable upper bounds of τ_1(t) and τ_2(t) in Example 1 are larger than those in Example 2. This further confirms that the analysis in Remarks 2-3 is reasonable.

5 Conclusions

In this paper, global synchronization for a new class of MSCNs with mixed time-varying delays has been studied via the delay-partition approach. Sufficient conditions for global synchronization of this class of MSCNs are derived by means of a new delay-partition approach, whose advantage is that the obtained results are less conservative. Two numerical examples demonstrate the effectiveness of the proposed theoretical results.