1 Introduction

Dynamical neural networks have recently received a great deal of attention due to their potential applications in image and signal processing, combinatorial optimization, pattern recognition, control engineering, and related areas. In the electronic implementation of analog neural networks, the finite switching speed of amplifiers introduces time delays during the processing and transmission of signals, and these delays may change the dynamical behavior of the network from stable to unstable. Therefore, it is important to take the effects of time delays into account in the dynamical analysis of neural networks. On the other hand, it is well known that some disturbances are unavoidable in the modeling and stability analysis of neural networks. The major disturbances occur within the network itself, mainly due to deviations in the values of the electronic components during implementation. Therefore, in recent years, many papers have focused on the existence, uniqueness, and global robust asymptotic stability of the equilibrium point in the presence of time delays and parameter uncertainties for various classes of nonlinear neural networks, and a variety of robust stability results have been reported [1–43].

In the current paper, we study the robust stability of a class of uncertain neural networks with multiple time delays. By using the homeomorphism mapping theorem and a suitable Lyapunov functional, we derive a new delay-independent sufficient condition for the global robust asymptotic stability of the equilibrium point for this class of neural networks. Three numerical examples are also presented to demonstrate the applicability of the condition and to show the advantages of our result over previously published robust stability results.

We use the following notation. Throughout this paper, the superscript $T$ denotes the transpose. $I$ stands for the identity matrix of appropriate dimension. For a vector $v=(v_1,v_2,\ldots,v_n)^T$, $|v|$ denotes $|v|=(|v_1|,|v_2|,\ldots,|v_n|)^T$. For any real matrix $Q=(q_{ij})_{n\times n}$, $|Q|$ denotes $|Q|=(|q_{ij}|)_{n\times n}$, and $\lambda_m(Q)$ and $\lambda_M(Q)$ denote the minimum and maximum eigenvalues of $Q$, respectively. If $Q=(q_{ij})_{n\times n}$ is a symmetric matrix, then $Q>0$ means that $Q$ is positive definite, i.e., all eigenvalues of $Q$ are real and positive. Let $P=(p_{ij})_{n\times n}$ and $Q=(q_{ij})_{n\times n}$ be two symmetric matrices. Then $P<Q$ means that $v^TPv<v^TQv$ for every real vector $v=(v_1,v_2,\ldots,v_n)^T$. A real matrix $P=(p_{ij})_{n\times n}$ is said to be nonnegative if $p_{ij}\ge 0$, $i,j=1,2,\ldots,n$. Let $P=(p_{ij})_{n\times n}$ and $Q=(q_{ij})_{n\times n}$ be two real matrices. Then $P\le Q$ means that $p_{ij}\le q_{ij}$, $i,j=1,2,\ldots,n$. We also recall the following vector and matrix norms:

$$\|v\|_1=\sum_{i=1}^{n}|v_i|,\qquad \|v\|_2=\sqrt{\sum_{i=1}^{n}v_i^2},\qquad \|v\|_\infty=\max_{1\le i\le n}|v_i|,$$
$$\|Q\|_1=\max_{1\le i\le n}\sum_{j=1}^{n}|q_{ji}|,\qquad \|Q\|_2=\sqrt{\lambda_{\max}\bigl(Q^TQ\bigr)},\qquad \|Q\|_\infty=\max_{1\le i\le n}\sum_{j=1}^{n}|q_{ij}|.$$
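
Since these norms are used repeatedly in what follows, a small numerical sketch may be helpful; the functions below (our own illustrative helpers, not from the paper) implement the three matrix norms exactly as defined and check them against NumPy's built-in matrix norms.

```python
import numpy as np

def mat_norm_1(Q):
    # max absolute column sum: max_i sum_j |q_ji|
    return np.abs(Q).sum(axis=0).max()

def mat_norm_2(Q):
    # spectral norm: sqrt of the largest eigenvalue of Q^T Q
    return np.sqrt(np.linalg.eigvalsh(Q.T @ Q).max())

def mat_norm_inf(Q):
    # max absolute row sum: max_i sum_j |q_ij|
    return np.abs(Q).sum(axis=1).max()

Q = np.array([[1.0, -2.0], [3.0, 4.0]])
assert np.isclose(mat_norm_1(Q), np.linalg.norm(Q, 1))
assert np.isclose(mat_norm_2(Q), np.linalg.norm(Q, 2))
assert np.isclose(mat_norm_inf(Q), np.linalg.norm(Q, np.inf))
```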

2 Preliminaries

The delayed neural network model we consider in this paper is described by the set of nonlinear differential equations of the form

$$\frac{dx_i(t)}{dt}=-c_ix_i(t)+\sum_{j=1}^{n}a_{ij}f_j\bigl(x_j(t)\bigr)+\sum_{j=1}^{n}b_{ij}f_j\bigl(x_j(t-\tau_{ij})\bigr)+u_i,$$
(2.1)

where $i=1,2,\ldots,n$; $n$ is the number of neurons; $x_i(t)$ denotes the state of neuron $i$ at time $t$; the $f_i(\cdot)$ denote the activation functions; $a_{ij}$ and $b_{ij}$ denote the strengths of connectivity between neurons $j$ and $i$ at time $t$ and $t-\tau_{ij}$, respectively; $\tau_{ij}$ represents the time delay required in transmitting a signal from neuron $j$ to neuron $i$; $u_i$ is the constant input to neuron $i$; and $c_i$ is the charging rate of neuron $i$.
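
To make the model concrete, the following is a minimal simulation sketch of (2.1), using a fixed-step Euler scheme with a history buffer for the delayed terms; all parameter values, and the choice of $\tanh$ activations, are illustrative assumptions rather than data from the paper.

```python
import numpy as np

# Illustrative Euler simulation of model (2.1); all parameter values and
# the choice f = tanh are assumptions for demonstration only.
n, dt, steps = 2, 1e-3, 5000
C = np.array([2.0, 3.0])                      # charging rates c_i > 0
A = np.array([[0.1, -0.2], [0.3, 0.1]])       # weights a_ij (undelayed)
B = np.array([[0.2, 0.1], [-0.1, 0.2]])       # weights b_ij (delayed)
tau = np.array([[0.10, 0.25], [0.15, 0.20]])  # delays tau_ij
u = np.array([0.5, -0.5])                     # constant inputs u_i
f = np.tanh                                   # activation in class K (k_i = 1)

lag = np.ceil(tau / dt).astype(int)           # per-pair delays in steps
T0 = lag.max()                                # length of the initial history
x = np.zeros((T0 + steps + 1, n))             # constant zero initial history
for t in range(T0, T0 + steps):
    delayed = np.array([sum(B[i, j] * f(x[t - lag[i, j], j]) for j in range(n))
                        for i in range(n)])
    x[t + 1] = x[t] + dt * (-C * x[t] + A @ f(x[t]) + delayed + u)
print("state at the end of the run:", x[-1])
```

A time step much smaller than the smallest delay keeps the history lookup accurate.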

In order to accomplish the objectives of this paper in the sense of robust stability of dynamical neural networks, we will first define the class of the activation functions that we will employ in the neural network model (2.1) and the parametric uncertainties of the system matrices A, B, and C.

The activation functions $f_i$ are assumed to be nondecreasing and slope-bounded; that is, there exist positive constants $k_i$ such that the following condition holds:

$$0\le\frac{f_i(x)-f_i(y)}{x-y}\le k_i,\quad i=1,2,\ldots,n,\ \forall x,y\in\mathbb{R},\ x\ne y.$$

This class of functions will be denoted by $f\in K$. For example, $f_i(x)=\tanh(x)$ belongs to $K$ with $k_i=1$.
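
The slope condition is also easy to verify numerically; the brief sketch below samples random pairs and checks the bound for $\tanh$ (with a small floating-point tolerance).

```python
import numpy as np

# Check the class-K slope condition 0 <= (f(x)-f(y))/(x-y) <= k for
# f = tanh and k = 1 on random sample pairs.
rng = np.random.default_rng(1)
x, y = rng.standard_normal(10**5), rng.standard_normal(10**5)
slopes = (np.tanh(x) - np.tanh(y)) / (x - y)
assert (slopes >= -1e-9).all() and (slopes <= 1 + 1e-9).all()
```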

We assume that the system matrices $A=(a_{ij})$, $B=(b_{ij})$, and $C=\operatorname{diag}(c_i)$ in (2.1) vary within the following intervals:

$$\begin{aligned}
C_I&:=\bigl\{C=\operatorname{diag}(c_i):0<\underline{C}\le C\le\overline{C},\ \text{i.e.},\ 0<\underline{c}_i\le c_i\le\overline{c}_i,\ i=1,2,\ldots,n\bigr\},\\
A_I&:=\bigl\{A=(a_{ij}):\underline{A}\le A\le\overline{A},\ \text{i.e.},\ \underline{a}_{ij}\le a_{ij}\le\overline{a}_{ij},\ i,j=1,2,\ldots,n\bigr\},\\
B_I&:=\bigl\{B=(b_{ij}):\underline{B}\le B\le\overline{B},\ \text{i.e.},\ \underline{b}_{ij}\le b_{ij}\le\overline{b}_{ij},\ i,j=1,2,\ldots,n\bigr\}.
\end{aligned}$$
(2.2)

In what follows, we will give some basic definitions and lemmas that will play an important role in the proof of our robust stability results.

Definition 2.1 (See [28])

Let $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ be an equilibrium point of the neural system (2.1). The neural network model (2.1) with the parameter ranges defined by (2.2) is globally asymptotically robust stable if $x^*$ is a unique and globally asymptotically stable equilibrium point of system (2.1) for all $C\in C_I$, $A\in A_I$, and $B\in B_I$.

Lemma 2.1 (See [29])

If $H(x)\in C^0$ satisfies the conditions $H(x)\ne H(y)$ for all $x\ne y$ and $\|H(x)\|\to\infty$ as $\|x\|\to\infty$, then $H(x)$ is a homeomorphism of $\mathbb{R}^n$.

Lemma 2.2 (See [1])

Let $x=(x_1,x_2,\ldots,x_n)^T\in\mathbb{R}^n$. If

$$A\in A_I:=\bigl\{A=(a_{ij}):\underline{A}\le A\le\overline{A},\ \text{i.e.},\ \underline{a}_{ij}\le a_{ij}\le\overline{a}_{ij},\ i,j=1,2,\ldots,n\bigr\},$$

then, for any positive diagonal matrix $P$ and any nonnegative diagonal matrix $\Upsilon$, the following inequality holds:

$$x^T\bigl(PA+A^TP\bigr)x\le x^T\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)x+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2\,x^Tx,$$

where $A^*=\frac{1}{2}(\overline{A}+\underline{A})$ and $A_*=\frac{1}{2}(\overline{A}-\underline{A})$.
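
Lemma 2.2 can be sanity-checked numerically: for random interval bounds, any $A$ drawn from the interval, and random vectors $x$, the right-hand side should dominate the left. The sketch below (with arbitrary test data) does exactly that.

```python
import numpy as np
rng = np.random.default_rng(0)

n = 4
A_lo = rng.uniform(-2, 0, (n, n)); A_hi = A_lo + rng.uniform(0, 2, (n, n))
A_star = (A_hi + A_lo) / 2            # A^* = (A_bar + A_under)/2
A_delta = (A_hi - A_lo) / 2           # A_* = (A_bar - A_under)/2
P = np.diag(rng.uniform(0.5, 2, n))   # positive diagonal P
U = np.diag(rng.uniform(0, 1, n))     # nonnegative diagonal Upsilon

for _ in range(1000):
    A = rng.uniform(A_lo, A_hi)       # any A inside the interval
    x = rng.standard_normal(n)
    lhs = x @ (P @ A + A.T @ P) @ x
    M1 = P @ (A_star - U) + (A_star - U).T @ P
    M2 = P @ (A_delta + U) + (A_delta + U).T @ P
    rhs = x @ M1 @ x + np.linalg.norm(M2, 2) * (x @ x)
    assert lhs <= rhs + 1e-9
```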

3 Robust stability analysis

In this section, we will present a new sufficient condition that guarantees the global robust asymptotic stability of the equilibrium point of the neural network model (2.1), which is stated in the following theorem.

Theorem 3.1 For the neural network model (2.1), assume that $f\in K$ and the network parameters satisfy (2.2). Then the neural network model (2.1) has a unique equilibrium point, which is globally asymptotically robust stable, for each $u$ if there exist a positive diagonal matrix $P$ and a nonnegative diagonal matrix $\Upsilon$ such that the following condition holds:

$$\Pi=2\underline{C}PK^{-1}-P(A^*-\Upsilon)-(A^*-\Upsilon)^TP-\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2I-2\sqrt{n}\,p_M\bigl(\rho_1\sqrt{\|R\|_\infty}+\rho_2\sqrt{\|R\|_1}\bigr)I>0,$$

where $K=\operatorname{diag}(k_i>0)$, $A^*=\frac{1}{2}(\overline{A}+\underline{A})$, $A_*=\frac{1}{2}(\overline{A}-\underline{A})$, $p_M=\max(p_i)$, $R=(r_{ij})_{n\times n}$ with $r_{ij}=\hat b_{ij}^2$, where $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$, and $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$.
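
In computational terms, the condition of Theorem 3.1 amounts to a positive-definiteness test. The sketch below encodes our reading of the criterion (the function name and argument layout are ours); under this reading it reproduces the thresholds computed in the examples of Section 4: for instance, with the data of Example 4.2 and $\rho_1=0$, it returns True at $c_m=7.2$.

```python
import numpy as np

def theorem31_holds(C_lo, A_lo, A_hi, B_lo, B_hi, K, P, U, rho1):
    """Numerically check the condition of Theorem 3.1 (a sketch of our
    reading of the criterion); rho1 must be 0 or 1, and rho2 = 1 - rho1."""
    n = A_lo.shape[0]
    rho2 = 1 - rho1
    A_star = (A_hi + A_lo) / 2
    A_delta = (A_hi - A_lo) / 2
    R = np.maximum(np.abs(B_lo), np.abs(B_hi)) ** 2   # r_ij = b_hat_ij^2
    norm1 = np.abs(R).sum(axis=0).max()               # ||R||_1
    norminf = np.abs(R).sum(axis=1).max()             # ||R||_inf
    pM = P.diagonal().max()
    M1 = P @ (A_star - U) + (A_star - U).T @ P
    M2 = P @ (A_delta + U) + (A_delta + U).T @ P
    Pi = (2 * np.diag(C_lo) @ P @ np.linalg.inv(np.diag(K))
          - M1 - np.linalg.norm(M2, 2) * np.eye(n)
          - 2 * np.sqrt(n) * pM * (rho1 * np.sqrt(norminf)
                                   + rho2 * np.sqrt(norm1)) * np.eye(n))
    return np.linalg.eigvalsh(Pi).min() > 0           # Pi > 0 ?
```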

Proof We will first prove the existence and uniqueness of the equilibrium point of system (2.1) by making use of the homeomorphism mapping theorem defined in Lemma 2.1. To this end, we define the mapping associated with system (2.1) as

$$H(x)=-Cx+Af(x)+Bf(x)+u.$$
(3.1)

We point out here that if $x^*$ is an equilibrium point of the neural network model (2.1), then, by definition, $x^*$ satisfies the following equilibrium equation:

$$-Cx^*+Af\bigl(x^*\bigr)+Bf\bigl(x^*\bigr)+u=0.$$

Therefore, every solution of the equation $H(x)=0$ is an equilibrium point of system (2.1). Hence, if we show that $H(x)$ is a homeomorphism of $\mathbb{R}^n$, then we can conclude that $H(x)=0$ has a unique solution for each $u$. In order to prove that $H(x)$ is a homeomorphism of $\mathbb{R}^n$, we choose two real vectors $x,y\in\mathbb{R}^n$ such that $x\ne y$. In this case, we can write the following equation for $H(x)$ given by (3.1):

$$H(x)-H(y)=-C(x-y)+A\bigl(f(x)-f(y)\bigr)+B\bigl(f(x)-f(y)\bigr).$$
(3.2)

Let $x\ne y$. If $f(x)-f(y)=0$ when $x\ne y$, then (3.2) takes the form

$$H(x)-H(y)=-C(x-y),$$

from which it follows that $H(x)\ne H(y)$ whenever $x-y\ne0$, since $C$ is a positive diagonal matrix. Now assume that $f(x)-f(y)\ne0$ when $x-y\ne0$. In this case, if we multiply both sides of (3.2) by $2(f(x)-f(y))^TP$, we obtain

$$\begin{aligned}
2\bigl(f(x)-f(y)\bigr)^TP\bigl(H(x)-H(y)\bigr)={}&-2\bigl(f(x)-f(y)\bigr)^TPC(x-y)+2\bigl(f(x)-f(y)\bigr)^TPA\bigl(f(x)-f(y)\bigr)\\
&+2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)\\
={}&-2\bigl(f(x)-f(y)\bigr)^TPC(x-y)+\bigl(f(x)-f(y)\bigr)^T\bigl(PA+A^TP\bigr)\bigl(f(x)-f(y)\bigr)\\
&+2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr),
\end{aligned}$$
(3.3)

where $P=\operatorname{diag}(p_i>0)$ is a positive diagonal matrix.

In the light of Lemma 2.2, we can write

$$\begin{aligned}
\bigl(f(x)-f(y)\bigr)^T\bigl(PA+A^TP\bigr)\bigl(f(x)-f(y)\bigr)\le{}&\bigl(f(x)-f(y)\bigr)^T\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)\bigl(f(x)-f(y)\bigr)\\
&+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr),
\end{aligned}$$
(3.4)

where $\Upsilon$ is a nonnegative diagonal matrix.

$f\in K$ implies that

$$2\bigl(f(x)-f(y)\bigr)^TPC(x-y)=2\sum_{i=1}^{n}p_ic_i\bigl(f_i(x_i)-f_i(y_i)\bigr)(x_i-y_i)\ge2\sum_{i=1}^{n}\frac{p_i\underline{c}_i}{k_i}\bigl(f_i(x_i)-f_i(y_i)\bigr)^2=2\bigl(f(x)-f(y)\bigr)^T\underline{C}PK^{-1}\bigl(f(x)-f(y)\bigr).$$
(3.5)

We also note the following inequality:

$$\begin{aligned}
2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)&=\sum_{i=1}^{n}\sum_{j=1}^{n}2p_ib_{ij}\bigl(f_i(x_i)-f_i(y_i)\bigr)\bigl(f_j(x_j)-f_j(y_j)\bigr)\\
&\le(\rho_1+\rho_2)p_M\sum_{i=1}^{n}\sum_{j=1}^{n}2\hat b_{ij}\bigl|f_i(x_i)-f_i(y_i)\bigr|\bigl|f_j(x_j)-f_j(y_j)\bigr|\\
&=\rho_1p_M\sum_{i=1}^{n}\sum_{j=1}^{n}2\hat b_{ij}\bigl|f_i(x_i)-f_i(y_i)\bigr|\bigl|f_j(x_j)-f_j(y_j)\bigr|\\
&\quad+\rho_2p_M\sum_{i=1}^{n}\sum_{j=1}^{n}2\hat b_{ji}\bigl|f_i(x_i)-f_i(y_i)\bigr|\bigl|f_j(x_j)-f_j(y_j)\bigr|,
\end{aligned}$$
(3.6)

where $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$, $p_M=\max(p_i)$, and $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$.

Applying the elementary inequality $2|a||b|\le\alpha a^2+\frac{1}{\alpha}b^2$, which holds for any $\alpha>0$, to each term of (3.6), we can write (3.6) in the following form:

$$\begin{aligned}
2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)\le{}&\rho_1p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\alpha\hat b_{ij}^2\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\alpha}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\
&+\rho_2p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\beta\hat b_{ji}^2\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\beta}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\
={}&\rho_1p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\alpha r_{ij}\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\alpha}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\
&+\rho_2p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\beta r_{ji}\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\beta}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\
\le{}&\rho_1p_M\Bigl(\alpha\|R\|_\infty\bigl\|f(x)-f(y)\bigr\|_2^2+\frac{n}{\alpha}\bigl\|f(x)-f(y)\bigr\|_2^2\Bigr)\\
&+\rho_2p_M\Bigl(\beta\|R\|_1\bigl\|f(x)-f(y)\bigr\|_2^2+\frac{n}{\beta}\bigl\|f(x)-f(y)\bigr\|_2^2\Bigr),
\end{aligned}$$
(3.7)

where $\alpha$ and $\beta$ are some positive constants. Letting $\alpha=\sqrt{n/\|R\|_\infty}$ and $\beta=\sqrt{n/\|R\|_1}$ in (3.7) yields

$$2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)\le2\sqrt{n\|R\|_\infty}\,p_M\rho_1\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr)+2\sqrt{n\|R\|_1}\,p_M\rho_2\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr).$$
(3.8)

Using (3.4), (3.5), and (3.8) in (3.3) results in

$$\begin{aligned}
2\bigl(f(x)-f(y)\bigr)^TP\bigl(H(x)-H(y)\bigr)\le{}&-2\bigl(f(x)-f(y)\bigr)^T\underline{C}PK^{-1}\bigl(f(x)-f(y)\bigr)\\
&+\bigl(f(x)-f(y)\bigr)^T\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)\bigl(f(x)-f(y)\bigr)\\
&+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr)\\
&+2\sqrt{n\|R\|_\infty}\,p_M\rho_1\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr)\\
&+2\sqrt{n\|R\|_1}\,p_M\rho_2\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr),
\end{aligned}$$

which can be written in the form

$$2\bigl(f(x)-f(y)\bigr)^TP\bigl(H(x)-H(y)\bigr)\le-\bigl(f(x)-f(y)\bigr)^T\Pi\bigl(f(x)-f(y)\bigr).$$
(3.9)

For activation functions belonging to the class $K$, it has been shown in [27] that, for an inequality of the form (3.9), if $\Pi>0$, then $H(x)\ne H(y)$ for all $x\ne y$ and $\|H(x)\|\to\infty$ as $\|x\|\to\infty$. Hence, we have proved that the map $H(x):\mathbb{R}^n\to\mathbb{R}^n$ is a homeomorphism of $\mathbb{R}^n$, meaning that the condition of Theorem 3.1 implies the existence and uniqueness of the equilibrium point for the neural network model (2.1).

It will now be shown that the condition obtained for the existence and uniqueness of the equilibrium point of the neural network model (2.1) in Theorem 3.1 also implies the global asymptotic stability of the equilibrium point. To this end, we shift the equilibrium point $x^*$ of system (2.1) to the origin. The transformation $z_i(\cdot)=x_i(\cdot)-x_i^*$, $i=1,2,\ldots,n$, puts the network model (2.1) into the following form:

$$\dot z_i(t)=-c_iz_i(t)+\sum_{j=1}^{n}a_{ij}g_j\bigl(z_j(t)\bigr)+\sum_{j=1}^{n}b_{ij}g_j\bigl(z_j(t-\tau_{ij})\bigr),\quad i=1,2,\ldots,n,$$
(3.10)

where $g_i(z_i(\cdot))=f_i(z_i(\cdot)+x_i^*)-f_i(x_i^*)$, $i=1,2,\ldots,n$, satisfies the following property:

$$0\le\frac{g_i(z)}{z}\le k_i,\quad \forall z\in\mathbb{R},\ z\ne0,\quad\text{and}\quad g_i(0)=0,\ i=1,2,\ldots,n.$$

Note that the equilibrium and stability properties of the neural network models (2.1) and (3.10) are identical. Therefore, proving the asymptotic stability of the origin of system (3.10) directly implies the asymptotic stability of $x^*$. Now consider the following positive definite Lyapunov functional for system (3.10):

$$V\bigl(z(t)\bigr)=\sum_{i=1}^{n}z_i^2(t)+2\varepsilon\sum_{i=1}^{n}p_i\int_0^{z_i(t)}g_i(s)\,ds+\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\gamma+\frac{n}{\underline{c}_i}\hat b_{ij}^2+\rho_1\varepsilon\frac{p_M}{\alpha}+\rho_2\varepsilon\beta p_M\hat b_{ij}^2\Bigr)\int_{t-\tau_{ij}}^{t}g_j^2\bigl(z_j(\xi)\bigr)\,d\xi,$$

where $p_i$, $\alpha$, $\beta$, $\gamma$, and $\varepsilon$ are positive constants to be determined later. The time derivative of this functional along the trajectories of system (3.10) is obtained as follows:

$$\begin{aligned}
\dot V\bigl(z(t)\bigr)={}&-2\sum_{i=1}^{n}c_iz_i^2(t)+\sum_{i=1}^{n}\sum_{j=1}^{n}2a_{ij}z_i(t)g_j\bigl(z_j(t)\bigr)+\sum_{i=1}^{n}\sum_{j=1}^{n}2b_{ij}z_i(t)g_j\bigl(z_j(t-\tau_{ij})\bigr)\\
&-2\varepsilon\sum_{i=1}^{n}p_ic_iz_i(t)g_i\bigl(z_i(t)\bigr)+\varepsilon\sum_{i=1}^{n}\sum_{j=1}^{n}2p_ia_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t)\bigr)\\
&+\varepsilon\sum_{i=1}^{n}\sum_{j=1}^{n}2p_ib_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t-\tau_{ij})\bigr)+\rho_1\varepsilon p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{\alpha}g_j^2\bigl(z_j(t)\bigr)\\
&-\rho_1\varepsilon p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{\alpha}g_j^2\bigl(z_j(t-\tau_{ij})\bigr)+\gamma\sum_{i=1}^{n}\sum_{j=1}^{n}g_j^2\bigl(z_j(t)\bigr)-\gamma\sum_{i=1}^{n}\sum_{j=1}^{n}g_j^2\bigl(z_j(t-\tau_{ij})\bigr)\\
&+\rho_2\varepsilon p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\beta\hat b_{ij}^2g_j^2\bigl(z_j(t)\bigr)-\rho_2\varepsilon p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\beta\hat b_{ij}^2g_j^2\bigl(z_j(t-\tau_{ij})\bigr)\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{\underline{c}_i}\hat b_{ij}^2g_j^2\bigl(z_j(t)\bigr)-\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{\underline{c}_i}\hat b_{ij}^2g_j^2\bigl(z_j(t-\tau_{ij})\bigr).
\end{aligned}$$
(3.11)

We note that the following inequalities hold:

$$\sum_{i=1}^{n}\sum_{j=1}^{n}2a_{ij}z_i(t)g_j\bigl(z_j(t)\bigr)\le\sum_{i=1}^{n}c_iz_i^2(t)+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{\underline{c}_i}\hat a_{ij}^2g_j^2\bigl(z_j(t)\bigr),$$
(3.12)
$$\sum_{i=1}^{n}\sum_{j=1}^{n}2b_{ij}z_i(t)g_j\bigl(z_j(t-\tau_{ij})\bigr)\le\sum_{i=1}^{n}c_iz_i^2(t)+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{\underline{c}_i}\hat b_{ij}^2g_j^2\bigl(z_j(t-\tau_{ij})\bigr),$$
(3.13)
$$\begin{aligned}
\sum_{i=1}^{n}\sum_{j=1}^{n}2p_ib_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t-\tau_{ij})\bigr)\le{}&\rho_1p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\alpha\hat b_{ij}^2g_i^2\bigl(z_i(t)\bigr)+\frac{1}{\alpha}g_j^2\bigl(z_j(t-\tau_{ij})\bigr)\Bigr)\\
&+\rho_2p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\frac{1}{\beta}g_j^2\bigl(z_j(t)\bigr)+\beta\hat b_{ji}^2g_i^2\bigl(z_i(t-\tau_{ji})\bigr)\Bigr).
\end{aligned}$$
(3.14)

where $\hat a_{ij}=\max\{|\underline{a}_{ij}|,|\overline{a}_{ij}|\}$. For $g\in K$, we also have

$$2\sum_{i=1}^{n}p_ic_iz_i(t)g_i\bigl(z_i(t)\bigr)\ge2\sum_{i=1}^{n}\frac{p_ic_i}{k_i}g_i^2\bigl(z_i(t)\bigr)\ge2g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr).$$
(3.15)

In the light of Lemma 2.2, we can write

$$\begin{aligned}
\sum_{i=1}^{n}\sum_{j=1}^{n}2p_ia_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t)\bigr)&=g^T\bigl(z(t)\bigr)\bigl(PA+A^TP\bigr)g\bigl(z(t)\bigr)\\
&\le g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)\\
&\quad+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr).
\end{aligned}$$
(3.16)

Using (3.12)-(3.16) in (3.11) yields

$$\begin{aligned}
\dot V\bigl(z(t)\bigr)\le{}&\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{c_m}\hat a_M^2g_j^2\bigl(z_j(t)\bigr)+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{c_m}\hat b_M^2g_j^2\bigl(z_j(t)\bigr)-2\varepsilon g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr)\\
&+\varepsilon g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)+\varepsilon\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\
&+\varepsilon\rho_1p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha r_{ij}g_i^2\bigl(z_i(t)\bigr)+\varepsilon\rho_1p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{\alpha}g_j^2\bigl(z_j(t)\bigr)\\
&+\varepsilon\rho_2p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{\beta}g_j^2\bigl(z_j(t)\bigr)+\varepsilon\rho_2p_M\sum_{i=1}^{n}\sum_{j=1}^{n}\beta r_{ji}g_i^2\bigl(z_i(t)\bigr)\\
&+\gamma\sum_{i=1}^{n}\sum_{j=1}^{n}g_j^2\bigl(z_j(t)\bigr)-\gamma\sum_{i=1}^{n}\sum_{j=1}^{n}g_j^2\bigl(z_j(t-\tau_{ij})\bigr),
\end{aligned}$$
(3.17)

where $c_m=\min\{\underline{c}_i\}$, $\hat a_M=\max\{\hat a_{ij}\}$, and $\hat b_M=\max\{\hat b_{ij}\}$. We now note the following inequalities:

$$\sum_{i=1}^{n}\sum_{j=1}^{n}r_{ji}g_i^2\bigl(z_i(t)\bigr)\le\|R\|_1\sum_{i=1}^{n}g_i^2\bigl(z_i(t)\bigr)=\|R\|_1g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr),$$
(3.18)
$$\sum_{i=1}^{n}\sum_{j=1}^{n}r_{ij}g_i^2\bigl(z_i(t)\bigr)\le\|R\|_\infty\sum_{i=1}^{n}g_i^2\bigl(z_i(t)\bigr)=\|R\|_\infty g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr),$$
(3.19)
$$\sum_{i=1}^{n}\sum_{j=1}^{n}g_j^2\bigl(z_j(t)\bigr)=ng^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr).$$
(3.20)

Using (3.18)-(3.20) in (3.17) leads to

$$\begin{aligned}
\dot V\bigl(z(t)\bigr)\le{}&\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{c_m}\hat a_M^2g_j^2\bigl(z_j(t)\bigr)+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{n}{c_m}\hat b_M^2g_j^2\bigl(z_j(t)\bigr)-2\varepsilon g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr)\\
&+\gamma\sum_{i=1}^{n}\sum_{j=1}^{n}g_j^2\bigl(z_j(t)\bigr)+\varepsilon g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)\\
&+\varepsilon\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+\varepsilon\rho_1p_M\alpha\|R\|_\infty g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\
&+\varepsilon\rho_1p_M\frac{n}{\alpha}g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+\varepsilon\rho_2p_M\frac{n}{\beta}g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+\varepsilon\rho_2p_M\beta\|R\|_1g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr).
\end{aligned}$$
(3.21)

Let $\gamma=\frac{n}{c_m}(\hat a_M^2+\hat b_M^2)$, $\alpha=\sqrt{n/\|R\|_\infty}$, and $\beta=\sqrt{n/\|R\|_1}$. Then (3.21) takes the form

$$\begin{aligned}
\dot V\bigl(z(t)\bigr)\le{}&\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)-2\varepsilon g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr)\\
&+\varepsilon g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)+\varepsilon\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\
&+2\varepsilon\rho_1p_M\sqrt{n\|R\|_\infty}\,g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+2\varepsilon\rho_2p_M\sqrt{n\|R\|_1}\,g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\
={}&\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)-\varepsilon g^T\bigl(z(t)\bigr)\Pi g\bigl(z(t)\bigr)\\
\le{}&\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)\bigl\|g\bigl(z(t)\bigr)\bigr\|_2^2-\varepsilon\lambda_m(\Pi)\bigl\|g\bigl(z(t)\bigr)\bigr\|_2^2\\
={}&\Bigl[\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)-\varepsilon\lambda_m(\Pi)\Bigr]\bigl\|g\bigl(z(t)\bigr)\bigr\|_2^2.
\end{aligned}$$
(3.22)

It has been shown in [27] that, for the Lyapunov functional defined above, if its time derivative is of the form (3.22) and $\varepsilon>\frac{2n^2(\hat a_M^2+\hat b_M^2)}{c_m\lambda_m(\Pi)}$ with $\Pi$ positive definite, then the origin of system (3.10), or equivalently the equilibrium point of system (2.1), is globally asymptotically stable. Hence, we have shown that the condition of Theorem 3.1 implies the global robust asymptotic stability of system (2.1). □

4 Comparison and examples

In this section, we present numerical examples to demonstrate the effectiveness and applicability of the proposed condition and to show the advantages of our result over the corresponding robust stability results previously derived in the literature. In order to make a precise comparison, we first restate those results.

Theorem 4.1 (See [27])

For the neural network model (2.1), assume that $f\in K$ and the network parameters satisfy (2.2). Then the neural network model (2.1) is globally asymptotically robust stable if there exists a positive diagonal matrix $P=\operatorname{diag}(p_i>0)$ such that

$$\Lambda=2\underline{C}PK^{-1}-S-2\sqrt{n}\,p_M\bigl(\rho_1\sqrt{\|R\|_\infty}+\rho_2\sqrt{\|R\|_1}\bigr)I>0,$$

where $K=\operatorname{diag}(k_i>0)$ is a positive diagonal matrix, $S=(s_{ij})_{n\times n}$ with $s_{ii}=2p_i\overline{a}_{ii}$ and $s_{ij}=\max(|p_i\overline{a}_{ij}+p_j\overline{a}_{ji}|,|p_i\underline{a}_{ij}+p_j\underline{a}_{ji}|)$ for $i\ne j$, $p_M=\max(p_i)$, $R=(r_{ij})_{n\times n}$ with $r_{ij}=\hat b_{ij}^2$ and $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$, and $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$.

Theorem 4.2 (See [27])

For the neural network model (2.1), assume that $f\in K$ and the network parameters satisfy (2.2). Then the neural network model (2.1) is globally asymptotically robust stable if there exists a positive diagonal matrix $P=\operatorname{diag}(p_i>0)$ such that

$$\Gamma=2\underline{C}PK^{-1}-PA^*-A^{*T}P-\bigl\|PA_*+A_*^TP\bigr\|_2I-2\sqrt{n}\,p_M\bigl(\rho_1\sqrt{\|R\|_\infty}+\rho_2\sqrt{\|R\|_1}\bigr)I>0,$$

where $K=\operatorname{diag}(k_i>0)$ is a positive diagonal matrix, $p_M=\max(p_i)$, $R=(r_{ij})_{n\times n}$ with $r_{ij}=\hat b_{ij}^2$ and $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$, $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$, and $A^*=\frac{1}{2}(\overline{A}+\underline{A})$, $A_*=\frac{1}{2}(\overline{A}-\underline{A})$.

Theorem 4.3 (See [30])

For the neural network model (2.1), assume that $f\in K$ and the network parameters satisfy (2.2). Then the neural network model (2.1) is globally asymptotically robust stable if there exist positive constants $\alpha_i$, $i=1,2,\ldots,n$, such that

$$\alpha_i\bigl(\underline{c}_i\mu_i-\overline{a}_{ii}\bigr)-\sum_{\substack{j=1\\ j\ne i}}^{n}\alpha_j\hat a_{ji}-\sum_{j=1}^{n}\alpha_j\hat b_{ji}>0,\quad i=1,2,\ldots,n,$$

where $\hat a_{ji}=\max\{|\underline{a}_{ji}|,|\overline{a}_{ji}|\}$ and $\hat b_{ji}=\max\{|\underline{b}_{ji}|,|\overline{b}_{ji}|\}$.

We will now consider the following examples.

Example 4.1 Consider the neural system (2.1) with the following network parameters:

$$\underline{A}=\begin{bmatrix}1&-1&-1&-1\\0&1&0&-1\\-2&-1&1&-1\\-2&-2&0&1\end{bmatrix},\quad \overline{A}=\begin{bmatrix}1&1&1&1\\2&1&2&1\\0&1&1&1\\0&0&2&1\end{bmatrix},\quad \underline{B}=\begin{bmatrix}-1&-1&-1&-1\\-2&-2&-2&-2\\-2&-2&-2&-2\\-4&-4&-4&-4\end{bmatrix},\quad \overline{B}=\begin{bmatrix}1&1&1&1\\2&2&2&2\\2&2&2&2\\4&4&4&4\end{bmatrix},$$
$$k_1=k_2=k_3=k_4=1,\qquad \underline{c}_1=\underline{c}_2=\underline{c}_3=\underline{c}_4=c_m,$$

from which we obtain the following matrices:

$$A^*=\begin{bmatrix}1&0&0&0\\1&1&1&0\\-1&0&1&0\\-1&-1&1&1\end{bmatrix},\quad A_*=\begin{bmatrix}0&1&1&1\\1&0&1&1\\1&1&0&1\\1&1&1&0\end{bmatrix},\quad \hat B=\begin{bmatrix}1&1&1&1\\2&2&2&2\\2&2&2&2\\4&4&4&4\end{bmatrix},\quad R=\begin{bmatrix}1&1&1&1\\4&4&4&4\\4&4&4&4\\16&16&16&16\end{bmatrix}.$$

We note that $\|R\|_1=25$ and $\|R\|_\infty=64$. We consider the special case of Theorem 4.1 where $P=I$, in which case the matrix $S$ in Theorem 4.1 is of the form

$$S=\begin{bmatrix}2&3&3&3\\3&2&3&3\\3&3&2&3\\3&3&3&2\end{bmatrix}.$$

Then, for $\rho_1=0$ and $\rho_2=1$, $\Lambda$ in Theorem 4.1 is obtained as follows:

$$\Lambda=2c_mI-S-2\sqrt{n\|R\|_1}\,I=2\begin{bmatrix}c_m-11&-1.5&-1.5&-1.5\\-1.5&c_m-11&-1.5&-1.5\\-1.5&-1.5&c_m-11&-1.5\\-1.5&-1.5&-1.5&c_m-11\end{bmatrix}.$$

Note that $\Lambda>0$ if and only if $c_m>15.5$. Hence, the robust stability condition imposed by Theorem 4.1 is $c_m>15.5$. For the same network parameters, we now obtain the robust stability condition imposed by Theorem 3.1. Let $P=I$ and

$$\Upsilon=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$

Then we obtain

$$P(A^*-\Upsilon)+(A^*-\Upsilon)^TP=\begin{bmatrix}0&1&-1&-1\\1&0&1&-1\\-1&1&0&1\\-1&-1&1&0\end{bmatrix},\qquad P(A_*+\Upsilon)+(A_*+\Upsilon)^TP=\begin{bmatrix}2&2&2&2\\2&2&2&2\\2&2&2&2\\2&2&2&2\end{bmatrix}.$$

Then, for $\rho_1=0$ and $\rho_2=1$, $\Pi$ in Theorem 3.1 takes the form

$$\Pi=\begin{bmatrix}2c_m-28&-1&1&1\\-1&2c_m-28&-1&1\\1&-1&2c_m-28&-1\\1&1&-1&2c_m-28\end{bmatrix}.$$

It can be calculated that $\Pi>0$ if and only if $2c_m-28>\sqrt{5}$, i.e., $c_m>15.12$. Therefore, Theorem 3.1 imposes a less restrictive stability condition on the network parameters than Theorem 4.1 does.
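
Both thresholds are easy to confirm numerically under our reconstruction of the signed matrices above; the sketch below evaluates the smallest eigenvalues of $\Lambda$ and $\Pi$ for a few trial values of $c_m$.

```python
import numpy as np

# Numerical confirmation of Example 4.1 (using our reconstruction of the
# matrices): Pi > 0 already near c_m = 15.12, Lambda > 0 only for c_m > 15.5.
S = 2 * np.eye(4) + 3 * (np.ones((4, 4)) - np.eye(4))
M1 = np.array([[0, 1, -1, -1], [1, 0, 1, -1],
               [-1, 1, 0, 1], [-1, -1, 1, 0]])   # P(A*-U) + (A*-U)^T P
for cm in (15.12, 15.5, 15.6):
    Lam = 2 * cm * np.eye(4) - S - 2 * np.sqrt(4 * 25) * np.eye(4)
    Pi = (2 * cm - 28) * np.eye(4) - M1
    print(cm, np.linalg.eigvalsh(Lam).min(), np.linalg.eigvalsh(Pi).min())
# Lambda's smallest eigenvalue crosses zero at c_m = 15.5, Pi's near 15.118.
```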

Example 4.2 Consider the neural system (2.1) with the following network parameters:

$$\underline{A}=\begin{bmatrix}-2&-1&-1&-1\\-1&0&-1&-1\\-1&-1&0&-1\\-1&-1&-1&0\end{bmatrix},\quad \overline{A}=\begin{bmatrix}0&1&1&1\\1&0&1&1\\1&1&0&1\\1&1&1&0\end{bmatrix},\quad \underline{B}=\begin{bmatrix}-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\end{bmatrix},\quad \overline{B}=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$
$$k_1=k_2=k_3=k_4=1,\qquad \underline{c}_1=\underline{c}_2=\underline{c}_3=\underline{c}_4=c_m.$$

From the above matrices, we can obtain the following matrices:

$$A^*=\begin{bmatrix}-1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix},\quad A_*=\begin{bmatrix}1&1&1&1\\1&0&1&1\\1&1&0&1\\1&1&1&0\end{bmatrix},\quad \hat B=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},\quad R=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$

from which we can calculate the norms $\|R\|_1=4$ and $\|R\|_\infty=4$. Let $P=I$ and

$$\Upsilon=\begin{bmatrix}0&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$

Then we can write

$$P(A^*-\Upsilon)+(A^*-\Upsilon)^TP=\begin{bmatrix}-2&0&0&0\\0&-2&0&0\\0&0&-2&0\\0&0&0&-2\end{bmatrix},\qquad P(A_*+\Upsilon)+(A_*+\Upsilon)^TP=\begin{bmatrix}2&2&2&2\\2&2&2&2\\2&2&2&2\\2&2&2&2\end{bmatrix}.$$

For $\rho_1=0$ and $\rho_2=1$, the matrix $\Pi$ in Theorem 3.1 takes the form

$$\Pi=2\begin{bmatrix}c_m-7&0&0&0\\0&c_m-7&0&0\\0&0&c_m-7&0\\0&0&0&c_m-7\end{bmatrix}.$$

The condition $\Pi>0$ is satisfied if and only if $c_m>7$. For the parameters of this example, $\Gamma$ in Theorem 4.2 is of the form

$$\Gamma=2c_mI-A^*-A^{*T}-\bigl\|A_*+A_*^T\bigr\|_2I-4\sqrt{\|R\|_1}\,I\approx2\begin{bmatrix}c_m-6.3&0&0&0\\0&c_m-7.3&0&0\\0&0&c_m-7.3&0\\0&0&0&c_m-7.3\end{bmatrix}.$$

The choice $c_m>7.3$ implies that $\Gamma>0$, which ensures the global robust stability of the neural system (2.1). Hence, for the network parameters of this example, if $7<c_m\le7.3$, then the result of Theorem 4.2 does not hold, whereas the result of Theorem 3.1 is still applicable.
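
As a quick numerical cross-check of this comparison (again under our reconstruction), at $c_m=7.2$ the matrix $\Pi$ is positive definite while $\Gamma$ is not:

```python
import numpy as np

# Example 4.2 at c_m = 7.2: Theorem 3.1 holds, Theorem 4.2 does not.
cm = 7.2
A_star = np.diag([-1.0, 0, 0, 0])
A_delta = np.ones((4, 4)) - np.diag([0, 1.0, 1.0, 1.0])
Pi = 2 * (cm - 7) * np.eye(4)                    # diagonal form derived above
Gam = (2 * cm * np.eye(4) - A_star - A_star.T
       - np.linalg.norm(A_delta + A_delta.T, 2) * np.eye(4)
       - 4 * np.sqrt(4) * np.eye(4))             # 4*sqrt(||R||_1), ||R||_1 = 4
print(np.linalg.eigvalsh(Pi).min() > 0)   # True
print(np.linalg.eigvalsh(Gam).min() > 0)  # False
```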

Example 4.3 Assume that the network parameters of neural system (2.1) are given as follows:

$$\underline{A}=\begin{bmatrix}1&-1&-1&-1\\-3&1&-1&-1\\-3&-3&1&-1\\-3&-3&-3&1\end{bmatrix},\quad \overline{A}=\begin{bmatrix}3&3&3&3\\1&3&3&3\\1&1&3&3\\1&1&1&3\end{bmatrix},\quad \underline{B}=\begin{bmatrix}-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\end{bmatrix},\quad \overline{B}=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$
$$k_1=k_2=k_3=k_4=1,\qquad \underline{c}_1=\underline{c}_2=\underline{c}_3=\underline{c}_4=c_m.$$

For the matrices given above, we can obtain the following matrices:

$$A^*=\begin{bmatrix}2&1&1&1\\-1&2&1&1\\-1&-1&2&1\\-1&-1&-1&2\end{bmatrix},\quad A_*=\begin{bmatrix}1&2&2&2\\2&1&2&2\\2&2&1&2\\2&2&2&1\end{bmatrix},\quad \hat A=\begin{bmatrix}3&3&3&3\\3&3&3&3\\3&3&3&3\\3&3&3&3\end{bmatrix},\quad \hat B=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},\quad R=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$

from which we calculate $\|R\|_1=4$ and $\|R\|_\infty=4$. Let $P=I$ and

$$\Upsilon=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$

We have

$$P(A^*-\Upsilon)+(A^*-\Upsilon)^TP=\begin{bmatrix}2&0&0&0\\0&2&0&0\\0&0&2&0\\0&0&0&2\end{bmatrix},\qquad P(A_*+\Upsilon)+(A_*+\Upsilon)^TP=\begin{bmatrix}4&4&4&4\\4&4&4&4\\4&4&4&4\\4&4&4&4\end{bmatrix}.$$

For $\rho_1=0$ and $\rho_2=1$, the matrix $\Pi$ in Theorem 3.1 is of the form

$$\Pi=2\begin{bmatrix}c_m-13&0&0&0\\0&c_m-13&0&0\\0&0&c_m-13&0\\0&0&0&c_m-13\end{bmatrix}.$$

Clearly, $\Pi>0$ holds if and only if $c_m>13$. Therefore, for this example, Theorem 3.1 ensures the global robust stability of the neural system (2.1) under the condition $c_m>13$.

When checking the condition of Theorem 4.3 for the same network parameters, we search for positive constants $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$ such that the following conditions hold:

$$\begin{aligned}
c_m\alpha_1-4\alpha_1-4\alpha_2-4\alpha_3-4\alpha_4&>0,\\
-4\alpha_1+c_m\alpha_2-4\alpha_2-4\alpha_3-4\alpha_4&>0,\\
-4\alpha_1-4\alpha_2+c_m\alpha_3-4\alpha_3-4\alpha_4&>0,\\
-4\alpha_1-4\alpha_2-4\alpha_3+c_m\alpha_4-4\alpha_4&>0,
\end{aligned}$$

which can be written in the form

$$\begin{bmatrix}c_m-4&-4&-4&-4\\-4&c_m-4&-4&-4\\-4&-4&c_m-4&-4\\-4&-4&-4&c_m-4\end{bmatrix}\begin{bmatrix}\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4\end{bmatrix}>0.$$

From the properties of nonsingular M-matrices [44], in order to ensure the existence of $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$, the symmetric matrix in the above inequality must be positive definite, which holds if and only if $c_m>16$. Obviously, for the interval $13<c_m\le16$, the condition obtained in Theorem 3.1 is satisfied, but the result of Theorem 4.3 does not hold.
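
This comparison can also be checked numerically; the sketch below evaluates the smallest eigenvalue of the symmetric matrix above for one value of $c_m$ on each side of the threshold.

```python
import numpy as np

# For Example 4.3, the Theorem 4.3 inequalities read M(c_m) @ alpha > 0 for
# some alpha > 0; since M(c_m) here is symmetric, this requires M(c_m) > 0,
# i.e., c_m > 16, whereas Theorem 3.1 only needs c_m > 13.
def M(cm):
    return (cm - 4) * np.eye(4) - 4 * (np.ones((4, 4)) - np.eye(4))

for cm in (14.0, 17.0):
    print(cm, np.linalg.eigvalsh(M(cm)).min())  # -2.0 at c_m = 14, 1.0 at c_m = 17
```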

5 Conclusions

In this paper, we have studied the existence, uniqueness, and global robust asymptotic stability of the equilibrium point for neural networks with multiple time delays with respect to the class of nondecreasing, slope-bounded activation functions. We have employed a suitable Lyapunov functional and made use of the homeomorphism mapping theorem to derive a new delay-independent robust stability condition for this class of dynamical neural networks. The obtained condition basically establishes a relationship between the network parameters of the neural system and the number of neurons. We have also presented numerical examples, which enabled us to show the advantages of our result over previously reported robust stability results. We should point out here that, in the neural network model we have considered, the delay parameters are constant and the stability condition we obtain is delay independent. However, it is possible to derive delay-dependent stability conditions for the same neural network model by employing different classes of Lyapunov functionals.