1 Introduction

Cellular neural networks (CNNs), proposed by Chua and Yang in 1988 [1, 2], have attracted considerable attention owing to their numerous successful applications in fields such as optimization, linear and nonlinear programming, associative memory, pattern recognition, and computer vision.

Due to the finite switching speed of neurons and amplifiers in the implementation of neural networks, time delays cannot be neglected; the model of delayed cellular neural networks (DCNNs) was therefore put forward, which better reflects reality. In fact, besides delay effects, stochastic, impulsive, and diffusion effects are also likely to exist in neural networks. Accordingly, many researchers have shown a growing interest in the dynamics of complex CNNs, such as impulsive delayed reaction-diffusion CNNs and stochastic delayed reaction-diffusion CNNs, and a large body of results [3–9] has been obtained.

Surveying the reported results on complex CNNs, we find that the existing techniques for stability analysis are mainly based on Lyapunov theory. However, we also notice that many difficulties remain in applying this theory to specific problems [10–16]. It is therefore necessary to seek new techniques to overcome those difficulties.

It is inspiring that in recent years, Burton and other authors have applied fixed point theory to investigate the stability of deterministic systems and obtained more applicable results; see, for example, the monograph [17] and the papers [18–29]. More recently, there have also been a few papers in which fixed point theory is employed to deal with the stability of stochastic (delayed) differential equations; see [10–16, 30]. In particular, in [11–13], Luo used fixed point theory to study the exponential stability of mild solutions of stochastic partial differential equations with bounded delays and with infinite delays. In [14, 15], fixed point theory is used to investigate the asymptotic stability in the $p$th moment of mild solutions of nonlinear impulsive stochastic partial differential equations with bounded delays and with infinite delays. In [16], the exponential stability of stochastic Volterra-Levin equations is studied via fixed point theory. As is well known, although Lyapunov functions play an important role in Lyapunov stability theory, it is often not easy to find an appropriate Lyapunov function. This difficulty can be avoided by applying fixed point theory: by means of fixed point theory, refs. [11–16] require no Lyapunov functions for stability analysis, and the delay terms need not be differentiable.

Naturally, for complex CNNs, which have great application value, we wonder whether fixed point theory can be used to investigate the stability, not just the existence and uniqueness of a solution. With this motivation, in the present paper we discuss the stability of impulsive CNNs with time-varying delays via fixed point theory. It is worth noting that our technique is the contraction mapping principle, which differs from the usual Lyapunov-based methods. We use the fixed point theorem to prove the existence and uniqueness of a solution and the global exponential stability of the considered system all at once. Some new and concise algebraic criteria are provided; moreover, these conditions are easy to verify and do not even require the differentiability of the delays, let alone their monotone decrease.

2 Preliminaries

Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space and $\|\cdot\|$ the Euclidean norm. Let $\mathcal{N} \triangleq \{1,2,\ldots,n\}$, $\mathbb{R}_+ = [0,\infty)$, and let $C[X,Y]$ denote the space of continuous mappings from the topological space $X$ to the topological space $Y$.

In this paper, we consider the following impulsive cellular neural network with time-varying delays:

\[
\frac{dx_i(t)}{dt} = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j\big(x_j(t)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(t-\tau_j(t))\big), \quad t \ge 0,\ t \ne t_k,
\]
(1)
\[
\Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = P_{ik}\big(x_i(t_k^-)\big), \quad k = 1,2,\ldots,
\]
(2)

where $i \in \mathcal{N}$ and $n$ is the number of neurons in the network. $x_i(t)$ denotes the state of the $i$th neuron at time $t$. $f_j(\cdot), g_j(\cdot) \in C[\mathbb{R},\mathbb{R}]$: $f_j(x_j(t))$ is the activation function of the $j$th neuron at time $t$, and $g_j(x_j(t-\tau_j(t)))$ is the activation function of the $j$th neuron at time $t-\tau_j(t)$, where $\tau_j(t) \in C[\mathbb{R}_+,\mathbb{R}_+]$ is the transmission delay along the axon of the $j$th neuron and satisfies $0 \le \tau_j(t) \le \tau$ ($\tau$ a constant). The constant $b_{ij}$ represents the connection weight of the $j$th neuron on the $i$th neuron at time $t$, and the constant $c_{ij}$ denotes the connection strength of the $j$th neuron on the $i$th neuron at time $t-\tau_j(t)$. The constant $a_i > 0$ represents the rate with which the $i$th neuron resets its potential to the resting state when disconnected from the network and external inputs. The fixed impulsive moments $t_k$ ($k = 1,2,\ldots$) satisfy $0 = t_0 < t_1 < t_2 < \cdots$ and $\lim_{k\to\infty} t_k = \infty$; $x_i(t_k^+)$ and $x_i(t_k^-)$ stand for the right-hand and left-hand limits of $x_i(t)$ at $t_k$, respectively. $P_{ik}(x_i(t_k))$ represents the abrupt change of $x_i(t)$ at the impulsive moment $t_k$, with $P_{ik}(\cdot) \in C[\mathbb{R},\mathbb{R}]$.

Throughout this paper, we always assume that $f_i(0) = g_i(0) = P_{ik}(0) = 0$ for $i \in \mathcal{N}$ and $k = 1,2,\ldots$. Denote by $x(t) \triangleq x(t;s,\varphi) = \big(x_1(t;s,\varphi_1),\ldots,x_n(t;s,\varphi_n)\big)^{\mathrm{T}} \in \mathbb{R}^n$ the solution of Eqs. (1)-(2) with the initial condition

\[
x_i(s) = \varphi_i(s), \quad -\tau \le s \le 0,\ i \in \mathcal{N},
\]
(3)

where $\varphi(s) = (\varphi_1(s),\ldots,\varphi_n(s))^{\mathrm{T}} \in \mathbb{R}^n$ and $\varphi_i(s) \in C[[-\tau,0],\mathbb{R}]$.

The solution $x(t) \triangleq x(t;s,\varphi) \in \mathbb{R}^n$ of Eqs. (1)-(3) is, as a function of the time variable $t$, piecewise continuous with discontinuities of the first kind at the points $t_k$ ($k = 1,2,\ldots$), where it is left-continuous; that is, the following relations hold:

\[
x_i(t_k^-) = x_i(t_k), \qquad x_i(t_k^+) = x_i(t_k) + P_{ik}\big(x_i(t_k)\big), \quad i \in \mathcal{N},\ k = 1,2,\ldots.
\]

Definition 2.1 Equations (1)-(2) are said to be globally exponentially stable if, for any initial condition $\varphi(s) \in C[[-\tau,0],\mathbb{R}^n]$, there exists a pair of positive constants $\lambda$ and $M$ such that

\[
\big\|x(t;s,\varphi)\big\| \le M e^{-\lambda t} \quad \text{for all } t \ge 0.
\]

Our analysis is based on the following fixed point theorem.

Theorem 2.1 [31]

Let $\Upsilon$ be a contraction operator on a complete metric space $\Theta$. Then there exists a unique point $\zeta \in \Theta$ for which $\Upsilon(\zeta) = \zeta$.
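For intuition, Theorem 2.1 can be illustrated numerically: iterating a contraction from any starting point converges to its unique fixed point. The following minimal Python sketch is our own illustration (the sample map $T(x) = \frac{1}{2}\cos x$, with $|T'(x)| \le \frac{1}{2} < 1$, is an assumed example, not part of the theorem):

```python
# Illustrative sketch of the contraction mapping principle (Theorem 2.1):
# iterating a contraction T converges to its unique fixed point zeta.
import math

def fixed_point(T, x0, tol=1e-12, max_iter=200):
    """Iterate x <- T(x) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

T = lambda x: 0.5 * math.cos(x)   # a contraction on R: |T'(x)| <= 1/2 < 1
zeta = fixed_point(T, 3.0)
print(zeta, abs(T(zeta) - zeta))  # the unique fixed point; residual ~ 0
```

Any starting point $x_0$ yields the same $\zeta$, reflecting the uniqueness asserted by the theorem.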

3 Main results

In this section, we investigate the existence and uniqueness of a solution of Eqs. (1)-(3) and the global exponential stability of Eqs. (1)-(2) by means of the contraction mapping principle. Before proceeding, we introduce the following assumptions:

(A1) There exist nonnegative constants $l_j$ such that for any $\eta, \upsilon \in \mathbb{R}$,

\[
|f_j(\eta) - f_j(\upsilon)| \le l_j |\eta - \upsilon|, \quad j \in \mathcal{N}.
\]

(A2) There exist nonnegative constants $k_j$ such that for any $\eta, \upsilon \in \mathbb{R}$,

\[
|g_j(\eta) - g_j(\upsilon)| \le k_j |\eta - \upsilon|, \quad j \in \mathcal{N}.
\]

(A3) There exist nonnegative constants $p_{jk}$ such that for any $\eta, \upsilon \in \mathbb{R}$,

\[
|P_{jk}(\eta) - P_{jk}(\upsilon)| \le p_{jk} |\eta - \upsilon|, \quad j \in \mathcal{N},\ k = 1,2,\ldots.
\]

Let $\mathcal{H} = H_1 \times \cdots \times H_n$, where $H_i$ ($i \in \mathcal{N}$) is the space of functions $\phi_i(t): [-\tau,\infty) \to \mathbb{R}$ satisfying:

(1) $\phi_i(t)$ is continuous for $t \ne t_k$ ($k = 1,2,\ldots$);

(2) $\lim_{t\to t_k^-} \phi_i(t)$ and $\lim_{t\to t_k^+} \phi_i(t)$ exist; furthermore, $\lim_{t\to t_k^-} \phi_i(t) = \phi_i(t_k)$ for $k = 1,2,\ldots$;

(3) $\phi_i(s) = \varphi_i(s)$ on $s \in [-\tau,0]$;

(4) $e^{\alpha t} \phi_i(t) \to 0$ as $t \to \infty$, where $\alpha$ is a positive constant satisfying $\alpha < \min_{i\in\mathcal{N}}\{a_i\}$,

where $t_k$ ($k = 1,2,\ldots$) and $\varphi_i(s)$ ($s \in [-\tau,0]$) are as defined in Section 2. Moreover, $\mathcal{H}$ is a complete metric space when equipped with the metric

\[
d\big(\bar q(t), \bar h(t)\big) = \sum_{i=1}^{n} \sup_{t \ge -\tau} \big|q_i(t) - h_i(t)\big|,
\]

where $\bar q(t) = (q_1(t),\ldots,q_n(t)) \in \mathcal{H}$ and $\bar h(t) = (h_1(t),\ldots,h_n(t)) \in \mathcal{H}$.

In what follows, we give the main result of this paper.

Theorem 3.1 Assume that conditions (A1)-(A3) hold. Provided that

(i) there exists a constant $\mu > 0$ such that $\inf_{k=1,2,\ldots}\{t_k - t_{k-1}\} \ge \mu$;

(ii) there exist constants $p_i$ such that $p_{ik} \le p_i \mu$ for $i \in \mathcal{N}$ and $k = 1,2,\ldots$;

(iii) $\vartheta \triangleq \sum_{i=1}^{n}\big\{\frac{1}{a_i}\max_{j\in\mathcal{N}}|b_{ij} l_j| + \frac{1}{a_i}\max_{j\in\mathcal{N}}|c_{ij} k_j|\big\} + \max_{i\in\mathcal{N}}\big\{p_i\big(\mu + \frac{1}{a_i}\big)\big\} < 1$,

then Eqs. (1)-(2) are globally exponentially stable.
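Condition (iii) is a purely algebraic test on the network data, so it can be checked mechanically. The following Python sketch is a convenience helper of ours, not part of the theorem (the function name `theta` and the argument layout are our own assumptions); it computes $\vartheta$ from the coefficient matrices and the constants of (A1)-(A3):

```python
# Sketch: evaluate condition (iii) of Theorem 3.1 from the network data.
# a: decay rates a_i; B, C: connection matrices (b_ij), (c_ij);
# l, k: Lipschitz constants from (A1)-(A2); p: bounds from (ii); mu: from (i).
import numpy as np

def theta(a, B, C, l, k, p, mu):
    """Return the constant theta of condition (iii); Theorem 3.1 needs theta < 1."""
    a = np.asarray(a, dtype=float)
    sum_part = np.sum((np.max(np.abs(B) * l, axis=1)
                       + np.max(np.abs(C) * k, axis=1)) / a)
    impulse_part = np.max(np.asarray(p) * (mu + 1.0 / a))
    return sum_part + impulse_part
```

For instance, with the data of the example in Section 4 below, `theta([7, 7], [[0, 1/7], [1/7, 1/7]], [[3/7, 2/7], [0, 1/7]], 1, 1, [0.8, 0.8], 0.5)` returns approximately 0.64.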

Proof The proof is based on the contraction mapping principle and proceeds in three steps.

Step 1. We first construct the mapping. Multiplying both sides of Eq. (1) by $e^{a_i t}$ gives, for $t > 0$ and $t \ne t_k$,

\[
\begin{aligned}
d\big(e^{a_i t} x_i(t)\big) &= e^{a_i t}\,dx_i(t) + a_i x_i(t) e^{a_i t}\,dt \\
&= e^{a_i t}\Big\{-a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j\big(x_j(t)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(t-\tau_j(t))\big)\Big\}\,dt + a_i x_i(t) e^{a_i t}\,dt \\
&= e^{a_i t}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(t)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(t-\tau_j(t))\big)\Big\}\,dt,
\end{aligned}
\]

which yields, after integrating from $t_{k-1}+\varepsilon$ ($\varepsilon > 0$) to $t \in (t_{k-1}, t_k)$ ($k = 1,2,\ldots$),

\[
x_i(t) e^{a_i t} = x_i(t_{k-1}+\varepsilon)\, e^{a_i (t_{k-1}+\varepsilon)} + \int_{t_{k-1}+\varepsilon}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds.
\]
(4)

Letting $\varepsilon \to 0^+$ in (4), we have, for $t \in (t_{k-1}, t_k)$ ($k = 1,2,\ldots$),

\[
x_i(t) e^{a_i t} = x_i(t_{k-1}^+) e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds.
\]
(5)

Setting $t = t_k - \varepsilon$ ($\varepsilon > 0$) in (5), we get

\[
x_i(t_k - \varepsilon)\, e^{a_i (t_k - \varepsilon)} = x_i(t_{k-1}^+) e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t_k - \varepsilon} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds,
\]

which, by letting $\varepsilon \to 0^+$, gives

\[
x_i(t_k^-) e^{a_i t_k} = x_i(t_{k-1}^+) e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t_k} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds.
\]
(6)

Noting that $x_i(t_k - 0) = x_i(t_k)$, (6) can be rewritten as

\[
x_i(t_k) e^{a_i t_k} = x_i(t_{k-1}^+) e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t_k} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds.
\]
(7)

Combining (5) and (7), we derive that

\[
x_i(t) e^{a_i t} = x_i(t_{k-1}^+) e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds
\]

is true for $t \in (t_{k-1}, t_k]$ ($k = 1,2,\ldots$). Further,

\[
\begin{aligned}
x_i(t) e^{a_i t} &= \big\{x_i(t_{k-1}) + P_{i(k-1)}\big(x_i(t_{k-1})\big)\big\} e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds \\
&= x_i(t_{k-1}) e^{a_i t_{k-1}} + \int_{t_{k-1}}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds + P_{i(k-1)}\big(x_i(t_{k-1})\big) e^{a_i t_{k-1}}
\end{aligned}
\]

holds for $t \in (t_{k-1}, t_k]$ ($k = 1,2,\ldots$). Hence,

\[
x_i(t) e^{a_i t} = x_i(0) + \int_{0}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds + \sum_{0<t_k<t} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k},
\]

which produces, for $t > 0$,

\[
x_i(t) = \varphi_i(0) e^{-a_i t} + e^{-a_i t} \int_{0}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds + e^{-a_i t} \sum_{0<t_k<t} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}.
\]
(8)

Noting that $x_i(0) = \varphi_i(0)$ in (8), we define the following operator $\pi$ acting on $\mathcal{H}$, for $\bar x(t) = (x_1(t),\ldots,x_n(t)) \in \mathcal{H}$:

\[
\pi(\bar x)(t) = \big(\pi(x_1)(t),\ldots,\pi(x_n)(t)\big),
\]

where $\pi(x_i)(t): [-\tau,\infty) \to \mathbb{R}$ ($i \in \mathcal{N}$) is defined by

\[
\pi(x_i)(t) = \varphi_i(0) e^{-a_i t} + e^{-a_i t} \int_{0}^{t} e^{a_i s}\Big\{\sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big) + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\Big\}\,ds + e^{-a_i t} \sum_{0<t_k<t} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}
\]
(9)

for $t \ge 0$, and $\pi(x_i)(s) = \varphi_i(s)$ on $s \in [-\tau,0]$.
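To make the construction concrete, the operator (9) can be evaluated numerically: discretize the time axis, approximate the integral by quadrature, and read delayed values off the stored trajectory. The Python sketch below iterates $\bar x \mapsto \pi(\bar x)$; the grid, trapezoidal rule, nearest-grid delay lookup, and the sample two-neuron data (activations, delay, impulse moments) are all our own illustrative assumptions, not data from the paper:

```python
# Sketch: numerically iterate the operator pi of (9) on a uniform grid.
import numpy as np

a = np.array([7.0, 7.0])                     # hypothetical two-neuron data
B = np.array([[0.0, 1/7], [1/7, 1/7]])
C = np.array([[3/7, 2/7], [0.0, 1/7]])
f = g = np.tanh                              # activations with f(0) = g(0) = 0
tau = lambda t: 0.1 * (1 + np.sin(t))        # assumed delay, bounded by 0.2
t_imp = [0.5, 1.5, 3.0]                      # assumed impulsive moments t_k
P = lambda v: np.arctan(0.4 * v)             # assumed impulse map with P(0) = 0
phi = lambda s: np.array([np.cos(s), np.sin(s)])  # initial history

h, T, tau_max = 0.01, 4.0, 0.2
n_hist = int(round(tau_max / h))
t_grid = h * np.arange(-n_hist, int(round(T / h)) + 1)
z = n_hist                                   # index of t = 0

def pi_op(x):
    """Apply the operator pi of (9) to a trajectory x of shape (len(t_grid), 2)."""
    y = x.copy()                             # history on [-tau, 0] is kept as phi
    s = t_grid[z:]
    idel = np.clip(np.searchsorted(t_grid, s - tau(s)), 0, len(t_grid) - 1)
    for i in range(2):
        integrand = np.exp(a[i] * s) * (B[i] @ f(x[z:].T) + C[i] @ g(x[idel].T))
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * h * (integrand[1:] + integrand[:-1]))))
        jumps = sum(P(x[np.searchsorted(t_grid, tk), i]) * np.exp(a[i] * tk) * (s > tk)
                    for tk in t_imp)
        y[z:, i] = np.exp(-a[i] * s) * (phi(0.0)[i] + integral + jumps)
    return y

x = np.array([phi(s) for s in t_grid])       # initial guess extends the history
for _ in range(30):
    x_new = pi_op(x)
    residual = np.max(np.abs(x_new - x))
    x = x_new
print("sup-norm residual after 30 iterations:", residual)  # near 0: a fixed point
```

The vanishing residual illustrates the contraction behavior established in Step 3 below: the iterates approach the fixed point of $\pi$, i.e., the solution.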

Step 2. We need to prove that $\pi(\mathcal{H}) \subset \mathcal{H}$. For any $x_i(t) \in H_i$ ($i \in \mathcal{N}$), we must verify that $\pi(x_i)(t) \in H_i$.

First, since $\pi(x_i)(s) = \varphi_i(s)$ on $s \in [-\tau,0]$ and $\varphi_i(s) \in C[[-\tau,0],\mathbb{R}]$, we immediately see that $\pi(x_i)(s)$ is continuous on $s \in [-\tau,0]$. Then, for a fixed time $t > 0$ and $r$ with $|r|$ sufficiently small, it follows from (9) that

\[
\pi(x_i)(t+r) - \pi(x_i)(t) = Q_1 + Q_2 + Q_3 + Q_4,
\]
(10)

where

\[
\begin{aligned}
Q_1 &= \varphi_i(0) e^{-a_i(t+r)} - \varphi_i(0) e^{-a_i t}, \\
Q_2 &= e^{-a_i(t+r)} \int_0^{t+r} e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big)\,ds - e^{-a_i t} \int_0^{t} e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big)\,ds, \\
Q_3 &= e^{-a_i(t+r)} \int_0^{t+r} e^{a_i s} \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\,ds - e^{-a_i t} \int_0^{t} e^{a_i s} \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\,ds, \\
Q_4 &= e^{-a_i(t+r)} \sum_{0<t_k<t+r} \big\{P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}\big\} - e^{-a_i t} \sum_{0<t_k<t} \big\{P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}\big\}.
\end{aligned}
\]

Owing to $x_i(t) \in H_i$, we see that $x_i(t)$ is continuous for $t \ne t_k$ ($k = 1,2,\ldots$); moreover, at $t = t_k$, $\lim_{t\to t_k^-} x_i(t)$ and $\lim_{t\to t_k^+} x_i(t)$ exist, and $\lim_{t\to t_k^-} x_i(t) = x_i(t_k)$.

Consequently, when $t \ne t_k$ ($k = 1,2,\ldots$) in (10), it is easy to see that $Q_i \to 0$ as $r \to 0$ for $i = 1,\ldots,4$, so $\pi(x_i)(t)$ is continuous at every fixed $t \ne t_k$ ($k = 1,2,\ldots$). On the other hand, when $t = t_k$ ($k = 1,2,\ldots$) in (10), it is not difficult to see that $Q_i \to 0$ as $r \to 0$ for $i = 1,2,3$. Furthermore, letting $r < 0$ be small enough, we have

\[
Q_4 = e^{-a_i(t_k+r)} \sum_{0<t_m<t_k+r} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} - e^{-a_i t_k} \sum_{0<t_m<t_k} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} = \big\{e^{-a_i(t_k+r)} - e^{-a_i t_k}\big\} \sum_{0<t_m<t_k} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\},
\]

which implies $\lim_{r\to 0^-} Q_4 = 0$. If instead we let $r > 0$ be small enough, we get

\[
\begin{aligned}
Q_4 &= e^{-a_i(t_k+r)} \sum_{0<t_m<t_k+r} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} - e^{-a_i t_k} \sum_{0<t_m<t_k} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} \\
&= e^{-a_i(t_k+r)} \Big\{\sum_{0<t_m<t_k} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} + P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}\Big\} - e^{-a_i t_k} \sum_{0<t_m<t_k} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} \\
&= \big\{e^{-a_i(t_k+r)} - e^{-a_i t_k}\big\} \sum_{0<t_m<t_k} \big\{P_{im}\big(x_i(t_m)\big) e^{a_i t_m}\big\} + e^{-a_i(t_k+r)} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k},
\end{aligned}
\]

which yields $\lim_{r\to 0^+} Q_4 = e^{-a_i t_k} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k} = P_{ik}\big(x_i(t_k)\big)$.

According to the above discussion, we see that $\pi(x_i)(t): [-\tau,\infty) \to \mathbb{R}$ is continuous for $t \ne t_k$ ($k = 1,2,\ldots$); at $t = t_k$ ($k = 1,2,\ldots$), $\lim_{t\to t_k^-} \pi(x_i)(t)$ and $\lim_{t\to t_k^+} \pi(x_i)(t)$ exist, and $\lim_{t\to t_k^-} \pi(x_i)(t) = \pi(x_i)(t_k)$.

Next, we will prove that $e^{\alpha t} \pi(x_i)(t) \to 0$ as $t \to \infty$ for $i \in \mathcal{N}$. First of all, it is obvious that $\lim_{t\to\infty} e^{-(a_i-\alpha)t} = 0$ since $a_i - \alpha > 0$. In addition, owing to $x_j(t) \in H_j$ for $j \in \mathcal{N}$, we know $\lim_{t\to\infty} e^{\alpha t} x_j(t) = 0$. Then, for any $\varepsilon > 0$, there exists a $T_j > 0$ such that $s \ge T_j$ implies $|e^{\alpha s} x_j(s)| < \varepsilon$. Choose $T^* = \max_{j\in\mathcal{N}}\{T_j\}$. It is derived from (A1) that

\[
\begin{aligned}
& e^{\alpha t} e^{-a_i t} \Big|\int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big)\,ds\Big| \\
&\quad \le e^{\alpha t} e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} \big\{|b_{ij} l_j|\,|x_j(s)|\big\}\,ds \\
&\quad = e^{-(a_i-\alpha)t} \int_0^{T^*} e^{(a_i-\alpha)s} \sum_{j=1}^{n} \big\{|b_{ij} l_j|\, e^{\alpha s}|x_j(s)|\big\}\,ds + e^{-(a_i-\alpha)t} \int_{T^*}^{t} e^{(a_i-\alpha)s} \sum_{j=1}^{n} \big\{|b_{ij} l_j|\, e^{\alpha s}|x_j(s)|\big\}\,ds \\
&\quad \le e^{-(a_i-\alpha)t} \sum_{j=1}^{n} \Big\{|b_{ij} l_j| \sup_{s\in[0,T^*]} \big|e^{\alpha s} x_j(s)\big|\Big\} \int_0^{T^*} e^{(a_i-\alpha)s}\,ds + \varepsilon \sum_{j=1}^{n} |b_{ij} l_j|\, e^{-(a_i-\alpha)t} \int_{T^*}^{t} e^{(a_i-\alpha)s}\,ds \\
&\quad \le e^{-(a_i-\alpha)t} \sum_{j=1}^{n} \Big\{|b_{ij} l_j| \sup_{s\in[0,T^*]} \big|e^{\alpha s} x_j(s)\big|\Big\} \int_0^{T^*} e^{(a_i-\alpha)s}\,ds + \frac{\varepsilon}{a_i-\alpha} \sum_{j=1}^{n} |b_{ij} l_j|,
\end{aligned}
\]

which leads to

\[
e^{\alpha t} e^{-a_i t} \Big|\int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j\big(x_j(s)\big)\,ds\Big| \to 0 \quad \text{as } t \to \infty,\ i \in \mathcal{N}.
\]
(11)

Similarly, for any $\varepsilon > 0$, since $\lim_{t\to\infty} e^{\alpha t} x_j(t) = 0$, there also exists a $T_j > 0$ such that $s \ge T_j - \tau$ implies $|e^{\alpha s} x_j(s)| < \varepsilon$. Select $\hat T = \max_{j\in\mathcal{N}}\{T_j\}$. It follows from (A2) that

\[
\begin{aligned}
& e^{\alpha t} e^{-a_i t} \Big|\int_0^t e^{a_i s} \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\,ds\Big| \\
&\quad \le e^{\alpha t} e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} \big\{|c_{ij} k_j|\,\big|x_j(s-\tau_j(s))\big|\big\}\,ds \\
&\quad \le e^{-(a_i-\alpha)t} \int_0^t e^{(a_i-\alpha)s}\, e^{\alpha\tau} \sum_{j=1}^{n} \big\{|c_{ij} k_j|\, e^{\alpha[s-\tau_j(s)]}\big|x_j(s-\tau_j(s))\big|\big\}\,ds \\
&\quad = e^{\alpha\tau} e^{-(a_i-\alpha)t} \int_0^{\hat T} e^{(a_i-\alpha)s} \sum_{j=1}^{n} \big\{|c_{ij} k_j|\, e^{\alpha[s-\tau_j(s)]}\big|x_j(s-\tau_j(s))\big|\big\}\,ds + e^{\alpha\tau} e^{-(a_i-\alpha)t} \int_{\hat T}^{t} e^{(a_i-\alpha)s} \sum_{j=1}^{n} \big\{|c_{ij} k_j|\, e^{\alpha[s-\tau_j(s)]}\big|x_j(s-\tau_j(s))\big|\big\}\,ds \\
&\quad \le e^{\alpha\tau} \sum_{j=1}^{n} \Big\{|c_{ij} k_j| \sup_{s\in[-\tau,\hat T]} \big|e^{\alpha s} x_j(s)\big|\Big\}\, e^{-(a_i-\alpha)t} \int_0^{\hat T} e^{(a_i-\alpha)s}\,ds + \frac{e^{\alpha\tau}\varepsilon}{a_i-\alpha} \sum_{j=1}^{n} |c_{ij} k_j|,
\end{aligned}
\]

which results in

\[
e^{\alpha t} e^{-a_i t} \Big|\int_0^t e^{a_i s} \sum_{j=1}^{n} c_{ij} g_j\big(x_j(s-\tau_j(s))\big)\,ds\Big| \to 0 \quad \text{as } t \to \infty,\ i \in \mathcal{N}.
\]
(12)

Furthermore, from (A3) and $P_{ik}(0) = 0$, we know that $|P_{ik}(x_i(t_k))| \le p_{ik} |x_i(t_k)|$. So,

\[
e^{\alpha t} e^{-a_i t} \Big|\sum_{0<t_k<t} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}\Big| \le e^{\alpha t} e^{-a_i t} \sum_{0<t_k<t} \big\{p_{ik}\,|x_i(t_k)|\, e^{a_i t_k}\big\}.
\]

Since $x_i(t) \in H_i$, we have $\lim_{t\to\infty} e^{\alpha t} x_i(t) = 0$. Then, for any $\varepsilon > 0$, there exists a non-impulsive point $T_i > 0$ such that $s \ge T_i$ implies $|e^{\alpha s} x_i(s)| < \varepsilon$. It then follows from conditions (i) and (ii) that

\[
\begin{aligned}
& e^{\alpha t} e^{-a_i t} \sum_{0<t_k<t} \big\{p_{ik}\,|x_i(t_k)|\, e^{a_i t_k}\big\} \\
&\quad = e^{\alpha t} e^{-a_i t} \Big\{\sum_{0<t_k<T_i} \big\{p_{ik}\,|x_i(t_k)|\, e^{a_i t_k}\big\} + \sum_{T_i<t_k<t} \big\{p_{ik}\, e^{\alpha t_k}|x_i(t_k)|\, e^{(a_i-\alpha)t_k}\big\}\Big\} \\
&\quad \le e^{-(a_i-\alpha)t} \sum_{0<t_k<T_i} \big\{p_{ik}\,|x_i(t_k)|\, e^{a_i t_k}\big\} + e^{-(a_i-\alpha)t}\, p_i \varepsilon \sum_{T_i<t_k<t} \mu\, e^{(a_i-\alpha)t_k} \\
&\quad \le e^{-(a_i-\alpha)t} \sum_{0<t_k<T_i} \big\{p_{ik}\,|x_i(t_k)|\, e^{a_i t_k}\big\} + e^{-(a_i-\alpha)t}\, p_i \varepsilon \Big\{\int_{T_i}^{t} e^{(a_i-\alpha)s}\,ds + \mu\, e^{(a_i-\alpha)t}\Big\} \\
&\quad \le e^{-(a_i-\alpha)t} \sum_{0<t_k<T_i} \big\{p_{ik}\,|x_i(t_k)|\, e^{a_i t_k}\big\} + \frac{p_i \varepsilon}{a_i-\alpha} + p_i \varepsilon \mu,
\end{aligned}
\]

which produces

\[
e^{\alpha t} e^{-a_i t} \Big|\sum_{0<t_k<t} P_{ik}\big(x_i(t_k)\big) e^{a_i t_k}\Big| \to 0 \quad \text{as } t \to \infty.
\]
(13)

From (11), (12) and (13), we deduce that $e^{\alpha t} \pi(x_i)(t) \to 0$ as $t \to \infty$. We therefore conclude that $\pi(x_i)(t) \in H_i$ ($i \in \mathcal{N}$), which means $\pi(\mathcal{H}) \subset \mathcal{H}$.

Step 3. We now prove that $\pi$ is a contraction. For $\bar x = (x_1(t),\ldots,x_n(t)) \in \mathcal{H}$ and $\bar y = (y_1(t),\ldots,y_n(t)) \in \mathcal{H}$, we estimate $|\pi(x_i)(t) - \pi(y_i)(t)| \le J_1 + J_2 + J_3$, where

\[
\begin{aligned}
J_1 &= e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} \big[|b_{ij}|\,\big|f_j\big(x_j(s)\big) - f_j\big(y_j(s)\big)\big|\big]\,ds, \\
J_2 &= e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} \big[|c_{ij}|\,\big|g_j\big(x_j(s-\tau_j(s))\big) - g_j\big(y_j(s-\tau_j(s))\big)\big|\big]\,ds, \\
J_3 &= e^{-a_i t} \sum_{0<t_k<t} \big\{e^{a_i t_k}\,\big|P_{ik}\big(x_i(t_k)\big) - P_{ik}\big(y_i(t_k)\big)\big|\big\}.
\end{aligned}
\]

Note

\[
\begin{aligned}
J_1 &\le e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} \big[|b_{ij} l_j|\,|x_j(s)-y_j(s)|\big]\,ds \\
&\le \max_{j\in\mathcal{N}}|b_{ij} l_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[0,t]}|x_j(s)-y_j(s)|\Big\}\, e^{-a_i t} \int_0^t e^{a_i s}\,ds \\
&\le \frac{1}{a_i} \max_{j\in\mathcal{N}}|b_{ij} l_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[0,t]}|x_j(s)-y_j(s)|\Big\},
\end{aligned}
\]
(14)

and

\[
\begin{aligned}
J_2 &\le e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} \big[|c_{ij} k_j|\,\big|x_j(s-\tau_j(s)) - y_j(s-\tau_j(s))\big|\big]\,ds \\
&\le \max_{j\in\mathcal{N}}|c_{ij} k_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[-\tau,t]}|x_j(s)-y_j(s)|\Big\}\, e^{-a_i t} \int_0^t e^{a_i s}\,ds \\
&\le \frac{1}{a_i} \max_{j\in\mathcal{N}}|c_{ij} k_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[-\tau,t]}|x_j(s)-y_j(s)|\Big\},
\end{aligned}
\]
(15)

and

\[
\begin{aligned}
J_3 &\le e^{-a_i t} \sum_{0<t_k<t} \big\{e^{a_i t_k}\, p_{ik}\, |x_i(t_k)-y_i(t_k)|\big\} \\
&\le p_i\, e^{-a_i t} \sup_{s\in[0,t]}|x_i(s)-y_i(s)| \sum_{0<t_k<t} \big\{e^{a_i t_k} \mu\big\} \\
&\le p_i\, e^{-a_i t} \sup_{s\in[0,t]}|x_i(s)-y_i(s)| \Big\{\int_0^t e^{a_i s}\,ds + e^{a_i t} \mu\Big\} \\
&\le p_i \Big(\mu + \frac{1}{a_i}\Big) \sup_{s\in[0,t]}|x_i(s)-y_i(s)|.
\end{aligned}
\]
(16)

It hence follows from (14), (15) and (16) that

\[
\begin{aligned}
\big|\pi(x_i)(t) - \pi(y_i)(t)\big| \le{}& \frac{1}{a_i} \max_{j\in\mathcal{N}}|b_{ij} l_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[0,t]}|x_j(s)-y_j(s)|\Big\} + \frac{1}{a_i} \max_{j\in\mathcal{N}}|c_{ij} k_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[-\tau,t]}|x_j(s)-y_j(s)|\Big\} \\
&+ p_i \Big(\mu + \frac{1}{a_i}\Big) \sup_{s\in[0,t]}|x_i(s)-y_i(s)|,
\end{aligned}
\]

which implies, for any $T > 0$,

\[
\begin{aligned}
\sup_{t\in[-\tau,T]}\big|\pi(x_i)(t) - \pi(y_i)(t)\big| \le{}& \frac{1}{a_i} \max_{j\in\mathcal{N}}|b_{ij} l_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[-\tau,T]}|x_j(s)-y_j(s)|\Big\} + \frac{1}{a_i} \max_{j\in\mathcal{N}}|c_{ij} k_j| \sum_{j=1}^{n} \Big\{\sup_{s\in[-\tau,T]}|x_j(s)-y_j(s)|\Big\} \\
&+ p_i \Big(\mu + \frac{1}{a_i}\Big) \sup_{s\in[-\tau,T]}|x_i(s)-y_i(s)|.
\end{aligned}
\]

Therefore,

\[
\sum_{i=1}^{n} \sup_{t\in[-\tau,T]}\big|\pi(x_i)(t) - \pi(y_i)(t)\big| \le \vartheta \sum_{j=1}^{n} \Big\{\sup_{s\in[-\tau,T]}|x_j(s)-y_j(s)|\Big\}.
\]

In view of condition (iii) and the arbitrariness of $T$, we see that $\pi$ is a contraction mapping on $\mathcal{H}$, and thus there exists a unique fixed point $\bar x(\cdot)$ of $\pi$ in $\mathcal{H}$, which means $\bar x(\cdot)$ is the solution of Eqs. (1)-(3) and satisfies $e^{\alpha t} \bar x(t) \to 0$ as $t \to \infty$; hence there exists $M > 0$ such that $\|x(t;s,\varphi)\| \le M e^{-\alpha t}$ for all $t \ge 0$, i.e., Eqs. (1)-(2) are globally exponentially stable. This completes the proof. □

Theorem 3.2 Assume that conditions (A1)-(A3) hold. Provided that

(i) $\inf_{k=1,2,\ldots}\{t_k - t_{k-1}\} \ge 1$;

(ii) there exist constants $p_i$ such that $p_{ik} \le p_i$ for $i \in \mathcal{N}$ and $k = 1,2,\ldots$;

(iii) $\sum_{i=1}^{n}\big\{\frac{1}{a_i}\max_{j\in\mathcal{N}}|b_{ij} l_j| + \frac{1}{a_i}\max_{j\in\mathcal{N}}|c_{ij} k_j|\big\} + \max_{i\in\mathcal{N}}\big\{p_i\big(1 + \frac{1}{a_i}\big)\big\} < 1$,

then Eqs. (1)-(2) are globally exponentially stable.

Proof Theorem 3.2 follows directly from Theorem 3.1 by taking $\mu = 1$. □

Remark 3.1 Theorem 3.1 shows that fixed point theory establishes the existence and uniqueness of a solution and the global exponential stability of impulsive delayed neural networks at the same time, which the Lyapunov method fails to do.

Remark 3.2 The sufficient conditions presented in Theorems 3.1-3.2 do not even require the differentiability of the delays, let alone their monotone decrease, which is necessary in some related works.

Remark 3.3 In [4], the abrupt changes are assumed to be linear with coefficient $\alpha \in (0,2)$, while in our paper this restriction is removed and the abrupt changes may be linear or nonlinear. On the other hand, the activation functions in [6] are assumed to satisfy $0 \le \frac{f(x)-f(y)}{x-y} \le l$, where $f$ is an activation function. In this paper, we relax this restriction and only suppose that an activation function $f$ satisfies $|f(x)-f(y)| \le l\,|x-y|$.

4 Example

Consider the following two-dimensional impulsive cellular neural network with time-varying delays:

\[
\begin{cases}
\dfrac{dx_i(t)}{dt} = -a_i x_i(t) + \displaystyle\sum_{j=1}^{2} b_{ij} f_j\big(x_j(t)\big) + \sum_{j=1}^{2} c_{ij} g_j\big(x_j(t-\tau_j(t))\big), & t \ge 0,\ t \ne t_k, \\[2mm]
\Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = P_{ik}\big(x_i(t_k^-)\big), & k = 1,2,\ldots,
\end{cases}
\]

with the initial conditions $x_1(s) = \cos(s)$, $x_2(s) = \sin(s)$ on $-\tau \le s \le 0$, where $a_1 = a_2 = 7$, $b_{11} = 0$, $b_{12} = \frac{1}{7}$, $b_{21} = \frac{1}{7}$, $b_{22} = \frac{1}{7}$, $c_{11} = \frac{3}{7}$, $c_{12} = \frac{2}{7}$, $c_{21} = 0$, $c_{22} = \frac{1}{7}$, $f_j(s) = g_j(s) = (|s+1| - |s-1|)/2$ ($j = 1,2$), $P_{ik}(x_i(t_k)) = \arctan(0.4\, x_i(t_k))$ for $i = 1,2$ and $k = 1,2,\ldots$, and $t_k = t_{k-1} + 0.5k$ ($k = 1,2,\ldots$). It is easy to see that $\mu = 0.5$, $l_j = k_j = 1$, and $p_{ik} = 0.4$.

Select $p_i = 0.8$; then condition (ii) holds since $p_{ik} = 0.4 \le p_i \mu = 0.4$, and we compute $\sum_{i=1}^{2}\big\{\frac{1}{a_i}\max_{j=1,2}|b_{ij} l_j| + \frac{1}{a_i}\max_{j=1,2}|c_{ij} k_j|\big\} + \max_{i=1,2}\big\{p_i\big(\mu + \frac{1}{a_i}\big)\big\} = \frac{6}{49} + 0.8 \times \big(0.5 + \frac{1}{7}\big) = \frac{156}{245} \approx 0.64 < 1$. From Theorem 3.1, we conclude that this two-dimensional impulsive cellular neural network with time-varying delays is globally exponentially stable.
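As a numerical cross-check, the following Python sketch evaluates the criterion of Theorem 3.1 for the above data and then simulates the network with an explicit Euler scheme. The step size, the particular delay function $\tau_j(t)$ (which the example leaves unspecified, subject only to $0 \le \tau_j(t) \le \tau$), and the discrete handling of the impulses are our own illustrative choices:

```python
# Sketch: check the criterion of Theorem 3.1 for the example and simulate it.
import numpy as np

a = np.array([7.0, 7.0])
B = np.array([[0.0, 1/7], [1/7, 1/7]])
C = np.array([[3/7, 2/7], [0.0, 1/7]])
l = k = np.ones(2)
p = np.array([0.8, 0.8])
mu = 0.5

theta = (np.sum((np.max(np.abs(B) * l, axis=1) + np.max(np.abs(C) * k, axis=1)) / a)
         + np.max(p * (mu + 1.0 / a)))
print("theta =", round(theta, 4), "< 1:", theta < 1)    # ~0.6367, criterion holds

f = lambda s: (np.abs(s + 1) - np.abs(s - 1)) / 2       # the example's activation
tau = lambda t: 0.3 * (1 + np.cos(t)) / 2               # assumed delay, 0 <= tau <= 0.3
h, T, tau_max = 1e-3, 10.0, 0.3
n_hist = int(round(tau_max / h))
steps = int(round(T / h))
x = np.zeros((n_hist + steps + 1, 2))
t_hist = h * np.arange(-n_hist, 1)
x[: n_hist + 1] = np.column_stack([np.cos(t_hist), np.sin(t_hist)])

t_imp, tk = [], 0.0
for kk in range(1, 10):                                  # t_k = t_{k-1} + 0.5 k
    tk += 0.5 * kk
    t_imp.append(tk)

for m in range(n_hist, n_hist + steps):                  # explicit Euler with impulses
    t = (m - n_hist) * h
    dlag = int(round(tau(t) / h))                        # delay measured in grid steps
    x[m + 1] = x[m] + h * (-a * x[m] + B @ f(x[m]) + C @ f(x[m - dlag]))
    for tk in t_imp:                                     # impulse crossed in this step
        if t < tk <= t + h:
            x[m + 1] += np.arctan(0.4 * x[m + 1])
print("||x(T)|| =", np.linalg.norm(x[-1]))               # decays toward 0
```

The observed decay of $\|x(t)\|$ is consistent with the global exponential stability guaranteed by Theorem 3.1.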

5 Conclusion

This work seeks new methods for studying the stability of complex CNNs. From the above discussion, we find that the application of fixed point theory to the stability analysis of complex CNNs is successful. We utilize the contraction mapping principle to establish the existence and uniqueness of a solution and the global exponential stability of the considered system at the same time, a task for which Lyapunov theory is ill suited. Since there are various kinds of fixed point theorems and complex neural networks, our future work will continue to study the application of fixed point theory to the stability analysis of complex neural networks.