1 Introduction

Neural networks have found important applications in areas such as combinatorial optimization, signal processing, pattern recognition, and the solution of nonlinear algebraic equations. Many practical systems exhibit time delays, and time-delay systems have therefore received considerable attention [1–8]. As is well known, stochastic functional differential systems, which include stochastic delay differential systems, are widely used because stochastic modeling plays an important role in many branches of science and engineering [9]. Consequently, the stability analysis of these systems has attracted much interest [10–12].

In some applications, besides delay and stochastic effects, impulsive effects are also likely to exist [13, 14]; they could stabilize or destabilize the systems. Therefore, it is of interest to take delay effects, stochastic effects, and impulsive effects into account when studying the dynamical behavior of neural networks.

In [11], Guo et al. studied the exponential stability of a stochastic neutral cellular neural network without impulses and, by using fixed point theory, obtained new criteria for exponential stability in mean square. To the best of the authors' knowledge, only a few papers use fixed point theory to discuss the stability of stochastic neural networks. In this paper, we study the exponential stability of a stochastic neural network with impulses by means of the contraction mapping theorem and Krasnoselskii's fixed point theorem.

2 Some preliminaries

Throughout this paper, unless otherwise specified, we let $(\Omega,\mathcal{F},P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets), and we let $C^b_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$ denote the family of all bounded, $\mathcal{F}_0$-measurable functions. Let $\mathbb{R}^n$ denote the $n$-dimensional real space equipped with the Euclidean norm $|\cdot|_1$. For a matrix $B=[b_{ij}(t)]_{n\times n}$, its norm is defined by $\|B(t)\|_3=\sum_{i,j=1}^{n}|b_{ij}(t)|$.

In this paper, by using fixed point theory, we discuss the stability of the impulsive stochastic delayed neural networks:

\[
\begin{cases}
dx(t)=\bigl[-(A+\Delta A(t))x(t)+f(t,x(t),x(t-\tau_1))\bigr]\,dt+\sigma(t,x(t),x(t-\tau_2))\,d\omega(t), & t\ne t_k,\\
\Delta x(t_k)=I_k(x(t_k^-)), & t=t_k,\ k\in\mathbb{Z}_+,\\
x(t)=\psi(t), & -\tau\le t\le 0,
\end{cases}
\]
(2.1)

where $\tau=\max\{\tau_1,\tau_2\}$, and $\tau_1$, $\tau_2$ are positive constants. The initial datum $t\mapsto\psi(t)\in C([-\tau,0],L^p_{\mathcal{F}_0}(\Omega,\mathbb{R}^n))$ is equipped with the norm $\|\psi\|_2=\sup_{-\tau\le\theta\le 0}|\psi(\theta)|_1$, $x(t)=(x_1(t),x_2(t),\dots,x_n(t))^T$ is the state vector, $A=\operatorname{diag}(a_1,a_2,\dots,a_n)>0$ (i.e., $a_i>0$, $i=1,2,\dots,n$) is the constant connection weight matrix with appropriate dimensions, and $\Delta A(t)$ represents the uncertain time-varying parameter, with $\|\Delta A(t)\|_3$ bounded.

Here $f(t,u_1,u_2)\in C(\mathbb{R}\times\mathbb{R}^n\times\mathbb{R}^n)$ is the neuron activation function with $f(t,0,0)=0$, and $\omega(t)=(\omega_1(t),\omega_2(t),\dots,\omega_m(t))^T\in\mathbb{R}^m$ is an $m$-dimensional Brownian motion defined on $(\Omega,\mathcal{F},P)$. The stochastic disturbance term $\sigma(t,u_1,u_2)\in C(\mathbb{R}\times\mathbb{R}^n\times\mathbb{R}^n)$, with $\sigma(t,0,0)=0$, can be viewed as a stochastic perturbation of the neuron states and the delayed neuron states. $\Delta x(t_k)=I_k(x(t_k^-))=x(t_k^+)-x(t_k^-)$ is the impulse at the moment $t_k$, where $t_1<t_2<\cdots$ is a strictly increasing sequence such that $\lim_{k\to\infty}t_k=+\infty$, and $x(t_k^+)$ and $x(t_k^-)$ stand for the right-hand and left-hand limits of $x(t)$ at $t=t_k$, respectively. $I_k(x(t_k^-))$ describes the abrupt change of $x(t)$ at the impulsive moment $t_k$, and $I_k(\cdot)\in C(L^p_{\mathcal{F}_{t_k}}(\Omega;\mathbb{R}^n),L^p_{\mathcal{F}_{t_k}}(\Omega;\mathbb{R}^n))$.

The local Lipschitz condition and the linear growth condition on the functions $f(t,\cdot,\cdot)$ and $\sigma(t,\cdot,\cdot)$ guarantee the existence and uniqueness of a global solution of system (2.1); we refer to [9] for details. Clearly, system (2.1) admits the trivial solution $x(t;0,0)\equiv 0$.
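As an illustration of the class of models described by (2.1), the following sketch simulates one sample path of a hypothetical two-neuron instance with an Euler-Maruyama scheme, applying the jump $I_k$ at the impulse times. All choices in it (the matrices $A$ and $\Delta A(t)$, the functions $f$, $\sigma$, $I_k$, the delays, and the impulse times) are assumptions made only for illustration and are not taken from this paper.

```python
# Illustrative sketch only: a hypothetical two-neuron instance of system (2.1),
# integrated with an Euler-Maruyama scheme plus state jumps at the impulse times.
# Every numerical value below is an assumption chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

n = 2
A = np.diag([1.5, 2.0])                  # A = diag(a_1, ..., a_n) > 0
tau1, tau2 = 0.3, 0.2                    # discrete delays
dt, T = 1e-3, 10.0
steps = int(T / dt)
d1, d2 = round(tau1 / dt), round(tau2 / dt)

def delta_A(t):                          # bounded time-varying uncertainty Delta A(t)
    return 0.05 * np.sin(t) * np.eye(n)

def f(t, u, v):                          # Lipschitz activation with f(t,0,0) = 0
    return 0.1 * np.tanh(u) + 0.1 * np.tanh(v)

def sigma(t, u, v):                      # diffusion with sigma(t,0,0) = 0 (here m = n)
    return 0.1 * np.diag(u) + 0.1 * np.diag(v)

def I_k(x):                              # globally Lipschitz impulse map, I_k(0) = 0
    return -0.1 * x

rho = 1.0                                # impulse spacing, as in (H5)
next_impulse = rho

# constant initial segment psi(t) on [-tau, 0]
hist = np.tile(np.array([1.0, -0.5]), (max(d1, d2) + 1, 1))
x = hist[-1].copy()

for i in range(steps):
    t = i * dt
    x_d1 = hist[-(d1 + 1)]               # x(t - tau_1)
    x_d2 = hist[-(d2 + 1)]               # x(t - tau_2)
    dW = rng.normal(scale=np.sqrt(dt), size=n)
    x = x + (-(A + delta_A(t)) @ x + f(t, x, x_d1)) * dt + sigma(t, x, x_d2) @ dW
    if t + dt >= next_impulse:           # jump: x(t_k^+) = x(t_k) + I_k(x(t_k))
        x = x + I_k(x)
        next_impulse += rho
    hist = np.vstack([hist[1:], x])      # slide the delay buffer

print("squared norm of x(T) along this sample path:", float(np.sum(x ** 2)))
```

Averaging the squared norm over many such sample paths would give a Monte Carlo estimate of $E|x(t)|_1^2$, the quantity whose exponential decay is studied below.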

Definition 2.1 System (2.1) is said to be exponentially stable in mean square for all admissible uncertainties if, for every solution $x$ of (2.1), there exists a pair of positive constants $\beta$ and $\mu$ such that

\[
E|x(t)|_1^2\le\mu\,E\|\psi\|_2^2\,e^{-\beta t},\qquad t\ge 0.
\]

In order to prove the exponential stability in mean square of system (2.1), we need the following lemmas.

Lemma 2.1 (Krasnoselskii)

Suppose that $\Omega$ is a Banach space and $X$ is a bounded, convex, and closed subset of $\Omega$. Let $U,S:X\to\Omega$ satisfy the following conditions:

  (1) $Ux+Sy\in X$ for any $x,y\in X$;

  (2) $U$ is a contraction mapping;

  (3) $S$ is continuous and compact.

Then $U+S$ has a fixed point in $X$.

Lemma 2.2 ($C_p$ inequality)

If $X_i\in L^p(\Omega,\mathbb{R}^n)$, then

\[
E\Bigl|\sum_{i=1}^{n}X_i\Bigr|^p\le\sum_{i=1}^{n}E|X_i|^p,\quad\text{for }0<p\le 1,
\]

and

\[
E\Bigl|\sum_{i=1}^{n}X_i\Bigr|^p\le n^{p-1}\sum_{i=1}^{n}E|X_i|^p,\quad\text{for }p>1.
\]
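As a quick numerical sanity check of the second ($p>1$) estimate with $p=2$, the short sketch below compares both sides of the $C_p$ inequality for a few scalar random variables; the distributions are arbitrary illustrative choices.

```python
# Quick Monte Carlo check of E|sum X_i|^p <= n^(p-1) * sum E|X_i|^p for p = 2.
# The three scalar random variables are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
p, n, samples = 2, 3, 200_000

X = np.stack([rng.normal(0.0, 1.0, samples),
              rng.uniform(-2.0, 2.0, samples),
              rng.exponential(1.0, samples) - 1.0])

lhs = np.mean(np.abs(X.sum(axis=0)) ** p)
rhs = n ** (p - 1) * np.sum(np.mean(np.abs(X) ** p, axis=1))
print(f"E|sum X_i|^p = {lhs:.4f}  <=  n^(p-1) sum E|X_i|^p = {rhs:.4f}")
```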

3 Main results

Let $(\mathcal{B},\|\cdot\|_{\mathcal{B}})$ be the Banach space of all $\mathcal{F}_0$-adapted processes $\varphi(t,\omega):[-\tau,\infty)\times\Omega\to\mathbb{R}^n$ such that $\varphi:[-\tau,\infty)\to L^p_{\mathcal{F}_0}(\Omega,\mathbb{R}^n)$ is continuous for $t\ne t_k$, the limits $\lim_{t\to t_k^-}\varphi(t,\cdot)$ and $\lim_{t\to t_k^+}\varphi(t,\cdot)$ exist, and $\lim_{t\to t_k^-}\varphi(t,\cdot)=\varphi(t_k,\cdot)$, $k=1,2,\dots$, equipped with the norm

\[
\|\varphi\|_{\mathcal{B}}:=\sup_{t\ge 0}E|\varphi(t)|_1^2,\quad\text{for }\varphi\in\mathcal{B}.
\]

Let $\Lambda$ be the set of functions $\varphi\in\mathcal{B}$ such that $\varphi(s)=\psi(s)$ on $s\in[-\tau,0]$ and $e^{\alpha t}E|\varphi(t,\omega)|_1^2\to 0$ as $t\to\infty$. It is clear that $\Lambda$ is a bounded, convex, and closed subset of $\mathcal{B}$.

To obtain our results, we suppose the following conditions are satisfied:

  • (H1) there exist $\mu_1,\mu_2>0$ such that $|f(t,u,v)-f(t,\bar{u},\bar{v})|_1\le\mu_1|u-\bar{u}|_1+\mu_2|v-\bar{v}|_1$;

  • (H2) there exist $\nu_1,\nu_2>0$ such that $|\sigma(t,u,v)-\sigma(t,\bar{u},\bar{v})|_1^2\le\nu_1|u-\bar{u}|_1^2+\nu_2|v-\bar{v}|_1^2$;

  • (H3) there exists an $\alpha>0$ such that $\alpha<\min\{a_1,a_2,\dots,a_n\}$;

  • (H4) for $k=1,2,3,\dots$, the mapping $I_k(\cdot)$ satisfies $I_k(0)\equiv 0$ and is globally Lipschitz with Lipschitz constant $p_k$;

  • (H5) there exists a constant $\rho$ such that $\inf_{k=1,2,\dots}\{t_k-t_{k-1}\}\ge\rho$;

  • (H6) there exists a constant $p$ such that $p_k\le p\rho$ for $k=1,2,\dots$.

The solution $x(t):=x(t;0,\psi)$ of system (2.1) is, in the time variable $t$, a piecewise continuous vector-valued function with discontinuities of the first kind at the points $t_k$ ($k=1,2,\dots$), where it is left continuous, i.e.,

\[
x(t_k^-)=x(t_k),\qquad x(t_k^+)=x(t_k)+I_k\bigl(x(t_k)\bigr),\quad k=1,2,\dots.
\]

Theorem 3.1 Assume (H1)-(H6) hold and the following condition is satisfied:

\[
(P_1)\quad 3\Bigl\{\frac{n^2}{2\lambda_{\min}(A)}\bigl[\bigl(\|\Delta A(t)\|_3+\mu_1+\mu_2\bigr)^2+\nu_1+\nu_2\bigr]+n^2p^2\Bigl(\frac{1}{\lambda_{\min}(A)}+\rho\Bigr)^2\Bigr\}<1,
\]

then system (2.1) is exponentially stable in mean square for all admissible uncertainties; that is, $e^{\alpha t}E|x(t)|_1^2\to 0$ as $t\to\infty$.

Proof System (2.1) is equivalent to

\[
\begin{aligned}
x(t)={}&\exp(-At)\psi(0)+\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\\
&{}+\int_0^{t}\exp\bigl(A(s-t)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)+\sum_{0<t_k<t}\exp\bigl[-A(t-t_k)\bigr]I_k\bigl(x(t_k)\bigr).
\end{aligned}
\]
(3.1)

Let

\[
\begin{aligned}
J_1(t)&:=\exp(-At)\psi(0),\\
J_2(t)&:=\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds,\\
J_3(t)&:=\int_0^{t}\exp\bigl(A(s-t)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s),\\
J_4(t)&:=\sum_{0<t_k<t}\exp\bigl[-A(t-t_k)\bigr]I_k\bigl(x(t_k)\bigr).
\end{aligned}
\]

Define an operator $Q$ by $(Qx)(t)=\psi(t)$ for $t\in[-\tau,0]$, and for $t\ge 0$ define $(Qx)(t):=J_1(t)+J_2(t)+J_3(t)+J_4(t)$ (i.e., the right-hand side of (3.1)). From the definition of $\Lambda$, we have $E|\varphi(t)|_1^2<\infty$ for all $t\ge 0$ and $\varphi\in\Lambda$.

Next, we prove that $Q\Lambda\subset\Lambda$. It is clear that $(Qx)(t)$ is continuous on $[-\tau,0]$. For a fixed time $t>0$ with $t\ne t_k$, $k=1,2,\dots$, it is easy to check that $J_1(t)$, $J_2(t)$, and $J_4(t)$ are continuous in mean square. In the following, we check the mean square continuity of $J_3(t)$ at a fixed time $t\ne t_k$ ($k=1,2,\dots$).

Let $x\in\Lambda$ and let $r\in\mathbb{R}$ with $|r|$ sufficiently small; we obtain

\[
\begin{aligned}
E|J_3(t+r)-J_3(t)|_1^2
={}&E\Bigl|\int_0^{t+r}\exp\bigl(A(s-t-r)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)\\
&\quad{}-\int_0^{t}\exp\bigl(A(s-t)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)\Bigr|_1^2\\
={}&E\Bigl|\int_0^{t}\bigl[\exp\bigl(A(s-t-r)\bigr)-\exp\bigl(A(s-t)\bigr)\bigr]\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)\\
&\quad{}+\int_t^{t+r}\exp\bigl(A(s-t-r)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)\Bigr|_1^2\\
\le{}&2\Bigl(\int_0^{t}\bigl\|\exp\bigl(A(s-t-r)\bigr)-\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,E\bigl|\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\bigr|_1^2\,ds\\
&\quad{}+\int_t^{t+r}\bigl\|\exp\bigl(A(s-t-r)\bigr)\bigr\|_3^2\,E\bigl|\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\bigr|_1^2\,ds\Bigr)\\
\to{}&0\quad\text{as }r\to 0.
\end{aligned}
\]

Hence, $(Qx)(t)$ is continuous in mean square at every fixed time $t\ne t_k$, $k=1,2,\dots$. On the other hand, at $t=t_k$, it is easy to check that $J_1(t)$, $J_2(t)$, and $J_3(t)$ are continuous in mean square.

Let $r<0$ with $|r|$ small enough; we have

\[
\begin{aligned}
E|J_4(t_k+r)-J_4(t_k)|_1^2
={}&E\Bigl|\sum_{0<t_m<t_k+r}\exp\bigl[-A(t_k+r-t_m)\bigr]I_m\bigl(x(t_m)\bigr)\\
&\quad{}-\sum_{0<t_m<t_k}\exp\bigl[-A(t_k-t_m)\bigr]I_m\bigl(x(t_m)\bigr)\Bigr|_1^2\\
\le{}&E\Bigl|\bigl[\exp\bigl(-A(t_k+r)\bigr)-\exp(-At_k)\bigr]\sum_{0<t_m<t_k}\exp(At_m)I_m\bigl(x(t_m)\bigr)\Bigr|_1^2,
\end{aligned}
\]

which implies that $\lim_{r\to 0^-}E|J_4(t_k+r)-J_4(t_k)|_1^2=0$.

Let $r>0$ be small enough; we have

\[
\begin{aligned}
E|J_4(t_k+r)-J_4(t_k)|_1^2
={}&E\Bigl|\bigl[\exp\bigl(-A(t_k+r)\bigr)-\exp(-At_k)\bigr]\sum_{0<t_m<t_k}\exp(At_m)I_m\bigl(x(t_m)\bigr)\\
&\quad{}+\exp(-Ar)I_k\bigl(x(t_k)\bigr)\Bigr|_1^2,
\end{aligned}
\]

which implies that $\lim_{r\to 0^+}E|J_4(t_k+r)-J_4(t_k)|_1^2=E|I_k(x(t_k))|_1^2$.

Hence, we see that $(Qx)(t):[-\tau,\infty)\to L^p_{\mathcal{F}_0}(\Omega,\mathbb{R}^n)$ is continuous in mean square for $t\ne t_k$, and at $t=t_k$ the limits $\lim_{t\to t_k^+}(Qx)(t)$ and $\lim_{t\to t_k^-}(Qx)(t)$ exist. Furthermore, we also obtain $\lim_{t\to t_k^-}(Qx)(t)=(Qx)(t_k)\ne\lim_{t\to t_k^+}(Qx)(t)$.

It follows from (3.1) and Lemma 2.2 that

\[
e^{\alpha t}E|(Qx)(t)|_1^2\le 4e^{\alpha t}\sum_{i=1}^{4}E|J_i(t)|_1^2.
\]

By (H3), it is easy to see that $e^{\alpha t}E|J_1(t)|_1^2\to 0$ as $t\to\infty$. Now we prove that $e^{\alpha t}E|J_2(t)|_1^2\to 0$, $e^{\alpha t}E|J_3(t)|_1^2\to 0$, and $e^{\alpha t}E|J_4(t)|_1^2\to 0$ as $t\to\infty$.

Note that, for any $\epsilon>0$, there exists $t^*>0$ such that $s\ge t^*-\tau$ implies $e^{\alpha s}E|x(s)|_1^2<\epsilon$. Hence, from (H1) and (H3) we have

\[
\begin{aligned}
e^{\alpha t}E|J_2(t)|_1^2
={}&e^{\alpha t}E\Bigl|\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\Bigr|_1^2\\
\le{}&e^{\alpha t}E\int_0^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,\bigl|-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr|_1^2\,ds\\
={}&e^{\alpha t}E\int_0^{t^*}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,\bigl|-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr|_1^2\,ds\\
&{}+e^{\alpha t}E\int_{t^*}^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,\bigl|-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr|_1^2\,ds\\
\le{}&e^{\alpha t}E\int_0^{t^*}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\bigl[\|\Delta A(s)\|_3|x(s)|_1+\mu_1|x(s)|_1+\mu_2|x(s-\tau_1)|_1\bigr]^2\,ds\\
&{}+e^{\alpha t}E\int_{t^*}^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\bigl[\|\Delta A(s)\|_3|x(s)|_1+\mu_1|x(s)|_1+\mu_2|x(s-\tau_1)|_1\bigr]^2\,ds\\
\le{}&e^{(\alpha-2\lambda_{\min}(A))t}n^2\bigl(\|\Delta A(s)\|_3+\mu_1+\mu_2\bigr)^2E\Bigl(\sup_{-\tau\le s\le t^*}|x(s)|_1^2\Bigr)\int_0^{t^*}e^{2\lambda_{\min}(A)s}\,ds\\
&{}+2e^{\alpha t}n^2\int_{t^*}^{t}\Bigl[\bigl(\|\Delta A(s)\|_3+\mu_1\bigr)^2e^{-\alpha s}\,e^{\alpha s}E\bigl(|x(s)|_1^2\bigr)\\
&\qquad{}+\mu_2^2\,e^{-\alpha(s-\tau_1)}\,e^{\alpha(s-\tau_1)}E|x(s-\tau_1)|_1^2\Bigr]e^{2\lambda_{\min}(A)(s-t)}\,ds,
\end{aligned}
\]

where $\|\exp(A(s-t))\|_3=\sum_{i=1}^{n}e^{a_i(s-t)}$, and $\lambda_{\min}(A)$ denotes the minimal eigenvalue of $A$. Thus, we have $e^{\alpha t}E|J_2(t)|_1^2\to 0$ as $t\to\infty$.

From (H2) and (H3), we have

\[
\begin{aligned}
e^{\alpha t}E|J_3(t)|_1^2
={}&e^{\alpha t}E\Bigl|\int_0^{t}\exp\bigl(A(s-t)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)\Bigr|_1^2\\
\le{}&e^{\alpha t}E\int_0^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,\bigl|\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\bigr|_1^2\,ds\\
={}&e^{\alpha t}E\int_0^{t^*}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,\bigl|\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\bigr|_1^2\,ds\\
&{}+e^{\alpha t}E\int_{t^*}^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\,\bigl|\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\bigr|_1^2\,ds\\
\le{}&e^{(\alpha-2\lambda_{\min}(A))t}n^2(\nu_1+\nu_2)E\Bigl(\sup_{-\tau\le s\le t^*}|x(s)|_1^2\Bigr)\int_0^{t^*}e^{2\lambda_{\min}(A)s}\,ds\\
&{}+e^{\alpha t}n^2(\nu_1+\nu_2)\int_{t^*}^{t}e^{-\alpha s}\,e^{\alpha s}E\Bigl(\sup_{s-\tau\le\theta\le s}|x(\theta)|_1^2\Bigr)e^{2\lambda_{\min}(A)(s-t)}\,ds\\
\to{}&0\quad\text{as }t\to\infty.
\end{aligned}
\]

Since $x(t)\in\Lambda$, we have $\lim_{t\to\infty}e^{\alpha t}E|x(t)|_1^2=0$. Then, for any $\epsilon>0$, there exists a non-impulsive point $T>0$ such that $s\ge T$ implies $e^{\alpha s}E|x(s)|_1^2<\epsilon$. It then follows from conditions (H4)-(H6) that

\[
\begin{aligned}
e^{\alpha t}E|J_4(t)|_1^2
={}&e^{\alpha t}E\Bigl|\sum_{0<t_k<t}\exp\bigl[-A(t-t_k)\bigr]I_k\bigl(x(t_k)\bigr)\Bigr|_1^2\\
\le{}&e^{\alpha t}E\Bigl|\sum_{0<t_k<t}\bigl\|\exp\bigl(A(t_k-t)\bigr)\bigr\|_3\,\bigl|I_k\bigl(x(t_k)\bigr)\bigr|_1\Bigr|^2\\
\le{}&e^{\alpha t}E\Bigl|\sum_{0<t_k<t}n e^{\lambda_{\min}(A)(t_k-t)}p_k|x(t_k)|_1\Bigr|^2\\
\le{}&e^{\alpha t}E\Bigl|\sum_{0<t_k<T}n e^{\lambda_{\min}(A)(t_k-t)}p_k|x(t_k)|_1+\sum_{T<t_k<t}n e^{\lambda_{\min}(A)(t_k-t)}p\rho|x(t_k)|_1\Bigr|^2\\
\le{}&2e^{\alpha t}\Bigl[E\Bigl|\sum_{0<t_k<T}n e^{\lambda_{\min}(A)(t_k-t)}p_k|x(t_k)|_1\Bigr|^2+E\Bigl|\sum_{T<t_k<t}n e^{\lambda_{\min}(A)(t_k-t)}p\rho|x(t_k)|_1\Bigr|^2\Bigr]\\
\le{}&2e^{(\alpha-2\lambda_{\min}(A))t}n^2p_k^2E\Bigl|\sum_{0<t_k<T}e^{\lambda_{\min}(A)t_k}|x(t_k)|_1\Bigr|^2\\
&{}+2e^{\alpha t}E\Bigl|\sum_{T<t_r<t_k}n e^{\lambda_{\min}(A)(t_r-t)}p\,(t_{r+1}-t_r)|x(t_r)|_1+n e^{\lambda_{\min}(A)(t_k-t)}p\rho|x(t_k)|_1\Bigr|^2\\
\le{}&2e^{(\alpha-2\lambda_{\min}(A))t}n^2p_k^2E\Bigl|\sum_{0<t_k<T}e^{\lambda_{\min}(A)t_k}|x(t_k)|_1\Bigr|^2\\
&{}+2e^{\alpha t}E\Bigl|\int_T^{t}n e^{\lambda_{\min}(A)(s-t)}p\,|x(s)|_1\,ds+np\rho|x(t)|_1\Bigr|^2\\
\le{}&2e^{(\alpha-2\lambda_{\min}(A))t}n^2p_k^2E\Bigl|\sum_{0<t_k<T}e^{\lambda_{\min}(A)t_k}|x(t_k)|_1\Bigr|^2\\
&{}+4e^{\alpha t}E\Bigl|\int_T^{t}n e^{\lambda_{\min}(A)(s-t)}p\,|x(s)|_1\,ds\Bigr|^2+4e^{\alpha t}E\bigl|np\rho|x(t)|_1\bigr|^2\\
\le{}&2e^{(\alpha-2\lambda_{\min}(A))t}n^2p_k^2E\Bigl|\sum_{0<t_k<T}e^{\lambda_{\min}(A)t_k}|x(t_k)|_1\Bigr|^2\\
&{}+4n^2p^2e^{(\alpha-2\lambda_{\min}(A))t}\int_T^{t}e^{(2\lambda_{\min}(A)-\alpha)s}\,e^{\alpha s}E|x(s)|_1^2\,ds+4n^2p^2\rho^2\epsilon\\
\le{}&2e^{(\alpha-2\lambda_{\min}(A))t}n^2p_k^2E\Bigl|\sum_{0<t_k<T}e^{\lambda_{\min}(A)t_k}|x(t_k)|_1\Bigr|^2\\
&{}+4n^2p^2e^{(\alpha-2\lambda_{\min}(A))t}\,\epsilon\int_T^{t}e^{(2\lambda_{\min}(A)-\alpha)s}\,ds+4n^2p^2\rho^2\epsilon\\
\le{}&2e^{(\alpha-2\lambda_{\min}(A))t}n^2p_k^2E\Bigl|\sum_{0<t_k<T}e^{\lambda_{\min}(A)t_k}|x(t_k)|_1\Bigr|^2+\frac{4n^2p^2\epsilon}{2\lambda_{\min}(A)-\alpha}+4n^2p^2\rho^2\epsilon\\
\to{}&0\quad\text{as }t\to\infty.
\end{aligned}
\]

Thus we conclude that $Q:\Lambda\to\Lambda$.

Finally, we prove that $Q$ is a contraction mapping. For any $\varphi,\phi\in\Lambda$, we obtain

\[
\begin{aligned}
\sup_{t\ge-\tau}&E|(Q\varphi)(t)-(Q\phi)(t)|_1^2\\
={}&\sup_{t\ge-\tau}\Bigl\{E\Bigl|\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[-\Delta A(s)\bigl(\varphi(s)-\phi(s)\bigr)\\
&\qquad{}+\bigl(f\bigl(s,\varphi(s),\varphi(s-\tau_1)\bigr)-f\bigl(s,\phi(s),\phi(s-\tau_1)\bigr)\bigr)\bigr]\,ds\\
&\quad{}+\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[\sigma\bigl(s,\varphi(s),\varphi(s-\tau_2)\bigr)-\sigma\bigl(s,\phi(s),\phi(s-\tau_2)\bigr)\bigr]\,d\omega(s)\\
&\quad{}+\sum_{0<t_k<t}e^{-A(t-t_k)}\bigl[I_k\bigl(\varphi(t_k)\bigr)-I_k\bigl(\phi(t_k)\bigr)\bigr]\Bigr|_1^2\Bigr\}\\
\le{}&3\sup_{t\ge-\tau}\Bigl\{E\int_0^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\bigl[\|\Delta A(s)\|_3|\varphi(s)-\phi(s)|_1+\mu_1|\varphi(s)-\phi(s)|_1\\
&\qquad{}+\mu_2|\varphi(s-\tau_1)-\phi(s-\tau_1)|_1\bigr]^2\,ds\\
&\quad{}+E\int_0^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\bigl[\nu_1|\varphi(s)-\phi(s)|_1^2+\nu_2|\varphi(s-\tau_2)-\phi(s-\tau_2)|_1^2\bigr]\,ds\\
&\quad{}+E\Bigl|\sum_{0<t_k<t}\bigl\|\exp\bigl(A(t_k-t)\bigr)\bigr\|_3\,p_k|\varphi(t_k)-\phi(t_k)|_1\Bigr|^2\Bigr\}\\
\le{}&3\Bigl\{\frac{n^2}{2\lambda_{\min}(A)}\bigl[\bigl(\|\Delta A(t)\|_3+\mu_1+\mu_2\bigr)^2+\nu_1+\nu_2\bigr]\\
&\quad{}+n^2p^2E\Bigl|\sum_{0<t_k<t}e^{\lambda_{\min}(A)(t_k-t)}\rho\Bigr|^2\Bigr\}\sup_{s\ge-\tau}E|\varphi(s)-\phi(s)|_1^2\\
\le{}&3\Bigl\{\frac{n^2}{2\lambda_{\min}(A)}\bigl[\bigl(\|\Delta A(t)\|_3+\mu_1+\mu_2\bigr)^2+\nu_1+\nu_2\bigr]\\
&\quad{}+n^2p^2E\Bigl|\sum_{0<t_r<t_k}e^{\lambda_{\min}(A)(t_r-t)}(t_{r+1}-t_r)+e^{\lambda_{\min}(A)(t_k-t)}\rho\Bigr|^2\Bigr\}\sup_{s\ge-\tau}E|\varphi(s)-\phi(s)|_1^2\\
\le{}&3\Bigl\{\frac{n^2}{2\lambda_{\min}(A)}\bigl[\bigl(\|\Delta A(t)\|_3+\mu_1+\mu_2\bigr)^2+\nu_1+\nu_2\bigr]+n^2p^2\Bigl(\frac{1}{\lambda_{\min}(A)}+\rho\Bigr)^2\Bigr\}\sup_{s\ge-\tau}E|\varphi(s)-\phi(s)|_1^2.
\end{aligned}
\]

From the condition (P1), we find that $Q$ is a contraction mapping. Hence, by the contraction mapping principle, $Q$ has a unique fixed point $x(t)$, which is a solution of (2.1) with $x(t)=\psi(t)$ for $t\in[-\tau,0]$ and $e^{\alpha t}E|x(t)|_1^2\to 0$ as $t\to\infty$. □
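To make condition (P1) concrete, the sketch below evaluates its left-hand side for a hypothetical parameter set; every numerical value (the matrix $A$, the assumed bound on $\|\Delta A(t)\|_3$, the constants $\mu_i$, $\nu_i$, $p$, and $\rho$) is an assumption chosen only to show how the criterion of Theorem 3.1 would be checked in practice.

```python
# Sketch: evaluate the left-hand side of condition (P1) in Theorem 3.1.
# All parameter values are illustrative assumptions, not taken from the paper.
import numpy as np

n = 2                                    # number of neurons
a = np.array([1.5, 2.0])                 # diagonal of A = diag(a_1, ..., a_n)
lam_min = float(a.min())                 # lambda_min(A)
dA_bound = 0.05                          # assumed bound on ||Delta A(t)||_3
mu1, mu2 = 0.1, 0.1                      # Lipschitz constants of f, (H1)
nu1, nu2 = 0.01, 0.01                    # constants of sigma, (H2)
p, rho = 0.1, 1.0                        # impulse data from (H4)-(H6)

lhs_P1 = 3.0 * (
    n ** 2 / (2.0 * lam_min) * ((dA_bound + mu1 + mu2) ** 2 + nu1 + nu2)
    + n ** 2 * p ** 2 * (1.0 / lam_min + rho) ** 2
)
print(f"(P1) left-hand side = {lhs_P1:.4f}  (Theorem 3.1 requires < 1)")
```

For these illustrative values the left-hand side is well below 1, so a network with such parameters would fall within the scope of Theorem 3.1.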

The second result is established using Krasnoselskii’s fixed point theorem.

Theorem 3.2 Assume (H1)-(H6) hold and the following condition is satisfied:

\[
(P_2)\quad\frac{n^2}{2\lambda_{\min}(A)}(\nu_1+\nu_2)<1,
\]

then system (2.1) is exponentially stable in mean square for all admissible uncertainties; that is, $e^{\alpha t}E|x(t)|_1^2\to 0$ as $t\to\infty$.

Proof For $x\in X$, define the operators $U:X\to X$ and $S:X\to X$, respectively, by

\[
(Ux)(t)=\exp(-At)\psi(0)+\int_0^{t}\exp\bigl(A(s-t)\bigr)\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)\,d\omega(s)
\]

and

\[
(Sx)(t)=\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds+\sum_{0<t_k<t}\exp\bigl[-A(t-t_k)\bigr]I_k\bigl(x(t_k)\bigr).
\]

By the proof of Theorem 3.1, we can verify that $Sx+Uy\in\Lambda$ when $x,y\in\Lambda$ and that $S$ is mean square continuous.

Next, we show that $U$ is a contraction mapping. For $x,y\in\Lambda$, we have

\[
\begin{aligned}
\sup_{t\ge-\tau}E|(Ux)(t)-(Uy)(t)|_1^2
={}&\sup_{t\ge-\tau}E\Bigl|\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[\sigma\bigl(s,x(s),x(s-\tau_2)\bigr)-\sigma\bigl(s,y(s),y(s-\tau_2)\bigr)\bigr]\,d\omega(s)\Bigr|_1^2\\
\le{}&\sup_{t\ge-\tau}E\int_0^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3^2\bigl[\nu_1|x(s)-y(s)|_1^2+\nu_2|x(s-\tau_2)-y(s-\tau_2)|_1^2\bigr]\,ds\\
\le{}&\frac{n^2}{2\lambda_{\min}(A)}(\nu_1+\nu_2)\sup_{t\ge-\tau}E|x(t)-y(t)|_1^2.
\end{aligned}
\]

From the condition (P2), we find that U is a contraction mapping.

Finally, we prove that S is compact.

Let $D\subset\Lambda$ be a bounded set with $|x|_1\le M$ for all $x\in D$; we have

\[
\begin{aligned}
|(Sx)(t)|_1
={}&\Bigl|\int_0^{t}\exp\bigl(A(s-t)\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\\
&\quad{}+\sum_{0<t_k<t}\exp\bigl[-A(t-t_k)\bigr]I_k\bigl(x(t_k)\bigr)\Bigr|_1\\
\le{}&\int_0^{t}\bigl\|\exp\bigl(A(s-t)\bigr)\bigr\|_3\bigl[\|\Delta A(s)\|_3|x(s)|_1+\mu_1|x(s)|_1+\mu_2|x(s-\tau_1)|_1\bigr]\,ds\\
&\quad{}+\sum_{0<t_k<t}\bigl\|\exp\bigl(A(t_k-t)\bigr)\bigr\|_3\,p\rho\,|x(t_k)|_1\\
\le{}&n\bigl(\|\Delta A(s)\|_3+\mu_1+\mu_2\bigr)M\int_0^{t}e^{\lambda_{\min}(A)(s-t)}\,ds+np\rho M\\
\le{}&n\Bigl(\frac{1}{\lambda_{\min}(A)}\bigl(\|\Delta A(s)\|_3+\mu_1+\mu_2\bigr)+p\rho\Bigr)M.
\end{aligned}
\]

Therefore, we can conclude that Sx is uniformly bounded.

Further, let $x\in D$ and $\underline{t},\overline{t}\in[t_{k-1},t_k]$ with $\underline{t}<\overline{t}$; we have

\[
\begin{aligned}
|(Sx)(\overline{t})-(Sx)(\underline{t})|_1
={}&\Bigl|\int_0^{\overline{t}}\exp\bigl(A(s-\overline{t})\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\\
&\quad{}+\sum_{0<t_m<\overline{t}}\exp\bigl[-A(\overline{t}-t_m)\bigr]I_m\bigl(x(t_m)\bigr)\\
&\quad{}-\int_0^{\underline{t}}\exp\bigl(A(s-\underline{t})\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\\
&\quad{}-\sum_{0<t_m<\underline{t}}\exp\bigl[-A(\underline{t}-t_m)\bigr]I_m\bigl(x(t_m)\bigr)\Bigr|_1\\
={}&\Bigl|\int_0^{\underline{t}}\bigl[\exp\bigl(A(s-\overline{t})\bigr)-\exp\bigl(A(s-\underline{t})\bigr)\bigr]\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\\
&\quad{}+\int_{\underline{t}}^{\overline{t}}\exp\bigl(A(s-\overline{t})\bigr)\bigl[-\Delta A(s)x(s)+f\bigl(s,x(s),x(s-\tau_1)\bigr)\bigr]\,ds\\
&\quad{}+\sum_{0<t_m<\overline{t}}\bigl[\exp\bigl(A(t_m-\overline{t})\bigr)-\exp\bigl(A(t_m-\underline{t})\bigr)\bigr]I_m\bigl(x(t_m)\bigr)\Bigr|_1\\
\to{}&0\quad\text{as }\underline{t}\to\overline{t}.
\end{aligned}
\]

Thus, the equicontinuity of $S$ is obtained. According to the PC-type Ascoli-Arzelà lemma [[15], Lemma 2.4], $S(D)$ is relatively compact in $\Lambda$. Therefore $S$ is compact. By Lemma 2.1, $U+S$ has a fixed point $x$ in $\Lambda$; we note that $x(s)=\psi(s)$ on $[-\tau,0]$ and $e^{\alpha t}E|x(t)|_1^2\to 0$ as $t\to\infty$. This completes the proof. □
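Condition (P2) can be checked in the same way as (P1); the short sketch below evaluates its left-hand side for the same illustrative values of $\nu_1$, $\nu_2$, and $\lambda_{\min}(A)$ used after Theorem 3.1 (again, these are assumptions, not values from the paper).

```python
# Sketch: evaluate the left-hand side of condition (P2) in Theorem 3.2
# for the same illustrative values used after Theorem 3.1.
n, lam_min = 2, 1.5                      # assumed network size and lambda_min(A)
nu1, nu2 = 0.01, 0.01                    # assumed constants from (H2)
lhs_P2 = n ** 2 / (2.0 * lam_min) * (nu1 + nu2)
print(f"(P2) left-hand side = {lhs_P2:.4f}  (Theorem 3.2 requires < 1)")
```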