1 Introduction

In this paper, we study the existence of positive solutions to the second-order singular nonlinear impulsive integro-differential equation

\[
\begin{cases}
y''(t) + h(t) f\bigl(t, y(t), y'(t), (Ty)(t), (Sy)(t)\bigr) = \theta, & t \in J,\ t \neq t_k,\\
\Delta y|_{t=t_k} = I_k\bigl(y(t_k)\bigr), & k = 1, \dots, m,\\
\Delta y'|_{t=t_k} = -\bar{I}_k\bigl(y(t_k), y'(t_k)\bigr), & k = 1, \dots, m,\\
\alpha y(0) - \beta y'(0) = \theta, \qquad \gamma y(1) + \delta y'(1) = \theta,
\end{cases}
\]
(1.1)

where α, β, γ, δ ≥ 0, ρ = βγ + αγ + αδ > 0, I = [0,1], J = (0,1), 0 < t_1 < t_2 < ⋯ < t_m < 1, J′ = J∖{t_1, t_2, …, t_m}, J̄ = [0,1], J_0 = (0, t_1], J_k = (t_k, t_{k+1}], k = 1, …, m−1, J_m = (t_m, 1], f ∈ C[J×P×P×P×P, P], and P is a positive cone in the real Banach space E. θ is the zero element of E, I_k ∈ C[P, P], Ī_k ∈ C[P×P, P], and

\[
(Ty)(t) = \int_0^t K(t,s)\, y(s)\, ds, \qquad (Sy)(t) = \int_0^1 H(t,s)\, y(s)\, ds,
\]
(1.2)

in which K ∈ C[D, ℝ₊], D = {(t,s) ∈ J×J : t ≥ s}, H ∈ C[J̄×J̄, ℝ₊], and K_0 = max{K(t,s) : (t,s) ∈ D̄}, H_0 = max{H(t,s) : (t,s) ∈ J̄×J̄}. Δy|_{t=t_k} and Δy′|_{t=t_k} denote the jumps of y(t) and y′(t) at t = t_k, i.e.,

\[
\Delta y|_{t=t_k} = y(t_k^+) - y(t_k^-), \qquad \Delta y'|_{t=t_k} = y'(t_k^+) - y'(t_k^-),
\]

where y(t_k^+), y′(t_k^+) and y(t_k^−), y′(t_k^−) denote the right-hand and left-hand limits of y(t) and y′(t) at t = t_k, respectively. The function h ∈ C(J, ℝ₊) and may be singular at t = 0 and/or t = 1.
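To make the operators T and S in (1.2) concrete, the following sketch evaluates (Ty)(t) and (Sy)(t) by simple quadrature in the scalar case E = ℝ. The kernels K, H and the test function y below are placeholder choices made only for illustration; they are not taken from the paper.

```python
import numpy as np

def T(y, K, t, n=400):
    """Volterra term (Ty)(t) = int_0^t K(t,s) y(s) ds, by the trapezoidal rule."""
    s = np.linspace(0.0, t, n)
    return np.trapz(K(t, s) * y(s), s)

def S(y, H, t, n=400):
    """Fredholm term (Sy)(t) = int_0^1 H(t,s) y(s) ds, by the trapezoidal rule."""
    s = np.linspace(0.0, 1.0, n)
    return np.trapz(H(t, s) * y(s), s)

if __name__ == "__main__":
    K = lambda t, s: np.exp(t - s)     # placeholder kernel on D = {(t,s): t >= s}
    H = lambda t, s: np.exp(2.0 * s)   # placeholder kernel on [0,1] x [0,1]
    y = lambda s: s * (1.0 - s)        # placeholder nonnegative test function

    print("(Ty)(0.5) ≈", T(y, K, 0.5))
    print("(Sy)(0.5) ≈", S(y, H, 0.5))
```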

Boundary value problems for impulsive differential equations arise from many nonlinear problems in the sciences, such as physics, population dynamics, biotechnology, and economics (see [1, 2, 4-14, 16-18]). It is well known that impulsive differential equations contain jumps and/or impulses, which are a main characteristic feature in computational biology. Over the past 15 years, significant advances have been achieved in the theory of impulsive differential equations. However, the corresponding theory for impulsive integro-differential equations in Banach spaces has developed more slowly. Recently, Guo [5-8] established the existence of a solution, of multiple solutions, and of extremal solutions for nonlinear impulsive integro-differential equations with nonsingular argument in Banach spaces. The main tools in [5-8] are the Schauder fixed-point theorem, fixed-point index theory, and upper and lower solutions combined with the monotone iterative technique, respectively. Conditions on the Kuratowski measure of noncompactness play an important role in the proofs in [5-8], but compactness-type conditions of this kind are difficult to verify in abstract spaces. It is therefore an interesting and important problem to remove or weaken such compactness-type conditions.

Inspired and motivated by the works above, the aim of this paper is to establish the existence of positive solutions for the boundary value problem (1.1) under simpler conditions. The main results for problem (1.1) are obtained by means of fixed-point index theory and a fixed-point theorem. More specifically, in the proofs of these theorems we construct a special cone for a strict set contraction operator. Our main results improve and generalize the corresponding results of Guo [5-8], and our method differs from those used there.

The rest of the paper is organized as follows. In Section 2, we present some known results and introduce the conditions to be used later. The main theorems are formulated and proved in Section 3. Finally, in Section 4, some discussion and an example of a singular nonlinear integro-differential equation are presented to demonstrate the application of the main results.

2 Preliminaries and lemmas

In this section, we shall state some necessary definitions and preliminary results.

Definition 2.1 Let E be a real Banach space. A nonempty closed set PE is called a cone if it satisfies the following two conditions:

  1. (1)

x ∈ P, λ ≥ 0 implies λx ∈ P;

  2. (2)

x ∈ P, −x ∈ P implies x = θ.

A cone P is said to be solid if its interior P̊ is nonempty, i.e., P̊ ≠ ∅. A cone P is called generating if E = P − P, i.e., every element y ∈ E can be represented in the form y = x − z, where x, z ∈ P. A cone P in E induces a partial ordering in E given by u ≤ v if v − u ∈ P. If u ≤ v and u ≠ v, we write u < v; if the cone P is solid and v − u ∈ P̊, we write u ≪ v.

Definition 2.2 A cone P ⊂ E is said to be normal if there exists a positive constant N such that ‖x + y‖ ≥ N for all x, y ∈ P with ‖x‖ = ‖y‖ = 1.

Definition 2.3 Let E be a metric space and S a bounded subset of E. The (Kuratowski) measure of noncompactness ϒ(S) of S is defined by

ϒ(S) = inf{δ > 0 : S admits a finite cover by sets of diameter at most δ}.

Definition 2.4 An operator B : D → E is said to be completely continuous if it is continuous and compact. B is called a k-set contraction (k ≥ 0) if it is continuous, bounded, and ϒ(B(S)) ≤ kϒ(S) for any bounded set S ⊂ D, where ϒ(S) denotes the measure of noncompactness of S.

A k-set contraction is called a strict set contraction if k < 1. An operator B is said to be condensing if it is continuous, bounded, and ϒ(B(S)) < ϒ(S) for any bounded set S ⊂ D with ϒ(S) > 0.

Obviously, if B is a strict set contraction, then B is condensing, and if B is completely continuous, then B is a strict set contraction.

It is well known that y ∈ PC¹[J̄, E] ∩ C²[J′, E] is a solution of problem (1.1) if and only if y ∈ PC¹[J̄, E] is a solution of the following nonlinear integral equation:

\[
\begin{aligned}
y(t) ={}& \int_0^1 G(t,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t - t_k)\,\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr],
\end{aligned}
\]

where

\[
G(t,s) = \frac{1}{\rho}
\begin{cases}
(\gamma + \delta - \gamma t)(\beta + \alpha s), & 0 \le s \le t \le 1,\\
(\beta + \alpha t)(\gamma + \delta - \gamma s), & 0 \le t \le s \le 1,
\end{cases}
\]
(2.1)

where ρ = γβ + αγ + αδ > 0. In what follows, we write J_1 = [0, t_1], J_k = (t_{k−1}, t_k] (k = 2, …, m), J_{m+1} = (t_m, 1]. By making use of (2.1), we can prove that G(t,s) has the following properties.

Proposition 2.1 0 ≤ G(t,s) ≤ G(s,s) ≤ (1/ρ)(γ+δ)(α+β), ∀t, s ∈ [0,1].

Proposition 2.2 G(t,s) ≥ σG(s,s) ≥ 0, ∀t ∈ [a,b], s ∈ [0,1], where a ∈ (0, t_1], b ∈ [t_m, 1), and

\[
0 < \sigma = \min\Bigl\{ \frac{\beta+\alpha a}{\alpha+\beta},\ \frac{\delta+(1-b)\gamma}{\gamma+\delta} \Bigr\} < 1.
\]
(2.2)
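As a quick sanity check of Propositions 2.1 and 2.2, the following sketch implements G(t,s) from (2.1) and tests both bounds on a grid for one illustrative choice of α, β, γ, δ and of the subinterval [a,b]; all numerical values here are assumptions made for the example, not values prescribed by the paper.

```python
import numpy as np

alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0   # sample coefficients with rho > 0
rho = beta * gamma + alpha * gamma + alpha * delta

def G(t, s):
    """Green's function (2.1)."""
    if s <= t:
        return (gamma + delta - gamma * t) * (beta + alpha * s) / rho
    return (beta + alpha * t) * (gamma + delta - gamma * s) / rho

a, b = 0.25, 0.75                                 # sample subinterval [a, b] of (0, 1)
sigma = min((beta + alpha * a) / (alpha + beta),
            (delta + (1.0 - b) * gamma) / (gamma + delta))

grid = np.linspace(0.0, 1.0, 201)
# Proposition 2.1: 0 <= G(t,s) <= G(s,s) for all t, s in [0, 1].
ok1 = all(0.0 <= G(t, s) <= G(s, s) + 1e-12 for t in grid for s in grid)
# Proposition 2.2: G(t,s) >= sigma * G(s,s) for t in [a, b], s in [0, 1].
ok2 = all(G(t, s) >= sigma * G(s, s) - 1e-12
          for t in grid[(grid >= a) & (grid <= b)] for s in grid)
print("sigma =", sigma, "| Prop 2.1 on grid:", ok1, "| Prop 2.2 on grid:", ok2)
```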

Let PC[J,E] = {y : y is a map from J into E such that y(t) is continuous at t ≠ t_k, left continuous at t = t_k, and y(t_k^+) exists for k = 1, …, m} and PC[J,P] = {y ∈ PC[J,E] : y(t) ≥ θ}. It is easy to verify that PC[J,E] is a Banach space with the norm ‖y‖_PC = sup_{t∈J} ‖y(t)‖. Obviously, PC[J,P] is a cone in the Banach space PC[J,E].

Let PC¹[J,E] = {y : y is a map from J into E such that y′(t) exists and is continuous at t ≠ t_k, y(t) and y′(t) are left continuous at t = t_k, and y′(t_k^+), y′(t_k^−) exist for k = 1, …, m}, and PC¹[J,P] = {y ∈ PC¹[J,E] : y(t) ≥ θ, y′(t) ≥ θ}. It is easy to see that PC¹[J,E] is a Banach space with the norm ‖y‖₁ = max{‖y‖_PC, ‖y′‖_PC}. Evidently, ‖y‖₁ ≤ ‖y‖_PC + ‖y′‖_PC, and PC¹[J,P] is a cone in the Banach space PC¹[J,E]. For any y ∈ PC¹[J,E], the mean value theorem y(t_k) − y(t_k−h) ∈ h·co̅{y′(t) : t_k−h < t < t_k} (h > 0) shows that the left derivative y′₋(t_k) exists and y′₋(t_k) = lim_{h→0⁺} [y(t_k) − y(t_k−h)]/h.

Let K = {y ∈ PC[J̄, P] : y(t) ≥ σ‖y‖, t ∈ [a,b]}. For any 0 < r < R < +∞, let K_r = {y ∈ K : ‖y‖ < r}, ∂K_r = {y ∈ K : ‖y‖ = r}, and K̄_{r,R} = {y ∈ K : r ≤ ‖y‖ ≤ R}.

A map y ∈ PC¹[J,E] ∩ C²[J′,E] is called a nonnegative solution of problem (1.1) if y(t) ≥ θ and y′(t) ≥ θ for t ∈ J and y satisfies problem (1.1). A map y ∈ PC¹[J,E] ∩ C²[J′,E] is called a positive solution of problem (1.1) if y is a nonnegative solution of problem (1.1) and y(t) ≢ θ.

For convenience and simplicity in the following discussion, we denote

\[
f_{\nu} = \lim_{\|y_1\|+\|y_2\|+\|y_3\|+\|y_4\| \to \nu}\ \frac{\| f(t, y_1, y_2, y_3, y_4) \|}{\|y_1\|+\|y_2\|+\|y_3\|+\|y_4\|} \quad \text{uniformly for } t \in [a,b],
\]

where ν denotes 0 or ∞.

To establish the existence of multiple positive solutions in E of problem (1.1), let us list the following assumptions, which will stand throughout the paper:

(H1) f ∈ C(J×P⁴, P), h ∈ C(J, ℝ₊), and

\[
\bigl\| f(t, y_1, y_2, y_3, y_4) \bigr\| \le a(t) + \sum_{i=1}^{4} b_i(t) \|y_i\|,
\]
(2.3)

where a(t) and b_i(t) are Lebesgue integrable functions on J (i = 1, 2, 3, 4) satisfying
\[
0 < \int_0^1 G(s,s) h(s) \Bigl( a(s) + \sum_{i=1}^{4} b_i(s) \Bigr) ds < +\infty,
\]

(H2) I_k ∈ C(P, P), Ī_k ∈ C(P×P, P), and there exist positive constants c_k, c_k′, and c̄_k (k = 1, …, m) satisfying

\[
\sum_{k=1}^{m} \bigl( c_k + c_k' + \bar{c}_k \bigr) + \frac{1}{\rho}(\alpha+\beta) \sum_{k=1}^{m} \bigl[ (\gamma+\delta)\bigl(c_k' + \bar{c}_k\bigr) + \gamma c_k \bigr] < \frac{1}{4}
\]
(2.4)

such that

\[
\bigl\| I_k(y) \bigr\| \le c_k \|y\|, \qquad \bigl\| \bar{I}_k(y_1, y_2) \bigr\| \le c_k' \|y_1\| + \bar{c}_k \|y_2\|, \quad \forall y, y_1, y_2 \in P,
\]

(H3) for any bounded sets B_i ⊂ E (i = 1, 2, 3, 4), f(t, B_1, B_2, B_3, B_4), I_k(B_1), and Ī_k(B_1, B_2) are relatively compact sets,

(H4) f_0 > m*,

(H5) f_∞ > m*, where

\[
m^{*} = \max\biggl\{ \Bigl( \sigma \int_a^b G(s,s) h(s)\, ds \Bigr)^{-1},\ \Bigl( \int_a^b G_t(t,s) h(s)\, ds \Bigr)^{-1},\ \frac{1}{\rho}(\alpha+\beta)(\gamma+\delta) \int_0^1 h(s) a(s)\, ds \biggr\}.
\]
(2.5)
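The quantities σ in (2.2), m* in (2.5), and the smallness condition (2.4) are explicitly computable once the data are fixed. The sketch below evaluates them for one assumed data set (the boundary coefficients, weight h, function a, subinterval [a,b], and impulse constants are all illustrative choices, not values prescribed by the paper); since (2.5) leaves the argument t in G_t(t,s) free, the second entry is evaluated at a fixed interior point t₀ as a simplifying assumption.

```python
import numpy as np

alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0            # sample boundary coefficients
rho = beta * gamma + alpha * gamma + alpha * delta
a, b = 0.25, 0.75                                          # sample subinterval of (0, 1)
h = lambda s: 1.0 / np.sqrt(s)                             # sample weight, singular at s = 0
a_fun = lambda s: np.sin(s)                                # sample a(t) from (H1)

def G(t, s):
    """Green's function (2.1)."""
    if s <= t:
        return (gamma + delta - gamma * t) * (beta + alpha * s) / rho
    return (beta + alpha * t) * (gamma + delta - gamma * s) / rho

def Gt(t, s):
    """Partial derivative of G with respect to t (formula after Lemma 2.1)."""
    return -gamma * (beta + alpha * s) / rho if s <= t else alpha * (gamma + delta - gamma * s) / rho

sigma = min((beta + alpha * a) / (alpha + beta), (delta + (1.0 - b) * gamma) / (gamma + delta))

s_ab = np.linspace(a, b, 2001)                             # [a, b] grid (h is regular there)
s_01 = np.linspace(1e-6, 1.0, 20001)                       # (0, 1] grid avoiding the singular endpoint
t0 = 0.5                                                   # fixed interior point for the second entry of (2.5)

m_star = max(
    1.0 / (sigma * np.trapz([G(s, s) * h(s) for s in s_ab], s_ab)),
    1.0 / np.trapz([abs(Gt(t0, s)) * h(s) for s in s_ab], s_ab),
    (alpha + beta) * (gamma + delta) / rho * np.trapz([h(s) * a_fun(s) for s in s_01], s_01),
)

# Condition (2.4) for sample impulse constants (a single impulse point here).
c, cp, cbar = [0.01], [0.01], [0.01]
lhs = sum(ck + cpk + cbk for ck, cpk, cbk in zip(c, cp, cbar)) \
    + (alpha + beta) / rho * sum((gamma + delta) * (cpk + cbk) + gamma * ck
                                 for ck, cpk, cbk in zip(c, cp, cbar))
print("sigma =", sigma)
print("m* (2.5) ≈", m_star)
print("(2.4) left-hand side =", lhs, "< 1/4:", lhs < 0.25)
```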

We shall reduce problem (1.1) to an integral equation in E. To this end, we first consider the operator A : K → PC[J̄, E] defined by

\[
\begin{aligned}
(Ay)(t) ={}& \int_0^1 G(t,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t - t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr].
\end{aligned}
\]
(2.6)
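In the scalar case E = ℝ, the operator A in (2.6) can be evaluated pointwise by quadrature. The sketch below does this for a single impulse point; the nonlinearity f, the weight h, the kernels, the impulse functions, and all parameter values are placeholders chosen only to illustrate the structure of (2.6), with no claim that they satisfy (H1)-(H5).

```python
import numpy as np

alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0
rho = beta * gamma + alpha * gamma + alpha * delta
tks = [1.0 / 3.0]                                     # impulse points t_1 < ... < t_m

def G(t, s):
    """Green's function (2.1)."""
    if s <= t:
        return (gamma + delta - gamma * t) * (beta + alpha * s) / rho
    return (beta + alpha * t) * (gamma + delta - gamma * s) / rho

# Placeholder data (illustrative assumptions only).
h = lambda s: 1.0 / np.sqrt(s)
f = lambda s, y, yp, Ty, Sy: 1.0 + 0.1 * (y + yp + Ty + Sy)
I_k = lambda y: 0.01 * y                              # I_k
Ibar_k = lambda y, yp: 0.01 * (y + yp)                # \bar I_k
K = lambda t, s: np.exp(t - s)
H = lambda t, s: np.exp(2.0 * s)

def A(y, yp, t, n=2000, m=200):
    """Evaluate the right-hand side of (2.6) at the point t by the trapezoidal rule."""
    s = np.linspace(1e-6, 1.0, n)                     # keep away from the singularity of h at s = 0

    def Ty(si):                                       # (Ty)(si) = int_0^si K(si,u) y(u) du
        u = np.linspace(0.0, si, m)
        return np.trapz(K(si, u) * y(u), u)

    def Sy(si):                                       # (Sy)(si) = int_0^1 H(si,u) y(u) du
        u = np.linspace(0.0, 1.0, m)
        return np.trapz(H(si, u) * y(u), u)

    integral = np.trapz([G(t, si) * h(si) * f(si, y(si), yp(si), Ty(si), Sy(si)) for si in s], s)
    jumps = sum(I_k(y(tk)) - (t - tk) * Ibar_k(y(tk), yp(tk)) for tk in tks if tk < t)
    tail = (alpha * t + beta) / rho * sum(
        (gamma * (1.0 - tk) + delta) * Ibar_k(y(tk), yp(tk)) - gamma * I_k(y(tk)) for tk in tks)
    return integral + jumps + tail

y = lambda s: np.asarray(s) * (1.0 - np.asarray(s))   # sample y(t) = t(1 - t)
yp = lambda s: 1.0 - 2.0 * np.asarray(s)              # its derivative
print("(Ay)(0.5) ≈", A(y, yp, 0.5))
```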

Lemma 2.1 y ∈ PC¹[J̄, E] ∩ C²[J′, E] is a solution of problem (1.1) if and only if y ∈ PC¹[J̄, E] is a solution of the following impulsive integral equation:

\[
\begin{aligned}
y(t) ={}& \int_0^1 G(t,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t - t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr],
\end{aligned}
\]
(2.7)

i.e., y is a fixed point of the operator A defined by (2.6) in PC¹[J, E].

Proof First suppose that y ∈ PC¹[J̄, E] ∩ C²[J′, E] is a solution of problem (1.1). Integrating the equation in (1.1), it is easy to see that

\[
\begin{aligned}
y'(t) &= y'(0) - \int_0^t h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds + \sum_{0<t_k<t} \bigl[ y'(t_k^+) - y'(t_k^-) \bigr] \\
&= y'(0) - \int_0^t h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds - \sum_{0<t_k<t} \bar{I}_k\bigl(y(t_k), y'(t_k)\bigr), \quad t \in J.
\end{aligned}
\]
(2.8)

Integrating again, we get

\[
y(t) = y(0) + y'(0)\, t - \int_0^t (t-s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds + \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr], \quad t \in J.
\]
(2.9)

Letting t=1 in (2.8) and (2.9), we find that

\[
y'(1) = y'(0) - \int_0^1 h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds - \sum_{k=1}^{m} \bar{I}_k\bigl(y(t_k), y'(t_k)\bigr),
\]
(2.10)
\[
y(1) = y(0) + y'(0) - \int_0^1 (1-s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds + \sum_{k=1}^{m} \bigl[ I_k(y(t_k)) - (1-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr].
\]
(2.11)

Since

\[
y(0) = \frac{\beta}{\alpha}\, y'(0), \qquad
y(1) = -\frac{\delta}{\gamma}\, y'(1) = -\frac{\delta}{\gamma} \Bigl\{ y'(0) - \int_0^1 h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds - \sum_{k=1}^{m} \bar{I}_k\bigl(y(t_k), y'(t_k)\bigr) \Bigr\},
\]
(2.12)

we get

\[
y'(0) = \frac{\alpha}{\rho} \Bigl\{ \delta \int_0^1 h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds + \gamma \int_0^1 (1-s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds + \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k\bigl(y(t_k), y'(t_k)\bigr) - \gamma I_k\bigl(y(t_k)\bigr) \bigr] \Bigr\}.
\]
(2.13)

Substituting (2.12) and (2.13) into (2.9), we obtain

\[
\begin{aligned}
y(t) ={}& \frac{1}{\rho}(\alpha t + \beta) \Bigl\{ \int_0^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds \\
&\qquad - \sum_{k=1}^{m} \bigl[ \gamma I_k(y(t_k)) - \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \Bigr\} \\
&- \int_0^t (t-s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
={}& \frac{1}{\rho} \int_0^t \bigl[ (\alpha t + \beta)\bigl(\gamma(1-s)+\delta\bigr) - \rho(t-s) \bigr] h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds \\
&+ \frac{1}{\rho}(\alpha t + \beta) \int_t^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds \\
&+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr]
+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr] \\
={}& \int_0^1 G(t,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr].
\end{aligned}
\]

Conversely, suppose that y ∈ PC¹[J̄, E] is a solution of the integral equation (2.7). Evidently, Δy|_{t=t_k} = I_k(y(t_k)) (k = 1, …, m). For t ≠ t_k, direct differentiation of (2.7) gives

\[
\begin{aligned}
y'(t) ={}& \frac{1}{\rho} \Bigl\{ -\gamma \int_0^t (\alpha s + \beta) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \alpha \int_t^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds \Bigr\} \\
&- \sum_{0<t_k<t} \bar{I}_k\bigl(y(t_k), y'(t_k)\bigr)
- \frac{\alpha}{\rho} \sum_{k=1}^{m} \bigl[ \gamma I_k\bigl(y(t_k)\bigr) - \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k\bigl(y(t_k), y'(t_k)\bigr) \bigr],
\end{aligned}
\]

and y″(t) = −h(t)f(t, y(t), y′(t), (Ty)(t), (Sy)(t)). So y ∈ C²[J′, E] and Δy′|_{t=t_k} = −Ī_k(y(t_k), y′(t_k)) (k = 1, …, m). It is easy to verify that αy(0) − βy′(0) = θ and γy(1) + δy′(1) = θ. The proof is complete. □

Thanks to (2.1), we know that

\[
G_t(t,s) = \frac{\partial G(t,s)}{\partial t} = \frac{1}{\rho}
\begin{cases}
-\gamma(\beta + \alpha s), & 0 \le s < t \le 1,\\
\alpha(\gamma + \delta - \gamma s), & 0 \le t < s \le 1.
\end{cases}
\]

In the following, let w = sup_{t,s∈J, t≠s} |G_t(t,s)|. For B ⊂ PC¹[J,E], we write B′ = {y′ : y ∈ B} ⊂ PC[J,E], B(t) = {y(t) : y ∈ B} ⊂ E, and B′(t) = {y′(t) : y ∈ B} ⊂ E (t ∈ J).

Lemma 2.2 ([12])

Let D ⊂ PC¹[J,E] be a bounded set. Suppose that D′(t) is equicontinuous on each J_k (k = 1, …, m). Then

\[
\Upsilon_{PC^1}(D) = \max\Bigl\{ \sup_{t \in J} \Upsilon\bigl(D(t)\bigr),\ \sup_{t \in J} \Upsilon\bigl(D'(t)\bigr) \Bigr\},
\]

where ϒ and ϒ P C 1 denote the Kuratowski measures of noncompactness of bounded sets in E andP C 1 [J,E], respectively.

Lemma 2.3 ([15])

Let H ⊂ PC[J,E] be bounded and equicontinuous. Then ϒ(H(t)) is continuous on J and

\[
\Upsilon\Bigl( \Bigl\{ \int_J y(t)\, dt : y \in H \Bigr\} \Bigr) \le \int_J \Upsilon\bigl(H(t)\bigr)\, dt.
\]

Lemma 2.4 ([15])

H ⊂ PC¹[J,E] is relatively compact if and only if the functions y and y′, y ∈ H, are uniformly bounded and equicontinuous on each J_k (k = 1, …, m).

Lemma 2.5 ([15])

Let E be a Banach space and H ⊂ C[J,E]. If H is countable and there exists φ ∈ L[J, ℝ₊] such that ‖y(t)‖ ≤ φ(t), ∀t ∈ J, y ∈ H, then ϒ({y(t) : y ∈ H}) is integrable on J, and

\[
\Upsilon\Bigl( \Bigl\{ \int_J y(t)\, dt : y \in H \Bigr\} \Bigr) \le 2 \int_J \Upsilon\bigl(\{ y(t) : y \in H \}\bigr)\, dt.
\]

Lemma 2.6 A(K) ⊂ K.

Proof For any yK, from Proposition 2.1 and (2.6), we obtain

\[
\begin{aligned}
(Ay)(t) \le{}& \int_0^1 G(s,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr], \quad t \in \bar{J}.
\end{aligned}
\]

On the other hand, for any t[a,b], by (2.6) and Proposition 2.2, we know that

\[
\begin{aligned}
(Ay)(t) ={}& \int_0^1 G(t,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr] \\
\ge{}& \sigma \int_0^1 G(s,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&+ \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr]
\ \ge\ \sigma \|Ay\|, \quad t \in [a,b].
\end{aligned}
\]

Hence, A(K)K. □

Lemma 2.7 Suppose that (H1) and (H3) hold. Then A : K → K is completely continuous.

Proof First, we show that A : K → K is continuous. Assume that y_n, y_0 ∈ K with ‖y_n − y_0‖ → 0 and ‖y_n′ − y_0′‖ → 0 (n → ∞). Since f ∈ C(J̄×P×P×P×P, P), I_k ∈ C(P, P), and Ī_k ∈ C(P×P, P), we have

\[
\lim_{n \to \infty} \bigl\| f\bigl(s, y_n(s), y_n'(s), (Ty_n)(s), (Sy_n)(s)\bigr) - f\bigl(s, y_0(s), y_0'(s), (Ty_0)(s), (Sy_0)(s)\bigr) \bigr\| = 0, \qquad
\lim_{n \to \infty} \bigl\| I_k\bigl(y_n(t_k)\bigr) - I_k\bigl(y_0(t_k)\bigr) \bigr\| = 0,
\]
(2.14)

and

\[
\lim_{n \to \infty} \bigl\| \bar{I}_k\bigl(y_n(t_k), y_n'(t_k)\bigr) - \bar{I}_k\bigl(y_0(t_k), y_0'(t_k)\bigr) \bigr\| = 0, \quad k = 1, \dots, m.
\]
(2.15)

Thus, for any t ∈ J, from the Lebesgue dominated convergence theorem together with (2.14) and (2.15), we know that
\[
\lim_{n \to \infty} \| A y_n - A y_0 \|_1 = 0.
\]
Hence, A : K → K is continuous.

Let B ⊂ K be an arbitrary bounded set. Then there exists a positive constant R₀ such that ‖y‖₁ ≤ R₀ for all y ∈ B. Thus, for any y ∈ B and t ∈ (0,1), we know that

\[
\begin{aligned}
(Ay)'(t) ={}& \frac{1}{\rho} \Bigl\{ -\gamma \int_0^t (\alpha s + \beta) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \alpha \int_t^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds \Bigr\} \\
&- \sum_{0<t_k<t} \bar{I}_k\bigl(y(t_k), y'(t_k)\bigr)
+ \frac{\alpha}{\rho} \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k\bigl(y(t_k), y'(t_k)\bigr) - \gamma I_k\bigl(y(t_k)\bigr) \bigr].
\end{aligned}
\]

Therefore,

\[
\begin{aligned}
\bigl\| (Ay)'(t) \bigr\| \le{}& \frac{1}{\rho} \Bigl\{ \gamma \int_0^t (\alpha s + \beta) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) \|y\|_1 \Bigr) ds \\
&\qquad + \alpha \int_t^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) \|y\|_1 \Bigr) ds \Bigr\} \\
&+ \sum_{k=1}^{m} \bigl( c_k' + \bar{c}_k \bigr) \|y\|_1 + \frac{\alpha}{\rho} \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bigl(c_k' + \bar{c}_k\bigr) + \gamma c_k \bigr] \|y\|_1 \\
\le{}& \frac{1}{\rho} \Bigl\{ \gamma \int_0^t (\alpha s + \beta) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds \\
&\qquad + \alpha \int_t^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds \Bigr\} \\
&+ \sum_{k=1}^{m} \bigl( c_k' + \bar{c}_k \bigr) R_0 + \frac{\alpha}{\rho} \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bigl(c_k' + \bar{c}_k\bigr) + \gamma c_k \bigr] R_0,
\end{aligned}
\]
(2.16)

∀t ∈ J, t ≠ t_k, k = 1, …, m. Let
\[
\psi(t) = \frac{1}{\rho} \Bigl\{ \gamma \int_0^t (\alpha s + \beta) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds
+ \alpha \int_t^1 \bigl(\gamma(1-s)+\delta\bigr) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds \Bigr\},
\]
and let M₁ denote the sum of the impulse terms in (2.16), i.e., M₁ = Σ_{k=1}^m (c_k′ + c̄_k)R₀ + (α/ρ) Σ_{k=1}^m [(γ(1−t_k)+δ)(c_k′ + c̄_k) + γc_k]R₀. Integrating ψ(t) from 0 to 1 and exchanging the order of integration, we obtain

\[
\begin{aligned}
\int_0^1 \psi(t)\, dt ={}& \frac{\alpha}{\rho} \int_0^1 s\bigl(\gamma + \delta - \gamma s\bigr) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds \\
&+ \frac{\gamma}{\rho} \int_0^1 (1-s)(\beta + \alpha s) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds \\
\le{}& 2 \int_0^1 G(s,s) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds < +\infty.
\end{aligned}
\]
(2.17)

Thus, by (H1) and (2.17), we have ψ ∈ L¹(J). Hence, for any 0 ≤ t₁ ≤ t₂ ≤ 1 and all y ∈ B, from (2.16) we know that

\[
\bigl\| (Ay)(t_1) - (Ay)(t_2) \bigr\| = \Bigl\| \int_{t_1}^{t_2} (Ay)'(t)\, dt \Bigr\| \le \int_{t_1}^{t_2} \bigl( \psi(t) + M_1 \bigr)\, dt.
\]
(2.18)

From (2.17), (2.18), and the absolute continuity of the Lebesgue integral, we see that A(B) is equicontinuous.

On the other hand, for any yB and tJ, we know that

\[
\begin{aligned}
\bigl\| (Ay)(t) \bigr\| ={}& \Bigl\| \int_0^1 G(t,s) h(s) f\bigl(s, y(s), y'(s), (Ty)(s), (Sy)(s)\bigr)\, ds
+ \sum_{0<t_k<t} \bigl[ I_k(y(t_k)) - (t-t_k)\bar{I}_k(y(t_k), y'(t_k)) \bigr] \\
&\quad + \frac{1}{\rho}(\alpha t + \beta) \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bar{I}_k(y(t_k), y'(t_k)) - \gamma I_k(y(t_k)) \bigr] \Bigr\| \\
\le{}& \int_0^1 G(s,s) h(s) \Bigl( a(s) + \Bigl( \sum_{i=1}^{2} b_i(s) + K_0 b_3(s) + H_0 b_4(s) \Bigr) R_0 \Bigr) ds
+ \sum_{k=1}^{m} \bigl( c_k + c_k' + \bar{c}_k \bigr) R_0 \\
&+ \frac{\alpha+\beta}{\rho} \sum_{k=1}^{m} \bigl[ \bigl(\gamma(1-t_k)+\delta\bigr)\bigl(c_k' + \bar{c}_k\bigr) + \gamma c_k \bigr] R_0 < +\infty.
\end{aligned}
\]

Therefore, A(B) is uniformly bounded. By virtue of Lemma 2.3 and (H3), we know that
\[
\Upsilon\bigl( \{ (Ay)(t) : y \in B \} \bigr) = 0, \qquad \Upsilon\bigl( \{ (Ay)'(t) : y \in B \} \bigr) = 0, \quad t \in J.
\]
So, by Lemma 2.2, ϒ_{PC¹}(A(B)) = 0; therefore, A is compact. To sum up, the conclusion of Lemma 2.7 follows. □

The main tools of this paper are the following well-known fixed-point index results (see [2-4]).

Lemma 2.8 Let A : K → K be a completely continuous mapping such that Ay ≠ y for y ∈ ∂K_r. Then we have the following conclusions:

  1. (i)

If ‖Ay‖ ≥ ‖y‖ for y ∈ ∂K_r, then i(A, K_r, K) = 0.

  2. (ii)

If ‖Ay‖ ≤ ‖y‖ for y ∈ ∂K_r, then i(A, K_r, K) = 1.

3 Main results

In this section, we establish the existence of positive solutions for problem (1.1) by making use of Lemma 2.8.

Theorem 3.1 Suppose that (H1)-(H4) hold. Then problem (1.1) has at least one positive solution.

Proof By (H4), there exists ε > 0 such that f₀ > m* + ε, and hence there exists r > 0 such that, for any y_i ∈ P with 0 < Σᵢ₌₁⁴‖y_i‖ ≤ r and t ∈ [a,b], we have

\[
\bigl\| f(t, y_1, y_2, y_3, y_4) \bigr\| \ge (m^{*} + \varepsilon) \sum_{i=1}^{4} \|y_i\|.
\]
(3.1)

Set K_r = {y ∈ K : ‖y‖₁ < r}. Then, for any y ∈ ∂K_r ∩ K, by virtue of (3.1) we know that

So ‖Ay‖₁ ≥ r = ‖y‖₁. Therefore, by Lemma 2.8(i),

i(A, K_r ∩ K, K) = 0.
(3.2)

Let R > max{4m*, 2r}. Then K_R is a bounded open subset of K, and for any y ∈ ∂K_R ∩ K and t ∈ J we obtain

Hence, ‖Ay‖₁ < R = ‖y‖₁. Therefore, by Lemma 2.8(ii),

i(A, K_R ∩ K, K) = 1.
(3.3)

From (3.2) and (3.3), we get

\[
i\bigl(A, (K_R \cap K) \setminus (\bar{K}_r \cap K), K\bigr) = i(A, K_R \cap K, K) - i(A, K_r \cap K, K) = 1.
\]

Therefore, A has at least one fixed point in (K_R ∩ K)∖(K̄_r ∩ K). Consequently, problem (1.1) has at least one positive solution. □

Theorem 3.2 Suppose that (H1)-(H3) and (H5) are satisfied. Then problem (1.1) has at least one positive solution.

Proof By (H5), we can choose ε₁ > 0 such that f_∞ > m* + ε₁, and hence there exists R₁ > 0 such that, for any y_i ∈ P with Σᵢ₌₁⁴‖y_i‖ ≥ R₁ and t ∈ [a,b], we have

\[
\bigl\| f(t, y_1, y_2, y_3, y_4) \bigr\| \ge (m^{*} + \varepsilon_1) \sum_{i=1}^{4} \|y_i\|.
\]
(3.4)

Let R′ > R₁/σ. Then, for any y ∈ K with ‖y‖₁ = R′ and t ∈ [a,b], by virtue of (3.4) we know that

\[
\bigl\| f\bigl(t, y(t), y'(t), (Ty)(t), (Sy)(t)\bigr) \bigr\| \ge (m^{*} + \varepsilon_1) \bigl( \|y(t)\| + \|y'(t)\| + \|(Ty)(t)\| + \|(Sy)(t)\| \bigr).
\]
(3.5)

Set K_{R′} = {y ∈ K : ‖y‖₁ < R′}. Then, for any y ∈ ∂K_{R′} ∩ K, by virtue of (3.5) we know that

So ‖Ay‖₁ ≥ R′ = ‖y‖₁. Therefore, by Lemma 2.8(i),

i(A, K_{R′} ∩ K, K) = 0.
(3.6)

By the same method as in the selection of R in Theorem 3.1, we can obtain r′ < R′ satisfying

i(A, K_{r′} ∩ K, K) = 1.
(3.7)

According to (3.6) and (3.7), we get

\[
i\bigl(A, (K_{R'} \cap K) \setminus (\bar{K}_{r'} \cap K), K\bigr) = i(A, K_{R'} \cap K, K) - i(A, K_{r'} \cap K, K) = -1 \neq 0.
\]

Therefore, A has at least one fixed point in (K_{R′} ∩ K)∖(K̄_{r′} ∩ K). Consequently, problem (1.1) has at least one positive solution. The proof is complete. □

4 Related results and applications

In this section, we deal with a special case of problem (1.1). The method is similar to that of Section 3, so we omit the proofs of the main results of this section. The case f(t, y(t), y′(t), (Ty)(t), (Sy)(t)) = F(t, y(t), y′(t)) is treated in the following theorems. In this case, problem (1.1) reduces to the boundary value problem

\[
\begin{cases}
y''(t) + h(t) F\bigl(t, y(t), y'(t)\bigr) = \theta, & t \in J,\ t \neq t_k,\\
\Delta y|_{t=t_k} = I_k\bigl(y(t_k)\bigr), \quad \Delta y'|_{t=t_k} = -\bar{I}_k\bigl(y(t_k), y'(t_k)\bigr), & k = 1, \dots, m,\\
\alpha y(0) - \beta y'(0) = \theta, \qquad \gamma y(1) + \delta y'(1) = \theta,
\end{cases}
\]
(4.1)

where F ∈ C(J×P×P, P) and h ∈ C(J, ℝ₊).

Theorem 4.1 Assume that (H2) holds, and the following conditions are satisfied:

(C1) F ∈ C(J×P², P), h ∈ C(J, ℝ₊), and

\[
\bigl\| F(t, y_1, y_2) \bigr\| \le a(t) + \sum_{i=1}^{2} b_i(t) \|y_i\|,
\]

where a(t) and b_i(t) are Lebesgue integrable functions on J (i = 1, 2) satisfying
\[
0 < \int_0^1 G(s,s) h(s) \Bigl( a(s) + \sum_{i=1}^{2} b_i(s) \Bigr) ds < +\infty;
\]

(C2) for any bounded sets B_i ⊂ E (i = 1, 2), F(t, B_1, B_2), I_k(B_1), and Ī_k(B_1, B_2) are relatively compact sets;

(C3) lim_{‖y₁‖+‖y₂‖→0} ‖F(t, y₁, y₂)‖/(‖y₁‖+‖y₂‖) > m*, where m* is defined by (2.5). Then problem (4.1) has at least one positive solution.

Theorem 4.2 Assume that (H2), (C1), and (C2) hold, and that the following condition is satisfied:

(C4) lim_{‖y₁‖+‖y₂‖→∞} ‖F(t, y₁, y₂)‖/(‖y₁‖+‖y₂‖) > m*, where m* is defined by (2.5). Then problem (4.1) has at least one positive solution.

To illustrate how our main results can be used in practice, we present an example.

Example 4.1 Consider the following boundary value problem for a scalar second-order impulsive integro-differential equation:

\[
\begin{cases}
y''(t) = -\dfrac{\pi \sin t \Bigl( \ln(3+t^2) + t^3 y(t) + t \int_0^t e^{t-s} y(s)\, ds + \int_0^1 e^{2s} y(s)\, ds \Bigr)}{720\, t \Bigl( 1 + y(t) + \int_0^t e^{t-s} y(s)\, ds + \int_0^1 e^{2s} y(s)\, ds \Bigr)^2}, & t \in J,\ t \neq t_1,\\[2ex]
\Delta y|_{t_1 = \frac{1}{3}} = \dfrac{1}{120}\, y^3\bigl(\tfrac{1}{3}\bigr), \qquad
\Delta y'|_{t_1 = \frac{1}{3}} = -\dfrac{1}{120} \cdot \dfrac{ y^3\bigl(\tfrac{1}{3}\bigr) + y'\bigl(\tfrac{1}{3}\bigr)}{\bigl( y\bigl(\tfrac{1}{3}\bigr) + y'\bigl(\tfrac{1}{3}\bigr) \bigr)^2},\\[2ex]
\alpha y(0) - \beta y'(0) = 0, \qquad \gamma y(1) + \delta y'(1) = 0.
\end{cases}
\]
(4.2)
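As a rough numerical companion to this example (using the readings h(t) = π/(720t) and a(t) = sin t adopted below, which are reconstructions), the following sketch checks that the singular weight appearing in (H1) is integrable despite the singularity of h at t = 0, and evaluates condition (2.4) with hypothetical impulse constants c₁ = c₁′ = c̄₁ = 1/120 suggested by the factor 1/120 in the impulse terms; the boundary coefficients are sample values, since (4.2) leaves α, β, γ, δ generic.

```python
import numpy as np

alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0    # sample boundary coefficients (not fixed by (4.2))
rho = beta * gamma + alpha * gamma + alpha * delta

def G(t, s):
    """Green's function (2.1)."""
    if s <= t:
        return (gamma + delta - gamma * t) * (beta + alpha * s) / rho
    return (beta + alpha * t) * (gamma + delta - gamma * s) / rho

h = lambda s: np.pi / (720.0 * s)                 # reading of h(t) used for this example
a = lambda s: np.sin(s)                           # reading of a(t) used for this example

# Integrability check for (H1): near s = 0 the integrand behaves like a constant,
# because sin(s)/s -> 1, so the singularity of h at 0 is harmless.
s = np.linspace(1e-8, 1.0, 20001)
val = np.trapz([G(x, x) * h(x) * a(x) for x in s], s)
print("int_0^1 G(s,s) h(s) a(s) ds ≈", val)

# Condition (2.4) with hypothetical impulse constants c_1 = c_1' = cbar_1 = 1/120.
c1 = cp1 = cb1 = 1.0 / 120.0
lhs = (c1 + cp1 + cb1) + (alpha + beta) / rho * ((gamma + delta) * (cp1 + cb1) + gamma * c1)
print("(2.4) left-hand side =", lhs, "< 1/4:", lhs < 0.25)
```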

Conclusion: Problem (4.2) has at least one positive solution y*(t).

For t ∈ J, t ≠ t₁, let h(t) = π/(720t) and

\[
F\bigl(t, y, y', (Ty)(t), (Sy)(t)\bigr) = \frac{\sin t \Bigl( \ln(3+t^2) + t^3 y(t) + t \int_0^t e^{t-s} y(s)\, ds + \int_0^1 e^{2s} y(s)\, ds \Bigr)}{\Bigl( 1 + y(t) + \int_0^t e^{t-s} y(s)\, ds + \int_0^1 e^{2s} y(s)\, ds \Bigr)^2}.
\]

Choose a(t)=sin t . By simple computation, we know that

Then conditions (H1)-(H4) are satisfied. Therefore, by Theorem 3.1, problem (4.2) has at least one positive solution.

5 Conclusion

Remark 5.1 In [12], by requiring that f satisfy certain measure-of-noncompactness conditions and that P be a normal cone, Guo established the existence of positive solutions for an initial value problem. In this paper we impose weaker conditions on f and still obtain a positive solution of problem (1.1).

Remark 5.2 For the special case in which problem (1.1) has no singularities and J = [0, a], our results still hold. Obviously, our theorems generalize and improve the results in [9-12].