1 Introduction

The theory of boundary value problems is developing rapidly. Many methods are used to study such problems, including fixed point theorems, the shooting method, and the iterative method with upper and lower solutions; we refer the readers to the papers [1–14]. Among them, the fixed-point principle in a cone has become an important tool in the study of the existence and multiplicity of positive solutions, and many papers using this method have been published in recent years (see [15–20]).

Recently, scientists have noticed that the boundary conditions in many areas of applied mathematics and physics reduce to integral boundary conditions. For instance, models in chemical engineering, heat conduction, thermo-elasticity, plasma physics, and underground water flow can be reduced to nonlocal problems with integral boundary conditions. For more information about this subject, we refer the readers to Gallardo [21], Karakostas and Tsamatos [22], Lomtatidze and Malaguti [23], Corduneanu [24], and Agarwal and O'Regan [25].

In [23], Lomtatidze and Malaguti considered the second-order nonlinear singular differential equations

{ u″ = f(t, u, u′),  t ∈ [a,b],
  u(a+) = 0,  u(b−) = ∫_a^b u(s) dμ(s),

where f: [a,b] × R² → R satisfies the local Carathéodory conditions and μ: [a,b] → R is a function of bounded variation. Their criteria apply to the case where the function f has nonintegrable singularities in the first argument at the points a and b.

In [22], Karakostas and Tsamatos considered multiple positive solutions of some Fredholm integral equations arising from the nonlocal boundary-value problems

{ (p(t)x′(t))′ + μ(t)f(x(t)) = 0,  t ∈ [0,1],
  x(0) = ∫_0^1 x(s) dg(s),  x(1) = ∫_0^1 x(s) dh(s),

where the kernel K(t,s) satisfies a continuity assumption in the L¹-sense and is monotone and concave. By applying the Krasnosel'skii fixed point theorem on a suitable cone, they showed that the above equation has at least one positive solution.

Motivated by the above works, this paper studies the following system:

{ (φ(x″(t)))′ + f(t, x(t)) = θ,  t ∈ J,
  x(0) = θ,  x″(0) = θ,  x(1) = ∫_0^1 g(t)x(t) dt,
(1)

where J = [0,1], f ∈ C([0,1] × P, P), θ is the zero element of E, E is a real Banach space with the norm ‖x‖, and g ∈ L¹[0,1] is nonnegative; φ: R → R is an increasing and positive homomorphism (see Definition 1.2) with φ(0) = 0.

According to Definition 1.2, we know that many problems, such as problems with the p-Laplacian operator and third-order boundary-value problems, are special cases of (1). To the best of our knowledge, there have been few results on positive solutions for odd-order boundary-value problems (or p-Laplacian problems) with integral boundary conditions in Banach spaces (see [26–31]).

The plan of this paper is as follows. We introduce some notations and lemmas in the rest of this section. In Section 2, we provide some necessary backgrounds. In particular, we state some properties of the Green’s function associated with BVP (1). In Section 3, we establish the main results of the paper. Finally, one example is also included to illustrate the main results.

Definition 1.1 Let (E, ‖·‖) be a real Banach space. A nonempty, closed, and convex set P ⊂ E is said to be a cone provided that the following conditions are satisfied:

  (a) if y ∈ P and λ ≥ 0, then λy ∈ P;

  (b) if y ∈ P and −y ∈ P, then y = θ.

If P ⊂ E is a cone, we denote by ≤ the order induced by P on E, that is, x ≤ y if and only if y − x ∈ P. P is said to be normal if there exists a positive constant N such that θ ≤ x ≤ y implies ‖x‖ ≤ N‖y‖; N is called the normal constant of P.

Definition 1.2 A projection φ: R → R is called an increasing and positive homomorphism if the following conditions are satisfied:

  (1) if x ≤ y, then φ(x) ≤ φ(y) for all x, y ∈ R;

  (2) φ is a continuous bijection and its inverse is also continuous;

  (3) φ(xy) = φ(x)φ(y) for all x, y ∈ R⁺ = [0, +∞).

In the above definition, condition (3) can be replaced by the following stronger condition:

  (4) φ(xy) = φ(x)φ(y) for all x, y ∈ R, where R = (−∞, +∞).
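As a concrete illustration (our own sketch, not part of the original text): the p-Laplacian map Φ_p(x) = |x|^{p−2}x, which reappears in Remark 3.2 below, satisfies conditions (1)–(4). A quick numerical spot-check, with p = 3 chosen arbitrarily:

```python
# Spot-check that phi_p(x) = |x|^(p-2) * x is increasing and satisfies the
# multiplicative condition phi_p(x*y) = phi_p(x)*phi_p(y) for all real x, y.

def phi_p(x, p=3):
    return abs(x) ** (p - 2) * x

samples = [-2.0, -0.5, 0.0, 0.7, 1.5, 3.0]

# Conditions (1)-(2): strictly increasing on a sample grid.
values = [phi_p(x) for x in samples]
assert all(a < b for a, b in zip(values, values[1:]))

# Condition (4) (hence (3)): phi_p(x*y) = phi_p(x) * phi_p(y).
for x in samples:
    for y in samples:
        assert abs(phi_p(x * y) - phi_p(x) * phi_p(y)) < 1e-9
```

Continuity of Φ_p and of its inverse Φ_q (condition (2)) follows from the explicit formulas; the check above only probes the algebraic conditions on a grid.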

Definition 1.3 Let E be a real Banach space and P ⊂ E be a cone in E. The set P* = {ψ ∈ E* | ψ(x) ≥ 0, ∀x ∈ P} is called the dual cone of the cone P.

Definition 1.4 Let S be a bounded set in a real Banach space E, and let α(S) = inf{δ > 0 : S can be expressed as the union of a finite number of sets, each of diameter not exceeding δ, i.e., S = ⋃_{i=1}^m S_i with diam(S_i) ≤ δ, i = 1, 2, …, m}. Clearly, 0 ≤ α(S) < ∞. α(S) is called the Kuratowski measure of noncompactness.

Definition 1.5 Let E be an ordered Banach space and D be a bounded set of E. The operator A: D → E is said to be a k-set contraction if A is continuous and bounded and there is a constant k ≥ 0 such that α(A(S)) ≤ kα(S) for any bounded S ⊂ D; a k-set contraction with k < 1 is called a strict set contraction.

More facts about the properties of the Banach space E can be found in [15–18]. For a bounded set C in the Banach space E, we denote by α(C) the Kuratowski measure of noncompactness. In the following, we denote by α(·) and α_C(·) the Kuratowski measure of noncompactness of a bounded subset of E and of C(J,E), respectively. And we set

h^β = limsup_{‖x‖→β} max_{t∈J} ‖h(t,x)‖/‖x‖,  (ψh)_β = liminf_{‖x‖→β} min_{t∈J} ψ(h(t,x))/‖x‖,

where x ∈ P, β denotes 0 or ∞, ψ ∈ P*, ‖ψ‖ = 1, and h(t,x) = φ^{-1}(∫_0^t f(s, x(s)) ds).

Lemma 1.1 [1]

If H ⊂ C(J,E) is bounded and equicontinuous, then α_C(H) = α(H(J)) = max_{t∈J} α(H(t)), where H(J) = {x(t) : t ∈ J, x ∈ H} and H(t) = {x(t) : x ∈ H}.

Lemma 1.2 [1]

Let D be a bounded set of E. If f is uniformly continuous and bounded from J × D into E, then

α(f(J,S)) = max_{t∈J} α(f(t,S)) ≤ η_l α(S) for S ⊂ D,
(2)

where η_l is a nonnegative constant.

Lemma 1.3 Let K be a cone of the Banach space E, and let K_r = {x ∈ K : ‖x‖ ≤ r} and K_{r,r′} = {x ∈ K : r ≤ ‖x‖ ≤ r′} with r′ > r > 0. Suppose that A: K_{r′} → K is a strict set contraction such that one of the following two conditions is satisfied:

  (a) ‖Ax‖ ≥ ‖x‖ for x ∈ K, ‖x‖ = r, and ‖Ax‖ ≤ ‖x‖ for x ∈ K, ‖x‖ = r′;

  (b) ‖Ax‖ ≤ ‖x‖ for x ∈ K, ‖x‖ = r, and ‖Ax‖ ≥ ‖x‖ for x ∈ K, ‖x‖ = r′.

Then A has a fixed point x ∈ K_{r,r′}.

2 Preliminaries

To establish the existence of positive solutions of (1) in C³(J,P), let us list the following assumptions:

(H0) f ∈ C(J × P, P), and for any l > 0, f is uniformly continuous on J × P_l. Further suppose that g ∈ L¹[0,1] is nonnegative, σ = ∫_0^1 s g(s) ds with σ ∈ [0,1), γ = (1 + ∫_0^1 (1−s) g(s) ds)/(1−σ), and h(s, x(s)) = φ^{-1}(∫_0^s f(t, x(t)) dt), where P_l = {x ∈ P : ‖x‖ ≤ l}.

(H1) There exists a nonnegative constant η_l with γη_l < 1 such that

α(h(t,S)) ≤ η_l α(S),  t ∈ J, S ⊂ P_l.
(3)

Evidently, (C(J,E), ‖·‖_C) is a Banach space with the norm ‖x‖_C = max_{t∈J} ‖x(t)‖.

In the following, we construct a cone K = {x ∈ Q : x(t) ≥ δx(v), t ∈ J_δ, v ∈ [0,1]}, where Q = {x ∈ C³(J,P) : x(t) ≥ θ, t ∈ J}, and let B_l = {x ∈ C(J,P) : ‖x‖_C ≤ l}, l > 0. It is easy to see that K is a cone of C³(J,E) and that K_{r,r′} = {x ∈ K : r ≤ ‖x‖ ≤ r′} ⊂ K and K ⊂ Q.

In our main results, we will make use of the following lemmas.

Lemma 2.1 Assume that (H0) is satisfied. Then x(t) is a solution of problem (1) if and only if x ∈ K is a solution of the integral equation

x(t) = ∫_0^1 H(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.
(4)

Here, we define an operator A by

(Ax)(t) = ∫_0^1 H(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds,
(5)

where

H(t,s) = G(t,s) + (t/(1−σ)) ∫_0^1 g(τ) G(τ,s) dτ,
(6)

G(t,s) = { t(1−s), 0 ≤ t ≤ s ≤ 1; s(1−t), 0 ≤ s ≤ t ≤ 1.
(7)

That is, x is a fixed point of the operator A in K.

Lemma 2.2 If condition (H0) is satisfied, then the operator A defined by (5) is a continuous operator.

Proof This can be verified easily from the definition of (Ax)(t); we omit it here. □

Lemma 2.3 For t, s ∈ [0,1], 0 ≤ G(t,s) ≤ 1/4.

Proof It is obvious that G(t,s) ≥ 0 for any s, t ∈ [0,1]. When 0 ≤ s ≤ t ≤ 1, G(t,s) = s(1−t) ≤ t(1−t) = −(t − 1/2)² + 1/4, and the case 0 ≤ t ≤ s ≤ 1 is symmetric. Thus max_{0≤t,s≤1} G(t,s) = 1/4, attained at t = s = 1/2, and therefore G(t,s) ≤ 1/4. The proof is complete. □
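With G given piecewise by t(1−s) for t ≤ s and s(1−t) for s ≤ t, as in (7), the bound of Lemma 2.3 can be spot-checked numerically (a sanity sketch, not a proof):

```python
# Numerical spot-check of Lemma 2.3: 0 <= G(t,s) <= 1/4 on [0,1]^2,
# with the maximum 1/4 attained at t = s = 1/2.

def G(t, s):
    return t * (1 - s) if t <= s else s * (1 - t)

grid = [i / 100 for i in range(101)]
vals = [G(t, s) for t in grid for s in grid]
assert min(vals) >= 0.0
assert max(vals) <= 0.25 + 1e-12
assert abs(G(0.5, 0.5) - 0.25) < 1e-12
```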

Lemma 2.4 Assume that (H0) holds. Choose δ ∈ (0, 1/2) and let J_δ = [δ, 1−δ]. Then, for all v, s ∈ [0,1],

H(t,s) ≤ γ for t ∈ [0,1],  and  H(t,s) ≥ δH(v,s) for t ∈ J_δ,

where γ is as defined in (H0).

Proof First, we prove that G(t,s) ≥ δG(v,s) for t ∈ J_δ. Obviously, for t ∈ J_δ and v, s ∈ {0,1}, G(t,s) ≥ δG(v,s) holds. For v, s ∈ (0,1), we have the following four cases.

Case I: max{v,t} ≤ s. Then

G(t,s)/G(v,s) = t(1−s)/(v(1−s)) = t/v ≥ δ.

Case II: s ≤ min{v,t}. Then

G(t,s)/G(v,s) = s(1−t)/(s(1−v)) = (1−t)/(1−v) ≥ δ/1 = δ.

Case III: t ≤ s ≤ v. Then

G(t,s)/G(v,s) = t(1−s)/(s(1−v)) ≥ δ(1−s)/(1−s) = δ.

Case IV: v ≤ s ≤ t. Then

G(t,s)/G(v,s) = s(1−t)/(v(1−s)) ≥ (1−t)/(1−s) ≥ 1−t ≥ δ.

To sum up, we get that G(t,s) ≥ δG(v,s).

For t ∈ [0,1],

H(t,s) = G(t,s) + (t/(1−σ)) ∫_0^1 g(τ)G(τ,s) dτ
  ≤ s(1−s) + (t/(1−σ)) ∫_0^1 g(τ)G(τ,s) dτ
  ≤ (1−s) + (1/(1−σ)) ∫_0^1 g(τ)G(τ,s) dτ
  ≤ (1−s) + (1/(1−σ)) ∫_0^1 (1−s) g(τ) dτ
  = (1−s)(1 + (1/(1−σ)) ∫_0^1 g(s) ds)
  = (1−s)(1−σ + ∫_0^1 g(s) ds)/(1−σ)
  = (1−s)(1 − ∫_0^1 s g(s) ds + ∫_0^1 g(s) ds)/(1−σ)
  = (1−s)(1 + ∫_0^1 (1−s) g(s) ds)/(1−σ)
  ≤ (1 + ∫_0^1 (1−s) g(s) ds)/(1−σ) = γ,  t ∈ [0,1].

For t ∈ J_δ, we have

H(t,s) = G(t,s) + (t/(1−σ)) ∫_0^1 g(τ)G(τ,s) dτ
  ≥ δG(v,s) + (δ/(1−σ)) ∫_0^1 g(τ)G(τ,s) dτ
  ≥ δG(v,s) + (δv/(1−σ)) ∫_0^1 g(τ)G(τ,s) dτ
  = δH(v,s).

So, we complete the proof. □
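For a concrete sanity check of Lemma 2.4, take the sample weight g ≡ 1 (so σ = 1/2, γ = 3, and ∫_0^1 g(τ)G(τ,s) dτ = s(1−s)/2) and δ = 1/4; the two inequalities can then be tested on a grid. This sketch assumes the form of H given in (6):

```python
# Grid check of Lemma 2.4 for the sample weight g(s) = 1 and delta = 1/4:
# H(t,s) <= gamma = 3 for all t, and H(t,s) >= delta * H(v,s) for t in [1/4, 3/4].

def G(t, s):
    return t * (1 - s) if t <= s else s * (1 - t)

def H(t, s, sigma=0.5):
    # For g = 1: integral_0^1 G(tau, s) dtau = s*(1 - s)/2.
    return G(t, s) + t / (1 - sigma) * s * (1 - s) / 2

gamma, delta = 3.0, 0.25
grid = [i / 50 for i in range(51)]
J_delta = [t for t in grid if delta <= t <= 1 - delta]

assert all(H(t, s) <= gamma + 1e-12 for t in grid for s in grid)
assert all(H(t, s) >= delta * H(v, s) - 1e-12
           for t in J_delta for v in grid for s in grid)
```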

Proof of Lemma 2.1 Necessity. First, suppose that x is a solution of equation (1). Integrating (1) on [0,t], we have

φ(x″(t)) − φ(x″(0)) = −∫_0^t f(τ, x(τ)) dτ.

By the boundary value condition x″(0) = θ, together with φ(0) = 0, we have

x″(t) = −φ^{-1}(∫_0^t f(τ, x(τ)) dτ).
(8)

Integrating (8) on [0,t], we get

x′(t) = x′(0) − ∫_0^t φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.
(9)

Integrating (9) from 0 to t, we have

x(t) = x(0) + x′(0)t − ∫_0^t (t−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.

By the boundary value condition x(0) = θ, we get

x(t) = x′(0)t − ∫_0^t (t−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.
(10)

Letting t = 1, we find that

x(1) = x′(0) − ∫_0^1 (1−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds,

thus

x′(0) = x(1) + ∫_0^1 (1−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.
(11)

Substituting x(1) = ∫_0^1 g(t)x(t) dt into (11), we obtain

x′(0) = ∫_0^1 g(s)x(s) ds + ∫_0^1 (1−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.
(12)

Substituting (12) into (10), we get

x(t) = t(∫_0^1 g(s)x(s) ds + ∫_0^1 (1−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds) − ∫_0^t (t−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds
  = ∫_0^1 G(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds + t ∫_0^1 g(s)x(s) ds.

Thus, multiplying by g(t) and integrating over [0,1],

∫_0^1 g(t)x(t) dt = ∫_0^1 g(t) ∫_0^1 G(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds dt + σ ∫_0^1 g(s)x(s) ds.

Then we obtain that

∫_0^1 g(s)x(s) ds = (1/(1−σ)) ∫_0^1 g(t) ∫_0^1 G(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds dt.

Therefore,

x(t) = ∫_0^1 G(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds + (t/(1−σ)) ∫_0^1 g(s) ∫_0^1 G(s,τ) φ^{-1}(∫_0^τ f(η, x(η)) dη) dτ ds
  = ∫_0^1 G(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds + (t/(1−σ)) ∫_0^1 (∫_0^1 g(τ)G(τ,s) dτ) φ^{-1}(∫_0^s f(η, x(η)) dη) ds
  = ∫_0^1 H(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds.

By Lemma 2.3, x(t) ≥ θ holds, that is, x ∈ Q. Together with Lemma 2.4, we have

x(t) = ∫_0^1 H(t,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds ≥ δ ∫_0^1 H(v,s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds = δx(v),

which implies x ∈ K. To sum up, x is a solution of the integral equation (4) in K.

Sufficiency. Let x be as in (4). Differentiating (4), we obtain

x′(t) = ∫_0^1 (∂G(t,s)/∂t) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds + (1/(1−σ)) ∫_0^1 ∫_0^1 g(τ)G(τ,s) φ^{-1}(∫_0^s f(η, x(η)) dη) dτ ds
  = −∫_0^t s φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds + ∫_t^1 (1−s) φ^{-1}(∫_0^s f(τ, x(τ)) dτ) ds + (1/(1−σ)) ∫_0^1 ∫_0^1 g(τ)G(τ,s) φ^{-1}(∫_0^s f(η, x(η)) dη) dτ ds.

Moreover, differentiating twice more gives

x″(t) = −φ^{-1}(∫_0^t f(τ, x(τ)) dτ),  (φ(x″(t)))′ = −f(t, x(t)),

and x(0) = θ, x″(0) = θ, x(1) = ∫_0^1 g(t)x(t) dt hold, which implies that x(t) is a solution of (1). The proof is complete. □
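As an independent consistency check of Lemma 2.1 (our own sketch for the scalar special case φ = I, g ≡ 0 and f ≡ 1, so that σ = 0 and H = G, with the sign conventions used in the necessity proof), one can verify symbolically that the integral formula (4) produces a function satisfying x‴(t) + f = 0 with x(0) = x″(0) = 0 and x(1) = ∫_0^1 g x dt = 0:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Special case: phi = identity, g = 0 (so sigma = 0 and H = G), f = 1.
# Then phi^{-1}( int_0^s f dtau ) = s, and (4) reads
#   x(t) = int_0^1 G(t,s) * s ds
#        = int_0^t s(1-t)*s ds + int_t^1 t(1-s)*s ds.
x = sp.integrate(s * (1 - t) * s, (s, 0, t)) + sp.integrate(t * (1 - s) * s, (s, t, 1))
x = sp.simplify(x)  # equals t/6 - t**3/6

# x''' + f = 0 with f = 1:
assert sp.simplify(sp.diff(x, t, 3) + 1) == 0
# Boundary conditions x(0) = 0, x''(0) = 0, and x(1) = 0 (since g = 0):
assert x.subs(t, 0) == 0
assert sp.diff(x, t, 2).subs(t, 0) == 0
assert sp.simplify(x.subs(t, 1)) == 0
```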

Lemma 2.5 Suppose that (H0) and (H1) hold. Then, for each l > 0, A is a strict set contraction on Q ∩ B_l, i.e., there exists a constant 0 ≤ k_l < 1 such that α_C(A(S)) ≤ k_l α_C(S) for any S ⊂ Q ∩ B_l.

Proof By Lemmas 2.1 and 2.3, we know that A: Q → Q is continuous and bounded. Now, let S be a bounded set in Q. Then, by (H1), we get

α(A(S)) ≤ α(co¯{H(t,s)h(s,x(s)) : s ∈ [0,t], t ∈ J, x ∈ S}) ≤ γη_l α(S(J)),

where co¯ denotes the closed convex hull.

Thus,

α_C(A(S)) ≤ γη_l α(S(J)).

Since α(S(J)) ≤ α_C(S), we get

α_C(A(S)) ≤ γη_l α(S(J)) ≤ γη_l α_C(S) = k_l α_C(S),  S ⊂ Q ∩ B_l,

where k_l = γη_l and 0 ≤ k_l < 1. The proof is complete. □

3 Main results

In this section, we impose growth conditions on f which allow us to apply Lemma 1.3 to establish the existence of positive solutions of (1). At the beginning, we introduce the notation

Λ = δ ∫_δ^{1−δ} H(1/2, s) ds.
(13)

Theorem 3.1 Suppose (H0) and (H1) hold and P is normal. If γh^0 < 1 < Λ(ψh)_∞, then problem (1) has at least one positive solution.

Proof Let A be defined by (5). Since γh^0 < 1, there exists r̄₁ > 0 such that ‖h(t,x)‖ ≤ (h^0 + ε₁)‖x‖ for t ∈ J, x ∈ P, ‖x‖ ≤ r̄₁, where ε₁ > 0 satisfies γ(h^0 + ε₁) ≤ 1.

Let r₁ ∈ (0, r̄₁). Then, for t ∈ J, x ∈ K, ‖x‖_C = r₁, by Lemma 2.4 we get

‖(Ax)(t)‖ ≤ γ ∫_0^1 ‖h(s, x(s))‖ ds ≤ γ ∫_0^1 (h^0 + ε₁)‖x(s)‖ ds ≤ γ(h^0 + ε₁) ∫_0^1 ‖x‖_C ds ≤ ‖x‖_C,

i.e., ‖Ax‖_C ≤ ‖x‖_C holds for x ∈ K with ‖x‖_C = r₁.

Next, turning to 1 < Λ(ψh)_∞, there exists r̄₂ > 0 such that ψ(h(t,x)) ≥ ((ψh)_∞ − ε₂)‖x‖ for t ∈ J, x ∈ P, ‖x‖ ≥ r̄₂, where ε₂ > 0 satisfies Λ((ψh)_∞ − ε₂) ≥ 1.

Let r₂ = max{3r₁, r̄₂/δ}. Then, for t ∈ J_δ, x ∈ K, ‖x‖_C = r₂, we have ‖x(t)‖ ≥ δ‖x‖_C ≥ r̄₂ and ψ((Ax)(1/2)) ≤ ‖ψ‖‖(Ax)(1/2)‖ = ‖(Ax)(1/2)‖, thus

‖(Ax)(1/2)‖ ≥ ψ((Ax)(1/2)) = ψ(∫_0^1 H(1/2, s) h(s, x(s)) ds) = ∫_0^1 H(1/2, s) ψ(h(s, x(s))) ds
  ≥ ∫_δ^{1−δ} H(1/2, s) ψ(h(s, x(s))) ds ≥ ((ψh)_∞ − ε₂) ∫_δ^{1−δ} H(1/2, s) ‖x(s)‖ ds
  ≥ ((ψh)_∞ − ε₂) δ‖x‖_C ∫_δ^{1−δ} H(1/2, s) ds = Λ((ψh)_∞ − ε₂)‖x‖_C ≥ ‖x‖_C,

i.e., ‖Ax‖_C ≥ ‖x‖_C holds for x ∈ K with ‖x‖_C = r₂.

Lemma 1.3 yields that A has at least one fixed point x* ∈ K̄_{r₁,r₂} with r₁ ≤ ‖x*‖ ≤ r₂ and ‖x*(t)‖ ≥ δ‖x*‖ > 0, t ∈ J_δ. Thus, BVP (1) has at least one positive solution x*. The proof is complete. □

Remark 3.1 If φ = I, where I denotes the identity operator, then the differential equation reduces to the general differential equation

{ x‴(t) + f(t, x(t)) = θ,  t ∈ J,
  x(0) = θ,  x″(0) = θ,  x(1) = ∫_0^1 g(t)x(t) dt.
(14)

This system has been studied in Ref. [1]. By Theorem 3.1, we can easily obtain the main result (Theorem 3.1 in Ref. [1]).

Corollary 3.1 Assume that (H0) holds and P is normal. If (1/2)γf^0 < 1 < Λ(ψf)_∞, then problem (14) has at least one positive solution.

Remark 3.2 If φ(x) = Φ_p(x) = |x|^{p−2}x for some p > 1, where Φ_p^{-1} = Φ_q with 1/p + 1/q = 1, then (1) can be written as a BVP with a p-Laplace operator,

{ (Φ_p(x″(t)))′ + f(t, x(t)) = θ,  t ∈ J,
  x(0) = θ,  x″(0) = θ,  x(1) = ∫_0^1 g(t)x(t) dt.
(15)

First, we restate (H0) as (H0′).

(H0′) f ∈ C(J × P, P), and for any l > 0, f is uniformly continuous on J × P_l. Further suppose that g ∈ L¹[0,1] is nonnegative, σ = ∫_0^1 s g(s) ds with σ ∈ [0,1), γ = (1 + ∫_0^1 (1−s) g(s) ds)/(1−σ), and h₁(s, x(s)) = Φ_q(∫_0^s f(t, x(t)) dt), where P_l = {x ∈ P : ‖x‖ ≤ l}.

Then we can get the following similar conclusion.

Corollary 3.2 Suppose (H0′) and (H1) hold and P is normal. If γh₁^0 < 1 < Λ(ψh₁)_∞, then problem (15) has at least one positive solution.

4 Example

Next, we will give an example to illustrate our results.

Example 4.1

Consider the finite system of scalar third-order differential equations

{ (φ(x″(t)))′ + 8t x³ = θ,  t ∈ J,
  x(0) = θ,  x″(0) = θ,  x(1) = ∫_0^1 x(t) dt,
(16)

where x³ = (x₁³, x₂³, …, x_n³) is taken componentwise, and

φ(s) = { s³, s ≤ 0; s², s > 0.
(17)

Let E = Rⁿ = {x = (x₁, x₂, …, x_n) : x_i ∈ R, i = 1, 2, …, n} with the norm ‖x‖ = max_{1≤i≤n} |x_i|, and P = {x = (x₁, x₂, …, x_n) : x_i ≥ 0, i = 1, 2, …, n}. Then we have the following result.

Theorem 4.1 For any tJ, system (16) has at least one positive solution x(t).

Proof First, according to (16) and (17), we have g(t) ≡ 1 and f_i(t, x_i) = 8t x_i³. Then, when x_i = t², we have f(t,x) = 8t⁷. Moreover, we can get that P* = P. Choose ψ = (1, 1, …, 1); then it is clear that (H0) and (H1) are satisfied.

Next, we verify that all the conditions of Theorem 3.1 are satisfied. It is easy to see that σ = ∫_0^1 s g(s) ds = 1/2 and γ = (1 + ∫_0^1 (1−s) g(s) ds)/(1−σ) = 3. Choose δ = 1/4 ∈ (0, 1/2); then Λ = δ ∫_δ^{1−δ} H(1/2, s) ds = (1/4) ∫_{1/4}^{3/4} H(1/2, s) ds = 1/16.

From (17), we get

φ^{-1}(s) = { s^{1/3}, s ≤ 0; √s, s > 0.

Then h(s, x(s)) = φ^{-1}(∫_0^s f(t, x(t)) dt) = (∫_0^s 8t⁷ dt)^{1/2} = s⁴ = x²(s).
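The computation of h above can be checked symbolically (along x(t) = t² componentwise, as in the text, so that f(t, x(t)) = 8t⁷ and φ^{-1} reduces to the square root on positive values):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# With x(t) = t**2, f(t, x(t)) = 8*t*(t**2)**3 = 8*t**7, and
# phi^{-1}(u) = sqrt(u) for u > 0, so
# h(s, x(s)) = sqrt( int_0^s 8*t**7 dt ) should equal s**4 = x(s)**2.
h = sp.sqrt(sp.integrate(8 * t**7, (t, 0, s)))
assert sp.simplify(h - s**4) == 0
```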

Since

h^0 = limsup_{‖x‖→0} max_{t∈J} ‖h(t,x)‖/‖x‖ = lim_{‖x‖→0} ‖x‖²/‖x‖ = 0,

we have γh^0 = 0 < 1.

On the other hand, because ψ(x) ≥ ‖x‖ for x ∈ P, we have

(ψh)_∞ = liminf_{‖x‖→∞} min_{t∈J} ψ(h(t,x))/‖x‖ ≥ liminf_{‖x‖→∞} min_{t∈J} ‖h(t,x)‖/‖x‖ = lim_{‖x‖→∞} ‖x‖²/‖x‖ = ∞,

which implies that Λ(ψh)_∞ > 1. So all the conditions of Theorem 3.1 are satisfied; the conclusion follows, and the proof is complete. □