1 Introduction

In this paper, we study the existence of positive solutions for the following system of fractional integral boundary value problems:

$$\begin{cases} D_{0+}^{\alpha}u_i(t)+f_i\bigl(t,u_1(t),u_2(t)\bigr)=0, & 0<t<1,\ i=1,2,\\ u_i(0)=u_i'(0)=0,\qquad u_i(1)=\int_0^1 u_i(t)\,d\eta(t),\end{cases}$$
(1.1)

where $\alpha\in(2,3]$ is a real number, $D_{0+}^{\alpha}$ is the standard Riemann-Liouville fractional derivative of order $\alpha$, and $f_i\in C([0,1]\times\mathbb{R}_+\times\mathbb{R}_+,\mathbb{R})$, $i=1,2$. Here $\int_0^1 u_i(t)\,d\eta(t)$ denotes the Riemann-Stieltjes integral, where $\eta$ is right continuous on $[0,1)$, left continuous at $t=1$, and nondecreasing on $[0,1]$, with $\eta(0)=0$.

The subject of multi-point nonlocal boundary value problems, initiated by Il'in and Moiseev [1], has been addressed by many authors. Multi-point boundary conditions appear in certain problems of thermodynamics, elasticity, and wave propagation; see [2] and the references therein. For example, the vibrations of a guy wire of uniform cross-section composed of $N$ parts of different densities can be set up as a multi-point boundary value problem (see [3]), and many problems in the theory of elastic stability can be handled by the method of multi-point problems (see [4]). On the other hand, the Riemann-Stieltjes integral $\int_0^1 u(s)\,d\eta(s)$, where $\eta$ is of bounded variation (that is, $d\eta$ can be a signed measure), includes multi-point boundary value problems and integral boundary value problems as special cases. This is why many authors are particularly interested in Riemann-Stieltjes integral boundary value problems.

Meanwhile, we also note that the modeling capabilities of fractional differential equations in engineering, science, economics, and other fields have resulted in a rapid development of the theory of fractional differential equations in the last few decades; see the recent books [5–9]. This may explain why the last few decades have witnessed an ever-growing interest in the research of such problems, with many papers published in this direction. Recently, a number of papers have dealt with the existence of solutions (or positive solutions) of nonlinear fractional differential equations by means of techniques of nonlinear analysis (fixed-point theorems, Leray-Schauder theory, the upper and lower solution method, etc.); see, for example, [10–18] and the references therein.

However, to the best of our knowledge, there are only a few papers dealing with systems of fractional boundary value problems. In [13] and [18], Bai and Su, respectively, considered the existence of solutions for systems of fractional differential equations and obtained some excellent results. Motivated by the works mentioned above, in this paper we discuss the existence of positive solutions for the system of fractional integral boundary value problems (1.1). It is interesting that a square function and its inverse (the square-root function) are used to characterize the coupling behavior of the $f_i$, so that the $f_i$ are allowed to grow superlinearly and sublinearly.

2 Preliminaries

We first offer some definitions and fundamental facts of fractional calculus theory, which can be found in [5–9].

Definition 2.1 (see [7, 8], [[6], pp.36-37])

The Riemann-Liouville fractional derivative of order $\alpha>0$ of a continuous function $f:(0,+\infty)\to\mathbb{R}$ is given by

$$D_{0+}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\left(\frac{d}{dt}\right)^{n}\int_0^t\frac{f(s)}{(t-s)^{\alpha-n+1}}\,ds,$$

where $n=[\alpha]+1$ and $[\alpha]$ denotes the integer part of the number $\alpha$, provided that the right-hand side is pointwise defined on $(0,+\infty)$.

Definition 2.2 (see [[6], Definition 2.1])

The Riemann-Liouville fractional integral of order $\alpha>0$ of a function $f:(0,+\infty)\to\mathbb{R}$ is given by

$$I_{0+}^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}f(s)\,ds,$$

provided that the right-hand side is pointwise defined on $(0,+\infty)$.
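As a quick sanity check on this definition, the fractional integral of a power function has the closed form $I_{0+}^{\alpha}t^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}t^{\alpha+\beta}$. The following sketch (our own illustration, not part of the paper; the quadrature scheme and all parameter values are assumptions made for the example) approximates $I_{0+}^{\alpha}f(t)$ by the midpoint rule after the substitution $s=tx$ and compares it with that closed form:

```python
import math

def rl_integral(f, alpha, t, n=20000):
    """Approximate the Riemann-Liouville fractional integral I^alpha f(t).

    Substituting s = t*x gives
      I^alpha f(t) = t^alpha / Gamma(alpha) * int_0^1 (1-x)^(alpha-1) f(t*x) dx,
    which we evaluate with the midpoint rule (n subintervals).
    """
    h = 1.0 / n
    total = sum((1 - (k + 0.5) * h) ** (alpha - 1) * f(t * (k + 0.5) * h)
                for k in range(n))
    return t ** alpha / math.gamma(alpha) * total * h

# Closed form: I^alpha t^beta = Gamma(beta+1)/Gamma(alpha+beta+1) * t^(alpha+beta)
alpha, beta, t = 2.5, 2.0, 0.7
approx = rl_integral(lambda s: s ** beta, alpha, t)
exact = math.gamma(beta + 1) / math.gamma(alpha + beta + 1) * t ** (alpha + beta)
assert abs(approx - exact) < 1e-6
```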

From the definition of the Riemann-Liouville derivative, we can obtain the following statement.

Lemma 2.1 (see [11])

Let $\alpha>0$. If we assume $u\in C(0,1)\cap L(0,1)$, then the fractional differential equation $D_{0+}^{\alpha}u(t)=0$ has a unique solution

$$u(t)=c_1t^{\alpha-1}+c_2t^{\alpha-2}+\cdots+c_Nt^{\alpha-N},\qquad c_i\in\mathbb{R},\ i=1,2,\ldots,N,$$

where $N$ is the smallest integer greater than or equal to $\alpha$.

Lemma 2.2 (see [11])

Assume that $u\in C(0,1)\cap L(0,1)$ has a fractional derivative of order $\alpha>0$ that belongs to $C(0,1)\cap L(0,1)$. Then

$$I_{0+}^{\alpha}D_{0+}^{\alpha}u(t)=u(t)+c_1t^{\alpha-1}+c_2t^{\alpha-2}+\cdots+c_Nt^{\alpha-N}\quad\text{for some }c_i\in\mathbb{R},\ i=1,2,\ldots,N,$$

where $N$ is the smallest integer greater than or equal to $\alpha$.

In what follows, we need to consider the following fractional integral boundary value problem:

$$\begin{cases} D_{0+}^{\alpha}u(t)+h(t,u)=0, & 0<t<1,\\ u(0)=u'(0)=0,\qquad u(1)=\int_0^1 u(t)\,d\eta(t),\end{cases}$$
(2.1)

We then present Green's function for (2.1) and study its properties. Throughout this paper, we always assume that the following two conditions are satisfied:

(H0) $\kappa_0:=1-\int_0^1 t^{\alpha-1}\,d\eta(t)>0$.

(H1) $h\in C([0,1]\times\mathbb{R}_+,\mathbb{R})$ is bounded from below, i.e., there is a positive constant $M$ such that $h(t,u)\ge-M$ for all $(t,u)\in[0,1]\times\mathbb{R}_+$.

Lemma 2.3 Let (H0) and (H1) hold. Then problem (2.1) is equivalent to

$$u(t)=\int_0^1 G(t,s)h\bigl(s,u(s)\bigr)\,ds,$$

where

$$G(t,s)=H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 H(t,s)\,d\eta(t),$$
(2.2)

and

$$H(t,s):=\frac{1}{\Gamma(\alpha)}\begin{cases}t^{\alpha-1}(1-s)^{\alpha-1}-(t-s)^{\alpha-1}, & 0\le s\le t\le 1,\\ t^{\alpha-1}(1-s)^{\alpha-1}, & 0\le t\le s\le 1.\end{cases}$$
(2.3)

Proof By Lemmas 2.1 and 2.2, we can reduce the equation of problem (2.1) to the equivalent integral equation

$$u(t)=-I_{0+}^{\alpha}h(t)+c_1t^{\alpha-1}+c_2t^{\alpha-2}+c_3t^{\alpha-3}=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+c_1t^{\alpha-1}+c_2t^{\alpha-2}+c_3t^{\alpha-3},$$
(2.4)

where $c_i$ ($i=1,2,3$) are constants and we write $h(s)$ for $h(s,u(s))$ for brevity. By $u(0)=0$, we get $c_3=0$. Thus,

$$u(t)=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+c_1t^{\alpha-1}+c_2t^{\alpha-2}.$$
(2.5)

Differentiating (2.5), we have

$$u'(t)=-\frac{\alpha-1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-2}h(s)\,ds+c_1(\alpha-1)t^{\alpha-2}+c_2(\alpha-2)t^{\alpha-3}.$$
(2.6)

By (2.6) and $u'(0)=0$, we have $c_2=0$. Then

$$u(t)=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+c_1t^{\alpha-1}.$$
(2.7)

From $u(1)=\int_0^1 u(t)\,d\eta(t)$, we arrive at

$$u(1)=-\frac{1}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}h(s)\,ds+c_1=\int_0^1 u(t)\,d\eta(t),$$

and thus

$$c_1=\frac{1}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}h(s)\,ds+\int_0^1 u(t)\,d\eta(t).$$

Therefore, we obtain by (2.7)

$$u(t)=-\frac{1}{\Gamma(\alpha)}\int_0^t(t-s)^{\alpha-1}h(s)\,ds+\frac{t^{\alpha-1}}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}h(s)\,ds+t^{\alpha-1}\int_0^1 u(t)\,d\eta(t)=\int_0^1 H(t,s)h(s)\,ds+t^{\alpha-1}\int_0^1 u(t)\,d\eta(t),$$
(2.8)

where H(t,s) is defined by (2.3). From (2.8), we have

$$\int_0^1 u(t)\,d\eta(t)=\int_0^1 d\eta(t)\int_0^1 H(t,s)h(s)\,ds+\int_0^1 t^{\alpha-1}\,d\eta(t)\int_0^1 u(t)\,d\eta(t),$$
(2.9)

and by (H0) we find

$$\int_0^1 u(t)\,d\eta(t)=\kappa_0^{-1}\int_0^1 d\eta(t)\int_0^1 H(t,s)h(s)\,ds.$$
(2.10)

Combining (2.8) and (2.10), we see

$$u(t)=\int_0^1 H(t,s)h(s)\,ds+t^{\alpha-1}\kappa_0^{-1}\int_0^1 d\eta(t)\int_0^1 H(t,s)h(s)\,ds=\int_0^1 G(t,s)h(s)\,ds,$$
(2.11)

where G(t,s) is determined by (2.2). This completes the proof. □
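To make the construction concrete, one can discretize $H$ and $G$ for a specific choice of $\eta$. The sketch below (our own illustration, not from the paper; the choices $\alpha=2.5$ and $d\eta(t)=\frac12\,dt$, i.e., $u(1)=\frac12\int_0^1 u(t)\,dt$, are assumptions made for the example) checks numerically that $u(t)=\int_0^1 G(t,s)\,ds$, the solution of (2.1) for $h\equiv 1$, satisfies $u(0)=0$ and the integral boundary condition:

```python
import math

ALPHA, N = 2.5, 400
nodes = [(k + 0.5) / N for k in range(N)]   # midpoint quadrature nodes
w_eta = 1.0 / (2 * N)                       # weights for d(eta) = dt/2
w_leb = 1.0 / N                             # weights for Lebesgue measure dt

def H(t, s):
    """Kernel H(t,s) from (2.3)."""
    g = math.gamma(ALPHA)
    if s <= t:
        return (t**(ALPHA-1) * (1-s)**(ALPHA-1) - (t-s)**(ALPHA-1)) / g
    return t**(ALPHA-1) * (1-s)**(ALPHA-1) / g

kappa0 = 1 - sum(t**(ALPHA-1) for t in nodes) * w_eta
assert kappa0 > 0                           # (H0) holds for this eta

# eta-integral of H(., s) at every node s, computed once
Ieta = [sum(H(tau, s) for tau in nodes) * w_eta for s in nodes]

def G(t, j):
    """Green's function (2.2) at (t, nodes[j])."""
    return H(t, nodes[j]) + t**(ALPHA-1) * Ieta[j] / kappa0

def u(t):
    """Solution of (2.1) for h ≡ 1, i.e. u(t) = int_0^1 G(t,s) ds."""
    return sum(G(t, j) for j in range(N)) * w_leb

assert abs(u(0.0)) < 1e-12                  # u(0) = 0
bc = sum(u(t) for t in nodes) * w_eta       # int_0^1 u d(eta)
assert abs(u(1.0) - bc) < 1e-3              # u(1) = int_0^1 u d(eta)
```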

Lemma 2.4 (see [[10], Lemma 3.2])

For any $(t,s)\in[0,1]\times[0,1]$, let

$$k(t):=t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 t^{\alpha-1}(1-t)\,d\eta(t),\qquad\varphi(t):=\frac{t(1-t)^{\alpha-1}}{\Gamma(\alpha)},\qquad\mu:=(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)>0.$$

Then the following two inequalities are satisfied:

(i) $k(t)\varphi(s)\le G(t,s)\le\mu\varphi(s)$;

(ii) $H(t,s)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-t)$.

Proof (i) For $s\le t$, we have $1-s\ge 1-t$, and then

$$\begin{aligned}\Gamma(\alpha)H(t,s)&=t^{\alpha-1}(1-s)^{\alpha-1}-(t-s)^{\alpha-1}=(\alpha-1)\int_{t-s}^{t-ts}x^{\alpha-2}\,dx\\&\le(\alpha-1)(t-ts)^{\alpha-2}\bigl((t-ts)-(t-s)\bigr)=(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}s(1-t)\\&\le(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}s(1-s)\le(\alpha-1)s(1-s)^{\alpha-1}.\end{aligned}$$
(2.12)

On the other hand, for $t\le s$, since $\alpha>2$, we have

$$\Gamma(\alpha)H(t,s)=t^{\alpha-1}(1-s)^{\alpha-1}\le(\alpha-1)t^{\alpha-1}(1-s)^{\alpha-1}=(\alpha-1)t^{\alpha-2}t(1-s)^{\alpha-1}\le(\alpha-1)t^{\alpha-2}s(1-s)^{\alpha-1}\le(\alpha-1)s(1-s)^{\alpha-1}.$$

Consequently,

$$\begin{aligned}\Gamma(\alpha)G(t,s)&=\Gamma(\alpha)H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\Gamma(\alpha)H(t,s)\,d\eta(t)\\&\le(\alpha-1)s(1-s)^{\alpha-1}+\kappa_0^{-1}t^{\alpha-1}\int_0^1(\alpha-1)s(1-s)^{\alpha-1}\,d\eta(t)\\&\le(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)s(1-s)^{\alpha-1}=\Gamma(\alpha)\mu\varphi(s).\end{aligned}$$

Moreover, for $s\le t$, note that $(t-s)^{\alpha-2}\le(t-ts)^{\alpha-2}$, $(1-s)^{\alpha-2}\ge(1-s)^{\alpha-1}$, and $t^{\alpha-2}\ge t^{\alpha-1}$; then we find

$$\begin{aligned}\Gamma(\alpha)H(t,s)&=t^{\alpha-1}(1-s)^{\alpha-1}-(t-s)^{\alpha-1}=(t-ts)^{\alpha-2}(t-ts)-(t-s)^{\alpha-2}(t-s)\\&\ge(t-ts)^{\alpha-2}(t-ts)-(t-ts)^{\alpha-2}(t-s)=t^{\alpha-2}(1-s)^{\alpha-2}s(1-t)\\&\ge t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}.\end{aligned}$$

On the other hand, for $t\le s$, we have

$$\Gamma(\alpha)H(t,s)=t^{\alpha-1}(1-s)^{\alpha-1}\ge t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}.$$

Therefore, we get

$$\begin{aligned}\Gamma(\alpha)G(t,s)&=\Gamma(\alpha)H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1\Gamma(\alpha)H(t,s)\,d\eta(t)\\&\ge t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}+\kappa_0^{-1}t^{\alpha-1}\int_0^1 t^{\alpha-1}(1-t)s(1-s)^{\alpha-1}\,d\eta(t)\\&=s(1-s)^{\alpha-1}\Bigl[t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 t^{\alpha-1}(1-t)\,d\eta(t)\Bigr]=\Gamma(\alpha)k(t)\varphi(s).\end{aligned}$$
(ii) If $t\le s$, since $\alpha>2$, we have $(1-s)^{\alpha-1}\le(1-t)^{\alpha-1}\le 1-t$ and

$$H(t,s)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-s)^{\alpha-1}\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-t).$$

For $s\le t$, we have $1-s\ge 1-t$, and then by (2.12) we get

$$H(t,s)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}s(1-t)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2}t(1-t)\le\Gamma^{-1}(\alpha)(\alpha-1)t^{\alpha-1}(1-t).$$

This completes the proof. □
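The two bounds in Lemma 2.4 can also be probed numerically. The sketch below (our own illustration, not from the paper; $\alpha=2.5$ and $d\eta(t)=\frac12\,dt$ are assumed choices satisfying (H0)) checks $k(t)\varphi(s)\le G(t,s)\le\mu\varphi(s)$ and the bound (ii) on a grid:

```python
import math

ALPHA, N = 2.5, 200
nodes = [(k + 0.5) / N for k in range(N)]
w_eta = 1.0 / (2 * N)                 # d(eta) = dt/2, so eta(1) = 1/2
ETA1 = 0.5

def H(t, s):
    g = math.gamma(ALPHA)
    if s <= t:
        return (t**(ALPHA-1)*(1-s)**(ALPHA-1) - (t-s)**(ALPHA-1)) / g
    return t**(ALPHA-1)*(1-s)**(ALPHA-1) / g

kappa0 = 1 - sum(t**(ALPHA-1) for t in nodes) * w_eta
mu = (ALPHA - 1) * (1 + ETA1 / kappa0)
Ib = sum(t**(ALPHA-1)*(1-t) for t in nodes) * w_eta  # int t^(a-1)(1-t) d(eta)

def k(t):
    return t**(ALPHA-1)*(1-t) + t**(ALPHA-1) * Ib / kappa0

def phi(t):
    return t * (1-t)**(ALPHA-1) / math.gamma(ALPHA)

Ieta = [sum(H(tau, s) for tau in nodes) * w_eta for s in nodes]
bound2 = (ALPHA - 1) / math.gamma(ALPHA)
tol = 1e-12
for i, t in enumerate(nodes):
    for j, s in enumerate(nodes):
        G = H(t, s) + t**(ALPHA-1) * Ieta[j] / kappa0
        assert k(t) * phi(s) <= G + tol                        # lower bound (i)
        assert G <= mu * phi(s) + tol                          # upper bound (i)
        assert H(t, s) <= bound2 * t**(ALPHA-1) * (1-t) + tol  # bound (ii)
```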

Lemma 2.5 Let

$$\kappa_1:=\frac{\alpha\Gamma(\alpha+1)}{\Gamma(2\alpha+2)}+\frac{\kappa_0^{-1}\Gamma(\alpha+1)\int_0^1 t^{\alpha-1}(1-t)\,d\eta(t)}{\Gamma(2\alpha+1)}\qquad\text{and}\qquad\kappa_2:=\frac{(\alpha-1)(1+\eta(1)\kappa_0^{-1})}{\Gamma(\alpha+2)}.$$

Then the following inequality holds:

$$\kappa_1\varphi(s)\le\int_0^1 G(t,s)\varphi(t)\,dt\le\kappa_2\varphi(s),\quad s\in[0,1].$$
(2.13)

Proof By (i) of Lemma 2.4, we have

$$\begin{aligned}\kappa_1\varphi(s)&=\left[\frac{\alpha\Gamma(\alpha+1)}{\Gamma(2\alpha+2)}+\frac{\kappa_0^{-1}\Gamma(\alpha+1)\int_0^1 t^{\alpha-1}(1-t)\,d\eta(t)}{\Gamma(2\alpha+1)}\right]\varphi(s)=\int_0^1\Bigl[t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 t^{\alpha-1}(1-t)\,d\eta(t)\Bigr]\varphi(s)\varphi(t)\,dt\\&\le\int_0^1 G(t,s)\varphi(t)\,dt\le\int_0^1(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)\varphi(s)\varphi(t)\,dt=\frac{(\alpha-1)(1+\eta(1)\kappa_0^{-1})}{\Gamma(\alpha+2)}\varphi(s)=\kappa_2\varphi(s),\end{aligned}$$

and we easily obtain (2.13), as claimed. This completes the proof. □
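The constants $\kappa_1,\kappa_2$ can likewise be checked numerically. The sketch below (our own illustration, not from the paper, again with the assumed choices $\alpha=2.5$ and $d\eta(t)=\frac12\,dt$) verifies (2.13) at a few interior points $s$:

```python
import math

ALPHA, N = 2.5, 400
nodes = [(k + 0.5) / N for k in range(N)]
w_eta, w_leb = 1.0 / (2 * N), 1.0 / N        # d(eta) = dt/2, eta(1) = 1/2

def H(t, s):
    g = math.gamma(ALPHA)
    if s <= t:
        return (t**(ALPHA-1)*(1-s)**(ALPHA-1) - (t-s)**(ALPHA-1)) / g
    return t**(ALPHA-1)*(1-s)**(ALPHA-1) / g

def phi(t):
    return t * (1-t)**(ALPHA-1) / math.gamma(ALPHA)

kappa0 = 1 - sum(t**(ALPHA-1) for t in nodes) * w_eta
Ib = sum(t**(ALPHA-1)*(1-t) for t in nodes) * w_eta
mu = (ALPHA - 1) * (1 + 0.5 / kappa0)
kappa1 = (ALPHA * math.gamma(ALPHA+1) / math.gamma(2*ALPHA+2)
          + math.gamma(ALPHA+1) * Ib / (kappa0 * math.gamma(2*ALPHA+1)))
kappa2 = mu / math.gamma(ALPHA+2)

for s in (0.1, 0.3, 0.5, 0.7, 0.9):
    Is = sum(H(tau, s) for tau in nodes) * w_eta
    integral = sum((H(t, s) + t**(ALPHA-1) * Is / kappa0) * phi(t)
                   for t in nodes) * w_leb
    assert kappa1 * phi(s) <= integral + 1e-6   # lower half of (2.13)
    assert integral <= kappa2 * phi(s) + 1e-6   # upper half of (2.13)
```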

Let

$$E:=C[0,1],\qquad\|u\|:=\max_{t\in[0,1]}\bigl|u(t)\bigr|,\qquad P:=\bigl\{u\in E:u(t)\ge 0,\ t\in[0,1]\bigr\}.$$

Then $(E,\|\cdot\|)$ is a real Banach space and $P$ is a cone on $E$.

The norm on $E\times E$ is defined by $\|(u,v)\|:=\|u\|+\|v\|$ for $(u,v)\in E\times E$. Note that $E\times E$ is a real Banach space under this norm, and $P\times P$ is a positive cone on $E\times E$.

By Lemma 2.3, system (1.1) is equivalent to the system of nonlinear Hammerstein integral equations

$$u_i(t)=\int_0^1 G(t,s)f_i\bigl(s,u_1(s),u_2(s)\bigr)\,ds,\quad i=1,2,$$
(2.14)

where G(t,s) is defined by (2.2).

Lemma 2.6 (i) If $u^*(t)$ is a positive solution of (2.1), then $u^*(t)+w(t)$ is a positive solution of the following differential equation:

$$\begin{cases} D_{0+}^{\alpha}u(t)+F\bigl(t,u(t)-w(t)\bigr)=0,\\ u(0)=u'(0)=0,\qquad u(1)=\int_0^1 u(t)\,d\eta(t),\end{cases}$$
(2.15)

where

$$F(t,x):=\begin{cases}\tilde h(t,x), & t\in[0,1],\ x\ge 0,\\ \tilde h(t,0), & t\in[0,1],\ x<0,\end{cases}$$

the function $\tilde h(t,x)=h(t,x)+M$, $\tilde h:[0,1]\times\mathbb{R}_+\to\mathbb{R}_+$ is continuous, and

$$w(t):=M\int_0^1 G(t,s)\,ds,\quad t\in[0,1].$$
(2.16)
(ii) If $u(t)$ is a solution of (2.15) and $u(t)\ge w(t)$, $t\in[0,1]$, then $u^*(t)=u(t)-w(t)$ is a positive solution of (2.1).

Proof If $u^*(t)$ is a positive solution of (2.1), then

$$\begin{cases} D_{0+}^{\alpha}u^*(t)+h\bigl(t,u^*(t)\bigr)=0,\\ u^*(0)=(u^*)'(0)=0,\qquad u^*(1)=\int_0^1 u^*(t)\,d\eta(t).\end{cases}$$

By a simple computation, we easily get $u^*(0)+w(0)=(u^*)'(0)+w'(0)=0$, $u^*(1)+w(1)=\int_0^1\bigl(u^*(t)+w(t)\bigr)\,d\eta(t)$, and

$$\begin{aligned}D_{0+}^{\alpha}\bigl(u^*(t)+w(t)\bigr)+F\bigl(t,u^*(t)\bigr)&=D_{0+}^{\alpha}u^*(t)+D_{0+}^{\alpha}w(t)+h\bigl(t,u^*(t)\bigr)+M\\&=D_{0+}^{\alpha}w(t)+M=D_{0+}^{\alpha}\Bigl(M\int_0^1 G(t,s)\,ds\Bigr)+M=-M+M=0,\end{aligned}$$

i.e., $u^*(t)+w(t)$ satisfies (2.15). Therefore, (i) holds, as claimed. Similarly, it is easy to prove that (ii) is also satisfied. This completes the proof. □

By Lemma 2.3, we obtain that (2.15) is equivalent to the integral equation

$$u(t)=\int_0^1 G(t,s)F\bigl(s,u(s)-w(s)\bigr)\,ds:=(Tu)(t),$$
(2.17)

where $G(t,s)$ is determined by (2.2). Clearly, the continuity and nonnegativity of $G$ and $F$ imply that $T:P\to P$ is a completely continuous operator.
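For a concrete, numerically solvable instance one can iterate $T$ directly. The sketch below (our own illustration, not from the paper; the choices $\alpha=2.5$, $d\eta(t)=\frac12\,dt$, and $h(t,u)=1+u/4$ are assumptions made for the example) exploits that $h\ge 0$, so that $M=0$, $w\equiv 0$, and $F=h$; Picard iteration $u\leftarrow Tu$ converges here because the discretized $T$ is a contraction for this small nonlinearity:

```python
import math

ALPHA, N = 2.5, 200
nodes = [(k + 0.5) / N for k in range(N)]
w_eta, w_leb = 1.0 / (2 * N), 1.0 / N        # d(eta) = dt/2

def H(t, s):
    g = math.gamma(ALPHA)
    if s <= t:
        return (t**(ALPHA-1)*(1-s)**(ALPHA-1) - (t-s)**(ALPHA-1)) / g
    return t**(ALPHA-1)*(1-s)**(ALPHA-1) / g

kappa0 = 1 - sum(t**(ALPHA-1) for t in nodes) * w_eta
Ieta = [sum(H(tau, s) for tau in nodes) * w_eta for s in nodes]
Gmat = [[H(t, nodes[j]) + t**(ALPHA-1)*Ieta[j]/kappa0 for j in range(N)]
        for t in nodes]

def h(t, u):                # h(t,u) = 1 + u/4 >= 0, so M = 0 and w ≡ 0
    return 1.0 + u / 4.0

def T(u):                   # discretized (Tu)(t) = int_0^1 G(t,s) F(s,u(s)) ds
    return [sum(Gmat[i][j] * h(nodes[j], u[j]) for j in range(N)) * w_leb
            for i in range(N)]

u = [0.0] * N
for _ in range(40):         # Picard iteration
    u = T(u)

Tu = T(u)
assert max(abs(u[i] - Tu[i]) for i in range(N)) < 1e-10  # fixed point reached
assert min(u) >= 0.0                                     # positive solution
```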

Lemma 2.7 Put $P_1:=\{u\in P:u(t)\ge\mu^{-1}k(t)\|u\|\text{ for }t\in[0,1]\}$. Then $T(P)\subset P_1$, where $\mu$ and $T$ are defined by Lemma 2.4 and (2.17), respectively.

Proof By (i) of Lemma 2.4, we easily find

$$\int_0^1 k(t)\varphi(s)F\bigl(s,u(s)-w(s)\bigr)\,ds\le(Tu)(t)=\int_0^1 G(t,s)F\bigl(s,u(s)-w(s)\bigr)\,ds\le\int_0^1(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)\varphi(s)F\bigl(s,u(s)-w(s)\bigr)\,ds,$$

and thus

$$(Tu)(t)\ge\frac{k(t)}{(\alpha-1)(1+\eta(1)\kappa_0^{-1})}\int_0^1(\alpha-1)\bigl(1+\eta(1)\kappa_0^{-1}\bigr)\varphi(s)F\bigl(s,u(s)-w(s)\bigr)\,ds\ge\frac{k(t)}{(\alpha-1)(1+\eta(1)\kappa_0^{-1})}\|Tu\|=\mu^{-1}k(t)\|Tu\|.$$

This completes the proof. □

In this paper, we assume that $f_i$ ($i=1,2$) satisfy the following condition:

(H2) $f_i\in C([0,1]\times\mathbb{R}_+\times\mathbb{R}_+,\mathbb{R})$ and there is a positive constant $M$ such that $f_i(t,x,y)\ge-M$ for all $(t,x,y)\in[0,1]\times\mathbb{R}_+\times\mathbb{R}_+$.

By (H2) and Lemma 2.6, (2.14) is turned into the following integral equation:

$$u_i(t)=\int_0^1 G(t,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds,$$
(2.18)

where

$$F_i(t,x,y):=\begin{cases}\tilde f_i(t,x,y), & t\in[0,1],\ x,y\ge 0,\\ \tilde f_i(t,0,0), & t\in[0,1],\ x,y<0,\end{cases}$$

the function $\tilde f_i(t,x,y)=f_i(t,x,y)+M$, $\tilde f_i\in C([0,1]\times\mathbb{R}_+\times\mathbb{R}_+,\mathbb{R}_+)$, and $w(t)$ is given by (2.16). By Lemma 2.6, if $(u_1,u_2)$ is a solution of (2.18) and $u_i(t)\ge w(t)$, $t\in[0,1]$, then $(u_1^*=u_1-w,\ u_2^*=u_2-w)$ is a positive solution of (1.1).

Define the operator A as follows:

$$A(u_1,u_2)(t):=\bigl(A_1(u_1,u_2)(t),A_2(u_1,u_2)(t)\bigr),$$
(2.19)

where

$$A_i(u_1,u_2)(t)=\int_0^1 G(t,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds.$$

It is obvious that $A_i:P\times P\to P$ ($i=1,2$) and $A:P\times P\to P\times P$ are completely continuous operators. Clearly, $(u_1-w,u_2-w)\in P\times P$ is a positive solution of (1.1) if and only if $(u_1,u_2)\in(P\times P)\setminus\{0\}$ is a fixed point of $A$ with $u_i(t)\ge w(t)$, $i=1,2$.

The following two lemmas play an important role in our proofs, which involve the fixed point index.

Lemma 2.8 ([19])

Let $\Omega\subset E$ be a bounded open set, and let $A:\overline{\Omega}\cap P\to P$ be a completely continuous operator. If there exists $v_0\in P\setminus\{0\}$ such that $v-Av\ne\lambda v_0$ for all $v\in\partial\Omega\cap P$ and $\lambda\ge 0$, then $i(A,\Omega\cap P,P)=0$.

Lemma 2.9 ([19])

Let $\Omega\subset E$ be a bounded open set with $0\in\Omega$. Suppose that $A:\overline{\Omega}\cap P\to P$ is a completely continuous operator. If $v\ne\lambda Av$ for all $v\in\partial\Omega\cap P$ and $0\le\lambda\le 1$, then $i(A,\Omega\cap P,P)=1$.

3 The existence of positive solutions for (1.1)

We list the assumptions on $F_i$ ($i=1,2$) in this section.

(H3) There are $c>0$ and $\xi_i>0$, $i=1,2$, satisfying $\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^2>1$ such that

$$F_1(t,x,y)\ge\xi_1\sqrt{y}-c,\qquad F_2(t,x,y)\ge\xi_2x^2-c,\quad(t,x,y)\in[0,1]\times\mathbb{R}_+\times\mathbb{R}_+.$$

(H4) There exists $Q:[0,1]\to(-\infty,+\infty)$ such that

$$F_i(t,x,y)\le Q(t),\quad t\in[0,1],\ i=1,2,\ x,y\in\bigl[0,M\mu\Gamma^{-1}(\alpha)(\alpha-1)\bigr],\qquad\int_0^1\varphi(s)Q(s)\,ds<M\Gamma^{-1}(\alpha)(\alpha-1).$$

(H5) There are $c>0$ and $\xi_i>0$, $i=3,4$, satisfying $2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^2\kappa_2^2<1$ such that

$$F_1(t,x,y)\le\xi_3y^2+c,\qquad F_2(t,x,y)\le\xi_4\sqrt{x}+c,\quad(t,x,y)\in[0,1]\times\mathbb{R}_+\times\mathbb{R}_+.$$

(H6) There exist $Q:[0,1]\to(-\infty,+\infty)$, $\theta\in(0,\frac12)$, and $t_0\in[\theta,1-\theta]$ such that

$$F_i(t,x,y)\ge Q(t),\quad t\in[\theta,1-\theta],\ i=1,2,\ x,y\in\bigl[0,M\mu\Gamma^{-1}(\alpha)(\alpha-1)\bigr],\qquad\int_\theta^{1-\theta}k(t_0)\varphi(s)Q(s)\,ds\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1).$$

We adopt the convention in the sequel that $c_1,c_2,\ldots$ stand for different positive constants, and we write $B_\rho:=\{u\in E:\|u\|<\rho\}$ for $\rho>0$.

Theorem 3.1 Suppose that (H2)-(H4) hold. Then (1.1) has at least one positive solution.

Proof By Lemma 2.6, it suffices to find a fixed point $(u_1,u_2)$ of $A$ satisfying $u_i(t)\ge w(t)$, $t\in[0,1]$. By Lemma 2.7, for any $u_i\in P_1$ and $t\in[0,1]$, noting (ii) of Lemma 2.4, together with

$$\int_0^1 G(t,s)\,ds=\int_0^1\Bigl(H(t,s)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 H(t,s)\,d\eta(t)\Bigr)\,ds\le\Gamma^{-1}(\alpha)(\alpha-1)\Bigl(t^{\alpha-1}(1-t)+\kappa_0^{-1}t^{\alpha-1}\int_0^1 t^{\alpha-1}(1-t)\,d\eta(t)\Bigr)=\Gamma^{-1}(\alpha)(\alpha-1)k(t),$$

we have

$$u_i(t)-w(t)=u_i(t)-M\int_0^1 G(t,s)\,ds\ge u_i(t)-M\Gamma^{-1}(\alpha)(\alpha-1)k(t)\ge u_i(t)-M\mu\Gamma^{-1}(\alpha)(\alpha-1)u_i(t)\|u_i\|^{-1},\quad i=1,2.$$
(3.1)

Therefore, $\|u_i\|\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1)$ leads to $u_i(t)\ge w(t)$, $t\in[0,1]$.

In what follows, we first show that there exists a sufficiently large positive number $R>M\mu\Gamma^{-1}(\alpha)(\alpha-1)$ such that the following claim holds:

$$(u_1,u_2)\ne A(u_1,u_2)+\lambda(\psi,\psi),\quad(u_1,u_2)\in\partial B_R\cap(P\times P),\ \lambda\ge 0,$$
(3.2)

where $\psi\in P\setminus\{0\}$ is a given function. Indeed, if the claim is false, there exist $(u,v)\in\partial B_R\cap(P\times P)$ and $\lambda\ge 0$ such that $(u,v)=A(u,v)+\lambda(\psi,\psi)$; then $u\ge A_1(u,v)$ and $v\ge A_2(u,v)$. In view of (H3) and the definition of $A_i$ ($i=1,2$), we get

$$u(t)\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)-w(s)}\,ds-c_1\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)}\,ds-\int_0^1 G(t,s)\xi_1\sqrt{w(s)}\,ds-c_1\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)}\,ds-c_2,$$
(3.3)

and

$$v(s)\ge\int_0^1 G(s,\tau)\xi_2\bigl(u(\tau)-w(\tau)\bigr)^2\,d\tau-c_1.$$
(3.4)

By the concavity of the square-root function, we have by (3.4)

$$\begin{aligned}\sqrt{v(s)}&\ge\sqrt{v(s)+c_1}-\sqrt{c_1}\ge\sqrt{\int_0^1 G(s,\tau)\xi_2\bigl(u(\tau)-w(\tau)\bigr)^2\,d\tau}-\sqrt{c_1}\\&=\sqrt{\int_0^1\mu\Gamma^{-1}(\alpha)\xi_2\,\mu^{-1}\Gamma(\alpha)G(s,\tau)\bigl(u(\tau)-w(\tau)\bigr)^2\,d\tau}-\sqrt{c_1}\\&\ge\int_0^1\sqrt{\mu\Gamma^{-1}(\alpha)\xi_2}\,\mu^{-1}\Gamma(\alpha)G(s,\tau)\bigl(u(\tau)-w(\tau)\bigr)\,d\tau-\sqrt{c_1}\\&\ge\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\xi_2^{\frac12}\int_0^1 G(s,\tau)u(\tau)\,d\tau-c_3.\end{aligned}$$
(3.5)
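The Jensen-type step in (3.5) uses the concavity of $x\mapsto\sqrt{x}$ together with $\sqrt{0}=0$: for nonnegative weights with total mass at most $1$ (here the kernel $\mu^{-1}\Gamma(\alpha)G(s,\tau)\,d\tau$, which is a sub-probability measure since $\mu^{-1}\Gamma(\alpha)G(s,\tau)\le\Gamma(\alpha)\varphi(\tau)\le 1$), one has $\sqrt{\sum_iw_ix_i}\ge\sum_iw_i\sqrt{x_i}$; the convexity step (3.20) later uses the reverse inequality for squares, $(\sum_iw_ix_i)^2\le\sum_iw_ix_i^2$. A quick randomized check of both (our own illustration, not part of the paper):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    w = [random.random() for _ in range(n)]
    excess = sum(w)
    if excess > 1.0:                      # rescale to a sub-probability
        w = [wi / excess * random.random() for wi in w]
    x = [10.0 * random.random() for _ in range(n)]
    mean = sum(wi * xi for wi, xi in zip(w, x))
    # concave case (used in (3.5)): sqrt(sum w x) >= sum w sqrt(x)
    assert math.sqrt(mean) >= sum(wi * math.sqrt(xi)
                                  for wi, xi in zip(w, x)) - 1e-12
    # convex case (used in (3.20)): (sum w x)^2 <= sum w x^2
    assert mean ** 2 <= sum(wi * xi * xi for wi, xi in zip(w, x)) + 1e-12
```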

Combining (3.3) and (3.5), we easily find

$$u(t)\ge\int_0^1 G(t,s)\xi_1\Bigl[\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\xi_2^{\frac12}\int_0^1 G(s,\tau)u(\tau)\,d\tau-c_3\Bigr]\,ds-c_2\ge\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\int_0^1\!\!\int_0^1 G(t,s)G(s,\tau)u(\tau)\,d\tau\,ds-c_4.$$
(3.6)

Multiplying both sides of the above by $\varphi(t)$, integrating over $[0,1]$, and using Lemma 2.5, we obtain

$$\int_0^1 u(t)\varphi(t)\,dt\ge\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^2\int_0^1 u(t)\varphi(t)\,dt-c_5,$$
(3.7)

and thus

$$\int_0^1 u(t)\varphi(t)\,dt\le\frac{c_5}{\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^2-1}.$$
(3.8)

Noting Lemma 2.7, we obtain

$$\mu^{-1}\kappa_1\|u\|=\int_0^1\mu^{-1}k(t)\|u\|\varphi(t)\,dt\le\int_0^1 u(t)\varphi(t)\,dt\le\frac{c_5}{\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^2-1}.$$
(3.9)

Hence,

$$\|u\|\le\frac{\mu c_5}{\xi_1\xi_2^{\frac12}\mu^{-\frac12}\Gamma^{\frac12}(\alpha)\kappa_1^3-\kappa_1}:=N_1.$$
(3.10)

On the other hand, noting (3.3), together with the fact that $\sqrt{v(s)}\ge v(s)\|v\|^{-\frac12}$, we arrive at

$$\|u\|+c_2\ge u(t)+c_2\ge\int_0^1 G(t,s)\xi_1\sqrt{v(s)}\,ds\ge\frac{\xi_1}{\sqrt{\|v\|}}\int_0^1 G(t,s)v(s)\,ds.$$
(3.11)

Multiplying both sides of the above by $\varphi(t)$, integrating over $[0,1]$, and using Lemmas 2.5 and 2.7, we obtain

$$\Gamma^{-1}(\alpha+2)\bigl(\|u\|+c_2\bigr)=\int_0^1\bigl(\|u\|+c_2\bigr)\varphi(t)\,dt\ge\frac{\xi_1\kappa_1}{\sqrt{\|v\|}}\int_0^1 v(t)\varphi(t)\,dt\ge\frac{\xi_1\kappa_1}{\sqrt{\|v\|}}\int_0^1\mu^{-1}k(t)\|v\|\varphi(t)\,dt=\mu^{-1}\xi_1\kappa_1^2\sqrt{\|v\|}.$$
(3.12)

Consequently,

$$\|v\|\le\biggl[\frac{\Gamma^{-1}(\alpha+2)(N_1+c_2)}{\mu^{-1}\xi_1\kappa_1^2}\biggr]^2.$$
(3.13)

Taking $R>\max\Bigl\{M\mu\Gamma^{-1}(\alpha)(\alpha-1),\ N_1+\bigl[\Gamma^{-1}(\alpha+2)(N_1+c_2)\mu\xi_1^{-1}\kappa_1^{-2}\bigr]^2\Bigr\}$, we obtain $\|(u,v)\|=\|u\|+\|v\|<R$, which contradicts $(u,v)\in\partial B_R\cap(P\times P)$. As a result, (3.2) is true. Lemma 2.8 implies

$$i\bigl(A,B_R\cap(P\times P),P\times P\bigr)=0.$$
(3.14)

On the other hand, by (H4), we have, for $i=1,2$,

$$A_i(u_1,u_2)(t)=\int_0^1 G(t,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds\le\int_0^1\mu\varphi(s)Q(s)\,ds<M\mu\Gamma^{-1}(\alpha)(\alpha-1)=N$$

for any $(t,u_1,u_2)\in[0,1]\times\overline{B}_N\times\overline{B}_N$, where $N:=M\mu\Gamma^{-1}(\alpha)(\alpha-1)$, from which we obtain

$$\bigl\|A(u_1,u_2)\bigr\|<\bigl\|(u_1,u_2)\bigr\|,\quad(u_1,u_2)\in\partial B_N\cap(P\times P).$$

This leads to

$$(u_1,u_2)\ne\lambda A(u_1,u_2),\quad(u_1,u_2)\in\partial B_N\cap(P\times P),\ \lambda\in[0,1].$$
(3.15)

Now Lemma 2.9 implies

$$i\bigl(A,B_N\cap(P\times P),P\times P\bigr)=1.$$
(3.16)

Combining (3.14) and (3.16) gives

$$i\bigl(A,(B_R\setminus\overline{B}_N)\cap(P\times P),P\times P\bigr)=0-1=-1.$$

Therefore the operator $A$ has at least one fixed point in $(B_R\setminus\overline{B}_N)\cap(P\times P)$. Equivalently, (1.1) has at least one positive solution. This completes the proof. □

Theorem 3.2 Suppose that (H2), (H5), and (H6) hold. Then (1.1) has at least one positive solution.

Proof We first show that there exists a sufficiently large positive number $R>M\mu\Gamma^{-1}(\alpha)(\alpha-1)$ such that the following claim holds:

$$(u_1,u_2)\ne\lambda A(u_1,u_2),\quad(u_1,u_2)\in\partial B_R\cap(P\times P),\ \lambda\in[0,1].$$
(3.17)

If the claim is false, there exist $(u,v)\in\partial B_R\cap(P\times P)$ and $\lambda\in[0,1]$ such that $(u,v)=\lambda A(u,v)$. Therefore, $u\le A_1(u,v)$ and $v\le A_2(u,v)$. In view of (H5), we have

$$u(t)\le\int_0^1 G(t,s)\bigl[\xi_3\bigl(v(s)-w(s)\bigr)^2+c\bigr]\,ds\le\int_0^1 G(t,s)\xi_3v^2(s)\,ds+\int_0^1 G(t,s)\xi_3w^2(s)\,ds+c\le\int_0^1 G(t,s)\xi_3v^2(s)\,ds+c_1,$$
(3.18)

and

$$v(s)\le\int_0^1 G(s,\tau)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\,d\tau.$$
(3.19)

By (3.19), the convexity of the square function enables us to obtain

$$\begin{aligned}v^2(s)&\le\Bigl(\int_0^1\mu^{-1}\Gamma(\alpha)G(s,\tau)\,\mu\Gamma^{-1}(\alpha)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\,d\tau\Bigr)^2\\&\le\int_0^1\mu^{-1}\Gamma(\alpha)G(s,\tau)\Bigl(\mu\Gamma^{-1}(\alpha)\bigl[\xi_4\sqrt{u(\tau)-w(\tau)}+c\bigr]\Bigr)^2\,d\tau\\&\le\mu\Gamma^{-1}(\alpha)\int_0^1 G(s,\tau)\bigl[2\xi_4^2\bigl(u(\tau)-w(\tau)\bigr)+2c^2\bigr]\,d\tau\le 2\mu\Gamma^{-1}(\alpha)\xi_4^2\int_0^1 G(s,\tau)u(\tau)\,d\tau+c_6.\end{aligned}$$
(3.20)

We find from (3.18) and (3.20) that

$$u(t)\le\int_0^1 G(t,s)\xi_3\Bigl[2\mu\Gamma^{-1}(\alpha)\xi_4^2\int_0^1 G(s,\tau)u(\tau)\,d\tau+c_6\Bigr]\,ds+c_1\le 2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^2\int_0^1\!\!\int_0^1 G(t,s)G(s,\tau)u(\tau)\,d\tau\,ds+c_7.$$
(3.21)

Multiplying both sides of the above by $\varphi(t)$, integrating over $[0,1]$, and using Lemma 2.5, we obtain

$$\int_0^1 u(t)\varphi(t)\,dt\le 2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^2\kappa_2^2\int_0^1 u(t)\varphi(t)\,dt+c_8.$$
(3.22)

Noting Lemma 2.7, we obtain

$$\int_0^1\mu^{-1}k(t)\|u\|\varphi(t)\,dt\le\int_0^1 u(t)\varphi(t)\,dt\le\frac{c_8}{1-2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^2\kappa_2^2},$$
(3.23)

and hence

$$\|u\|\le\frac{\mu c_8}{\kappa_1-2\mu\Gamma^{-1}(\alpha)\xi_3\xi_4^2\kappa_1\kappa_2^2}:=N_2.$$
(3.24)

Multiplying both sides of (3.19) by $\varphi(s)$, integrating over $[0,1]$, using Lemmas 2.5 and 2.7, and noting (3.24), we obtain

$$\mu^{-1}\kappa_1\|v\|\le\int_0^1 v(t)\varphi(t)\,dt\le\kappa_2\int_0^1\varphi(t)\bigl[\xi_4\sqrt{u(t)-w(t)}+c\bigr]\,dt\le\kappa_2\int_0^1\varphi(t)\bigl[\xi_4\sqrt{N_2}+c\bigr]\,dt=\Gamma^{-1}(\alpha+2)\kappa_2\bigl(\xi_4\sqrt{N_2}+c\bigr).$$
(3.25)

Consequently,

$$\|v\|\le\mu\Gamma^{-1}(\alpha+2)\kappa_1^{-1}\kappa_2\bigl(\xi_4\sqrt{N_2}+c\bigr).$$
(3.26)

Taking $R>\max\bigl\{M\mu\Gamma^{-1}(\alpha)(\alpha-1),\ N_2+\mu\Gamma^{-1}(\alpha+2)\kappa_1^{-1}\kappa_2(\xi_4\sqrt{N_2}+c)\bigr\}$, we obtain $\|(u,v)\|=\|u\|+\|v\|<R$, which contradicts $(u,v)\in\partial B_R\cap(P\times P)$. As a result, (3.17) is true. So we have from Lemma 2.9 that

$$i\bigl(A,B_R\cap(P\times P),P\times P\bigr)=1.$$
(3.27)

On the other hand, by (H6), we have, for $i=1,2$,

$$A_i(u_1,u_2)(t_0)=\int_0^1 G(t_0,s)F_i\bigl(s,u_1(s)-w(s),u_2(s)-w(s)\bigr)\,ds\ge\int_\theta^{1-\theta}k(t_0)\varphi(s)Q(s)\,ds\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1),$$

and thus $\|A_i(u_1,u_2)\|\ge A_i(u_1,u_2)(t_0)\ge M\mu\Gamma^{-1}(\alpha)(\alpha-1)=N$ for any $(t,u_1,u_2)\in[0,1]\times\overline{B}_N\times\overline{B}_N$, where $N:=M\mu\Gamma^{-1}(\alpha)(\alpha-1)$. This yields

$$(u_1,u_2)\ne A(u_1,u_2)+\lambda(\psi,\psi),\quad(u_1,u_2)\in\partial B_N\cap(P\times P),\ \lambda\ge 0.$$

Lemma 2.8 gives

$$i\bigl(A,B_N\cap(P\times P),P\times P\bigr)=0.$$
(3.28)

Combining (3.27) and (3.28) gives

$$i\bigl(A,(B_R\setminus\overline{B}_N)\cap(P\times P),P\times P\bigr)=1-0=1.$$

Therefore the operator $A$ has at least one fixed point in $(B_R\setminus\overline{B}_N)\cap(P\times P)$. Equivalently, (1.1) has at least one positive solution. This completes the proof. □