1 Introduction

In this paper, we consider the existence of positive solutions for a singular nonlinear fractional differential system with nonlocal boundary conditions,

$$
\begin{cases}
-D_{0+}^{\alpha}x(t)=f\bigl(t,x(t),y(t)\bigr),\\
-D_{0+}^{\beta}y(t)=g\bigl(t,x(t),y(t)\bigr), \quad 0<t<1,\\
x(0)=0, \quad x(1)=\int_0^1 x(s)\,dA(s),\\
y(0)=0, \quad y(1)=\int_0^1 y(s)\,dB(s),
\end{cases}
$$
(1.1)

where $1<\alpha,\beta\le 2$, $D_{0+}^{\alpha}$ and $D_{0+}^{\beta}$ are the standard Riemann-Liouville derivatives, $\int_0^1 x(s)\,dA(s)$ and $\int_0^1 y(s)\,dB(s)$ denote Riemann-Stieltjes integrals, $A$, $B$ are functions of bounded variation, and $f,g:(0,1)\times(0,+\infty)\times(0,+\infty)\to[0,+\infty)$ are continuous and may be singular at $x=y=0$ and at $t=0,1$.

In system (1.1), the boundary conditions are nonlocal conditions given by Stieltjes integrals, that is, linear functionals on $C[0,1]$ defined by signed measures; they need not be positive functionals. In particular, if $dA(s)=dB(s)=ds$ or $h(s)\,ds$, then the BVP (1.1) reduces to an integral boundary value problem, and it also includes multi-point boundary value problems as special cases. Thus the problem with Stieltjes integral boundary conditions covers a wide variety of boundary value problems (see [1]).

Since nonlocal boundary value problems describe a class of very interesting and important phenomena arising in heat conduction, chemical engineering, underground water flow, thermo-elasticity, and plasma physics, this type of problem has attracted the attention of many researchers (see [2–14] and the references therein). In particular, based on the fixed point theory of strict set contraction operators in a cone, Feng et al. [5] investigated the existence and nonexistence of positive solutions of the following second order BVPs with integral boundary conditions in a Banach space:

$$
\begin{cases}
u''(t)+f(t,u)=\theta, \quad t\in(0,1),\\
u(0)=\int_0^1 g(t)u(t)\,dt, \quad u(1)=\theta, \qquad\text{or}\qquad
u(0)=\theta, \quad u(1)=\int_0^1 g(t)u(t)\,dt.
\end{cases}
$$
(1.2)

Subsequently, Liu et al. [6] studied a singular integral boundary value problem,

$$
\begin{cases}
u''(t)+a(t)u'(t)+b(t)u(t)+c(t)f(u)=0, \quad t\in(0,1),\\
u(0)=\int_0^1 g(s)u(s)\,ds, \quad u(1)=\int_0^1 h(s)u(s)\,ds,
\end{cases}
$$
(1.3)

where $a\in C[0,1]$, $b\in C([0,1],(-\infty,0))$, $c\in C((0,1),[0,+\infty))$, $f\in C((0,+\infty),[0,+\infty))$, and $g,h\in L^1[0,1]$ are nonnegative. Here $c(t)$ is allowed to be singular at $t=0,1$, and $f$ may be singular at $u=0$. By using the fixed point index theorem, the existence of positive solutions for the BVP (1.3) was established.

By means of a monotone iterative technique, Zhang and Han [4] established the existence and uniqueness of positive solutions for a class of higher-order conjugate-type fractional differential equations with one nonlocal term,

$$
\begin{cases}
D_{0+}^{\alpha}x(t)+f\bigl(t,x(t)\bigr)=0, \quad 0<t<1,\ n-1<\alpha\le n,\\
x^{(k)}(0)=0, \quad 0\le k\le n-2, \qquad x(1)=\int_0^1 x(s)\,dA(s),
\end{cases}
$$
(1.4)

where $\alpha\ge 2$, $D_{0+}^{\alpha}$ is the standard Riemann-Liouville derivative, $A$ is a function of bounded variation, $\int_0^1 u(s)\,dA(s)$ denotes the Riemann-Stieltjes integral of $u$ with respect to $A$, and $dA$ can be a signed measure. Recently, some work on systems of nonlinear fractional differential equations has been developed [7–9]. In [7], Ahmad and Ntouyas studied the existence and uniqueness of solutions for a system of Hadamard type fractional differential equations with integral boundary conditions

$$
\begin{cases}
D^{\alpha}u(t)=f\bigl(t,u(t),v(t)\bigr), \quad 1<t<e,\ 1<\alpha\le 2,\\
D^{\beta}v(t)=g\bigl(t,u(t),v(t)\bigr), \quad 1<t<e,\ 1<\beta\le 2,\\
u(1)=0, \quad u(e)=I^{\gamma}u(\sigma_1)=\dfrac{1}{\Gamma(\gamma)}\displaystyle\int_1^{\sigma_1}\Bigl(\log\frac{\sigma_1}{s}\Bigr)^{\gamma-1}\frac{u(s)}{s}\,ds,\\
v(1)=0, \quad v(e)=I^{\gamma}v(\sigma_2)=\dfrac{1}{\Gamma(\gamma)}\displaystyle\int_1^{\sigma_2}\Bigl(\log\frac{\sigma_2}{s}\Bigr)^{\gamma-1}\frac{v(s)}{s}\,ds,
\end{cases}
$$
(1.5)

where $\gamma>0$, $1<\sigma_1<e$, $1<\sigma_2<e$, $D^{(\cdot)}$ denotes the Hadamard fractional derivative, $I^{\gamma}$ is the Hadamard fractional integral of order $\gamma$, and $f,g:[1,e]\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are continuous functions. The existence of solutions for the system (1.5) is derived from the Leray-Schauder alternative, whereas the uniqueness of the solution is established by the Banach contraction principle. More recently, Ahmad et al. [8] studied the existence of solutions for a system of coupled hybrid fractional differential equations with Dirichlet boundary conditions. By using standard tools of fixed point theory, existence and uniqueness results were established.

Motivated by the above work, we consider the existence of positive solutions for the singular fractional differential system with nonlocal Stieltjes integral boundary conditions when $f$, $g$ may be singular at $t=0,1$ and at $x=y=0$. It is well known from linear elastic fracture mechanics that the stress near the crack tip exhibits a power singularity of $r^{-0.5}$ [1], where $r$ is the distance measured from the crack tip, and this classical singularity also arises in nonlocal nonlinear problems. Because of the singularity of $f$, $g$ at $x=y=0$, however, we cannot handle the system (1.1) as in [4, 5]. In this work we therefore devote ourselves to constructing upper and lower solutions of the system (1.1) and, by means of the Schauder fixed point theorem, to establishing criteria for the existence of positive solutions of the system (1.1). To the best of our knowledge, no work has been done on singular fractional differential systems with Riemann-Stieltjes integral boundary conditions, and this work aims to contribute to this field. Our work also extends the results of [4–6, 9] to fractional systems in which $f$, $g$ can be singular at $t=0,1$ and $x=y=0$.

2 Preliminaries and lemmas

The basic space used in this paper is $E=C([0,1];\mathbb{R})\times C([0,1];\mathbb{R})$, where $\mathbb{R}$ denotes the set of real numbers. Clearly, $E$ is a Banach space when endowed with the norm

$$\|(x,y)\|:=\|x\|+\|y\|, \qquad \|x\|=\max_{t\in[0,1]}|x(t)|, \qquad \|y\|=\max_{t\in[0,1]}|y(t)|,$$

for any $(x,y)\in E$. By a positive solution of problem (1.1) we mean a pair of functions $(x,y)\in E$ satisfying (1.1) with $x(t)\ge 0$, $y(t)\ge 0$ for all $t\in[0,1]$ and $(x,y)\neq(0,0)$.

We now begin our work based on the theory of fractional calculus; for the definitions and semigroup properties of the Riemann-Liouville fractional calculus, we refer the reader to [15–17]. In what follows, we give the definitions of the lower and upper solutions of the system (1.1).

Definition 2.1 A pair of functions $(\phi_1(t),\psi_1(t))\in E$ is called a lower solution of the system (1.1) if it satisfies

$$
\begin{cases}
-D_{0+}^{\alpha}\phi_1(t)\le f\bigl(t,\phi_1(t),\psi_1(t)\bigr),\\
-D_{0+}^{\beta}\psi_1(t)\le g\bigl(t,\phi_1(t),\psi_1(t)\bigr), \quad 0<t<1,\\
\phi_1(0)\le 0, \quad \phi_1(1)\le\int_0^1\phi_1(s)\,dA(s),\\
\psi_1(0)\le 0, \quad \psi_1(1)\le\int_0^1\psi_1(s)\,dB(s).
\end{cases}
$$

Definition 2.2 A pair of functions $(\phi_2(t),\psi_2(t))\in E$ is called an upper solution of the system (1.1) if it satisfies

$$
\begin{cases}
-D_{0+}^{\alpha}\phi_2(t)\ge f\bigl(t,\phi_2(t),\psi_2(t)\bigr),\\
-D_{0+}^{\beta}\psi_2(t)\ge g\bigl(t,\phi_2(t),\psi_2(t)\bigr), \quad 0<t<1,\\
\phi_2(0)\ge 0, \quad \phi_2(1)\ge\int_0^1\phi_2(s)\,dA(s),\\
\psi_2(0)\ge 0, \quad \psi_2(1)\ge\int_0^1\psi_2(s)\,dB(s).
\end{cases}
$$

Remark 2.1 It is normally difficult to find lower and upper solutions of the system (1.1). In Theorem 3.1 of this paper, we give a general strategy for constructing them through a series of integral calculations starting from the initial pair $(t^{\alpha-1},t^{\beta-1})$.

Next let

$$
G_{\alpha}(t,s)=\frac{1}{\Gamma(\alpha)}
\begin{cases}
[t(1-s)]^{\alpha-1}, & 0\le t\le s\le 1,\\
[t(1-s)]^{\alpha-1}-(t-s)^{\alpha-1}, & 0\le s\le t\le 1,
\end{cases}
$$
(2.1)
$$
G_{\beta}(t,s)=\frac{1}{\Gamma(\beta)}
\begin{cases}
[t(1-s)]^{\beta-1}, & 0\le t\le s\le 1,\\
[t(1-s)]^{\beta-1}-(t-s)^{\beta-1}, & 0\le s\le t\le 1,
\end{cases}
$$
(2.2)

and define

$$G_A(s)=\int_0^1 G_{\alpha}(t,s)\,dA(t), \qquad G_B(s)=\int_0^1 G_{\beta}(t,s)\,dB(t).$$
(2.3)

Following the strategy of [4], we can easily obtain the Green functions of the corresponding linear boundary value problems for the system (1.1).

Lemma 2.1 Given $h\in L^1(0,1)$ and $1<\alpha,\beta\le 2$, the boundary value problems

$$
\begin{cases}
-D_{0+}^{\alpha}x(t)=h(t), \quad 0<t<1,\\
x(0)=0, \quad x(1)=\int_0^1 x(s)\,dA(s),
\end{cases}
\qquad
\begin{cases}
-D_{0+}^{\beta}y(t)=h(t), \quad 0<t<1,\\
y(0)=0, \quad y(1)=\int_0^1 y(s)\,dB(s),
\end{cases}
$$
(2.4)

have the unique solutions

$$x(t)=\int_0^1 H_{\alpha}(t,s)h(s)\,ds, \qquad y(t)=\int_0^1 H_{\beta}(t,s)h(s)\,ds,$$
(2.5)

where $H_{\alpha}(t,s)$ and $H_{\beta}(t,s)$ are the Green functions of the respective BVPs in (2.4), and

$$H_{\alpha}(t,s)=\frac{t^{\alpha-1}}{1-A}G_A(s)+G_{\alpha}(t,s), \qquad H_{\beta}(t,s)=\frac{t^{\beta-1}}{1-B}G_B(s)+G_{\beta}(t,s),$$
(2.6)

where

$$A=\int_0^1 t^{\alpha-1}\,dA(t), \qquad B=\int_0^1 t^{\beta-1}\,dB(t).$$

Lemma 2.2 Let $0\le A,B<1$ and $G_A(s),G_B(s)\ge 0$ for $s\in[0,1]$. Then the Green functions defined by (2.6) satisfy:

(1) $H_{\alpha}(t,s), H_{\beta}(t,s)>0$ for all $t,s\in(0,1)$.

(2) There exist two constants $\lambda$, $\mu$ such that

$$\frac{t^{\alpha-1}}{1-A}G_A(s)\le H_{\alpha}(t,s)\le\lambda t^{\alpha-1}, \qquad \frac{t^{\beta-1}}{1-B}G_B(s)\le H_{\beta}(t,s)\le\mu t^{\beta-1}, \qquad s,t\in[0,1].$$
    (2.7)

Proof (1) is obvious. We only prove the first chain of inequalities in (2.7); the proof of the second is similar.

Since $G_{\alpha}(t,s)\ge 0$ for any $s,t\in[0,1]$, we have

$$\frac{t^{\alpha-1}}{1-A}G_A(s)\le H_{\alpha}(t,s), \qquad s,t\in[0,1].$$

On the other hand, from (2.1), obviously,

$$G_{\alpha}(t,s)\le\frac{1}{\Gamma(\alpha)}t^{\alpha-1}.$$

Take

$$\lambda=\frac{1}{\Gamma(\alpha)}+\max_{0\le s\le 1}\frac{G_A(s)}{1-A},$$

then we have

$$H_{\alpha}(t,s)\le\lambda t^{\alpha-1}.$$

The proof is completed. □
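As a numerical sanity check (not part of the proof), the following Python sketch assembles $H_{\alpha}$ for one concrete step measure $dA$ (the one used later in Section 5) and tests the two-sided bound (2.7) on a grid; the choice of measure, grid, and tolerance is purely illustrative.

```python
# Sanity check of the bounds (2.7); illustrative assumptions only:
# alpha = 3/2 and dA has jumps 1/2 at t = 1/3 and 1/2 at t = 2/3 (as in Section 5).
import math

import numpy as np

alpha = 1.5
jumps = [(1/3, 0.5), (2/3, 0.5)]          # (location, jump size) of dA

def G(t, s, a=alpha):
    """Green function G_alpha(t, s) from (2.1)."""
    base = (t * (1 - s)) ** (a - 1) / math.gamma(a)
    return base - (t - s) ** (a - 1) / math.gamma(a) if s <= t else base

def G_A(s):
    """G_A(s) = int_0^1 G_alpha(t, s) dA(t) for the step measure dA."""
    return sum(w * G(t, s) for t, w in jumps)

A = sum(w * t ** (alpha - 1) for t, w in jumps)   # A = int_0^1 t^{alpha-1} dA(t)

def H(t, s):
    """H_alpha(t, s) from (2.6)."""
    return t ** (alpha - 1) / (1 - A) * G_A(s) + G(t, s)

# lambda as in the proof: 1/Gamma(alpha) + max_s G_A(s)/(1 - A), max taken on a fine grid
lam = 1 / math.gamma(alpha) + max(G_A(s) for s in np.linspace(0, 1, 1001)) / (1 - A)

ok = True
for t in np.linspace(0, 1, 201):
    for s in np.linspace(0, 1, 201):
        lower = t ** (alpha - 1) / (1 - A) * G_A(s)
        upper = lam * t ** (alpha - 1)
        ok &= lower - 1e-12 <= H(t, s) <= upper + 1e-12
print("A =", A, " lambda =", lam, " bounds (2.7) hold on grid:", ok)
```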

Lemmas 2.1 and 2.2 lead to the following maximum principle.

Lemma 2.3 If $(x,y)\in E$ satisfies

$$x(0)=0, \quad x(1)=\int_0^1 x(s)\,dA(s), \qquad y(0)=0, \quad y(1)=\int_0^1 y(s)\,dB(s),$$

and $-D_{0+}^{\alpha}x(t)\ge 0$, $-D_{0+}^{\beta}y(t)\ge 0$ for any $t\in(0,1)$, then

$$x(t)\ge 0, \qquad y(t)\ge 0, \qquad t\in[0,1].$$

Lemma 2.4 (Schauder fixed point theorem)

Let T be a continuous and compact mapping of a Banach space E into itself, such that the set

$$\{x\in E : x=\sigma Tx \text{ for some } 0\le\sigma\le 1\}$$

is bounded. Then T has a fixed point.

3 Main results

We make the following assumptions throughout this paper:

(H0) $A$ and $B$ are functions of bounded variation satisfying $G_A(s),G_B(s)\ge 0$ for $s\in[0,1]$ and $0\le A,B<1$;

(H1) $f,g\in C\bigl((0,1)\times(0,+\infty)\times(0,+\infty),[0,+\infty)\bigr)$ are decreasing in the second and third variables and satisfy

$$f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr),\ g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\in L^1(0,1);$$

(H2) for all $r\in(0,1)$, there exist constants $0<\epsilon,\sigma<1$ such that, for any $(t,x,y)\in(0,1)\times(0,+\infty)\times(0,+\infty)$,

$$f(t,rx,ry)\le r^{-\epsilon}f(t,x,y), \qquad g(t,rx,ry)\le r^{-\sigma}g(t,x,y).$$

Remark 3.1 The conditions (H1)-(H2) imply that $f$, $g$ have a power-type singularity at $x=y=0$. Typical examples are $f(t,x,y)=g(t,x,y)=\sum_{i=1}^{m}\bigl(x^{-\lambda_i}+y^{-\mu_i}\bigr)$ with $0<\lambda_i<1$, $0<\mu_i<1$, $\lambda_i(\alpha-1)<1$, $\mu_i(\beta-1)<1$, $i=1,2,\dots,m$, for which one may take $\epsilon=\sigma=\max_{1\le i\le m}\{\lambda_i,\mu_i\}$.
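As a quick illustration of (H2) (illustrative only, with hypothetical exponents), the following Python sketch samples random points and checks the inequality for the model nonlinearity $f(t,x,y)=x^{-\lambda}+y^{-\mu}$ with $\epsilon=\max\{\lambda,\mu\}$:

```python
# Random sampling check of (H2) for f(t, x, y) = x**(-lam) + y**(-mu);
# the exponents below are hypothetical and only illustrate the condition.
import random

lam, mu = 0.3, 0.7          # assumed exponents with 0 < lam, mu < 1
eps = max(lam, mu)

def f(t, x, y):
    return x ** (-lam) + y ** (-mu)

random.seed(0)
violations = 0
for _ in range(100_000):
    t = random.uniform(0.01, 0.99)
    x, y = random.uniform(0.01, 10.0), random.uniform(0.01, 10.0)
    r = random.uniform(0.01, 0.99)
    if f(t, r * x, r * y) > r ** (-eps) * f(t, x, y) + 1e-12:
        violations += 1
print("violations of f(t,rx,ry) <= r**(-eps) f(t,x,y):", violations)  # expect 0
```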

Theorem 3.1 Suppose (H0)-(H2) hold. Then the system (1.1) has at least one positive solution $(x^*,y^*)$, which satisfies

$$\bigl(L^{-1}t^{\alpha-1}, L^{-1}t^{\beta-1}\bigr)\le(x^*,y^*)\le\bigl(Lt^{\alpha-1}, Lt^{\beta-1}\bigr),$$

where

$$
L=\max\Biggl\{
\Bigl[\lambda\int_0^1 f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds\Bigr]^{\frac{1}{1-\epsilon}},\
\Bigl[\mu\int_0^1 g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds\Bigr]^{\frac{1}{1-\sigma}},\
\Bigl[\frac{1-A}{\int_0^1 G_A(s)f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds}\Bigr]^{\frac{1}{1-\epsilon}},\
\Bigl[\frac{1-B}{\int_0^1 G_B(s)g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds}\Bigr]^{\frac{1}{1-\sigma}},\
1\Biggr\}.
$$

In particular, if $L=1$, then $(t^{\alpha-1},t^{\beta-1})$ is a positive solution of the system (1.1).
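To make the role of the constant $L$ concrete, the following Python sketch evaluates the four quantities appearing in its definition by numerical quadrature. All concrete inputs ($\lambda$, $\mu$, $A$, $B$, the placeholder functions standing in for $G_A$, $G_B$, the exponents $\epsilon$, $\sigma$, and the sample nonlinearities) are hypothetical; the sketch only illustrates how $L$ could be computed for a given problem and is not data from the paper.

```python
# Sketch of evaluating L in Theorem 3.1 under placeholder assumptions.
from scipy.integrate import quad

alpha, beta = 1.5, 4/3
eps = sigma = 0.5                      # growth exponents from (H2), assumed
lam, mu = 2.0, 2.0                     # constants from Lemma 2.2, assumed
A, B = 0.7, 0.68                       # A = int t^{alpha-1} dA, B = int t^{beta-1} dB, assumed
G_A = lambda s: 0.3 * s ** 0.5 * (1 - s)   # placeholder for G_A(s) >= 0
G_B = lambda s: 0.3 * s ** 0.5 * (1 - s)   # placeholder for G_B(s) >= 0

def f(t, x, y):                        # sample nonlinearity satisfying (H1)-(H2)
    return x ** (-0.4) + y ** (-0.5)

g = f

I_f = quad(lambda s: f(s, s ** (alpha - 1), s ** (beta - 1)), 0, 1)[0]
I_g = quad(lambda s: g(s, s ** (alpha - 1), s ** (beta - 1)), 0, 1)[0]
I_fA = quad(lambda s: G_A(s) * f(s, s ** (alpha - 1), s ** (beta - 1)), 0, 1)[0]
I_gB = quad(lambda s: G_B(s) * g(s, s ** (alpha - 1), s ** (beta - 1)), 0, 1)[0]

L = max((lam * I_f) ** (1 / (1 - eps)),
        (mu * I_g) ** (1 / (1 - sigma)),
        ((1 - A) / I_fA) ** (1 / (1 - eps)),
        ((1 - B) / I_gB) ** (1 / (1 - sigma)),
        1.0)
print("L =", L)
```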

Proof Define a cone

$$P=\bigl\{(x,y)\in E : L^{-1}t^{\alpha-1}\le x(t)\le Lt^{\alpha-1},\ L^{-1}t^{\beta-1}\le y(t)\le Lt^{\beta-1},\ t\in[0,1]\bigr\},$$
(3.1)

then $P$ is nonempty since $(t^{\alpha-1},t^{\beta-1})\in P$. Now define an operator $T$ by

$$T(x,y)(t)=\bigl(T_1(x,y)(t),\,T_2(x,y)(t)\bigr) \quad \text{for any } (x,y)\in P,$$
(3.2)

where

$$T_1(x,y)(t)=\int_0^1 H_{\alpha}(t,s)f\bigl(s,x(s),y(s)\bigr)\,ds, \qquad T_2(x,y)(t)=\int_0^1 H_{\beta}(t,s)g\bigl(s,x(s),y(s)\bigr)\,ds.$$
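For intuition, a minimal Python sketch of how $T_1(x,y)(t)$ could be approximated by numerical quadrature is given below; the kernel `H_alpha`, the nonlinearity `f`, and the pair `(x, y)` are toy placeholders, not the objects constructed in this paper.

```python
# Minimal sketch of evaluating T_1(x, y)(t) by quadrature; toy placeholder data.
from scipy.integrate import quad

def T1(H_alpha, f, x, y, t):
    """Approximate T_1(x,y)(t) = int_0^1 H_alpha(t,s) f(s, x(s), y(s)) ds."""
    return quad(lambda s: H_alpha(t, s) * f(s, x(s), y(s)), 0, 1)[0]

H_alpha = lambda t, s: t ** 0.5 * (1 - s) ** 0.5        # placeholder kernel
f = lambda t, x, y: x ** (-0.3) + y ** (-0.3)           # sample singular nonlinearity
x = lambda s: s ** 0.5                                   # candidate pair from the cone P
y = lambda s: s ** (1/3)

print(T1(H_alpha, f, x, y, 0.5))
```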

We claim that $T$ is well defined and $T(P)\subseteq P$.

In fact, for any $(x,y)\in P$, we have

$$L^{-1}t^{\alpha-1}\le x(t)\le Lt^{\alpha-1}, \qquad L^{-1}t^{\beta-1}\le y(t)\le Lt^{\beta-1}, \qquad t\in[0,1].$$

So from Lemma 2.2 and (H1)-(H2), one gets

$$
\begin{aligned}
T_1(x,y)(t)&=\int_0^1 H_{\alpha}(t,s)f\bigl(s,x(s),y(s)\bigr)\,ds
\le\int_0^1 \lambda t^{\alpha-1} f\bigl(s,L^{-1}s^{\alpha-1},L^{-1}s^{\beta-1}\bigr)\,ds\\
&\le\lambda L^{\epsilon}t^{\alpha-1}\int_0^1 f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds
\le Lt^{\alpha-1}
\end{aligned}
$$
(3.3)

and

$$
\begin{aligned}
T_2(x,y)(t)&=\int_0^1 H_{\beta}(t,s)g\bigl(s,x(s),y(s)\bigr)\,ds
\le\int_0^1 \mu t^{\beta-1} g\bigl(s,L^{-1}s^{\alpha-1},L^{-1}s^{\beta-1}\bigr)\,ds\\
&\le\mu L^{\sigma}t^{\beta-1}\int_0^1 g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds
\le Lt^{\beta-1}.
\end{aligned}
$$
(3.4)

On the other hand, by Lemma 2.2 and (H1)-(H2), we also have

$$
\begin{aligned}
T_1(x,y)(t)&\ge\frac{t^{\alpha-1}}{1-A}\int_0^1 G_A(s)f\bigl(s,Ls^{\alpha-1},Ls^{\beta-1}\bigr)\,ds
\ge L^{-\epsilon}\,\frac{t^{\alpha-1}}{1-A}\int_0^1 G_A(s)f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds\\
&\ge L^{-\epsilon}t^{\alpha-1}\,L^{\epsilon-1}=L^{-1}t^{\alpha-1},
\end{aligned}
$$
(3.5)

and

$$
\begin{aligned}
T_2(x,y)(t)&\ge\frac{t^{\beta-1}}{1-B}\int_0^1 G_B(s)g\bigl(s,Ls^{\alpha-1},Ls^{\beta-1}\bigr)\,ds
\ge L^{-\sigma}\,\frac{t^{\beta-1}}{1-B}\int_0^1 G_B(s)g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds\\
&\ge L^{-\sigma}t^{\beta-1}\,L^{\sigma-1}=L^{-1}t^{\beta-1}.
\end{aligned}
$$
(3.6)

Thus it follows from (3.3)-(3.6) that $T$ is well defined and $T(P)\subseteq P$. Moreover, by Lemma 2.1, we have

$$
\begin{cases}
-D_{0+}^{\alpha}T_1(x,y)(t)=f\bigl(t,x(t),y(t)\bigr),\\
-D_{0+}^{\beta}T_2(x,y)(t)=g\bigl(t,x(t),y(t)\bigr), \quad 0<t<1,\\
T_1(x,y)(0)=0, \quad T_1(x,y)(1)=\int_0^1 T_1(x,y)(s)\,dA(s),\\
T_2(x,y)(0)=0, \quad T_2(x,y)(1)=\int_0^1 T_2(x,y)(s)\,dB(s).
\end{cases}
$$
(3.7)

Now take

$$\underline{\varphi}(t)=\min\bigl\{t^{\alpha-1},\,T_1\bigl(t^{\alpha-1},t^{\beta-1}\bigr)(t)\bigr\}, \qquad \overline{\varphi}(t)=\max\bigl\{t^{\alpha-1},\,T_1\bigl(t^{\alpha-1},t^{\beta-1}\bigr)(t)\bigr\},$$
(3.8)
$$\underline{\psi}(t)=\min\bigl\{t^{\beta-1},\,T_2\bigl(t^{\alpha-1},t^{\beta-1}\bigr)(t)\bigr\}, \qquad \overline{\psi}(t)=\max\bigl\{t^{\beta-1},\,T_2\bigl(t^{\alpha-1},t^{\beta-1}\bigr)(t)\bigr\},$$
(3.9)

Since $(t^{\alpha-1},t^{\beta-1})\in P$ and $\bigl(T_1(t^{\alpha-1},t^{\beta-1}),T_2(t^{\alpha-1},t^{\beta-1})\bigr)\in P$, we have

$$(\underline{\varphi},\underline{\psi})\in P, \qquad (\overline{\varphi},\overline{\psi})\in P, \qquad \text{and} \qquad \underline{\varphi}\le t^{\alpha-1}\le\overline{\varphi}, \quad \underline{\psi}\le t^{\beta-1}\le\overline{\psi}.$$
(3.10)

Let

$$(\varphi_1,\psi_1)=\bigl(T_1(\underline{\varphi},\underline{\psi}),\,T_2(\underline{\varphi},\underline{\psi})\bigr), \qquad (\varphi_2,\psi_2)=\bigl(T_1(\overline{\varphi},\overline{\psi}),\,T_2(\overline{\varphi},\overline{\psi})\bigr),$$
(3.11)

then by (3.8)-(3.11) and (H2), we have

$$(\varphi_2,\psi_2)=\bigl(T_1(\overline{\varphi},\overline{\psi}),\,T_2(\overline{\varphi},\overline{\psi})\bigr)\le\bigl(T_1(t^{\alpha-1},t^{\beta-1}),\,T_2(t^{\alpha-1},t^{\beta-1})\bigr)\le\bigl(T_1(\underline{\varphi},\underline{\psi}),\,T_2(\underline{\varphi},\underline{\psi})\bigr)=(\varphi_1,\psi_1),$$
(3.12)
$$(\varphi_2,\psi_2)\le\bigl(T_1(t^{\alpha-1},t^{\beta-1}),\,T_2(t^{\alpha-1},t^{\beta-1})\bigr)\le(\overline{\varphi},\overline{\psi}), \qquad (\varphi_1,\psi_1)\ge\bigl(T_1(t^{\alpha-1},t^{\beta-1}),\,T_2(t^{\alpha-1},t^{\beta-1})\bigr)\ge(\underline{\varphi},\underline{\psi}).$$
(3.13)

Consequently, it follows from (3.7) and (3.10)-(3.13) that

$$
\begin{aligned}
D_{0+}^{\alpha}\varphi_1(t)+f\bigl(t,\varphi_1(t),\psi_1(t)\bigr)
&=D_{0+}^{\alpha}T_1(\underline{\varphi},\underline{\psi})(t)+f\bigl(t,\varphi_1(t),\psi_1(t)\bigr)\\
&=-f\bigl(t,\underline{\varphi}(t),\underline{\psi}(t)\bigr)+f\bigl(t,\varphi_1(t),\psi_1(t)\bigr)
\le-f\bigl(t,\underline{\varphi}(t),\underline{\psi}(t)\bigr)+f\bigl(t,\underline{\varphi}(t),\underline{\psi}(t)\bigr)=0,\\
\varphi_1(0)&=0, \qquad \varphi_1(1)=\int_0^1\varphi_1(s)\,dA(s),
\end{aligned}
$$
(3.14)
$$
\begin{aligned}
D_{0+}^{\beta}\psi_1(t)+g\bigl(t,\varphi_1(t),\psi_1(t)\bigr)
&=D_{0+}^{\beta}T_2(\underline{\varphi},\underline{\psi})(t)+g\bigl(t,\varphi_1(t),\psi_1(t)\bigr)\\
&=-g\bigl(t,\underline{\varphi}(t),\underline{\psi}(t)\bigr)+g\bigl(t,\varphi_1(t),\psi_1(t)\bigr)
\le-g\bigl(t,\underline{\varphi}(t),\underline{\psi}(t)\bigr)+g\bigl(t,\underline{\varphi}(t),\underline{\psi}(t)\bigr)=0,\\
\psi_1(0)&=0, \qquad \psi_1(1)=\int_0^1\psi_1(s)\,dB(s),
\end{aligned}
$$
(3.15)

and

$$
\begin{aligned}
D_{0+}^{\alpha}\varphi_2(t)+f\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
&=D_{0+}^{\alpha}T_1(\overline{\varphi},\overline{\psi})(t)+f\bigl(t,\varphi_2(t),\psi_2(t)\bigr)\\
&=-f\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+f\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
\ge-f\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+f\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)=0,\\
\varphi_2(0)&=0, \qquad \varphi_2(1)=\int_0^1\varphi_2(s)\,dA(s),
\end{aligned}
$$
(3.16)
$$
\begin{aligned}
D_{0+}^{\beta}\psi_2(t)+g\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
&=D_{0+}^{\beta}T_2(\overline{\varphi},\overline{\psi})(t)+g\bigl(t,\varphi_2(t),\psi_2(t)\bigr)\\
&=-g\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+g\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
\ge-g\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+g\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)=0,\\
\psi_2(0)&=0, \qquad \psi_2(1)=\int_0^1\psi_2(s)\,dB(s).
\end{aligned}
$$
(3.17)

It follows from (3.12) and (3.14)-(3.17) that $(\varphi_2,\psi_2)$ and $(\varphi_1,\psi_1)$ are, respectively, lower and upper solutions of the system (1.1), and $(\varphi_2,\psi_2),(\varphi_1,\psi_1)\in P$.

Define the functions $\tilde F$, $\tilde G$ and the operator $\tilde T$ on $E$ by

$$
\tilde F(t,x,y)=
\begin{cases}
f\bigl(t,\varphi_2(t),\psi_2(t)\bigr), & (x,y)<(\varphi_2,\psi_2),\\
f(t,x,y), & (\varphi_2,\psi_2)\le(x,y)\le(\varphi_1,\psi_1),\\
f\bigl(t,\varphi_1(t),\psi_1(t)\bigr), & (x,y)>(\varphi_1,\psi_1),
\end{cases}
$$
(3.19)
$$
\tilde G(t,x,y)=
\begin{cases}
g\bigl(t,\varphi_2(t),\psi_2(t)\bigr), & (x,y)<(\varphi_2,\psi_2),\\
g(t,x,y), & (\varphi_2,\psi_2)\le(x,y)\le(\varphi_1,\psi_1),\\
g\bigl(t,\varphi_1(t),\psi_1(t)\bigr), & (x,y)>(\varphi_1,\psi_1),
\end{cases}
$$
(3.20)

and $\tilde T(x,y)(t)=\bigl(\tilde T_1(x,y)(t),\,\tilde T_2(x,y)(t)\bigr)$, where

$$\tilde T_1(x,y)(t)=\int_0^1 H_{\alpha}(t,s)\,\tilde F\bigl(s,x(s),y(s)\bigr)\,ds, \qquad \tilde T_2(x,y)(t)=\int_0^1 H_{\beta}(t,s)\,\tilde G\bigl(s,x(s),y(s)\bigr)\,ds.$$
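A small Python sketch of the truncation idea behind (3.19)-(3.20) is given below. It realizes the modification as componentwise clamping of $(x,y)$ into the order interval, which agrees with the three cases above whenever $(x,y)$ is comparable with $(\varphi_2,\psi_2)$ and $(\varphi_1,\psi_1)$; all concrete functions in it are placeholders, not the paper's constructions.

```python
# Componentwise clamping variant of the truncation (3.19); placeholder data only.

def F_tilde(f, phi2, psi2, phi1, psi1, t, x, y):
    # clamp the arguments into [phi2(t), phi1(t)] x [psi2(t), psi1(t)]
    xc = min(max(x, phi2(t)), phi1(t))
    yc = min(max(y, psi2(t)), psi1(t))
    return f(t, xc, yc)

f = lambda t, x, y: x ** (-0.3) + y ** (-0.3)   # sample singular nonlinearity
phi2 = lambda t: 0.5 * t ** 0.5                  # lower pair (placeholder)
psi2 = lambda t: 0.5 * t ** (1/3)
phi1 = lambda t: 2.0 * t ** 0.5                  # upper pair (placeholder)
psi1 = lambda t: 2.0 * t ** (1/3)

print(F_tilde(f, phi2, psi2, phi1, psi1, 0.5, 10.0, 0.01))
```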

It follows from the above assumptions that $\tilde F,\tilde G:(0,1)\times[0,+\infty)\times[0,+\infty)\to[0,+\infty)$ are continuous. Consider the following boundary value problem:

$$
\begin{cases}
-D_{0+}^{\alpha}x(t)=\tilde F\bigl(t,x(t),y(t)\bigr),\\
-D_{0+}^{\beta}y(t)=\tilde G\bigl(t,x(t),y(t)\bigr), \quad 0<t<1,\\
x(0)=0, \quad x(1)=\int_0^1 x(s)\,dA(s),\\
y(0)=0, \quad y(1)=\int_0^1 y(s)\,dB(s).
\end{cases}
$$
(3.21)

Obviously, a fixed point of the operator $\tilde T$ is a solution of the BVP (3.21).

For all $(x,y)\in E$, by (3.19)-(3.20), we have

$$
\begin{aligned}
\tilde T_1(x,y)(t)&\le\int_0^1 \lambda\,\tilde F\bigl(s,x(s),y(s)\bigr)\,ds
\le\lambda\int_0^1 f\bigl(s,\varphi_2(s),\psi_2(s)\bigr)\,ds
\le\lambda\int_0^1 f\bigl(s,L^{-1}s^{\alpha-1},L^{-1}s^{\beta-1}\bigr)\,ds\\
&\le\lambda L^{\epsilon}\int_0^1 f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds<+\infty,\\
\tilde T_2(x,y)(t)&\le\int_0^1 \mu\,\tilde G\bigl(s,x(s),y(s)\bigr)\,ds
\le\mu\int_0^1 g\bigl(s,L^{-1}s^{\alpha-1},L^{-1}s^{\beta-1}\bigr)\,ds
\le\mu L^{\sigma}\int_0^1 g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)\,ds<+\infty.
\end{aligned}
$$

So $\|\tilde T(x,y)\|=\|\tilde T_1(x,y)\|+\|\tilde T_2(x,y)\|<+\infty$, which implies that $\tilde T$ is uniformly bounded. In addition, it follows from the continuity of $\tilde F$, $\tilde G$, the uniform continuity of $H_{\alpha}$, $H_{\beta}$, and (H1) that $\tilde T:E\to E$ is continuous.

Let $\Omega\subset E$ be bounded. By a standard argument and the Arzelà-Ascoli theorem, $\tilde T(\Omega)$ is equicontinuous, so $\tilde T:E\to E$ is completely continuous. By the Schauder fixed point theorem, $\tilde T$ has at least one fixed point $(x^*,y^*)$, i.e., $(x^*,y^*)=\tilde T(x^*,y^*)$.

Now we prove

$$\bigl(\varphi_2(t),\psi_2(t)\bigr)\le(x^*,y^*)\le\bigl(\varphi_1(t),\psi_1(t)\bigr), \qquad t\in[0,1].$$
(3.22)

We first prove $(x^*,y^*)\le(\varphi_1(t),\psi_1(t))$. Otherwise, suppose $(x^*,y^*)>(\varphi_1(t),\psi_1(t))$. According to the definition of $\tilde F$, $\tilde G$, we have

$$-D_{0+}^{\alpha}x^*(t)=\tilde F\bigl(t,x^*(t),y^*(t)\bigr)=f\bigl(t,\varphi_1(t),\psi_1(t)\bigr), \qquad -D_{0+}^{\beta}y^*(t)=\tilde G\bigl(t,x^*(t),y^*(t)\bigr)=g\bigl(t,\varphi_1(t),\psi_1(t)\bigr).$$
(3.23)

On the other hand, as $(\varphi_1(t),\psi_1(t))$ is an upper solution of (1.1), we have

$$-D_{0+}^{\alpha}\varphi_1(t)\ge f\bigl(t,\varphi_1(t),\psi_1(t)\bigr), \qquad -D_{0+}^{\beta}\psi_1(t)\ge g\bigl(t,\varphi_1(t),\psi_1(t)\bigr).$$
(3.24)

Let $z(t)=\varphi_1(t)-x^*(t)$ and $w(t)=\psi_1(t)-y^*(t)$; then (3.23)-(3.24) imply that

$$-D_{0+}^{\alpha}z(t)=-D_{0+}^{\alpha}\varphi_1(t)+D_{0+}^{\alpha}x^*(t)\ge 0, \qquad -D_{0+}^{\beta}w(t)=-D_{0+}^{\beta}\psi_1(t)+D_{0+}^{\beta}y^*(t)\ge 0.$$

Moreover, since $(\varphi_1(t),\psi_1(t))$ is an upper solution of the BVP (1.1) and $(x^*,y^*)$ is a fixed point of $\tilde T$, we know

$$z(0)=0, \quad z(1)=\int_0^1 z(s)\,dA(s), \qquad w(0)=0, \quad w(1)=\int_0^1 w(s)\,dB(s).$$

It follows from Lemma 2.3 that

$$z(t)\ge 0, \qquad w(t)\ge 0,$$

i.e., $(x^*(t),y^*(t))\le(\varphi_1(t),\psi_1(t))$ on $[0,1]$, which contradicts $(x^*,y^*)>(\varphi_1(t),\psi_1(t))$. Thus $(x^*(t),y^*(t))\le(\varphi_1(t),\psi_1(t))$ on $[0,1]$. In the same way, $(x^*(t),y^*(t))\ge(\varphi_2(t),\psi_2(t))$ on $[0,1]$. Consequently, (3.22) holds, and $(x^*(t),y^*(t))$ is a positive solution of the problem (1.1).

It follows from $(\varphi_2(t),\psi_2(t)),(\varphi_1(t),\psi_1(t))\in P$ and (3.22) that

$$\bigl(L^{-1}t^{\alpha-1}, L^{-1}t^{\beta-1}\bigr)\le(x^*,y^*)\le\bigl(Lt^{\alpha-1}, Lt^{\beta-1}\bigr).$$

The proof is completed. □

4 Further results

In this section we discuss some special cases of the system (1.1) and obtain further results. We first consider the case in which f, g have no singularity at $x=y=0$ but may still be singular at $t=0,1$.

Theorem 4.1 Suppose (H0) holds, and f, g satisfy:

(H1) $f,g\in C\bigl((0,1)\times[0,+\infty)\times[0,+\infty),[0,+\infty)\bigr)$ are decreasing in the second and third variables and satisfy

$$0<\int_0^1 f(s,0,0)\,ds<\infty, \qquad 0<\int_0^1 g(s,0,0)\,ds<\infty.$$

Then the system (1.1) has at least one positive solution $(x^*,y^*)$, which satisfies

$$(0,0)\le(x^*,y^*)\le\bigl(\tilde Lt^{\alpha-1},\tilde Lt^{\beta-1}\bigr),$$

where

$$\tilde L=\max\Bigl\{\lambda\int_0^1 f(s,0,0)\,ds,\ \mu\int_0^1 g(s,0,0)\,ds\Bigr\}.$$

Proof Similar to the proof of Theorem 3.1, we take the cone

$$P_1=\bigl\{(x,y)\in E : x(t)\ge 0,\ y(t)\ge 0,\ t\in[0,1]\bigr\}.$$

Clearly, $T:P_1\to P_1$ is well defined.

Now take

$$\underline{\varphi}(t)=0, \qquad \overline{\varphi}(t)=T_1(0,0)(t),$$
(4.1)
$$\underline{\psi}(t)=0, \qquad \overline{\psi}(t)=T_2(0,0)(t);$$
(4.2)

we have

$$(\underline{\varphi},\underline{\psi})\in P_1, \qquad (\overline{\varphi},\overline{\psi})\in P_1, \qquad \text{and} \qquad \underline{\varphi}=0\le\overline{\varphi}, \quad \underline{\psi}=0\le\overline{\psi}.$$
(4.3)

Let

$$(\varphi_1,\psi_1)=\bigl(T_1(\underline{\varphi},\underline{\psi}),\,T_2(\underline{\varphi},\underline{\psi})\bigr), \qquad (\varphi_2,\psi_2)=\bigl(T_1(\overline{\varphi},\overline{\psi}),\,T_2(\overline{\varphi},\overline{\psi})\bigr),$$
(4.4)

then by (4.1)-(4.4) and (H1), we have

$$(\varphi_2,\psi_2)=\bigl(T_1(\overline{\varphi},\overline{\psi}),\,T_2(\overline{\varphi},\overline{\psi})\bigr)\le\bigl(T_1(0,0),\,T_2(0,0)\bigr)\le\bigl(T_1(\underline{\varphi},\underline{\psi}),\,T_2(\underline{\varphi},\underline{\psi})\bigr)=(\varphi_1,\psi_1),$$
(4.5)
$$(\varphi_2,\psi_2)\le\bigl(T_1(0,0),\,T_2(0,0)\bigr)=(\overline{\varphi},\overline{\psi}), \qquad (\varphi_1,\psi_1)=\bigl(T_1(0,0),\,T_2(0,0)\bigr)\ge(\underline{\varphi},\underline{\psi}).$$
(4.6)

Consequently, it follows from (4.5) and (4.6) that

$$
\begin{aligned}
D_{0+}^{\alpha}\varphi_1(t)+f\bigl(t,\varphi_1(t),\psi_1(t)\bigr)
&=D_{0+}^{\alpha}T_1(\underline{\varphi},\underline{\psi})(t)+f\bigl(t,\varphi_1(t),\psi_1(t)\bigr)\\
&=-f(t,0,0)+f\bigl(t,\varphi_1(t),\psi_1(t)\bigr)\le-f(t,0,0)+f(t,0,0)=0,
\end{aligned}
$$
(4.7)
$$
\begin{aligned}
D_{0+}^{\beta}\psi_1(t)+g\bigl(t,\varphi_1(t),\psi_1(t)\bigr)
&=D_{0+}^{\beta}T_2(\underline{\varphi},\underline{\psi})(t)+g\bigl(t,\varphi_1(t),\psi_1(t)\bigr)\\
&=-g(t,0,0)+g\bigl(t,\varphi_1(t),\psi_1(t)\bigr)\le-g(t,0,0)+g(t,0,0)=0,
\end{aligned}
$$
(4.8)

and

$$
\begin{aligned}
D_{0+}^{\alpha}\varphi_2(t)+f\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
&=D_{0+}^{\alpha}T_1(\overline{\varphi},\overline{\psi})(t)+f\bigl(t,\varphi_2(t),\psi_2(t)\bigr)\\
&=-f\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+f\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
\ge-f\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+f\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)=0,
\end{aligned}
$$
(4.9)
$$
\begin{aligned}
D_{0+}^{\beta}\psi_2(t)+g\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
&=D_{0+}^{\beta}T_2(\overline{\varphi},\overline{\psi})(t)+g\bigl(t,\varphi_2(t),\psi_2(t)\bigr)\\
&=-g\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+g\bigl(t,\varphi_2(t),\psi_2(t)\bigr)
\ge-g\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)+g\bigl(t,\overline{\varphi}(t),\overline{\psi}(t)\bigr)=0.
\end{aligned}
$$
(4.10)

Thus (4.5) and (4.7)-(4.10) imply that $(\varphi_2,\psi_2)$ and $(\varphi_1,\psi_1)$ are, respectively, lower and upper solutions of the system (1.1), and $(\varphi_2,\psi_2),(\varphi_1,\psi_1)\in P_1$.

On the other hand, by Lemma 2.2,

$$
\begin{aligned}
\varphi_1(t)&=T_1(0,0)(t)=\int_0^1 H_{\alpha}(t,s)f(s,0,0)\,ds\le\lambda t^{\alpha-1}\int_0^1 f(s,0,0)\,ds\le\tilde Lt^{\alpha-1},\\
\psi_1(t)&=T_2(0,0)(t)=\int_0^1 H_{\beta}(t,s)g(s,0,0)\,ds\le\mu t^{\beta-1}\int_0^1 g(s,0,0)\,ds\le\tilde Lt^{\beta-1}.
\end{aligned}
$$

The rest of the proof is similar to that of Theorem 3.1. □

Next, if f, g have no singularity at $x=y=0$ or at $t=0,1$, then by repeating the proof of Theorem 4.1 we obtain the following interesting result.

Theorem 4.2 Suppose (H0) holds, $f(t,0,0)\not\equiv 0$, $g(t,0,0)\not\equiv 0$ on $[0,1]$, and $f,g\in C\bigl([0,1]\times[0,+\infty)\times[0,+\infty),[0,+\infty)\bigr)$ are decreasing in the second and third variables. Then the system (1.1) has at least one positive solution $(x^*,y^*)$, which satisfies

$$(0,0)\le(x^*,y^*)\le\bigl(\tilde Lt^{\alpha-1},\tilde Lt^{\beta-1}\bigr),$$

where

$$\tilde L=\max\Bigl\{\lambda\int_0^1 f(s,0,0)\,ds,\ \mu\int_0^1 g(s,0,0)\,ds\Bigr\}.$$

5 Examples

Take functions of bounded variation,

$$
A(t)=
\begin{cases}
0, & t\in[0,\tfrac13),\\
\tfrac12, & t\in[\tfrac13,\tfrac23),\\
1, & t\in[\tfrac23,1],
\end{cases}
\qquad
B(t)=
\begin{cases}
0, & t\in[0,\tfrac12),\\
2, & t\in[\tfrac12,\tfrac34),\\
1, & t\in[\tfrac34,1].
\end{cases}
$$

Example 5.1 Suppose that $\alpha_i,\beta_i,\gamma_i>0$, $0<\gamma_i+\frac12\alpha_i<1$, and $0<\gamma_i+\frac13\beta_i<1$, $i=1,2$. Consider the following singular fractional differential system:

$$-D_{0+}^{\frac32}x(t)=t^{-\gamma_1}\bigl(x^{-\alpha_1}+y^{-\beta_1}\bigr), \qquad -D_{0+}^{\frac43}y(t)=t^{-\gamma_2}\bigl(x^{-\alpha_2}+y^{-\beta_2}\bigr),$$
(5.1)

subject to the nonlocal boundary condition

$$x(0)=0, \quad x(1)=\int_0^1 x(s)\,dA(s), \qquad y(0)=0, \quad y(1)=\int_0^1 y(s)\,dB(s).$$
(5.2)

By a simple calculation, the system (5.1) with boundary condition (5.2) is equivalent to the following system with coefficients of both signs:

$$
\begin{cases}
-D_{0+}^{\frac32}x(t)=t^{-\gamma_1}\bigl(x^{-\alpha_1}+y^{-\beta_1}\bigr), \qquad
-D_{0+}^{\frac43}y(t)=t^{-\gamma_2}\bigl(x^{-\alpha_2}+y^{-\beta_2}\bigr),\\
x(0)=0, \quad x(1)=\tfrac12 x\bigl(\tfrac23\bigr)+\tfrac12 x\bigl(\tfrac13\bigr), \qquad
y(0)=0, \quad y(1)=2y\bigl(\tfrac12\bigr)-y\bigl(\tfrac34\bigr),
\end{cases}
$$

and

$$
\begin{aligned}
0\le A&=\int_0^1 t^{\frac12}\,dA(t)=\tfrac12\bigl(\tfrac23\bigr)^{\frac12}+\tfrac12\bigl(\tfrac13\bigr)^{\frac12}\approx 0.6969<1,\\
0\le B&=\int_0^1 t^{\frac13}\,dB(t)=2\bigl(\tfrac12\bigr)^{\frac13}-\bigl(\tfrac34\bigr)^{\frac13}\approx 0.6788<1.
\end{aligned}
$$

One can also check that $G_A(s),G_B(s)\ge 0$ for $s\in[0,1]$, so (H0) holds.
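The following Python sketch (illustrative only) reproduces the two values above numerically and checks the sign of $G_A(s)$, $G_B(s)$ on a grid.

```python
# Numerical check of A, B and of G_A(s), G_B(s) >= 0 in Example 5.1.
import math
import numpy as np

def G(t, s, a):
    """Green function G_a(t, s) from (2.1)/(2.2)."""
    base = (t * (1 - s)) ** (a - 1) / math.gamma(a)
    return base - (t - s) ** (a - 1) / math.gamma(a) if s <= t else base

alpha, beta = 1.5, 4/3
jumps_A = [(1/3, 0.5), (2/3, 0.5)]       # dA: jumps 1/2 at 1/3 and 2/3
jumps_B = [(1/2, 2.0), (3/4, -1.0)]      # dB: jumps +2 at 1/2 and -1 at 3/4

A = sum(w * t ** (alpha - 1) for t, w in jumps_A)
B = sum(w * t ** (beta - 1) for t, w in jumps_B)
print(f"A = {A:.4f}, B = {B:.4f}")       # expect approx 0.6969 and 0.6788

s_grid = np.linspace(0, 1, 2001)
G_A = np.array([sum(w * G(t, s, alpha) for t, w in jumps_A) for s in s_grid])
G_B = np.array([sum(w * G(t, s, beta) for t, w in jumps_B) for s in s_grid])
print("min G_A:", G_A.min(), " min G_B:", G_B.min())   # both should be >= 0
```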

Let $f(t,x,y)=t^{-\gamma_1}\bigl(x^{-\alpha_1}+y^{-\beta_1}\bigr)$ and $g(t,x,y)=t^{-\gamma_2}\bigl(x^{-\alpha_2}+y^{-\beta_2}\bigr)$; then f, g are decreasing in x and y, and

$$f\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)=s^{-\gamma_1-\frac{\alpha_1}{2}}+s^{-\gamma_1-\frac{\beta_1}{3}}, \qquad g\bigl(s,s^{\alpha-1},s^{\beta-1}\bigr)=s^{-\gamma_2-\frac{\alpha_2}{2}}+s^{-\gamma_2-\frac{\beta_2}{3}}\in L^1(0,1).$$

Moreover, for all $r\in(0,1)$ and $(t,x,y)\in(0,1)\times(0,+\infty)\times(0,+\infty)$, we have

$$f(t,rx,ry)\le r^{-\max\{\alpha_1,\beta_1\}}f(t,x,y), \qquad g(t,rx,ry)\le r^{-\max\{\alpha_2,\beta_2\}}g(t,x,y).$$

By Theorem 3.1, the system (5.1) with boundary conditions (5.2) has at least one positive solution $(x^*,y^*)$.

Example 5.2 Consider the singular fractional differential system

$$-D_{0+}^{\frac32}x(t)=t^{-\frac14}\Bigl(\frac{1}{x^2+2}+\cos y\Bigr), \qquad -D_{0+}^{\frac43}y(t)=|\ln t|+\frac{t^{-\frac12}}{(x^{\frac12}+1)(\sin y+1)},$$
(5.3)

subject to nonlocal boundary condition (5.2). Let

$$f(t,x,y)=t^{-\frac14}\Bigl(\frac{1}{x^2+2}+\cos y\Bigr), \qquad g(t,x,y)=|\ln t|+\frac{t^{-\frac12}}{(x^{\frac12}+1)(\sin y+1)},$$

then f, g are decreasing in x and y, and

$$\int_0^1 f(s,0,0)\,ds=\frac32\int_0^1 s^{-\frac14}\,ds=2, \qquad \int_0^1 g(s,0,0)\,ds=\int_0^1\bigl(s^{-\frac12}-\ln s\bigr)\,ds=3.$$

Thus, by Theorem 4.1, the system (5.3) with boundary conditions (5.2) has at least one positive solution $(x^*,y^*)$.
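The two integrals used above can be confirmed numerically, for instance with the following Python sketch (the grouping of $g(s,0,0)$ follows the formula for $g$ displayed above; scipy's adaptive quadrature copes with the endpoint singularities):

```python
# Quick numerical confirmation of the integrals in Example 5.2; illustrative only.
import math
from scipy.integrate import quad

f00 = lambda s: s ** (-0.25) * (1 / (0 + 2) + math.cos(0))   # f(s, 0, 0) = (3/2) s^{-1/4}
g00 = lambda s: abs(math.log(s)) + s ** (-0.5)               # g(s, 0, 0) = |ln s| + s^{-1/2}

print(quad(f00, 0, 1)[0])   # expect 2
print(quad(g00, 0, 1)[0])   # expect 3
```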

Remark 5.1 In this work, the monotonicity assumption on f and g is essential. In particular, in the nonsingular case the result is of interest because only the monotonicity assumption is required, which covers a large class of functions.