1 Introduction

In recent years, much attention has been devoted to the study of fractional differential equations because of their applications in a broad range of areas such as physics, chemistry, aerodynamics, electrodynamics of complex media and polymer rheology. Many existence results for solutions of initial value problems and boundary value problems for fractional differential equations have been established by a variety of methods; see, e.g., [1–17] and the references therein. Generally speaking, it is difficult to obtain the exact solution of a fractional differential equation. To obtain approximate solutions of nonlinear fractional differential problems, we can use the monotone iterative technique combined with lower and upper solutions. This technique is well known and can be used for both initial value problems and boundary value problems for differential equations [18–20]. Recently, this method has also been applied to initial value problems and boundary value problems for fractional differential equations; see [21–33]. To the best of our knowledge, however, the monotone iterative method has rarely been applied to fractional differential equations of order \(p\in(2,3]\).

Consider the following nonlinear fractional boundary value problem:

$$ \left \{ \textstyle\begin{array}{l} D^{p}x(t)+f(t, x(t))=0, \quad t\in(0,1),\\ x(0)=x'(0)=0, \quad\quad x(1)=0, \end{array}\displaystyle \right .$$
(1.1)

where \(D^{p}\) is the standard Riemann-Liouville derivative and \(p\in(2,3]\). In this paper, we give sufficient conditions under which problem (1.1) has extremal solutions. To formulate the corresponding theorems, we establish comparison results for fractional differential inequalities by means of the Banach fixed point theorem and the method of successive approximations. An example is given to illustrate the assumptions and the theoretical results.

For convenience, we introduce the following notation:

$$ \begin{gathered} K=\frac{M(p-1)}{\Gamma(p)} \biggl(1- \biggl( \frac{M(p-1)}{\Gamma (p+2)} \biggr)^{2} \biggr)^{-1}, \\ K_{1}=\frac{1}{\Gamma(p)} -M\frac{(p-1)^{2}}{\Gamma(p)^{2}} -K \frac{M^{2}(p-1)^{2}}{\Gamma(p)^{2}}\frac{1}{p^{2}(p+1)^{2}}, \\ K_{2}=\sum_{n=1}^{+\infty}M^{2n} \biggl(\frac{p-1}{\Gamma (p)} \biggr)^{2n+1} \biggl(\frac{1}{p(p+1)} \biggr)^{2n-1}. \end{gathered} $$
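
Since these constants recur in the hypotheses below, it can be convenient to evaluate them numerically for concrete data. The following Python sketch is illustrative only (the function name `constants` and the sample values of M and p are ours): it computes K, \(K_{1}\) and \(K_{2}\), truncating the series defining \(K_{2}\), whose terms form a geometric sequence with ratio \((M(p-1)/\Gamma(p+2))^{2}\).

```python
from math import gamma

def constants(M, p, tol=1e-15):
    """Evaluate K, K1, K2 from Section 1 for given M > 0 and p in (2, 3]."""
    # q = M(p-1)/Gamma(p+2); condition (2.3) below requires q < 1,
    # and q**2 is the common ratio of the series defining K2.
    q = M * (p - 1) / gamma(p + 2)
    # K = M(p-1)/Gamma(p) * (1 - (M(p-1)/Gamma(p+2))^2)^(-1)
    K = (M * (p - 1) / gamma(p)) / (1.0 - q ** 2)
    # K1 = 1/Gamma(p) - M(p-1)^2/Gamma(p)^2 - K M^2 (p-1)^2 / (Gamma(p)^2 p^2 (p+1)^2)
    K1 = (1.0 / gamma(p)
          - M * (p - 1) ** 2 / gamma(p) ** 2
          - K * M ** 2 * (p - 1) ** 2 / (gamma(p) ** 2 * p ** 2 * (p + 1) ** 2))
    # K2 = sum_{n>=1} M^{2n} ((p-1)/Gamma(p))^{2n+1} (1/(p(p+1)))^{2n-1};
    # truncate once the (geometric) terms drop below tol.
    K2, n = 0.0, 1
    while True:
        term = M ** (2 * n) * ((p - 1) / gamma(p)) ** (2 * n + 1) \
               * (1.0 / (p * (p + 1))) ** (2 * n - 1)
        K2 += term
        if term < tol:
            break
        n += 1
    return K, K1, K2

# Hypothetical sample values; p = 5/2 matches the example in Section 4.
print(constants(M=0.2, p=2.5))
```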

2 Preliminaries

For the convenience of the reader, we present here the necessary definitions from fractional calculus theory. These definitions and properties can be found in the recent literature; see [14].

Definition 2.1

[1]

The Riemann-Liouville fractional integral of order \(p>0\) of a function \(f:(0,\infty)\to\mathbb{R}\) is given by

$$I^{p}f(t)=\frac{1}{\Gamma(p)} \int_{0}^{t}(t-s)^{p-1}f(s)\,ds $$

provided that the right-hand side is pointwise defined on \((0,\infty)\).

Definition 2.2

[1]

The Riemann-Liouville fractional derivative of order \(p>0\) of a continuous function \(f:(0,\infty)\to\mathbb{R}\) is given by

$$D^{p}f(t)=\frac{1}{\Gamma(n-p)} \biggl(\frac{d}{dt} \biggr)^{n} \int_{0}^{t}\frac{f(s)}{(t-s)^{p-n+1}}\,ds, $$

where \({n-1\leq p< n}\), provided that the right-hand side is pointwise defined on \((0,\infty)\).

Lemma 2.1

[1]

Assume that \(u\in C(0,1)\cap L(0,1)\) with a fractional derivative of order \(p>0\) that belongs to \(C(0,1)\cap L(0,1)\). Then

$$I^{p}D^{p}u(t)=u(t)+c_{1}t^{p-1}+ c_{2}t^{p-2}+\cdots+c_{N}t^{p-N} $$

for some \(c_{i}\in\mathbb{R}\), \(i=1,\ldots,N\), \(N=[p]\).

For brevity, set \(E=\{u: D^{p}u(t)\in C(0,1)\cap L(0,1)\}\). In the Banach space \(C[0,1]\), equipped with the norm \(\|x\|=\max_{t\in[0,1]}|x(t)|\), we set \(P=\{ x\in C[0,1] \mid x(t)\geq0, \forall t\in[0,1]\} \). Then P is a positive cone in \(C[0,1]\), and throughout this paper the partial ordering on \(C[0,1]\) is the one induced by P.

The following existence and uniqueness result for a linear boundary value problem is important for our subsequent analysis.

Lemma 2.2

[6]

Let \(a\in\mathbb{R}\), \(\sigma\in C(0,1)\cap L(0,1)\), and \(2< p\leq3\). Then the unique solution of

$$ \left \{ \textstyle\begin{array}{l} D^{p} x(t)+\sigma(t)=0,\quad t\in(0,1),\\ x(0)=x'(0)=0,\qquad x(1)=a, \end{array}\displaystyle \right .$$
(2.1)

is given by

$$x(t)=at^{p-1}+ \int_{0}^{1}G(t,s)\sigma(s)\,ds, $$

where \(G(t,s)\) is Green’s function given by

$$ G(t,s)=\left \{ \textstyle\begin{array}{l@{\quad}l} \frac{(1-s)^{p-1}t^{p-1}-(t-s)^{p-1}}{\Gamma(p)},& 0\leq s\leq t\leq 1, \\ \frac{(1-s)^{p-1}t^{p-1}}{\Gamma(p)}, & 0\leq t\leq s\leq1. \end{array}\displaystyle \right .$$
(2.2)

The following properties of Green’s function play an important part in this paper.

Lemma 2.3

[10]

The function \(G(t,s)\) defined by (2.2) satisfies the following conditions:

  1. (1)

    \(t^{p-1}(1-t)s(1-s)^{p-1}\leq\Gamma(p)G(t,s)\leq(p-1)s(1-s)^{p-1}\), \(t,s\in(0,1) \),

  2. (2)

    \(t^{p-1}(1-t)s(1-s)^{p-1}\leq\Gamma(p)G(t,s)\leq(p-1)t^{p-1}(1-t)\), \(t,s\in(0,1) \).
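
As an informal numerical sanity check (not part of the proofs), one can implement \(G(t,s)\) from (2.2) directly and test the two-sided estimates of Lemma 2.3 on a grid of interior points. The sketch below is illustrative only; the function names and the grid size are ours.

```python
from math import gamma

def G(t, s, p):
    """Green's function (2.2) for 2 < p <= 3."""
    if s <= t:
        return ((1 - s) ** (p - 1) * t ** (p - 1) - (t - s) ** (p - 1)) / gamma(p)
    return (1 - s) ** (p - 1) * t ** (p - 1) / gamma(p)

def check_lemma_2_3(p, n=200):
    """Spot-check estimates (1) and (2) of Lemma 2.3 on an interior grid."""
    ok = True
    for i in range(1, n):
        for j in range(1, n):
            t, s = i / n, j / n
            g = gamma(p) * G(t, s, p)
            lower = t ** (p - 1) * (1 - t) * s * (1 - s) ** (p - 1)
            upper1 = (p - 1) * s * (1 - s) ** (p - 1)
            upper2 = (p - 1) * t ** (p - 1) * (1 - t)
            ok &= (lower - 1e-12 <= g <= min(upper1, upper2) + 1e-12)
    return ok

print(check_lemma_2_3(p=2.5))   # expected to print True
```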

Lemma 2.4

Suppose that \(\sigma\in C(0,1)\cap L(0,1)\), and there exists \(M>0\) satisfying

$$ \frac{M(p-1)}{\Gamma(p+2)}< 1,$$
(2.3)

then the linear boundary value problem

$$ \left \{ \textstyle\begin{array}{l} D^{p} x(t)-Mx(t)+\sigma(t)=0,\quad t\in(0,1),\\ x(0)=x'(0)=0,\qquad x(1)=a \end{array}\displaystyle \right .$$
(2.4)

has exactly one solution given by

$$ x(t)=at^{p-1}+a \int_{0}^{1}Q(t,s)s^{p-1}\,ds+ \int_{0}^{1}H(t,s)\sigma(s)\,ds,$$
(2.5)

where

$$\begin{gathered} Q(t,s)=\sum_{n=1}^{+\infty}G_{n}(t,s), \qquad G_{1}(t,s)=-MG(t,s), \\ G_{n}(t,s)=(-M)^{n} \int_{0}^{1}\cdots \int_{0}^{1}G(t,r_{1})G(r_{1},r_{2}) \cdots G(r_{n-1},s)\,dr_{1}\cdots dr_{n-1},\quad n=2,3,\ldots, \end{gathered} $$

and

$$H(t,s)=G(t,s)+ \int_{0}^{1}Q(t,\tau)G(\tau,s)\,d\tau. $$

Proof

Using Lemma 2.2, it is easy to show that problem (2.4) is equivalent to the following integral equation:

$$x(t)=at^{p-1}+ \int_{0}^{1}G(t,s) \bigl(-Mx(s)+\sigma(s) \bigr) \,ds, $$

i.e.,

$$ x(t)=v(t)-M \int_{0}^{1}G(t,s)x(s)\,ds,$$
(2.6)

where

$$ v(t)=at^{p-1}+ \int_{0}^{1}G(t,s)\sigma(s)\,ds.$$
(2.7)

We write (2.6) in the form \(x=Tx\), where T is defined by the right-hand side of (2.6). Clearly, T maps \(C[0,1]\) into \(C[0,1]\). We now show that the operator T has a unique fixed point by proving that T is a contraction. In fact, by Lemma 2.3, for \(x, y \in C[0,1]\), we obtain

$$\begin{aligned} \|Tx-Ty\|&= \max_{0\leq t\leq1} |(Tx) (t)-(Ty) (t) | \leq M\max_{0\leq t\leq1} \int_{0}^{1}G(t,s)\,ds \|x-y\| \\ &\leq \frac{M(p-1)}{\Gamma(p)} \int_{0}^{1}s(1-s)^{p-1}\,ds \|x-y\| \\ &= \frac{M(p-1)}{\Gamma(p)}\frac{\Gamma(2)\Gamma(p)}{\Gamma(p+2)} \|x-y\| =\frac{M(p-1)}{\Gamma(p+2)} \|x-y\|. \end{aligned} $$

This and condition (2.3) show that T is a contraction, and hence problem (2.4) has a unique solution \(x(t)\), which is the limit of the successive approximations:

$$\|x_{n}-x\|\rightarrow0\quad (n\rightarrow+\infty), $$

where

$$x_{0}(t)=v(t),\quad\quad x_{n}(t)=Tx_{n-1}(t), \quad t \in[0,1]\ (n=1,2,\ldots). $$

Applying the method of successive approximations, it is easy to see that

$$ x(t)=v(t)+\sum_{n=1}^{+\infty} \int_{0}^{1}G_{n}(t,s)v(s)\,ds =v(t)+ \int_{0}^{1}Q(t,s)v(s)\,ds.$$
(2.8)

Substituting (2.7) into (2.8), we get (2.5) and the proof is complete. □
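
The successive approximations used in this proof are easy to carry out numerically. The sketch below is a minimal illustration, assuming midpoint quadrature on a uniform grid (our choice, not part of the paper): it iterates \(x_{n}=v-M\int_{0}^{1}G(t,s)x_{n-1}(s)\,ds\) until the iterates stabilize, which mirrors the contraction argument under (2.3).

```python
from math import gamma

def G(t, s, p):
    """Green's function (2.2)."""
    base = (1 - s) ** (p - 1) * t ** (p - 1)
    return (base - (t - s) ** (p - 1) if s <= t else base) / gamma(p)

def solve_24(sigma, a, M, p, n=200, tol=1e-12, max_iter=1000):
    """Approximate the solution of (2.4) by the iteration x_k = v - M * int G x_{k-1},
    using midpoint quadrature on n subintervals (illustrative discretization)."""
    ts = [(i + 0.5) / n for i in range(n)]        # midpoint nodes
    w = 1.0 / n                                   # quadrature weight
    v = [a * t ** (p - 1) + w * sum(G(t, s, p) * sigma(s) for s in ts) for t in ts]
    x = v[:]                                      # x_0 = v
    for _ in range(max_iter):
        x_new = [v[i] - M * w * sum(G(ts[i], s, p) * xj for s, xj in zip(ts, x))
                 for i in range(n)]
        if max(abs(xi - yi) for xi, yi in zip(x_new, x)) < tol:
            return ts, x_new
        x = x_new
    return ts, x

# Illustrative data: sigma(t) = 1, a = 0, with M and p chosen so that (2.3) holds.
ts, x = solve_24(sigma=lambda s: 1.0, a=0.0, M=0.2, p=2.5)
print(x[0], x[-1])    # values near t = 0 and t = 1 (both should be small)
```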

Lemma 2.5

Suppose that the constant M given in Lemma 2.4 satisfies inequality (2.3) and

$$ \bigl(KM+p^{2}(p+1)^{2} \bigr)M(p-1)^{2} < \Gamma(p)p^{2}(p+1)^{2}.$$
(2.9)

Then the function \(H(t,s)\) has the following properties:

$$K_{1}t^{p-1}(1-t)s(1-s)^{p-1}\leq H(t,s)\leq K_{2}t^{p-1}(1-t)s(1-s)^{p-1},\quad t,s\in(0,1). $$

Proof

It follows from the expression of \(G_{n}(t,s)\) that \(G_{n}(t,s)\leq0\) when n is odd and \(G_{n}(t,s)\geq0\) when n is even. By Lemma 2.3, we obtain that

$$\begin{aligned} G_{n}(t,s)={}& (-M)^{n} \int_{0}^{1}\cdots \int_{0}^{1}G(t,r_{1})G(r_{1},r_{2}) \cdots G(r_{n-1},s)\,dr_{1}\cdots dr_{n-1} \\ \geq{}& (-M)^{n} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{n}t^{p-1}(1-t)s(1-s)^{p-1} \\ &\times \int_{0}^{1}r_{1}^{p-1}(1-r_{1}) \,dr_{1} \cdots \int _{0}^{1}r_{n-2}^{p-1}(1-r_{n-2}) \,dr_{n-2} \\ ={}& (-M)^{n} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{n} \biggl( \frac {1}{p(p+1)} \biggr)^{n-2}t^{p-1}(1-t)s(1-s)^{p-1}, \quad n=3,5,\ldots, \end{aligned} $$

and

$$G_{n}(t,s)\leq M^{n} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{n} \biggl(\frac {1}{p(p+1)} \biggr)^{n-2}t^{p-1}(1-t)s(1-s)^{p-1}, \quad n=2,4,\ldots. $$

Consequently, we have

$$\begin{aligned} H(t,s)={}& G(t,s)+ \int_{0}^{1}Q(t,\tau)G(\tau,s)\,d\tau =G(t,s)+\sum _{n=1}^{+\infty} \int_{0}^{1}G_{n}(t,\tau)G(\tau,s)\,d\tau \\ \geq{}& G(t,s)-M \int_{0}^{1}G(t,\tau)G(\tau,s)\,d\tau +\sum _{n=1}^{+\infty} \int_{0}^{1}G_{2n+1}(t,\tau)G(\tau,s)\,d\tau \\ \geq{}& \frac{t^{p-1}(1-t)}{\Gamma(p)}s(1-s)^{p-1} -M\frac{(p-1)^{2}}{\Gamma(p)^{2}}t^{p-1}(1-t)s(1-s)^{p-1} \\ & -\sum_{n=1}^{+\infty}M^{2n+1} \biggl( \frac{p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac {1}{p(p+1)} \biggr)^{2n-1} t^{p-1}(1-t)s(1-s)^{p-1} \\ &\times \int_{0}^{1}\tau(1-\tau )^{p-1}\,d\tau \\ \geq{}& t^{p-1}(1-t)s(1-s)^{p-1} \\ &\times \Biggl[ \frac{1}{\Gamma(p)} -M \frac{(p-1)^{2}}{\Gamma(p)^{2}} -\sum _{n=1}^{+\infty}M^{2n+1} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac {1}{p(p+1)} \biggr)^{2n} \Biggr] \\ ={}& t^{p-1}(1-t)s(1-s)^{p-1} \biggl[ \frac{1}{\Gamma(p)} -M \frac{(p-1)^{2}}{\Gamma(p)^{2}} -K\frac{M^{2}(p-1)^{2}}{\Gamma(p)^{2}}\frac{1}{p^{2}(p+1)^{2}} \biggr] \\ ={}&K_{1}t^{p-1}(1-t)s(1-s)^{p-1} \end{aligned} $$

(notice that (2.9) is equivalent to the inequality \(K_{1}>0\)) and

$$\begin{aligned} H(t,s) =& G(t,s)+ \int_{0}^{1}Q(t,\tau)G(\tau,s)\,d\tau =G(t,s)+\sum _{n=1}^{+\infty} \int_{0}^{1}G_{n}(t,\tau)G(\tau,s)\,d\tau \\ \leq& \sum_{n=1}^{+\infty} \int_{0}^{1}G_{2n}(t,\tau)G(\tau,s)\,d\tau \\ \leq& \sum_{n=1}^{+\infty}M^{2n} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{2n} \biggl(\frac {1}{p(p+1)} \biggr)^{2n-2} t^{p-1}(1-t) \\ &{}\times \int_{0}^{1}\tau(1-\tau)^{p-1}\,d\tau\cdot \frac{p-1}{\Gamma(p)} s(1-s)^{p-1} \\ =& t^{p-1}(1-t)s(1-s)^{p-1}\sum_{n=1}^{+\infty}M^{2n} \biggl(\frac {p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac{1}{p(p+1)} \biggr)^{2n-1} \\ =& K_{2}t^{p-1}(1-t)s(1-s)^{p-1}. \end{aligned}$$

This completes the proof. □

Let

$$P_{1}= \biggl\{ x\in C[0,1] \Bigm| x(t)\geq\frac{K_{1}}{K_{2}}t^{p-1}(1-t) \|x\|, t\in[0,1] \biggr\} . $$

It is obvious that \(P_{1}\) is a cone and \(P_{1}\subset P\). We define the operator \(S:C[0,1]\rightarrow C[0,1]\) by

$$ (Sx) (t)= \int_{0}^{1}H(t,s)x(s)\,ds,\quad t\in[0,1], x\in C[0,1].$$
(2.10)

It is clear that S is a linear operator, and \(x=S\sigma\) if and only if x is a solution of the problem

$$\left \{ \textstyle\begin{array}{l} D^{p} x(t)-Mx(t)+\sigma(t)=0,\quad t\in(0,1),\\ x(0)=x'(0)=0,\qquad x(1)=0. \end{array}\displaystyle \right . $$
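
For computations it may help to note that, after discretization, \(x=S\sigma\) amounts to solving the linear system \((I+MG_{h})x=G_{h}\sigma\), where \(G_{h}\) is the quadrature matrix of the Green's function (2.2). The following numpy-based sketch builds this discrete analogue of S; all names and the grid are illustrative.

```python
import numpy as np
from math import gamma

def green_matrix(p, n=200):
    """Quadrature matrix of the Green's function (2.2) on midpoint nodes."""
    ts = (np.arange(n) + 0.5) / n
    T, Sg = np.meshgrid(ts, ts, indexing="ij")
    base = (1 - Sg) ** (p - 1) * T ** (p - 1)
    Gmat = np.where(Sg <= T, base - np.maximum(T - Sg, 0) ** (p - 1), base) / gamma(p)
    return ts, Gmat / n            # include the quadrature weight 1/n

def S_matrix(M, p, n=200):
    """Discrete version of the operator S in (2.10): x = S sigma approximates the
    solution of D^p x - M x + sigma = 0, x(0) = x'(0) = 0, x(1) = 0."""
    ts, Gh = green_matrix(p, n)
    Sh = np.linalg.solve(np.eye(n) + M * Gh, Gh)
    return ts, Sh

# Illustrative use: apply the discrete S to sigma(t) = 1.
ts, Sh = S_matrix(M=0.2, p=2.5)
x = Sh @ np.ones(len(ts))
print(float(x.min()))              # expected to be nonnegative (cf. Lemma 2.6)
```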

Lemma 2.6

S is a completely continuous operator and \(S(P)\subset P_{1}\).

Proof

Applying the Arzela-Ascoli theorem and a standard argument, we can prove that S is a completely continuous operator. In the following, we prove that \(S(P)\subset P_{1}\). In fact, for any \(x\in P\), it follows from Lemma 2.5 that

$$(Sx) (t)= \int_{0}^{1}H(t,s)x(s)\,ds\leq K_{2}t^{p-1}(1-t) \int _{0}^{1}s(1-s)^{p-1}x(s)\,ds\leq K_{2} \int_{0}^{1}s(1-s)^{p-1}x(s)\,ds, $$

which implies that

$$ \|Sx\|\leq K_{2} \int_{0}^{1}s(1-s)^{p-1}x(s) \,ds.$$
(2.11)

On the other hand, by Lemma 2.5 again, we have

$$(Sx) (t)= \int_{0}^{1}H(t,s)x(s)\,ds\geq K_{1}t^{p-1}(1-t) \int _{0}^{1}s(1-s)^{p-1}x(s)\,ds, $$

which together with (2.11) implies

$$(Sx) (t)\geq\frac{K_{1}}{K_{2}}t^{p-1}(1-t)\|Sx\|,\quad t\in[0,1]. $$

Therefore, \(S(P)\subset P_{1}\). This completes the proof. □

Lemma 2.7

Suppose that \(x\in E\) satisfies

$$\left \{ \textstyle\begin{array}{l} -D^{p} x(t)\geq-Mx(t),\quad t\in(0,1),\\ x(0)=x'(0)=0,\qquad x(1)\geq0, \end{array}\displaystyle \right . $$

where M satisfies (2.3), (2.9) and

$$ K=\frac{M(p-1)}{\Gamma(p)} \biggl(1- \biggl(\frac{M(p-1)}{\Gamma(p+2)} \biggr)^{2} \biggr)^{-1}< 1.$$
(2.12)

Then \(x(t)\geq0\) for \(t\in[0,1]\).

Proof

Let \(\sigma(t)=-D^{p} x(t)+Mx(t)\) and \(a=x(1)\). Then

$$\sigma(t)\geq0,\qquad a\geq0. $$

By Lemma 2.4, (2.5) holds. By the proof of Lemma 2.5, we have

$$\begin{gathered} t^{p-1}+ \int_{0}^{1}Q(t,s)s^{p-1}\,ds \\ \quad\geq t^{p-1}-M \int_{0}^{1}G(t,s)s^{p-1}\,ds+\sum_{n=1}^{+\infty} \int_{0}^{1}G_{2n+1}(t,s)s^{p-1} \,ds \\ \quad\geq t^{p-1}-\frac{p-1}{\Gamma(p)}M(1-t)t^{p-1} \int_{0}^{1}s^{p-1}\,ds -\sum_{n=1}^{+\infty}M^{2n+1} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac{1}{p(p+1)} \biggr)^{2n-1} \\ \qquad{} \times \int_{0}^{1}t^{p-1}(1-t)s(1-s)^{p-1}s^{p-1} \,ds \\ \quad\geq t^{p-1}-\frac{p-1}{\Gamma(p)}Mt^{p-1} -t^{p-1} \sum_{n=1}^{+\infty}M^{2n+1} \biggl( \frac{p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac{1}{p(p+1)} \biggr)^{2n-1} \\ \qquad{} \times \int_{0}^{1}(1-s)^{p-1}s\,ds \\ \quad\geq t^{p-1}-\frac{p-1}{\Gamma(p)}Mt^{p-1} -t^{p-1} \sum_{n=1}^{+\infty}M^{2n+1} \biggl( \frac{p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac{1}{p(p+1)} \biggr)^{2n} \\ \quad= t^{p-1} -t^{p-1}\sum_{n=0}^{+\infty}M^{2n+1} \biggl(\frac{p-1}{\Gamma(p)} \biggr)^{2n+1} \biggl(\frac{1}{p(p+1)} \biggr)^{2n} \\ \quad= t^{p-1} \biggl[1-\frac{M(p-1)}{\Gamma(p)}\frac{1}{1- (\frac{M(p-1)}{\Gamma(p+2)} )^{2}} \biggr]. \end{gathered} $$

By (2.12), the bracket in the last line equals \(1-K>0\), so \(t^{p-1}+\int_{0}^{1}Q(t,s)s^{p-1}\,ds\geq0\). Thus, since \(H(t,s)\geq0\) by Lemma 2.5 and \(\sigma(t)\geq0\), \(a\geq0\), representation (2.5) gives \(x(t)\geq0\) for \(t\in[0,1]\), and the lemma is proved. □

Lemma 2.8

[34]

Suppose that \(S:C[0,1]\rightarrow C[0,1]\) is a completely continuous linear operator and \(S(P)\subset P\). If there exist \(\psi\in C[0,1]\setminus(-P)\) and a constant \(c>0\) such that \(cS\psi\geq\psi\), then the spectral radius \(r(S)\neq0\) and S has a positive eigenfunction corresponding to its first eigenvalue \(\lambda_{1}=(r(S))^{-1}\), i.e., \(\varphi=\lambda_{1}S\varphi\).

Lemma 2.9

Suppose that S is defined by (2.10), then the spectral radius \(r(S)\neq0\) and S has a positive eigenfunction \(\varphi^{*}(t)\) corresponding to its first eigenvalue \(\lambda_{1}=(r(S))^{-1}\).

Proof

By Lemma 2.5, \(H(t,s)>0\) for all \(t,s\in(0,1)\). Take \(\psi(t)=t^{p-1}(1-t)\). Then \(\psi\in P\setminus\{0\}\), and by Lemma 2.6 we have, for \(t\in[0,1]\),

$$(S\psi) (t)= \int_{0}^{1}H(t,s)\psi(s)\,ds\geq \frac {K_{1}}{K_{2}}t^{p-1}(1-t)\|S\psi\|. $$

So there exists a constant \(c>0\) such that \(c(S\psi)(t)\geq\psi(t)\), \(\forall t\in[0,1]\). From Lemma 2.8, we know that the spectral radius \(r(S)\neq0\) and S has a positive eigenfunction corresponding to its first eigenvalue \(\lambda_{1}=(r(S))^{-1}\). This completes the proof. □
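
Numerically, the first eigenvalue \(\lambda_{1}=(r(S))^{-1}\) and the eigenfunction \(\varphi^{*}\) can be approximated by power iteration on the discretized operator. The sketch below is illustrative (it reuses the midpoint discretization from the earlier sketches) and can be used to check that \(r(S)>0\) and that the computed eigenfunction is positive, as Lemma 2.9 predicts.

```python
import numpy as np
from math import gamma

def discrete_S(M, p, n=300):
    """Discrete approximation of S from (2.10) via midpoint quadrature
    (same construction as in the earlier sketch)."""
    ts = (np.arange(n) + 0.5) / n
    T, Sg = np.meshgrid(ts, ts, indexing="ij")
    base = (1 - Sg) ** (p - 1) * T ** (p - 1)
    Gh = np.where(Sg <= T, base - np.maximum(T - Sg, 0) ** (p - 1), base) / (gamma(p) * n)
    return ts, np.linalg.solve(np.eye(n) + M * Gh, Gh)

def first_eigenpair(M, p, n=300, iters=200):
    """Power iteration: approximates r(S) and a positive eigenfunction phi*."""
    ts, Sh = discrete_S(M, p, n)
    phi = np.ones(n)
    for _ in range(iters):
        phi = Sh @ phi
        phi /= np.abs(phi).max()
    # Since phi is (approximately) the eigenfunction, Sh @ phi is close to r(S) * phi.
    r = (Sh @ phi).max() / phi.max()
    return r, ts, phi

r, ts, phi = first_eigenpair(M=0.2, p=2.5)
print(r, phi.min() > 0)    # r(S) > 0 and phi positive, as expected
```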

3 Main results

In this section, we prove the existence of extremal solutions and a uniqueness result for problem (1.1). For convenience, we list the following assumptions.

\((H_{1})\) :

There exist \(\alpha_{0}, \beta_{0}\in E\) with \(\alpha_{0}(t)\leq\beta _{0}(t)\) such that

$$\begin{gathered} D^{p} \alpha_{0}(t)+f \bigl(t, \alpha_{0}(t) \bigr) \geq0,\quad t\in(0,1), \alpha_{0}(0)= \alpha_{0}'(0)=0, \alpha_{0}(1)\leq0, \\ D^{p} \beta_{0}(t)+f \bigl(t, \beta_{0}(t) \bigr) \leq0, \quad t\in(0,1), \beta_{0}(0)= \beta_{0}'(0)=0, \beta_{0}(1)\geq0. \end{gathered} $$
\((H_{2})\) :

\(f\in C([0,1]\times\mathbb{R}, \mathbb{R})\) and there exists \(M>0\) such that

$$f(t,x)-f(t,y)\geq-M(x-y), $$

whenever \(\alpha_{0}(t)\leq y\leq x\leq \beta_{0}(t)\), and M satisfies (2.3), (2.9) and (2.12).

Theorem 3.1

Suppose that \((H_{1})\) and \((H_{2})\) hold. Then there exist monotone iterative sequences \(\{\alpha_{n}(t)\}, \{\beta_{n}(t)\}\) which converge uniformly on \([0,1]\) to the extremal solutions of problem (1.1) in the sector \(\Omega=\{v\in C[0,1]: \alpha_{0}(t)\leq v(t) \leq\beta_{0}(t), t\in[0,1]\}\).

Proof

First, for \(n\geq1\), we define the sequences \(\{\alpha_{n}(t)\}\) and \(\{\beta_{n}(t)\}\) recursively by the relations

$$\begin{gathered} \left \{ \textstyle\begin{array}{l} D^{p} \alpha_{n}(t)-M\alpha_{n}(t)+f(t,\alpha_{n-1}(t))+M\alpha _{n-1}(t)=0,\quad t\in(0,1),\\ \alpha_{n}(0)=\alpha_{n}'(0)=0,\qquad \alpha_{n}(1)= 0, \end{array}\displaystyle \right . \\ \left \{ \textstyle\begin{array}{l} D^{p} \beta_{n}(t)-M\beta_{n}(t)+f(t,\beta_{n-1}(t))+M\beta_{n-1}(t)=0, \quad t\in(0,1),\\ \beta_{n}(0)=\beta'_{n}(0)=0,\qquad \beta_{n}(1)=0. \end{array}\displaystyle \right . \end{gathered} $$

By Lemma 2.4, \(\{\alpha_{n}(t)\}\), \(\{\beta_{n}(t)\}\) are well defined. Moreover, \(\{\alpha_{n}(t)\}\), \(\{\beta_{n}(t)\}\) can be rewritten as follows:

$$\alpha_{n}=(S\mathbf{F})\alpha_{n-1},\qquad \beta_{n}=(S \mathbf{F})\beta_{n-1},\quad n=1,2,\ldots, $$

where \(\mathbf{F}:C[0,1]\rightarrow C[0,1]\) is given by

$$(\mathbf{F}x) (t)=f \bigl(t,x(t) \bigr)+Mx(t). $$

Next, we show that \(\{\alpha_{n}(t)\}\), \(\{\beta_{n}(t)\}\) satisfy the property

$$ \alpha_{n-1} \leq\alpha_{n} \leq \beta_{n} \leq \beta_{n-1}. $$
(3.1)

Let \(w(t)=\alpha_{1}-\alpha_{0}\). By condition \((H_{1})\), we obtain

$$\left \{ \textstyle\begin{array}{l} -D^{p} w(t)\geq-Mw(t),\quad t\in(0,1),\\ w(0)=w'(0)=0,\qquad w(1)\geq0. \end{array}\displaystyle \right . $$

Thus, by Lemma 2.7, we have \(w(t)\geq0\), \(t\in[0,1]\), that is, \(\alpha_{0}\leq\alpha_{1}\). In a similar way, we can show that \(\beta_{1}\leq \beta_{0}\).

Let \(w(t)=\beta_{1}-\alpha_{1}\). From condition \((H_{2})\), we obtain

$$\left \{ \textstyle\begin{array}{l} -D^{p} w(t)\geq-Mw(t),\quad t\in(0,1),\\ w(0)=w'(0)=0,\qquad w(1)\geq0. \end{array}\displaystyle \right . $$

By Lemma 2.7, we obtain \(w(t)\geq0\), \(t\in[0,1]\). Hence we have the relation \(\alpha_{0} \leq\alpha_{1} \leq \beta_{1} \leq \beta_{0}\). It follows from \((H_{2})\) and the positivity of S that S F is nondecreasing in the sector Ω, and then

$$\alpha_{1}=(S\mathbf{F})\alpha_{0}\leq\alpha_{2}=(S \mathbf{F})\alpha_{1}\leq \beta_{2}=(S\mathbf{F}) \beta_{1}\leq\beta_{1}=(S\mathbf{F})\beta_{0}. $$

Thus, by induction, we have

$$\alpha_{0} \leq\alpha_{1} \leq\cdots\leq \alpha_{n} \leq\alpha_{n+1}\leq\beta_{n+1}\leq\beta _{n}\leq\cdots\leq\beta_{1}\leq\beta_{0}. $$

Applying the standard arguments and Lemma 2.6, we have that

$$\lim_{n\rightarrow\infty}\alpha_{n}(t)=\alpha^{*}(t), \qquad\lim _{n\rightarrow\infty}\beta_{n}(t)=\beta^{*}(t) $$

uniformly on \([0,1]\), and that \(\alpha^{*}\), \(\beta^{*}\) are solutions of boundary value problem (1.1). Furthermore, \(\alpha^{*}\) and \(\beta^{*}\) are the minimal and the maximal solution of (1.1) in Ω, respectively. □
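
The proof is constructive: each step solves the linear problem (2.4) with \(a=0\) and \(\sigma=f(\cdot,\alpha_{n-1})+M\alpha_{n-1}\) (respectively \(\beta_{n-1}\)), i.e., \(\alpha_{n}=(S\mathbf{F})\alpha_{n-1}\). The following sketch carries out a few iterations; it is illustrative only, reusing the discrete S from the earlier sketches together with the data of the example in Section 4.

```python
import numpy as np
from math import gamma, pi, sqrt

p, M = 2.5, sqrt(pi) / 8                     # data of the example in Section 4

def f(t, x):
    return sqrt(pi) / 112 * (t ** 2 - x) ** 3 - sqrt(pi) / 112 * t ** 2 * x ** 2

def discrete_S(M, p, n=300):
    """Discrete approximation of S from (2.10) (midpoint quadrature)."""
    ts = (np.arange(n) + 0.5) / n
    T, Sg = np.meshgrid(ts, ts, indexing="ij")
    base = (1 - Sg) ** (p - 1) * T ** (p - 1)
    Gh = np.where(Sg <= T, base - np.maximum(T - Sg, 0) ** (p - 1), base) / (gamma(p) * n)
    return ts, np.linalg.solve(np.eye(n) + M * Gh, Gh)

ts, Sh = discrete_S(M, p)
alpha, beta = -ts ** 2, ts ** 2              # alpha_0, beta_0 as in Section 4

for k in range(10):                          # a few monotone iterations
    alpha = Sh @ (f(ts, alpha) + M * alpha)  # alpha_n = S F alpha_{n-1}
    beta = Sh @ (f(ts, beta) + M * beta)     # beta_n  = S F beta_{n-1}

# The iterates should squeeze toward each other, so the gap should be small.
print(float((beta - alpha).max()))
```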

The uniqueness of the solution to problem (1.1) is established in the following theorem.

Theorem 3.2

Assume that conditions \((H_{1})\) and \((H_{2})\) hold, and that there exists \(M_{1}>0\) such that

$$ f(t,x)-f(t,y)\leq M_{1}(x-y),$$
(3.2)

whenever \(\alpha_{0}(t)\leq y\leq x\leq \beta_{0}(t)\), where \(M_{1}\) satisfies

$$ (M+M_{1})r(S)< 1.$$
(3.3)

Then BVP (1.1) has a unique solution in Ω, i.e., \(\alpha^{*}=\beta^{*}\).

Proof

It follows from the proof of Theorem 3.1 that

$$\alpha^{*}=(S\mathbf{F})\alpha^{*},\qquad \beta^{*}=(S\mathbf{F})\beta^{*}. $$

Let \(u(t)=\beta^{*}(t)-\alpha^{*}(t)\). Then, by (3.2), we have \(u\in P\) and

$$\begin{aligned} u(t)&= \beta^{*}(t)-\alpha^{*}(t)= \int_{0}^{1}H(t,s) \bigl(f \bigl(s,\beta^{*}(s) \bigr)+M\beta^{*}(s)-f \bigl(s,\alpha^{*}(s) \bigr)-M\alpha^{*}(s) \bigr)\,ds \\ &\leq (M+M_{1}) (Su) (t), \quad t\in[0,1]. \end{aligned} $$

Applying mathematical induction and the positivity of S (Lemma 2.6), for \(n\in\mathbb{N}\), we get

$$u(t)\leq(M+M_{1})^{n} \bigl(S^{n}u \bigr) (t), \quad t \in[0,1]. $$

Since \(u\geq0\), the above inequality guarantees \(\|u\|\leq(M+M_{1})^{n}\|S^{n}\|\|u\|\) for \(n\in\mathbb{N}\). It is easy to see that

$$u(t)\equiv0 \quad \bigl( \text{or } \beta^{*}(t)=\alpha^{*}(t) \bigr),\quad t \in[0,1]. $$

In fact, \(\|u\|\neq0\) would imply \(1\leq(M+M_{1})^{n}\|S^{n}\|\) for \(n\in\mathbb{N}\), and consequently, by Gelfand's formula, \(1\leq\lim_{n\rightarrow\infty}\sqrt[n]{(M+M_{1})^{n}\|S^{n}\|}=(M+M_{1})r(S)\), in contradiction to (3.3). □

Remark 3.1

From the point of view of the differential equation, condition (3.3) is equivalent to the inequality \(M_{1}r(S_{1})<1 \), where \(S_{1}:C[0,1]\rightarrow C[0,1]\) is defined by

$$(S_{1}x) (t)= \int_{0}^{1}G(t,s)x(s)\,ds,\quad t\in[0,1], x\in C[0,1]. $$

At the end of this section, we give a rough estimate for \(r(S_{1})\). For \(x\in C[0,1]\), we have

$$\begin{aligned} \|S_{1}x\|&= \max_{0\leq t\leq1} |(S_{1}x) (t) | \leq \max_{0\leq t\leq1} \int_{0}^{1}G(t,s)\,ds \|x\| \\ &\leq \frac{(p-1)}{\Gamma(p)} \int_{0}^{1}s(1-s)^{p-1}\,ds \|x\| = \frac{(p-1)}{\Gamma(p+2)} \|x\|, \end{aligned} $$

which implies \(\|S_{1}\|\leq\frac{(p-1)}{\Gamma(p+2)}\), hence \(r(S_{1})\leq\|S_{1}\|\leq\frac{(p-1)}{\Gamma(p+2)}\). On the other hand, taking \(\psi(t)=t^{p-1}(1-t)\), by Lemma 2.3 we have

$$\begin{aligned} (S_{1}\psi) (t)&= \int_{0}^{1}G(t,s)\psi(s)\,ds \geq \frac{1}{\Gamma(p)} \int_{0}^{1}s^{p}(1-s)^{p}\,ds \cdot t^{p-1}(1-t) \\ &= \frac{B(p+1,p+1)}{\Gamma(p)} \psi(t),\quad t\in[0,1]. \end{aligned} $$

Since \(S_{1}\) is a positive operator, the inequality \(S_{1}\psi\geq\frac{B(p+1,p+1)}{\Gamma(p)}\psi\) yields \(S_{1}^{n}\psi\geq (\frac{B(p+1,p+1)}{\Gamma(p)} )^{n}\psi\), hence \(\|S_{1}^{n}\|\geq (\frac{B(p+1,p+1)}{\Gamma(p)} )^{n}\) and \(r(S_{1})=\lim_{n\rightarrow\infty}\sqrt[n]{\|S_{1}^{n}\|}\geq\frac{B(p+1,p+1)}{\Gamma(p)}\). So we have

$$\frac{B(p+1,p+1)}{\Gamma(p)}\leq r(S_{1})\leq \frac{(p-1)}{\Gamma(p+2)}. $$
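
Both sides of this estimate, as well as a direct numerical approximation of \(r(S_{1})\) obtained from the eigenvalues of the discretized kernel, are easy to compute. The sketch below (illustrative names and grid) does so for a given p.

```python
import numpy as np
from math import gamma

def bounds_and_estimate(p, n=400):
    """Evaluate B(p+1,p+1)/Gamma(p), (p-1)/Gamma(p+2), and a numerical
    approximation of r(S_1) via the discretized kernel G(t,s)."""
    lower = gamma(p + 1) ** 2 / (gamma(2 * p + 2) * gamma(p))   # B(p+1,p+1)/Gamma(p)
    upper = (p - 1) / gamma(p + 2)
    ts = (np.arange(n) + 0.5) / n
    T, Sg = np.meshgrid(ts, ts, indexing="ij")
    base = (1 - Sg) ** (p - 1) * T ** (p - 1)
    Gh = np.where(Sg <= T, base - np.maximum(T - Sg, 0) ** (p - 1), base) / (gamma(p) * n)
    r_est = max(abs(np.linalg.eigvals(Gh)))                     # dominant eigenvalue
    return lower, r_est, upper

print(bounds_and_estimate(p=2.5))   # expect: lower <= r_est <= upper
```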

4 Example

Consider the following problem:

$$ \left \{ \textstyle\begin{array}{l} D^{\frac{5}{2}}x(t)+\frac{\sqrt{\pi}}{112} (t^{2}-x(t) )^{3}-\frac {\sqrt{\pi}}{112}t^{2}x^{2}(t)=0,\quad t\in(0,1),\\ x(0)=x'(0)=0,\qquad x(1)=0. \end{array}\displaystyle \right .$$
(4.1)

Here \(f(t,x)=\frac{\sqrt{\pi}}{112} (t^{2}-x )^{3}-\frac{\sqrt{\pi}}{112}t^{2}x^{2}\). Take \(\alpha_{0}(t)=-t^{2}\), \(\beta_{0}(t)=t^{2}\); then

$$\begin{gathered} \left \{ \textstyle\begin{array}{l} D^{\frac{5}{2}}\alpha_{0}(t)+f(t,\alpha_{0})= \frac{\sqrt{\pi}}{112} (t^{2}+t^{2} )^{3}-\frac{\sqrt{\pi}}{112}t^{2}t^{4}=\frac{7\sqrt{\pi}}{112}t^{6}\geq0,\quad t\in(0,1),\\ \alpha_{0}(0)=\alpha_{0}'(0)=0,\qquad \alpha_{0}(1)=-1\leq0, \end{array}\displaystyle \right . \\ \left \{ \textstyle\begin{array}{l} D^{\frac{5}{2}}\beta_{0}(t)+f(t,\beta_{0})=-\frac{\sqrt{\pi}}{112}t^{2}t^{4}=-\frac{\sqrt{\pi}}{112}t^{6}\leq0,\quad t\in(0,1),\\ \beta_{0}(0)=\beta_{0}'(0)=0,\qquad \beta_{0}(1)=1\geq0. \end{array}\displaystyle \right . \end{gathered} $$

This shows that condition \((H_{1})\) of Theorem 3.2 holds. On the other hand, it is easy to verify that condition \((H_{2})\) and (3.2) hold for \(M=M_{1}=\frac{\sqrt{\pi}}{8}\), and that \(r(S_{1})\leq\|S_{1}\|\leq\frac{4}{35\sqrt{\pi}}\).
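
These claims can also be double-checked numerically. The sketch below is illustrative (the grid is coarse and the tolerances are ours): it evaluates conditions (2.3) and (2.12), checks (2.9) through the equivalent inequality \(K_{1}>0\), and spot-checks the one-sided bounds of \((H_{2})\) and (3.2) on the sector \(-t^{2}\leq y\leq x\leq t^{2}\).

```python
from math import gamma, pi, sqrt

p, M, M1 = 2.5, sqrt(pi) / 8, sqrt(pi) / 8

def f(t, x):
    return sqrt(pi) / 112 * (t ** 2 - x) ** 3 - sqrt(pi) / 112 * t ** 2 * x ** 2

# Conditions (2.3) and (2.12).
c23 = M * (p - 1) / gamma(p + 2)                       # should be < 1
K = M * (p - 1) / gamma(p) / (1 - c23 ** 2)            # should be < 1
# Condition (2.9), checked through the equivalent inequality K1 > 0.
K1 = (1 / gamma(p) - M * (p - 1) ** 2 / gamma(p) ** 2
      - K * M ** 2 * (p - 1) ** 2 / (gamma(p) ** 2 * p ** 2 * (p + 1) ** 2))
print(c23 < 1, K < 1, K1 > 0)                          # expected: True True True

# Spot-check (H2) and (3.2): -M(x - y) <= f(t, x) - f(t, y) <= M1(x - y)
# for -t^2 <= y <= x <= t^2, on a coarse grid.
ok = True
N = 60
for i in range(N + 1):
    t = i / N
    for j in range(N + 1):
        for k in range(j, N + 1):
            y = -t ** 2 + 2 * t ** 2 * j / N
            x = -t ** 2 + 2 * t ** 2 * k / N
            d = f(t, x) - f(t, y)
            ok &= (-M * (x - y) - 1e-12 <= d <= M1 * (x - y) + 1e-12)
print(ok)                                              # expected: True
```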

Therefore, by Theorem 3.2, there exist iterative sequences \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) which converge uniformly to the unique solution of (4.1) in \([\alpha_{0},\beta_{0}]\).