1 Introduction

This paper is concerned with the eigenvalue intervals and positive solutions of an integral boundary value problem for the following higher-order nonlinear fractional differential equation with impulses (abbreviated as BVP (1.1) throughout this paper):

$$ \left \{\textstyle\begin{array}{@{}l} -D_{0^{+}}^{\alpha}u(t)=\lambda a(t)f(t,u(t)),\quad t\in(0,1)\setminus\{ t_{k}\}_{k=1}^{m},\\ \Delta u(t_{k})=I_{k}(u(t_{k})),\quad t=t_{k},\\ u(0)=u'(0)=\cdots=u^{(n-2)}(0)=0,\qquad u'(1)=\int_{0}^{1}u(s)\,dH(s), \end{array}\displaystyle \right . $$
(1.1)

where \(D_{0^{+}}^{\alpha}\) is the standard Riemann-Liouville fractional derivative of order α with \(n-1<{\alpha}\leq n\), \(n\geq3\), so that n is the smallest integer greater than or equal to α. The impulsive points \(\{t_{k}\}_{k=1}^{m}\) satisfy \(0=t_{0}< t_{1}<\cdots<t_{m}<t_{m+1}=1\), and \(\Delta u(t_{k})=u(t_{k}^{+})-u(t_{k}^{-})\), where \(u(t_{k}^{+})=\lim_{h\to0^{+}}u(t_{k}+h)\) and \(u(t_{k}^{-})=\lim_{h\to0^{+}}u(t_{k}-h)\) denote the right- and left-hand limits of \(u(t)\) at \(t=t_{k}\), respectively, and \(u(t_{k}^{-})=u(t_{k})\). Moreover, \(\lambda>0\) is a parameter, \(f\in C([0,1]\times[0,+\infty),[0,+\infty))\), \(a\in C((0,1),[0,+\infty))\), and \(I_{k}\in C(\mathbb{R}^{+},\mathbb{R}^{+})\). The integral \(\int_{0}^{1}u(s)\,dH(s)\) is the Riemann-Stieltjes integral with respect to \(H :[0,1]\to\mathbb{R}\). By applying the Schauder fixed-point theorem and the fixed-point index theorem, we obtain some sufficient conditions for the existence and multiplicity of positive solutions of BVP (1.1). The corresponding eigenvalue intervals are also given.

During the past decades, the subject of fractional differential equations has attracted great attention due to both the development of fractional calculus theory and its important applications in science and engineering, such as physics, chemistry, aerodynamics, electrodynamics of complex media, polymer rheology, Bode’s analysis of feedback amplifiers, capacitor theory, electrical circuits, electro-analytical chemistry, biology, control theory, and the fitting of experimental data. Fractional derivatives provide an excellent tool for the description of memory and hereditary properties of various materials and processes; this is the main advantage of fractional differential equations in comparison with classical integer-order models. In particular, many papers focus on the existence or multiplicity of positive solutions for boundary value problems of fractional ordinary differential equations (see [1–20]).

However, only a few papers [14–16] consider the existence or multiplicity of positive solutions for fractional differential equations involving eigenvalue parameters. For example, Bai [14] considered the following fractional ordinary differential equation boundary value problem:

$$\begin{aligned} \left \{ \textstyle\begin{array}{@{}l} {}^{c}D_{0+}^{s}u(t)+\lambda h(t)f(u)=0, \quad 0< t< 1, \\ u(0)=u'(1)=u''(0)=0, \end{array}\displaystyle \right . \end{aligned}$$

where \(2< s\leq3\) is a real number, \(^{c}D_{0+}^{s}\) is the standard Caputo fractional derivative, and \(\lambda>0\). By applying a fixed-point theorem on a cone, sufficient conditions for multiplicity and the corresponding eigenvalue intervals were established.

In order to describe the dynamics of populations subject to abrupt changes and other phenomena such as harvesting and diseases, impulsive differential systems have been used since the last century. Recently, some scholars have begun to study boundary value problems for impulsive fractional differential equations (see [21–27]), and this type of problem has become a very active research topic.

To the best of our knowledge, there has been little research on the eigenvalue intervals and positive solutions of Riemann-Stieltjes integral boundary value problems for higher-order nonlinear fractional differential equations with impulses. Therefore, we investigate the existence and multiplicity of positive solutions for BVP (1.1) under some further conditions.

The rest of this paper is organized as follows. In Section 2, we recall some useful definitions and results and present the Green functions and their properties. In Section 3, we give some sufficient conditions for the existence of a single positive solution for BVP (1.1). In Section 4, some sufficient conditions are established to guarantee the existence of multiple positive solutions for BVP (1.1). Finally, some examples are provided in Section 5 to illustrate the validity of our main results.

2 Preliminaries

For convenience, now we introduce some definitions and results of fractional calculus.

Definition 2.1

(see [28, 29])

The Riemann-Liouville fractional integral of order \(\alpha>0\) of a function \(u:(0,\infty)\to\mathbb{R}\) is given by

$$I_{0+}^{\alpha}u(t)=\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}u(s)\,ds, $$

provided that the right-hand side is pointwise defined on \((0,\infty)\).

Definition 2.2

(see [28, 29])

The Riemann-Liouville fractional derivative of order \(\alpha>0\) of a continuous function \(u:(0,\infty)\to\mathbb{R}\) is given by

$$D_{0+}^{\alpha}u(t)=\frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}} \int _{0}^{t}(t-s)^{n-\alpha-1}u(s)\,ds, $$

where \(n-1<{\alpha}\leq n\), provided that the right-hand side is pointwise defined on \((0,\infty)\).
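For orientation, recall the classical power-function formulas of fractional calculus: for \(\beta>-1\),

$$I_{0+}^{\alpha}t^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\beta+\alpha+1)}t^{\beta+\alpha}, \qquad D_{0+}^{\alpha}t^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}t^{\beta-\alpha}. $$

For instance, with \(\alpha=\frac{5}{2}\) (the value used in the examples of Section 5), \(D_{0+}^{5/2}t^{3}=\frac{\Gamma(4)}{\Gamma(\frac{3}{2})}t^{\frac{1}{2}}=\frac{12}{\sqrt{\pi}}t^{\frac{1}{2}}\).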

Lemma 2.1

(see [28])

Assume that \(u\in C(0,1)\cap L(0,1)\) has a fractional derivative of order \(\alpha>0\) that belongs to \(C(0,1)\cap L(0,1)\). Then

$$I_{0+}^{\alpha}D_{0+}^{\alpha}u(t)=u(t)+C_{1}t^{\alpha-1}+C_{2}t^{\alpha -2}+ \cdots+C_{n}t^{\alpha-n} $$

for some \(C_{i} \in\mathbb{R}\), \(i=1,2,\ldots,n\), where n is the smallest integer greater than or equal to α.
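In particular, by the power-function formula above, each of the functions \(t^{\alpha-i}\) is annihilated by \(D_{0+}^{\alpha}\):

$$D_{0+}^{\alpha}t^{\alpha-i}=0, \quad i=1,2,\ldots,n, $$

because the coefficient \(\frac{\Gamma(\alpha-i+1)}{\Gamma(1-i)}\) vanishes at the poles \(1-i=0,-1,\ldots,1-n\) of the Gamma function. This explains why exactly n free constants \(C_{1},\ldots,C_{n}\) appear in Lemma 2.1.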

Lemma 2.2

(Schauder fixed-point theorem, see [30])

If U is a closed bounded convex subset of a Banach space X and \(T :U \to U\) is completely continuous, then T has at least one fixed point in U.

Lemma 2.3

(Fixed-point index theorem, see [31])

Let E be a Banach space, and \(P\subset E\) be a cone. For \(r>0\), define \(\Omega_{r}=\{u\in P:\|u\|< r\}\). Assume that \(A:\overline{\Omega}_{r}\to P\) is a completely continuous operator such that \(Au\neq u\) for \(u\in\partial\Omega_{r}=\{u\in P:\|u\|=r\}\).

  1. (1)

    If \(\|Au\| \geq\|u\|\) for \(u\in\partial\Omega_{r}\), then \(i(A,\Omega_{r},P)=0\).

  2. (2)

    If \(\|Au\| \leq\|u\|\) for \(u\in\partial\Omega_{r}\), then \(i(A,\Omega_{r},P)=1\).
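For later use, note the standard consequence of Lemma 2.3 (recalled here only as an illustration): if \(0<r<R\), \(\|Au\|\geq\|u\|\) on \(\partial\Omega_{r}\), and \(\|Au\|\leq\|u\|\) on \(\partial\Omega_{R}\) (with \(Au\neq u\) on both boundaries), then the additivity of the fixed-point index gives

$$i(A,\Omega_{R}\setminus\overline{\Omega}_{r},P)=i(A,\Omega_{R},P)-i(A,\Omega_{r},P)=1-0=1, $$

so A has a fixed point in \(\Omega_{R}\setminus\overline{\Omega}_{r}\). This is the computation carried out in (4.9) below (and, with the roles of the boundary inequalities reversed, in (4.10)).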

Now we present the Green function for the system associated with BVP (1.1).

Lemma 2.4

If \(H:[0,1]\to\mathbb{R}\) is a function of bounded variation with \(\delta\triangleq\int_{0}^{1}s^{\alpha-1}\,dH(s)\neq\alpha-1\) and \(h\in C([0,1])\), then the unique solution of

$$ \left \{\textstyle\begin{array}{@{}l} D_{0^{+}}^{\alpha}u(t)+h(t)=0, \quad t\in(0,1)\setminus\{t_{k}\}_{k=1}^{m}, n-1< \alpha\leq n, n\geq3,\\ \Delta u(t_{k})=I_{k}(u(t_{k})),\quad t=t_{k},\\ u(0)=u'(0)=\cdots=u^{(n-2)}(0)=0, \qquad u'(1)=\int_{0}^{1}u(s)\,dH(s), \end{array}\displaystyle \right . $$
(2.1)

is

$$ u(t)= \int_{0}^{1}G(t,s)h(s)\,ds+t^{\alpha-1}\sum _{t\leq t_{k}< 1}t_{k}^{1-\alpha}I_{k} \bigl(u(t_{k}) \bigr),\quad t\in[0,1], $$
(2.2)

where

$$\begin{aligned}& G(t,s)=G_{1}(t,s)+G_{2}(t,s), \end{aligned}$$
(2.3)
$$\begin{aligned}& G_{1}(t,s)=\left \{\textstyle\begin{array}{@{}l@{\quad}l} \frac{t^{\alpha-1}(1-s)^{\alpha-2}-(t-s)^{\alpha-1}}{\Gamma(\alpha)}, & 0\leq s\leq t\leq1, \\ \frac{t^{\alpha-1}(1-s)^{\alpha-2}}{\Gamma(\alpha)}, & 0\leq t\leq s\leq1, \end{array}\displaystyle \right . \end{aligned}$$
(2.4)
$$\begin{aligned}& G_{2}(t,s)=\frac{t^{\alpha-1}}{\alpha-1-\delta} \int_{0}^{1}G_{1}(\tau,s)\,dH( \tau). \end{aligned}$$
(2.5)

Proof

We denote the solution of (2.1) by \(u(t)\triangleq u_{k}(t)\) in \([t_{k}, t_{k+1}]\) (\(k=0,1,\ldots,m\)).

For \(t\in[0,t_{1})\), applying Lemma 2.1, we have

$$\begin{aligned} u_{0}(t) =-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha -1}h(s)\,ds+C_{1}^{0}t^{\alpha-1}+C^{0}_{2}t^{\alpha-2}+ \cdots+C^{0}_{n}t^{\alpha-n}. \end{aligned}$$

In the light of \(u(0)=u'(0)=u''(0)=\cdots=u^{(n-2)}(0)=0\), we have \(C^{0}_{2}=C^{0}_{3}=\cdots= C^{0}_{n}=0\). Thus, we get

$$\begin{aligned} u_{0}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds+C_{1}^{0}t^{\alpha-1} \end{aligned}$$

and

$$\begin{aligned} u \bigl(t_{1}^{-} \bigr)+I_{1} \bigl(u({t_{1}}) \bigr)=u(t_{1})+I_{1} \bigl(u({t_{1}}) \bigr)=u \bigl(t_{1}^{+} \bigr)= - \int_{0}^{t_{1}}\frac{(t_{1}-s)^{\alpha-1}}{\Gamma(\alpha )}h(s)\,ds+C_{1}^{0}t_{1}^{\alpha-1}. \end{aligned}$$

For \(t\in[t_{1},t_{2})\), by applying Lemma 2.1 we have

$$\begin{aligned} u_{1}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha -1}h(s)\,ds+C_{1}^{1}t^{\alpha-1}+C^{1}_{2}t^{\alpha-2}+ \cdots+C^{1}_{n}t^{\alpha-n}. \end{aligned}$$

In view of \(u(0)=u'(0)=u''(0)=\cdots=u^{(n-2)}(0)=0\), we have \(C^{1}_{2}=C^{1}_{3}=\cdots=C^{1}_{n}=0\). Thus, we get

$$\begin{aligned} u_{1}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds+C_{1}^{1}t^{\alpha-1}. \end{aligned}$$

Noting that \(u(t_{1})=u_{1}(t_{1})\), we derive \(C_{1}^{1}=C_{1}^{0}-t_{1}^{1-\alpha}I_{1}(u({t_{1}}))\). So we obtain

$$\begin{aligned} u_{1}(t)=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha -1}h(s)\,ds+C_{1}^{0}t^{\alpha-1}-t^{\alpha-1}t_{1}^{1-\alpha}I_{1}\bigl(u({t_{1}})\bigr) \end{aligned}$$

and

$$\begin{aligned} u \bigl(t_{2}^{-} \bigr)+I_{2} \bigl(u(t_{2}) \bigr)=u(t_{2})+I_{2} \bigl(u(t_{2}) \bigr)=u \bigl(t_{2}^{+} \bigr) =- \int_{0}^{t_{2}}\frac{(t_{2}-s)^{\alpha-1}}{\Gamma(\alpha )}h(s)\,ds+C^{1}_{1}t_{2}^{\alpha-1}. \end{aligned}$$

For \(t\in[t_{2},t_{3})\), by applying Lemma 2.1 we have

$$\begin{aligned} u_{2}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha -1}h(s)\,ds+C_{1}^{2}t^{\alpha-1}+C^{2}_{2}t^{\alpha-2}+ \cdots+C^{2}_{n}t^{\alpha-n}. \end{aligned}$$

In view of \(u(0)=u'(0)=u''(0)=\cdots=u^{(n-2)}(0)=0\), we have \(C^{2}_{2}=C^{2}_{3}=\cdots= C^{2}_{n} =0\). Thus, we get

$$\begin{aligned} u_{2}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds+C_{1}^{2}t^{\alpha-1}. \end{aligned}$$

Noting that \(u(t_{2})=u_{2}(t_{2})\), we derive \(C_{1}^{2}=C_{1}^{1}-t_{2}^{1-\alpha}I_{2}(u({t_{2}}))\). So we obtain

$$\begin{aligned} u_{2}(t)=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha -1}h(s)\,ds+C_{1}^{0}t^{\alpha-1}-t^{\alpha-1} \sum_{i=1}^{2}t_{i}^{1-\alpha}I_{i} \bigl(u({t_{i}}) \bigr) \end{aligned}$$

and

$$\begin{aligned} u \bigl(t_{3}^{-} \bigr)+I_{3} \bigl(u(t_{3}) \bigr)=u(t_{3})+I_{3} \bigl(u(t_{3}) \bigr)=u \bigl(t_{3}^{+} \bigr) =- \int_{0}^{t_{3}}\frac{(t_{3}-s)^{\alpha-1}}{\Gamma(\alpha )}h(s)\,ds+C_{1}^{2}t_{3}^{\alpha-1}. \end{aligned}$$

Repeating this procedure and applying Lemma 2.1, for \(t\in[t_{k},t_{k+1})\) (\(k=0, 1, 2, \ldots, m\)), we have

$$\begin{aligned} u(t)=u_{k}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds+C_{1}^{0}t^{\alpha -1}-t^{\alpha-1} \sum_{i=1}^{k}t_{i}^{1-\alpha}I_{i} \bigl(u({t_{i}}) \bigr). \end{aligned}$$
(2.6)

Thus, for \(t\in[t_{m},t_{m+1}]=[t_{m},1]\), we have

$$\begin{aligned} u(t)=u_{m}(t) &=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds+C_{1}^{0}t^{\alpha -1}-t^{\alpha-1} \sum_{i=1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u({t_{i}}) \bigr). \end{aligned}$$
(2.7)

From \(u'(1)=\int_{0}^{1} u(s)\,dH(s)\) we obtain

$$\begin{aligned} C_{1}^{0}=&\frac{1}{\Gamma(\alpha)} \int_{0}^{1}(1-s)^{\alpha-2}h(s)\,ds+ \frac {1}{\alpha-1} \int_{0}^{1}u(s)\,dH(s)+\sum _{i=1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr). \end{aligned}$$
(2.8)

By (2.7) and (2.8), for \(t\in[t_{m},t_{m+1}]=[t_{m},1]\), we get

$$\begin{aligned} u(t)&=-\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds + \frac{t^{\alpha-1}}{\Gamma(\alpha)} \int_{0}^{1}(1-s)^{\alpha-2}h(s)\,ds + \frac{t^{\alpha-1}}{\alpha-1} \int_{0}^{1}u(s)\,dH(s) \\ &= \int_{0}^{1}G_{1}(t,s)h(s)\,ds+ \frac{t^{\alpha-1}}{\alpha-1} \int_{0}^{1}u(s)\,dH(s), \end{aligned}$$
(2.9)

which implies that

$$\begin{aligned} \int_{0}^{1}u(s)\,dH(s)=\frac{\alpha-1}{\alpha-1-\delta} \int_{0}^{1} \biggl[ \int _{0}^{1}G_{1}(\tau,s)h(s)\,ds \biggr]\,dH( \tau). \end{aligned}$$
(2.10)

According to (2.6), (2.8), and (2.10), for \(t\in[t_{k},t_{k+1})\) (\(k=0, 1, 2, \ldots, m\)), the unique solution of BVP (2.1) is formulated by

$$\begin{aligned} u(t)={}&{-}\frac{1}{\Gamma(\alpha)} \int_{0}^{t}(t-s)^{\alpha-1}h(s)\,ds + \frac{t^{\alpha-1}}{\Gamma(\alpha)} \int_{0}^{1}(1-s)^{\alpha-2}h(s)\,ds \\ &{}+\frac{t^{\alpha-1}}{\alpha-1} \int_{0}^{1}u(s)\,dH(s)+t^{\alpha -1}\sum _{i=k+1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr) \\ ={}& \int_{0}^{1}G_{1}(t,s)h(s)\,ds+ \frac{t^{\alpha-1}}{\alpha-1} \int _{0}^{1}u(s)\,dH(s)+t^{\alpha-1}\sum _{i=k+1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr) \\ ={}& \int_{0}^{1}G_{1}(t,s)h(s)\,ds+ \frac{t^{\alpha-1}}{\alpha-1-\delta } \int_{0}^{1} \biggl[ \int_{0}^{1}G_{1}(\tau,s)h(s)\,ds \biggr]\,dH( \tau) \\ &{}+t^{\alpha-1}\sum_{i=k+1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr) \\ ={}& \int_{0}^{1}G_{1}(t,s)h(s)\,ds+ \int_{0}^{1} \biggl[\frac{t^{\alpha -1}}{\alpha-1-\delta} \int_{0}^{1}G_{1}(\tau,s)\,dH(\tau) \biggr]h(s)\,ds \\ &{}+t^{\alpha-1}\sum_{i=k+1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr) \\ ={}& \int_{0}^{1}G_{1}(t,s)h(s)\,ds+ \int_{0}^{1}G_{2}(t,s)h(s)\,ds +t^{\alpha-1}\sum_{i=k+1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr) \\ ={}& \int_{0}^{1}G(t,s)h(s)\,ds+t^{\alpha-1}\sum _{i=k+1}^{m}t_{i}^{1-\alpha}I_{i} \bigl(u(t_{i}) \bigr). \end{aligned}$$
(2.11)

Therefore, for \(t\in[0,1]\), the unique solution of BVP (2.1) is expressed as

$$\begin{aligned} u(t)= \int_{0}^{1}G(t,s)h(s)\,ds+t^{\alpha-1}\sum _{t\leq t_{k}< 1}t_{k}^{1-\alpha}I_{k} \bigl(u(t_{k}) \bigr), \end{aligned}$$
(2.12)

where \(G(t,s)\), \(G_{1}(t,s)\), and \(G_{2}(t,s)\) are defined by (2.3), (2.4), and (2.5), respectively. The proof is complete. □
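The representation (2.2) can be checked numerically. The following sketch (an illustration only) uses the assumed data \(\alpha=\frac{5}{2}\), \(H(s)=\frac{5}{2}s\) (so \(\delta=1\)), \(h\equiv1\), and no impulses, evaluates \(u(t)=\int_{0}^{1}G(t,s)h(s)\,ds\), and checks that \(u(0)=0\) and that \(u'(1)\approx\int_{0}^{1}u(s)\,dH(s)\); none of these concrete choices are part of Lemma 2.4.

```python
# Numerical sanity check of (2.2) in the impulse-free case with h = 1.
# Assumed illustrative data (not part of Lemma 2.4): alpha = 5/2 and
# H(s) = (5/2)s, so dH(s) = (5/2) ds and delta = 1.
from scipy.integrate import quad
from scipy.special import gamma

alpha = 2.5
c = 2.5                                   # H(s) = c*s
delta = c / alpha                         # int_0^1 s^(alpha-1) dH(s) = c/alpha

def G1(t, s):
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 2)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def G2(t, s):
    inner = quad(lambda tau: G1(tau, s), 0, 1)[0]        # int_0^1 G1(tau, s) dtau
    return t ** (alpha - 1) * c * inner / (alpha - 1 - delta)

def u(t):                                 # u(t) = int_0^1 [G1 + G2](t, s) * 1 ds
    return quad(lambda s: G1(t, s) + G2(t, s), 0, 1)[0]

eps = 1e-5
u_prime_at_1 = (u(1.0) - u(1.0 - eps)) / eps             # one-sided difference at t = 1
stieltjes = c * quad(u, 0, 1)[0]                          # int_0^1 u(s) dH(s)
print(u(0.0))                             # expect 0
print(u_prime_at_1, stieltjes)            # expect these two values to be close
```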

From (2.3), (2.4), and (2.5) we can prove that \(G(t,s)\), \(G_{1}(t,s)\), and \(G_{2}(t,s)\) have the following properties.

Lemma 2.5

The function \(G_{1}(t,s)\) defined by (2.4) satisfies

  1. (i)

    \(G_{1}(t,s)\geq0\) is continuous for all \(t, s\in [0,1]\), and \(G_{1}(t,s)>0\) for all \(t, s\in(0,1)\);

  2. (ii)

    For all \(t, s\in[0,1]\), \(G_{1}(t,s)\) is increasing with respect to t, and \(G_{1}(t,s)\leq g_{1}(s)\triangleq\frac{s(1-s)^{\alpha-2}}{\Gamma(\alpha)}\);

  3. (iii)

    For \(\theta\in(0,\frac{1}{2})\), there exists a constant \(\gamma>0\) such that \(\min_{t\in J_{\theta}}G_{1}(t,s)\geq \gamma g_{1}(s)\) for \(s\in[0,1]\), where \(J_{\theta}\triangleq[\theta,1-\theta]\).

Proof

(i) It is obvious that \(G_{1}(t,s)\) is continuous on \([0,1]\times[0,1]\), and \(G_{1}(t,s)\geq0\) for \(s\geq t\). For \(0\leq s< t\leq1\), noting that \(0< t-s\leq t(1-s)\) and \(t-s\leq t\), we have

$$\begin{aligned} t^{\alpha-1}(1-s)^{\alpha-2}-(t-s)^{\alpha-1}=t^{\alpha-1}(1-s)^{\alpha -2} \biggl[1- \biggl(\frac{t-s}{t(1-s)} \biggr)^{\alpha-2}\frac{t-s}{t} \biggr]\geq0. \end{aligned}$$
(2.13)

So, by (2.4) we get \(G_{1}(t,s)\geq0\) for all \(t, s\in[0,1]\). Similarly, for \(s, t\in(0,1)\), we obtain \(G_{1}(t,s)>0\).

(ii) In fact,

$$\begin{aligned} \frac{\partial G_{1}}{\partial t} =\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{t^{\alpha-2}(1-s)^{\alpha-2}}{\Gamma(\alpha-1)}\geq0, & 0\leq t\leq s\leq1, \\ \frac{t^{\alpha-2}(1-s)^{\alpha-2}}{\Gamma(\alpha-1)} [1- (\frac{t-s}{t(1-s)} )^{\alpha-2} ]\geq0, & 0\leq s\leq t\leq1. \end{array}\displaystyle \right . \end{aligned}$$
(2.14)

From (2.14) we see that \(G_{1}(t,s)\) is increasing with respect to t. For \(0\leq s\leq t\leq1\), \(G_{1}(t,s)\leq G_{1}(1,s)=\frac{s(1-s)^{\alpha-2}}{\Gamma(\alpha)}=g_{1}(s)\); for \(0\leq t\leq s\leq1\), \(G_{1}(t,s)\leq G_{1}(s,s)=\frac{s^{\alpha-1}(1-s)^{\alpha-2}}{\Gamma(\alpha)}\leq\frac {s(1-s)^{\alpha-2}}{\Gamma(\alpha)}=g_{1}(s)\). Therefore, for all \(t, s\in[0,1]\), we have \(G_{1}(t,s)\leq g_{1}(s)=\frac{s(1-s)^{\alpha-2}}{\Gamma(\alpha)}\).

(iii) For \(t\in J_{\theta}\), we divide the proof into the following three cases for \(s\in[0,1]\).

Case 1. If \(s\in J_{\theta}\), then from (i) of Lemma 2.5 we have \(G_{1}(t,s)>0\) and \(g_{1}(s)>0\) for all \(t,s\in J_{\theta}\). Since \(G_{1}(t,s)/g_{1}(s)\) is positive and continuous on the compact set \(J_{\theta}\times J_{\theta}\), there exists a constant \(1>\gamma_{1}>0\) such that

$$\begin{aligned} G_{1}(t,s)\geq\gamma_{1}g_{1}(s) \quad \forall t, s\in J_{\theta}. \end{aligned}$$
(2.15)

Case 2. If \(s\in[1-\theta,1]\), then from (2.4) and (ii) of Lemma 2.5 we get

$$\begin{aligned} \min_{t\in J_{\theta}}G_{1}(t,s)=\min_{t\in J_{\theta}} \frac{t^{\alpha-1}(1-s)^{\alpha-2}}{\Gamma(\alpha)}=\frac {\theta^{\alpha-1}(1-s)^{\alpha-2}}{\Gamma(\alpha)}. \end{aligned}$$

Thus, we have

$$\begin{aligned} \frac{\min_{t\in J_{\theta}}G_{1}(t,s)}{g_{1}(s)}=\frac{\theta^{\alpha-1}}{s}\geq\theta ^{\alpha-1}, \end{aligned}$$

that is,

$$\begin{aligned} \min_{t\in J_{\theta}}G_{1}(t,s)\geq \theta^{\alpha-1}g_{1}(s). \end{aligned}$$
(2.16)

Case 3. If \(s\in[0,\theta]\), then from (2.4) and (ii) of Lemma 2.5 we obtain

$$\begin{aligned} \min_{t\in J_{\theta}}G_{1}(t,s)=\min_{t\in J_{\theta}} \frac{t^{\alpha-1}(1-s)^{\alpha-2}-(t-s)^{\alpha-1}}{\Gamma (\alpha)} =\frac{\theta^{\alpha-1}(1-s)^{\alpha-2}-(\theta-s)^{\alpha-1}}{\Gamma (\alpha)}. \end{aligned}$$

It is clear that

$$\begin{aligned} 0< \min_{t\in J_{\theta}}G_{1}(t,s)\leq1, \quad 0< g_{1}(s)\leq1\ \forall s\in(0,\theta], t\in J_{\theta}, \end{aligned}$$
(2.17)

and, by L'Hôpital's rule,

$$\begin{aligned} &\lim_{s\to0}\frac{\min_{t\in J_{\theta}}G_{1}(t,s)}{g_{1}(s)} \\ &\quad=\lim _{s\to0}\frac{\theta^{\alpha-1}-(\theta -s)^{\alpha-1}(1-s)^{2-\alpha}}{s} \\ &\quad=\lim_{s\to0} \bigl[(\alpha-1) (\theta-s)^{\alpha -2}(1-s)^{2-\alpha}+(2- \alpha) (\theta-s)^{\alpha-1}(1-s)^{1-\alpha} \bigr] \\ &\quad=\theta^{\alpha-2} \bigl[(\alpha-1)-\theta(\alpha-2) \bigr]>0. \end{aligned}$$
(2.18)

In the light of (2.17) and (2.18), we conclude that there exists a constant \(1>\gamma_{2}>0\) such that

$$\begin{aligned} G_{1}(t,s)\geq\gamma_{2}g_{1}(s) \quad \forall s\in[0,\theta], t\in J_{\theta}. \end{aligned}$$
(2.19)

Taking \(\gamma=\min\{\gamma_{1}, \gamma_{2}, \theta^{\alpha-1}\}\) and applying (2.15), (2.16), and (2.19), we get that (iii) of Lemma 2.5 holds. The proof is complete. □
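The proof does not give γ explicitly; a rough value can be estimated numerically by minimizing \(G_{1}(t,s)/g_{1}(s)\) over a grid. The sketch below assumes the concrete values \(\alpha=\frac{5}{2}\) and \(\theta=\frac{1}{4}\), which are illustrative choices only.

```python
# Rough grid estimate of gamma in Lemma 2.5(iii) for the assumed values
# alpha = 5/2 and theta = 1/4 (illustrative choices only).
import numpy as np
from scipy.special import gamma

alpha, theta = 2.5, 0.25

def G1(t, s):
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 2)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def g1(s):
    return s * (1 - s) ** (alpha - 2) / gamma(alpha)

ts = np.linspace(theta, 1 - theta, 201)           # t ranges over J_theta
ss = np.linspace(1e-4, 1 - 1e-4, 201)             # avoid s = 0, 1, where g1 vanishes
gamma_est = min(min(G1(t, s) for t in ts) / g1(s) for s in ss)
print(gamma_est)     # a positive estimate; here it is close to theta**(alpha-1) = 0.125
```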

Lemma 2.6

If \(0\leq\delta\triangleq\int_{0}^{1}s^{\alpha-1}\,dH(s)<\alpha-1\), then we have

  1. (i)

    \(G_{2}(t,s)\geq0\) is continuous for all \(t, s\in [0,1]\), and \(G_{2}(t,s)>0\) for all \(t, s\in(0,1)\);

  2. (ii)

    \(G_{2}(t,s)\leq\frac{1}{\alpha-1-\delta}\int _{0}^{1}G_{1}(\tau,s)\,dH(\tau)\) for all \(t\in [0,1]\), \(s\in(0,1)\).

Employing the properties of \(G_{1}(t,s)\) and the definition of \(G_{2}(t,s)\), it is easy to show that (i) and (ii) of Lemma 2.6 hold, so we omit the proof.

Lemma 2.7

If \(0\leq\delta\triangleq\int_{0}^{1}s^{\alpha-1}\,dH(s)<\alpha-1\), then the function \(G(t,s)\) defined by (2.3) satisfies

  1. (i)

    \(G(t,s)\geq0\) is continuous for all \(t, s\in [0,1]\), and \(G(t,s)>0\) for all \(t, s\in(0,1)\);

  2. (ii)

    \(G(t,s)\leq g(s)\) for all \(t, s\in[0,1]\), and

    $$\begin{aligned} \min_{t\in[\theta,1-\theta]}G(t,s)\geq\sigma g(s) \quad\forall s \in[0,1], \end{aligned}$$
    (2.20)

    where

    $$\begin{aligned} \sigma=\min \bigl\{ \gamma,\theta^{\alpha-1} \bigr\} ,\qquad g(s)=g_{1}(s)+G_{2}(1,s) \end{aligned}$$
    (2.21)

    with γ defined in Lemma  2.5.

Proof

(i) From Lemma 2.5 and Lemma 2.6 we obtain that \(G(t,s)\geq0\) is continuous for all \(t, s\in[0,1]\) and \(G(t,s)>0\) for all \(t, s\in(0,1)\).

(ii) From (ii) of Lemma 2.5 and (ii) of Lemma 2.6 we have that \(G(t,s)\leq g(s)\) for all \(t, s\in[0,1]\). Now, we show (2.20). Indeed, from Lemma 2.6 we have

$$\begin{aligned} \min_{t\in J_{\theta}}G(t,s)&\geq\gamma g_{1}(s)+ \frac{\theta^{\alpha-1}}{\alpha-1-\delta} \int_{0}^{1}G_{1}(\tau ,s)\,dH(\tau) \\ &\geq\sigma \biggl[g_{1}(s)+\frac{1}{\alpha-1-\delta} \int_{0}^{1}G_{1}(\tau ,s)\,dH(\tau) \biggr] =\sigma g(s) \quad \forall s\in[0,1]. \end{aligned}$$
(2.22)

Then the proof of Lemma 2.7 is completed. □
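Property (2.20) can likewise be checked on a grid once the data are fixed. The following sketch assumes \(\alpha=\frac{5}{2}\), \(\theta=\frac{1}{4}\), and \(H(s)=\frac{5}{2}s\) (so \(\delta=1\)); these are illustrative assumptions, not requirements of Lemma 2.7.

```python
# Grid check of (2.20) for the assumed data alpha = 5/2, theta = 1/4,
# H(s) = (5/2)s (illustrative choices only).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, theta, c = 2.5, 0.25, 2.5
delta = c / alpha                                 # = 1 here

def G1(t, s):
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 2)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def g1(s):
    return s * (1 - s) ** (alpha - 2) / gamma(alpha)

def K(s):                                         # G2(t, s) = t**(alpha-1) * K(s)
    return c * quad(lambda tau: G1(tau, s), 0, 1)[0] / (alpha - 1 - delta)

ts = np.linspace(theta, 1 - theta, 101)
ss = np.linspace(1e-4, 1 - 1e-4, 101)
gamma_est = min(min(G1(t, s) for t in ts) / g1(s) for s in ss)
sigma = min(gamma_est, theta ** (alpha - 1))      # sigma as in (2.21), with gamma estimated

def holds(s):
    Ks = K(s)
    return min(G1(t, s) + t ** (alpha - 1) * Ks for t in ts) >= sigma * (g1(s) + Ks)

print(all(holds(s) for s in ss))                  # expect True
```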

Let \(J_{\theta}\triangleq[\theta,1-\theta]\) for \(\theta\in (0,\frac{1}{2})\), and let \(E=C([0,1])\) be the real Banach space with the norm \(\|u\|= \max_{0\leq t\leq1} |u(t) |\). Let

$$\begin{aligned}& \begin{aligned}[b] PC \bigl([0,1] \bigr)\triangleq& \bigl\{ u\in E |u:[0,1] \to[0,+ \infty), u \bigl(t_{k}^{-} \bigr) \mbox{ and } u \bigl(t_{k}^{+} \bigr) \\ &{}\mbox{exist with } u \bigl(t_{k}^{-} \bigr)=u(t_{k}), 1\leq k \leq m \bigr\} , \end{aligned} \\& K = \Bigl\{ u\in PC \bigl([0,1] \bigr):u\geq0, \min_{t\in J_{\theta}}u(t) \geq \sigma\|u\| \Bigr\} , \end{aligned}$$
(2.23)
$$\begin{aligned}& K_{r} =\bigl\{ u\in K:\|u\|< r\bigr\} , \qquad \partial K_{r}=\bigl\{ u\in K: \|u\| =r \bigr\} . \end{aligned}$$
(2.24)

Obviously, \(PC([0,1])\subset E\) is a Banach space with the norm \(\|u\|=\max_{t\in[0,1]}|u(t)|\), and \(K\subset PC([ 0,1])\) is a positive cone.

In the following, we need the assumptions and notation below:

(B1):

\(a\in C((0,1),[0,+\infty))\), and \(a(t)\not\equiv0\) on any subinterval of \((0,1)\).

(B2):

\(f\in C([0,1]\times[0,+\infty),[0,+\infty))\), and \(f(t,0)=0\) uniformly with respect to t on \([0,1]\).

(B3):

\(I_{k}\in C([0,+\infty), [0,+\infty))\), \(k=1,2,\ldots,m\).

(B4):

\(H:[0,1]\to[0,+\infty)\) is of bounded variation with \(0<\delta=\int_{0}^{1}s^{\alpha-1}\,dH(s)<\alpha-1\).

Let

$$\begin{aligned} &f^{\delta}=\limsup_{u\to\delta} \max_{t\in [0,1]} \frac{f(t,u)}{u}, \qquad f_{\delta}=\liminf_{u\to \delta} \min _{t\in[0,1]} \frac{f(t,u)}{u}, \\ &A= \int_{0}^{1}g(s)a(s)\,ds< \infty, \qquad B = \sigma^{2} \int_{\theta}^{1-\theta}g(s)a(s)\,ds< \infty, \end{aligned}$$

where δ denotes 0 or +∞. In addition, we introduce the following conditions:

(H1):

\(f_{0}>\frac{1}{\lambda B}\), namely, \(\lambda>\frac {1}{Bf_{0}}\) (particularly, \(f_{0}=+\infty\), \(\lambda>0\)).

(H2):

\(f_{\infty}>\frac{1}{\lambda B}\), namely, \(\lambda>\frac {1}{Bf_{\infty}}\) (particularly, \(f_{\infty}=+\infty\), \(\lambda>0\)).

(H3):

\(f^{0}<\frac{1}{\lambda A}\), namely, \(\lambda<\frac {1}{Af^{0}}\) (particularly, \(f^{0}=0\), \(\lambda<+\infty\)).

(H4):

\(f^{\infty}<\frac{1}{\lambda A}\), namely, \(\lambda<\frac {1}{Af^{\infty}}\) (particularly, \(f^{\infty}=0\), \(\lambda<+\infty\)).

(H5):

There exists \(a>0\) such that \(\min_{t\in J_{\theta}, u\in[\theta a,a]}f(t,u) >\frac{u}{\lambda B}\), namely,

$$\lambda>\frac{u}{B\min_{t\in J_{\theta}, u\in[\theta a,a]}f(t,u)}. $$
(H6):

There exists \(b>0\) such that \(\max_{t\in[0,1], u\in [0,b]}f(t,u) <\frac{b}{\lambda A}\), namely,

$$\lambda< \frac{b}{A\max_{t\in[0,1], u\in[0,b]}f(t,u)}. $$

Remark 2.1

If there exists \(a>0\) such that \(\min_{t\in J_{\theta}, u\in[\theta a,a]}f(t,u)> \frac{a}{\lambda B}\), then (H5) holds.

Remark 2.2

If there exists \(b>0\) such that \(\max_{t\in[0,1], u\in[0,b]}f(t,u)< \frac{u}{\lambda A}\), then (H6) holds.

From Lemma 2.4 we obtain the following lemma.

Lemma 2.8

If (B1)-(B4) hold, then BVP (1.1) has a solution \(u\in PC([0,1])\) if and only if \(u\in PC([0,1])\) is a solution of the integral equation

$$\begin{aligned} u(t)=\lambda \int_{0}^{1}G(t,s)a(s)f \bigl(s,u(s) \bigr)\,ds+t^{\alpha-1}\sum_{t\leq t_{k}< 1}t_{k}^{1-\alpha}I_{k} \bigl(u(t_{k}) \bigr). \end{aligned}$$

Let \(T: K\to K\) be the operator defined as

$$\begin{aligned} (Tu) (t) &=\lambda \int_{0}^{1}G(t,s)a(s)f \bigl(s,u(s) \bigr)\,ds+t^{\alpha-1}\sum_{t\leq t_{k}< 1}t_{k}^{1-\alpha}I_{k} \bigl(u(t_{k}) \bigr). \end{aligned}$$
(2.25)

Then, by Lemma 2.8 the fixed point of operator T coincides with the solution of BVP (1.1).
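Once concrete data are fixed, the action of T can be approximated by numerical quadrature. The sketch below uses the data of Example 5.1 below (\(\alpha=\frac{5}{2}\), \(a\equiv1\), \(f(t,u)=\frac{u(1+\sin(\pi t))}{1+u^{2}}\), one impulse \(I(u)=\frac{u^{2}}{1+u^{2}}\) at \(t_{1}=\frac{1}{2}\), \(H(s)=\frac{5}{2}s\)) together with the assumed parameter \(\lambda=1\) and a constant trial function; all of these numerical choices are illustrative only, not part of the analysis.

```python
# Sketch: numerical evaluation of (Tu)(t) in (2.25) for the data of
# Example 5.1, with the assumed parameter lambda = 1 (illustrative only).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, lam, c, t1 = 2.5, 1.0, 2.5, 0.5            # H(s) = c*s; single impulse at t1
delta = c / alpha                                  # = 1

def f(t, u):                                       # nonlinearity of Example 5.1
    return u * (1 + np.sin(np.pi * t)) / (1 + u ** 2)

def I1(u):                                         # impulse function of Example 5.1
    return u ** 2 / (1 + u ** 2)

def G1(t, s):
    val = t ** (alpha - 1) * (1 - s) ** (alpha - 2)
    if s <= t:
        val -= (t - s) ** (alpha - 1)
    return val / gamma(alpha)

def G(t, s):
    inner = quad(lambda tau: G1(tau, s), 0, 1)[0]
    return G1(t, s) + t ** (alpha - 1) * c * inner / (alpha - 1 - delta)

def T(u_func, t):
    integral = quad(lambda s: G(t, s) * f(s, u_func(s)), 0, 1)[0]
    impulse = t ** (alpha - 1) * t1 ** (1 - alpha) * I1(u_func(t1)) if t <= t1 else 0.0
    return lam * integral + impulse

u0 = lambda t: 1.0                                  # a positive trial function
print([round(T(u0, t), 4) for t in (0.0, 0.25, 0.5, 0.75, 1.0)])
```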

Remark 2.3

If (B1)-(B4) hold, then \((Tu)'(t)\geq0\) for all \(t\in[0,1]\), that is, \((Tu)(t)\) is increasing on \([0,1]\).

Lemma 2.9

If (B1)-(B4) hold, then \(T: K \to K\) defined by (2.25) is completely continuous.

Proof

(1) For any \(u \in K\), \(t\in[0,1]\), it is clear that \((Tu)(t)\geq0\). Noting that \(0<\sigma<1\), for \(t\in[0,1]\), we have \((Tu)(t)\geq\sigma(Tu)(t)\), which implies \(\min_{t\in J_{\theta}}(Tu)(t)\geq\max_{t\in[0,1]}\sigma(Tu)(t)=\sigma\|Tu\|\). Therefore, \(T(K)\subset K\), that is, \(T:K \to K\) is well defined.

(2) Let \(u\in K\). In view of the nonnegativeness and continuity of the functions \(G(t,s)\), \(a(t)\), \(f(t, u(t))\), \(I_{k}(u)\) and of \(\lambda>0\), we conclude that \(T:K \to K\) is continuous.

Let \(\Omega\subset K\) be any bounded subset of \(PC([0,1])\). By the Arzelà-Ascoli theorem we only need to show that \(T(\Omega)\) is uniformly bounded and equicontinuous in \(PC([0,1])\). For any \(u\in\Omega\) and \(t, s\in[0,1]\), there exist constants \(L_{i}>0\) (\(i=1,2,3,4\)) such that

$$\begin{aligned} &\max_{t,s\in[0,1]}\bigl|G(t,s)\bigr|\leq L_{1},\qquad \max _{t\in[0,1]}\bigl|a(t)\bigr|\leq L_{2},\\ &\max_{u\in\Omega,t\in[0,1]}\bigl|f(t,u)\bigr| \leq L_{3}, \qquad\max_{1\leq k\leq m}\max_{u\in\Omega}\bigl|I_{k} \bigl(u(t_{k}) \bigr)\bigr|\leq L_{4}. \end{aligned}$$

Then we have

$$\begin{aligned} \bigl|(Tu) (t)\bigr|& \leq \int_{0}^{1}\bigl|G(t,s)\bigr| \bigl[\lambda\bigl|a(s)\bigr|\bigl|f \bigl(s,u(s) \bigr)\bigr| \bigr]\,ds+\sum_{k=1}^{m} \bigl|I_{k} \bigl(u(t_{k}) \bigr)\bigr|\leq\lambda L_{1}L_{2}L_{3}+mL_{4}. \end{aligned}$$

Hence, \(T(\Omega)\) is uniformly bounded.

Next, we prove that \(T(\Omega)\) is equicontinuous. Indeed, for any \(u\in\Omega\) and \(t_{1}, t_{2}\in[0,1]\), we have

$$\begin{aligned} &\bigl|(Tu) (t_{2})-(Tu) (t_{1})\bigr| \\ &\quad\leq\lambda \int_{0}^{1} \bigl|G(t_{2},s)-G(t_{1},s) \bigr|a(s)f \bigl(s,u(s) \bigr)\,ds \\ &\qquad{}+ \biggl|\sum_{t_{2}\leq t_{k}< 1} \biggl(\frac{t_{2}}{t_{k}} \biggr)^{\alpha -1}I_{k} \bigl(u(t_{k}) \bigr) -\sum _{t_{1}\leq t_{k}< 1} \biggl(\frac{t_{1}}{t_{k}} \biggr)^{\alpha-1}I_{k} \bigl(u(t_{k}) \bigr) \biggr|\to0 \quad\mbox{as } t_{1}\to t_{2}. \end{aligned}$$

Thus, it follows from the continuity of \(G(t,s)\) that for any \(\epsilon>0\), there exists a positive constant \(\delta=\delta(\epsilon)>0\), independent of \(t_{1}\), \(t_{2}\), and u, such that \(|(Tu)(t_{2})-(Tu)(t_{1})|<\epsilon\) whenever \(|t_{2}-t_{1}|<\delta\). Therefore, \(T(\Omega)\) is equicontinuous, and hence \(T:K\to K\) is completely continuous. The proof is complete. □

3 Single positive solutions and eigenvalue intervals

In this section, employing the Schauder fixed-point theorem, we derive the existence of one positive solution for BVP (1.1) under weak assumptions, which improves the results of [10].

Theorem 3.1

Assume that (B1)-(B4) hold. If (H3) or (H4) is satisfied, then BVP (1.1) has at least one positive solution.

Proof

If condition (H3) holds, then since \(f^{0} < \frac{1}{\lambda A}\), there exists \(\varepsilon_{1}>0\) such that \((f^{0} +\varepsilon_{1})\lambda A\leq1\). From the definition of \(f^{0}\), there exists \(r_{1}>0\) such that \(f(t,u)\leq(f^{0} +\varepsilon_{1})u\) for all \(0\leq u\leq r_{1}\), \(t\in[0,1]\). Let \(\Omega_{1}\triangleq K_{r_{1}}\) be defined as in (2.24). It is easy to see that \(\Omega_{1}\) is a closed bounded convex subset of the Banach space \(PC([0,1])\). Then, for \(t\in[0,1]\) and \(u\in\Omega_{1}\), in view of the nonnegativeness and continuity of the functions \(G(t,s)\), \(a(t)\), \(f(t,u(t))\), \(I_{k}(u(t_{k}))\) and of \(\lambda>0\), we conclude that \(Tu\in PC([0,1])\) and \((Tu)(t)\geq0\) for \(t\in[0,1]\); according to Lemma 2.9, \(Tu\in K\). Next, we prove that \(\|Tu\|< r_{1}\). In fact, by Remark 2.3 we have

$$\begin{aligned} \|Tu\| &=\max_{t\in[0,1]}(Tu) (t)=(Tu) (1) =\lambda \int_{0}^{1}G(1,s)a(s)f \bigl(s,u(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}g(s)a(s) \bigl(f^{0} + \varepsilon_{1} \bigr)u(s)\,ds \leq\|u\| \bigl(f^{0} + \varepsilon_{1} \bigr)\lambda \int_{0}^{1}g(s)a(s)\,ds \\ &=\|u\| \bigl(f^{0} +\varepsilon_{1} \bigr)\lambda A\leq\|u \|< r_{1}. \end{aligned}$$

Therefore, \(T:\Omega_{1} \to\Omega_{1}\), and by Lemma 2.9 \(T:\Omega_{1} \to\Omega_{1}\) is completely continuous. Thus, BVP (1.1) has at least one positive solution by Lemma 2.2.

If condition (H4) holds, the proof is similar to the previous arguments. So we omit it. The proof is complete. □

Theorem 3.2

Assume that (B1)-(B4) hold. Suppose that one of the following conditions is satisfied:

(A1):

there exists a constant \(M>0\) such that \(f(t,u)\leq \frac{M}{\lambda A}\) for \(0\leq t\leq1\), \(0\leq u\leq M\);

(A2):

there exists a constant \(N>0\) such that \(f(t,u)\leq \frac{N}{\lambda A}\) for \(0\leq t\leq1\), \(u\geq N\).

Then BVP (1.1) has at least one positive solution.

Proof

If condition (A1) holds, then we take \(\Omega_{2}\triangleq K_{M}=\{u\in PC([0,1]):\|u\|< M\}\). If condition (A2) holds, then we take \(\Omega_{3}\triangleq K_{d}=\{u\in PC([0,1]):\|u\|< d\}\), where \(d>0\) satisfies \(d\geq1+N+\lambda A\max_{0\leq t\leq1,0\leq u\leq N}f(t,u)\). The rest of the proof is similar to that of Theorem 3.1, so we omit it. The proof is complete. □
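For instance (an illustration only), if f is bounded, say \(f(t,u)\leq F\) for all \(t\in[0,1]\) and \(u\geq0\), then condition (A1) holds with any \(M\geq\lambda AF\), since then \(f(t,u)\leq F\leq\frac{M}{\lambda A}\) for \(0\leq u\leq M\); hence in this case BVP (1.1) has at least one positive solution for every \(\lambda>0\).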

4 Multiple positive solutions and eigenvalue intervals

In this section, applying the fixed-point index theorem, we will discuss the multiplicity of positive solutions for BVP (1.1).

Theorem 4.1

Assume that (B1)-(B4) hold. If (H1), (H2), and (H6) are satisfied, then BVP (1.1) has at least two positive solutions \(u_{1}\), \(u_{2}\) with

$$ 0 < \|u_{1}\| < b < \|u_{2}\|, $$
(4.1)

and the parameter λ satisfies

$$\begin{aligned} \max \biggl\{ \frac{1}{Bf_{0}},\frac{1}{Bf_{\infty}} \biggr\} < \lambda< \frac {b}{A\max_{t\in[0,1], u\in[0,b]}f(t,u)}. \end{aligned}$$
(4.2)

Proof

To begin with, we consider condition (H6). Let \(\Omega_{b}\triangleq K_{b}\). For any \(u\in\partial\Omega_{b}\), we have \(\|u\|=b\) and \(0\leq u(t)\leq b\) for all \(t\in[0,1]\). By condition (H6), for \(u\in\partial\Omega_{b}\) and \(t\in[0,1]\), we get

$$\begin{aligned} \|Tu\| &=\max_{t\in[0,1]}(Tu) (t)=(Tu) (1) \\ &=\lambda \int_{0}^{1}G(1,s)a(s)f \bigl(s,u(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}g(s)a(s)\max_{s\in[0,1], u\in[0,b]}f \bigl(s,u(s) \bigr)\,ds \\ &< \lambda \int_{0}^{1}g(s)a(s)\frac{b}{\lambda A}\,ds=b= \|u\|. \end{aligned}$$

Therefore,

$$\begin{aligned} \|Tu\|< \|u\|, \quad u\in\partial\Omega_{b}. \end{aligned}$$
(4.3)

By Lemma 2.3 we have

$$\begin{aligned} i(T,\Omega_{b},K)=1. \end{aligned}$$
(4.4)

On the one hand, by condition (H1) there exists \(\epsilon_{1}>0\) such that \((f_{0}-\epsilon_{1})\lambda B\geq1\). By the definition of \(f_{0}\) there exists \(r\in(0,b)\) such that \(f(t,u)\geq (f_{0}-\epsilon_{1})u\) for \(u\in[0,r]\) and \(t\in[0,1]\). Let \(\Omega_{r}\triangleq K_{r}\). Then, for \(u\in\partial\Omega_{r}\) and \(t\in[0,1]\), using (2.20) and (2.23), we have

$$\begin{aligned} \|Tu\|& =\max_{t\in[0,1]}(Tu) (t)=(Tu) (1) =\lambda \int_{0}^{1}G(1,s)a(s)f \bigl(s,u(s) \bigr)\,ds \\ &\geq\lambda \int_{0}^{1}G(1,s)a(s) (f_{0}- \epsilon_{1})u(s)\,ds \geq(f_{0}-\epsilon_{1})\lambda \sigma \int_{\theta}^{1-\theta}G(1,s)a(s)u(s)\,ds \\ &\geq(f_{0}-\epsilon_{1})\lambda\sigma^{2} \int_{\theta}^{1-\theta }g(s)a(s)\,ds\|u\| =(f_{0}- \epsilon_{1})\lambda B\|u\|\geq\|u\|. \end{aligned}$$

So

$$\begin{aligned} \|Tu\|\geq\|u\|, \quad u\in\partial\Omega_{r}. \end{aligned}$$
(4.5)

By Lemma 2.3 we have

$$\begin{aligned} i(T,\Omega_{r},K)=0. \end{aligned}$$
(4.6)

On the other hand, by condition (H2) there exists \(\epsilon_{2}>0\) such that \((f_{\infty}-\epsilon_{2})\lambda B\geq1\). By the definition of \(f_{\infty}\) there exists \(R_{0}>0\) such that \(f(t,u)\geq(f_{\infty}-\epsilon_{2})u\) for \(u\in(R_{0},+\infty)\) and \(t\in [0,1]\). Choose \(R>\max\{b,R_{0}\}\) and let \(\Omega_{R}\triangleq K_{R}\). Then, for \(u\in\partial\Omega_{R}\) and \(t\in[0,1]\), using (2.20) and (2.23), we obtain

$$\begin{aligned} \|Tu\| &=\max_{t\in[0,1]}(Tu) (t)=(Tu) (1) =\lambda \int_{0}^{1}G(1,s)a(s)f \bigl(s,u(s) \bigr)\,ds \\ &\geq\lambda \int_{0}^{1}G(1,s)a(s) (f_{\infty}- \epsilon_{2})u(s)\,ds \geq(f_{\infty}-\epsilon_{2})\lambda \sigma \int_{\theta}^{1-\theta}G(1,s)a(s)u(s)\,ds \\ &\geq(f_{\infty}-\epsilon_{2})\lambda\sigma^{2} \int_{\theta}^{1-\theta }g(s)a(s)\,ds\|u\| =(f_{\infty}- \epsilon_{2})\lambda B\|u\|\geq\|u\|. \end{aligned}$$

Therefore,

$$\begin{aligned} \|Tu\|\geq\|u\|, \quad u\in\partial\Omega_{R}. \end{aligned}$$
(4.7)

By Lemma 2.3 we have

$$\begin{aligned} i(T,\Omega_{R},K)=0. \end{aligned}$$
(4.8)

Combining (4.4) with (4.6) and (4.8) with (4.4), we get

$$\begin{aligned} i(T,\Omega_{b}\setminus\overline{\Omega}_{r},K)&=i(T, \Omega _{b},K)-i(T,\Omega_{r},K) \\ &=1-0=1 \end{aligned}$$
(4.9)

and

$$\begin{aligned} i(T,\Omega_{R}\setminus\overline{\Omega}_{b},K)&=i(T, \Omega _{R},K)-i(T,\Omega_{b},K) \\ &=0-1=-1. \end{aligned}$$
(4.10)

By (4.9) and (4.10), T has a fixed point \(u_{1}\in \Omega_{b}\setminus\overline{\Omega}_{r}\) and a fixed point \(u_{2}\in \Omega_{R}\setminus\overline{\Omega}_{b}\). Thus, it follows that BVP (1.1) has at least two positive solutions \(u_{1}\) and \(u_{2}\). Noticing (4.3), we have \(\|u_{1}\| \neq b\) and \(\|u_{2}\| \neq b\). So (4.1) holds. Combining (H1), (H2), and (H6), we derive (4.2). The proof is complete. □

Similarly, we have the following results.

Theorem 4.2

Assume that (B1)-(B4) hold. If (H3), (H4), and (H5) are satisfied, then BVP (1.1) has at least two positive solutions \(u_{1}\), \(u_{2}\) with

$$ 0 < \|u_{1}\| < a < \|u_{2}\|, $$

and the parameter λ satisfies

$$\begin{aligned} \frac{a}{B\min_{t\in J_{\theta}, u\in[\theta a,a]}f(t,u)}< \lambda< \min \biggl\{ \frac{1}{Af^{0}},\frac{1}{Af^{\infty}} \biggr\} . \end{aligned}$$

Theorem 4.3

Assume that (B1)-(B4) hold. Suppose that there exist 2m positive numbers \(a_{k}\), \(b_{k}\) (\(k=1,2,\ldots, m\)) with \(0<\theta a_{1}<a_{1}<\theta b_{1}<b_{1}<\theta a_{2}<a_{2}<\cdots<\theta a_{m}<a_{m}=a<\theta b_{m}<b_{m}=b\) such that the following two conditions are satisfied:

(A3):

\(\min_{t\in J_{\theta}, u\in[\theta a_{k},a_{k}]}f(t,u)> \frac{a_{k}}{\lambda B}\);

(A4):

\(\max_{t\in[0,1], u\in[\theta b_{k},b_{k}]}f(t,u)< \frac {b_{k}}{\lambda A}\).

Then BVP (1.1) has at least m positive solutions \(u_{k}\) satisfying \(a_{k} < \|u_{k}\| <\theta b_{k} \) (\(k=1,2,\ldots,m\)), and the parameter λ satisfies

$$\begin{aligned} \frac{a_{k}}{ B\min_{t\in J_{\theta}, u\in[\theta a_{k},a_{k}]}f(t,u)}< \lambda< \frac{b_{k}}{ A\max_{t\in[0,1], u\in[\theta b_{k},b_{k}]}f(t,u)} \end{aligned}$$

for \(k=1,2,\ldots,m\).

Theorem 4.4

Assume that (B1)-(B4) hold. Suppose that there exist n (\(n\geq2\)) positive numbers \(b_{i}\) (\(i=1,2,\ldots, n\)) with \(0<\theta b_{1}<b_{1}<\theta b_{2}<b_{2}<\cdots<\theta b_{n}<b_{n}<b\) such that the following two conditions are satisfied:

(A5):

\(\max_{t\in[0,1], u\in\partial D_{0}}f(t,u)< \frac {b}{\lambda A}\);

(A6):

\(\max_{t\in[0,1], u\in\partial D_{i}}f(t,u)< \frac {b_{i}}{\lambda A}\).

Then BVP (1.1) has at least \(n+1\) positive solutions \(u_{i} \) (\(i=0,1,\ldots,n\)) with

$$ u_{i}\in D_{i}, \qquad u_{0}\in D_{0} \backslash\bigcup_{i=1}^{n}\overline{D}_{i}, $$

where \(D_{0}=\{u\in K:u\in[0,b]\}\), \(D_{i}=\{u\in K:u\in[\theta b_{i},b_{i}]\}\), \(1\leq i\leq n\).

Proof

First, for each \(D_{i}\) (\(i=1,2,\ldots,n\)), if (A6) holds, then it follows as in the proof of Theorem 4.1 that \(i(T,D_{i},K)=1\). Therefore, BVP (1.1) has at least one positive solution \(u_{i}\in D_{i}\). Thus, BVP (1.1) has at least n positive solutions.

On the other hand, if (A5) holds, then we get

$$\begin{aligned} i \Biggl(T,D_{0}\backslash\bigcup_{i=1}^{n} \overline{D}_{i},K \Biggr)&=i(T,D_{0},K)-\sum _{i=1}^{n} i(T,D_{i},K)=1-n\neq0. \end{aligned}$$

Thus, the operator T has a fixed point \(u_{0}\in D_{0}\backslash\bigcup_{i=1}^{n}\overline{D}_{i}\), that is, \(u_{0}\) is the \((n+1)\)th positive solution of BVP (1.1). This completes the proof. □

5 Illustrative examples

Example 5.1

Consider the following boundary value problem:

$$ \left \{\textstyle\begin{array}{@{}l} -D_{0^{+}}^{\frac{5}{2}}u(t)=\lambda\frac{u(t)(1+\sin(\pi t))}{1+u^{2}(t)}, \quad t\in(0,1)\setminus\{\frac{1}{2}\}, \\ \Delta u(\frac{1}{2})=I(u(\frac{1}{2})), \quad t=\frac{1}{2},\\ u(0)=u'(0)=0,\qquad u'(1)=\int_{0}^{1}u(s)\,dH(s), \end{array}\displaystyle \right . $$
(5.1)

where \(\alpha=\frac{5}{2}\), \(m=1\), \(t_{1}=\frac{1}{2}\), \(a(t)=1\), \(f(t,u)=\frac{u(t)(1+\sin(\pi t))}{1+u^{2}(t)}\). Let \(I(u)=\frac{u^{2}}{1+u^{2}}\), \(H(s)=\frac{5}{2}s\). It is easy to verify that (B1)-(B4) hold. By simple calculation we have

$$ f^{\infty}=\limsup_{u\to\infty} \max_{t\in[0,1]} \frac {u \bigl(1+\sin(\pi t) \bigr)}{u \bigl(1+u^{2} \bigr)}=\limsup _{u\to\infty}\frac {2}{1+u^{2}}=0. $$

Thus, condition (H4) holds with \(f^{\infty}=0\), and all the assumptions of Theorem 3.1 are satisfied. Hence, BVP (5.1) has at least one positive solution for every \(\lambda>0\).

Remark 5.1

Noting that \(f_{0}=\liminf_{u\to0} \min_{t\in [0,1]}\frac{u(1+\sin(\pi t))}{u(1+u^{2})}=\liminf_{u\to 0}\frac{1}{1+u^{2}}=1\neq\infty\), we cannot conclude that BVP (5.1) has positive solutions by applying the results of [10].

Example 5.2

Consider the following nonlinear fractional differential equation:

$$ \left \{\textstyle\begin{array}{@{}l} -D_{0^{+}}^{\frac{5}{2}}u(t)=\lambda \vert \frac{u(t)\ln u(t)}{1+2t^{2}}\vert , \quad t\in(0,1)\setminus\{\frac{1}{2}\}, \\ \Delta u(\frac{1}{2})=I(u(\frac{1}{2})), \quad t=\frac{1}{2},\\ u(0)=u'(0)=0,\qquad u'(1)=\int_{0}^{1}u(s)\,dH(s), \end{array}\displaystyle \right . $$
(5.2)

where \(\alpha=\frac{5}{2}\), \(m=1\), \(t_{1}=\frac{1}{2}\), \(a(t)=1\), \(f(t,u)=\vert \frac{u(t)\ln u(t)}{1+2t^{2}}\vert \). Let \(I(u)=\frac{u}{1+u}\), \(H(s)=\frac{5}{2}s\), \(\theta=\frac{1}{4}\in(0,\frac{1}{2})\), \(b=1\). It is easy to verify that (B1)-(B4) hold. By simple computation we have

$$\begin{aligned} &\delta= \int_{0}^{1}s^{\alpha-1}\,dH(s)=1, \qquad g_{1}(s)=\frac{s(1-s)^{\frac {1}{2}}}{\Gamma(\frac{5}{2})},\\ & G_{1}(t,s)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{t^{\frac{3}{2}}(1-s)^{\frac{1}{2}}-(t-s)^{\frac{3}{2}}}{\Gamma (\frac{5}{2})}, & 0\leq s\leq t\leq1, \\ \frac{t^{\frac{3}{2}}(1-s)^{\frac{1}{2}}}{\Gamma(\frac{5}{2})}, & 0\leq t\leq s\leq1, \end{array}\displaystyle \right . \\ &G_{2}(1,s)=\frac{1}{\alpha-1-\delta} \int_{0}^{1}G_{1}(\tau,s)\,dH(\tau)= \frac {2[(1-s)^{\frac{1}{2}}-(1-s)^{\frac{5}{2}}]}{\Gamma(\frac{5}{2})}, \\ &g(s)=g_{1}(s)+G_{2}(1,s)=\frac{s(1-s)^{\frac{1}{2}}+2[(1-s)^{\frac {1}{2}}-(1-s)^{\frac{5}{2}}]}{\Gamma(\frac{5}{2})}, \\ &A= \int_{0}^{1}g(s)a(s)\,ds = \int_{0}^{1}\frac{s(1-s)^{\frac{1}{2}}+2[(1-s)^{\frac{1}{2}}-(1-s)^{\frac {5}{2}}]}{\Gamma(\frac{5}{2})}\,ds=\frac{36}{35\Gamma(\frac{5}{2})} \approx 0.7737457, \\ &0< B=\sigma^{2} \int_{\theta}^{1-\theta}g(s)a(s)\,ds\leq\theta^{2(\alpha -1)} \int_{\theta}^{1-\theta}g(s)a(s)\,ds < A< +\infty, \\ &f_{0} =\liminf_{u\to0} \min_{t\in [0,1]} \biggl\vert \frac{u\ln u}{(1+2t^{2})u} \biggr\vert =\liminf_{u\to 0} \frac{|\ln u|}{3}=+\infty, \\ &f_{\infty}=\liminf_{u\to\infty} \min_{t\in [0,1]} \biggl\vert \frac{u\ln u}{(1+2t^{2})u} \biggr\vert =\liminf_{u\to \infty} \frac{|\ln u|}{3}=+\infty. \end{aligned}$$

When \(0\leq u(t)\leq1\) and \(0\leq t\leq1\), \(f(t,u)=\frac{-u\ln u}{1+2t^{2}}\) attains its maximum at \(u=\frac{1}{e}\), \(t=0\). Therefore, we have

$$\begin{aligned}& \max_{t\in[0,1], u\in[0,1]} f(t,u)=\max_{t\in[0,1], u\in [0,1]} \frac{-u\ln u}{1+2t^{2}}=\frac{1}{e}\approx0.3678794, \\& \frac{b}{A\max_{t\in[0,1], u\in[0,b]}f(t,u)}=\frac{1}{0.7737457\times0.3678794}\approx3.5131. \end{aligned}$$
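These constants can be double-checked numerically; the following verification sketch simply recomputes δ, A, \(\max f\), and the resulting upper bound for λ.

```python
# Numerical double-check of the constants computed in Example 5.2.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, c = 2.5, 2.5                                   # H(s) = (5/2)s

delta = quad(lambda s: c * s ** (alpha - 1), 0, 1)[0]                 # expect 1
g = lambda s: (s * (1 - s) ** 0.5
               + 2 * ((1 - s) ** 0.5 - (1 - s) ** 2.5)) / gamma(alpha)
A = quad(g, 0, 1)[0]                                                  # expect 36/(35*Gamma(5/2))
fmax = max(-u * np.log(u) for u in np.linspace(1e-9, 1, 100001))      # max over t is at t = 0
print(delta, A, 36 / (35 * gamma(alpha)))
print(fmax, 1 / np.e, 1 / (A * fmax))                                 # last entry: the bound for lambda
```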

Thus, by Theorem 4.1 it follows that BVP (5.2) has at least two positive solutions \(u_{1}\), \(u_{2}\) satisfying \(0< \|u_{1}\|< 1 < \|u_{2}\|\) for \(0<\lambda<3.5131\).