1 Introduction

In this paper, we consider the existence of two infinite families of positive solutions for the following second-order impulsive singular parametric differential equation:

$$ \textstyle\begin{cases} \lambda x''(t)+\omega(t)f(t,x(t))=0, \quad t\in J, t\neq t_{k}, \\ x(t_{k}^{+})-x(t_{k})=c_{k}x(t_{k}), \quad k=1,2,\ldots,n, \\ ax(0)-bx'(0)=\int_{0}^{1}h(s)x(s)\,ds, \\ ax(1)+bx'(1)=\int_{0}^{1}h(s)x(s)\,ds, \end{cases} $$
(1.1)

where \(\lambda>0\) is a positive parameter, \(J=[0,1]\), \(t_{k} \in\mathrm{R}\), \(k =1,2,\ldots,n\), \(n \in\mathrm{N}\), satisfy \(0< t_{1}< t_{2}<\cdots <t_{k}<\cdots<t_{n}<1\), \(a, b>0\), \(\{c_{k}\}\) is a real sequence with \(c_{k}>-1\), \(k=1,2,\ldots,n\), \(x(t_{k}^{+})\) (\(k=1,2,\ldots,n\)) denotes the right-hand limit of \(x(t)\) at \(t_{k}\), and \(\omega\in L^{p}[0,1]\) for some \(p\geq1\) and has infinitely many singularities in \([0,\frac{1}{2})\).

In addition, ω, f, h and \(c_{k}\) satisfy the following conditions:

(H1):

\(\omega(t)\in L^{p}[0,1]\) for some \(p\in[1,+\infty)\), and there exists \(\xi>0\) such that \(\omega(t)\geq\xi\) a.e. on J;

(H2):

There exists a sequence \(\{t_{i}'\}_{i=1}^{\infty} \) such that \(t_{1}'<\frac{1}{2}\), \(t_{i}' \downarrow t^{*}\geq0 \) and \(\lim_{t\rightarrow t_{i}'} \omega(t) =+\infty\) for all \(i=1, 2,\ldots \) ;

(H3):

\(f(t,u): J\times[0,+\infty)\rightarrow[0,+\infty)\) is continuous, \(\{c_{k}\}\) is a real sequence with \(c_{k}>-1\), \(k=1, 2, \ldots, n\), \(c(t):=\Pi_{0< t_{k}< t}(1+c_{k})\);

(H4):

\(h\in C[0,1]\) is nonnegative with \(\mu\in[0,1)\), where

$$\mu= \int_{0}^{1}A(t)h(t)c(t)\,dt, $$

and

$$ A(t)=\frac{(a+b-at)c(1)+a+b}{a(a+2b)c(1)}. $$
(1.2)

Remark 1.1

Throughout this paper, we always assume that a product \(c(t):=\Pi_{0< t_{k}< t}(1+c_{k})\) equals unity if the number of factors is equal to zero, and let

$$c_{M}=\max_{t\in J} c(t), \qquad c_{m}=\min _{t\in J} c(t), \qquad c^{-1}(t)=\frac{1}{c(t)}= \Pi_{0< t_{k}< t}(1+c_{k})^{-1},\quad t\in J. $$

Remark 1.2

Combining (H3), Remark 1.1 and the definition of \(c(t)\), we see that \(c(t)\) is a bounded step function on J, and

$$c(t)>0, \quad \forall t\in J, \qquad c(t)=1, \quad \forall t \in[0,t_{1}]. $$

Remark 1.3

To make the meaning of \(c(t)\) clear, we give a concrete example: taking \(k=3\), \(t_{1}=\frac{1}{4}\), \(t_{2}=\frac{1}{2}\), \(t_{3}=\frac {3}{4}\), \(c_{1}=-\frac{1}{2}\), \(c_{2}=-\frac{1}{3}\), \(c_{3}=-\frac {1}{4}\), we obtain the graph of \(c(t)\) shown in Figure 1; a short numerical sketch is given below.

Figure 1. Graph of the function \(c(t)\) for \(k=3\).
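The following minimal Python sketch (assuming NumPy and Matplotlib are available; it is not part of the original argument) evaluates this \(c(t)\) on a grid and reproduces the step values \(1,\frac{1}{2},\frac{1}{3},\frac{1}{4}\), so that here \(c_{M}=1\) and \(c_{m}=\frac{1}{4}\) in the notation of Remark 1.1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Data of Remark 1.3: three impulse points with coefficients c_k > -1
tk = [1/4, 1/2, 3/4]
ck = [-1/2, -1/3, -1/4]

def c(t):
    # c(t) = product of (1 + c_k) over all impulse points t_k < t (empty product = 1)
    p = 1.0
    for t_k, c_k in zip(tk, ck):
        if t_k < t:
            p *= 1.0 + c_k
    return p

ts = np.linspace(0, 1, 1001)
vals = [c(t) for t in ts]          # 1 on [0,1/4], 1/2 on (1/4,1/2], 1/3 on (1/2,3/4], 1/4 on (3/4,1]
print(max(vals), min(vals))        # c_M = 1.0, c_m = 0.25
plt.plot(ts, vals)                 # dense grid, so the jumps are visible
plt.xlabel("t"); plt.ylabel("c(t)"); plt.show()
```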

Such problems were first studied by Zhang and Feng [1]. Using a transformation technique to deal with the impulsive term of second-order impulsive differential equations, the authors obtained existence results for positive solutions by means of fixed point theorems in a cone. However, they only gave sufficient conditions for the existence of finitely many positive solutions. In fact, there is almost no paper considering the existence of infinitely many positive solutions for second-order singular impulsive parametric equations; for details, see [2–7].

For the case \(\lambda=1\), \(a=1\), \(b=0\), \(h(t)\equiv0\) on J and \(c_{k}=0\) (\(k=1,2,\ldots,n\)), problem (1.1) reduces to the problem studied by Kaufmann and Kosmatov in [8]. By using Krasnosel’skiĭ’s fixed point theorem and Hölder’s inequality, the authors showed the existence of countably many positive solutions. Other related results can be found in [9–14]. However, there are almost no papers considering a second-order impulsive parametric equation with infinitely many singularities; we refer the reader to [8–14] and the references therein.

The main difficulties are caused by the parameter λ and by the impulses \(c_{k}\neq0\) (\(k=1,2,\ldots,n\)) in problem (1.1). Since λ is not fixed (in contrast to the case \(\lambda=1\) above), it is difficult to determine the values of λ for which there exist infinitely many positive solutions. On the other hand, if \(c_{k}\neq0\) (\(k=1,2,\ldots,n\)), then singular points and impulsive points occur in the same problem, which leads to many difficulties in defining the intervals \([\tau _{i},1-\tau_{i}]\), where \(t'_{i+1}\leq\tau_{i}\leq t'_{i}\). The goal of this paper is to develop new methods to overcome these difficulties and to give new sufficient conditions guaranteeing that problem (1.1) has two infinite families of positive solutions.

2 Preliminaries

In this section, we collect some definitions and lemmas for the convenience of later use and reference.

Definition 2.1

A function \(x(t)\) is said to be a solution of problem (1.1) on J if:

  1. (i)

    \(x(t)\) is absolutely continuous on each interval \((0,t_{1}]\) and \((t_{k},t_{k+1}]\), \(k=1,2,\ldots,n\);

  2. (ii)

    for any \(k=1,2,\ldots,n\), \(x(t_{k}^{+})\), \(x(t_{k}^{-})\) exist and \(x(t_{k}^{-})=x(t_{k})\);

  3. (iii)

    \(x(t)\) satisfies (1.1).

We shall reduce problem (1.1) to a problem without impulses. To this end, we first apply the transformation

$$ x(t)=c(t)y(t), $$
(2.1)

we convert problem (1.1) into

$$ \textstyle\begin{cases} -\lambda y''(t)=c^{-1}(t)\omega(t)f(t,c(t)y(t)), \quad t\in J, \\ ay(0)-by'(0)=\int_{0}^{1}h(s)c(s)y(s)\,ds, \\ ac(1)y(1)+bc(1)y'(1)=\int_{0}^{1}h(s)c(s)y(s)\,ds. \end{cases} $$
(2.2)
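To see why the transformation (2.1) removes the impulses (a brief justification added here for the reader's convenience), note that \(c(t)\) is constant between consecutive impulse points and \(c(t_{k}^{+})=(1+c_{k})c(t_{k})\). Hence, if y is continuous on J and \(x(t)=c(t)y(t)\), then

$$x\bigl(t_{k}^{+}\bigr)-x(t_{k})=c\bigl(t_{k}^{+}\bigr)y(t_{k})-c(t_{k})y(t_{k})=c_{k}c(t_{k})y(t_{k})=c_{k}x(t_{k}), \quad k=1,2,\ldots,n, $$

so the impulsive conditions of (1.1) hold automatically, while on each interval free of impulse points \(x''(t)=c(t)y''(t)\), which yields the first equation of (2.2); the boundary conditions of (2.2) follow from \(x(0)=y(0)\), \(x'(0)=y'(0)\), \(x(1)=c(1)y(1)\) and \(x'(1)=c(1)y'(1)\).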

From (1.1), (2.1) and (2.2) we obtain the following lemma.

Lemma 2.1

Assume that (H1)-(H4) hold. Then

  1. (i)

    if \(y(t)\) is a solution of problem (2.2) on J, then \(x(t)=c(t)y(t)\) is a solution of problem (1.1) on J;

  2. (ii)

    if \(x(t)\) is a solution of problem (1.1) on J, then \(y(t)=c^{-1}(t)x(t)\) is a solution of problem (2.2) on J, where \(c^{-1}(t)\) is defined in Remark  1.1.

Lemma 2.2

If (H1)-(H4) hold and y is a solution of problem (2.2), then y can be expressed in the form

$$ y(t) =\lambda^{-1} \int_{0}^{1}H(t,s)c^{-1}(s)\omega(s)f \bigl(s,c(s)y(s)\bigr)\,ds, $$
(2.3)

where

$$\begin{aligned}& H(t,s)=G(t,s)+\frac{A(t)}{1-\mu} \int_{0}^{1}G(s,\tau)c(\tau)h(\tau)\,d\tau , \end{aligned}$$
(2.4)
$$\begin{aligned}& G(t,s) =\frac{1}{d} \textstyle\begin{cases} (b+as)(b+a(1-t)),& 0\leq s \leq t \leq1, \\ (b+at)(b+a(1-s)),&0\leq t \leq s \leq1, \end{cases}\displaystyle \\& A(t)=\frac{(a+b-at)c(1)+a+b}{dc(1)},\quad d=a(a+2b). \end{aligned}$$
(2.5)

Proof

First suppose that y is a solution of problem (2.2). It is easy to see by integration of problem (2.2) that

$$\begin{aligned}& y'(t)=y'(0)-\lambda^{-1} \int_{0}^{t}c^{-1}(s)\omega(s)f \bigl(s,c(s)y(s)\bigr)\,ds, \end{aligned}$$
(2.6)
$$\begin{aligned}& y(t)=y(0)+y'(0)t-\lambda^{-1} \int_{0}^{t}(t-s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds. \end{aligned}$$
(2.7)

Letting \(t=1\) in (2.6), (2.7), we find

$$ \begin{aligned} &y'(1)=y'(0)- \lambda^{-1} \int_{0}^{1}c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds, \\ &y(1)=y(0)+y'(0)-\lambda^{-1} \int_{0}^{1}(1-s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds. \end{aligned} $$
(2.8)

Combining the boundary conditions \(ay(0)-by'(0)=\int _{0}^{1}h(s)c(s)y(s)\,ds\), \(ac(1)y(1)+bc(1) y'(1)=\int_{0}^{1}h(s)c(s)y(s)\,ds\) and (2.8), we obtain

$$\begin{aligned}& y'(0)=\frac{a\lambda^{-1}}{a+2b} \int_{0}^{1}(1-s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds \\& \hphantom{y'(0)={}}{}+\frac{b\lambda^{-1}}{a+2b} \int_{0}^{1}c^{-1}(s)\omega(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\& \hphantom{y'(0)={}}{}+\frac{1-c(1)}{(a+2b)c(1)} \int_{0}^{1}h(s)c(s)y(s)\,ds, \end{aligned}$$
(2.9)
$$\begin{aligned}& y(0)=\frac{b\lambda^{-1}}{a+2b} \int_{0}^{1}(1-s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds \\& \hphantom{y(0)={}}{}+\frac{b^{2}\lambda^{-1}}{a(a+2b)} \int_{0}^{1}c^{-1}(s)\omega(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\& \hphantom{y(0)={}}{}+\frac{b+(a+b)c(1)}{a(a+2b)c(1)} \int _{0}^{1}h(s)c(s)y(s)\,ds. \end{aligned}$$
(2.10)

Substituting (2.9), (2.10) into (2.7) and letting

$$A(t)=\frac{(a+b-at)c(1)+a+b}{a(a+2b)c(1)},\qquad \mu= \int_{0}^{1}A(t)h(t)c(t)\,dt, $$

we have

$$ \begin{aligned} &y(t)=\lambda^{-1} \int_{0}^{1}G(t,s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds+A(t) \int_{0}^{1}h(s)c(s)y(s)\,ds, \\ & \int_{0}^{1}h(s)c(s)y(s)\,ds \\ &\quad ={\frac{\lambda^{-1}}{1-\mu}} \int_{0}^{1}h(s)c(s) \biggl( \int_{0}^{1}G(\tau,s)c^{-1}(\tau)\omega( \tau )f\bigl(\tau,c(\tau)y(\tau)\bigr)\,d\tau \biggr)\,ds. \end{aligned} $$
(2.11)

Therefore, we have

$$\begin{aligned} y(t) =&\lambda^{-1} \int_{0}^{1}G(t,s)c^{-1}(s)\omega(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\ &{}+\frac{\lambda^{-1}A(t)}{1-\mu} \int_{0}^{1}h(s)c(s) \biggl( \int_{0}^{1}G(\tau,s)c^{-1}(\tau)\omega( \tau )f\bigl(\tau,c(\tau)y(\tau)\bigr)\,d\tau \biggr)\,ds \\ =&\lambda^{-1} \int_{0}^{1}G(t,s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds \\ &{}+\lambda^{-1} \int_{0}^{1} \biggl({\frac{A(t)}{1-\mu}} \int_{0}^{1}G(\tau ,s)c(\tau)h(\tau)\,d\tau \biggr)c^{-1}(s)\omega(s)f\bigl(s,c(s)y(s)\bigr)\,ds. \end{aligned}$$
(2.12)

Let

$$ H(t,s)=G(t,s)+\frac{A(t)}{1-\mu} \int_{0}^{1}G(\tau,s)c(\tau)h(\tau)\,d\tau , $$
(2.13)

then

$$ y(t)=\lambda^{-1} \int_{0}^{1}H(t,s)c^{-1}(s)\omega (s)f \bigl(s,c(s)y(s)\bigr)\,ds. $$
(2.14)

The proof of the lemma is complete. □
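As an independent sanity check (not part of the original proof), the representation (2.3) can be verified symbolically in the simplest setting without impulses. The following sketch assumes SymPy is available and uses the illustrative choice \(a=b=1\), \(h\equiv0\) (so \(\mu=0\) and \(H=G\)), \(c\equiv1\), \(\omega\equiv1\), \(f\equiv1\), \(\lambda=1\); then (2.3) reduces to \(y(t)=\int_{0}^{1}G(t,s)\,ds\), which should solve \(-y''=1\), \(y(0)-y'(0)=0\), \(y(1)+y'(1)=0\).

```python
import sympy as sp

t, s = sp.symbols("t s")
a, b = 1, 1
d = a * (a + 2 * b)

G_lower = (b + a * s) * (b + a * (1 - t)) / d      # s <= t branch of (2.5)
G_upper = (b + a * t) * (b + a * (1 - s)) / d      # t <= s branch of (2.5)
y = sp.integrate(G_lower, (s, 0, t)) + sp.integrate(G_upper, (s, t, 1))
y = sp.simplify(y)

print(y)                                           # -t**2/2 + t/2 + 1/2 (up to reordering)
print(sp.simplify(-sp.diff(y, t, 2)))              # 1, i.e. -y'' = 1
print(sp.simplify(y.subs(t, 0) - sp.diff(y, t).subs(t, 0)))   # 0
print(sp.simplify(y.subs(t, 1) + sp.diff(y, t).subs(t, 1)))   # 0
```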

Lemma 2.3

Let \(\theta\in(0,\frac{1}{2})\) and \(\theta_{i}\) be defined in (3.4). Noticing that \(a, b>0\), it follows from (2.4) and (2.5) that

$$\begin{aligned}& 0< \alpha_{1}\leq G(t,s)\leq G(s,s)\leq\beta_{1}, \qquad H(t,s)\leq DG(s,s)\leq D\beta_{1}=\beta, \quad \forall t,s\in J, \end{aligned}$$
(2.15)
$$\begin{aligned}& G(t,s)\geq\delta G(s,s), \qquad H(t,s)\geq\delta G(s,s),\quad \forall t\in [ \theta,1-\theta], s\in J, \end{aligned}$$
(2.16)
$$\begin{aligned}& G(t,s)\geq\alpha^{*}_{i},\qquad H(t,s)\geq \alpha_{i},\quad \forall t\in[\theta _{i},1- \theta_{i}], s\in J, \end{aligned}$$
(2.17)

where

$$ \begin{aligned} &\beta_{1}=\frac{a+2b}{4a}, \qquad \alpha_{1}= \frac{b^{2}}{d},\qquad \alpha ^{*}_{i}=\frac{b(b+a\theta_{i})}{d}, \\ &\alpha_{i}=\alpha^{*}_{i}D_{i}, \qquad \beta=D\beta_{1},\qquad \delta=\frac{b+a\theta}{b+a}, \end{aligned} $$
(2.18)

and

$$\begin{aligned}& D_{i}=\frac{1-\mu+A_{i}\int_{\theta_{i}}^{1-\theta_{i}}c(\tau)h(\tau )\,d\tau}{1-\mu},\qquad D=\frac{1-\mu+A_{M}\int_{0}^{1}c(\tau)h(\tau)\,d\tau }{1-\mu}, \\& A_{M}=\max_{t\in J}A(t),\qquad A_{i}= \frac{(a+b-a\theta_{i})c(1)+a+b}{dc(1)}. \end{aligned}$$

Proof

It is obvious that (2.15) and (2.16) hold by the definition of \(G(t,s)\) and \(H(t,s)\).

Next, we show that (2.17) holds for \(t\in[\theta_{i},1-\theta_{i}]\), \(s\in J\). In fact, if \(s\le t\), it follows from (2.5) that

$$G(t,s)\geq\frac{1}{d}(b+as) (b+a\theta_{i})\geq \frac{b(b+a\theta_{i})}{d}. $$

Similarly, we can prove that \(G(t,s)\ge\frac{b(b+a\theta_{i})}{d}\), \(\forall 0\le t\le s\le1\).

Therefore,

$$G(t,s)\ge\alpha^{*}_{i}, \quad \forall t\in[ \theta_{i},1-\theta_{i}], s\in J. $$

And then, by (2.4), for \(t\in[\theta_{i},1-\theta_{i}]\), \(s\in J\), we have

$$\begin{aligned} H(t,s) =&G(t,s)+\frac{A(t)}{1-\mu} \int_{0}^{1}G(s,\tau)c(\tau)h(\tau)\,d\tau \\ \ge&\alpha^{*}_{i}+\frac{A_{i}}{1-\mu} \int _{0}^{1}G(s,\tau)c(\tau)h(\tau)\,d\tau \\ \ge&\alpha^{*}_{i}+\frac{A_{i}}{1-\mu} \int_{\theta _{i}}^{1-\theta_{i}}G(s,\tau)c(\tau)h(\tau)\,d\tau \\ \ge&\alpha^{*}_{i}+\frac{A_{i}\alpha^{*}_{i}}{1-\mu } \int_{\theta_{i}}^{1-\theta_{i}}c(\tau)h(\tau)\,d\tau \\ =&\alpha^{*}_{i}D_{i}. \end{aligned}$$

So,

$$H(t,s)\geq\alpha_{i},\quad \forall t\in[\theta_{i},1- \theta_{i}], s\in J. $$

The proof is complete. □

Lemma 2.4

see [15]

Let E be a real Banach space and K be a cone in E. For \(r>0\), define \(K_{r}=\{x\in K: \|x\|< r \}\). Assume that \(T: \bar{K}_{r}\rightarrow K\) is completely continuous such that \(Tx\neq x\) for \(x\in\partial K_{r}=\{x\in K:\|x\|=r\}\).

  1. (i)

    If \(\|Tx\|\geq\|x\|\) for \(x\in\partial K_{r}\), then \(i(T, K_{r}, K)=0\).

  2. (ii)

    If \(\|Tx\|\leq\|x\|\) for \(x\in\partial K_{r}\), then \(i(T, K_{r}, K)=1\).

Lemma 2.5

Hölder

Let \(e\in L^{p}[a,b]\) with \(p>1\), \(h\in L^{q}[a,b]\) with \(q>1\), and \(\frac{1}{p}+\frac{1}{q}=1\). Then \(eh\in L^{1}[a,b]\) and

$$\| eh\|_{1}\le\| e\|_{p}\| h\|_{q}. $$

Let \(e\in L^{1}[a,b]\), \(h\in L^{\infty}[a,b]\). Then \(eh\in L^{1}[a,b]\) and

$$\| eh\|_{1}\le\| e\|_{1}\| h\|_{\infty}. $$

3 The existence of two infinite families of positive solutions

In this section, applying the well-known fixed point index theory in a cone, we obtain solvable intervals of the parameter λ in which problem (1.1) has two infinite families of positive solutions. We remark that our methods are entirely different from those used in [8–14].

Let \(E=C[0,1]\). Then E is a real Banach space with the norm \(\| \cdot\|\) defined by

$$\|y\|=\max_{ t\in J} \bigl\vert y(t) \bigr\vert , \quad y\in E. $$

Define a cone K in E by

$$ K= \Bigl\{ y\in E: y(t)\geq0, t\in J, \min_{t\in[\theta,1-\theta]}y(t) \geq \delta_{D} \|y\| \Bigr\} , $$
(3.1)

where \(\delta_{D}=\frac{\delta}{D}\).

Remark 3.1

It follows from the definition of δ and D that \(0<\delta_{D}<1\).

Define \(T_{\lambda}:K\rightarrow K\) by

$$ (T_{\lambda}y) (t)=\frac{1}{\lambda} \int_{0}^{1}H(t,s)\omega(s)c^{-1}(s) f \bigl(s,c(s)y(s)\bigr)\,ds. $$
(3.2)
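To make the structure of \(T_{\lambda}\) concrete, here is a minimal discretized sketch of (3.2) using trapezoidal quadrature. All problem data in it (a, b, the impulse list, h, ω, f and λ) are illustrative placeholders rather than data from the paper; in particular the placeholder weight is smooth, unlike the singular ω of (H1)-(H2), and serves only to illustrate the quadrature structure.

```python
import numpy as np

# Illustrative data only (placeholders, not from the paper):
a, b, lam = 1.0, 1.0, 1.0
d = a * (a + 2 * b)
impulses = [(0.5, 0.5)]                        # pairs (t_k, c_k) with c_k > -1

def c(t):                                      # c(t) = product of (1 + c_k) over t_k < t
    p = 1.0
    for tk, ck in impulses:
        if tk < t:
            p *= 1.0 + ck
    return p

h = lambda t: 0.1 * t                          # nonnegative h in C[0,1]
omega = lambda t: 2.0 + np.cos(np.pi * t)      # weight, bounded below (xi = 1)
f = lambda t, x: (1.0 + t) * x                 # continuous nonnegative nonlinearity

s = np.linspace(0.0, 1.0, 401)                 # quadrature grid on J = [0,1]
w = np.full_like(s, s[1] - s[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
cv = np.array([c(si) for si in s])

# G(t,s) from (2.5) as a matrix on the grid, A(t) from (2.5), mu from (H4)
G = (b + a * np.minimum.outer(s, s)) * (b + a * (1.0 - np.maximum.outer(s, s))) / d
A = ((a + b - a * s) * c(1.0) + a + b) / (d * c(1.0))
mu = np.sum(A * h(s) * cv * w)                 # must satisfy mu < 1
inner = G @ (cv * h(s) * w)                    # integral of G(s,tau) c(tau) h(tau) d tau

def T(y):
    """Grid values of (T_lambda y)(t) for grid values y, using H(t,s) from (2.4)."""
    g = omega(s) * f(s, cv * y) / cv           # omega(s) c^{-1}(s) f(s, c(s) y(s))
    return (G @ (g * w) + A / (1.0 - mu) * np.sum(inner * g * w)) / lam

Ty = T(np.ones_like(s))                        # apply T_lambda once to y = 1
print(Ty.min(), Ty.max())
```

Iterating T (for instance, fixed-point iteration starting from y ≡ 1) gives a rough numerical picture of the fixed points whose existence the theorems below establish.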

Theorem 3.1

Assume that (H1)-(H4) hold. Then \(T_{\lambda}(K)\subset K\) and \(T_{\lambda}:K\rightarrow K\) is completely continuous.

Proof

For \(y\in K\), it follows from (2.15) and (3.2) that

$$\begin{aligned} (T_{\lambda}y) (t) =&\frac{1}{\lambda} \int_{0}^{1}H(t,s)\omega (s)c^{-1}(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\ \leq&\frac{1}{\lambda}D \int_{0}^{1}G(s,s)\omega (s)c^{-1}(s)f \bigl(s,c(s)y(s)\bigr)\,ds,\quad t \in J. \end{aligned}$$
(3.3)

It follows from (2.16), (3.2) and (3.3) that

$$\begin{aligned} \min_{t\in[\theta,1-\theta]}(T_{\lambda}y) (t) =&\frac{1}{\lambda }\min _{t\in[\theta,1-\theta]} \int_{0}^{1}H(t,s)\omega(s)c^{-1}(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\ \geq&\frac{1}{\lambda}\delta \int_{0}^{1}G(s,s)\omega (s)c^{-1}(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\ \geq&\frac{1}{\lambda}\frac{\delta}{D}D \int_{0}^{1}G(s,s)\omega (s)c^{-1}(s)f \bigl(s,c(s)y(s)\bigr)\,ds \\ \geq&\delta_{D}\|T_{\lambda}y\|. \end{aligned}$$

Next, by arguments similar to those in the proof of Theorem 1 in [16], one can prove that \(T_{\lambda}:K\rightarrow K\) is completely continuous. We omit the details, and the theorem is proved. □

Remark 3.2

From (3.2), we know that \(y\in E\) is a solution of problem (2.2) if and only if y is a fixed point of operator \(T_{\lambda}\).

Let \(\{\theta_{i}\}_{i=1}^{\infty}\) be such that \(t_{i+1}'<\theta _{i}<t_{i}'\), \(i=1,2,\ldots\) . Then, for any \(i\in\mathrm{N}\), we define the cone \(K_{\theta_{i}}\) by

$$ K_{\theta_{i}}=\Bigl\{ y\in E: y(t)\geq0, t\in J, \min_{t\in[\theta_{i},1-\theta_{i}]}y(t) \geq\delta_{iD} \|y\|\Bigr\} , $$
(3.4)

where

$$ \delta_{iD}=\frac{\delta_{i}}{D},\qquad \delta_{i}= \frac{b+a\theta _{i}}{a+b}, \quad i=1,2,\ldots. $$
(3.5)

Remark 3.3

Assume that (H1)-(H4) hold. Then \(T_{\lambda}(K_{\theta_{i}})\subset K_{\theta_{i}}\) and \(T_{\lambda}: K_{\theta_{i}}\rightarrow K_{\theta_{i}}\) is completely continuous.

Next, using Lemmas 2.1-2.5, we give our main results for the cases \(\omega\in L^{p}[0,1]\) with \(p>1\), \(p=1\) and \(p=\infty\).

For convenience, we write

$$\begin{aligned}& f_{\delta_{D} \rho}^{\rho}=\min\biggl\{ \min_{t\in[\theta,1-\theta ]} \frac{f(t,y)}{\rho}:y\in[\delta_{D}\rho,\rho]\biggr\} ,\qquad f_{0}^{\rho}=\max \biggl\{ \max_{t\in J} \frac{f(t,y)}{\rho}:y\in[0,\rho]\biggr\} , \\& K_{\rho\theta_{i}}=\{y\in K_{\theta_{i}}: \|y\|< \rho\}, \quad \rho>0. \end{aligned}$$

Firstly, we consider the case \(p>1\).

Theorem 3.2

Assume that (H1)-(H4) hold. Let \(\{r_{i}\} _{i=1}^{\infty}\), \(\{\gamma_{i}\}_{i=1}^{\infty}\) and \(\{R_{i}\} _{i=1}^{\infty}\) be such that

$$R_{i+1}< \delta_{iD}r_{i}< r_{i}< c_{m} \delta_{iD}\gamma_{i}< c_{M}\gamma _{i}< R_{i},\quad i=1,2,\ldots. $$

For each natural number i, let f satisfy the following conditions:

(H5):

\(f_{0}^{c_{M}r_{i}}\leq l\) and \(f_{0}^{c_{M}R_{i}}\leq l\), where

$$ 0< l\leq\max \biggl\{ \frac{2\lambda c_{m}}{c_{M}\|G\|_{q}\|\omega\| _{p}},\frac{2\lambda c_{m}}{c_{M}\|G\|_{1}\|\omega\|_{\infty}}, \frac{2\lambda c_{m}}{c_{M}\beta\|\omega\|_{1}} \biggr\} ; $$
(3.6)
(H6):

\(f_{c_{m}\delta_{iD}\gamma_{i}}^{c_{M}\gamma_{i}}\geq \eta\), where \(\eta>0\).

Then there exists \(\tau>0\) such that, for \(0<\lambda<\tau\), problem (1.1) has two infinite families of positive solutions \(x_{i\lambda}^{(1)}(t)\), \(x_{i\lambda}^{(2)}(t)\) and \(\max_{t\in J}x_{i\lambda}^{(1)}(t)>c_{m}\delta_{iD}\gamma _{i}\), \(i=1,2,\ldots \) .

Proof

Let \(\tau=\inf\{\tau_{i}\}\), \(\tau_{i}=\alpha _{i}c_{m}^{-1} \xi\eta(1-2\theta_{i})\gamma_{i}^{-1}\), \(i=1,2,\ldots \) . Then, for \(0<\lambda<\tau\), (3.1) and Theorem 3.1 imply that \(T_{\lambda }:K\rightarrow K\) is completely continuous.

Let \(t\in J\), \(y\in\partial K_{r_{i}\theta_{i}}\). Then \(0\leq c(t)y(t)\leq c_{M}r_{i}\). Therefore, for \(t\in J\), \(y\in\partial K_{r_{i}\theta_{i}}\), it follows from \(f_{0}^{c_{M}r_{i}}\leq l\) that

$$\begin{aligned} (T_{\lambda}y) (t) =& \frac{1}{\lambda} \int_{0}^{1}H(t,s)\omega(s)c^{-1}(s) f \bigl(s,c(s)y(s)\bigr)\,ds \\ \leq&\frac{1}{\lambda}c_{m}^{-1} \int _{0}^{1}H(s,s)\omega(s) f\bigl(s,c(s)y(s)\bigr) \,ds \\ \leq&\frac{1}{\lambda}c_{m}^{-1}\|H\|_{q}\| \omega \|_{p} lc_{M}r_{i} \\ \leq&\frac{1}{2}r_{i}< r_{i}. \end{aligned}$$
(3.7)

Consequently, for \(y \in\partial K_{r_{i}\theta_{i}}\), we have \(\|T_{\lambda}y\|<\|y\|\), i.e., by Lemma 2.4,

$$ i(T_{\lambda}, K_{r_{i}\theta_{i}}, K_{\theta_{i}})=1. $$
(3.8)

Similarly, for \(y \in\partial K_{R_{i}\theta_{i}}\), we have \(\|T_{\lambda}y\|<\|y\|\), and then it follows from Lemma 2.4 that

$$ i(T_{\lambda}, K_{R_{i}\theta_{i}}, K_{\theta_{i}})=1. $$
(3.9)

On the other hand, let

$$y \in\bar{K}_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}= \Bigl\{ y\in K_{\theta _{i}}: \|y\| \leq\gamma_{i}, \min_{t\in[\theta_{i},1-\theta _{i}]}y(t)\geq \delta_{iD}\gamma_{i} \Bigr\} , $$

then \(0\leq c(t)y(t)\leq c_{M}\|y\|\leq c_{M}\gamma_{i}\). Hence, it follows from (2.15) and (3.2) that

$$ \|T_{\lambda}y\|\leq\frac{1}{\lambda}c_{m}^{-1}\|H \|_{q}\|\omega\|_{p} lc_{M}\gamma_{i}< \gamma_{i}. $$
(3.10)

Furthermore, for \(y \in\bar{K}_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}\), we have \(c(t)y(t)\leq c_{M}\gamma_{i}\), \(t\in J\), \(\min_{t\in[\theta _{i},1-\theta_{i}]}c(t)y(t)\geq c_{m}\delta_{iD}\gamma_{i}\), and then

$$\begin{aligned} \min_{t\in[\theta_{i},1-\theta_{i}]}(T_{\lambda}y) (t) =& \min_{t\in[\theta_{i},1-\theta_{i}]} \frac{1}{\lambda} \int _{0}^{1}H(t,s)\omega(s)c^{-1}(s) f \bigl(s,c(s)y(s)\bigr)\,ds \\ \geq&\frac{1}{ \lambda}\alpha_{i}c^{-1}_{m}\xi \int_{0}^{1} f\bigl(s,c(s)y(s)\bigr)\,ds \\ \geq&\frac{1}{ \lambda}\alpha_{i}c^{-1}_{m}\xi \int_{\theta_{i}}^{1-\theta_{i}} f\bigl(s,c(s)y(s)\bigr)\,ds \\ \geq&\frac{1}{ \lambda}\alpha_{i}c^{-1}_{m} \xi(1-2\theta_{i})\eta \\ >& \frac{1}{ \tau_{i}}\alpha_{i}c^{-1}_{m}\xi(1-2 \theta_{i})\eta \\ =&\gamma_{i}. \end{aligned}$$
(3.11)

Let \(y_{0}\equiv\frac{\delta_{iD}\gamma_{i}+\gamma_{i}}{2}\) and \(F(t,y)=(1-t)T_{\lambda}y+ty_{0}\). Then \(F:J\times\bar{K}_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}\rightarrow K_{\theta_{i}}\) is completely continuous. From the analysis above, we obtain, for \((t,y)\in J\times\bar{K}_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}\),

$$ F(t,y)\in K_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}. $$
(3.12)

Therefore, for \(t\in J\), \(y\in\partial K_{\delta_{iD}\gamma_{i}\theta _{i}}^{\gamma_{i}}\), we have \(F(t,y)\neq y\). Hence, by the normality property and the homotopy invariance property of the fixed point index, we obtain

$$ i\bigl(T_{\lambda}, K_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}, K_{\theta_{i}} \bigr)=i\bigl(y_{0}, K_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma _{i}}, K_{\theta_{i}} \bigr)=1. $$
(3.13)

Consequently, by the solution property of the fixed point index, \(T_{\lambda}\) has a fixed point \(y_{i\lambda}^{(1)} \in K_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma _{i}}\). By Lemma 2.2 and Remark 3.2, it follows that \(y_{i\lambda}^{(1)}\) is a solution of problem (2.2), and

$$\max_{t\in J}y_{i\lambda}^{(1)}(t)\geq \min _{t\in[\theta_{i},1-\theta_{i}]}y_{i\lambda }^{(1)}(t)>\delta_{iD} \gamma_{i}. $$

Therefore, it follows from Lemma 2.1 that problem (1.1) has a solution \(x_{i\lambda}^{(1)}(t)=c(t)y_{i\lambda}^{(1)}(t)\) with

$$\max_{t\in J}x_{i\lambda}^{(1)}(t)=\max _{t\in J}c(t)y_{i\lambda}^{(1)}(t)\geq \min _{t\in[\theta_{i},1-\theta_{i}]}c(t)y_{i\lambda }^{(1)}(t)>c_{m} \delta_{iD}\gamma_{i}. $$

On the other hand, from (3.8), (3.9) and (3.13) together with the additivity of the fixed point index, we get

$$\begin{aligned}& i\bigl(T_{\lambda}, K_{R_{i}\theta_{i}}\backslash\bigl(\bar{K}_{r_{i}\theta _{i}} \cup\bar{K}_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}\bigr), K_{\theta_{i}}\bigr) \\& \quad =i(T_{\lambda}, K_{R_{i}\theta_{i}}, K_{\theta_{i}})-i \bigl(T_{\lambda}, K_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}}, K_{\theta _{i}} \bigr)-i(T_{\lambda}, K_{r_{i}\theta_{i}}, K_{\theta_{i}}) \\& \quad =1-1-1=-1. \end{aligned}$$
(3.14)

Hence, by the solution property of the fixed point index, \(T_{\lambda}\) has a fixed point \(y_{i\lambda}^{(2)} \in K_{R_{i}\theta_{i}}\backslash(\bar{K}_{r_{i}\theta_{i}}\cup \bar{K}_{\delta_{iD}\gamma_{i}\theta_{i}}^{\gamma_{i}})\). By Lemma 2.2 and Remark 3.2, it follows that \(y_{i\lambda}^{(2)}\) is also a solution of problem (2.2), and \(y_{i\lambda}^{(1)}\neq y_{i\lambda}^{(2)}\). Then, by Lemma 2.1, problem (1.1) has another positive solution \(x_{i\lambda }^{(2)}(t)=c(t)y_{i\lambda}^{(2)}(t)\). Since \(i\in\mathrm{N}\) was arbitrary, the proof is complete. □

The following results deal with the case \(p=\infty\).

Theorem 3.3

Assume that (H1)-(H4) hold. Let \(\{r_{i}\} _{i=1}^{\infty}\), \(\{\gamma_{i}\}_{i=1}^{\infty}\) and \(\{R_{i}\} _{i=1}^{\infty}\) be such that

$$R_{i+1}< \delta_{iD}r_{i}< r_{i}< c_{m} \delta_{iD}\gamma_{i}< c_{M}\gamma _{i}< R_{i},\quad i=1,2,\ldots. $$

For each natural number i, let f satisfy (H5) and (H6). Then there exists \(\tau>0\) such that, for \(0<\lambda<\tau\), problem (1.1) has two infinite families of positive solutions \(x_{i\lambda}^{(1)}(t)\), \(x_{i\lambda}^{(2)}(t)\) with \(\max_{t\in J}x_{i\lambda}^{(1)}(t)>c_{m}\delta_{iD}\gamma _{i}\), \(i=1,2,\ldots \) .

Proof

Replace \(\|G\|_{q}\|\omega\|_{p}\) by \(\|G\|_{1}\|\omega\|_{\infty}\) and repeat the argument of Theorem 3.2. □

Finally, we consider the case of \(p=1\).

Theorem 3.4

Assume that (H1)-(H4) hold. Let \(\{r_{i}\} _{i=1}^{\infty}\), \(\{\gamma_{i}\}_{i=1}^{\infty}\) and \(\{R_{i}\} _{i=1}^{\infty}\) be such that

$$R_{i+1}< \delta_{iD}r_{i}< r_{i}< c_{m} \delta_{iD}\gamma_{i}< c_{M}\gamma _{i}< R_{i},\quad i=1,2,\ldots. $$

For each natural number i, let f satisfy (H5) and (H6). Then there exists \(\tau>0\) such that, for \(0<\lambda<\tau\), problem (1.1) has two infinite families of positive solutions \(x_{i\lambda}^{(1)}(t)\), \(x_{i\lambda}^{(2)}(t)\) with \(\max_{t\in J}x_{i\lambda}^{(1)}(t)>c_{m}\delta_{iD}\gamma _{i}\), \(i=1,2,\ldots \) .

Proof

Replace \(\|G\|_{q}\|\omega\|_{p}\) by \(\beta\|\omega\|_{1}\) and repeat the argument of Theorem 3.2. □

Remark 3.4

Compared with Kaufmann and Kosmatov [8], the main features of this paper are as follows.

  1. (i)

    Solvable intervals of the positive parameter λ are given.

  2. (ii)

    Two infinite families of positive solutions are obtained.

  3. (iii)

    The impulse coefficients satisfy \(c_{k}>-1\), \(k=1, 2, \ldots, n\), not only \(c_{k}\equiv0\).

4 Examples

From Section 3 it is not difficult to see that (H1) and (H2) play an important role in proving that problem (1.1) has two infinite families of positive solutions. So we first provide an example of a function \(\omega(t)\) satisfying conditions (H1) and (H2), and then we consider a boundary value problem associated with problem (1.1).

Example 4.1

We exhibit a function \(\omega(t)\) satisfying conditions (H1) and (H2).

Let

$$t_{n}'=\frac{3}{7}-\frac{1}{5}\sum _{i=1}^{n}\frac{1}{i^{4}},\quad n=1,2, \ldots. $$

It is easy to see that

$$ t_{1}'=\frac{3}{7}-\frac{1}{5}= \frac{8}{35}< \frac{1}{2},\qquad t_{n}'-t_{n+1}'= \frac{1}{5(n+1)^{4}}, \quad n=1,2,\ldots, $$

and, since \(\sum_{n=1}^{\infty}\frac{1}{n^{4}}=\frac{\pi ^{4}}{90}\), we have

$$t^{*}=\lim_{n\rightarrow\infty}t_{n}'= \frac{3}{7}-\frac {1}{5}\sum_{n=1}^{\infty} \frac{1}{n^{4}} =\frac{3}{7}-\frac{1}{5}\cdot\frac{\pi^{4}}{90}= \frac{3}{7}-\frac{\pi ^{4}}{450}>\frac{2}{21}>0, $$

so \(t_{i}'\downarrow t^{*}>0\), as required in (H2).

Consider the function

$$\omega(t)=\sum_{n=1}^{\infty} \omega_{n}(t),\quad t\in J, $$

where \(t_{0}':=\frac{3}{7}\) (the value given by the defining formula for \(t_{n}'\) at \(n=0\)) and

$$\omega_{n}(t)= \textstyle\begin{cases} \frac{1}{(5n-1)(5n+4)(t_{n}'+t_{n+1}')},& t\in[0,\frac {t_{n}'+t_{n+1}'}{2}), \\ \frac{1}{\sqrt{t_{n}'-t}},& t\in[\frac{t_{n}'+t_{n+1}'}{2},t_{n}'), \\ \frac{1}{\sqrt{t-t_{n}'}},& t\in[t_{n}',\frac{t_{n}'+t_{n-1}'}{2}], \\ \frac{1}{(5n-1)(5n+4)(2-t_{n}'-t_{n-1}')}, & t\in(\frac {t_{n}'+t_{n-1}'}{2},1]. \end{cases} $$

From \(\sum_{n=1}^{\infty}\frac{1}{(5n-1)(5n+4)}=\frac{1}{20}\) and \(\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{\pi^{2}}{6}\), we have

$$\begin{aligned} \sum_{n=1}^{\infty} \int_{0}^{1}\omega_{n}(t)\,dt =& \sum _{n=1}^{\infty} \biggl\{ \int_{0}^{(t_{n}'+t_{n+1}')/2}\frac {1}{(5n-1)(5n+4)(t_{n}'+t_{n+1}')}\,dt \\ &{}+ \int_{(t_{n-1}'+t_{n}')/2}^{1}\frac {1}{(5n-1)(5n+4)(2-t_{n}'-t_{n-1}')}\,dt \\ &{}+ \int_{(t_{n}'+t_{n+1}')/2}^{t_{n}'}\frac{1}{\sqrt{t_{n}'-t}}\,dt+ \int_{t_{n}'}^{(t_{n-1}'+t_{n}')/2}\frac{1}{\sqrt {t-t_{n}'}}\,dt \biggr\} \\ =&\sum_{n=1}^{\infty}\frac{1}{(5n-1)(5n+4)}+ \sqrt{2}\sum_{n=1}^{\infty}\Bigl(\sqrt {t_{n}'-t_{n+1}' }+\sqrt {t_{n-1}'-t_{n}' }\Bigr) \\ =&\frac{1}{20}+\frac{\sqrt{10}}{5}\sum_{n=1}^{\infty} \biggl(\frac {1}{(n+1)^{2}}+\frac{1}{n^{2}}\biggr) \\ =&\frac{1}{20}+\frac{\sqrt{10}}{5}\biggl(\frac{\pi^{2}}{3}-1\biggr). \end{aligned}$$

Thus, it is easy to see

$$\int_{0}^{1}\omega(t)\,dt= \int_{0}^{1}\sum_{n=1}^{\infty} \omega_{n}(t)\,dt= \sum_{n=1}^{\infty} \int_{0}^{1}\omega_{n}(t)\,dt= \frac{1}{20}+\frac {\sqrt{10}}{5}\biggl(\frac{\pi^{2}}{3}-1\biggr)< \infty, $$

which shows that \(\omega(t)\in L^{1}[0,1]\).

On the other hand, a simple calculation shows that \(\omega(t)=\sum_{n=1}^{\infty}\omega_{n}(t)\geq\xi=\frac{21}{38}\times\frac {1}{20}\). So ω satisfies conditions (H1) and (H2); a numerical cross-check of \(\int_{0}^{1}\omega(t)\,dt\) is sketched below.
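The following sketch (assuming a standard Python environment; not part of the original example) sums the exact per-n values of \(\int_{0}^{1}\omega_{n}(t)\,dt\), computed from the piecewise antiderivatives above, and compares the partial sums with the closed form \(\frac{1}{20}+\frac{\sqrt{10}}{5}(\frac{\pi^{2}}{3}-1)\approx1.4982\).

```python
import math

def omega_n_integral(n: int) -> float:
    # Exact value of the integral of omega_n over [0,1]:
    #   the two constant pieces contribute 1/((5n-1)(5n+4)) in total,
    #   the two square-root pieces contribute
    #   2*sqrt((t'_n - t'_{n+1})/2) + 2*sqrt((t'_{n-1} - t'_n)/2),
    #   with t'_{n-1} - t'_n = 1/(5 n^4) and t'_n - t'_{n+1} = 1/(5 (n+1)^4).
    const_part = 1.0 / ((5 * n - 1) * (5 * n + 4))
    sqrt_part = 2 * math.sqrt(1 / (10 * (n + 1) ** 4)) + 2 * math.sqrt(1 / (10 * n ** 4))
    return const_part + sqrt_part

N = 100_000
partial = sum(omega_n_integral(n) for n in range(1, N + 1))
closed_form = 1 / 20 + math.sqrt(10) / 5 * (math.pi ** 2 / 3 - 1)
print(f"partial sum (N={N}): {partial:.5f}")     # ~ 1.49823
print(f"closed form        : {closed_form:.5f}") # ~ 1.49824
```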

Example 4.2

Let \(\omega(t)\) be defined as in Example 4.1. Consider the following boundary value problem:

$$ \textstyle\begin{cases} \lambda x''(t)+\omega(t)f(t,x(t))=0,\quad t\in J, t\neq\frac{1}{2}, \\ x(\frac{1}{2}^{+})-x(\frac{1}{2})=\frac{1}{2}x(\frac{1}{2}), \\ \frac{1}{2}x(0)-\frac{1}{4}x'(0)=\frac{6}{119}\int_{0}^{1}sx(s)\,ds, \\ \frac{1}{2}x(1)+\frac{1}{4}x'(1)=\frac{6}{119}\int_{0}^{1}sx(s)\,ds. \end{cases} $$
(4.1)

Problem (4.1) is of the form (1.1) with \(c_{1}=t_{1}=\frac{1}{2}\), \(h(t)=\frac{6}{119}t\), \(a=\frac{1}{2}\), \(b=\frac{1}{4}\). Then \(d=\frac{1}{2}\), and

$$c(t)= \textstyle\begin{cases} 1, & 0\le t\le\frac{1}{2}, \\ \frac{3}{2}, & \frac{1}{2}< t\le1, \end{cases} $$

and then \(c_{M}=\frac{3}{2}\), \(c_{m}=1\), \(c(1)=\frac{3}{2}\).

Similarly, a simple calculation shows that \(A(t)=\frac{5}{2}-t\), \(A_{M}=\frac{5}{2}\), \(D=\frac{130}{119}\), \(\beta=\frac{65}{119}\), \(\delta _{i}=\frac{1+2\theta_{i}}{3}\), \(\delta_{iD}=\frac{119(1+2\theta _{i})}{390}\), \(i=1,2,\ldots \) , and

$$\begin{aligned}& \mu= \int_{0}^{1}A(t)h(t)c(t)\,dt \\& \hphantom{\mu}=\frac{6}{119} \biggl[ \int_{0}^{\frac{1}{2}}\biggl(\frac{5}{2}-t\biggr)t \,dt+\frac {3}{2} \int_{\frac{1}{2}}^{1}\biggl(\frac{5}{2}-t\biggr)t\,dt \biggr] \\& \hphantom{\mu}=\frac{1}{16}, \\& H(t,s)=G(t,s)+\frac{8(5-2t)}{15} \int_{0}^{1}G(s,\tau)c(\tau)h(\tau)\,d\tau, \\& G(t,s) =\frac{1}{8} \textstyle\begin{cases} (1+2s)(3-2t),&0\leq s \leq t \leq1, \\ (1+2t)(3-2s),&0\leq t \leq s \leq1. \end{cases}\displaystyle \end{aligned}$$
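These constants can be cross-checked numerically; the following sketch (assuming SciPy is available; not part of the original example) reproduces \(\mu=\frac{1}{16}\), \(D=\frac{130}{119}\) and \(\beta=\frac{65}{119}\).

```python
from scipy.integrate import quad

a, b = 0.5, 0.25
d = a * (a + 2 * b)                        # d = 1/2

def c(t):                                  # one impulse: t_1 = 1/2, c_1 = 1/2
    return 1.0 if t <= 0.5 else 1.5

def h(t):
    return 6.0 / 119.0 * t

def A(t):                                  # A(t) = ((a+b-at)c(1) + a+b)/(d c(1)) = 5/2 - t
    return ((a + b - a * t) * c(1.0) + a + b) / (d * c(1.0))

mu, _ = quad(lambda t: A(t) * h(t) * c(t), 0.0, 1.0, points=[0.5])
ch, _ = quad(lambda t: c(t) * h(t), 0.0, 1.0, points=[0.5])
A_M = A(0.0)                               # A is decreasing, so A_M = A(0) = 5/2
D = (1.0 - mu + A_M * ch) / (1.0 - mu)
beta = D * (a + 2 * b) / (4 * a)           # beta = D * beta_1 with beta_1 = (a+2b)/(4a) = 1/2
print(mu, 1 / 16)                          # 0.0625
print(D, 130 / 119)                        # ~ 1.092437
print(beta, 65 / 119)                      # ~ 0.546218
```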

Now we consider the multiplicity of positive solutions for problem (4.1).

Let \(f(t,x)=\frac{1}{27\|\omega\|_{1}\beta}(t+1)x\). It follows from the definitions of \(\omega(t)\), \(f(t,x)\), \(c(t)\) and \(h(t)\) that conditions (H1)-(H4) hold. Hence, it remains to verify conditions (H5) and (H6).

Let \(\theta_{i}=\frac{3}{7}-\frac{1}{4(i+2)^{4}}\). Then \(\theta_{i}\in (0,\frac{3}{7})\). For \(R_{i}=\frac{1}{100^{i}}\), \(\gamma_{i}=\frac {1}{10\times100^{i}}\) and \(r_{i}=\frac{1}{30\times100^{i}}\), \(i=1,2,\ldots \) , we have

$$\frac{1}{100^{i+1}}< \frac{\delta_{iD}}{30\times100^{i}}< \frac {1}{30\times100^{i}}< \frac{\delta_{iD}}{ 10\times100^{i}}< \frac{\frac{3}{2}}{10\times100^{i}}< \frac{1}{100^{i}}. $$
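This chain can be checked directly (an illustrative verification, not part of the original example), since \(\delta_{iD}=\frac{119(1+2\theta_{i})}{390}\approx0.565\in(\frac{1}{3},1)\) for the chosen \(\theta_{i}\):

```python
def delta_iD(i: int) -> float:
    theta = 3 / 7 - 1 / (4 * (i + 2) ** 4)
    return 119 * (1 + 2 * theta) / 390

for i in range(1, 6):
    R_next = 1 / 100 ** (i + 1)
    r, gamma, R = 1 / (30 * 100 ** i), 1 / (10 * 100 ** i), 1 / 100 ** i
    chain = [R_next, delta_iD(i) * r, r, delta_iD(i) * gamma, 1.5 * gamma, R]
    assert all(x < y for x, y in zip(chain, chain[1:])), f"chain fails at i = {i}"
print("chain verified for i = 1,...,5")
```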

By a direct calculation, we have

$$\begin{aligned} f(t,x) =&\frac{1}{27\|\omega\|_{1}\beta}(t+1)x \\ \leq&\frac{2}{27\|\omega\|_{1}\beta}\times\frac {3}{2}r_{i} \\ =&\frac{1}{9\|\omega\|_{1}\beta\times30\times100^{i}} \\ < &\frac{1}{9\|\omega\|_{1}\beta\times9^{i}}=l,\quad \forall t\in J, x\in\biggl[0,\frac{3}{2}r_{i} \biggr]. \end{aligned}$$

Similarly, for any \(t\in J\), \(x\in[0,\frac{3}{2}R_{i}]\), we have \(f(t,x)\leq\frac{1}{9\|\omega\|_{1}\beta\times100^{i}}< l\), and

$$\begin{aligned} f(t,x) =&\frac{1}{27\|\omega\|_{1}\beta}(t+1)x \\ \geq&\frac{1+\theta_{i}}{27\|\omega\|_{1}\beta}\times \delta_{iD}\gamma_{i} \\ \geq&\frac{1}{27\|\omega\|_{1}\beta}\times\frac {119}{390}\frac{1}{10\times100^{i}} \\ >&\frac{1}{27\|\omega\|_{1}\beta}\times\frac {1}{4}\frac{1}{10\times100^{i}} \\ =&\frac{1}{1\text{,}080\|\omega\|_{1}\beta}\times\frac {1}{100^{i}}=\eta,\quad \forall t\in[ \theta_{i},1-\theta_{i}], x\in \biggl[\delta_{iD} \gamma_{i},\frac{3}{2}\gamma_{i}\biggr]. \end{aligned}$$

Hence, by Theorem 3.4, problem (4.1) has two infinite families of positive solutions \(x_{i\lambda}^{(1)}(t)\) and \(x_{i\lambda }^{(2)}(t)\) for \(0< \lambda<\tau= \inf\{\alpha_{i}\xi\eta(1-2\theta _{i})\gamma_{i}^{-1}\}\), \(i=1,2,\ldots \) .