1 Introduction

In this paper, we study the existence of positive solutions for a boundary value problem for a nonlinear fourth-order impulsive differential equation with multi-strip integral conditions given by

$$ \textstyle\begin{cases} u^{(4)}(t)=h(t)f(u(t),u'(t),u''(t)), \quad t\in J^{*} \\ \Delta u(t_{k})=I_{k}(u(t_{k})), \quad k=1,2,\ldots,p, \\ \Delta u'(t_{k})=J_{k}(u(t_{k}),u'(t_{k})),\quad k=1,2,\ldots,p, \\ u'(0)=0, \qquad u(1)+\lambda u'(1)=\sum_{i=1}^{n}\gamma_{i}\int _{\alpha_{i}}^{\beta_{i}}g_{1}(t)u(t)\,dt, \\ u'''(0)=0, \qquad u''(1)+\mu u'''(1)=\sum_{i=1}^{n}\rho _{i}\int_{\xi_{i}}^{\eta_{i}}g_{2}(t)u''(t)\,dt, \end{cases} $$
(1.1)

where \(0<\alpha_{i}<\beta_{i} <1\), \(0<\xi_{i}<\eta_{i}<1\), \(\gamma_{i} >0\), \(\rho _{i}>0\), \(i=1,2,\ldots,n\), \(\lambda>0\), \(\mu>0\). \(J=[0,1]\), \(J^{*}=J\backslash\{t_{1},t_{2},\ldots,t_{p}\}\), \(0=t_{0}< t_{1}< t_{2}<\cdots<t_{p}<t_{p+1}=1\), \(\Delta u(t_{k})=u(t_{k}^{+})-u(t_{k}^{-})\) denotes the jump of \(u(t)\) at \(t=t_{k}\), \(u(t_{k}^{+})\) and \(u(t_{k}^{-})\) represent the right and left limits of \(u(t)\) at \(t=t_{k}\), respectively, \(\Delta u'(t_{k})\) has a similar meaning for \(u'(t)\).

In fact, multi-strip conditions correspond to a linear combination of integrals of the unknown function over sub-intervals of J. Multi-strip conditions appear in the mathematical modeling of physical phenomena; see, for instance, [1, 2]. They have various applications in applied fields such as blood flow problems, semiconductor problems, hydrodynamic problems, thermoelasticity, underground water flow and so on. For a detailed description of multi-strip integral boundary conditions, we refer the reader to the papers [3–6].

Impulsive differential equations, which provide a natural description of observed evolution processes, are regarded as important mathematical tools for a better understanding of several real-world problems in the applied sciences. For an overview of existing results and of recent research areas of impulsive differential equations, see [7–10].

The existing literature indicates that research on fourth-order nonlocal integral boundary value problems is extensive, and a variety of methods have been developed. Generally, fixed point theorems in cones, the method of upper and lower solutions, the monotone iterative technique, critical point theory and variational methods play extremely important roles in establishing the existence of solutions to boundary value problems. It is well known that the classical shooting method can be used effectively to prove existence results for boundary value problems of differential equations. To some extent, this approach has an advantage over the traditional methods. Readers can see [11–16] and the references therein for details.

To the best of our knowledge, no paper so far has considered the existence of positive solutions for a fourth-order impulsive differential equation with a multi-strip integral boundary condition via the shooting method. Motivated by the work mentioned above, in this paper we employ the shooting method to establish criteria for the existence of positive solutions to BVP (1.1).

The rest of the paper is organized as follows. In Section 2, we provide some necessary lemmas. In particular, we transform the fourth-order impulsive problem (1.1) into a second-order integro-differential BVP (2.10), and by using the shooting method, we convert BVP (2.10) into a corresponding IVP (initial value problem). In Section 3, the main theorem is stated and proved. Finally, an example is discussed to illustrate the main work.

Throughout the paper, we always assume that:

(H1):

\(f\in C(\mathbb{R^{+}}\times\mathbb{R^{-}}\times\mathbb{R^{-}} , \mathbb{R}^{+})\), \(I_{k}\in C(\mathbb{R^{+}},\mathbb{R^{-}})\), \(J_{k}\in C(\mathbb{R^{+}}\times\mathbb{R},\mathbb{R^{-}})\) for \(1\leqslant k\leqslant p \), where \(\mathbb{R}^{+}=[0,\infty)\), \(\mathbb{R}^{-}=(-\infty ,0]\);

(H2):

h, \(g_{1}\) and \(g_{2}\in C(J,\mathbb{R}^{+}) \);

(H3):

\(\lambda,\mu,\gamma_{i} ,\rho_{i} >0\) for \(i=1,2,\ldots,n\) and \(0<\Gamma=\sum_{i=1}^{n} \gamma_{i}\int_{\alpha_{i}}^{\beta _{i}}g_{1}(t)\,dt<1\).

2 Preliminaries

For \(v(t)\in C(J)\), we consider the equation

$$ u^{(4)}(t)=v(t), \quad t\in J^{*}, $$
(2.1)

subject to the boundary conditions of (1.1).

We need to reduce BVP (1.1) to two second-order BVPs. To this end, first of all, by means of the transformation

$$ u''(t)=-y(t), \quad t\in J^{*}, $$
(2.2)

we convert problem (2.1) into the BVP

$$ \textstyle\begin{cases} u''(t)+y(t)=0, \quad t\in J^{*}, \\ \Delta u(t_{k})=I_{k}(u(t_{k})), \quad k=1,2,\ldots,p, \\ \Delta u'(t_{k})=J_{k}(u(t_{k}),u'(t_{k})), \quad k=1,2,\ldots,p, \\ u'(0)=0,\qquad u(1)+\lambda u'(1)=\sum_{i=1}^{n}\gamma_{i}\int _{\alpha_{i}}^{\beta_{i}}g_{1}(t)u(t)\,dt \end{cases} $$
(2.3)

and the BVP

$$ \textstyle\begin{cases} y''(t)+v(t)=0, \quad t\in J^{*}, \\ y'(0)=0, \qquad y(1)+\mu y'(1)=\sum_{i=1}^{n}\rho_{i}\int_{\xi _{i}}^{\eta_{i}}g_{2}(t)y(t)\,dt. \end{cases} $$
(2.4)

Lemma 2.1

Assume that conditions (H1)-(H3) hold. Then, for any \(y(t)\in C(J)\), BVP (2.3) has a unique solution as follows:

$$ u(t)= \int_{0}^{1} H(t,s)y(s)\,ds-\sum _{k=1}^{p}H(t,t_{k})J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr)-\Phi \bigl(t,I_{k}\bigl(u(t_{k})\bigr)\bigr), $$
(2.5)

where

$$\begin{aligned}& H(t,s)=G(t,s)+\frac{1}{1-\Gamma}\sum_{i=1}^{n} \gamma_{i} \int _{\alpha_{i}}^{\beta_{i}}G(\tau,s)g_{1}(\tau)\,d\tau, \\& G(t,s)= \textstyle\begin{cases} 1+\lambda-t, & 0\leqslant s\leqslant t\leqslant1, \\ 1+\lambda-s, & 0\leqslant t\leqslant s\leqslant1, \end{cases}\displaystyle \end{aligned}$$

and

$$\Phi\bigl(t,I_{k}\bigl(u(t_{k})\bigr)\bigr)=\sum _{t_{k} \geqslant t}I_{k}\bigl(u(t_{k})\bigr)+ \frac {1}{1-\Gamma }\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta _{i}}g_{1}(\tau)\sum _{t_{k}\geqslant \tau }I_{k}\bigl(u(t_{k})\bigr)\,d\tau. $$

Proof

Integrating the equation in (2.3) from 0 to t and taking the impulses into account, we get

$$ \begin{aligned}[b] u'(t)&=u'(0)- \int_{0}^{t} y(s)\,ds+\sum _{t_{k}< t} J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr) \\ &=- \int_{0}^{t} y(s)\,ds+\sum _{t_{k}< t} J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr). \end{aligned} $$
(2.6)

In the same way, we get

$$ u(t)=u(0)- \int_{0}^{t}(t-s) y(s)\,ds+\sum _{t_{k}< t}I_{k}\bigl(u(t_{k})\bigr)+\sum _{t_{k}< t} J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr) (t-t_{k}). $$
(2.7)

Setting \(t=1\) in (2.6) and (2.7) yields

$$\textstyle\begin{cases} u'(1)=-\int_{0}^{1} y(s)\,ds+\sum_{k=1}^{p} J_{k}(u(t_{k}),u'(t_{k})), \\ u(1)=u(0)-\int_{0}^{1} (1-s)y(s)\,ds+\sum_{k=1}^{p} I_{k}(u(t_{k}))+\sum_{k=1}^{p} J_{k}(u(t_{k}),u'(t_{k}))(1-t_{k}). \end{cases} $$

Taking into account the integral boundary condition of (2.3), we have

$$\begin{aligned} u(0) =& \int_{0}^{1}\lambda y(s)\,ds+ \int_{0}^{1}(1-s)y(s)\,ds+\sum _{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)u(t)\,dt \\ &{}-\sum_{k=1}^{p} \lambda J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr)-\sum_{k=1}^{p} J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr) (1-t_{k})-\sum_{k=1}^{p} I_{k}\bigl(u(t_{k})\bigr). \end{aligned}$$

Combining with (2.7), we have

$$\begin{aligned} u(t) =& \int_{0}^{1}\lambda y(s)\,ds+ \int_{0}^{1}(1-s)y(s)\,ds- \int _{0}^{t}(t-s)y(s)\,ds+\sum _{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)u(t)\,dt \\ &{}-\sum_{k=1}^{p} \lambda J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr)-\sum_{k=1}^{p} J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr) (1-t_{k})-\sum_{k=1}^{p} I_{k}\bigl(u(t_{k})\bigr) \\ &{}+\sum_{t_{k}< t}I_{k}\bigl(u(t_{k}) \bigr)+\sum_{t_{k}< t} J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr) (t-t_{k}) \\ =& \int_{0}^{1} G(t,s)y(s)\,ds+\sum _{i=1}^{n} \gamma_{i} \int_{\alpha _{i}}^{\beta_{i}} g_{1}(t)u(t)\,dt \\ &{}-\sum_{k=1}^{p}G(t,t_{k})J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr)-\sum _{t_{k} \geqslant t}I_{k}\bigl(u(t_{k})\bigr), \end{aligned}$$
(2.8)

where \(G(t,s)\) is defined in Lemma 2.1.

Further, it holds that

$$\begin{aligned}& \sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)u(t)\,dt \\& \quad =\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t) \int_{0}^{1} G(t,s)y(s)\,ds\,dt+\Gamma \Biggl(\sum _{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)u(t)\,dt \Biggr) \\& \qquad {} -\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)\sum _{k=1}^{p}G(t,t_{k})J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr)\,dt \\& \qquad {} -\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)\sum _{t_{k}\geqslant t}I_{k}\bigl(u(t_{k})\bigr)\,dt, \end{aligned}$$

where Γ is given by (H3). Therefore,

$$\begin{aligned} \sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)u(t)\,dt =& \frac{1}{1-\Gamma}\Biggl[\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t) \int_{0}^{1} G(t,s)y(s)\,ds\,dt \\ &{}-\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)\sum _{k=1}^{p}G(t,t_{k})J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr)\,dt \\ &{}-\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}} g_{1}(t)\sum _{t_{k}\geqslant t}I_{k}\bigl(u(t_{k})\bigr)\,dt \Biggr]. \end{aligned}$$
(2.9)

Substituting (2.9) into (2.8), we get

$$\begin{aligned} u(t) =& \int_{0}^{1} \Biggl[G(t,s)+\frac{1}{1-\Gamma}\sum _{i=1}^{n}\gamma _{i} \int_{\alpha_{i}}^{\beta_{i}}G(\tau,s)g_{1}(\tau)\,d\tau \Biggr]y(s)\,ds \\ &{}-\sum_{k=1}^{p} \Biggl[G(t,t_{k})+ \frac{1}{1-\Gamma}\sum_{i=1}^{n} \gamma_{i} \int_{\alpha_{i}}^{\beta_{i}}G(\tau,t_{k})g_{1}(\tau) \,d\tau \Biggr]J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr) \\ &{}-\sum_{t_{k}\geqslant t}I_{k}\bigl(u(t_{k}) \bigr)-\frac{1}{1-\Gamma}\sum_{i=1}^{n} \gamma _{i} \int_{\alpha_{i}}^{\beta_{i}}g_{1}(\tau)\sum _{t_{k}\geqslant \tau}I_{k}\bigl(u(t_{k})\bigr)\,d\tau \\ =& \int_{0}^{1} H(t,s)y(s)\,ds-\sum _{k=1}^{p}H(t,t_{k})J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr)-\Phi \bigl(t,I_{k}\bigl(u(t_{k})\bigr)\bigr), \end{aligned}$$

where \(H(t,s)\) and \(\Phi(t,I_{k}(u(t_{k})))\) are given previously. □

Lemma 2.2

Assume that conditions (H2)-(H3) hold. \(G(t,s)\) and \(H(t,s)\) are given as in the statement of Lemma 2.1. Then \(G(t,s)\geqslant0\), \(H(t,s)\geqslant0\) for any \(t,s\in[0,1]\).

We consider the operator A defined by

$$(Ay) (t)= \int_{0}^{1} H(t,s)y(s)\,ds-\sum _{k=1}^{p}H(t,t_{k})J_{k} \bigl(u(t_{k}),u'(t_{k})\bigr)-\Phi \bigl(t,I_{k}\bigl(u(t_{k})\bigr)\bigr), $$

and the operator B defined by

$$(By) (t)=(Ay)'(t)=- \int_{0}^{t} y(s)\,ds+\sum _{t_{k}< t} J_{k}\bigl(u(t_{k}),u'(t_{k}) \bigr). $$

Then BVP (1.1) is equivalent to the following second-order integro-differential BVP:

$$ \textstyle\begin{cases} y''(t)+h(t)f ((Ay)(t),(By)(t),-y(t) )=0, \quad t\in(0,1), \\ y'(0)=0, \qquad y(1)+\mu y'(1)=\sum_{i=1}^{n}\rho_{i}\int_{\xi _{i}}^{\eta_{i}}g_{2}(t)y(t)\,dt. \end{cases} $$
(2.10)

Lemma 2.3

If y is a positive solution of (2.10), then \(u=Ay\) is a positive solution of (1.1).

Proof

Assume that y is a positive solution of (2.10); then \(y(t)>0\) for \(t\in (0,1)\), and it follows from \(u(t)=(Ay)(t)\) that \(u(t)\) satisfies (2.5). By (H1) and Lemma 2.2, we obtain

$$u(t)>0, \quad t\in(0,1). $$

This ends the proof. □

The principle of the shooting method is to convert the BVP into an IVP by finding a suitable initial value m such that the equation in (2.10) is supplemented with the following initial conditions:

$$ \textstyle\begin{cases} y''(t)+h(t)f ((Ay)(t),(By)(t),-y(t) )=0, \quad t\in(0,1), \\ y'(0)=0, \qquad y(0)=m. \end{cases} $$
(2.11)

Under assumptions (H1)-(H3), denote by \(y(t,m)\) the solution of IVP (2.11). We assume that f is sufficiently smooth to guarantee that \(y(t,m)\) is uniquely defined and that \(y(t,m)\) depends continuously on both t and m. Studies of this kind of problem can be found in [17]. Consequently, the solution of IVP (2.11) exists.

Denote

$$k(m)=- \frac{\mu y'(1,m)}{ y(1,m)}+\frac{\sum_{i=1}^{n} \rho _{i}\int_{\xi_{i}}^{\eta_{i}}g_{2}(t)y(t,m)\,dt}{y(1,m)} \quad \mbox{for } m>0. $$

Then solving (2.10) is equivalent to finding \(m^{*}\) such that \(k(m^{*})=1\).
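Numerically, this principle can be sketched as follows: integrate the IVP for a trial height m, evaluate \(k(m)\), and bisect on m until \(k(m)=1\). The sketch below is a toy model only: the nonlinearity is taken as \(f=y^{3}\) with \(h\equiv1\) (so the nonlocal operators A and B are suppressed), with a single strip \([\frac{1}{4},\frac{3}{4}]\), \(\rho_{1}=1\), \(g_{2}\equiv1\) and \(\mu=\frac{1}{5}\); none of these data come from the problems treated in this paper.

```python
# Shooting sketch for a toy model y'' + y^3 = 0, y(0) = m, y'(0) = 0.
MU = 0.2                 # mu in the boundary condition
RHO = 1.0                # single strip weight rho_1
XI, ETA = 0.25, 0.75     # strip endpoints xi_1, eta_1 (g2 = 1 on the strip)
N = 2000                 # RK4 steps on [0, 1]

def f(y):
    return y ** 3        # toy stand-in for f((Ay)(t), (By)(t), -y(t))

def shoot(m):
    """Integrate y'' + f(y) = 0, y(0) = m, y'(0) = 0 by classical RK4;
    return the grid values of y together with (y(1), y'(1))."""
    dt = 1.0 / N
    y, v = m, 0.0
    ys = [y]
    for _ in range(N):
        k1y, k1v = v, -f(y)
        k2y, k2v = v + dt / 2 * k1v, -f(y + dt / 2 * k1y)
        k3y, k3v = v + dt / 2 * k2v, -f(y + dt / 2 * k2y)
        k4y, k4v = v + dt * k3v, -f(y + dt * k3y)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        ys.append(y)
    return ys, y, v

def k_of(m):
    """k(m) = -mu y'(1,m)/y(1,m) + rho int_strip y(t,m) dt / y(1,m)."""
    ys, y1, v1 = shoot(m)
    dt = 1.0 / N
    i0, i1 = int(XI * N), int(ETA * N)   # strip endpoints fall on grid nodes
    integral = dt * (sum(ys[i0:i1 + 1]) - 0.5 * (ys[i0] + ys[i1]))  # trapezoid
    return -MU * v1 / y1 + RHO * integral / y1

def find_m(m_lo, m_hi, tol=1e-10):
    """Bisect on k(m) - 1, given a bracket k(m_lo) < 1 < k(m_hi) (Lemma 2.4)."""
    while m_hi - m_lo > tol:
        mid = 0.5 * (m_lo + m_hi)
        if k_of(mid) < 1.0:
            m_lo = mid
        else:
            m_hi = mid
    return 0.5 * (m_lo + m_hi)
```

For this toy model \(k(m)\to\int_{1/4}^{3/4}1\,dt=\frac{1}{2}<1\) as \(m\to0^{+}\), while \(k(m)\) becomes large as m approaches the height at which \(y(1,m)\) vanishes, so a bracket in the sense of Lemma 2.4 exists and bisection locates \(m^{*}\).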

Lemma 2.4

([18])

If there exist two initial values \(m_{1}>0\) and \(m_{2}>0\) such that

  1. (i)

    the solution \(y(t,m_{1})\) of (2.11) remains positive in \((0,1)\) and \(k(m_{1})\leqslant1\);

  2. (ii)

    the solution \(y(t,m_{2})\) of (2.11) remains positive in \((0,1)\) and \(k(m_{2})\geqslant1\).

Then BVP (2.10) has a positive solution with \(y(0)=m^{*}\), where \(m^{*}\) lies between \(m_{1}\) and \(m_{2}\).

Now we introduce the Sturm comparison theorem derived from [19].

Lemma 2.5

Let \(y(t,m)\), \(z(t,m)\), \(Z(t,m)\) be the solutions of the following IVPs, respectively:

$$\begin{aligned}& y''(t)+F(t)y(t)=0, \qquad y(0)=m, \qquad y'(0)=0, \\& z''(t)+g(t)z(t)=0, \qquad z(0)=m, \qquad z'(0)=0, \\& Z''(t)+G(t)Z(t)=0, \qquad Z(0)=m, \qquad Z'(0)=0, \end{aligned}$$

and suppose that \(F(t)\), \(g(t)\) and \(G(t)\) are continuous functions defined on \([0,1]\) satisfying

$$g(t)\leqslant F(t)\leqslant G(t), \quad t\in[0,1]. $$

If \(Z(t,m)\) does not vanish in \([0,1]\), then for any \(0 \leqslant t\leqslant1\) we have

$$ \frac{z(t ,m)}{z(1,m)}\leqslant\frac{y(t ,m)}{y(1,m)}\leqslant \frac{Z(t ,m)}{Z(1,m)}, $$
(2.12)

and, further,

$$ \begin{aligned}[b] \frac{\sum_{i=1}^{n} \rho_{i} \int_{\xi_{i}}^{\eta _{i}}g_{2}(t)z(t,m)\,dt}{z(1,m)}&\leqslant\frac{\sum_{i=1}^{n} \rho_{i} \int_{\xi_{i}}^{\eta_{i}}g_{2}(t)y(t,m)\,dt}{y(1,m)} \\ &\leqslant \frac {\sum_{i=1}^{n} \rho_{i} \int_{\xi_{i}}^{\eta_{i}}g_{2}(t)Z(t,m)\,dt}{Z(1,m)}. \end{aligned} $$
(2.13)

Proof

The classical Sturm comparison theorem gives us the inequalities

$$\frac{z'(t)}{z(t)}\geqslant\frac{y'(t)}{y(t)}\geqslant\frac {Z'(t)}{Z(t)} \quad \mbox{for all } t\in[0,1]. $$

Integrating these inequalities over \([t,1]\) and exponentiating yields (2.12). Multiplying (2.12) by \(\rho_{i} g_{2}(t)\geqslant0\), integrating over each strip \([\xi_{i},\eta_{i}]\) and summing, which is legitimate by (H2) and (H3), we obtain (2.13). □

Lemma 2.6

Under assumptions (H1)-(H3), the shooting solution \(y(t,m)\) of IVP (2.11) has the following properties:

$$y(t_{2},m)\leqslant y(t_{1},m)\leqslant m, \qquad y'(t_{2},m)\leqslant y'(t_{1},m) \leqslant0 \quad \textit{for any } 0\leqslant t_{1}\leqslant t_{2} \leqslant1. $$

Proof

Integrating both sides of equation (2.11) from 0 to t, we have

$$\begin{aligned} &y'(t,m)=- \int_{0}^{t} h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds, \\ &y(t,m)=m- \int_{0}^{t} (t-s)h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds. \end{aligned}$$

Hence, for any \(0\leqslant t_{1}\leqslant t_{2}\leqslant1\),

$$\begin{aligned}& y'(t_{1},m)-y'(t_{2},m)= \int_{t_{1}}^{t_{2}}h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds\geqslant0, \\& y(t_{1},m)-y(t_{2},m) = \int_{t_{1}}^{t_{2}} (t_{2}-s)h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds \\& \hphantom{y(t_{1},m)-y(t_{2},m) = {}}{}+ \int_{0}^{t_{1}} (t_{2}-t_{1})h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds\geqslant0. \end{aligned}$$

This ends the proof. □

Lemma 2.7

If \(\sum_{i=1}^{n}\rho_{i}\int_{\xi_{i}}^{\eta_{i}}g_{2}(t)\,dt>1\), then BVP (2.10) has no positive solution.

Proof

Assume that BVP (2.10) has a positive solution \(y(t)\); then \(y(t,m)\) with \(m=y(0)>0\) is a positive solution of IVP (2.11). The solution \(y(t,m)\) of IVP (2.11) will be compared with the solution \(z(t)=m\) of

$$z''(t)+0z(t)=0, \qquad z(0)=m, \qquad z'(0)=0. $$

By Lemmas 2.5 and 2.6, we have

$$\begin{aligned} k(m)&=- \frac{\mu y'(1,m)}{ y(1,m)}+\frac{\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta_{i}}g_{2}(t)y(t,m)\,dt}{y(1,m)} \\ &\geqslant\frac{\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta _{i}}g_{2}(t)y(t,m)\,dt}{y(1,m)}\geqslant \frac{\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta _{i}}g_{2}(t)z(t,m)\,dt}{z(1,m)} \\ &=\sum_{i=1}^{n} \rho_{i} \int_{\xi_{i}}^{\eta_{i}}g_{2}(t)\,dt. \end{aligned}$$

On the other hand, since \(y\) solves BVP (2.10), \(k(m)=1\), which forces \(\sum_{i=1}^{n} \rho_{i}\int_{\xi _{i}}^{\eta_{i}}g_{2}(t)\,dt\leqslant1\), a contradiction. □

In the rest of the paper, we always assume

(H4):

\(0<\Lambda=\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta _{i}}g_{2}(t)\,dt<1\).

3 Existence results

For the sake of convenience, we denote

$$\begin{aligned} &\max_{0\leqslant t\leqslant1}\bigl\{ h(t)\bigr\} =h^{L},\qquad \min _{0\leqslant t\leqslant1}\bigl\{ h(t)\bigr\} =h^{l}; \\ &\max_{0\leqslant t\leqslant1}\bigl\{ g_{2}(t)\bigr\} =g_{2}^{L}, \qquad \min_{0\leqslant t\leqslant1}\bigl\{ g_{2}(t)\bigr\} =g_{2}^{l}. \end{aligned}$$

Lemma 3.1

Let \(g_{2}^{L} \sum_{i=1}^{n} \rho_{i}(\eta_{i} -\xi_{i})\leqslant1\). Then there exists a real number \(x=A_{1}\in (0,\frac{\pi}{2} )\) satisfying the inequality

$$f_{1}(x)=\mu x\tan x+\frac{g_{2}^{L} \sum_{i=1}^{n} \rho_{i}[\sin (\eta_{i}x)-\sin(\xi_{i}x)]}{x\cos x}\leqslant1 $$

and another real number \(x=A_{2}\in (0,\frac{\pi}{2} )\) satisfying the inequality

$$f_{2}(x)=\mu x\tan x+\frac{g_{2}^{l} \sum_{i=1}^{n} \rho_{i}(\eta _{i}-\xi_{i})[\cos(\eta_{i}x)+\cos(\xi_{i}x)]}{2\cos(\eta _{n}x)}\geqslant1. $$

Proof

It is easy to show that

$$\begin{aligned} \lim_{x\to0+}f_{1}(x)&=0+\lim_{x\to0+} \frac{g_{2}^{L} \sum_{i=1}^{n} \rho_{i}[\sin(\eta_{i}x)-\sin(\xi_{i}x)]}{x\cos x} \\ &=\lim_{x\to0+} \frac{2g_{2}^{L} \sum_{i=1}^{n} \rho_{i}\sin \frac{(\eta_{i}-\xi_{i})x}{2}\cos\frac{(\eta_{i}+\xi_{i})x}{2}}{x\cos x} \\ &=g_{2}^{L} \sum_{i=1}^{n} \rho_{i}(\eta_{i} -\xi_{i})\leqslant1. \end{aligned}$$

Since the function \(f_{1}(x)\) is continuous on \((0,\frac{\pi}{2} )\), there exists a real number \(A_{1}\in (0,\frac{\pi}{2} )\) such that \(f_{1}(A_{1})\leqslant1\).

Analogously,

$$\lim_{x\to\frac{\pi}{2}-}f_{2}(x)=+\infty>1. $$

Thus, there exists a real number \(A_{2}\in (0,\frac{\pi}{2} )\) such that \(f_{2}(A_{2})\geqslant1\). □

Set

$$\begin{aligned} & f^{x}=\limsup_{w\to x} \max_{(u,v)\in\mathbb{R^{+}}\times \mathbb{R^{-}}} \frac{f(u,v,-w)}{w}, \\ &f_{x}=\liminf_{w\to x} \min_{(u,v)\in\mathbb{R^{+}}\times \mathbb{R^{-}}} \frac{f(u,v,-w)}{w}. \end{aligned}$$

Theorem 3.1

Assume that (H1)-(H4) hold. Suppose that one of the following conditions holds:

  1. (i)

    \(0\leqslant f^{0} < \frac{\underline{A}^{2}}{h^{L}}\), \(f_{\infty}>\frac{\bar{A}^{2}}{h^{l}}\);

  2. (ii)

    \(0\leqslant f^{\infty}< \frac{\underline{A}^{2}}{h^{L}}\), \(f_{0}>\frac{\bar{A}^{2}}{h^{l}}\).

Then problem (2.10) has at least one positive solution, where

$$\underline{A}=\min\{A_{1}, A_{2} \}, \qquad \bar{A}=\max\{ A_{1}, A_{2}\}, $$

\(A_{1}\) and \(A_{2}\) are defined by Lemma 3.1.

Proof

(i) Since \(0\leqslant f^{0} < \frac{\underline{A}^{2}}{h^{L}}\), there exists a positive number r such that

$$ \frac{f(Ay,By,-y)}{y}< \frac{\underline{A}^{2}}{h^{L}}\leqslant \frac{A_{1}^{2}}{h^{L}}, \quad \forall 0< y\leqslant r. $$
(3.1)

Let \(0< m_{1}< r\). Then Lemma 2.6 gives \(y(t,m_{1})\leqslant y(0,m_{1})= m_{1}< r\). By (H1), (H2) and (3.1), we have

$$ 0 \leqslant \frac{h(t)f ((Ay)(t,m_{1}),(By)(t,m_{1}),-y(t,m_{1}) )}{y(t,m_{1})}< h^{L} \frac{A_{1}^{2}}{h^{L}}=A_{1}^{2}, \quad \forall t\in [0,1]. $$
(3.2)

Let \(Z(t)=m_{1}\cos(A_{1}t)\) for \(t\in[0,1]\), then \(Z(t)\) satisfies the following IVP:

$$ Z''(t)+A_{1}^{2}Z(t)=0, \qquad Z(0)=m_{1},\qquad Z'(0)=0. $$
(3.3)

Applying the Sturm comparison theorem, from (3.2), Lemma 2.5 and Lemma 3.1, we have

$$ \begin{aligned}[b] k(m_{1})&=- \frac{\mu y'(1,m_{1})}{ y(1,m_{1})}+\frac{\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta_{i}}g_{2}(t)y(t,m_{1})\,dt}{y(1,m_{1})} \\ &\leqslant- \frac{\mu Z'(1,m_{1})}{ Z(1,m_{1})}+\frac{\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta_{i}}g_{2}(t)Z(t,m_{1})\,dt}{Z(1,m_{1})} \\ &\leqslant \mu A_{1}\tan A_{1} + \frac{g_{2}^{L}\sum_{i=1}^{n} \rho _{i}\int_{\xi_{i}}^{\eta_{i}}\cos(A_{1}t) \,dt}{\cos A_{1}} \\ &= \mu A_{1}\tan A_{1}+\frac{g_{2}^{L} \sum_{i=1}^{n} \rho_{i}[\sin (\eta_{i}A_{1})-\sin(\xi_{i}A_{1})]}{A_{1}\cos A_{1}}\leqslant1. \end{aligned} $$
(3.4)

On the other hand, the second part of condition (i) guarantees that there exists a number L large enough such that

$$ \frac{f(Ay,By,-y)}{y}>\frac{\bar{A}^{2}}{h^{l}}\geqslant \frac{A_{2}^{2}}{h^{l}}, \quad \forall y\geqslant L, $$
(3.5)

and there exists some fixed positive constant \(\varepsilon<\min \{A_{2},\frac{\pi}{2}-A_{2} \}\) small enough such that

$$ \frac{f(Ay,By,-y)}{y}> \frac{(A_{2}+\varepsilon )^{2}}{h^{l}}, \quad \forall y\geqslant L. $$
(3.6)

In what follows, we need to find a positive number \(m_{2}\) satisfying \(k(m_{2})\geqslant1\).

Claim

There exist an initial value \(m_{2}\) and a positive number σ such that

$$0< \max \biggl\{ \frac{A_{2}}{A_{2}+\varepsilon}, \eta_{n} \biggr\} \leqslant\sigma \leqslant1 \quad \textit{and} \quad y(t,m_{2})\geqslant L \quad \textit{for } t\in[0,\sigma]. $$

Since the function \(y(t,m)\) is concave and \(y'(0,m)=0\), the graph of \(y(t,m)\) intersects the line \(y=L\) at most once for \(t\in(0,1]\), where L is the constant defined in (3.5). The intersection point, provided it exists, is denoted by \(\bar{\delta} _{m}\). Furthermore, we set \(I_{m}=(0,\bar{\delta} _{m}]\subseteq (0,1]\). If \(y(1,m)\geqslant L\), then we set \(\bar{\delta} _{m}=1\).

Next, we divide the discussion into three steps.

Step 1. We claim that there exists \(m_{0}\) large enough such that \(0\leqslant y(t,m_{0})\leqslant L\) for \(t\in[\bar{\delta} _{m_{0}},1]\) and \(y(t,m_{0})\geqslant L\) for \(t\in I_{m_{0}}\).

Suppose, on the contrary, that \(y(t,m)\leqslant L\) for all \(t\in(0,1]\) no matter how large m is. Then, integrating both sides of the equation in (2.11) twice over \([0,t]\), we get

$$y(t,m)=m- \int_{0}^{t} (t-s)h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds. $$

Setting \(t=1\), we have

$$m=y(1,m)+ \int_{0}^{1} (1-s)h(s)f \bigl((Ay) (s,m),(By) (s,m),-y(s,m) \bigr)\,ds\leqslant L+L_{f}h^{L}, $$

where \(L_{f}=\max_{y\in[0,L]}f(Ay,By,-y)\). This is a contradiction for m sufficiently large, and hence the claim holds.

Step 2. There exists a monotone increasing sequence \(\{m_{k}\}\) such that the sequence \(\{\bar{\delta}_{m_{k}}\}\) is increasing in k. That is,

$$I_{m_{0}}\subset I_{m_{1}}\subset\cdots\subset I_{m_{k}} \subset \cdots\subseteq (0,1]\quad \text{and} \quad y(t,m_{k})\geqslant L \quad \text{for } t\in I_{m_{k}}. $$

We prove that

$$ \bar{\delta} _{m_{k-1}}< \bar{\delta} _{m_{k}}, \quad k=1,2,\ldots, \text{for } m_{k-1}< m_{k}. $$
(3.7)

Since f guarantees that \(y(t,m)\) is uniquely defined, the solutions \(y(t,m_{k-1})\) and \(y(t,m_{k})\) have no intersection in the interval \([\bar{\delta} _{m_{k-1}},1)\). It follows from

$$y(0,m_{k})=m_{k}>m_{k-1}=y(0,m_{k-1}) $$

that

$$y(\bar{\delta} _{m_{k-1}},m_{k})>y(\bar{\delta} _{m_{k-1}},m_{k-1}). $$

Therefore, (3.7) is obtained.

Step 3. We find a suitable \(m_{2}^{*}\) and a positive number σ such that \(0<\max \{\frac{A_{2}}{A_{2}+\varepsilon}, \eta_{n} \}\leqslant\sigma\leqslant1\) and \(y(t,m_{2}^{*})\geqslant L\) for \(t\in(0,\sigma]\).

From Step 1, Step 2 and the extension principle of solutions, there exists a positive integer N large enough such that

$$\bar{\delta} _{m_{N}}\geqslant\max \biggl\{ \frac {A_{2}}{A_{2}+\varepsilon}, \eta_{n} \biggr\} . $$

If we take \(m_{2}^{*}=m_{N}\), \(\sigma=\bar{\delta} _{m_{N}}\), then

$$ \sigma(A_{2}+\varepsilon)\geqslant A_{2}. $$
(3.8)

Next, we show that \(k(m_{2}^{*})\geqslant1\) for the chosen \(m_{2}^{*}\) and σ.

Take \(z(t)=m_{2}^{*}\cos( (A_{2}+\varepsilon)t)\), then \(z(t)\) satisfies the following IVP:

$$z''(t)+(A_{2}+\varepsilon )^{2}z(t)=0, \qquad z(0)=m_{2}^{*}, \qquad z'(0)=0, \quad t\in [0,\sigma], $$

where \(\sigma\leqslant1\). Noting (3.6), we have

$$\frac{h(t)f ((Ay)(t,m_{2}^{*}),(By)(t,m_{2}^{*}),-y(t,m_{2}^{*}) )}{y(t,m_{2}^{*})}>h^{l}\frac{(A_{2}+\varepsilon )^{2}}{h^{l}}= (A_{2}+ \varepsilon)^{2}, \quad t\in(0,\sigma]. $$

Taking into account Lemmas 2.5 and 2.6 and the concavity of \(y(t,m_{2}^{*})\), we have

$$\begin{aligned} k\bigl(m_{2}^{*}\bigr) =&- \frac{\mu y'(1,m_{2}^{*})}{ y(1,m_{2}^{*})}+\frac{\sum_{i=1}^{n} \rho_{i}\int_{\xi_{i}}^{\eta _{i}}g_{2}(t)y(t,m_{2}^{*})\,dt}{y(1,m_{2}^{*})} \\ \geqslant&- \frac{\mu y'(\sigma,m_{2}^{*})}{ y(\sigma,m_{2}^{*})}+\frac {g_{2}^{l}\sum_{i=1}^{n} \rho_{i}(\eta_{i}-\xi_{i})[y(\xi _{i},m_{2}^{*})+y(\eta_{i},m_{2}^{*})]}{2y(\sigma,m_{2}^{*})} \\ \geqslant& - \frac{\mu z'(\sigma,m_{2}^{*})}{ z(\sigma,m_{2}^{*})}+\frac {g_{2}^{l}\sum_{i=1}^{n} \rho_{i}(\eta_{i}-\xi_{i})[z(\xi _{i},m_{2}^{*})+z(\eta_{i},m_{2}^{*})]}{2z(\sigma,m_{2}^{*})} \\ =& \mu(A_{2}+\varepsilon)\tan\sigma( A_{2}+\varepsilon)+ \frac{g_{2}^{l}\sum_{i=1}^{n} \rho_{i}(\eta _{i}-\xi _{i})[\cos(A_{2}+\varepsilon)\xi_{i}+\cos(A_{2}+\varepsilon) \eta _{i}]}{2\cos(A_{2}+\varepsilon)\sigma}. \end{aligned}$$

For \(t\in (0,\frac{\pi}{2} )\), define

$$C_{i}(t)=\frac{\cos\xi_{i} t}{\cos\sigma t},\quad i=1,2,\ldots,n, $$

where \(0<\xi_{i} \leqslant\sigma\leqslant1\) for \(i=1,2,\ldots,n\). A direct calculation gives

$$C_{i}'(t)=\frac{\sigma\sin\sigma t \cos\xi_{i} t-\xi_{i} \sin\xi_{i} t\cos\sigma t}{(\cos\sigma t)^{2}}. $$

Since \(\sin\sigma t \geqslant\sin\xi_{i} t\) and \(\cos \xi_{i} t\geqslant \cos\sigma t\), we have \(C_{i}'(t)\geqslant0\) for \(i=1,2,\ldots,n\), which implies that \(C_{i}(t)\) is nondecreasing on \( (0,\frac{\pi}{2} )\). Similarly, we can show that \(\frac{\cos\eta_{i} t}{\cos\sigma t}\) is nondecreasing on \( (0,\frac{\pi}{2} )\), \(i=1,2,\ldots,n\).

The above discussion, together with (3.8) and Lemma 3.1, guarantees that

$$ \begin{aligned}[b] k\bigl(m_{2}^{*}\bigr) &\geqslant \mu A_{2}\tan A_{2} +\frac{g_{2}^{l} \sum_{i=1}^{n} \rho _{i}(\eta_{i}-\xi_{i})[\cos(A_{2}\xi_{i})+\cos(A_{2}\eta_{i})]}{2\cos (A_{2}\sigma)} \\ &\geqslant\mu A_{2}\tan A_{2} +\frac{g_{2}^{l} \sum_{i=1}^{n} \rho _{i}(\eta_{i}-\xi_{i})[\cos(A_{2}\xi_{i})+\cos(A_{2}\eta_{i})]}{2\cos (A_{2}\eta_{n} )} \geqslant1. \end{aligned} $$
(3.9)

On account of Lemma 2.4, an initial value \(m^{*}\) can be found such that \(y(t,m^{*})\) is a positive solution of (2.10). Consequently, \(u(t,m^{*})=(Ay)(t,m^{*})\) is a positive solution of BVP (1.1).

We omit the derivation for (ii) since it is similar to the above proof.  □

4 Example

Consider the following boundary value problem:

$$ \textstyle\begin{cases} u^{(4)}(t)=(t+1)f(u(t),u'(t),u''(t)), \quad t\in J^{*}, \\ \Delta u(t_{k})=I_{k}(u(t_{k})), \quad k=1,2, \\ \Delta u'(t_{k})=J_{k}(u(t_{k}),u'(t_{k})), \quad k=1,2, \\ u'(0)=0, \qquad u(1)+\frac{1}{5} u'(1)=\sum_{i=1}^{2}\gamma _{i}\int_{\alpha_{i}}^{\beta_{i}}(2t+1)u(t)\,dt, \\ u'''(0)=0, \qquad u''(1)+\frac{1}{5} u'''(1)=\sum_{i=1}^{2}\rho _{i}\int_{\xi_{i}}^{\eta_{i}}(2t+1)u''(t)\,dt, \end{cases} $$
(4.1)

with \(\gamma_{1}=\rho_{1}=\frac{1}{2}\), \(\gamma_{2}=\rho _{2}=\frac{3}{5} \) and

$$\begin{aligned}& t_{k}=\frac{k}{3}, \qquad I_{k}(x)=-2x, \qquad J_{k}(y,z)=-y-z^{2}, \quad k=1,2, \\& [\alpha_{1}, \beta_{1}]=[\xi_{1}, \eta_{1}]= \biggl[\frac{1}{8} , \frac{3}{8} \biggr],\qquad [\alpha_{2}, \beta_{2}]=[\xi_{2}, \eta _{2}]= \biggl[\frac{5}{8}, \frac{7}{8} \biggr], \\& f(u,v,w)=\sin(uv)-\frac{1}{10}w+2\quad \mbox{for } (u,v,w)\in \mathbb{R^{+}}\times\mathbb{R^{-}}\times\mathbb{R^{-}}. \end{aligned}$$

It is easy to check that (H1) and (H2) are satisfied. Since \(\int_{a}^{b}(2t+1)\,dt=(b^{2}+b)-(a^{2}+a)\), a direct computation gives

$$\Gamma=\Lambda=\frac{1}{2}\cdot\frac{3}{8}+\frac{3}{5}\cdot\frac{5}{8}=\frac{9}{16}< 1, $$

which implies that (H3) and (H4) are satisfied.

Take \(A_{1}=\frac{1}{2}\) and \(A_{2}=\frac{3}{2}\); then \(f_{1}(A_{1})\approx0.9552<1\) and \(f_{2}(A_{2})>\mu A_{2}\tan A_{2}\approx4.2304>1\).
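These numerical values can be checked directly. The short script below is our own sanity check, with \(\mu=\frac{1}{5}\), \(g_{2}^{L}=\max_{[0,1]}(2t+1)=3\), and the strips and weights of problem (4.1); it evaluates \(f_{1}\) at \(A_{1}=\frac{1}{2}\) and the dominant term \(\mu A_{2}\tan A_{2}\) of \(f_{2}\) at \(A_{2}=\frac{3}{2}\).

```python
import math

# Sanity check of the constants in the example of Section 4.
MU = 0.2
G2_SUP = 3.0                                          # max of 2t+1 on [0,1]
STRIPS = [(0.5, 0.125, 0.375), (0.6, 0.625, 0.875)]   # (rho_i, xi_i, eta_i)

def f1(x):
    # f1(x) = mu x tan x + g2^L sum_i rho_i [sin(eta_i x) - sin(xi_i x)] / (x cos x)
    s = sum(r * (math.sin(eta * x) - math.sin(xi * x)) for r, xi, eta in STRIPS)
    return MU * x * math.tan(x) + G2_SUP * s / (x * math.cos(x))

A1, A2 = 0.5, 1.5
print(round(f1(A1), 4))                  # approximately 0.9552 < 1
print(round(MU * A2 * math.tan(A2), 4))  # approximately 4.2304 > 1
```

Since the strip term of \(f_{2}\) is positive, the \(\mu A_{2}\tan A_{2}\) term alone already shows \(f_{2}(A_{2})>1\).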

Furthermore,

$$f^{\infty}=\frac{1}{10}< \frac{1}{8}=\frac{\underline{A}^{2}}{h^{L}},\qquad f_{0}=\infty>\frac{9}{4}=\frac{\bar{A}^{2}}{h^{l}}. $$

Then condition (ii) of Theorem 3.1 is satisfied. Consequently, Theorem 3.1 guarantees that problem (4.1) has at least one positive solution \(u(t)\).