1 Introduction and preliminaries

Fourth order differential equations can be used to model the steady-state deflections of elastic beams; consider, for example,

$$\begin{aligned} u^{(4)}(t)=f(t,u(t)),\qquad 0<t<1, \end{aligned}$$
(1)

under the boundary conditions

$$\begin{aligned} u(0)=u'(0)=u''(1)=u'''(1)=0. \end{aligned}$$
(2)

The boundary value problem (1)–(2) describes a bar of length 1 which is clamped at the left end and free at the right end, where the bending moment and the shearing force vanish (see, for example, [1,2,3]).

The main tools used in the proofs of the results of [1,2,3] are measure chains [2], fixed point index theory in cones [2] and the monotone iterative technique [3].

In this paper, we investigate the existence and uniqueness of positive solutions to the following cantilever-type boundary value problem

$$\begin{aligned} \displaystyle \left\{ \begin{array}{ll} u^{(4)}(t)=f(t,u(t),u(\alpha t))+g(t,u(t)),&{}\quad 0<t<1,\quad \alpha \in (0,1),\\ u(0)=u'(0)=u''(1)=u'''(1)=0, \end{array} \right. \end{aligned}$$
(3)

where \(f:[0,1]\times [0,\infty )\times [0,\infty )\rightarrow [0,\infty )\) and \(g:[0,1]\times [0,\infty )\rightarrow [0,\infty )\) are continuous functions.

The main tool used in the proof of the results of the paper is the mixed monotone operator method.

The technique of mixed monotone operators was introduced by Guo and Lakshmikantham in [4] in order to obtain coupled fixed point results and their applications to the existence theory for nonlinear operator equations. Since then, a great number of papers using this technique have appeared in the literature (see [4,5,6,7,8,9,10,11,12,13,14,15], among others).

Next, we present some basic facts and results about the mixed monotone operator method which will be the main tool used in the proof of the results of the paper.

Suppose that \((E,\Vert \cdot \Vert )\) is a real Banach space.

A cone in E is a nonempty closed convex set \(K\subset E\) satisfying the following two conditions:

(a):

\(x\in K\) and \(\lambda >0\Rightarrow \lambda x\in K\).

(b):

\(-x,x\in K\Rightarrow x=\theta _{E}\).

(Here \(\theta _{E}\) denotes the zero element of the Banach space E).

Let K be a cone in the Banach space \((E,\Vert \cdot \Vert )\). Then K induces a partial order in E defined, for any \(x,y\in E\), by

$$\begin{aligned} x\le y\iff y-x\in K. \end{aligned}$$

By \(x<y\) we mean that \(x\le y\) and \(x\ne y\).

If \(\mathring{K}\) denotes the interior of K and \(\mathring{K}\) is nonempty, then we say that the cone K is solid. When there exists a constant \(C>0\) such that, for any \(x,y\in E\) with \(\theta _{E}\le x\le y\), we have \(\Vert x\Vert \le C\Vert y\Vert \), we say that the cone K is normal. In this case, the smallest constant C satisfying the above-mentioned inequality is called the normality constant of K.

In this context, for any \(x,y\in E\), we write \(x\sim y\) when there exist constants \(\lambda ,\mu >0\) satisfying

$$\begin{aligned} \lambda y\le x\le \mu y. \end{aligned}$$

It is easily seen that \(\sim \) is an equivalence relation.

Finally, for \(h\in E\) with \(\theta _{E}<h\), by \(K_{h}\) we denote the set

$$\begin{aligned} K_{h}=\{x\in E: x\sim h\}. \end{aligned}$$

It is clear that \(K_{h}\subset K\).

Next, we need some definitions in order to present the mixed monotone operator method used in our study. This material appears in [10].

Definition 1

An operator \(T:E\rightarrow E\) is increasing (resp. decreasing) if, for any \(x,y\in E\) with \(x\le y\), then \(Tx\le Ty\) (resp. \(Tx\ge Ty\)).

Definition 2

An operator \(A:K\times K\rightarrow K\) is said to be mixed monotone when \(A(x,y)\) is increasing in x and decreasing in y, that is, for any \((x,y),(u,v)\in K\times K\),

$$\begin{aligned} x\le u\,\,\text {and}\,\,y\ge v\Rightarrow A(x,y)\le A(u,v). \end{aligned}$$

Definition 3

A mapping \(B:K\longrightarrow K\) is called subhomogeneous if, for any \(t\in (0,1)\) and \(x\in K\), the inequality

$$\begin{aligned} B(tx)\ge tB(x) \end{aligned}$$

holds.

Now, we are ready to present the mixed monotone operator method appearing in [10].

Theorem 1

Suppose that K is a normal cone in the Banach space \((E,\Vert \cdot \Vert )\), \(\gamma \in (0,1)\) and \(h\in E\) with \(\theta _{E}< h\).

Let \(A:K\times K\longrightarrow K\) be a mixed monotone operator satisfying

$$\begin{aligned} A(tx,t^{-1}y)\ge t^{\gamma }A(x,y), \end{aligned}$$

for any \(t\in (0,1)\) and \(x,y\in K\), and \(B:K\longrightarrow K\) an increasing subhomogeneous operator.

Under the following assumptions:

  1. (i)

    there exists \(h_{0}\in K_{h}\) such that \(A(h_{0},h_{0})\in K_{h}\) and \(Bh_{0}\in K_{h}\),

  2. (ii)

    there exists a constant \(\delta _{0}>0\) satisfying

    $$\begin{aligned} A(x,y)\ge \delta _{0}Bx, \end{aligned}$$

    for any \(x,y\in K\),

we have that

  1. (1)

    \(A:K_{h}\times K_{h}\longrightarrow K_{h}\) and \(B:K_{h}\longrightarrow K_{h}\),

  2. (2)

    there exists \(u_{0},v_{0}\in K_{h}\) and \(r\in (0,1)\) such that \(rv_{0}\le u_{0}\le v_{0}\) and

    $$\begin{aligned} u_{0}\le A(u_{0},v_{0})+Bu_{0}\le A(v_{0},u_{0})+Bv_{0}\le v_{0}, \end{aligned}$$
  3. (3)

    there exists a unique \(x^{*}\in K_{h}\) such that

    $$\begin{aligned} x^{*}=A(x^{*},x^{*})+Bx^{*}, \end{aligned}$$
  4. (4)

    for any initial values \(x_{0},y_{0}\in K_{h}\), the sequences defined by

    $$\begin{aligned} x_{n}= & {} A(x_{n-1},y_{n-1})+Bx_{n-1},\\ y_{n}= & {} A(y_{n-1},x_{n-1})+By_{n-1}, \end{aligned}$$

    for \(n=1,2,\ldots ,\) satisfy

    $$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty }\Vert x_{n}-x^{*}\Vert =\displaystyle \lim _{n\rightarrow \infty }\Vert y_{n}-x^{*}\Vert =0. \end{aligned}$$
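The iterative scheme in conclusion (4) can be illustrated numerically. The following is a minimal sketch of ours (not part of [10]) in the scalar case \(E={\mathbb {R}}\), \(K=[0,\infty )\), \(h=1\), with \(A(x,y)=2+\root 3 \of {x}+(y+1)^{-1/3}\), which satisfies \(A(tx,t^{-1}y)\ge t^{1/3}A(x,y)\), and the increasing subhomogeneous operator \(B(x)=1+\root 3 \of {x}\); here \(A(x,y)\ge B(x)\), so \(\delta _{0}=1\).

```python
# Scalar illustration of Theorem 1 (a sketch of ours, not part of the paper):
# E = R, K = [0, infinity), h = 1, so K_h = (0, infinity).
# A(x, y) = 2 + x^(1/3) + (y + 1)^(-1/3) is mixed monotone and satisfies
# A(t x, y / t) >= t^(1/3) A(x, y); B(x) = 1 + x^(1/3) is increasing and
# subhomogeneous, and A(x, y) >= B(x), so delta_0 = 1.

def A(x, y):
    return 2 + x ** (1 / 3) + (y + 1) ** (-1 / 3)

def B(x):
    return 1 + x ** (1 / 3)

# arbitrary initial values in K_h
x, y = 0.01, 100.0
for _ in range(60):
    x, y = A(x, y) + B(x), A(y, x) + B(y)

# both sequences converge to the unique fixed point x* of x = A(x, x) + B(x)
assert abs(x - y) < 1e-10
assert abs(x - (A(x, x) + B(x))) < 1e-10
print("x* ~", x)
```

In agreement with conclusions (3) and (4), both sequences approach the unique solution of \(x=3+2\root 3 \of {x}+(x+1)^{-1/3}\), regardless of the initial values chosen in \(K_{h}\).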

2 Main result

We start this section by presenting the space and the cone where the solutions to Problem (3) live.

By \(E={\mathcal {C}}[0,1]\) we denote the classical space of continuous functions \(x:[0,1]\longrightarrow {\mathbb {R}}\) equipped with the standard supremum norm \(\Vert x\Vert =\max \{|x(t)|:t\in [0,1]\}\).

In \({\mathcal {C}}[0,1]\), we consider the cone K defined by

$$\begin{aligned} K=\{x\in {\mathcal {C}}[0,1]:x(t)\ge 0\,\,\,\text {for}\,\,\,t\in [0,1]\}. \end{aligned}$$

It is well known that K is a normal cone with normality constant \(C=1\). In this case, the partial order in \({\mathcal {C}}[0,1]\) induced by K is given, for \(x,y\in {\mathcal {C}}[0,1]\), by

$$\begin{aligned} x\le y\iff x(t)\le y(t)\quad \text {for any}\quad t\in [0,1]. \end{aligned}$$

Before presenting our main result, we need some lemmas.

The following lemma appears in [2].

Lemma 1

Suppose that \(g\in {\mathcal {C}}[0,1]\). Then the following boundary value problem

$$\begin{aligned} \displaystyle \left\{ \begin{array}{ll} u^{(4)}(t)=g(t),&{}0<t<1,\\ u(0)=u'(0)=u''(1)=u'''(1)=0,&{} \end{array} \right. \end{aligned}$$

has a unique solution

$$\begin{aligned} u(t)=\displaystyle \int _{0}^{1}G(t,s)g(s)ds, \end{aligned}$$

where

$$\begin{aligned} G(t,s)=\displaystyle \left\{ \begin{array}{ll} \displaystyle \frac{1}{6}(3t^{2}s-t^{3}),&{}0\le t\le s\le 1,\\ &{} \\ \displaystyle \frac{1}{6}(3s^{2}t-s^{3}),&{}0\le s\le t\le 1. \end{array} \right. \end{aligned}$$

Remark 1

It is clear that \(G(t,s)\) is a continuous function on \([0,1]\times [0,1]\) and \(G(t,s)\ge 0\) for \(t,s\in [0,1]\).

The following lemma gives us upper and lower bounds for \(G(t,s)\).

Lemma 2

For any \(t,s\in [0,1]\), we have that

$$\begin{aligned} \displaystyle \frac{1}{3}t^{2}s^{2}\le G(t,s)\le \displaystyle \frac{1}{2}t^{2}. \end{aligned}$$
(4)

Proof

In order to prove the lower bound, we consider the following two cases (i) and (ii).

(i):

Suppose that \(0\le s\le t\le 1\). In this case, we have

$$\begin{aligned} G(t,s)= & {} \frac{1}{6}(3s^{2}t-s^{3})\\\ge & {} \frac{1}{6}(3s^{2}t-s^{2}t)=\frac{1}{6}2s^{2}t=\frac{1}{3}s^{2}t\ge \frac{1}{3}s^{2}t^{2}. \end{aligned}$$
(ii):

For \(0\le t\le s\le 1\) we infer that

$$\begin{aligned} G(t,s)= & {} \frac{1}{6}(3t^{2}s-t^{3})\\\ge & {} \frac{1}{6}(3t^{2}s-t^{2}s)=\frac{1}{6}2t^{2}s=\frac{1}{3}t^{2}s\ge \frac{1}{3}t^{2}s^{2}. \end{aligned}$$

This proves the left inequality in (4).

For the upper bound, following a similar argument, we first consider \(0\le s\le t\le 1\), in which case we have

$$\begin{aligned} G(t,s)= & {} \frac{1}{6}(3s^{2}t-s^{3})\\\le & {} \frac{1}{6}3s^{2}t=\frac{1}{2}s^{2}t\le \frac{1}{2}t^{3}\le \frac{1}{2}t^{2}. \end{aligned}$$

In the case \(0\le t\le s\le 1\), it follows that

$$\begin{aligned} G(t,s)= & {} \frac{1}{6}(3t^{2}s-t^{3})\\\le & {} \frac{1}{6}3t^{2}s=\frac{1}{2}t^{2}s\le \frac{1}{2}t^{2}, \end{aligned}$$

and this proves the right inequality in (4).

This completes the proof. \(\square \)
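As a quick numerical sanity check of ours (not part of the proof), the bounds (4) can be verified on a grid:

```python
# Numerical sanity check (ours) of the bounds in Lemma 2:
# (1/3) t^2 s^2 <= G(t,s) <= (1/2) t^2 for t, s in [0,1].

def G(t, s):
    """Green's function of Lemma 1 for u'''' = g with
    u(0) = u'(0) = u''(1) = u'''(1) = 0."""
    if t <= s:
        return (3 * t**2 * s - t**3) / 6
    return (3 * s**2 * t - s**3) / 6

n = 200
grid = [k / n for k in range(n + 1)]
violations = sum(
    1
    for t in grid
    for s in grid
    if not (t**2 * s**2 / 3 - 1e-12 <= G(t, s) <= t**2 / 2 + 1e-12)
)
assert violations == 0
print("bounds (4) hold at all", len(grid) ** 2, "grid points")
```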

In the sequel, we present the main result of the paper.

Theorem 2

Suppose that the following assumptions hold:

  1. (i)

    \(f:[0,1]\times [0,\infty )\times [0,\infty )\rightarrow [0,\infty )\) and \(g:[0,1]\times [0,\infty )\rightarrow [0,\infty )\) are continuous functions. Moreover, there exists \(t_{0}\in (0,1]\) satisfying \(g(t_{0},0)>0\).

  2. (ii)

    f(txy) is increasing in x and decreasing in y and g(tx) is increasing in x.

  3. (iii)

    \(g(t,\lambda x)\ge \lambda g(t,x)\) for any \(\lambda \in (0,1)\), \(t\in [0,1]\) and \(x\in [0,\infty )\).

  4. (iv)

    There exists a constant \(\beta \in (0,1)\) satisfying

    $$\begin{aligned} f(t,\lambda x,\lambda ^{-1}y)\ge \lambda ^{\beta }f(t,x,y), \end{aligned}$$

    for any \(\lambda \in (0,1)\), \(t\in [0,1]\) and \(x,y\in [0,\infty )\).

  5. (v)

    There exists a constant \(\delta _{0}>0\) such that

    $$\begin{aligned} f(t,x,y)\ge \delta _{0}g(t,x), \end{aligned}$$

    for any \(t\in [0,1]\) and \(x,y\in [0,\infty )\).

Then we have the following facts.

  1. (1)

    There exist \(u_{0},v_{0}\in K_{h}\) and \(r\in (0,1)\) such that

    $$\begin{aligned} r v_{0}\le u_{0}\le v_{0}, \end{aligned}$$

    and, moreover,

    $$\begin{aligned} u_{0}(t)\le \int _{0}^{1}G(t,s)f(s,u_{0}(s),v_{0}(\alpha s))ds+\int _{0}^{1}G(t,s)g(s,u_{0}(s))ds \end{aligned}$$

    and

    $$\begin{aligned} v_{0}(t)\ge \int _{0}^{1}G(t,s)f(s,v_{0}(s),u_{0}(\alpha s))ds+\int _{0}^{1}G(t,s)g(s,v_{0}(s))ds, \end{aligned}$$

    where \(h(t)=t^{2}\) for \(t\in [0,1]\).

  2. (2)

    Problem (3) has a unique positive solution \(x^{*}\in K_{h}\) (here by positive solution \(x^{*}\) we mean that \(x^{*}(t)>0\) for \(t\in (0,1)\)).

  3. (3)

    For any \(x_{0},y_{0}\in K_{h}\), the sequences inductively defined by

    $$\begin{aligned} x_{n}(t)=\int _{0}^{1}G(t,s)f(s,x_{n-1}(s),y_{n-1}(\alpha s))ds+\int _{0}^{1}G(t,s)g(s,x_{n-1}(s))ds \end{aligned}$$

    and

    $$\begin{aligned} y_{n}(t)=\int _{0}^{1}G(t,s)f(s,y_{n-1}(s),x_{n-1}(\alpha s))ds+\int _{0}^{1}G(t,s)g(s,y_{n-1}(s))ds \end{aligned}$$

    satisfy

    $$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty }\Vert x_{n}-x^{*}\Vert =\displaystyle \lim _{n\rightarrow \infty }\Vert y_{n}-x^{*}\Vert =0. \end{aligned}$$

Proof

Taking into account Lemma 1, the question of the existence of solutions to Problem (3) reduces to finding solutions to the following integral equation

$$\begin{aligned} x(t)=\int _{0}^{1}G(t,s)f(s,x(s),x(\alpha s))ds+\int _{0}^{1}G(t,s)g(s,x(s))ds, \end{aligned}$$
(5)

for \(t\in [0,1]\).

Next, we consider the two following operators

$$\begin{aligned} A(u,v)(t)=\int _{0}^{1}G(t,s)f(s,u(s),v(\alpha s))ds \end{aligned}$$

and

$$\begin{aligned} (Bu)(t)=\int _{0}^{1}G(t,s)g(s,u(s))ds, \end{aligned}$$

for any \(t\in [0,1]\) and \(u,v\in K\).

By using assumption (i) and Remark 1 (the continuity and nonnegativity of \(G(t,s)\)), it follows that \(A:K\times K\rightarrow K\) and \(B:K\rightarrow K\).

It is clear that x satisfies Eq. (5) if and only if \(x=A(x,x)+Bx\).

In the sequel, we check that the assumptions of Theorem 1 are satisfied.

Taking into account assumption (ii), we infer that A is a mixed monotone operator and B is increasing.

Moreover, by assumption (iv), we have that, for any \(\lambda \in (0,1)\) and \(x,y\in K\),

$$\begin{aligned} A(\lambda x,\lambda ^{-1}y)(t)= & {} \int _{0}^{1}G(t,s)f(s,\lambda x(s),\lambda ^{-1}y(\alpha s))ds\\\ge & {} \lambda ^{\beta }\int _{0}^{1}G(t,s)f(s,x(s),y(\alpha s))ds\\= & {} \lambda ^{\beta }A(x,y)(t), \end{aligned}$$

where \(\beta \in (0,1)\).

This proves that the operator A satisfies the condition appearing in Theorem 1 with \(\gamma =\beta \).

In order to prove that B is a subhomogeneous operator, we take \(\lambda \in (0,1)\) and \(x\in K\).

By using assumption (iii), we deduce that

$$\begin{aligned} B(\lambda x)(t)= & {} \int _{0}^{1}G(t,s)g(s,\lambda x(s))ds\\\ge & {} \lambda \int _{0}^{1}G(t,s)g(s,x(s))ds\\= & {} \lambda Bx(t), \end{aligned}$$

that is, B is a subhomogeneous operator.

Next, we take the function given by \(h(t)=t^{2}\) for \(t\in [0,1]\). Note that \(0\le h(t)\le 1\) for \(t\in [0,1]\). It is clear that \(h\in K\) and \(\theta _{E}<h\). Moreover, Lemma 2 and assumption (ii) give us that, for any \(t\in [0,1]\),

$$\begin{aligned} A(h,h)(t)= & {} \int _{0}^{1}G(t,s)f(s,h(s),h(\alpha s))ds\nonumber \\\le & {} \frac{1}{2}t^{2}\int _{0}^{1}f(s,1,0)ds\nonumber \\= & {} \frac{1}{2}h(t)\int _{0}^{1}f(s,1,0)ds=h(t)\int _{0}^{1}\frac{1}{2}f(s,1,0)ds. \end{aligned}$$
(6)

On the other hand, by Lemma 2 and assumption (ii), it follows that

$$\begin{aligned} A(h,h)(t)= & {} \int _{0}^{1}G(t,s)f(s,h(s),h(\alpha s))ds\nonumber \\\ge & {} \frac{1}{3}t^{2}\int _{0}^{1}s^{2}f(s,0,1)ds\nonumber \\= & {} \frac{1}{3}h(t)\int _{0}^{1}s^{2}f(s,0,1)ds\nonumber \\= & {} h(t)\int _{0}^{1}\frac{1}{3}s^{2}f(s,0,1)ds. \end{aligned}$$
(7)

If we put

$$\begin{aligned} \alpha _{1}=\int _{0}^{1}\frac{1}{3}s^{2}f(s,0,1)ds \end{aligned}$$

and

$$\begin{aligned} \alpha _{2}=\int _{0}^{1}\frac{1}{2}f(s,1,0)ds \end{aligned}$$

then, from (6) and (7), we get

$$\begin{aligned} \alpha _{1}h\le A(h,h)\le \alpha _{2} h. \end{aligned}$$
(8)

Now, we need to prove that \(\alpha _{i}>0\) for \(i=1,2\).

In order to do this, it is sufficient to prove that \(\alpha _{1}>0\) (because \(\alpha _{1}\le \alpha _{2}\)). In fact, by assumption (i), we have \(g(t_{0},0)>0\) for some \(t_{0}\in (0,1]\). By the continuity of g (assumption (i)), there exists a subset \(E\subset [0,1]\) with \(t_{0}\in E\) such that \(\mu (E)>0\) (where \(\mu \) denotes the Lebesgue measure) and \(g(t,0)>0\) for \(t\in E\).

By using assumption (v), we deduce

$$\begin{aligned} f(s,0,1)\ge \delta _{0}g(s,0)\ge 0, \end{aligned}$$

and we get

$$\begin{aligned} \alpha _{1}= & {} \int _{0}^{1}\frac{1}{3}s^{2}f(s,0,1)ds\\\ge & {} \int _{0}^{1}\frac{1}{3}s^{2}\delta _{0}g(s,0)ds\\\ge & {} \int _{E}\frac{1}{3}s^{2}\delta _{0}g(s,0)ds>0, \end{aligned}$$

where in the last inequality we have used the fact that

$$\begin{aligned} \frac{1}{3}s^{2}\delta _{0}g(s,0)>0\,\,\,\text {for}\,\,\,s\in E-\{0\}. \end{aligned}$$

Therefore, \(\alpha _{1}\) and \(\alpha _{2}\) are positive numbers and, consequently, by (8), \(A(h,h)\in K_{h}\).

In the sequel, we prove that \(Bh\in K_{h}\).

Taking into account Lemma 2 and assumption (ii), for any \(t\in [0,1]\) we have

$$\begin{aligned} (Bh)(t)= & {} \int _{0}^{1}G(t,s)g(s,h(s))ds\\\le & {} \int _{0}^{1}\frac{1}{2}t^{2}g(s,h(s))ds\\= & {} t^{2}\int _{0}^{1}\frac{1}{2}g(s,h(s))ds\\\le & {} t^{2}\int _{0}^{1}\frac{1}{2}g(s,1)ds=h(t)\int _{0}^{1}\frac{1}{2}g(s,1)ds. \end{aligned}$$

Similarly, by Lemma 2 and assumption (ii), we get

$$\begin{aligned} (Bh)(t)= & {} \int _{0}^{1}G(t,s)g(s,h(s))ds\\\ge & {} \int _{0}^{1}\frac{1}{3}t^{2}s^{2}g(s,h(s))ds\\\ge & {} t^{2}\int _{0}^{1}\frac{1}{3}s^{2}g(s,0)ds\\= & {} h(t)\int _{0}^{1}\frac{1}{3}s^{2}g(s,0)ds. \end{aligned}$$

Putting

$$\begin{aligned} \rho _{1}=\int _{0}^{1}\frac{1}{3}s^{2}g(s,0)ds \end{aligned}$$

and

$$\begin{aligned} \rho _{2}=\int _{0}^{1}\frac{1}{2}g(s,1)ds, \end{aligned}$$

we have that

$$\begin{aligned} \rho _{1}h\le Bh\le \rho _{2} h. \end{aligned}$$

By a similar argument to the one used above, in order to conclude that \(Bh\in K_{h}\) we only need to prove that \(\rho _{1}>0\). In fact,

$$\begin{aligned} \rho _{1}=\int _{0}^{1}\frac{1}{3}s^{2}g(s,0)ds\ge \int _{E}\frac{1}{3}s^{2}g(s,0)ds>0. \end{aligned}$$

Therefore, \(Bh\in K_{h}\).

Finally, we need to prove that, for any \(u,v\in K\), there exists \(\varepsilon >0\) such that \(A(u,v)\ge \varepsilon Bu\).

In order to do this, we take \(u,v\in K\) and \(t\in [0,1]\); by assumption (v), it follows that

$$\begin{aligned} A(u,v)(t)= & {} \int _{0}^{1}G(t,s)f(s,u(s),v(\alpha s))ds\\\ge & {} \delta _{0}\int _{0}^{1}G(t,s)g(s,u(s))ds=\delta _{0}(Bu)(t). \end{aligned}$$

This shows that \(A(u,v)\ge \delta _{0} Bu\) and, therefore, we can take \(\varepsilon =\delta _{0}\).

This proves that the assumptions of Theorem 1 are satisfied and, thus, we obtain our result.

Notice that the solution \(x^{*}\) to our Problem (3) is positive since \(x^{*}\in K_{h}\) and \(0< h(t)=t^{2}\) for \(t\in (0,1)\).

This completes the proof. \(\square \)

Next, we present an example illustrating the result obtained.

Example 1

Consider the following nonlinear boundary value problem

$$\begin{aligned} \displaystyle \left\{ \begin{array}{l} u^{(4)}(t)=3+2\root 3 \of {u(t)}+\displaystyle \frac{1}{\root 3 \of {u(\frac{1}{5}t)+1}},\qquad t\in (0,1)\\ u(0)=u'(0)=u''(1)=u'''(1)=0. \end{array} \right. \end{aligned}$$
(9)

Notice that Problem (9) is a particular case of Problem (3), where

$$\begin{aligned} f(t,u,v)= & {} 2+\root 3 \of {u}+\displaystyle \frac{1}{\root 3 \of {v+1}}, \\ g(t,u)= & {} 1+\root 3 \of {u} \,\,\,\text {and}\,\,\,\alpha =\frac{1}{5}. \end{aligned}$$

It is easily seen that f maps \([0,1]\times [0,\infty )\times [0,\infty )\) into \([0,\infty )\) and g maps \([0,1]\times [0,\infty )\) into \([0,\infty )\).

Moreover, both f and g are clearly continuous and, for example, \(g(\frac{1}{3},0)=1>0\).

This shows that assumption (i) of Theorem 2 is satisfied. It is clear that f and g satisfy assumption (ii) of Theorem 2. In order to check assumption (iii) of Theorem 2, we take \(t\in [0,1]\), \(u\in [0,\infty )\) and \(\lambda \in (0,1)\), and it follows that

$$\begin{aligned} g(t,\lambda u)= & {} 1+\root 3 \of {\lambda u}>\root 3 \of {\lambda }+\root 3 \of {\lambda }\root 3 \of {u}\\= & {} \root 3 \of {\lambda }(1+\root 3 \of {u})\\> & {} \lambda (1+\root 3 \of {u})=\lambda g(t,u). \end{aligned}$$

Assumption (iv) of Theorem 2 is satisfied because, for \(t\in [0,1]\), \(u,v\in [0,\infty )\) and \(\lambda \in (0,1)\), we have

$$\begin{aligned} f(t,\lambda u,\lambda ^{-1}v)= & {} 2+\root 3 \of {\lambda u}+\displaystyle \frac{1}{\root 3 \of {\lambda ^{-1}v+1}}\\= & {} 2+\root 3 \of {\lambda }\root 3 \of {u}+\displaystyle \frac{\root 3 \of {\lambda }}{\root 3 \of {v+\lambda }}\\> & {} 2\root 3 \of {\lambda }+\root 3 \of {\lambda }\root 3 \of {u}+\displaystyle \frac{\root 3 \of {\lambda }}{\root 3 \of {v+\lambda }}\\= & {} \root 3 \of {\lambda }\displaystyle \left( 2+\root 3 \of {u}+\displaystyle \frac{1}{\root 3 \of {v+\lambda }}\displaystyle \right) \\> & {} \root 3 \of {\lambda }\displaystyle \left( 2+\root 3 \of {u}+\displaystyle \frac{1}{\root 3 \of {v+1}}\displaystyle \right) \\= & {} \root 3 \of {\lambda }f(t,u,v), \end{aligned}$$

and this shows that assumption (iv) of Theorem 2 is satisfied with \(\beta =\frac{1}{3}\).

Finally, for \(t\in [0,1]\) and \(u,v\in [0,\infty )\), we deduce

$$\begin{aligned} f(t,u,v)=2+\root 3 \of {u}+\displaystyle \frac{1}{\root 3 \of {v+1}}>1+\root 3 \of {u}=g(t,u) \end{aligned}$$

and this proves that assumption (v) of Theorem 2 is satisfied with \(\delta _{0}=1\).

Now, by Theorem 2, we infer that Problem (9) has a unique positive solution \(u^{*}\in {\mathcal {C}}[0,1]\) with \(u^{*}\in K_{h}\), where \(h(t)=t^{2}\) for \(t\in [0,1]\).
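To illustrate conclusion (3) of Theorem 2 for Problem (9), the following minimal sketch of ours (the grid size, quadrature rule and iteration count are arbitrary choices, not part of the paper) runs the iterative scheme with trapezoidal quadrature; both sequences approach the same limit.

```python
# Numerical sketch (ours) of the iteration of Theorem 2(3) for Problem (9):
# x_n(t) = int_0^1 G(t,s) [ f(s, x_{n-1}(s), y_{n-1}(s/5)) + g(s, x_{n-1}(s)) ] ds,
# and symmetrically for y_n, using trapezoidal quadrature on a uniform grid.

n = 100
h = 1.0 / n
ts = [k * h for k in range(n + 1)]
alpha = 1 / 5

def G(t, s):
    if t <= s:
        return (3 * t**2 * s - t**3) / 6
    return (3 * s**2 * t - s**3) / 6

def f(u, v):                     # f(t,u,v) of Example 1 (does not depend on t)
    return 2 + u ** (1 / 3) + (v + 1) ** (-1 / 3)

def g(u):                        # g(t,u) of Example 1
    return 1 + u ** (1 / 3)

def interp(w, t):
    """Piecewise-linear evaluation of the grid function w at the point t."""
    k = min(int(t / h), n - 1)
    lam = (t - ts[k]) / h
    return (1 - lam) * w[k] + lam * w[k + 1]

def step(x, y):
    """One step of the scheme: returns the grid function A(x, y) + B x."""
    out = []
    for t in ts:
        vals = [G(t, s) * (f(x[k], interp(y, alpha * s)) + g(x[k]))
                for k, s in enumerate(ts)]
        out.append(h * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
    return out

x = [t**2 for t in ts]           # initial values in K_h, h(t) = t^2
y = [4 * t**2 for t in ts]
for _ in range(40):
    x, y = step(x, y), step(y, x)

gap = max(abs(a - b) for a, b in zip(x, y))
assert gap < 1e-6                # x_n and y_n approach the same limit
print("u*(1) ~", x[-1])
```

The computed limit vanishes quadratically at \(t=0\), consistent with \(u^{*}\in K_{h}\) for \(h(t)=t^{2}\).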

Next, we compare our result with the one appearing in [2].

In [2], the author studied the following fourth order boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll} u^{(4)}(t)=f(t,u(t),u'(t),u''(t),u'''(t)),&{}0<t<1,\\ u(0)=u'(0)=u''(1)=u'''(1)=0.&{} \end{array} \right. \end{aligned}$$
(10)

He proved the existence of positive solutions to Problem (10) under certain growth conditions on the nonlinearity f.

In particular, if by \(f^{0}\) and \(f_{\infty }\) we denote the quantities

$$\begin{aligned} f^{0}=\displaystyle \limsup _{\begin{array}{l}x_{0},x_{1},x_{2}\ge 0,x_{3}\le 0\\ |x_{0}|+|x_{1}|+|x_{2}|+|x_{3}|\rightarrow 0\end{array}}\displaystyle \max _{0\le t\le 1}\displaystyle \frac{f(t,x_{0},x_{1},x_{2},x_{3})}{|x_{0}|+|x_{1}|+|x_{2}|+|x_{3}|} \end{aligned}$$

and

$$\begin{aligned} f_{\infty }=\displaystyle \liminf _{\begin{array}{l}x_{0},x_{1},x_{2}\ge 0,x_{3}\le 0\\ |x_{0}|+|x_{1}|+|x_{2}|+|x_{3}|\rightarrow \infty \end{array}}\displaystyle \min _{0\le t\le 1}\displaystyle \frac{f(t,x_{0},x_{1},x_{2},x_{3})}{|x_{0}|+|x_{1}|+|x_{2}|+|x_{3}|} \end{aligned}$$

then the following result is proved in Corollary 3.1 of [2].

Theorem 3

Let \(f:[0,1]\times [0,\infty )\times [0,\infty )\times [0,\infty )\times (-\infty , 0]\rightarrow [0,\infty )\) be continuous. Suppose that the following assumptions hold.

  1. (a)

    For any \(M>0\) there exists a positive continuous function \(H_{M}(\rho )\) defined on \([0,\infty )\) satisfying

    $$\begin{aligned} \displaystyle \int _{0}^{+\infty }\displaystyle \frac{\rho d\rho }{H_{M}(\rho )+1}=+\infty \end{aligned}$$

    such that

    $$\begin{aligned} f(t,x_{0},x_{1},x_{2},x_{3})\le H_{M}(\max (|x_{2}|,|x_{3}|)), \end{aligned}$$

    for any \(t\in [0,1]\), \(x_{0},x_{1}\in [0,M]\), \(x_{2}\in [0,\infty )\) and \(x_{3}\in (-\infty , 0]\). (Nagumo type condition).

  2. (b)

    \(f^{0}<1\) and \(f_{\infty }>10.5\).

Then Problem (10) has at least one positive solution.

Next, we present an example where condition (b) of Theorem 3 is not satisfied but to which Theorem 2 can be applied.

Example 2

Consider the following nonlinear boundary value problem

$$\begin{aligned} \displaystyle \left\{ \begin{array}{ll} u^{(4)}(t)=3+\root 3 \of {u},&{}t\in (0,1),\\ u(0)=u'(0)=u''(1)=u'''(1)=0. \end{array}\right. \end{aligned}$$
(11)

Notice that Problem (11) is a particular case of Problem (3) with

$$\begin{aligned} f(t,u,v)=2+\root 3 \of {u}, \end{aligned}$$

and

$$\begin{aligned} g(t,u)=1. \end{aligned}$$

It is clear that f maps \([0,1]\times [0,\infty )\times [0,\infty )\) into \([0,\infty )\), g maps \([0,1]\times [0,\infty )\) into \([0,\infty )\), both f and g are continuous and, in particular, \(g(\frac{1}{2},0)=1>0\). Moreover, f is increasing in u and decreasing in v, and g is increasing in u.

On the other hand, for any \(\lambda \in (0,1)\), \(t\in [0,1]\) and \(u\in [0,\infty )\),

$$\begin{aligned} g(t,\lambda u)=1>\lambda =\lambda g(t,u). \end{aligned}$$

To check assumption (iv) of Theorem 2, we take \(t\in [0,1]\), \(u,v\in [0,\infty )\) and \(\lambda \in (0,1)\) and we deduce

$$\begin{aligned} f(t,\lambda u,\lambda ^{-1}v)= & {} 2+\root 3 \of {\lambda u}=2+\root 3 \of {\lambda }\root 3 \of {u}\\> & {} 2\root 3 \of {\lambda }+\root 3 \of {\lambda }\root 3 \of {u}\\= & {} \root 3 \of {\lambda }(2+\root 3 \of {u})=\root 3 \of {\lambda }f(t,u,v), \end{aligned}$$

and this proves that assumption (iv) of Theorem 2 is satisfied with \(\beta =\frac{1}{3}\).

Finally, for any \(t\in [0,1]\) and \(u,v\in [0,\infty )\), we have

$$\begin{aligned} f(t,u,v)=2+\root 3 \of {u}>1=g(t,u). \end{aligned}$$

Therefore, since assumptions of Theorem 2 are satisfied, it follows that Problem (11) has a unique positive solution \(u^{*}\in {\mathcal {C}}[0,1]\) such that, for any \(t\in [0,1]\),

$$\begin{aligned} c_{1}t^{2}\le u^{*}(t)\le c_{2}t^{2}, \end{aligned}$$

where \(c_{1}\) and \(c_{2}\) are positive constants.

On the other hand,

$$\begin{aligned} f^{0}= & {} \displaystyle \limsup _{u+v\rightarrow 0}\displaystyle \max _{0\le t\le 1}\displaystyle \frac{f(t,u,v)}{u+v}\\= & {} \displaystyle \limsup _{u+v\rightarrow 0}\displaystyle \max _{0\le t\le 1}\displaystyle \frac{2+\root 3 \of {u}}{u+v}\\= & {} \displaystyle \limsup _{u+v\rightarrow 0}\displaystyle \frac{2+\root 3 \of {u}}{u+v}=\infty , \end{aligned}$$

and

$$\begin{aligned} f_{\infty }= & {} \displaystyle \liminf _{u+v\rightarrow \infty }\displaystyle \min _{0\le t\le 1}\displaystyle \frac{f(t,u,v)}{u+v}\\= & {} \displaystyle \liminf _{u+v\rightarrow \infty }\displaystyle \min _{0\le t\le 1}\displaystyle \frac{2+\root 3 \of {u}}{u+v}\\= & {} \displaystyle \liminf _{u+v\rightarrow \infty }\displaystyle \frac{2+\root 3 \of {u}}{u+v}=0. \end{aligned}$$

Consequently, Problem (11) cannot be studied via Theorem 3, since assumption (b) of this theorem is not satisfied.
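Although Theorem 3 does not apply, Theorem 2 does, and the solution of Problem (11) is easy to approximate. The following minimal sketch of ours (the grid and iteration count are arbitrary choices) iterates the integral equation \(u(t)=\int _{0}^{1}G(t,s)(3+\root 3 \of {u(s)})ds\) and checks that \(u^{*}(t)/t^{2}\) stays between positive constants, as guaranteed by \(u^{*}\in K_{h}\).

```python
# Numerical sketch (ours): fixed-point iteration for Problem (11),
# u(t) = int_0^1 G(t,s) (3 + u(s)^(1/3)) ds, with trapezoidal quadrature,
# followed by a check of the two-sided bound c1 t^2 <= u*(t) <= c2 t^2.

def G(t, s):
    if t <= s:
        return (3 * t**2 * s - t**3) / 6
    return (3 * s**2 * t - s**3) / 6

n = 100
h = 1.0 / n
ts = [k * h for k in range(n + 1)]

def step(u):
    out = []
    for t in ts:
        vals = [G(t, s) * (3 + u[k] ** (1 / 3)) for k, s in enumerate(ts)]
        out.append(h * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
    return out

u = [0.0] * (n + 1)
for _ in range(60):
    u = step(u)

# u*(t)/t^2 is bounded between positive constants, consistent with u* in K_h
ratios = [u[k] / ts[k] ** 2 for k in range(1, n + 1)]
print("c1 ~", min(ratios), " c2 ~", max(ratios))
```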