1 Introduction and Main Results

The lower and upper solutions method is a classical tool for studying boundary value problems associated with ordinary and partial differential equations of different types. Since the pioneering works by Picard [11], Scorza Dragoni [13] and Nagumo [10], thousands of papers have employed it to study existence, multiplicity, localisation and stability properties of the solutions of first- and second-order problems. See, e.g., [4] for a classical monograph on the topic. To present a simple but illustrative example, let us consider the Neumann problem

$$\begin{aligned} x''=g(t,x),\quad x'(0)=x'(1)=0. \end{aligned}$$
(1)

For this problem, a lower solution \(\alpha :[0,1]\rightarrow \mathbb {R}\) is defined as a \(C^2\)-function satisfying \( \alpha ''(t)\ge g\big (t,\alpha (t)\big ), \) for every \(t\in [0,1]\), and \(\alpha '(0)\ge 0 \ge \alpha '(1)\). An upper solution \(\beta \) is similarly defined by reversing the inequalities. If we set \(y=x'\), problem (1) is equivalent to the planar system

$$\begin{aligned} x'=f(t,y),\qquad y'=g(t,x), \quad y(0)=y(1)=0, \end{aligned}$$
(2)

where \(f(t,y)=y\). The relations defining \(\alpha \) and \(\beta \) translate into

$$\begin{aligned} y_\alpha '(t)\ge g(t,\alpha (t)),\quad \text { for every }t\in [0,1], \quad y_\alpha (0)\ge 0\ge y_\alpha (1), \end{aligned}$$

where \(y_\alpha =\alpha '\),

$$\begin{aligned} y_\beta '(t)\le g(t,\beta (t)),\quad \text { for every }t\in [0,1], \quad y_\beta (0)\le 0\le y_\beta (1), \end{aligned}$$

where \(y_\beta =\beta '\).
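
To fix ideas, here is a minimal worked example of ours (not taken from the references): if \(g(t,x)=x-e(t)\), with \(e:[0,1]\rightarrow \mathbb {R}\) continuous, then the constant functions \(\alpha \equiv -\Vert e\Vert _\infty \) and \(\beta \equiv \Vert e\Vert _\infty \) are well-ordered lower and upper solutions of (1), since

$$\begin{aligned} \alpha ''=0\ge \alpha -e(t)=g(t,\alpha ),\qquad \beta ''=0\le \beta -e(t)=g(t,\beta ), \quad \text {for every } t\in [0,1], \end{aligned}$$

while the boundary inequalities hold trivially with \(y_\alpha =y_\beta \equiv 0\).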

With similar models in mind, Fonda and Toader in [8] have extended to planar systems the definitions of lower and upper solutions for a wide class of problems and for general equations of the form

$$\begin{aligned} x'=f(t,x,y),\qquad y'=g(t,x,y). \end{aligned}$$

In particular, we refer to [8, §2] for the definition in the periodic setting (see also [6, §2] for the non-smooth case), and to [7, §2] for separated boundary value problems (see also [5, §4]).

In [7] (see, e.g., Theorem 5 and Corollary 10), assuming the existence of a lower solution \(\alpha \) and an upper solution \(\beta \), with \(\alpha \le \beta \), it is proved that there exists a solution (x, y) of (2) satisfying \(\alpha \le x\le \beta \). In this paper, we are interested in extending this result to systems of the type

$$\begin{aligned} x'=f(t,y),\qquad (a(t) y)'=g(t,x), \end{aligned}$$
(3)

motivated by the study of radial weighted p-Laplacian differential equations as, e.g., in [1,2,3,9]. Consider, without loss of generality, the equation in the unit ball \({\mathscr {B}}\)

$$\begin{aligned} \textrm{div}\big ( \eta (|x|) \vert \nabla v\vert ^{p-2}\nabla v\big )=h(|x|,v), \end{aligned}$$
(4)

where \(\eta :[0,1]\rightarrow \mathbb {R}^+\) is a strictly positive smooth radial weight, \(h:[0,1]\times \mathbb {R}\rightarrow \mathbb {R}\) is continuous, and \(p>1\). We are interested in finding radial solutions of (4) of the form \(v(x)=u(|x|)=u(r)\). The function \(u:[0,1]\rightarrow \mathbb {R}\) satisfies the equation

$$\begin{aligned} (a(r) \vert u^{\prime }\vert ^{p-2}u^{\prime })^{\prime }=g(r,u), \qquad r\in ]0,1], \end{aligned}$$
(5)

with \(a(r)=r^{N-1} \eta (r)\) and \(g(r,u)=r^{N-1} h(r,u)\); moreover, u is continuously differentiable with \(u'(0)=0\), and \(a(\cdot )|u'|^{p-2} u'\) is also continuously differentiable.
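
Indeed, for radial functions we have the standard identity

$$\begin{aligned} \textrm{div}\big ( \eta (|x|) \vert \nabla v\vert ^{p-2}\nabla v\big ) =\frac{1}{r^{N-1}} \big ( r^{N-1} \eta (r) \vert u'\vert ^{p-2}u'\big )', \qquad r=|x|, \end{aligned}$$

so that (5) follows from (4) after multiplying by \(r^{N-1}\).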

Denoting by \(q>1\) the conjugate exponent of p, satisfying \(\tfrac{1}{p} +\tfrac{1}{q}=1\), Eq. (5) is equivalent to the system

$$\begin{aligned} u^{\prime }=|y|^{q-2}y,\qquad (a(r) y)^{\prime }=g(r,u), \end{aligned}$$

which is a special form of (3). Note that the function a(r) vanishes at \(r=0\), creating a singularity for our problem, and this fact generates the main difficulty in our study. The issues raised by the singularity, concerning existence, uniqueness and continuous dependence on initial data for the Cauchy problems associated with the second-order differential equation (5), were already addressed in the Appendix of [9]; see also [2,3,4]. In the appendix of this paper, we present the corresponding discussion for system (3). Moreover, we can also allow the possibility of having a second singularity at \(r=1\).
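
The equivalence between (5) and the system above can be checked directly: setting \(y=\vert u'\vert ^{p-2}u'\) and recalling that \((p-1)(q-1)=1\), we compute

$$\begin{aligned} \vert y\vert ^{q-2}y=\vert u'\vert ^{(p-1)(q-2)}\,\vert u'\vert ^{p-2}u' =\vert u'\vert ^{(p-1)(q-1)-(p-1)+p-2}\,u'=u', \end{aligned}$$

so the first equation of the system simply inverts the definition of y, while the second one is (5).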

We consider the mixed boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} x'=f(t,y),\qquad (a(t) y)'=g(t,x), \\ y(0)=0=x(1) \sin \theta +y(1) \cos \theta , \end{array}\right. } \end{aligned}$$
(6)

with \(\theta \in ]-\tfrac{\pi }{2},\tfrac{\pi }{2}]\). Having in mind the radial problem, it is natural to assume the Neumann condition \(y(0)=0\) at the left endpoint of our interval. Concerning the right endpoint, notice that, in case \(\theta =0\), we have a Neumann-type boundary condition, while in case \(\theta =\tfrac{\pi }{2}\), we are dealing with a Dirichlet-type condition.

Let \(a:[0,1]\rightarrow \mathbb {R}\) satisfy the following assumptions:

(A1):

\(a\in C^{1}([0,1])\);

(A2):

\(a(t)>0\), for all \(t\in ]0,1]\);

(A3):

\(a(0)=0\), and there exists \(\rho _0\in ]0,1]\) such that

$$\begin{aligned} a'(t)\ge 0,\quad \text { for every }t\in [0,\rho _0]. \end{aligned}$$

Remark 1

Assume \(N\ge 2\) and \(\eta :[0,1]\rightarrow \mathbb {R}^+\) is strictly positive and continuously differentiable on [0, 1]. Then, the function \(a(r)=r^{N-1}\eta (r)\) introduced in (5) satisfies assumptions (A1), (A2) and (A3).
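
As a quick check of Remark 1, we compute

$$\begin{aligned} a'(r)=r^{N-2}\big ( (N-1) \eta (r)+r \eta '(r)\big ); \end{aligned}$$

since \(\eta (0)>0\), the term in brackets is strictly positive for r close to 0, so (A3) holds with any sufficiently small \(\rho _0\), while (A1) and (A2) are immediate.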

We now state our first result, where the existence of a pair of well-ordered lower and upper solutions is assumed, referring to Sect. 2.1 for their definitions.

Theorem 2

Assume \(f, g:[0,1]\times \mathbb {R}\rightarrow \mathbb {R}\) are continuous, locally Lipschitz continuous in the second variable, and \(a:[0,1]\rightarrow \mathbb {R}\) satisfies (A1), (A2) and (A3). Assume that the function \(\hat{g}: ]0,1]\times \mathbb {R}\rightarrow \mathbb {R}\), defined by

$$\begin{aligned} \hat{g}(t,x)=\frac{1}{a(t)} g(t,x), \end{aligned}$$
(7)

can be continuously extended to \([0,1]\times \mathbb {R}\). Suppose further that there exist a lower solution \(\alpha \) and an upper solution \(\beta \) for problem (6), satisfying \(\alpha \le \beta \). Then, problem (6) has a solution (x, y) such that \(\alpha \le x\le \beta \).

Concerning the proof of the above theorem, instead of the standard application of degree theory, we present an alternative approach based on a shooting method, after a careful phase plane analysis of the solutions.
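
To illustrate the idea behind the shooting approach, we include a minimal numerical sketch; it is an illustration of ours, not part of the proof, and the choices \(a(t)=t^2\), \(f(t,y)=y\), \(\hat{g}(t,x)=\arctan x-1\) and \(\theta =0\) are assumptions made only for this example. The integration is carried out in the variable \(z=a(t) y\), which avoids dividing by \(a(0)=0\), and the shooting parameter \(\sigma =x(0)\) (approximated by \(x(\varepsilon )\)) is adjusted by bisection.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

theta, eps = 0.0, 1e-6                       # Neumann-type condition at t=1; start just past the singularity

def a(t):        return t**2                 # weight vanishing at t=0, as allowed by (A1)-(A3)
def f(t, y):     return y                    # first equation of system (3)
def g_hat(t, x): return np.arctan(x) - 1.0   # \hat g(t,x) = g(t,x)/a(t), continuous up to t=0

def rhs(t, u):
    # integrate the pair (x, z) with z = a(t)*y, so that z' = g(t,x) = a(t)*\hat g(t,x)
    x, z = u
    return [f(t, z / a(t)), a(t) * g_hat(t, x)]

def residual(sigma):
    # shoot from x(eps) = sigma, z(eps) = 0, and evaluate the boundary functional at t = 1
    sol = solve_ivp(rhs, (eps, 1.0), [sigma, 0.0], rtol=1e-9, atol=1e-12)
    x1, y1 = sol.y[0, -1], sol.y[1, -1] / a(1.0)
    return x1 * np.sin(theta) + y1 * np.cos(theta)

# alpha = 0 and beta = 2 are constant lower/upper solutions here, since
# arctan(0) - 1 < 0 < arctan(2) - 1; the zero of the residual is sought
# between them, in the spirit of Proposition 18
sigma_star = brentq(residual, 0.0, 2.0)
print("shooting parameter sigma =", sigma_star)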

Remark 3

We underline that if we replace assumptions (A2) and (A3) with the hypothesis \(a(t)>0\), for all \(t\in [0,1]\), then the conclusion of Theorem 2 can be proved with simpler computations.

If we are only interested in the Neumann problem (6), with \(\theta =0\), we can weaken the assumptions on the function a by allowing \(a(1)=0\). We shall assume that \(a:[0,1]\rightarrow \mathbb {R}\) satisfies (A1) and

\((A2)'\):

\(a(t)>0\), for all \(t\in ]0,1[\);

\((A3)'\):

\(a(0)=0\), \(a(1)=0\), and there exist \(\rho _0\le \rho _1\) in ]0, 1[ such that

$$\begin{aligned} a'(t)\ge 0 \text { for every }t\in [0,\rho _0] \text{ and } a'(t)\le 0 \text { for every }t\in [\rho _1,1]. \end{aligned}$$

An example of a function a satisfying these assumptions is

$$\begin{aligned} a(t)=\sin ^{N-2}(\pi t), \quad N\ge 3. \end{aligned}$$
(8)

It arises, e.g. when dealing with the Laplace–Beltrami operator on the sphere \({\mathbb {S}}^{N-1}\subseteq \mathbb {R}^N\), if we are looking for solutions depending only on the latitude \(\varphi =\pi t\) (asking for symmetry with respect to all the other angle variables). In this case, the problem we need to solve is the following

$$\begin{aligned} {\left\{ \begin{array}{ll} \big (\sin ^{N-2}(\pi t) x'\big )'=\sin ^{N-2}(\pi t) g(t,x),\\ x'(0)=0=x'(1), \end{array}\right. } \end{aligned}$$
(9)

which is a special form of (6), with \(\theta =0\), the function a defined by (8) and the function g replaced by \(\sin ^{N-2}(\pi t) g(t,x)\).
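
For completeness, we recall where (9) comes from: for a function u on \({\mathbb {S}}^{N-1}\) depending only on the colatitude \(\varphi \), the Laplace–Beltrami operator reduces to

$$\begin{aligned} \Delta _{{\mathbb {S}}^{N-1}} u(\varphi ) =\frac{1}{\sin ^{N-2}\varphi }\, \big (\sin ^{N-2}\varphi \; u'(\varphi )\big )', \end{aligned}$$

so that, after the change of variable \(\varphi =\pi t\), the equation \(\Delta _{{\mathbb {S}}^{N-1}} u = h(u)\) takes the form appearing in (9), with the constant factor \(\pi ^2\) coming from the chain rule absorbed into the function g.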

Concerning this new setting, we can state our second result.

Theorem 4

Assume \(f, g:[0,1]\times \mathbb {R}\rightarrow \mathbb {R}\) are continuous, locally Lipschitz continuous in the second variable, and \(a:[0,1]\rightarrow \mathbb {R}\) satisfies (A1), \((A2)'\) and \((A3)'\). Assume that the function \(\hat{g}: ]0,1[ \times \mathbb {R}\rightarrow \mathbb {R}\), defined by (7), can be continuously extended to \([0,1]\times \mathbb {R}\). Suppose further that there exist a lower solution \(\alpha \) and an upper solution \(\beta \) for problem (6), with \(\theta =0\), satisfying \(\alpha \le \beta \). Then, problem (6) with \(\theta =0\) has a solution (x, y) such that \(\alpha \le x\le \beta \).

The proof of this second theorem will be provided through a double shooting method.

Remark 5

If the functions f and g are only continuous, similar results can still be proved, adapting the approximation technique used in [8]. However, in this case, we need to assume the existence of strict lower and upper solutions \(\alpha \) and \(\beta \) satisfying \(\alpha (t)<\beta (t)\) for all \(t\in ]0,1[\).

2 Preliminaries

2.1 Lower and Upper Solutions

We now provide the definitions of lower and upper solutions for problem (6), thus extending the ones given in [5,6,7].

Definition 6

A continuously differentiable function \(\alpha :[0,1]\rightarrow \mathbb {R}\) is said to be a lower solution for problem (6) if the following properties hold:

  1. (i)

    there exists a unique function \(y_\alpha :[0,1]\rightarrow \mathbb {R}\) such that

    $$\begin{aligned} {\left\{ \begin{array}{ll} y<y_\alpha (t)\quad \Rightarrow \quad f(t,y)<\alpha '(t),\\ y>y_\alpha (t)\quad \Rightarrow \quad f(t,y)>\alpha '(t) ; \end{array}\right. } \end{aligned}$$
    (10)
  2. (ii)

    \(y_\alpha \) is continuously differentiable, and

    $$\begin{aligned} (a(t) y_\alpha (t))'\ge g(t,\alpha (t)),\quad \text { for every }t\in [0,1] ; \end{aligned}$$
    (11)
  3. (iii)

    \(y_\alpha (0)\ge 0\) and \(\alpha (1) \sin \theta + y_\alpha (1) \cos \theta \le 0\).

Definition 7

A continuously differentiable function \(\beta :[0,1]\rightarrow \mathbb {R}\) is said to be an upper solution for problem (6) if the following properties hold:

(j):

there exists a unique function \(y_\beta :[0,1]\rightarrow \mathbb {R}\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} y<y_\beta (t)\quad \Rightarrow \quad f(t,y)<\beta '(t),\\ y>y_\beta (t)\quad \Rightarrow \quad f(t,y)>\beta '(t) ; \end{array}\right. } \end{aligned}$$
(12)
\((j\!j)\):

\(y_\beta \) is continuously differentiable, and

$$\begin{aligned} (a(t) y_\beta (t))'\le g(t,\beta (t)),\quad \text { for every }t\in [0,1] ; \end{aligned}$$
(13)
\((j\!j\!j)\):

\(y_\beta (0)\le 0\) and \(\beta (1) \sin \theta + y_\beta (1) \cos \theta \ge 0\).

For an intuitive meaning of the previous definitions, see Fig. 1.

Fig. 1 An illustration of the definition of lower and upper solutions from a dynamical point of view. Horizontal arrows represent the relative velocity \(x'\) of solutions of system (3) compared with \(\alpha '\) and \(\beta '\), as stated in (10) and (12). Curved arrows indicate the essence of conditions (11) and (13), comparing \(y'\) with \(y'_\alpha \) and \(y'_\beta \).

2.2 Phase Plane Estimates

In this section, we provide some results which will be used later in the proofs of the main theorems.

Proposition 8

Suppose that \(a:[0,1]\rightarrow \mathbb {R}\) satisfies (A1), (A2) and (A3). Then, there exists \(C>0\) such that

$$\begin{aligned}&\int \limits _{0}^{t}a(s) \textrm{d}s\le C a(t),\quad \text {for every } t\in [0,1]. \end{aligned}$$

On the other hand, if \(a:[0,1]\rightarrow \mathbb {R}\) satisfies (A1), \((A2)'\) and \((A3)'\), then there exists \(C>0\) such that

$$\begin{aligned}&\int \limits _{0}^{t}a(s) \textrm{d}s\le C a(t),\quad \text {for every }t\in [0,\tfrac{1}{2}],\\&\int \limits _{t}^{1}a(s) \textrm{d}s\le C a(t),\quad \text {for every }t\in [\tfrac{1}{2},1]. \end{aligned}$$

Proof

Assume a satisfies (A1), \((A2)'\) and \((A3)'\), the former case being easier. Consider the functions \(\psi _1:[0,1[ \rightarrow \mathbb {R}\) and \(\psi _2: ]0,1]\rightarrow \mathbb {R}\) defined by

$$\begin{aligned} \psi _1(t)= {\left\{ \begin{array}{ll} \displaystyle \frac{\int _0^t a(s) \textrm{d}s}{a(t)} &{}\quad t>0,\\ 0 &{}\quad t=0; \end{array}\right. } \quad \psi _2(t)= {\left\{ \begin{array}{ll} \displaystyle \frac{\int _t^1 a(s) \textrm{d}s}{a(t)} &{}\quad t<1,\\ 0 &{}\quad t=1. \end{array}\right. } \end{aligned}$$

These functions are continuous: indeed, by \((A3)'\), the function a is nondecreasing on \([0,\rho _0]\) and nonincreasing on \([\rho _1,1]\), whence

$$\begin{aligned} \psi _1(t) \le t \quad \text {for all } t\in [0,\rho _0] ; \quad \quad \psi _2(t) \le 1-t \quad \text {for all } t\in [\rho _1,1]. \end{aligned}$$

This gives the continuity of \(\psi _1\) at \(t=0\) and of \(\psi _2\) at \(t=1\), while continuity at the interior points follows from \((A2)'\). The conclusion then follows by choosing C larger than the maximum of \(\psi _1\) on \([0,\tfrac{1}{2}]\) and the maximum of \(\psi _2\) on \([\tfrac{1}{2},1]\). \(\square \)
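
For instance, for the model weight \(a(t)=t^{N-1}\), we have

$$\begin{aligned} \int \limits _{0}^{t}a(s) \textrm{d}s=\frac{t^{N}}{N}=\frac{t}{N}\, a(t)\le \frac{1}{N}\, a(t), \quad \text {for every } t\in [0,1], \end{aligned}$$

so the first estimate of Proposition 8 holds with \(C=\tfrac{1}{N}\).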

We set

$$\begin{aligned} M=\max \lbrace |\hat{g}(t,x)| : 0\le t \le 1, \alpha (t)\le x\le \beta (t)\rbrace , \end{aligned}$$
(14)

where \(\hat{g}\) was defined in (7) and \(\alpha ,\beta \) are the lower and upper solutions given in the assumptions of Theorems 2 and 4, and take a constant K satisfying

$$\begin{aligned} K>\max \lbrace \Vert \alpha '\Vert _{\infty }, \Vert \beta ^{\prime }\Vert _{\infty },\Vert y_{\alpha }\Vert _{\infty },\Vert y_{\beta }\Vert _{\infty }, CM\rbrace , \end{aligned}$$

where C is the constant introduced in Proposition 8.

We first modify the functions f(t, y) and g(t, x) as follows. Define \(\tilde{g}:[0,1]\times \mathbb {R}\rightarrow \mathbb {R}\) by

$$\begin{aligned} \tilde{g}(t,x)= \left\{ \begin{array}{ll} g(t,\alpha (t))+a(t)(x-\alpha (t)), &{}\quad \text {if } x<\alpha (t), \\ g(t,x),&{}\quad \text {if }\alpha (t) \le x\le \beta (t), \\ g(t,\beta (t))+a(t)(x-\beta (t)),&{}\quad \text {if } x>\beta (t), \end{array} \right. \end{aligned}$$

and \(\tilde{f}:[0,1]\times \mathbb {R}\rightarrow \mathbb {R}\) by

$$\begin{aligned} \tilde{f}(t,y)= \left\{ \begin{array}{ll} y, &{}\quad \text {if }y\le -K-1,\\ f(t,y)(1+K+y)-y (y+K), &{}\quad \text {if }-K-1< y< -K, \\ f(t,y), &{}\quad \text {if }-K \le y\le K, \\ f(t,y)(1+K-y)+y (y-K), &{}\quad \text {if }K< y< K+1, \\ y, &{}\quad \text {if }y\ge K+1. \end{array} \right. \end{aligned}$$
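
A direct computation at the matching points shows that \(\tilde{f}\) is continuous; for instance,

$$\begin{aligned} f(t,y)(1+K-y)+y (y-K)\Big \vert _{y=K}=f(t,K), \qquad f(t,y)(1+K-y)+y (y-K)\Big \vert _{y=K+1}=K+1, \end{aligned}$$

and similarly at \(y=-K\) and \(y=-K-1\). Moreover, \(\tilde{f}(t,y)=y\) for \(|y|\ge K+1\), and \(\tilde{g}(t,\cdot )\) is affine outside \([\alpha (t),\beta (t)]\); this provides the linear growth in (x, y) which will be used below.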

Let us consider the correspondingly modified problem

$$\begin{aligned} {\left\{ \begin{array}{ll} x'=\tilde{f}(t,y),\qquad (a(t) y)'=\tilde{g}(t,x), \\ y(0)=0=x(1) \sin \theta +y(1) \cos \theta . \end{array}\right. } \end{aligned}$$
(\(\widetilde{P}\))

We shall prove the existence of a solution of (\(\widetilde{P}\)) and then verify that such a solution is indeed a solution of problem (6). To this aim, we define some regions in the space \([0,1]\times \mathbb {R}\times \mathbb {R}\) and prove their invariance properties with respect to the solutions of the planar system

$$\begin{aligned} x'=\tilde{f}(t,y),\qquad (a(t) y)'=\tilde{g}(t,x). \end{aligned}$$
(\(\widetilde{S}\))

We set

$$\begin{aligned} \begin{aligned} A_{NE}&=\lbrace (t,x,y):t\in [0,1], x>\beta (t), y>y_{\beta }(t)\rbrace ,\\ A_{SE}&=\lbrace (t,x,y):t\in [0,1], x>\beta (t), y<y_{\beta }(t)\rbrace ,\\ A_{SW}&=\lbrace (t,x,y):t\in [0,1], x<\alpha (t), y<y_{\alpha }(t)\rbrace ,\\ A_{NW}&=\lbrace (t,x,y):t\in [0,1], x<\alpha (t), y>y_{\alpha }(t)\rbrace . \end{aligned} \end{aligned}$$
(15)

Lemma 9

Let (x, y) be a solution of (\(\widetilde{S}\)) defined at a point \(t_0\in [0,1]\). We have:

  1. (i)

if \(y(t_{0})>y_{\beta }(t_{0})\), then \(x^{\prime }(t_{0})>\beta ^{\prime }(t_{0})\);

  2. (ii)

if \(y(t_{0})<y_{\beta }(t_{0})\), then \(x^{\prime }(t_{0})<\beta ^{\prime }(t_{0})\);

  3. (iii)

if \(y(t_{0})>y_{\alpha }(t_{0})\), then \(x^{\prime }(t_{0})>\alpha ^{\prime }(t_{0})\);

  4. (iv)

    if \(y(t_{0})<y_{\alpha }(t_{0})\), then \(x^{\prime }(t_{0})<\alpha ^{\prime }(t_{0})\).

Proof

We only prove (i), as the other assertions follow in a similar way. Assume \(y(t_{0})>y_{\beta }(t_{0})\). Note that, as \(\Vert y_\beta \Vert _\infty \le K\), we have \(-K\le y_\beta (t_0)<y(t_0)\).

Suppose first that \(y(t_{0})\le K\). Then, \(\tilde{f}(t_{0},y(t_{0}))=f(t_{0},y(t_{0}))\) and hence, from (12), we get

$$\begin{aligned} (x-\beta )^{\prime }(t_{0})=f(t_{0},y(t_{0}))-\beta ^{\prime }(t_{0})>0. \end{aligned}$$

Suppose next that \(K< y(t_{0})< K+1\). Then, using (12) again and the fact that \(\Vert \beta '\Vert _\infty < K\), we obtain

$$\begin{aligned} (x-\beta )^{\prime }(t_{0})&= f(t_{0},y(t_{0})) (1+K-y(t_{0}))+y(t_{0})(y(t_{0})-K)-\beta ^{\prime }(t_{0})\\&>\beta ^{\prime }(t_{0})(1+K-y(t_{0}))+K(y(t_{0})-K)-\beta ^{\prime }(t_{0})\\&=\beta ^{\prime }(t_{0})(K-y(t_{0}))+K(y(t_{0})-K)\\&=(K-\beta ^{\prime }(t_{0}))(y(t_{0})-K)>0. \end{aligned}$$

Suppose finally that \(y(t_{0})\ge K+1\). Then,

$$\begin{aligned} (x-\beta )^{\prime }(t_{0}) =y(t_{0})-\beta ^{\prime }(t_{0}) \ge K+1-\beta ^{\prime }(t_{0})>0. \end{aligned}$$

Therefore, \(x^{\prime }(t_{0})>\beta ^{\prime }(t_{0})\). \(\square \)

Lemma 10

Let (x, y) be a solution of (\(\widetilde{S}\)) defined at a point \(t_0\in [0,1]\), and suppose that both \(x(t_{0})>\beta (t_{0})\) and \(y(t_{0})=y_{\beta }(t_{0})\) hold. We have:

  1. (i)

if \(t_{0}\in ]0,1[ \), then \(y^{\prime }(t_{0})>y_{\beta }^{\prime }(t_{0});\)

  2. (ii)

    if \(t_{0}=0\), then there exists \(\delta >0\) such that \(y(t)>y_{\beta }(t)\) for all \(t\in ]0,\delta [;\)

  3. (iii)

    if \(t_{0}=1\), then there exists \(\delta >0\) such that \(y(t)<y_{\beta }(t)\) for all \(t\in ]1-\delta ,1[ \).

Proof

We first consider case (i). We recall that, from (13), we have

$$\begin{aligned} \left( a(t_{0}) y_{\beta }(t_{0})\right) ^{\prime }\le g(t_{0},\beta (t_{0})). \end{aligned}$$

Furthermore, we compute

$$\begin{aligned} \left( a(t)\big (y(t)-y_{\beta }(t)\big )\right) ^{\prime }\Big \vert _{t=t_{0}}&=a^{\prime }(t_0) \left( y(t_0)-y_{\beta }(t_0)\right) +a(t_0) \left( y^{\prime }(t_0)-y_{\beta }^{\prime }(t_0)\right) \nonumber \\&=a(t_0) \left( y^{\prime }(t_0)-y_{\beta }^{\prime }(t_0)\right) . \end{aligned}$$
(16)

Since \(x(t_0)>\beta (t_0)\), we have \(\tilde{g}(t_{0},x(t_{0}))={g}(t_{0},\beta (t_{0})) + a(t_0) (x(t_0)-\beta (t_0))\), hence we obtain, using (16),

$$\begin{aligned}&a(t_{0}) \left( y^{\prime }(t_{0})-y_{\beta }^{\prime }(t_{0})\right) =\left( a(t_0) y(t_0)\right) ^{\prime }-\left( a(t_0) y_{\beta }(t_0)\right) ^{\prime }\\&\quad \ge \tilde{g}(t_{0},x(t_{0}))-g(t_{0},\beta (t_{0})) =a(t_0) (x(t_{0})-\beta (t_{0})). \end{aligned}$$

Since \(a(t_0)>0\), we conclude that \(y^{\prime }(t_{0})-y_{\beta }^{\prime }(t_{0})\ge x(t_{0})-\beta (t_{0})>0\).

We consider now case (ii). Let us set \(z(t)=a(t) (y(t)-y_{\beta }(t))\). Since \(x(0)>\beta (0)\), there exists \(\delta >0\) such that \(x(s)>\beta (s)\), for all \(s\in [0,\delta [.\) Pick \(t\in ]0,\delta [ \). Then, we have

$$\begin{aligned} z(t)&=\int \limits _{0}^{t}z^{\prime }(s) \textrm{d}s =\int \limits _{0}^{t}\big (a(s) (y(s)-y_{\beta }(s))\big )^\prime \textrm{d}s\\&\ge \int \limits _{0}^{t} \big (\tilde{g}(s,x(s))- g(s,\beta (s))\big ) \textrm{d}s = \int \limits _{0}^{t} a(s) (x(s)-\beta (s)) \textrm{d}s >0, \end{aligned}$$

hence \(y(t)-y_{\beta }(t)>0\) for all \(t\in ]0,\delta [ \).

Case (iii) can be proved in a similar way. \(\square \)

The following symmetric result can be proved similarly for the lower solution \(\alpha \).

Lemma 11

Let (x, y) be a solution of (\(\widetilde{S}\)) defined at a point \(t_0\in [0,1]\), and suppose that both \(x(t_{0})<\alpha (t_{0})\) and \(y(t_{0})=y_{\alpha }(t_{0})\) hold. We have:

  1. (i)

    if \(t_{0}\in ]0,1[ \), then \(y^{\prime }(t_{0})<y_{\alpha }^{\prime }(t_{0});\)

  2. (ii)

    if \(t_{0}=0 \), then there exists \(\delta >0\) such that \(y(t)<y_{\alpha }(t)\) for all \(t\in ]0,\delta [;\)

  3. (iii)

    if \(t_{0}=1 \), then there exists \(\delta >0\) such that \(y(t)>y_{\alpha }(t)\) for all \(t\in ]1-\delta ,1[ \).

The previous results allow us to prove some invariance properties of the regions \(A_{NE}\), \(A_{SE}\), \(A_{NW}\), \(A_{SW}\) introduced in (15). To this aim, in the following statement, we consider a solution (x, y) of (\(\widetilde{S}\)) defined on a maximal interval of existence \(\mathcal I\). Notice that, due to the linear growth of the functions \(\tilde{f}\) and \(\tilde{g}\), if (A1), (A2) and (A3) hold, only the following two alternatives are possible:

$$\begin{aligned} \mathcal I = [0,1], \quad \mathcal I = ]0,1]. \end{aligned}$$

On the other hand, if (A1), \((A2)'\) and \((A3)'\) hold, we can have the following four alternatives:

$$\begin{aligned} \mathcal I = [0,1], \quad \mathcal I = ]0,1],\quad \mathcal I = ]0,1[,\quad \mathcal I = [0,1[. \end{aligned}$$

We use the conventional notation \([s,s]=\{s\}\), whenever \(s\in \mathbb {R}\).

Lemma 12

Let \((x,y):\mathcal I \rightarrow \mathbb {R}^2\) be a solution of (\(\widetilde{S}\)) defined at a point \(t_0\in [0,1]\). We have:

(i):

if \((t_0,x(t_0),y(t_0))\!\in \! A_{NE} \), then \((t,x(t),y(t))\!\in \! A_{NE}\) for all \(t\!\in \![t_0,1]\cap \mathcal I;\)

(ii):

if \((t_0,x(t_0),y(t_0))\!\in \! A_{SE}\), then \((t,x(t),y(t))\!\in \! A_{SE}\) for all \(t\!\in \![0,t_0]\cap \mathcal I;\)

(iii):

if \((t_0,x(t_0),y(t_0))\!\in \! A_{SW}\), then \((t,x(t),y(t))\!\in \! A_{SW}\) for all \(t\!\in \![t_0,1]\cap \mathcal I;\)

(iv):

if \((t_0,x(t_0),y(t_0))\!\in \! A_{NW}\), then \((t,x(t),y(t))\!\in \! A_{NW}\) for all \(t\!\in \![0,t_0]\cap \mathcal I \).

Proof

Let us prove the first assertion, the others follow similarly.

Let \((t_0,x(t_0),y(t_0))\in A_{NE}\) for some \(t_0\in [0,1]\). By contradiction, assume that there exists \(t_1\in ]t_0,1] \cap \mathcal I\) such that \((t,x(t),y(t))\in A_{NE}\), for every \(t\in [t_0,t_1[ \), and \((t_1,x(t_1),y(t_1))\notin A_{NE}\). In particular, we have either \(x(t_1)=\beta (t_1)\) or \(y(t_1)=y_\beta (t_1)\).

Since \(y(t_0)>y_\beta (t_0)\), recalling Lemma 9, we have \(x'(t)>\beta '(t)\), for every \(t\in [t_0,t_1[ \); therefore, the first alternative is ruled out.

Finally, using Lemma 10 (i), we get a contradiction also in the case \(y(t_1)=y_\beta (t_1)\). \(\square \)

Lemma 13

Let (x, y) be a solution of (\(\widetilde{S}\)), defined on a nontrivial interval \([t_1,t_2]\subseteq [0,1]\), satisfying \( \alpha (t_1)\le x(t_1) \le \beta (t_1) \) and \(\alpha (t_2)\le x(t_2) \le \beta (t_2)\). Then, \(\alpha (t)\le x(t)\le \beta (t)\), for all \(t\in [t_1,t_2]\).

Proof

We assume, by contradiction, that there exists \(t_{0}\in ]t_1,t_2[ \) with \(x(t_{0})>\beta (t_{0})\).

Suppose first that \(y(t_0)>y_\beta (t_0)\). Then \((t_0,x(t_0),y(t_0))\in A_{NE}\). From Lemma 12, we have \((t_2,x(t_2),y(t_2))\in A_{NE}\). In particular, \(x(t_2)>\beta (t_2)\), a contradiction.

Suppose now that \(y(t_0)=y_\beta (t_0)\). From Lemma 10, we see that \(y(t)>y_\beta (t)\) in a right neighbourhood of \(t_0\), and we conclude as before.

Finally, if \(y(t_0)<y_\beta (t_0)\), we have \((t_0,x(t_0),y(t_0))\in A_{SE}\). From Lemma 12, we have \((t_1,x(t_1),y(t_1))\in A_{SE}\). In particular, \(x(t_1)>\beta (t_1)\), again a contradiction.

Hence, we conclude that \(x(t)\le \beta (t)\) for every \(t\in [t_1,t_2]\).

In a similar way, we can prove that \(x(t)\ge \alpha (t)\) for every \(t\in [t_1,t_2]\), thus concluding the proof. \(\square \)

Lemma 14

Let (x, y) be a solution of (\(\widetilde{S}\)), defined on a nontrivial interval \([0,t_2]\subseteq [0,1]\), satisfying the following properties:

$$\begin{aligned} y(0)=0, \quad \alpha (0)\le x(0) \le \beta (0), \quad \alpha (t_2)\le x(t_2) \le \beta (t_2). \end{aligned}$$

Then \(|y(t)|\le K\), for all \(t\in [0,t_2]\).

Proof

By Lemma 13, we have that \(\alpha (t)\le x(t)\le \beta (t)\), for every \(t\in [0,t_2]\). In particular,

$$\begin{aligned} \tilde{g}(t,x(t))=g(t,x(t))=a(t) \hat{g}(t,x(t)), \end{aligned}$$

for all \(t\in [0,t_2]\). Let us set \(z(t)=a(t) y(t)\). Then, for all \(t\in [0,t_2]\),

$$\begin{aligned} z(t)= \int _{0}^{t} z'(s) \textrm{d}s = \int _{0}^{t} \tilde{g}(s,x(s)) \textrm{d}s = \int _{0}^{t}a(s) \hat{g}(s,x(s)) \textrm{d}s. \end{aligned}$$

Recalling the definition (14) of M and Proposition 8, we deduce that

$$\begin{aligned} |z(t)| \le \int \limits _{0}^{t}M a(s) \textrm{d}s \le M C a(t). \end{aligned}$$

Therefore, \(a(t) |y(t)|\le M C a(t)\) for all \(t\in [0,t_2]\). At every point where \(a(t)\ne 0\), this yields

$$\begin{aligned} |y(t)|\le MC<K. \end{aligned}$$

At the possible zeros of a (in particular at \(t=0\), where \(y(0)=0\)), the same bound follows by the continuity of y, hence the lemma is completely proved. \(\square \)

Arguing similarly, we can prove the following result.

Lemma 15

Assume (A1), \((A2)'\) and \((A3)'\). Let (x, y) be a solution of (\(\widetilde{S}\)), defined on a nontrivial interval \([t_1,1]\subseteq [0,1]\), satisfying the following properties:

$$\begin{aligned} y(1)=0, \quad \alpha (t_1)\le x(t_1) \le \beta (t_1), \quad \alpha (1)\le x(1) \le \beta (1). \end{aligned}$$

Then, \(|y(t)|\le K\), for all \(t\in [t_1,1]\).

So far, we have proved the following a priori bounds.

Proposition 16

Assume (A1), (A2) and (A3). If (x, y) is a solution of (\(\widetilde{P}\)), satisfying \( \alpha (0)\le x(0) \le \beta (0)\) and \(\alpha (1)\le x(1) \le \beta (1)\), then (x, y) is a solution of problem (6) and satisfies \(\alpha (t)\le x(t)\le \beta (t)\), for all \(t\in [0,1]\).

Proof

It is an immediate consequence of Lemma 13, applied with \([t_1,t_2]=[0,1]\), and Lemma 14, applied with \([0,t_2]=[0,1]\): these lemmas give \(\alpha \le x\le \beta \) and \(|y|\le K\), so that \(\tilde{f}(t,y(t))=f(t,y(t))\) and \(\tilde{g}(t,x(t))=g(t,x(t))\), and hence (x, y) solves (6). \(\square \)

Proposition 17

Assume (A1), \((A2)'\) and \((A3)'\). If (x, y) is a solution of (\(\widetilde{P}\)), with \(\theta =0\), satisfying, for a certain \(t_0\in ]0,1[ \),

$$\begin{aligned} \alpha (0)\le x(0) \le \beta (0), \quad \alpha (t_0)\le x(t_0) \le \beta (t_0), \quad \alpha (1)\le x(1) \le \beta (1) ; \end{aligned}$$

then (x, y) is a solution of problem (6), with \(\theta =0\), and satisfies \(\alpha (t)\le x(t)\le \beta (t)\), for all \(t\in [0,1]\).

Proof

We apply Lemma 13 twice, with \([t_1,t_2]=[0,t_0]\) and with \([t_1,t_2]=[t_0,1]\). Then, we apply Lemma 14 with \([0,t_2]=[0,t_0]\) and Lemma 15 with \([t_1,1]=[t_0,1]\), and conclude as in the proof of Proposition 16. \(\square \)

Summing up, to prove Theorem 2, we need to find a solution of (\(\widetilde{P}\)) satisfying the assumptions of Proposition 16. Similarly, to prove Theorem 4, we need to find a solution of (\(\widetilde{P}\)), with \(\theta =0\), satisfying the assumptions of Proposition 17.

3 Proof of the Theorems

3.1 Proof of Theorem 2

To prove our result, we shall apply a shooting argument, with the aim of finding \(\sigma \in \mathbb {R}\) such that the solution (x, y) of the Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} x^{\prime }=\tilde{f}(t,y), \quad \big (a(t) y\big )^{\prime }= \tilde{g}(t,x), \\ x(0)=\sigma ,\quad y(0)=0 \end{array}\right. } \end{aligned}$$
(17)

also satisfies \(x(1) \sin \theta +y(1) \cos \theta =0\).

We start by defining the flow associated with system (\(\widetilde{S}\)). Let \(\mathcal {X}\) be the set of initial data

$$\begin{aligned} \mathcal {X} =\{ (t_0,\sigma ,\tau )\in [0,1]\times \mathbb {R}^2 : \tau =0 \text { if } t_0=0 \}, \end{aligned}$$

and consider the solution

$$\begin{aligned} (x(\cdot ),y(\cdot ))=\Phi ( \cdot ; t_0,\sigma ,\tau ) =\big ( \Phi _1( \cdot ; t_0,\sigma ,\tau ), \Phi _2( \cdot ; t_0,\sigma ,\tau ) \big ) \end{aligned}$$

of (\(\widetilde{S}\)) satisfying \(x(t_0)=\sigma \) and \(y(t_0)=\tau \). The proof concerning the existence of such a solution, which is not completely standard due to the presence of the singularity at 0, is given in the Appendix. This solution will be proved to be defined on ]0, 1], thanks to the linear growth of the functions \(\tilde{f}\) and \(\tilde{g}\), but not necessarily at \(t=0\). However, if \(t_0=0\), the solution is defined on the whole interval [0, 1]. We denote by \(\mathcal D\subseteq \mathbb {R}^4\) the domain of the flow \(\Phi =\Phi (t;t_0,\sigma ,\tau )\). We have

$$\begin{aligned} ]0,1]\times \mathcal {X} \subseteq \mathcal D \subseteq [0,1]\times \mathcal {X}. \end{aligned}$$

The continuity of the flow \(\Phi \) follows from the continuous dependence of the solutions of (\(\widetilde{S}\)) on the initial data. Again, the proof of this fact is given in the Appendix.

Let us fix \(U_0>0\) such that

$$\begin{aligned} -U_0< \alpha (0) \le \beta (0) < U_0, \end{aligned}$$
(18)

and define the continuous curve

$$\begin{aligned} \begin{aligned}&\mathscr {C}:[-U_0,U_0] \rightarrow \mathbb {R}^2 \\&\mathscr {C}(\sigma )= (x_\mathscr {C}(\sigma ),y_\mathscr {C}(\sigma )):=\Phi (1; 0, \sigma , 0). \end{aligned} \end{aligned}$$
(19)

The following proposition localises the curve \(\mathscr {C}\).

Proposition 18

Let \(\mathscr {C}\) be the curve defined by (19). Then, the following properties hold:

  1. (i)

    \(\big (1,\mathscr {C}(\sigma )\big )\in A_{SW}\) for every \(\sigma \in [-U_0,\alpha (0)[;\)

  2. (ii)

    \(\big (1,\mathscr {C}(\sigma )\big )\in A_{NE}\) for every \(\sigma \in ]\beta (0),U_0];\)

  3. (iii)

    \(\big (1,\mathscr {C}(\sigma )\big )\not \in {A_{NW}} \cup {A_{SE}} \) for all \(\sigma \in [-U_0,U_0] \).

Proof

Let us prove (i). Let (xy) be the solution of (\(\widetilde{S}\)) satisfying \(x(0)=\sigma <\alpha (0)\) and \(y(0)=0\). Recall that, by the definition of lower solution, \(y_\alpha (0)\ge 0\). Assume first that \(y_\alpha (0)>0\). Then \((0,x(0), y(0))\in A_{SW}\). By Lemma 12, we have that \(\big (t,x(t),y(t)\big )\in A_{SW}\) for all \(t\in [0,1]\) and, in particular, \(\big (1,\mathscr {C}(\sigma )\big )= \big (1,x(1),y(1)\big )\in A_{SW}\).

Assume next that \(y_\alpha (0)=0\). Then, by Lemma 11, there exists \(\delta >0\) such that \(y(t)<y_\alpha (t)\) for all \(t\in ]0,\delta [ \). By continuity, we can find \(t_1\in ]0,\delta [ \) such that \(\big (t_1,x(t_1),y(t_1)\big )\in A_{SW}\). Therefore, by Lemma 12, we have \(\big (t,x(t),y(t)\big )\in A_{SW}\) for all \(t\in [t_1,1]\) and, in particular, \(\big (1,\mathscr {C}(\sigma )\big ) \in A_{SW}\).

The proof of (ii) is similar, hence we omit it for brevity.

Let us prove (iii). Suppose, by contradiction, that there is \(\sigma \in [-U_0,U_0]\) such that \(\big (1,\mathscr {C}(\sigma )\big )\in A_{NW}\cup A_{SE}\). Let (xy) be the solution of (\(\widetilde{S}\)) satisfying \(x(0)=\sigma \) and \(y(0)=0\).

Suppose first that \(\big (1,\mathscr {C}(\sigma )\big )\in A_{NW}\). Since \(\big (1, x(1), y(1)\big )\in A_{NW}\), by Lemma 12 we have that \(\big (t,x(t),y(t)\big )\in A_{NW}\) for all \(t\in [0,1]\) and, in particular, \(\big (0, \sigma , 0\big )\in A_{NW}\). This is a contradiction, since any point \((0, \sigma , y)\in A_{NW}\) satisfies \(y>y_\alpha (0)\ge 0\).

Suppose now that \(\big (1,\mathscr {C}(\sigma )\big )\in A_{SE}\). Then, by Lemma 12 again, we have that \(\big (t,x(t),y(t)\big )\in A_{SE}\) for all \(t\in [0,1]\) and, in particular, \(\big (0, \sigma , 0\big )\in A_{SE}\). This is a contradiction, since any \((0, \sigma , y)\in A_{SE}\) satisfies \(y<y_\beta (0)\le 0\). \(\square \)

We shall consider the restriction of \(\mathscr {C}\) to an interval \([\sigma _\ell ,\sigma _r]\) such that \(\alpha (1)\le x_\mathscr {C}(\sigma ) \le \beta (1)\) for all \(\sigma \in [\sigma _\ell ,\sigma _r]\). To this aim, we set

$$\begin{aligned}&\sigma _\ell =\min \{\sigma \in [-U_0,U_0] : x_\mathscr {C}(s)\ge \alpha (1) \text { for all } s\in [ \sigma ,U_0] \} ;\\&\sigma _r=\max \{\sigma \in [-U_0,U_0] : x_\mathscr {C}(s)\le \beta (1) \text { for all } s\in [-U_0, \sigma ]\}. \end{aligned}$$

Observe that, from Proposition 18 (i)–(ii), we have

$$\begin{aligned} \alpha (0)\le \sigma _\ell \le \sigma _r\le \beta (0), \end{aligned}$$
(20)

and

$$\begin{aligned} x_\mathscr {C}(\sigma _\ell )=\alpha (1),\qquad x_\mathscr {C}(\sigma _r)=\beta (1). \end{aligned}$$

Then, by Proposition 18 (iii), we have

$$\begin{aligned} y_\mathscr {C}(\sigma _\ell )\le y_\alpha (1),\qquad y_\mathscr {C}(\sigma _r)\ge y_\beta (1). \end{aligned}$$

Since \(\cos \theta \ge 0\), we get both

$$\begin{aligned} x_\mathscr {C}(\sigma _\ell ) \sin \theta + y_\mathscr {C}(\sigma _\ell ) \cos \theta \le \alpha (1) \sin \theta + y_\alpha (1) \cos \theta \le 0, \end{aligned}$$

and

$$\begin{aligned} x_\mathscr {C}(\sigma _r) \sin \theta + y_\mathscr {C}(\sigma _r) \cos \theta \ge \beta (1) \sin \theta + y_\beta (1) \cos \theta \ge 0. \end{aligned}$$

Since the curve \(\mathscr {C}\) is continuous, we can find \(\sigma \in [\sigma _\ell ,\sigma _r]\) such that

$$\begin{aligned} x_\mathscr {C}(\sigma ) \sin \theta + y_\mathscr {C}(\sigma ) \cos \theta =0. \end{aligned}$$

Therefore, the function \((x,y)=\Phi ( \cdot ;0,\sigma ,0)\) is a solution of problem (\(\widetilde{P}\)). Notice that we have both \(\alpha (0)\le x(0)=\sigma \le \beta (0)\), from (20), and \(\alpha (1)\le x(1)=x_\mathscr {C}(\sigma )\le \beta (1)\), from the definition of the interval \([\sigma _\ell ,\sigma _r]\). By Proposition 16, (x, y) is a solution of problem (6) and satisfies \(\alpha (t)\le x(t)\le \beta (t)\), for all \(t\in [0,1]\). The proof of Theorem 2 is thus completed.

3.2 Proof of Theorem 4

To prove our second result, we shall apply a double shooting argument, with the aim of finding \(\sigma \in \mathbb {R}\) such that the solution (x, y) of the Cauchy problem (17) also satisfies \(y(1)=0\).

To define the flow associated with system (\(\widetilde{S}\)) under assumptions (A1), \((A2)'\) and \((A3)'\), the set of possible initial data is now

$$\begin{aligned} \mathcal {X} =\{ (t_0,\sigma ,\tau )\in [0,1]\times \mathbb {R}^2 : \tau =0 \text { if } t_0=0 \text { or } t_0=1 \}. \end{aligned}$$

The solutions are defined on ]0, 1[ but not necessarily at \(t=0\) or \(t=1\). See the Appendix for details. We denote by \(\mathcal D\subseteq \mathbb {R}^4\) the domain of the flow \(\Phi =\Phi (t;t_0,\sigma ,\tau )\). We have

$$\begin{aligned} ]0,1[ \times \mathcal {X} \subseteq \mathcal D \subseteq [0,1]\times \mathcal {X}. \end{aligned}$$

Let us fix \(U_0>0\) such that

$$\begin{aligned} -U_0< \min \{\alpha (0), \alpha (1)\} \le \max \{\beta (0), \beta (1)\} < U_0. \end{aligned}$$

For any \(\sigma _0,\sigma _1\in [-U_0,U_0]\), we consider the initial value problem

$$\begin{aligned} \left\{ \begin{array}{ll} x^{\prime }=\tilde{f}(t,y), &{}\quad \big (a(t) y\big )^{\prime }=\tilde{g}(t,x), \\ x(0)=\sigma _0,&{}\quad y(0)=0, \end{array} \right. \end{aligned}$$
(21)

and the final value problem

$$\begin{aligned} \left\{ \begin{array}{ll} x^{\prime }=\tilde{f}(t,y),&{} \quad \big (a(t) y\big )^{\prime }= \tilde{g}(t,x), \\ x(1)=\sigma _1,&{}\quad y(1)=0. \end{array} \right. \end{aligned}$$
(22)

We use a shooting argument to find a solution \((x_{\sigma _0}, y_{\sigma _0})\) of (21) (defined on [0, 1[ ), and a solution \((x^{\sigma _1}, y^{\sigma _1})\) of (22) (defined on ]0, 1]), satisfying

$$\begin{aligned} \textstyle (x_{\sigma _0}(\frac{1}{2}), y_{\sigma _0}(\frac{1}{2}))= (x^{\sigma _1}(\frac{1}{2}), y^{\sigma _1}(\frac{1}{2})). \end{aligned}$$

Then, the function (x, y) defined by

$$\begin{aligned} (x(t),y(t))= \left\{ \begin{array}{ll} (x_{\sigma _0}(t), y_{\sigma _0}(t)),&{}\quad \text {if } 0\le t\le \frac{1}{2}, \\ (x^{\sigma _1}(t), y^{\sigma _1}(t)),&{}\quad \text {if } \frac{1}{2}<t\le 1, \end{array} \right. \end{aligned}$$
(23)

will be the solution of (\(\widetilde{P}\)) we are looking for.

Let us define two continuous curves \(\mathscr {C}_0,\mathscr {C}_1: [-U_0,U_0] \rightarrow \mathbb {R}^2\) by

$$\begin{aligned} \begin{aligned}&\mathscr {C}_0(\sigma )= (x_\mathscr {C}^0(\sigma ),y_\mathscr {C}^0(\sigma )):=\Phi (\tfrac{1}{2}; 0, \sigma , 0), \\&\mathscr {C}_1(\sigma )= (x_\mathscr {C}^1(\sigma ),y_\mathscr {C}^1(\sigma )):= \Phi (\tfrac{1}{2}; 1, \sigma , 0). \end{aligned} \end{aligned}$$
(24)

The following statement describes some localisation properties of the curves \(\mathscr {C}_0\) and \(\mathscr {C}_1\).

Proposition 19

Let \(\mathscr {C}_0\) and \(\mathscr {C}_1\) be the curves defined by (24). Then, the following properties hold:

  1. (i)

\(\big (\frac{1}{2},\mathscr {C}_0(\sigma )\big )\in A_{SW}\) for every \(\sigma \in [-U_0,\alpha (0)[ \), and \(\big (\frac{1}{2},\mathscr {C}_1(\sigma )\big )\in A_{NW}\) for every \(\sigma \in [-U_0,\alpha (1)[;\)

  2. (ii)

\(\big (\frac{1}{2},\mathscr {C}_0(\sigma )\big )\in A_{NE}\) for every \(\sigma \in ]\beta (0),U_0]\), and \(\big (\frac{1}{2},\mathscr {C}_1(\sigma )\big )\in A_{SE}\) for every \(\sigma \in ]\beta (1),U_0];\)

  3. (iii)

    \(\big (\frac{1}{2},\mathscr {C}_0(\sigma )\big )\not \in {A_{NW}} \cup {A_{SE}} \) for all \(\sigma \in [-U_0,U_0];\)

  4. (iv)

    \(\big (\frac{1}{2},\mathscr {C}_1(\sigma )\big )\not \in {A_{SW}} \cup {A_{NE}} \) for all \(\sigma \in [-U_0,U_0] \).

The proof can be adapted from that of Proposition 18. We now prove that the two curves have a common value.

Proposition 20

Let \(\mathscr {C}_0\) and \(\mathscr {C}_1\) be the curves defined by (24). Then, there are \(\sigma _0, \sigma _1\in ]-U_0,U_0[\) such that \(\mathscr {C}_0(\sigma _0)=\mathscr {C}_1(\sigma _1).\)

Proof

We shall consider the restriction of \(\mathscr {C}_0\) and \(\mathscr {C}_1\) on some intervals \([\sigma _\ell ^0,\sigma _r^0]\) and \([\sigma _\ell ^1,\sigma _r^1]\), respectively, so that \(\alpha (\tfrac{1}{2})\le x_\mathscr {C}^0(\sigma ) \le \beta (\tfrac{1}{2})\) for all \(\sigma \in [\sigma _\ell ^0,\sigma _r^0]\) and \(\alpha (\tfrac{1}{2})\le x_\mathscr {C}^1(\sigma )\le \beta (\tfrac{1}{2})\) for all \(\sigma \in [\sigma _\ell ^1,\sigma _r^1]\). To this aim, we set

$$\begin{aligned} \sigma _\ell ^0=\min&\{\sigma \in [-U_0,U_0] : x_\mathscr {C}^0(s)\ge \alpha (\tfrac{1}{2}) \text { for all } s\in [ \sigma ,U_0] \} ; \\ \sigma _r^0=\max&\{\sigma \in [-U_0,U_0] : x_\mathscr {C}^0(s)\le \beta (\tfrac{1}{2}) \text { for all } s\in [-U_0, \sigma ]\} ; \\ \sigma _\ell ^1=\min&\{\sigma \in [-U_0,U_0] : x_\mathscr {C}^1(s)\ge \alpha (\tfrac{1}{2}) \text { for all } s\in [ \sigma ,U_0] \} ; \\ \sigma _r^1=\max&\{\sigma \in [-U_0,U_0] : x_\mathscr {C}^1(s)\le \beta (\tfrac{1}{2}) \text { for all } s\in [-U_0, \sigma ]\}. \end{aligned}$$

Observe that, from Proposition 19 (i)–(ii), we have

$$\begin{aligned} \alpha (0)\le \sigma _\ell ^0\le \sigma _r^0\le \beta (0) \quad \text {and}\quad \alpha (1)\le \sigma _\ell ^1\le \sigma _r^1\le \beta (1). \end{aligned}$$
(25)

Moreover,

$$\begin{aligned} x_\mathscr {C}^0(\sigma _\ell ^0)=\alpha (\tfrac{1}{2})=x_\mathscr {C}^1(\sigma _\ell ^1) \quad \text {and}\quad x_\mathscr {C}^0(\sigma _r^0)=\beta (\tfrac{1}{2})=x_\mathscr {C}^1(\sigma _r^1). \end{aligned}$$

By Proposition 19 (iii)–(iv), we have

$$\begin{aligned} y_\mathscr {C}^0(\sigma _\ell ^0)\le y_\alpha (\tfrac{1}{2})\le y_\mathscr {C}^1(\sigma _\ell ^1) \quad \text {and}\quad y_\mathscr {C}^0(\sigma _r^0)\ge y_\beta (\tfrac{1}{2})\ge y_\mathscr {C}^1(\sigma _r^1). \end{aligned}$$

Since the curves are continuous, and the order of their y-components is reversed at the two endpoints of these intervals, they must cross each other at some point \(\big (x_\mathscr {C}^0(\sigma _0),y_\mathscr {C}^0(\sigma _0)\big )=\big (x_\mathscr {C}^1(\sigma _1),y_\mathscr {C}^1(\sigma _1)\big )\), with \(\sigma _0\in [\sigma _\ell ^0,\sigma _r^0]\) and \(\sigma _1\in [\sigma _\ell ^1,\sigma _r^1]\). \(\square \)

The parameters \(\sigma _0,\sigma _1\) obtained in Proposition 20 permit us to define the solution (x, y) of problem (\(\widetilde{P}\)) as in (23). In particular, we have

$$\begin{aligned} \alpha (0)\le x(0)=\sigma _0\le \beta (0),\quad \alpha (1)\le x(1)=\sigma _1\le \beta (1), \end{aligned}$$

from (25), and

$$\begin{aligned} \alpha (\tfrac{1}{2})\le x(\tfrac{1}{2})\le \beta (\tfrac{1}{2}), \end{aligned}$$

from the definition of the intervals \([\sigma _\ell ^0,\sigma _r^0]\) and \([\sigma _\ell ^1,\sigma _r^1]\). By Proposition 17, (x, y) is a solution of problem (6), with \(\theta =0\), and satisfies \(\alpha (t)\le x(t)\le \beta (t)\), for all \(t\in [0,1]\). The proof of Theorem 4 is thus completed.

4 Examples and Final Remarks

In this section, we provide some possible applications of our theorems.

In (4), we have considered for simplicity a differential equation driven by a weighted p-Laplacian. In a similar way, we can consider a double-weighted \(\phi \)-Laplace equation, in the unit ball, of the type

$$\begin{aligned} \textrm{div}\Big ( \eta (|x|) \phi \big (m(|x|) \nabla v(x) \big )\Big )=h(|x|,v(x)), \end{aligned}$$
(26)

where \(\eta ,m:[0,1]\rightarrow \mathbb {R}^+\) are positive continuous functions, \(\phi (w)=\psi (|w|)\frac{w}{|w|}\), with \(\psi : I \subseteq \mathbb {R}\rightarrow \mathbb {R}\) an odd increasing diffeomorphism, and \(h:[0,1]\times \mathbb {R}\rightarrow \mathbb {R}\) is continuous and locally Lipschitz continuous with respect to the second variable. In the case of Eq. (4), we have \(m\equiv 1\) and \(\psi (y)=y|y|^{p-2}\). Another example is the relativistic curvature operator provided by

$$\begin{aligned} \psi (y)=\frac{y}{\sqrt{1-y^2}}, \end{aligned}$$

cf. [1], where \(I= ]-1,1[ \). The study of radial solutions of Eq. (26) leads to the equivalent equation

$$\begin{aligned} \big (r^{N-1} \eta (r) \psi (m(r) u^{\prime }) \big )^{\prime }=r^{N-1}h(r,u), \qquad r\in [0,1], \end{aligned}$$

which can be written as a planar system of the form

$$\begin{aligned} x^{\prime }=\omega (t) \psi ^{-1}(y),\qquad \big (t^{N-1} \eta (t) y\big )^{\prime }=t^{N-1} h(t,x), \end{aligned}$$

where \(\omega (t)=1/m(t)\), which is a special case of (3).
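
Indeed, it suffices to set

$$\begin{aligned} y=\psi \big (m(t)\, x'\big ), \qquad \text {so that}\qquad x'=\frac{1}{m(t)}\, \psi ^{-1}(y)=\omega (t)\, \psi ^{-1}(y), \end{aligned}$$

while the radial equation rewrites as \(\big (t^{N-1} \eta (t) y\big )'=t^{N-1} h(t,x)\).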

Theorem 2 may be applied to study the boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} x^{\prime }=\omega (t) \psi ^{-1}(y),\quad (t^{N-1} \eta (t) y)^{\prime }=t^{N-1} h(t,x),\\ y(0)=0=x(1) \sin \theta +y(1) \cos \theta . \end{array}\right. } \end{aligned}$$
(27)

Notice that all the regularity assumptions required in Theorem 2 immediately hold. So, if we are able to provide a well-ordered pair of lower and upper solutions for problem (27), then Theorem 2 can be successfully applied.

The following statement describes a possible example of application in the case of constant lower and upper solutions.

Corollary 21

Let \(\theta \in [0,\frac{\pi }{2}]\), and assume the existence of some constants \(\alpha \le 0 \le \beta \) such that \(h(t,\alpha )\le 0 \le h(t,\beta )\) for every \(t\in [0,1]\). Then, problem (27) has a solution (x, y) such that \(\alpha \le x(t)\le \beta \), for every \(t\in [0,1]\).

Proof

It is easy to verify that the constant functions \(\alpha \) and \(\beta \) fulfill the conditions in Definitions 6 and 7 with the choice \(y_\alpha =y_\beta \equiv 0\). Then, Theorem 2 applies, thus completing the proof. \(\square \)
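
Explicitly, with the choice \(y_\alpha \equiv 0\), condition (10) holds because \(\omega (t) \psi ^{-1}(y)\) has the sign of y, condition (11) reads

$$\begin{aligned} \big (t^{N-1} \eta (t)\, y_\alpha (t)\big )'=0\ge t^{N-1} h(t,\alpha ), \end{aligned}$$

which follows from \(h(t,\alpha )\le 0\), and condition (iii) reduces to \(\alpha \sin \theta \le 0\), which holds since \(\alpha \le 0\) and \(\theta \in [0,\frac{\pi }{2}]\). The verification for \(\beta \) is symmetric.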

In particular, the previous corollary yields an existence result for Eq. (26) with Dirichlet or Neumann boundary conditions on the unit ball \({\mathscr {B}}\).

Corollary 22

Assume the existence of some constants \(\alpha \le 0 \le \beta \) such that \(h(r,\alpha )\le 0 \le h(r,\beta )\) for every \(r\in [0,1]\). Then, problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{div}\Big ( \eta (|x|) \phi \big (m(|x|) \nabla v(x) \big )\Big )=h(|x|,v(x)) &{}\quad \text {in } {\mathscr {B}}, \\ v=0 &{}\quad \text {on } \partial {\mathscr {B}}\end{array}\right. } \end{aligned}$$

has a solution v such that \(\alpha \le v(x)\le \beta \) for every \(x\in \overline{{\mathscr {B}}}\).

Corollary 23

Assume the existence of some constants \(\alpha \le 0 \le \beta \) such that \(h(r,\alpha )\le 0 \le h(r,\beta )\) for every \(r\in [0,1]\). Then, problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \textrm{div}\Big ( \eta (|x|) \phi \big (m(|x|) \nabla v(x) \big )\Big )=h(|x|,v(x)) &{}\quad \text {in } {\mathscr {B}}, \\ \partial _\nu v=0 &{}\quad \text {on } \partial {\mathscr {B}}\end{array}\right. } \end{aligned}$$

has a solution v such that \(\alpha \le v(x)\le \beta \) for every \(x\in \overline{{\mathscr {B}}}\).

Example 24

Let us consider two locally Lipschitz continuous functions \(f,g:\mathbb {R}\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} yf(y)>0, \quad \text {when } y\ne 0, \end{aligned}$$

and

$$\begin{aligned} \liminf _{x\rightarrow - \infty } g(x) < -\bar{g}, \qquad \limsup _{x\rightarrow + \infty } g(x) > \bar{g}, \end{aligned}$$

for some \(\bar{g}>0\). Then, given a(t) satisfying (A1), (A2) and (A3), for any continuous function \(e:[0,1]\rightarrow \mathbb {R}\) such that \(\Vert e\Vert _\infty \le \bar{g}\), the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} x'=f(y), \quad (a(t) y(t))'= a(t)[g(x)+e(t)],\\ y(0)=0=x(1) \sin \theta +y(1) \cos \theta , \end{array}\right. } \end{aligned}$$

with \(\theta \in [0,\tfrac{\pi }{2}]\), has a solution, by applying Corollary 21: indeed, the conditions on g provide constants \(\alpha \le 0\le \beta \) such that \(g(\alpha )+e(t)\le 0\le g(\beta )+e(t)\), for every \(t\in [0,1]\). As particular cases, choosing

$$\begin{aligned} p\le 2\le q, \quad \tfrac{1}{p}+\tfrac{1}{q}=1, \quad f(y)=|y|^{q-2} y, \quad a(t)=t^{N-1}, \end{aligned}$$

with \(N\ge 1\), the problems in the unit ball of \(\mathbb {R}^N\)

$$\begin{aligned} {\left\{ \begin{array}{ll} \Delta _p(v)=g(v)+e(|x|) &{}\quad \text {in }{\mathscr {B}},\\ v=0 &{} \quad \text {on }\partial {\mathscr {B}},\\ \end{array}\right. } \qquad {\left\{ \begin{array}{ll} \Delta _p(v)=g(v)+e(|x|) &{}\quad \text {in }{\mathscr {B}},\\ \partial _\nu v=0 &{}\quad \text {on }\partial {\mathscr {B}},\\ \end{array}\right. } \end{aligned}$$

have at least one radial solution. We thus recover some well-known classical results.

Analogous considerations allow us to generalize problem (9), providing further applications of Theorem 4.

Example 25

Let g(x) and e(t) be as in the previous example. We can apply Theorem 4 to find a solution of the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (\sin ^{N-2}(\pi t) x')'= \sin ^{N-2}(\pi t)[g(x)+e(t)],\\ x'(0)=0=x'(1). \end{array}\right. } \end{aligned}$$

Remark 26

In this paper, we have treated the case when the pair of lower and upper solutions is well ordered. The non-well-ordered case is left as an open problem.