1 Introduction

Many mathematical models are expressed as operator equations of fixed point type:

$$\begin{aligned} N\left( u\right) =u \end{aligned}$$

associated with some operator N, or as equations of critical point type:

$$\begin{aligned} E^{\prime }\left( u\right) =0 \end{aligned}$$

for Fréchet differentiable functionals E. Equations which are equivalent, in the sense of having the same solutions, to some critical point equation are said to have a variational structure. Solving fixed point equations is the subject of fixed point theory, while critical point equations are the object of critical point theory. Both theories offer a wealth of principles and methods for the existence, uniqueness, stability, and approximation of solutions. In recent years, the idea of combining different existence principles has proved useful in the treatment of many problems. We use the term hybrid to refer to such combined principles and methods. Applied to fixed point and critical point theory, the term hybrid introduces a certain splitting of the operator N and of the functional E, respectively. A typical result of this kind coming from fixed point theory is Krasnosel’skii’s fixed point theorem for a sum of two operators [7], where the operator N is split as a sum \(A+B\) of a contraction A and a compact operator B, and whose proof combines the Banach contraction principle with Schauder’s fixed point theorem.

Theorem 1.1

(Krasnosel’skii). Let D be a closed bounded convex subset of a Banach space \(X,\ A:D\rightarrow X\) a contraction and \(B:D\rightarrow X\) a continuous mapping with \(B\left( D\right) \) relatively compact. If

$$\begin{aligned} A\left( x\right) +B\left( y\right) \in D\quad \text {for all }x,y\in D, \end{aligned}$$

(1.1)

then the mapping \(A+B\) has at least one fixed point.

There are many extensions of Krasnosel’skii’s theorem in several directions: for single- and multi-valued mappings, for self and non-self mappings, and for generalized contractions and generalized compact-type operators; see, for example, [2, 3, 4, 9].
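In finite dimensions, every continuous map with bounded range is compact, so Theorem 1.1 admits a simple numerical illustration. The following sketch uses one-dimensional maps A and B of our own choosing (they are not from the paper): A is a contraction with constant 1/2, B is continuous with range contained in \([0,1/4]\), and \(A(x)+B(y)\in D=[0,1]\) for all \(x,y\in D\), so a fixed point of \(A+B\) exists; here plain iteration happens to find it, although the theorem itself only asserts existence.

```python
from math import cos

D = (0.0, 1.0)                       # D = [0, 1], closed bounded convex in R

def A(u):                            # contraction with constant 1/2
    return u / 2.0

def B(u):                            # continuous, range in [0, 1/4]: compact
    return (1.0 + cos(u)) / 8.0

# A(x) + B(y) lies in [0, 3/4], a subset of D, for all x, y in D.
u = 0.0
for _ in range(100):                 # simple iteration on u -> A(u) + B(u)
    u = A(u) + B(u)

assert D[0] <= u <= D[1]
assert abs(A(u) + B(u) - u) < 1e-10  # u is (numerically) a fixed point of A+B
print(u)
```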

The hybrid character of Krasnosel’skii’s theorem relies upon the splitting of the operator N as a sum of two mappings that take over differently some of the properties of N. Another possibility for a hybrid approach arises in the case of systems, when the domain of N splits as a Cartesian product, say \( X\times Y,\) and, correspondingly, the operator N splits as a couple \(\left( N_{1},N_{2}\right) ,\) where \(N_{1},N_{2}\) take their values in X and Y,  respectively. A typical result in this direction is the following theorem due to Avramescu [1]. Here, we give a version of it with slightly milder assumptions, whose proof is reproduced for the reader’s convenience and for deriving a necessary ingredient of the proofs of our main results.

Theorem 1.2

(Avramescu). Let \(\left( D_{1},\ d\right) \) be a complete metric space, \(D_{2}\) a closed convex subset of a normed space \(\left( Y,\ \left\| \cdot \right\| \right) ,\) and let \(N_{i}:D_{1}\times D_{2}\rightarrow D_{i},\) \( i=1,2\) be continuous mappings. Assume that the following conditions are satisfied:

  1. (a)

    There is a constant \(L\in [0,1)\), such that:

    $$\begin{aligned} d\left( N_{1}\left( x,y\right) ,\ N_{1}\left( {\overline{x}},y\right) \right) \le Ld\left( x,\ {\overline{x}}\right) \end{aligned}$$

    for all \(x,{\overline{x}}\in D_{1}\) and \(y\in D_{2};\)

  2. (b)

    \(N_{2}\left( D_{1}\times D_{2}\right) \) is a relatively compact subset of Y.

Then, there exists \(\left( x,y\right) \in D_{1}\times D_{2}\) with:

$$\begin{aligned} N_{1}\left( x,y\right) =x,\quad N_{2}\left( x,y\right) =y. \end{aligned}$$

Proof

For each fixed element \(y\in D_{2},\) using condition (a), we may apply Banach’s contraction principle to the mapping \(N_{1}\left( \cdot ,y\right) :D_{1}\rightarrow D_{1}.\) Hence, there exists a unique element \(S\left( y\right) \in D_{1}\) with:

$$\begin{aligned} N_{1}\left( S\left( y\right) ,y\right) =S\left( y\right) . \end{aligned}$$
(1.2)

The mapping \(S:D_{2}\rightarrow D_{1}\) is continuous. To prove this, use (1.2) for two arbitrary elements \(y,{\overline{y}}\in D_{2}\) and get:

$$\begin{aligned} d(S\left( y\right) ,S\left( {\overline{y}}\right) )&=d(N_{1}\left( S\left( y\right) ,y\right) ,N_{1}\left( S\left( {\overline{y}}\right) ,{\overline{y}}\right) ) \\&\le d(N_{1}\left( S\left( y\right) ,y\right) ,N_{1}\left( S\left( {\overline{y}}\right) ,y\right) )+d(N_{1}\left( S\left( {\overline{y}}\right) ,y\right) ,N_{1}\left( S\left( {\overline{y}}\right) ,{\overline{y}}\right) ) \\&\le Ld(S\left( y\right) ,S\left( {\overline{y}}\right) )+d(N_{1}\left( S\left( {\overline{y}}\right) ,y\right) ,N_{1}\left( S\left( {\overline{y}}\right) ,{\overline{y}}\right) ). \end{aligned}$$

Hence:

$$\begin{aligned} d(S\left( y\right) ,S\left( {\overline{y}}\right) ) \le \frac{1}{1-L} d( N_{1}\left( S\left( {\overline{y}}\right) ,y\right) ,N_{1}\left( S\left( {\overline{y}}\right) ,{\overline{y}}\right) ). \end{aligned}$$
(1.3)

For a fixed \({\overline{y}}\in D_{2},\) since \(N_{1}\left( S\left( {\overline{y}} \right) ,\cdot \right) \) is continuous, from (1.3), we immediately see that S is continuous at \({\overline{y}},\) as claimed.

Next, we look at the composed mapping \(N_{2}\left( S\left( \cdot \right) ,\cdot \right) :D_{2}\rightarrow D_{2}.\) Clearly, it is continuous as a composition of two continuous mappings. In addition, from assumption (b), its image is a relatively compact subset of Y. Thus, Schauder’s fixed point theorem applies and guarantees the existence of a point \(y^{*}\in D_{2}\) with:

$$\begin{aligned} N_{2}\left( S\left( y^{*}\right) ,y^{*}\right) =y^{*}. \end{aligned}$$
(1.4)

Finally, letting \(x^{*}:=S\left( y^{*}\right) ,\) from (1.2) and (1.4), we have that \(N_{1}\left( x^{*},y^{*}\right) =x^{*}\) and \(N_{2}\left( x^{*},y^{*}\right) =y^{*}.\) \(\square \)
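The proof above is constructive in spirit: for each y, the inner fixed point \(S(y)\) is found by Banach iteration, and the outer problem is a fixed point of \(y\mapsto N_{2}(S(y),y)\). A minimal one-dimensional sketch follows, with illustrative maps \(N_1,N_2\) of our own choosing; in general Schauder's theorem only gives existence, but for this example the composed map happens to be a contraction, so the outer iteration also converges.

```python
from math import sin

# Numerical sketch of the proof scheme of Theorem 1.2:
# inner Banach iteration for S(y), outer iteration for y -> N2(S(y), y).
def N1(x, y):                  # contraction in x with constant L = 1/3
    return (x + y) / 3.0

def N2(x, y):                  # continuous with bounded range: relatively compact
    return (1.0 + sin(x * y)) / 4.0

def S(y, tol=1e-12):
    """Unique fixed point of N1(., y), obtained by Banach iteration."""
    x = 0.0
    while abs(N1(x, y) - x) > tol:
        x = N1(x, y)
    return x

y = 0.5
for _ in range(200):           # outer iteration; here it happens to contract
    y = N2(S(y), y)
x = S(y)
print(x, y)                    # N1(x, y) = x and N2(x, y) = y up to tolerance
```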

Remark 1.1

  1. (a)

In fact, in Avramescu’s original result, \(D_{1}\) was a closed subset of a complete metrizable linear space X and \(D_{2}\) a closed convex bounded subset of a Banach space \(\left( Y,\ \left\| \cdot \right\| \right) .\)

  2. (b)

    If, in Theorem 1.2, \(D_{2}={\overline{U}}\) where \(U\subset Y\) is open and \(0\in U,\) then the invariance condition on \(N_{2},\) namely \(N_{2}\left( D_{1}\times D_{2}\right) \subset D_{2}\) can be replaced by the Leray–Schauder boundary condition:

    $$\begin{aligned} N_{2}\left( x,y\right) \ne \lambda y\quad \text {for all }x\in D_{1},\ y\in \partial U,\ \lambda >1. \end{aligned}$$

    Indeed, under this assumption, instead of Schauder’s fixed point theorem, the Leray–Schauder fixed point theorem applies to the mapping \( N_{2}\left( S\left( \cdot \right) ,\cdot \right) :{\overline{U}}\rightarrow Y.\)

  3. (c)

Based on the key observation that, in the proof of Theorem 1.2, it is only essential that a fixed point result applies to the mapping \(N_{2}\left( S\left( \cdot \right) ,\cdot \right) ,\) a meta version of Avramescu’s theorem, also involving a more general contraction property, is given in [13]. The result is stated in terms of fixed point structures in [15].

The connection between Theorems 1.2 and 1.1 is given by the following remark.

Remark 1.2

  1. (a)

Theorem 1.2 implies Theorem 1.1. To prove this assertion, first note that the invariance condition (1.1) implies \( A\left( x\right) +y\in D\) for every \(x\in D\) and \(y\in \overline{\text {conv}}B\left( D\right) .\) Next, apply Theorem 1.2 with \(D_{1}=D,\) \(D_{2}=\overline{\text {conv}}B\left( D\right) ,\) \(N_{1}\left( x,y\right) =A\left( x\right) +y\), and \( N_{2}\left( x,y\right) =B\left( x\right) ;\) the existence of a fixed point of the map \(A+B\) follows.

  2. (b)

Theorem 1.1 implies Theorem 1.2 only in a particular situation, namely when \(D_{1}\) and \(D_{2}\) are closed bounded convex subsets of two Banach spaces X and Y, respectively, and \(N_{1}\) satisfies a Lipschitz condition of the form

$$\begin{aligned} \left\| N_{1}\left( x,y\right) -N_{1}\left( {\overline{x}},{\overline{y}}\right) \right\| _{X}\le L\left\| x-{\overline{x}}\right\| _{X}+L_{1}\left\| y-{\overline{y}}\right\| _{Y}\quad \text {on }D_{1}\times D_{2}, \end{aligned}$$

with \(L\in \ ]0,1[\) and \(L_{1}\ge 0.\) Indeed, in this case, the conclusion of Theorem 1.2 is obtained by applying Theorem 1.1 in the Banach space \(X\times Y\) endowed with the norm:

    $$\begin{aligned} \left\| \left( x,y\right) \right\| :=\left\| x\right\| _{X}+ \frac{L_{1}}{L}\left\| y\right\| _{Y}, \end{aligned}$$

    and to the mappings \(A,B:D_{1}\times D_{2}\rightarrow X\times Y\) given by:

    $$\begin{aligned} A\left( x,y\right) =\left( N_{1}\left( x,y\right) ,\ 0\right) ,\quad B\left( x,y\right) =\left( 0,\ N_{2}\left( x,y\right) \right) . \end{aligned}$$
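The choice of the weight \(L_{1}/L\) in the norm above is exactly what makes A a contraction with constant L. A quick numerical sanity check, with an illustrative map \(N_1\) of our own choosing (linear, with Lipschitz constants L in x and \(L_1\) in y):

```python
# Sanity check of Remark 1.2 (b): with the weighted norm
# ||(x, y)|| = |x| + (L1/L)|y|, the map A(x, y) = (N1(x, y), 0)
# is a contraction with constant L. N1 below is an illustrative choice.
L, L1 = 0.5, 0.3

def N1(x, y):
    return L * x + L1 * y            # Lipschitz: L in x, L1 in y

def norm(x, y):
    return abs(x) + (L1 / L) * abs(y)

pts = [(0.1, -0.7), (0.9, 0.4), (-0.5, 0.2), (0.3, 0.8)]
for (x, y) in pts:
    for (u, v) in pts:
        lhs = norm(N1(x, y) - N1(u, v), 0.0)
        assert lhs <= L * norm(x - u, y - v) + 1e-12
print("contraction with constant L verified on sample points")
```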

Note that Avramescu’s theorem is a particular case of a more heterogeneous fixed point theorem given in [6], which combines the Banach–Perov contraction principle with Schauder’s fixed point theorem and its analogue for the weak topology.

In critical point theory, the idea of splitting the functional as a sum of two functions is used in particular for nonsmooth extensions of the classical results (see, e.g., [8, Ch. 3] and [14]). As regards the idea of splitting the space and, correspondingly, the mapping \(E^{\prime },\) we mention the recent papers [11, 12].

The main idea of this paper is to combine fixed point arguments with a critical point technique in a hybrid existence result for a system of two operator equations of which only one has a variational structure. The motivation for this problem comes from the class of boundary value problems related to systems of second-order equations in which some of the nonlinearities do not depend on the gradient while the others do. Thus, the first equation has a variational structure, while for the rest such a structure does not exist.

The paper is organized as follows: first, in Section 2.1, we give a variational analogue of Avramescu’s theorem in terms of a couple \(\left( N,E\right) \) of an operator N and a functional E. More exactly, we find sufficient conditions for the existence of a solution \(\left( x,y\right) \) to the problem:

$$\begin{aligned} N\left( x,y\right) =x, \ E_{y}^{\prime }\left( x,y\right) =0, \ E\left( x,y\right) =\inf _{{\overline{U}}}E\left( x,\cdot \right) ; \end{aligned}$$

see Corollary 2.3. Here, D is a complete metric space, U is an open subset of a Hilbert space, \(N:D\times {\overline{U}}\rightarrow D,\) \(E:D\times {\overline{U}}\rightarrow {\mathbb {R}},\) and by \(E_{y}^{\prime }\left( x,\cdot \right) \) we mean the Fréchet derivative of the functional \(E\left( x,\cdot \right) .\) The corollary is improved in Section 2.2 in the case that U is a ball. Furthermore, in Section 2.3, we look for a solution \(\left( x,y\right) \) with y localized in a bounded convex subset of a wedge (a cone, in particular), imposing on y a barrier from below. The result can be particularly useful for establishing the existence of multiple positive solutions of operator systems having only a partial variational structure. An application is given in Section 3 to suggest the way our theoretical results apply to boundary value problems.

Our theoretical results are based on the weak form of the Ekeland variational principle (see, e.g., [10]) and on the notion of a contraction with respect to a vector-valued metric in Perov’s sense. Recall that a map \( {\widehat{d}}:X\times X\rightarrow {\mathbb {R}}^{n}\) is said to be a vector-valued metric on the set X if, for every \(x,y,z\in X\), one has that:

  1. (1)

    \({\widehat{d}}(x,y)\ge 0_{n}\) and \({\widehat{d}}(x,y)=0_{n}\) if and only if \(x=y\) where \(0_{n}\) is the null vector in \({\mathbb {R}}^{n},\)

  2. (2)

    \({\widehat{d}}(x,y)={\widehat{d}}(y,x),\)

  3. (3)

    \({\widehat{d}}(x,y)\le {\widehat{d}}(x,z)+{\widehat{d}}(z,y),\)

where if \(x=(x_{1},\dots ,x_{n}),\) \(y=(y_{1},\dots ,y_{n}),\) then by \(x\le y \), one means that \(x_{i}\le y_{i}\) for \(i=1,\dots ,n.\)

A map \(f:D\subset X\rightarrow X\) is said to be a Perov contraction on D with respect to the vector-valued metric \({\widehat{d}}\) if there exists a matrix \(A\in M_{n\times n}({\mathbb {R}}^{+})\) with spectral radius \( \rho \left( A\right) \) less than one, such that:

$$\begin{aligned} {\widehat{d}}(f(x),f(y))\le A\,{\widehat{d}}(x,y)\quad \text {for all }x,y\in D. \end{aligned}$$

Details about these notions can be found in [10].
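For \(n=2\), the spectral radius condition can be checked by elementary means; the characterization used later in (2.14) is equivalent to \(\rho (A)<1\) for nonnegative matrices. The following sketch (the two sample matrices are our own illustrative choices) compares the two criteria:

```python
from math import sqrt

def spectral_radius(a, b, c, d):
    """Spectral radius of [[a, b], [c, d]] with nonnegative entries
    (the eigenvalues are then real)."""
    disc = sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

def condition_2_14(a, b, c, d):
    # elementary characterization of rho < 1 for nonnegative 2x2 matrices
    return a < 1 and d < 1 and a + d < 1 + a * d - b * c

M1 = (0.5, 0.3, 0.2, 0.4)       # rho = 0.7 < 1: admissible Perov matrix
M2 = (0.5, 1.0, 1.0, 0.5)       # rho = 1.5 >= 1: not admissible

assert spectral_radius(*M1) < 1 and condition_2_14(*M1)
assert spectral_radius(*M2) >= 1 and not condition_2_14(*M2)
print(spectral_radius(*M1), spectral_radius(*M2))
```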

2 Fixed point–critical point hybrid results

2.1 A variational analogue of Avramescu’s theorem

The conclusion of Avramescu’s theorem says that x is a fixed point of \( N_{1}\left( \cdot ,y\right) \) and y is a fixed point of \(N_{2}\left( x,\cdot \right) .\) We now give its hybrid fixed point–critical point analogue, involving, instead of the couple \(\left( N_{1},N_{2}\right) \) of two mappings, a couple \(\left( N,E\right) \) of a mapping and a functional. It guarantees the existence of a couple \(\left( x,y\right) \) such that x is a fixed point of \(N\left( \cdot ,y\right) ,\) while y is a critical point (a minimum point) of the functional \(E\left( x,\cdot \right) .\)

The theory applies to the treatment of systems of two nonlinear equations in the case that only one of them has a variational structure.

Theorem 2.1

Let \(\left( D,\ d\right) \) be a complete metric space, U an open subset of a Hilbert space \(\left( Y,\ \left\langle \cdot ,\cdot \right\rangle ,\left\| \cdot \right\| \right) \) identified with its dual, \(N:D\times {\overline{U}}\rightarrow D\) a map such that \(N\left( x,\cdot \right) \in C\left( {\overline{U}}\right) \) for every \(x\in D,\) and \(E:D\times {\overline{U}}\rightarrow {\mathbb {R}}\) a functional such that \(E\left( x,\cdot \right) \in C^{1}\left( {\overline{U}}\right) \) for every \(x\in D.\) Assume that the following conditions are satisfied:

  1. (i)

    There is a constant \(L\in [0,1)\), such that:

$$\begin{aligned} d\left( N\left( x,y\right) ,\ N\left( {\overline{x}},y\right) \right) \le Ld\left( x,\ {\overline{x}}\right) \end{aligned}$$

    for all \(x,{\overline{x}}\in D\) and \(y\in {\overline{U}};\)

  2. (ii)

    for every \(x \in D\), \(E(x,\cdot )\) is bounded from below on \( {\overline{U}}\) and there is \(a>0\) with:

$$\begin{aligned} \inf _{\partial U}E\left( x,\cdot \right) -\inf _{{\overline{U}}}E\left( x,\cdot \right) \ge a\quad \text {for every }x\in D. \end{aligned}$$
    (2.1)

Then, for every \(k \in {\mathbb {N}},\) there exists \(\left( x_{k},y_{k}\right) \in D\times U\), such that:

$$\begin{aligned} N\left( x_{k},y_{k-1}\right) =x_{k},\quad \left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| \le \frac{1}{k},\quad E\left( x_{k},y_{k}\right) \le \inf _{{\overline{U}}}E\left( x_{k},\cdot \right) + \frac{1}{k}. \end{aligned}$$
(2.2)

Corollary 2.2

If, in addition to the assumptions of Theorem 2.1, N, E, and \(E^\prime _y\) are continuous on \(D \times {\overline{U}}\) and one has that:

  1. (iii)

    \(\left( I_{Y}-E_{y}^{\prime }\right) \left( D\times {\overline{U}} \right) \) is a relatively compact subset of Y (\(I_{Y}\) being the identity map of Y), 

then there exist \(x\in D\) and \(y,{\overline{y}}\in {\overline{U}}\) with:

$$\begin{aligned} N\left( x,{\overline{y}}\right) =x, \ E_{y}^{\prime }\left( x,y\right) =0,\quad E\left( x,y\right) =\inf _{{\overline{U}}}E\left( x,\cdot \right) . \end{aligned}$$
(2.3)

Under a stronger condition on the couple \((N, I_{Y}-E_{y}^{\prime }) \), we can guarantee in the conclusion of Corollary 2.2 that \(y={\overline{y}}\). More exactly, we have:

Corollary 2.3

Under the hypotheses of Corollary 2.2 with, instead of (iii),

(iii\(^{*}\)) there are two constants \(R_{0},\gamma >0\), such that:

$$\begin{aligned} E\left( x,y\right) -\inf _{{\overline{U}}}E\left( x,\cdot \right) \ge \gamma \end{aligned}$$
(2.4)

for all \(x\in D\) and \(y\in {\overline{U}}\) with \(\left\| y\right\| >R_{0},\) and the mapping \(\left( N,I_{Y}-E_{y}^{\prime }\right) \) is a Perov contraction on \(D\times {\overline{U}}\) with respect to the vector-valued metric \({\widehat{d}}:\left( D\times Y\right) ^{2}\rightarrow {\mathbb {R}}^{2}\), \({\widehat{d}}(v,w)=(d(x_{1},x_{2}),\Vert y_{1}-y_{2}\Vert )\), where \(v=(x_{1},y_{1}),\ w=(x_{2},y_{2})\in D\times Y\), there exists \(\left( x,y\right) \in D\times {\overline{U}}\) with:

$$\begin{aligned} N\left( x,y\right) =x,\quad E_{y}^{\prime }\left( x,y\right) =0,\quad E\left( x,y\right) =\inf _{{\overline{U}}}E\left( x,\cdot \right) . \end{aligned}$$
(2.5)

In addition, \(\left( x,y\right) \) is the unique solution of the system \( N\left( x,y\right) =x,\ E_{y}^{\prime }\left( x,y\right) =0.\)

Proof of Theorem 2.1

As in the proof of Theorem 1.2, we can consider the mapping \(S: {\overline{U}}\rightarrow D\) with:

$$\begin{aligned} N\left( S\left( y\right) ,y\right) =S\left( y\right) \quad \text {for all } y\in {\overline{U}}. \end{aligned}$$
(2.6)

As in that proof, the mapping S is continuous, based on the continuity of \( N(x,\cdot )\) for every \(x\in D\).

We now proceed to the construction of a sequence \(\left( y_{k}\right) \) and its accompanying sequence \(\left( x_{k}\right) \) with \(x_{k}=S\left( y_{k-1}\right) ,\) which, in view of (2.6), gives:

$$\begin{aligned} N\left( x_{k},y_{k-1}\right) =x_{k}. \end{aligned}$$
(2.7)

We shall define this sequence for indices \(k > k_{0},\) where \(k_{0}\ge 1/a.\) The first element \(y_{k_{0}}\) is chosen arbitrarily in U. At step j \( \left( j\ge 1\right) ,\) having already introduced \(y_{i}\) for \( i=k_{0},\dots ,k_{0}+j-1=:k-1,\) we apply the weak form of the Ekeland variational principle to the functional \(E\left( S\left( y_{k-1}\right) ,\cdot \right) \) and obtain an element \(y_{k}\in {\overline{U}}\) satisfying:

$$\begin{aligned}&E\left( x_{k},y_{k}\right) \le \inf _{{\overline{U}}}E\left( x_{k},\cdot \right) +\frac{1}{k}, \end{aligned}$$
(2.8)
$$\begin{aligned}&\quad E\left( x_{k},y_{k}\right) \le E\left( x_{k},y\right) +\frac{1}{k} \left\| y-y_{k}\right\| \quad \text {for every }y\in {\overline{U}}. \end{aligned}$$
(2.9)

Since \(k=k_{0}+j>k_{0}\ge 1/a,\) from (2.8), we have \(y_{k}\notin \partial U.\) Indeed, if \(y_{k}\in \partial U,\) then \(E\left( x_{k},y_{k}\right) \ge \) \(\inf _{\partial U}E\left( x_{k},\cdot \right) \) which together with (2.8) implies:

$$\begin{aligned} \inf _{\partial U}E\left( x_{k},\cdot \right) -\inf _{{\overline{U}}}E\left( x_{k},\cdot \right) \le \frac{1}{k}<a, \end{aligned}$$

contrary to assumption (2.1). Hence, \(y_{k}\in U,\) which makes it possible to choose y in (2.9) of the form:

$$\begin{aligned} y=y_{k}-tE_{y}^{\prime }\left( x_{k},y_{k}\right) , \end{aligned}$$
(2.10)

for \(t>0\) sufficiently small that \(y\in U.\) From (2.9), using the definition of the Fréchet derivative gives:

$$\begin{aligned} -\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y-y_{k}\right\rangle -o\left( \left\| y-y_{k}\right\| \right) \le \frac{1}{k}\left\| y-y_{k}\right\| , \end{aligned}$$

which, after replacing y, becomes:

$$\begin{aligned} t\left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| ^{2}-o\left( t\right) \le \frac{t}{k}\left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| . \end{aligned}$$

Dividing by t and letting t go to zero yields:

$$\begin{aligned} \left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| \le \frac{1 }{k}. \end{aligned}$$
(2.11)

Now, (2.7), (2.8), and (2.11) give the conclusion of the theorem. \(\square \)
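The scheme behind (2.7)–(2.9) can be mimicked numerically. In the sketch below, N and E are illustrative choices of our own (with \(D=[-1,1]\), \(Y={\mathbb {R}}\), \(U=(-2,2)\), so that (i) and (ii) hold), and the Ekeland near-minimizer \(y_{k}\) is replaced by plain gradient descent on the smooth functional \(E(x_{k},\cdot )\); this approximates the construction rather than reproducing it.

```python
# Alternating scheme in the spirit of (2.7)-(2.9): x_k solves x = N(x, y_{k-1})
# by Banach iteration; y_k is an approximate minimizer of E(x_k, .),
# obtained here by gradient descent instead of Ekeland's principle.
def N(x, y):                 # contraction in x with constant L = 1/4
    return (x + y) / 4.0

def dEdy(x, y):              # derivative in y of E(x, y) = (y - x/4)**2
    return 2.0 * (y - x / 4.0)

x, y = 1.0, 1.0
for k in range(1, 60):
    for _ in range(200):     # inner Banach iteration: x = N(x, y)
        x = N(x, y)
    for _ in range(200):     # gradient steps toward a near-minimizer of E(x, .)
        y -= 0.25 * dEdy(x, y)
print(x, y)                  # approaches the solution (0, 0) of the system
```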

Proof of Corollary 2.2

From (iii), the sequence \(\left( y_{k}-E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \) is relatively compact, so it has a convergent subsequence. Since, by (2.11), the sequence \(\left( E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \) converges to zero, the corresponding subsequence of \(\left( y_{k}\right) \) is convergent. Let this subsequence be \(\left( y_{k_{j}}\right) \) and its limit be y. Similarly, the sequence \(\left( y_{k_{j}-1}\right) \) has a subsequence convergent to some \({\overline{y}}\) (not necessarily equal to y). Thus, passing to a subsequence, we may assume that \(y_{k_{j}}\rightarrow y\) and \( y_{k_{j}-1}\rightarrow {\overline{y}}\) as \(j\rightarrow +\infty .\) Furthermore, from \(x_{k_{j}}=S\left( y_{k_{j}-1}\right) \), one has \( x_{k_{j}}\rightarrow S\left( {\overline{y}}\right) =:x.\) Now, taking into account the continuity of N, E, and \(E^\prime _y\), we can pass to the limit in (2.7), (2.8), and (2.11), thus obtaining (2.3). \(\square \)

Proof of Corollary 2.3

We first note that if, for the sequence \(\left( y_{k}\right) \) in (2.2), there existed a convergent subsequence \(\left( y_{k_{j}}\right) \) such that the translated sequence \(\left( y_{k_{j}-1}\right) \) converges to the same limit as \(\left( y_{k_{j}}\right) ,\) then (2.5) would immediately follow from (2.2) after passing to the limit, with \(y=\lim _{j\rightarrow +\infty }y_{k_{j}}=\lim _{j\rightarrow +\infty }y_{k_{j}-1} \) and \(x=S\left( y\right) .\) Thus, the problem is to guarantee the same limit for the two out-of-phase subsequences \(\left( y_{k_{j}}\right) \) and \(\left( y_{k_{j}-1}\right) .\) Obviously, the simplest way to do this is to ensure the convergence of the entire sequence \(\left( y_{k}\right) .\) We can do this using condition (iii\(^{*}\)), as first shown in [11].

Let \(z_{k}:=E_{y}^{\prime }\left( x_{k},y_{k}\right) .\) Since the mapping \( \left( N,I_{Y}-E_{y}^{\prime }\right) \) is a Perov contraction on \(D \times {\overline{U}}\) with respect to the vector-valued metric \({{\widehat{d}}}\), there is a square matrix:

$$\begin{aligned} M=\left( \begin{array}{cc} \alpha &{} \beta \\ \gamma &{} \delta \end{array} \right) \end{aligned}$$

of nonnegative entries with the spectral radius less than one, such that for all k and p,  one has:

$$\begin{aligned}&d\left( N\left( x_{k+p},y_{k+p-1}\right) ,N\left( x_{k},y_{k-1}\right) \right) \le \alpha d\left( x_{k+p},x_{k}\right) +\beta \left\| y_{k+p-1}-y_{k-1}\right\| , \nonumber \\ \end{aligned}$$
(2.12)
$$\begin{aligned}&\quad \left\| y_{k+p}-z_{k+p}-\left( y_{k}-z_{k}\right) \right\| \le \gamma d\left( x_{k+p},x_{k}\right) +\delta \left\| y_{k+p}-y_{k}\right\| . \end{aligned}$$
(2.13)

Notice that the spectral radius of M is less than one if and only if:

$$\begin{aligned} \alpha<1,\quad \delta<1,\quad \text {and}\quad \alpha +\delta <1+\alpha \delta -\beta \gamma . \end{aligned}$$
(2.14)

Obviously, we may assume that \(\alpha =L.\) From (2.13), we deduce that:

$$\begin{aligned} \left\| y_{k+p}-y_{k}\right\| \le \frac{\gamma }{1-\delta }d\left( x_{k+p},x_{k}\right) +\frac{1}{1-\delta }\left\| z_{k+p}-z_{k}\right\| , \end{aligned}$$
(2.15)

while, from (2.12), since \(N\left( x_{k+p},y_{k+p-1}\right) =x_{k+p}\) and \( N\left( x_{k},y_{k-1}\right) =x_{k},\) we have:

$$\begin{aligned} d\left( x_{k+p},x_{k}\right) \le \frac{\beta }{1-\alpha }\left\| y_{k+p-1}-y_{k-1}\right\| . \end{aligned}$$

Then:

$$\begin{aligned} \left\| y_{k+p}-y_{k}\right\| \le \frac{\beta \gamma }{\left( 1-\alpha \right) \left( 1-\delta \right) }\left\| y_{k+p-1}-y_{k-1}\right\| +\frac{1}{1-\delta }\left\| z_{k+p}-z_{k}\right\| . \end{aligned}$$
(2.16)

Here, by (2.14), \(\beta \gamma \left( 1-\alpha \right) ^{-1}\left( 1-\delta \right) ^{-1}<1,\) and the last term tends to zero as \(k\rightarrow +\infty ,\) uniformly with respect to p. In addition, the sequence of numbers \(\left( \left\| y_{k+p}-y_{k}\right\| \right) \) is bounded uniformly with respect to p, since, by (2.4), the sequence \(\left( y_{k}\right) \) is bounded. Indeed, it is easy to see from (2.4) and (2.8) that the sequence \(\left( y_{k}\right) _{k \ge {\overline{k}}},\) with \({\overline{k}} > \frac{1}{\gamma }\) (\(\gamma \) being the constant in (2.4)), is bounded in norm by the constant \(R_0\). Thus, Lemma 3.2 in [11] applies and guarantees that the sequence \(\left( y_{k}\right) \) is Cauchy, as desired.

The uniqueness of the solution of the system is an immediate consequence of the fact that the mapping \(\left( N,I_{Y}-E_{y}^{\prime }\right) \) is a Perov contraction. \(\square \)

Remark 2.1

  1. (a)

    One cannot assert that the point \(\left( x,y\right) \) from the conclusion of Corollary 2.3 is guaranteed by Perov’s fixed point theorem. The reason is the absence of the invariance condition \( \left( N,I_{Y}-E_{y}^{\prime }\right) \left( D\times {\overline{U}}\right) \subset D\times {\overline{U}}.\)

  2. (b)

    The boundary condition (2.1) trivially holds if \(U=Y\), recalling the convention \(\displaystyle \inf _{\emptyset } E(x,\cdot )=+\infty \).

2.2 A Leray–Schauder-type variant in balls

Here, we discuss the possibility of replacing the boundary condition (2.1), expressed in terms of the functional E, by a Leray–Schauder-type boundary condition involving the derivative \(E_{y}^{\prime }.\) We do this in the case that U is a ball. In particular, in the setting of Corollary 2.3, we can prove the following result.

Theorem 2.4

Let U be a ball, \(U=B_{R}\left( 0\right) .\) The conclusion of Corollary 2.3 remains true if the boundary condition (2.1) is replaced by the following conditions:

$$\begin{aligned}&\exists \, C>0 \text { such that } \left\langle E_{y}^{\prime }\left( x,y\right) ,y\right\rangle \ge -C\quad \text {on }D\times \partial U; \end{aligned}$$
(2.17)
$$\begin{aligned}&\quad E_{y}^{\prime }\left( x,y\right) +\mu y\ne 0 \ \text {for all }x\in D,\ y\in \partial U ,\ \mu >0. \end{aligned}$$
(2.18)

Proof

Coming back to the proof of Theorem 2.1, we see that, without condition (2.1), the element \(y_{k}\) given by Ekeland’s principle may belong to the boundary of U. Let \(k > 0\) be such that \(\Vert y_k\Vert =R\). Three cases are possible:

(a) \(E_{y}^{\prime }\left( x_{k},y_{k}\right) =0.\) In this case, the conclusion (2.11) holds trivially.

(b) \(\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle >0.\) If so, the choice of y prescribed by (2.10) is still admissible for all small enough \(t>0\). Indeed:

$$\begin{aligned} \left\| y\right\| ^{2}&=\left\| y_{k}-tE_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| ^{2}=\left\| y_{k}\right\| ^{2}+t^{2}\left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| ^{2}-2t\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle \\&=R^{2}+t^{2}\left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| ^{2}-2t\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle < R^{2} \end{aligned}$$

for \(0<t<\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle /\left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) \right\| ^{2}.\)

The proof then continues as in Theorem 2.1, yielding (2.11).

(c) \(E_{y}^{\prime }\left( x_{k},y_{k}\right) \ne 0\) and \(\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle \le 0.\) In this case, we let y in (2.9) be of the form:

$$\begin{aligned} y=y_{k}-t\left( E_{y}^{\prime }\left( x_{k},y_{k}\right) +\mu _{k}y_{k}+\varepsilon y_{k}\right) , \end{aligned}$$
(2.19)

where \(t > 0\), \(\mu _{k}=-\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle /R^{2}\) and \(\varepsilon >0\) is arbitrarily fixed. Then,  since:

$$\begin{aligned} \,\,\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) +\mu _{k}y_{k}+\varepsilon y_{k},\ y_{k}\right\rangle =\varepsilon R^{2}>0, \end{aligned}$$

as above \(\left\| y\right\| < R\) for all small enough \(t>0.\) Replacing in (2.9), dividing by t,  letting t go to zero, and finally letting \( \varepsilon \rightarrow 0\) yield:

$$\begin{aligned} \left\| E_{y}^{\prime }\left( x_{k},y_{k}\right) +\mu _{k}y_{k}\right\| \le \frac{1}{k}. \end{aligned}$$
(2.20)

Note that, by virtue of (2.17), one has \(0\le \mu _{k}\le \displaystyle \frac{C}{R^2}.\)

Thus, concerning the sequence \(\left( y_{k}\right) \), two situations are possible:

(I) There is a subsequence of \(\left( y_{k}\right) \) for which (2.11) holds. Then, we follow the proof of Corollary 2.3 with this subsequence instead of the whole sequence \(\left( y_{k}\right) .\)

(II) If case (I) does not hold, then, after eliminating a finite number of terms, we may assume that the sequence \(\left( y_{k}\right) \) satisfies inequality (2.20), with \(y_{k}\in \partial U\) and \(\mu _{k}>0.\) Moreover, passing if necessary to a subsequence, we may assume that \(\mu _{k}\rightarrow \mu \) for some \(\mu \ge 0.\) Then, (2.20) implies \( E_{y}^{\prime }\left( x_{k},y_{k}\right) +\mu y_{k}\rightarrow 0\), and we continue with the proof of the convergence of \(\left( y_{k}\right) \) as for Corollary 2.3, where now we let \(z_{k}=E_{y}^{\prime }\left( x_{k},y_{k}\right) +\mu y_{k}.\) Passing to the limit in (2.20) gives \( E_{y}^{\prime }\left( x,y\right) +\mu y=0,\) where \(y\in \partial U\) and \(\mu \ge 0.\) Since the case \(\mu >0\) is excluded by (2.18), in any case one has \(E_{y}^{\prime }\left( x,y\right) =0.\) Analogously, by the continuity of N and E, we get \(N\left( x,y\right) =x\) and \( \displaystyle E\left( x,y\right) =\inf _{{\overline{U}}}E\left( x,\cdot \right) .\) \(\square \)
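The role of the Leray–Schauder boundary condition (2.18) can be seen in one dimension. In the sketch below (a hypothetical example of our own, with \(U=B_{R}(0)=(-2,2)\) and functionals without x-dependence), for \(E(y)=(y-1)^{2}\) condition (2.18) holds on \(\partial U\) and projected gradient descent stops at the interior critical point \(y=1\); for \(E(y)=(y-3)^{2}\) the condition fails at \(y=2\) (take \(\mu =1\)) and the minimizing sequence sticks to the boundary with \(E^{\prime }\ne 0\).

```python
R = 2.0

def project(y):                      # projection onto the ball [-R, R]
    return max(-R, min(R, y))

def minimize(dE, y0=0.0, steps=500, h=0.1):
    """Projected gradient descent on the closed ball [-R, R]."""
    y = y0
    for _ in range(steps):
        y = project(y - h * dE(y))
    return y

def dE_good(y):                      # E(y) = (y-1)^2: (2.18) holds on {-2, 2}
    return 2.0 * (y - 1.0)

def dE_bad(y):                       # E(y) = (y-3)^2: E'(2) + 1*2 = 0, so (2.18) fails
    return 2.0 * (y - 3.0)

y_good = minimize(dE_good)           # interior critical point y = 1, E'(y) = 0
y_bad = minimize(dE_bad)             # boundary point y = 2 with E'(2) = -2 != 0
print(y_good, y_bad)
```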

2.3 A variant in convex conical sets

Next, we shall look for y in a wedge (a cone, in particular) and we shall introduce for y a barrier from below. The new result can be particularly useful in establishing the existence of multiple positive solutions for operator systems having only a partial variational structure. To this aim, we shall adapt the technique used in [12] for systems in which all equations have a variational structure.

Let K be a wedge (a closed convex set with \({\mathbb {R}}_{+}K\subset K\), \(K\ne \{0\}\)) of the Hilbert space Y, and let \( l:K\rightarrow {\mathbb {R}}_{+}\) be a concave, upper semicontinuous function with \(l\left( 0\right) =0.\) For any two numbers \(r,R>0,\) denote by \(K_{rR}\) the conical set:

$$\begin{aligned} K_{rR}:=\{y\in K:\ r\le l(y)\ \text {and}\ \left\| y\right\| \le R\}, \end{aligned}$$

which will replace the set \({\overline{U}}.\) The set \(K_{rR}\) is convex, since l is concave, and closed since l is upper semicontinuous. We assume that \( K_{rR}\ne \emptyset \) and we denote:

$$\begin{aligned} \partial K_{R}=\{y\in K:\ \left\| y\right\| =R\}. \end{aligned}$$

We continue to consider the complete metric space D.

Theorem 2.5

Let \(N:D\times K_{rR}\rightarrow D\) and \(E:D\times Y \rightarrow {\mathbb {R}}\) be continuous, such that N satisfies (i) of Theorem 2.1, \(E\left( x,\cdot \right) \in C^{1}\left( Y\right) \) for every \(x\in D,\) and \( E_{y}^{\prime }\) is continuous on \(D\times K_{rR}.\) Assume that the following conditions are satisfied:

  1. (h1)

    \(m:=\inf _{\left( x,y\right) \in D\times K_{rR}}E\left( x,y\right) >-\infty \) and there is \(\varepsilon >0\), such that:

$$\begin{aligned}&E\left( x,y\right) \ge m+\varepsilon \quad \text {for all }x\in D\text { and } y\in K_{rR} \nonumber \\&\quad \text {which simultaneously satisfy }l\left( y\right) =r\text { and } \left\| y\right\| =R; \end{aligned}$$
    (2.21)
  2. (h2)

    \(y-E_{y}^{\prime }\left( x,y\right) \in K\) and \(l\left( y-E_{y}^{\prime }\left( x,y\right) \right) \ge r\) for all \(\left( x,y\right) \in D\times K_{rR};\)

  3. (h3)

\(\exists \, C>0\) such that \(\left\langle E_{y}^{\prime }\left( x,y\right) ,y\right\rangle \ge -C\) on \(D\times \partial K_{R}\) and:

    $$\begin{aligned} E_{y}^{\prime }\left( x,y\right) +\mu y\ne 0 \ \text {for all }\left( x,y\right) \in D\times \partial K_{R}\ \text {and }\mu >0; \end{aligned}$$
    (2.22)
  4. (h4)

    the mapping \(\left( N,I_{Y}-E_{y}^{\prime }\right) \) is a Perov contraction on \(D\times K_{rR}.\)

Then, there exists \(\left( x,y\right) \in D\times K_{rR}\) with:

$$\begin{aligned} N\left( x,y\right) =x, \ E_{y}^{\prime }\left( x,y\right) =0,\quad E\left( x,y\right) =\inf _{K_{rR}}E\left( x,\cdot \right) . \end{aligned}$$
(2.23)

In addition, \(\left( x,y\right) \) is the unique solution of the system \( N\left( x,y\right) =x, E_{y}^{\prime }\left( x,y\right) =0.\)

Proof

First note that in view of (2.21), we may assume that the terms of the sequence \(\left( y_{k}\right) \) constructed as explained in the proof of Theorem 2.1 do not simultaneously satisfy \(l\left( y_{k}\right) =r\) and \(\left\| y_{k}\right\| =R.\) Hence, for each k, we have either (a) \( l\left( y_{k}\right) \ge r\) and \(\left\| y_{k}\right\| <R,\) or (b) \( l\left( y_{k}\right) >r\) and \(\left\| y_{k}\right\| =R.\)

In case (a), the choice (2.10) of y, to be substituted into inequality (2.9) (now stated on \(K_{rR}\) instead of \({\overline{U}}\)), is admissible for all sufficiently small \(t>0\). Indeed, using the concavity of l and (h2) yields:

$$\begin{aligned} l\left( y\right)= & {} l\left( \left( 1-t\right) y_{k}+t\left( y_{k}-E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \right) \\\ge & {} \left( 1-t\right) l\left( y_{k}\right) +tl\left( \left( y_{k}-E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \right) \ge \left( 1-t\right) r+tr=r, \end{aligned}$$

for all \(t\in \left[ 0,1\right] .\) In addition, the inequality \(\left\| y\right\| \le R\) follows from \(\left\| y_{k}\right\| <R\) provided that t is sufficiently small. Thus, in case (a), we derive the estimate (2.11).

In case (b), if \(\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle >0,\) then the same choice of y is possible based on the previous explanation and that from the proof of Theorem 2.4. It remains to consider the case when:

$$\begin{aligned} l\left( y_{k}\right) >r, \ \left\| y_{k}\right\| =R \ \text {and }\left\langle E_{y}^{\prime }\left( x_{k},y_{k}\right) ,\ y_{k}\right\rangle \le 0. \end{aligned}$$

As in the proof of Theorem 2.4, we now choose y of the form (2.19), and, as there, \(\left\| y\right\| \le R\) for all sufficiently small t. It remains to guarantee that \(l\left( y\right) \ge r.\) From \( l\left( y_{k}\right) >r,\) we have \(\sigma l\left( y_{k}\right) =r\) for some number \(\sigma \in \left( 0,1\right) .\) Then, for any \(\rho \in \left[ \sigma ,1\right] ,\) one has:

$$\begin{aligned} l\left( \rho y_{k}\right)= & {} l\left( \rho y_{k}+\left( 1-\rho \right) 0\right) \ge \rho l\left( y_{k}\right) +\left( 1-\rho \right) l\left( 0\right) \\= & {} \rho l\left( y_{k}\right) \ge \sigma l\left( y_{k}\right) =r. \end{aligned}$$

In particular, we may take \(\rho =\left( 1-t-t\mu _{k}-t\varepsilon \right) /\left( 1-t\right) \) which belongs to \(\left[ \sigma ,1\right] \) for sufficiently small t. Consequently:

$$\begin{aligned} l\left( y\right)= & {} l\left( \left( 1-t\right) \frac{1-t-t\mu _{k}-t\varepsilon }{1-t}y_{k}+t\left( y_{k}-E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \right) \\= & {} l\left( \left( 1-t\right) \rho y_{k}+t\left( y_{k}-E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \right) \\\ge & {} \left( 1-t\right) l\left( \rho y_{k}\right) +tl\left( \left( y_{k}-E_{y}^{\prime }\left( x_{k},y_{k}\right) \right) \right) \ge r. \end{aligned}$$

Therefore, \(y\in K_{rR}\) for every sufficiently small \(t>0.\) Furthermore, following the proof of Theorem 2.4, we obtain the estimate (2.20).

For the rest, we follow the proof of Theorem 2.4. \(\square \)

Remark 2.2

(Multiplicity) Assume that there is a constant \(c>0\), such that:

$$\begin{aligned} l\left( y\right) \le c\left\| y\right\| \end{aligned}$$
(2.24)

for all \(y\in K.\) Then, from the assumption \(K_{rR}\ne \emptyset ,\) one finds \(r\le cR.\) Indeed, if \(y\in K_{rR},\) then \(r\le l\left( y\right) \le c\left\| y\right\| \le cR.\)

If now:

$$\begin{aligned} r_{1}\le cR_{1},\quad r_{2}\le cR_{2}\quad \text {and}\quad cR_{1}<r_{2}, \end{aligned}$$
(2.25)

then the sets \(K_{r_{1}R_{1}}\) and \(K_{r_{2}R_{2}}\) are disjoint. Indeed, if \(y\in K_{r_{1}R_{1}},\) then:

$$\begin{aligned} r_{1}\le l\left( y\right) \le c\left\| y\right\| \le cR_{1}<r_{2}. \end{aligned}$$

Hence, \(l\left( y\right) <r_{2}\) which shows that \(y\notin K_{r_{2}R_{2}}.\) The same conclusion holds if:

$$\begin{aligned} r_{1}\le cR_{1},\quad r_{2}\le cR_{2}\quad \text {and}\quad r_{1}>cR_{2}. \end{aligned}$$
(2.26)

These remarks immediately yield multiplicity results in case the assumptions of Theorem 2.5 hold for several pairs \(\left( r_{i},R_{i}\right) \) satisfying conditions of type (2.24) and (2.25) or (2.26). The solutions obtained may have the same first component \(x\in D,\) but they differ in the second component y.
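The disjointness argument above is elementary arithmetic; a minimal numerical check, with assumed sample values \(c=1,\) \(\left( r_{1},R_{1}\right) =\left( 1,2\right) \) and \(\left( r_{2},R_{2}\right) =\left( 3,4\right) \) chosen only so that (2.25) holds, reads:

```python
# Illustrative values only (not from the text): c = 1 and two pairs
# (r1, R1), (r2, R2) chosen to satisfy conditions (2.25).
c = 1.0
r1, R1 = 1.0, 2.0
r2, R2 = 3.0, 4.0

# Conditions (2.25): r1 <= c*R1, r2 <= c*R2 and c*R1 < r2.
assert r1 <= c * R1 and r2 <= c * R2 and c * R1 < r2

# For any y in K_{r1 R1}: r1 <= l(y) <= c*||y|| <= c*R1 < r2,
# so l(y) < r2 and y cannot lie in K_{r2 R2}: the sets are disjoint.
bound_on_l = c * R1
assert bound_on_l < r2
```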

3 Application

As an application, we consider the system:

$$\begin{aligned} -u^{\prime \prime }+a_{1}^{2}u= & {} f\left( u,v,u^{\prime },v^{\prime }\right) \nonumber \\ -v^{\prime \prime }+a_{2}^{2}v= & {} g\left( u,v,u^{\prime }\right) \end{aligned}$$
(3.1)

subject to the periodic conditions:

$$\begin{aligned} u\left( 0\right)= & {} u\left( T\right) , \ u^{\prime }\left( 0\right) =u^{\prime }\left( T\right) , \nonumber \\ v\left( 0\right)= & {} v\left( T\right) , \ v^{\prime }\left( 0\right) =v^{\prime }\left( T\right) . \end{aligned}$$
(3.2)

We assume that \(a_{i}\ne 0\) for \(i=1,2\) and that the functions \(f:{\mathbb {R}}^{4}\rightarrow {\mathbb {R}}\) and \(g:{\mathbb {R}}^{3}\rightarrow {\mathbb {R}}\) are continuous. We seek nonnegative classical solutions, i.e., pairs of nonnegative functions \(\left( u,v\right) \) with \(u,v\in C^{2}\left[ 0,T \right] \) which satisfy (3.1) and (3.2).

The second equation in (3.1) has a variational structure, as we shall explain below, while, in general, the first equation does not, due to the dependence of the nonlinearity f on \(u^{\prime }.\) Thus, we are entitled to use a hybrid fixed point–critical point technique. We shall apply Theorem 2.5.

Let \(C_{p}^{1}=\left\{ u\in C^{1}\left[ 0,T\right] :\ u\left( 0\right) =u\left( T\right) ,\ u^{\prime }\left( 0\right) =u^{\prime }\left( T\right) \right\} ,\) let \(H_{p}^{1}\) be the closure of \(C_{p}^{1}\) in \( H^{1}\left( 0,T\right) ,\) and, as usual, denote by \({\mathcal {D}}^{\prime }\left( 0,T\right) \) the space of distributions, the dual of the space of test functions \({\mathcal {D}}\left( 0,T\right) \). For \(i=1,2,\) define \( L_{i}:H_{p}^{1}\rightarrow {\mathcal {D}}^{\prime }\left( 0,T\right) \) by:

$$\begin{aligned} L_{i}u=-u^{\prime \prime }+a_{i}^{2}u. \end{aligned}$$

Related to \(L_{1}\), we consider the operator \(J_{1}:L^{2}\left( 0,T\right) \rightarrow C_{p}^{1}\cap H^{2}\left( 0,T\right) :\)

$$\begin{aligned} \left( J_{1}h\right) \left( t\right) =\int _{0}^{T}G\left( t,s\right) h\left( s\right) \mathrm{d}s, \end{aligned}$$
(3.3)

where \(G\left( t,s\right) \) is the Green function associated with \(L_{1}\) and the periodic conditions (see [5, p. 154]), namely:

$$\begin{aligned} G\left( t,s\right) =\frac{1}{2a_{1}\sinh \frac{a_{1}T}{2}}\left\{ \begin{array}{l} \cosh a_{1}\left( s-t+\frac{T}{2}\right) , \ 0\le s\le t\le T \\ \cosh a_{1}\left( t-s+\frac{T}{2}\right) , \ 0\le t\le s\le T. \end{array} \right. \end{aligned}$$

Notice that G is a nonnegative function, and consequently:

$$\begin{aligned} J_{1}h\ge 0\quad \text { for every }h\ge 0. \end{aligned}$$
(3.4)

We shall use the notations:

$$\begin{aligned} m_{1}= & {} \max _{\left[ 0,T\right] ^{2}}G\left( t,s\right) ,\quad m_{2}=\max _{\left[ 0,T\right] ^{2}}\left| G_{t}\left( t,s\right) \right| ,\quad \left| G\right| _{\infty }=\max \left\{ m_{1},m_{2}\right\} , \\ m_{3}= & {} \max _{\left[ 0,T\right] }\int _{0}^{T}G\left( t,s\right) \mathrm{d}s,\quad m_{4}=\max _{\left[ 0,T\right] }\int _{0}^{T}\left| G_{t}\left( t,s\right) \right| \mathrm{d}s,\quad \left| G\right| _{1}=\max \left\{ m_{3},m_{4}\right\} . \end{aligned}$$
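The constants \(m_{1},\dots ,m_{4}\) can be estimated numerically. The following sketch, under the assumed sample values \(a_{1}=1\) and \(T=1\) (not fixed by the text), evaluates G on a grid, confirms its nonnegativity, and checks that \(\int _{0}^{T}G\left( t,s\right) \mathrm{d}s=1/a_{1}^{2}\) for every t; this identity follows from the fact that the constant \(1/a_{1}^{2}\) solves \(L_{1}u=1\) with the periodic conditions, and it gives \(m_{3}=1/a_{1}^{2}\) exactly:

```python
import numpy as np

# Assumed sample parameters, for illustration only.
a1, T = 1.0, 1.0
t = np.linspace(0.0, T, 2001)

def G(tt, s):
    """Periodic Green function of L1 u = -u'' + a1^2 u (formula above)."""
    c = 1.0 / (2.0 * a1 * np.sinh(a1 * T / 2.0))
    return c * np.where(s <= tt,
                        np.cosh(a1 * (s - tt + T / 2.0)),
                        np.cosh(a1 * (tt - s + T / 2.0)))

Tt, S = np.meshgrid(t, t, indexing="ij")   # Tt[i, j] = t_i, S[i, j] = s_j
vals = G(Tt, S)
assert vals.min() >= 0.0                   # nonnegativity of G, giving (3.4)

# Row i approximates J1(1)(t_i) = int_0^T G(t_i, s) ds = 1/a1^2.
integrals = np.trapz(vals, t, axis=1)
m3 = integrals.max()
assert abs(m3 - 1.0 / a1**2) < 1e-3
```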

On \(H_{p}^{1}\), we consider the inner product:

$$\begin{aligned} \left\langle u,v\right\rangle :=\int _{0}^{T}\left( u^{\prime }v^{\prime }+a_{2}^{2}uv\right) \mathrm{d}t \ \end{aligned}$$

which obviously generates a norm equivalent to the usual \(H^{1}\)-norm, and let \(\left\| v\right\| _{2}\) be the induced norm \(\sqrt{ \left\langle v,v\right\rangle }.\) One has:

$$\begin{aligned} \left\langle u,v\right\rangle =\left( L_{2}u,v\right) \ \text {for }u\in H_{p}^{1}\text { and }v\in {\mathcal {D}}\left( 0,T\right) , \end{aligned}$$
(3.5)

where the notation \(\left( L_{2}u,v\right) \) stands for the value of the distribution \(L_{2}u\) at v. From \(H_{p}^{1}\subset L^{2}\left( 0,T\right) ,\) if we identify \(L^{2}\left( 0,T\right) \) with its dual, one has \(L^{2}\left( 0,T\right) \subset \left( H_{p}^{1}\right) ^{\prime },\) and according to Riesz's representation theorem, for each \(h\in \left( H_{p}^{1}\right) ^{\prime },\) there exists a unique \(u_{h}\in H_{p}^{1}\) such that \(\left( h,v\right) =\left\langle u_{h},v\right\rangle \) for all \( v\in H_{p}^{1}.\) Thus, we may define the operator:

$$\begin{aligned} J_{2}:\left( H_{p}^{1}\right) ^{\prime }\rightarrow H_{p}^{1}, \ J_{2}h=u_{h}. \end{aligned}$$

Hence:

$$\begin{aligned} \left\langle J_{2}h,v\right\rangle =\left( h,v\right) , \ \ h\in \left( H_{p}^{1}\right) ^{\prime },v\in H_{p}^{1}. \end{aligned}$$

Note that, restricted to \(L^{2}\left( 0,T\right) ,\) the operator \(J_{2}\) is the integral operator of the form (3.3) with \(a_{2}\) instead of \(a_{1},\) and consequently, it also has the positivity property (3.4) on \( L^{2}\left( 0,T\right) .\) Now, we associate with the second equation in (3.1), the functional \(E:C_{p}^{1}\times H_{p}^{1}\rightarrow {\mathbb {R}}:\)

$$\begin{aligned} E\left( u,v\right)= & {} \frac{1}{2}\left\| v\right\| _{2}^{2}-\int _{0}^{T}\int _{0}^{v\left( t\right) }g\left( u\left( t\right) ,\xi ,u^{\prime }\left( t\right) \right) \mathrm{d}\xi \mathrm{d}t \\= & {} \int _{0}^{T}\left( \frac{1}{2}\left( v^{\prime }\left( t\right) ^{2}+a_{2}^{2}v\left( t\right) ^{2}\right) -\int _{0}^{v\left( t\right) }g\left( u\left( t\right) ,\xi ,u^{\prime }\left( t\right) \right) \mathrm{d}\xi \right) \mathrm{d}t. \end{aligned}$$

For every \(u\in C_{p}^{1}\), the Fréchet derivative of \(E\left( u,\cdot \right) \) in \(v\in H_{p}^{1}\), denoted \(E_{v}^{\prime }\left( u,v\right) \), is given by:

$$\begin{aligned} E_{v}^{\prime }\left( u,v\right) =L_{2}v-g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) . \end{aligned}$$

Hence, if we identify \(\left( H_{p}^{1}\right) ^{\prime }\) with \(H_{p}^{1}\) via \(J_{2},\) we may say, by (3.5), that:

$$\begin{aligned} E_{v}^{\prime }\left( u,v\right) =v-J_{2}g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) . \end{aligned}$$

To apply Theorem 2.5, we let:

$$\begin{aligned} D= & {} \{u\in C_{p}^{1}:\ u\ge 0,\ \left\| u\right\| _{C^{1}}:=\max \left\{ \left\| u\right\| _{\infty },\left\| u^{\prime }\right\| _{\infty }\right\} \le R_{0}\};\\ Y= & {} H_{p}^{1};\quad K=\left\{ v\in H_{p}^{1}:\ v\ge 0\right\} ;\quad l:K\rightarrow {\mathbb {R}}_{+},\quad l\left( v\right) =\min _{t\in \left[ 0,T\right] }v\left( t\right) . \end{aligned}$$

Trivially, l is a concave function; moreover, it is continuous by the embedding \(\left( H_{p}^{1},\ \left\| \cdot \right\| _{2}\right) \subset C\left[ 0,T\right] ;\) that is, there exists \(c>0\), such that:

$$\begin{aligned} \left\| v\right\| _{\infty }\le c\left\| v\right\| _{2} \ \text {for all }v\in H_{p}^{1}. \end{aligned}$$

Note that for \(v\equiv 1,\) the above inequality gives \(1\le a_{2}c\sqrt{T},\) whence \(a_{2}^{2}\ge 1/\left( c^{2}T\right) .\) Also, if r and R are positive numbers and \(a_{2}\sqrt{T}r\le R,\) then the set \(\ K_{rR}=\{v\in K:\ l\left( v\right) \ge r,\) \(\left\| v\right\| _{2}\le R\}\ \) is nonempty. Indeed, any constant \(\lambda \in \left[ r,\ R/\left( a_{2}\sqrt{T} \right) \right] \) belongs to \(K_{rR},\) since \(l\left( \lambda \right) =\lambda \ge r\) and \(\left\| \lambda \right\| _{2}=\left( \int _{0}^{T}a_{2}^{2}\lambda ^{2}\mathrm{d}s\right) ^{1/2}=a_{2}\lambda \sqrt{T}\le R.\)
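The computation showing that constant functions belong to \(K_{rR}\) can be verified numerically; a minimal sketch with assumed sample values \(a_{2}=1,\) \(T=1,\) \(r=1/2,\) \(R=2\):

```python
import math

# Assumed sample data (not from the text): a2 = 1, T = 1, and a pair
# (r, R) with r <= R/(a2*sqrt(T)), as required for K_rR to be nonempty.
a2, T = 1.0, 1.0
r, R = 0.5, 2.0
assert a2 * math.sqrt(T) * r <= R        # the compatibility condition

# Any constant lam in [r, R/(a2*sqrt(T))] lies in K_rR:
for lam in (r, R / (a2 * math.sqrt(T))):
    norm2 = a2 * lam * math.sqrt(T)      # ||lam||_2 = (int_0^T a2^2 lam^2 dt)^(1/2)
    assert lam >= r and norm2 <= R       # l(lam) = lam >= r and ||lam||_2 <= R
```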

Our first assumption concerns the function f:

  1. (H1)

    There exist numbers r and R with \(0<r<R/\left( a_{2} \sqrt{T}\right) \), such that if \(\Lambda :=\left[ 0,R_{0}\right] \times \left[ r,cR\right] \times [-R_{0},R_{0}]\times {\mathbb {R}},\) then:

    $$\begin{aligned} f\left( \Lambda \right) \subset \left[ 0,\frac{R_{0}}{\left| G\right| _{1}}\right] , \end{aligned}$$
    (3.6)

    and there exist constants \(b_{i}\ \left( i=1,\dots ,4\right) \), such that:

    $$\begin{aligned} \left| f\left( w,x,y,z\right) -f\left( {\overline{w}},{\overline{x}}, {\overline{y}},{\overline{z}}\right) \right|\le & {} b_{1}\left| w- {\overline{w}}\right| +b_{2}\left| x-{\overline{x}}\right| \nonumber \\&+b_{3}\left| y-{\overline{y}}\right| +b_{4}\left| z-{\overline{z}} \right| \end{aligned}$$
    (3.7)

    for all \(\left( w,x,y,z\right) ,\left( {\overline{w}},{\overline{x}},{\overline{y}},{\overline{z}}\right) \in \Lambda .\)

Under condition (3.7), we may define the mapping \(\ N:D\times K_{rR}\rightarrow C_{p}^{1}:\)

$$\begin{aligned} N\left( u,v\right) =J_{1}f\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot ),v^{\prime }(\cdot )\right) , \end{aligned}$$

which, based on (3.6), satisfies the required invariance condition \( N\left( D\times K_{rR}\right) \subset D.\) In addition, from (3.7), we have:

$$\begin{aligned} \left\| N\left( u,v\right) -N\left( {\overline{u}},{\overline{v}}\right) \right\| _{C^{1}}\le \alpha \left\| u-{\overline{u}}\right\| _{C^{1}}+\beta \left\| v-{\overline{v}}\right\| _{2} \end{aligned}$$
(3.8)

for all \(\left( u,v\right) ,\left( {\overline{u}},{\overline{v}}\right) \in D\times K_{rR},\) where:

$$\begin{aligned} \alpha =\left| G\right| _{1}\left( b_{1}+b_{3}\right) , \ \beta =\left| G\right| _{\infty }\sqrt{2T}\max \left\{ \frac{b_{2}}{a_{2}} ,\ b_{4}\right\} . \end{aligned}$$

Indeed, for every \(t \in [0,T]\), one has:

$$\begin{aligned} \left| N\left( u,v\right) \left( t\right) -N\left( {\overline{u}}, {\overline{v}}\right) \left( t\right) \right|\le & {} \int _{0}^{T}G\left( t,s\right) (b_{1}\left| u\left( s\right) -{\overline{u}}\left( s\right) \right| +b_{3}\left| u^{\prime }\left( s\right) -{\overline{u}} ^{\prime }\left( s\right) \right| )\mathrm{d}s \\&+\int _{0}^{T}G\left( t,s\right) (b_{2}\left| v\left( s\right) - {\overline{v}}\left( s\right) \right| +b_{4}\left| v^{\prime }\left( s\right) -{\overline{v}}^{\prime }\left( s\right) \right| )\mathrm{d}s \\\le & {} m_{3}\left( b_{1}+b_{3}\right) \left\| u-{\overline{u}}\right\| _{C^{1}} \\&+\int _{0}^{T}G\left( t,s\right) \left( b_{2}\left| v\left( s\right) - {\overline{v}}\left( s\right) \right| +b_{4}\left| v^{\prime }\left( s\right) -{\overline{v}}^{\prime }\left( s\right) \right| \right) \mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned}&\int _{0}^{T}G\left( t,s\right) \left( b_{2}\left| v\left( s\right) - {\overline{v}}\left( s\right) \right| +b_{4}\left| v^{\prime }\left( s\right) -{\overline{v}}^{\prime }\left( s\right) \right| \right) \mathrm{d}s \\\le & {} m_{1}\max \left\{ \frac{b_{2}}{a_{2}},\ b_{4}\right\} \int _{0}^{T}\left( a_{2}\left| v\left( s\right) -{\overline{v}}\left( s\right) \right| +\left| v^{\prime }\left( s\right) -{\overline{v}} ^{\prime }\left( s\right) \right| \right) \mathrm{d}s. \end{aligned}$$

Furthermore, using Hölder’s inequality and the inequality \(\left( a+b\right) ^{2}\le 2\left( a^{2}+b^{2}\right) ,\) we have:

$$\begin{aligned}&\int _{0}^{T}\left( a_{2}\left| v\left( s\right) -{\overline{v}}\left( s\right) \right| +\left| v^{\prime }\left( s\right) -{\overline{v}} ^{\prime }\left( s\right) \right| \right) \mathrm{d}s \\\le & {} \sqrt{T}\left[ \left( \int _{0}^{T}a_{2}^{2}\left| v\left( s\right) -{\overline{v}}\left( s\right) \right| ^{2}\mathrm{d}s\right) ^{1/2}+\left( \int _{0}^{T}\left| v^{\prime }\left( s\right) -{\overline{v}} ^{\prime }\left( s\right) \right| ^{2}\mathrm{d}s\right) ^{1/2}\right] \\\le & {} \sqrt{2T}\left[ \int _{0}^{T}\left( a_{2}^{2}\left| v\left( s\right) -{\overline{v}}\left( s\right) \right| ^{2}+\left| v^{\prime }\left( s\right) -{\overline{v}}^{\prime }\left( s\right) \right| ^{2}\right) \mathrm{d}s\right] ^{1/2} \\= & {} \sqrt{2T}\left\| v-{\overline{v}}\right\| _{2}. \end{aligned}$$

It follows that:

$$\begin{aligned}&\left\| N\left( u,v\right) -N\left( {\overline{u}},{\overline{v}}\right) \right\| _{\infty }\le m_{3}\left( b_{1}+b_{3}\right) \left\| u- {\overline{u}}\right\| _{C^{1}}\\&+m_{1}\max \left\{ \frac{b_{2}}{a_{2}},\ b_{4}\right\} \sqrt{2T}\left\| v-{\overline{v}}\right\| _{2}. \end{aligned}$$

Analogously, replacing G by \(G_{t},\) we obtain:

$$\begin{aligned}&\left\| N\left( u,v\right) ^{\prime }-N\left( {\overline{u}},{\overline{v}} \right) ^{\prime }\right\| _{\infty }\le m_{4}\left( b_{1}+b_{3}\right) \left\| u-{\overline{u}}\right\| _{C^{1}}\\&+m_{2}\max \left\{ \frac{b_{2}}{ a_{2}},\ b_{4}\right\} \sqrt{2T}\left\| v-{\overline{v}}\right\| _{2}. \end{aligned}$$

As a result, we have (3.8).

We also assume the Lipschitz continuity of g, namely:

  1. (H2)

    There exist nonnegative constants \(c_{1},c_{2},c_{3}\) with:

    $$\begin{aligned} \left| g\left( w,x,y\right) -g\left( {\overline{w}},{\overline{x}},\overline{ y}\right) \right| \le c_{1}\left| w-{\overline{w}}\right| +c_{2}\left| x-{\overline{x}}\right| +c_{3}\left| y-{\overline{y}} \right| \end{aligned}$$

    for all \(\left( w,x,y\right) ,\) \(\left( {\overline{w}},{\overline{x}},\overline{y }\right) \in \left[ 0,R_{0}\right] \times \left[ r,cR\right] \times [-R_{0},R_{0}].\)

Then, since for every \(h\in L^{2}\left( 0,T\right) ,\) one has:

$$\begin{aligned} \left\| J_{2}h\right\| _{2}^{2}=\left\langle h,J_{2}h\right\rangle _{L^{2}}\le \left\| h\right\| _{L^{2}}\left\| J_{2}h\right\| _{L^{2}}\le \frac{1}{a_{2}}\left\| h\right\| _{L^{2}}\left\| J_{2}h\right\| _{2}, \end{aligned}$$

whence:

$$\begin{aligned} \left\| J_{2}h\right\| _{2}\le \frac{1}{a_{2}}\left\| h\right\| _{L^{2}}, \end{aligned}$$

we deduce that:

$$\begin{aligned}&\left\| \left( v-E_{v}^{\prime }\left( u,v\right) \right) -\left( {\overline{v}}-E_{v}^{\prime }\left( {\overline{u}},{\overline{v}}\right) \right) \right\| _{2} \\= & {} \left\| J_{2}\left( g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) -g\left( {\overline{u}}(\cdot ),{\overline{v}}(\cdot ),{\overline{u}} ^{\prime }(\cdot )\right) \right) \right\| _{2} \\\le & {} \frac{1}{a_{2}}\left\| g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) -g\left( {\overline{u}}(\cdot ),{\overline{v}}(\cdot ), {\overline{u}}^{\prime }(\cdot )\right) \right\| _{L^{2}} \\\le & {} \frac{c_{1}}{a_{2}}\left\| u-{\overline{u}}\right\| _{L^{2}}+ \frac{c_{2}}{a_{2}}\left\| v-{\overline{v}}\right\| _{L^{2}}+\frac{c_{3} }{a_{2}}\left\| u^{\prime }-{\overline{u}}^{\prime }\right\| _{L^{2}} \\\le & {} \sqrt{T}\frac{c_{1}+c_{3}}{a_{2}}\left\| u-{\overline{u}}\right\| _{C^{1}}+\frac{c_{2}}{a_{2}^{2}}\left\| v-{\overline{v}}\right\| _{2}. \end{aligned}$$

Hence:

$$\begin{aligned} \left\| \left( v-E_{v}^{\prime }\left( u,v\right) \right) -\left( {\overline{v}}-E_{v}^{\prime }\left( {\overline{u}},{\overline{v}}\right) \right) \right\| _{2}\le \gamma \left\| u-{\overline{u}}\right\| _{C^{1}}+\delta \left\| v-{\overline{v}}\right\| _{2} \end{aligned}$$

for all \(\left( u,v\right) ,\left( {\overline{u}},{\overline{v}}\right) \in D\times K_{rR},\) where:

$$\begin{aligned} \gamma =\sqrt{T}\frac{c_{1}+c_{3}}{a_{2}},\quad \delta =\frac{c_{2}}{ a_{2}^{2}}. \end{aligned}$$

Now, if we assume that:

  1. (H3)

    The spectral radius of the matrix:

    $$\begin{aligned} M=\left( \begin{array}{cc} \alpha &{} \beta \\ \gamma &{} \delta \end{array} \right) \end{aligned}$$

    is strictly less than one,

then both condition (i) of Theorem 2.1 (with \(v={\overline{v}}\) and \( L=\alpha \)) on the mapping N and condition (h4) of Theorem 2.5 on the couple \(\left( N,I_{Y}-E_{y}^{\prime }\right) \) are fulfilled.
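Condition (H3) is straightforward to verify numerically once the Lipschitz constants are known; a minimal sketch, with assumed sample values for \(\alpha ,\beta ,\gamma ,\delta \):

```python
import numpy as np

# Assumed sample Lipschitz constants, for illustration only.
alpha, beta, gamma, delta = 0.3, 0.2, 0.1, 0.4
M = np.array([[alpha, beta],
              [gamma, delta]])

rho = max(abs(np.linalg.eigvals(M)))
assert rho < 1.0            # (H3): spectral radius strictly less than one

# Equivalently, I - M is invertible with a nonnegative inverse, which is
# what Perov's fixed point theorem uses for the matrix-valued contraction.
inv = np.linalg.inv(np.eye(2) - M)
assert (inv >= 0.0).all()
```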

Furthermore, for the sake of simplicity, assume that:

  1. (H4)

    The functions \(f\left( w,x,y,z\right) \) and \(g\left( w,x,y\right) \) are nondecreasing in the variables w, x, and y, on \(\left[ 0,R_{0}\right] \), \(\left[ r,cR\right] \), and \([-R_{0},R_{0}]\), respectively, and the following inequalities hold:

    $$\begin{aligned} g\left( 0,r,-R_{0}\right) \ge a_{2}^{2}r, \ \ g\left( R_{0},cR,R_{0}\right) \le \frac{R}{cT}. \end{aligned}$$
    (3.9)

Then condition (h2) is satisfied. Indeed, for any \(\left( u,v\right) \in D\times K_{rR},\) we have:

$$\begin{aligned} 0\le u\left( t\right) \le R_{0},\quad r\le v\left( t\right) \le cR,\quad -R_{0}\le u^{\prime }\left( t\right) \le R_{0}\quad \text {and}\quad v^{\prime }\left( t\right) \in {\mathbb {R}}, \end{aligned}$$

for every \(t\in [0,T]\). Then, \(\ g\left( u(t),v(t),u^{\prime }(t)\right) \ge g\left( 0,r,-R_{0}\right) ,\) for every \(t\in [0,T]\), and so, using the first inequality in (3.9) and the integral representation on \(L^{2}\left( 0,T\right) \) of \(J_{2}\) with a positive Green function, we obtain:

$$\begin{aligned} l\left( v-E_{v}^{\prime }\left( u,v\right) \right)= & {} l(J_{2}g(u(\cdot ),v(\cdot ),u^{\prime }(\cdot )))\ge g\left( 0,r,-R_{0}\right) l\left( J_{2}\left( 1\right) \right) \\= & {} \frac{1}{a_{2}^{2}}g\left( 0,r,-R_{0}\right) \ge r. \end{aligned}$$

Next, we show that condition (h3) is satisfied. Assuming the contrary, there are \(\left( u,v\right) \in D\times \partial K_{R}\) and \(\mu >0\), such that \( E_{v}^{\prime }\left( u,v\right) +\mu v=0.\) Then, \(\ J_{2}g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) =\left( 1+\mu \right) v,\) whence:

$$\begin{aligned} R^{2}< & {} \left( 1+\mu \right) R^{2}=\left( 1+\mu \right) \left\| v\right\| _{2}^{2}=\langle J_{2}g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) ,v\rangle \\&=\left\langle g\left( u(\cdot ),v(\cdot ),u^{\prime }(\cdot )\right) ,v\right\rangle _{L^{2}} \\\le & {} cTg\left( R_{0},cR,R_{0}\right) R. \end{aligned}$$

This gives \(\ g\left( R_{0},cR,R_{0}\right) >R/\left( cT\right) ,\) which is excluded by (3.9).

It remains to guarantee condition (h1). Let \(\left( u,v\right) \in D\times K_{rR}.\) One has:

$$\begin{aligned} E\left( u,v\right) \ge -\int _{0}^{T}\int _{0}^{v\left( t\right) }g\left( u\left( t\right) ,\xi ,u^{\prime }\left( t\right) \right) \mathrm{d}\xi \mathrm{d}t\ge -cTRg\left( R_{0},cR,R_{0}\right) . \end{aligned}$$

Hence:

$$\begin{aligned} m\ge -cTRg\left( R_{0},cR,R_{0}\right) >-\infty . \end{aligned}$$
(3.10)

To ensure (2.21), we need the following additional relationship between r and R:

  1. (H5)

    One has:

    $$\begin{aligned} \frac{a_{2}^{2}Tr^{2}}{2}-rTg\left( 0,r,-R_{0}\right) <\frac{R^{2}}{2} -cTRg\left( R_{0},cR,R_{0}\right) . \end{aligned}$$

Under this condition, in view of (3.10) and of the equality \(\left\| r\right\| _{2}=a_{2}r\sqrt{T},\) if \(l\left( v\right) =r\) and \( \left\| v\right\| _{2}=R\) hold simultaneously, then there is \(\varepsilon >0 \), such that:

$$\begin{aligned} E\left( u,v\right)\ge & {} \frac{R^{2}}{2}-cTRg\left( R_{0},cR,R_{0}\right) \\= & {} \frac{a_{2}^{2}Tr^{2}}{2}-rTg\left( 0,r,-R_{0}\right) +\varepsilon \\\ge & {} E\left( u,r\right) +\varepsilon \ge m+\varepsilon . \end{aligned}$$

Therefore, we have the following result.

Theorem 3.1

Under the assumptions (H1)–(H5), system (3.1) has a T-periodic solution \(\left( u,v\right) \) with:

$$\begin{aligned} 0\le u\left( t\right) ,\quad \left| u^{\prime }\left( t\right) \right| \le R_{0}\quad \text {and}\quad r\le v\left( t\right) \le cR \end{aligned}$$

for all \(t\in \left[ 0,T\right] .\) In addition, v minimizes \(E(u,\cdot )\) over the set \(K_{rR}.\)

Example

Consider the system:

$$\begin{aligned} -u^{\prime \prime }+a_{1}^{2}u= & {} \lambda \left( 2{\overline{b}}_{1}\sqrt{u+1}+ {\overline{b}}_{2}\left( 1-e^{-v}\right) +{\overline{b}}_{3}u^{\prime }\right) \\ -v^{\prime \prime }+a_{2}^{2}v= & {} \lambda \left( {\overline{c}}_{1}u+2 {\overline{c}}_{2}\sqrt{v+1}\right) \ \end{aligned}$$

subject to the periodic conditions (3.2), where all coefficients are nonnegative, \( {\overline{b}}_{3}>0,\) and \(\lambda >0\) is a suitable parameter (see (3.11)). Here:

$$\begin{aligned} f\left( w,x,y,z\right)= & {} \lambda \left( 2{\overline{b}}_{1}\sqrt{w+1}+ {\overline{b}}_{2}\left( 1-e^{-x}\right) +{\overline{b}}_{3}y\right) , \\ g\left( w,x,y\right)= & {} \lambda \left( {\overline{c}}_{1}w+2{\overline{c}}_{2} \sqrt{x+1}\right) . \end{aligned}$$

Also:

$$\begin{aligned} b_{1}= & {} \lambda {\overline{b}}_{1},\quad b_{2}=\lambda {\overline{b}}_{2},\quad b_{3}=\lambda {\overline{b}}_{3},\quad b_{4}=0, \\ c_{1}= & {} \lambda {\overline{c}}_{1},\quad c_{2}=\lambda {\overline{c}}_{2},\quad c_{3}=0. \end{aligned}$$

One has \(\ M=\lambda {\overline{M}},\) where:

$$\begin{aligned}&{\overline{M}}=\left( \begin{array}{cc} {\overline{\alpha }} &{} {\overline{\beta }} \\ {\overline{\gamma }} &{} {\overline{\delta }} \end{array} \right) ,\\&\quad {\overline{\alpha }}=\left| G\right| _{1}\left( {\overline{b}}_{1}+{\overline{b}}_{3}\right) ,\quad {\overline{\beta }}=\left| G\right| _{\infty }\sqrt{2T}\frac{{\overline{b}}_{2}}{a_{2}},\quad {\overline{\gamma }}=\sqrt{T}\frac{{\overline{c}}_{1}}{a_{2}},\quad {\overline{\delta }}=\frac{{\overline{c}}_{2}}{a_{2}^{2}}. \end{aligned}$$

Therefore, the spectral radius \(\rho \left( M\right) \) of the matrix M is less than one if \(\lambda <1/\rho \left( {\overline{M}}\right) .\) If we take \(R_{0}=2{\overline{b}}_{1}/{\overline{b}}_{3},\) then it is easy to see that \( f\left( w,x,y,z\right) \ge 0\) for all \(w\in \left[ 0,R_{0}\right] ,\) \(x\in {\mathbb {R}}^{+}\), and \(y\in [-R_{0},R_{0}].\) If, in addition, \(\lambda >0\) is small enough so that:

$$\begin{aligned} \lambda \left( 2{\overline{b}}_{1}\sqrt{R_{0}+1}+{\overline{b}}_{2}+{\overline{b}} _{3}R_{0}\right) \le \frac{R_{0}}{\left| G\right| _{1}}, \end{aligned}$$

then (3.6) is completely satisfied. Furthermore, with the choice \(r=4\lambda ^{2}{\overline{c}}_{2}^{2}/a_{2}^{4},\) the first inequality in (3.9) holds. Finally, the second inequality in (3.9) and condition (H5) are satisfied for sufficiently large R. Thus, from Theorem 3.1, we may conclude that for every \(\lambda >0\) with:

$$\begin{aligned} \lambda < \min \left\{ \frac{R_{0}}{\left| G\right| _{1} \left( 2{\overline{b}}_{1}\sqrt{R_{0}+1}+{\overline{b}}_{2}+{\overline{b}} _{3}R_{0}\right) }, \frac{1}{\rho \left( {\overline{M}}\right) }\right\} , \end{aligned}$$
(3.11)

the system has a T-periodic solution \(\left( u,v\right) \) with:

$$\begin{aligned} u\left( t\right) \ge 0\quad \text {and}\quad v\left( t\right) \ge \frac{4\lambda ^{2}{\overline{c}}_{2}^{2}}{a_{2}^{4}} \end{aligned}$$

for all \(t\in \left[ 0,T\right] .\) Notice that if \({\overline{b}}_{1}\ne 0,\) then the first component u of the solution cannot be identically zero. Indeed, if we assume the contrary, then from the first equation of the system, we would have \(0=\lambda \left( 2{\overline{b}}_{1}+{\overline{b}}_{2}\left( 1-e^{-v}\right) \right) \ge 2\lambda {\overline{b}}_{1}>0,\) a contradiction.
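To make the example concrete, the admissible range (3.11) for \(\lambda \) can be evaluated numerically. In the sketch below, the coefficients and the constants \(\left| G\right| _{1},\left| G\right| _{\infty }\) are assumed placeholder values (in practice the latter come from the Green function of \(L_{1}\)):

```python
import numpy as np

# Assumed sample data, for illustration only.
a2, T = 1.0, 1.0
b1, b2, b3 = 1.0, 1.0, 1.0    # \bar b_1, \bar b_2, \bar b_3 (b3 > 0)
c1, c2 = 1.0, 1.0             # \bar c_1, \bar c_2
G1, Ginf = 1.0, 1.0           # placeholders for |G|_1 and |G|_infty

R0 = 2.0 * b1 / b3            # the choice of R0 making f >= 0 on Lambda
Mbar = np.array([[G1 * (b1 + b3), Ginf * np.sqrt(2.0 * T) * b2 / a2],
                 [np.sqrt(T) * c1 / a2, c2 / a2**2]])
rho = max(abs(np.linalg.eigvals(Mbar)))

# The bound (3.11) on lambda:
lam_max = min(R0 / (G1 * (2.0 * b1 * np.sqrt(R0 + 1.0) + b2 + b3 * R0)),
              1.0 / rho)
assert lam_max > 0.0

# For any admissible lambda, the choice r = 4 lambda^2 c2^2 / a2^4 makes the
# first inequality in (3.9) hold: g(0, r, -R0) >= 2 lambda c2 sqrt(r) = a2^2 r.
lam = 0.9 * lam_max
r = 4.0 * lam**2 * c2**2 / a2**4
assert lam * 2.0 * c2 * np.sqrt(r + 1.0) >= a2**2 * r
```

The final assertion checks the computation behind the choice \(r=4\lambda ^{2}{\overline{c}}_{2}^{2}/a_{2}^{4}\): one has \(g\left( 0,r,-R_{0}\right) \ge 2\lambda {\overline{c}}_{2}\sqrt{r}=a_{2}^{2}r.\)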