1 Introduction and main result

We consider the problem

$$\begin{aligned} \left\{ \begin{array}{l} x'=\partial _y H(t,x,y),\quad y'=-\partial _x H(t,x,y),\\ y(a)=0=y(b), \end{array} \right. \end{aligned}$$
(1)

where \(H:[a,b]\times {{\mathbb {R}}}^{2N}\rightarrow {{\mathbb {R}}}\) is a continuous function, with continuous partial derivatives \(\partial _x H(t,x,y)\) and \(\partial _y H(t,x,y)\).

We use the notation \(z=(x,y)\), with \(x=(x_1,\dots , x_N)\) and \(y=(y_1,\dots ,y_N)\). Here are our assumptions.

A1. For every \(i\in \{1,\dots ,N\}\), the function \(H(t,x,y)\) is \(\tau _i\)-periodic in \(x_i\), for some \(\tau _i>0\).

A2. The solutions of the differential system in (1), with initial value

$$\begin{aligned} \textstyle x(a)\in \prod _{i=1}^N\,[0,\tau _i],\quad y(a)=0, \end{aligned}$$
(2)

are defined on the whole time interval \([a,b]\).

By assumption A1, once a solution \(z(t)=(x(t),y(t))\) of (1) has been found, infinitely many others appear by just adding an integer multiple of \(\tau _i\) to one of the components \(x_i(t)\). We will call geometrically distinct two solutions which cannot be obtained from each other in this way.

Here is our main result.

Theorem 1.1

Under assumptions A1 and A2, problem (1) has at least \(N+1\) geometrically distinct solutions.

Different explicit conditions on \(\nabla H\) can be imposed in order to guarantee that assumption A2 is satisfied. In the “Appendix” at the end of the paper we will present one of them.

In the literature on Hamiltonian systems it is rather unusual to find multiplicity results for two-point boundary value problems. Starting with Rabinowitz [9] in 1978, a lot of effort has been devoted to the study of the periodic problem. Since then, we can identify two main lines of research: variational methods, and the symplectic approach.

Variational methods have to face the great difficulty that the action functional associated with the problem is strongly indefinite. In [9], it was shown that the natural setting for the T-periodic problem is the space \(H^{1/2}_T\). However, the elements of this space are not necessarily continuous, and this is probably the main obstacle when one wants to deal with two-point boundary conditions.

The symplectic approach emerged from the famous Poincaré–Birkhoff Theorem [8] for the periodic problem associated with a Hamiltonian system. It is very well suited in one degree of freedom (i.e., when \(N=1\)), providing the existence of two T-periodic solutions as fixed points of the Poincaré map, under some twist assumptions. The possibility of a higher dimensional version of the Poincaré–Birkhoff Theorem has been investigated, starting with Birkhoff himself [1], by many authors (see [4] and the references therein).

A striking feature of our Theorem 1.1 is that no twist condition is assumed! We will explain the geometrical idea behind this theorem in Sect. 3, in the one-degree-of-freedom case.

When the Hamiltonian function has the special form \(H(t,x,y)={\textstyle {\frac{1}{2}}}|y|^2+G(t,x)\) and \(N=1\), problem (1) becomes a Neumann boundary value problem for a scalar second order differential equation. We can find a multiplicity result for the Neumann problem by Castro [2] in 1980 and a similar one by Rabinowitz [10] in 1988, where the result is mentioned as a remark at the end of the paper, after a detailed study of the periodic problem associated with a Lagrangian system. Both papers use variational methods.

The proof of Theorem 1.1 is provided in Sect. 2. The main novelty lies in the fact that, while for the periodic problem x and y are usually both taken in the same space \(H^{1/2}_T\), here we assume x and y to belong to some complementary spaces \(X_\alpha \) and \(Y_\beta \), which are closely related to fractional Sobolev spaces. We will explain in detail how these spaces are defined in Sect. 2.1.

The variational setting will then be developed in Sects. 2.2 and 2.3. Then, as in [4], we will be able to apply a multiplicity result by Szulkin [11], based on an infinite-dimensional extension of Lusternik–Schnirelmann theory.

Notation. In the following, we will denote by \(e_1,\dots ,e_N\) the vectors of the canonical basis of \({{\mathbb {R}}}^N\). The Euclidean scalar product will be denoted by \(\langle \cdot ,\cdot \rangle \), and the corresponding norm by \(|\cdot |\). Moreover, we will denote the average of a function \(f\in L^p(0,T)\) by

$$\begin{aligned} {\bar{f}}=\frac{1}{T}\int _0^T f(t)\,dt. \end{aligned}$$

2 Proof of Theorem 1.1

Without loss of generality, we can assume \([a,b]=[0,\pi ]\). Indeed, we can reduce to this case with the simple change of variable \(t\mapsto \frac{\pi }{b-a}(t-a)\).
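
Explicitly (the symbols \(s\), \(\xi \), \(\upsilon \) and K below are ours, introduced only for this computation): setting \(s=\frac{\pi }{b-a}(t-a)\), \(\xi (s)=x(t)\), \(\upsilon (s)=y(t)\) and

$$\begin{aligned} K(s,x,y)=\frac{b-a}{\pi }\,H\Big (a+\frac{b-a}{\pi }\,s,x,y\Big ), \end{aligned}$$

the chain rule yields \(\xi '=\partial _y K(s,\xi ,\upsilon )\), \(\upsilon '=-\partial _x K(s,\xi ,\upsilon )\) on \([0,\pi ]\), with \(\upsilon (0)=0=\upsilon (\pi )\); the rescaled Hamiltonian K inherits assumptions A1 and A2 from H.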

By assumption A2 and a standard compactness argument, there is a constant \(R>0\) such that every solution satisfying (2), with \(a=0\), is such that

$$\begin{aligned} |y(t)|\le R,\quad \hbox { for every }t\in [0,\pi ]. \end{aligned}$$
(3)

Let \(\eta :{{\mathbb {R}}}^N\rightarrow {{\mathbb {R}}}\) be a \(C^\infty \)-function such that

$$\begin{aligned} \eta (y)={\left\{ \begin{array}{ll} 1, &{}\hbox {if }|y|\le R,\\ 0, &{}\text{ if } |y|\ge R+1, \end{array}\right. } \end{aligned}$$

and consider the modified Hamiltonian function \({\widetilde{H}}:[0,\pi ]\times {{\mathbb {R}}}^{2N}\rightarrow {{\mathbb {R}}}\) defined as

$$\begin{aligned} {\widetilde{H}}(t,x,y)=\eta (y)H(t,x,y). \end{aligned}$$
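
Such a cutoff function indeed exists; one explicit construction (a standard choice of ours, not the only possible one) is

$$\begin{aligned} \chi (s)={\left\{ \begin{array}{ll} e^{-1/s}, &{}\hbox {if }s>0,\\ 0, &{}\hbox {if }s\le 0, \end{array}\right. }\qquad \theta (s)=\frac{\chi (s)}{\chi (s)+\chi (1-s)},\qquad \eta (y)=1-\theta (|y|-R). \end{aligned}$$

Here \(\theta \) is of class \(C^\infty \), with \(\theta (s)=0\) for \(s\le 0\) and \(\theta (s)=1\) for \(s\ge 1\); the map \(y\mapsto |y|\) fails to be differentiable only at \(y=0\), where \(\eta \) is locally constant, so \(\eta \) is indeed \(C^\infty \), equal to 1 for \(|y|\le R\) and to 0 for \(|y|\ge R+1\).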

We claim that our goal can be attained by further assuming, without loss of generality, that

$$\begin{aligned} H(t,x,y)=0\;\hbox { when }|y|\ge R+1. \end{aligned}$$

Indeed, replacing the function H by \({\widetilde{H}}\) in (1), the solutions \((x,y)\) of

$$\begin{aligned} x'=\partial _y {\widetilde{H}}(t,x,y),\quad y'=-\partial _x {\widetilde{H}}(t,x,y) \end{aligned}$$

such that \(x(0)\in \prod _{i=1}^N\,[0,\tau _i]\) and \(y(0)=0\) coincide with solutions of (1) as long as \(|y(t)|\le R\); by the estimate (3), this bound then persists on the whole interval, so that they are indeed solutions of (1). By A1, the assumption \(x(0)\in \prod _{i=1}^N\,[0,\tau _i]\) is not restrictive. The Claim is thus proved.

As a consequence, by A1, we can assume that there exists a constant \({\bar{c}}>0\) such that

$$\begin{aligned} |\partial _x H(t,x,y)|+|\partial _y H(t,x,y)|\le {\bar{c}},\quad \hbox { for every }(t,x,y)\in [0,\pi ]\times {{\mathbb {R}}}^{2N}. \end{aligned}$$
(4)

The rest of the proof is divided into three parts. In the first one we introduce the function spaces in which our variational setting is built; the construction of this setting is then carried out in the remaining two parts. As in [4], the conclusion will follow by applying the following special case of a multiplicity result by Szulkin [11], where we denote by \({{\mathbb {T}}}^N\) the N-dimensional torus. In the following, we will treat \({{\mathbb {T}}}^N\) as being lifted to \({{\mathbb {R}}}^N\).

Theorem 2.1

Let E be a real Hilbert space, and \(L:E\rightarrow E\) be an invertible bounded selfadjoint operator. Denote by \({\mathscr {M}}\) the set \(E\times {{\mathbb {T}}}^N\), as being lifted to \(E\times {{\mathbb {R}}}^N\), and let \(\psi :{\mathscr {M}}\rightarrow {{\mathbb {R}}}\) be a continuously differentiable function such that \(d\psi ({\mathscr {M}})\), the image of its differential, is relatively compact in the dual space \({{\mathscr {L}}}({\mathscr {M}},{{\mathbb {R}}})\). Then, the function \(\varphi :{\mathscr {M}}\rightarrow {{\mathbb {R}}}\) defined as \(\varphi (e,{\bar{x}})={\textstyle {\frac{1}{2}}}\langle Le,e\rangle +\psi (e,{\bar{x}})\) has at least \(N+1\) critical points.

2.1 The function spaces

In this section, divided into three subsections, we introduce our function spaces.

2.1.1 Some remarks on \(L^p\) spaces

It is well known that any function \(f\in L^p(-\pi ,\pi )\), with \(p\in \,]1,+\infty [\,\), has a real Fourier expansion

$$\begin{aligned} f(t)\sim \frac{a_0}{2}+\sum _{m=1}^\infty \big (a_m\cos (mt)+b_m\sin (mt)\big ), \end{aligned}$$

and a complex expansion

$$\begin{aligned} f(t)\sim \sum _{m=-\infty }^{+\infty } f_m e^{imt}, \end{aligned}$$

with the corresponding coefficients

$$\begin{aligned} a_m=\frac{1}{\pi }\int _{-\pi }^\pi f(t)\cos (mt)\,dt,\qquad b_m=\frac{1}{\pi }\int _{-\pi }^\pi f(t)\sin (mt)\,dt,\qquad f_m=\frac{1}{2\pi }\int _{-\pi }^\pi f(t)\,e^{-imt}\,dt. \end{aligned}$$

Given \(p>1\), \(q>1\), with \(p\le 2\le q\) and \((1/p)+(1/q)=1\), we know from the Hausdorff–Young inequality (see [12, Ch. XII.2]) that

$$\begin{aligned} \bigg (\sum _{m=-\infty }^{+\infty }|f_m|^q\bigg )^{1/q}\le \bigg (\frac{1}{2\pi }\int _{-\pi }^\pi |f(t)|^p\,dt\bigg )^{1/p}. \end{aligned}$$
(5)

The above inequality becomes an identity if \(p=q=2\).

Let us now work in the subinterval \(]0,\pi [\,\). A function \(f\in L^p(0,\pi )\) could be expressed as a traditional Fourier series, but if we enlarge the set of admissible frequencies it can also be expanded in a series involving only cosines,

$$\begin{aligned} f(t)\sim \sum _{m=0}^\infty c_m \cos (mt), \end{aligned}$$
(6)

or in a series involving only sines,

$$\begin{aligned} f(t)\sim \sum _{m=1}^\infty s_m \sin (mt). \end{aligned}$$

Indeed, it is sufficient to extend the function f to the interval \(]-\pi ,\pi [\,\), either as an even function or as an odd function, so as to obtain the above expansions. Let us focus on the first case. We will need the following.

Proposition 2.2

Let \(p>1\) and \(q>1\) be such that \(p\le 2\le q\) and \((1/p)+(1/q)=1\). If \(f\in L^p(0,\pi )\) is expanded as in (6) and \({\bar{f}}=0\), then

$$\begin{aligned} \bigg (\sum _{m=1}^{\infty }|c_m|^q\bigg )^{1/q}\le \bigg (\frac{2}{\pi }\int _0^\pi |f(t)|^p\,dt\bigg )^{1/p}. \end{aligned}$$
(7)

Proof

In this case, after having extended f to an even function on \(]-\pi ,\pi [\,\), we have

$$\begin{aligned} f(t)\sim \sum _{m=1}^\infty c_m \cos (mt)= \sum _{m=1}^\infty c_m \frac{e^{imt}+e^{-imt}}{2}= \sum _{ \begin{array}{c} m=-\infty \\ m\ne 0 \end{array}}^\infty \Big [{\textstyle {\frac{1}{2}}}c_{|m|}\Big ]e^{imt}. \end{aligned}$$

Hence, by the Hausdorff–Young inequality (5),

$$\begin{aligned} \bigg (\sum _{m=1}^{\infty }|c_m|^q\bigg )^{1/q}&=\bigg (\frac{1}{2} \sum _{ \begin{array}{c} m=-\infty \\ m\ne 0 \end{array}}^\infty |c_{|m|}|^q\bigg )^{1/q}\\ &=2^{1/p}\bigg ( \sum _{ \begin{array}{c} m=-\infty \\ m\ne 0 \end{array}}^\infty \Big |{\textstyle {\frac{1}{2}}}c_{|m|}\Big |^q\bigg )^{1/q}\\ &\le 2^{1/p}\bigg (\frac{1}{2\pi }\int _{-\pi }^\pi |f(t)|^p\,dt\bigg )^{1/p} =\bigg (\frac{2}{\pi }\int _0^\pi |f(t)|^p\,dt\bigg )^{1/p}, \end{aligned}$$

so that (7) holds true. \(\square \)

Concerning the second case, when the function f is expanded in sines, one can similarly prove that

$$\begin{aligned} \bigg (\sum _{m=1}^{\infty }|s_m|^q\bigg )^{1/q}\le \bigg (\frac{2}{\pi }\int _0^\pi |f(t)|^p\,dt\bigg )^{1/p}. \end{aligned}$$
(8)

Both (7) and (8) become identities when \(p=q=2\).
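
As a quick numerical sanity check (ours, not part of the paper's argument), one can verify inequality (7) and the identity at \(p=q=2\) on a sample zero-mean function; the test function below is an arbitrary choice.

```python
import numpy as np

# Numerical check of the Hausdorff-Young-type inequality (7) for the
# cosine expansion on (0, pi).  The test function f, a zero-mean
# trigonometric polynomial, is an arbitrary illustrative choice.
t = np.linspace(0.0, np.pi, 20001)
dt = t[1] - t[0]

def integral(g):
    # composite trapezoidal rule on [0, pi]
    return float(np.sum(0.5 * (g[:-1] + g[1:])) * dt)

f = np.cos(t) + 0.5 * np.cos(3 * t) - 0.25 * np.cos(7 * t)

# cosine coefficients c_m = (2/pi) int_0^pi f(t) cos(mt) dt, m = 1..50
c = np.array([(2.0 / np.pi) * integral(f * np.cos(m * t))
              for m in range(1, 51)])

def lhs(q):
    return float((np.abs(c) ** q).sum() ** (1.0 / q))

def rhs(p):
    return float(((2.0 / np.pi) * integral(np.abs(f) ** p)) ** (1.0 / p))

assert lhs(3.0) <= rhs(1.5) + 1e-8       # (7) with p = 3/2, q = 3
assert abs(lhs(2.0) - rhs(2.0)) < 1e-6   # identity at p = q = 2 (Parseval)
```

The same script, run with sine coefficients, checks (8).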

2.1.2 The space \({X_\alpha }\)

For any \(\alpha \in \,]0,1[,\) we define \({X_\alpha }\) as the set of those real valued functions \({\tilde{x}}\in L^2(0,\pi )\) such that

$$\begin{aligned} {\tilde{x}}(t)\sim \sum _{m=1}^\infty {\tilde{x}}_m\cos (mt), \end{aligned}$$

where \(({\tilde{x}}_m)_{m\ge 1}\) is a sequence in \({{\mathbb {R}}}\) satisfying

$$\begin{aligned} \sum _{m=1}^\infty m^{2\alpha }{\tilde{x}}_m^2<\infty . \end{aligned}$$

The space \({X_\alpha }\) is endowed with the inner product

$$\begin{aligned} \langle {\tilde{x}},{\tilde{\xi }}\rangle =\sum _{m=1}^\infty m^{2\alpha }{\tilde{x}}_m{\tilde{\xi }}_m, \end{aligned}$$

and corresponding norm

$$\begin{aligned} \Vert {\tilde{x}}\Vert _{X_\alpha }=\sqrt{\sum _{m=1}^{\infty }m^{2\alpha }{\tilde{x}}_m^2}. \end{aligned}$$

Notice that the functions in \({X_\alpha }\) have zero mean on \([0,\pi ]\). Moreover,

$$\begin{aligned} {\tilde{x}}_m=\frac{2}{\pi }\int _0^\pi {\tilde{x}}(t)\cos (mt)\,dt,\quad m\ge 1. \end{aligned}$$

Proposition 2.3

\({X_\alpha }\) is a separable Hilbert space. A Hilbert basis in \({X_\alpha }\) is provided by the functions \((e_m^{(\alpha )})_{m\ge 1}\) defined as

$$\begin{aligned} e_m^{(\alpha )}(t)=\frac{1}{m^\alpha }\cos (mt). \end{aligned}$$

Proof

We can define a linear mapping \({{\mathcal {L}}}:{X_\alpha }\rightarrow \ell ^2\) by setting

$$\begin{aligned} {{\mathcal {L}}}{\tilde{x}}=({\tilde{x}}_m^{(\alpha )})_{m\ge 1}, \quad \hbox { where }\quad {\tilde{x}}_m^{(\alpha )}=m^\alpha {\tilde{x}}_m. \end{aligned}$$

It is a bijective function, and since

$$\begin{aligned} \langle {{\mathcal {L}}}{\tilde{x}},{{\mathcal {L}}}{\tilde{\xi }}\rangle _{\ell ^2}= \sum _{m=1}^{\infty }{\tilde{x}}_m^{(\alpha )}{\tilde{\xi }}_m^{(\alpha )}= \sum _{m=1}^{\infty }m^{2\alpha }{\tilde{x}}_m{\tilde{\xi }}_m= \langle {\tilde{x}},{\tilde{\xi }}\rangle _{{X_\alpha }}, \end{aligned}$$

we have indeed defined an isometric isomorphism from \({X_\alpha }\) to \(\ell ^2\). Consequently, \({X_\alpha }\) is a separable Hilbert space. It is readily verified that \((e_m^{(\alpha )})_{m\ge 1}\) is an orthonormal family, and for each \({\tilde{x}}\in {X_\alpha }\) we have the expansion

$$\begin{aligned} {\tilde{x}}= \sum _{m=1}^\infty {\tilde{x}}_m^{(\alpha )}e_m^{(\alpha )}\quad \hbox {in } {X_\alpha }, \end{aligned}$$

thus proving the second part of the statement. \(\square \)

Proposition 2.4

\({X_\alpha }\) is continuously embedded in \(L^2(0,\pi )\).

Proof

Since

$$\begin{aligned} \Vert {\tilde{x}}\Vert _{L^2}^2=\frac{\pi }{2}\sum _{m=1}^\infty {\tilde{x}}_m^2\le \frac{\pi }{2} \sum _{m=1}^\infty m^{2\alpha }{\tilde{x}}_m^2=\frac{\pi }{2}\Vert {\tilde{x}}\Vert _{X_\alpha }^2, \end{aligned}$$

the statement clearly holds true. \(\square \)
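
Numerically, the norms of \({X_\alpha }\) are easily handled on the coefficient side; the following sketch (with an arbitrary coefficient sequence of our choosing) illustrates the embedding constant of Proposition 2.4.

```python
import numpy as np

# The L^2 and X_alpha norms of a function on (0, pi), computed from its
# cosine coefficients via Parseval.  The coefficients x_m are an
# arbitrary illustrative choice, decaying fast enough for the X_alpha
# norm to be finite.
alpha = 0.25
m = np.arange(1, 501)
x_m = 1.0 / m ** 1.5

norm_L2_sq = (np.pi / 2) * np.sum(x_m ** 2)       # ||x||_{L^2}^2
norm_Xa_sq = np.sum(m ** (2 * alpha) * x_m ** 2)  # ||x||_{X_alpha}^2

# the inequality in the proof of Proposition 2.4
assert norm_L2_sq <= (np.pi / 2) * norm_Xa_sq
```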

Let us denote by \({\widetilde{C}}^1([0,\pi ])\) the set of \(C^1\)-functions with zero mean on \([0,\pi ]\).

Proposition 2.5

\({\widetilde{C}}^1([0,\pi ])\) is a dense subset of \({X_\alpha }\).

Proof

Since every function \({\tilde{x}}\) in \({\widetilde{C}}^1([0,\pi ])\) can be extended to an even function on \([-\pi ,\pi ]\) with finite left and right derivatives at 0, it admits a uniformly convergent expansion on \([0,\pi ]\),

$$\begin{aligned} {\tilde{x}}(t)=\sum _{m=1}^\infty {\tilde{x}}_m\cos (mt),\quad \hbox { with }\quad \sum _{m=1}^\infty m^2{\tilde{x}}_m^2<\infty . \end{aligned}$$

Hence, the set \({\widetilde{C}}^1([0,\pi ])\) is contained in \({X_\alpha }\). Now, given \({\tilde{x}}\in {X_\alpha }\), for every \(n\ge 1\) the function \({\tilde{s}}_n(t)=\sum _{m=1}^n {\tilde{x}}_m\cos (mt)\) is of class \(C^1\), and the sequence \(({\tilde{s}}_n)_n\) converges to \({\tilde{x}}\) in \({X_\alpha }\). \(\square \)

2.1.3 The space \({Y_\beta }\)

For any \(\beta \in \,]0,1[,\) we define \({Y_\beta }\) as the set of those real valued functions \(y\in L^2(0,\pi )\) such that

$$\begin{aligned} y(t)\sim \sum _{m=1}^\infty y_m\sin (mt), \end{aligned}$$

where \((y_m)_{m}\) is a sequence in \({{\mathbb {R}}}\) satisfying

$$\begin{aligned} \sum _{m=1}^\infty m^{2\beta }y_m^2<\infty . \end{aligned}$$

The space \({Y_\beta }\) is endowed with the inner product

$$\begin{aligned} \langle y,\eta \rangle =\sum _{m=1}^\infty m^{2\beta } y_m\eta _m, \end{aligned}$$

and corresponding norm

$$\begin{aligned} \Vert y\Vert _{Y_\beta }=\sqrt{\sum _{m=1}^{\infty }m^{2\beta }y_m^2}. \end{aligned}$$

Notice that

$$\begin{aligned} y_m=\frac{2}{\pi }\int _0^\pi y(t)\sin (mt)\,dt,\quad m\ge 1. \end{aligned}$$

Proposition 2.6

\({Y_\beta }\) is a separable Hilbert space. A Hilbert basis in \({Y_\beta }\) is provided by the functions \((\epsilon _m^{(\beta )})_{m\ge 1}\) defined as

$$\begin{aligned} \epsilon _m^{(\beta )}(t)=\frac{1}{m^\beta }\sin (mt). \end{aligned}$$

Proof

It proceeds in the same way as the proof of Proposition 2.3. \(\square \)

Let us denote by \(C^k_0([0,\pi ])\) the set of \(C^k\)-functions on \([0,\pi ]\) vanishing at 0 and at \(\pi \). If \(k=0\), we simply write \(C_0([0,\pi ])\).

Proposition 2.7

If \(\beta >\frac{1}{2}\), then \({Y_\beta }\) is continuously embedded in \(C_0([0,\pi ])\).

Proof

When \(\beta >\frac{1}{2}\), we can define the real number

$$\begin{aligned} C_\beta =\bigg (\sum _{m=1}^\infty \frac{1}{m^{2\beta }}\bigg )^{1/2}. \end{aligned}$$

If \(y\in {Y_\beta }\), the Cauchy–Schwarz inequality implies that

$$\begin{aligned} \Vert y\Vert _\infty &\le \sum _{m=1}^{\infty }|y_m|= \sum _{m=1}^{\infty }\frac{1}{m^\beta }(m^\beta |y_m|)\\ &\le \bigg (\sum _{m=1}^\infty \frac{1}{m^{2\beta }}\bigg )^{1/2}\bigg (\sum _{m=1}^\infty m^{2\beta }y_m^2\bigg )^{1/2}=C_\beta \Vert y\Vert _{Y_\beta }. \end{aligned}$$

This shows that the space \({Y_\beta }\) is continuously embedded in \(C([0,\pi ])\). Since the series of sines defining y is convergent everywhere, we also deduce that \(y(0)=0=y(\pi )\). \(\square \)
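
In terms of the Riemann zeta function, the constant above can be rewritten (an observation of ours, for illustration) as

$$\begin{aligned} C_\beta =\bigg (\sum _{m=1}^\infty \frac{1}{m^{2\beta }}\bigg )^{1/2}=\sqrt{\zeta (2\beta )}, \end{aligned}$$

which diverges as \(\beta \rightarrow \big (\frac{1}{2}\big )^+\); this is coherent with the fact that the embedding of Proposition 2.7 is only claimed for \(\beta >\frac{1}{2}\).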

Proposition 2.8

\(C^1_0([0,\pi ])\) is a dense subset of \({Y_\beta }\).

Proof

The argument in the proof of Proposition 2.5 applies here as well, since every function y in \(C^1_0([0,\pi ])\) admits a uniformly convergent expansion

$$\begin{aligned} y(t)=\sum _{m=1}^\infty y_m\sin (mt),\quad \hbox { with }\quad \sum _{m=1}^\infty m^2y_m^2<\infty . \end{aligned}$$

The statement then holds true. \(\square \)

In the following, we will consider functions having values in \({{\mathbb {R}}}^N\). They can be seen as elements of the Cartesian products

$$\begin{aligned} X_\alpha ^N={X_\alpha }\times \dots \times {X_\alpha },\qquad Y_\beta ^N={Y_\beta }\times \dots \times {Y_\beta }. \end{aligned}$$

With the natural scalar products and associated norms they become separable Hilbert spaces. For simplicity, we will still denote them by \({X_\alpha }\) and \({Y_\beta }\), respectively, and the same will be done for the other spaces involved, like, e.g., \(L^p(0,\pi )\), \(W^{k,p}(0,\pi )\), \(H^k(0,\pi )\) or \(C^k([0,\pi ])\).

2.2 The variational setting: I

We choose two positive numbers \(\alpha<{\textstyle {\frac{1}{2}}}<\beta \) such that \(\alpha +\beta =1\), and consider the space \(E={X_\alpha }\times {Y_\beta }\). It is a separable Hilbert space, being endowed with the scalar product

$$\begin{aligned} \langle ({\tilde{x}}, y),({\tilde{u}}, v)\rangle _E=\langle {\tilde{x}},{\tilde{u}}\rangle _{{X_\alpha }}+ \langle y,v\rangle _{{Y_\beta }}, \end{aligned}$$

and the corresponding norm

$$\begin{aligned} \Vert ({\tilde{x}}, y)\Vert _E=\sqrt{\Vert {\tilde{x}}\Vert ^2_{{X_\alpha }}+\Vert y\Vert ^2_{{Y_\beta }}}. \end{aligned}$$

Let us introduce the torus

$$\begin{aligned} {{\mathbb {T}}}^N=({{\mathbb {R}}}/\tau _1{{\mathbb {Z}}})\times \dots \times ({{\mathbb {R}}}/\tau _N{{\mathbb {Z}}}). \end{aligned}$$
(9)

The solutions of problem (1) will be written as

$$\begin{aligned} (x(t),y(t))=({\tilde{x}}(t),y(t))+({\bar{x}},0), \end{aligned}$$

where \({\bar{x}}=\frac{1}{\pi }\int _0^\pi x(t)\,dt\), and we will assume that

$$\begin{aligned} ({\tilde{x}},y)\in E\quad \hbox {and}\quad ({\bar{x}},0)\in {{\mathbb {T}}}^N\times \{0\}\equiv {{\mathbb {T}}}^N. \end{aligned}$$

We will thus search for the solutions in the space \(E\times {{\mathbb {T}}}^N\).

Set \({\mathscr {M}}=E\times {{\mathbb {T}}}^N\) and let \(\psi :{\mathscr {M}}\rightarrow {{\mathbb {R}}}\) be the functional defined as

$$\begin{aligned} \psi (({\tilde{x}},y),{\bar{x}})=\int _0^\pi H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big )\,dt. \end{aligned}$$

It is well defined, since the function \(H:[0,\pi ]\times {{\mathbb {R}}}^{2N}\rightarrow {{\mathbb {R}}}\) is continuous and globally bounded, in view of the comments made at the beginning of this section. In the following, we will treat \({{\mathbb {T}}}^N\) as being lifted to \({{\mathbb {R}}}^N\), so \({\mathscr {M}}\) will often be identified with \(E\times {{\mathbb {R}}}^N\).

Proposition 2.9

The functional \(\psi :{\mathscr {M}}\rightarrow {{\mathbb {R}}}\) is continuously differentiable.

Proof

Let \((({\tilde{x}}_0,y_0),{\bar{x}}_0)\) be any point of \({\mathscr {M}}\equiv E\times {{\mathbb {R}}}^N\). Then, for every \((({\tilde{u}},v),{\bar{u}})\in {\mathscr {M}}\), by (4) and the Dominated Convergence Theorem,

$$\begin{aligned} &\lim _{s\rightarrow 0}\frac{\psi (({\tilde{x}}_0+s{\tilde{u}},y_0+sv),{\bar{x}}_0+s{\bar{u}})-\psi (({\tilde{x}}_0,y_0),{\bar{x}}_0)}{s}\\ &\quad =\int _0^\pi \big \langle \partial _x H\big (t,{\bar{x}}_0+{\tilde{x}}_0(t),y_0(t)\big ),{\bar{u}}+{\tilde{u}}(t)\big \rangle \,dt\\ &\qquad +\int _0^\pi \big \langle \partial _y H\big (t,{\bar{x}}_0+{\tilde{x}}_0(t),y_0(t)\big ),v(t)\big \rangle \,dt. \end{aligned}$$

We thus see that \(\psi \) is Gâteaux differentiable at \((({\tilde{x}}_0,y_0),{\bar{x}}_0)\), the above limit providing the value of the Gâteaux differential \(d_G\psi (({\tilde{x}}_0,y_0),{\bar{x}}_0)\) at \((({\tilde{u}},v),{\bar{u}})\).

Let us now verify that \(d_G\psi :{\mathscr {M}}\rightarrow {\mathscr {L}}({\mathscr {M}},{{\mathbb {R}}})\) is continuous at the point \((({\tilde{x}}_0,y_0),{\bar{x}}_0)\in {\mathscr {M}}\), starting from the first term in the above sum; i.e., we want to prove the continuity of the map \({\mathscr {F}}_1:{\mathscr {M}}\rightarrow {\mathscr {L}}({\mathscr {M}},{{\mathbb {R}}})\), defined as

$$\begin{aligned} {\mathscr {F}}_1(({\tilde{x}},y),{\bar{x}})(({\tilde{u}},v),{\bar{u}})= \int _0^\pi \big \langle \partial _x H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),{\bar{u}}+{\tilde{u}}(t)\big \rangle \,dt. \end{aligned}$$

Since \({X_\alpha }\) and \({Y_\beta }\) are continuously embedded in \(L^2(0,\pi )\), the map \({{\mathcal {Q}}}:E\times {{\mathbb {R}}}^N\rightarrow L^2(0,\pi )\times L^2(0,\pi )\), defined as

$$\begin{aligned} {{\mathcal {Q}}}(({\tilde{x}},y),{\bar{x}})=({\bar{x}}+{\tilde{x}},y), \end{aligned}$$

is continuous. By (4), the Nemytskii operator \({{\mathcal {N}}}:L^2(0,\pi )\times L^2(0,\pi )\rightarrow L^2(0,\pi )\), defined as

$$\begin{aligned} {{\mathcal {N}}}(x,y)(t)=\partial _x H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ), \end{aligned}$$

where \({\bar{x}}=\frac{1}{\pi }\int _0^\pi x(t)\,dt\) and \({\tilde{x}}(t)=x(t)-{\bar{x}}\), is a continuous function, as well (see, e.g., [6]). Finally, the linear map \({{\mathcal {R}}}:L^2(0,\pi )\rightarrow {\mathscr {L}}({\mathscr {M}},{{\mathbb {R}}})\), defined as

$$\begin{aligned} {{\mathcal {R}}}(f)(({\tilde{u}},v),{\bar{u}})=\int _0^\pi \langle f(t),{\bar{u}}+{\tilde{u}}(t)\rangle \,dt, \end{aligned}$$

is bounded, hence continuous. Since \({\mathscr {F}}_1={{\mathcal {R}}}\circ {{\mathcal {N}}}\circ {{\mathcal {Q}}}\), we have proved that \({\mathscr {F}}_1\) is continuous. Similarly one can prove the continuity of the map \({\mathscr {F}}_2:{\mathscr {M}}\rightarrow {\mathscr {L}}({\mathscr {M}},{{\mathbb {R}}})\), defined as

$$\begin{aligned} {\mathscr {F}}_2(({\tilde{x}},y),{\bar{x}})(({\tilde{u}},v),{\bar{u}})= \int _0^\pi \big \langle \partial _y H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),v(t)\big \rangle \,dt. \end{aligned}$$

Therefore, \(d_G\psi ={\mathscr {F}}_1+{\mathscr {F}}_2\) is continuous on \({\mathscr {M}}\).

Hence, \(\psi \) is Fréchet differentiable at \((({\tilde{x}}_0,y_0),{\bar{x}}_0)\), and its Fréchet differential \(d_F \psi (({\tilde{x}}_0,y_0),{\bar{x}}_0)\) coincides with \(d_G\psi (({\tilde{x}}_0,y_0),{\bar{x}}_0)\). The statement is thus proved. \(\square \)

Having identified \({\mathscr {M}}\) with \(E\times {{\mathbb {R}}}^N\), we can consider the gradient function \(\nabla \psi :{\mathscr {M}}\rightarrow {\mathscr {M}}\), so that

$$\begin{aligned} d_F\psi (({\tilde{x}},y),{\bar{x}})(({\tilde{u}},v),{\bar{u}})=\langle \nabla \psi (({\tilde{x}},y),{\bar{x}}),(({\tilde{u}},v),{\bar{u}})\rangle _{{\mathscr {M}}}, \end{aligned}$$

with the natural scalar product \(\langle \cdot ,\cdot \rangle _{{\mathscr {M}}}\).

Proposition 2.10

The gradient function \(\nabla \psi \) has a relatively compact image.

Proof

If

$$\begin{aligned} \nabla \psi (({\tilde{x}},y),{\bar{x}})=(({\tilde{w}},\zeta ),{\bar{w}}), \end{aligned}$$

then, for every \((({\tilde{u}},v),{\bar{u}})\in {\mathscr {M}}\),

$$\begin{aligned} \int _0^\pi \big \langle \partial _x H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),{\tilde{u}}(t)\big \rangle \,dt = \langle {\widetilde{w}},{\tilde{u}}\rangle _{{X_\alpha }}, \end{aligned}$$
(10)
$$\begin{aligned} \int _0^\pi \big \langle \partial _y H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),v(t)\big \rangle \,dt = \langle \zeta ,v\rangle _{{Y_\beta }}, \end{aligned}$$
(11)

and

$$\begin{aligned} \int _0^\pi \big \langle \partial _x H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),{\bar{u}}\big \rangle \,dt= \langle {\overline{w}},{\bar{u}}\rangle . \end{aligned}$$
(12)

Let

$$\begin{aligned} {{\mathscr {F}}}=\partial _x H\big (\cdot ,{\bar{x}}+{\tilde{x}}(\cdot ),y(\cdot )\big )\in L^2(0,\pi ). \end{aligned}$$

We can write

$$\begin{aligned} {{\mathscr {F}}}(t)\sim {{\mathscr {F}}}_0+\sum _{m=1}^{\infty }\widetilde{{\mathscr {F}}}_m\cos (mt),\quad \hbox { in }L^2(0,\pi ), \end{aligned}$$

for some \({{\mathscr {F}}}_0\) and \(\widetilde{{\mathscr {F}}}_m\in {{\mathbb {R}}}^N\). From (12) we see that \({\overline{w}}=\pi \,{{\mathscr {F}}}_0\), which is bounded, by (4); since \({\overline{w}}\) lies in the finite-dimensional space \({{\mathbb {R}}}^N\), this shows that \(\nabla _{{\bar{x}}}\psi \) has a relatively compact image. Let us also write

$$\begin{aligned} {\widetilde{w}}(t)\sim \sum _{m=1}^{\infty }{{\widetilde{w}}}_m\cos (mt),\quad \hbox { in }L^2(0,\pi ), \end{aligned}$$

for some \({{\widetilde{w}}}_m\in {{\mathbb {R}}}^N\). Selecting \({\tilde{u}}(t)=\cos (mt)\,e_i\) in (10), with \(i=1,\dots ,N\), we obtain

$$\begin{aligned} \widetilde{{\mathscr {F}}}_m=\frac{2}{\pi }m^{2\alpha }{\widetilde{w}}_m. \end{aligned}$$
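
In detail (a computation we spell out for convenience, \(e_i\) being the i-th canonical basis vector of \({{\mathbb {R}}}^N\)):

$$\begin{aligned} \frac{\pi }{2}\,\langle \widetilde{{\mathscr {F}}}_m,e_i\rangle =\int _0^\pi \langle {{\mathscr {F}}}(t),\cos (mt)\,e_i\rangle \,dt =\langle {\widetilde{w}},\cos (mt)\,e_i\rangle _{{X_\alpha }} =m^{2\alpha }\langle {\widetilde{w}}_m,e_i\rangle ,\quad i=1,\dots ,N, \end{aligned}$$

the first equality using \(\int _0^\pi \cos (kt)\cos (mt)\,dt=\frac{\pi }{2}\,\delta _{km}\), the second being (10) with \({\tilde{u}}(t)=\cos (mt)\,e_i\), whose only nonzero \({X_\alpha }\) coefficient sits at frequency m.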

Take \({{\mathscr {P}}}>1\) and \({{\mathscr {Q}}}>1\) such that \((1/{{\mathscr {P}}})+(1/{{\mathscr {Q}}})=1\) and \(2\alpha {{\mathscr {P}}}>1\), and set

$$\begin{aligned} S=\Bigg ({\sum _{m=1}^{\infty }}\;\frac{1}{m^{2\alpha {{\mathscr {P}}}}}\Bigg )^{1/{{\mathscr {P}}}},\qquad S_M=\Bigg (\sum _{m\ge M}\frac{1}{m^{2\alpha {{\mathscr {P}}}}}\Bigg )^{1/{{\mathscr {P}}}},\quad M\ge 1. \end{aligned}$$

We can write, using the Hölder Inequality,

$$\begin{aligned} \Vert {\widetilde{w}}\Vert _{{X_\alpha }}^2 =\sum _{m=1}^{\infty }m^{2\alpha }|{\widetilde{w}}_m|^2 =\sum _{m=1 }^{\infty } \frac{\pi ^2}{4}\frac{1}{m^{2\alpha }}|\widetilde{{\mathscr {F}}}_m|^2\le \frac{\pi ^2}{4}S\bigg (\sum _{m=1}^{\infty }|\widetilde{{\mathscr {F}}}_m|^{2{{\mathscr {Q}}}}\bigg )^{1/{{\mathscr {Q}}}}. \end{aligned}$$

By the Hausdorff–Young Inequality (7) with \(p=2{{\mathscr {Q}}}/(2{{\mathscr {Q}}}-1) \) and \(q=2{{\mathscr {Q}}}\),

$$\begin{aligned} \bigg (\sum _{m=1}^{\infty }|\widetilde{{\mathscr {F}}}_m|^{2{{\mathscr {Q}}}}\bigg )^{\frac{1}{2{{\mathscr {Q}}}}} &\le \left( \frac{2}{\pi }\int _0^\pi |{{\mathscr {F}}}(t)-{{\mathscr {F}}}_0|^{\frac{2{{\mathscr {Q}}}}{2{{\mathscr {Q}}}-1}}dt\right) ^{\frac{2{{\mathscr {Q}}}-1}{2{{\mathscr {Q}}}}}\\ &\le \left( \frac{2}{\pi }\int _0^\pi |{{\mathscr {F}}}(t)|^{\frac{2{{\mathscr {Q}}}}{2{{\mathscr {Q}}}-1}}dt\right) ^{\frac{2{{\mathscr {Q}}}-1}{2{{\mathscr {Q}}}}}+\left( \frac{2}{\pi }\int _0^\pi |{{\mathscr {F}}}_0|^{\frac{2{{\mathscr {Q}}}}{2{{\mathscr {Q}}}-1}}dt\right) ^{\frac{2{{\mathscr {Q}}}-1}{2{{\mathscr {Q}}}}} \end{aligned}$$

(we have used the triangle inequality in \(L^{\frac{2{{\mathscr {Q}}}}{2{{\mathscr {Q}}}-1}}(0,\pi )\)). Hence, by (4),

$$\begin{aligned} \Vert {\widetilde{w}}\Vert _{{X_\alpha }}^2\le 2^{\frac{2{{\mathscr {Q}}}-1}{{\mathscr {Q}}}}\pi ^2\,{\bar{c}}^2S. \end{aligned}$$

We have thus proved that the image of \(\nabla _{{\tilde{x}}}\psi \) is bounded in \({X_\alpha }\); precisely,

$$\begin{aligned} \nabla _{{\tilde{x}}}\psi ({\mathscr {M}})\subseteq B(0,\rho ),\quad \hbox { with }\quad \rho =2^{\frac{2{{\mathscr {Q}}}-1}{2{{\mathscr {Q}}}}}\pi \,{\bar{c}}\,\sqrt{S} \end{aligned}$$

(we write \(\rho \) here, since the letter R already denotes the constant in (3)).

In the same way we see that, for every \(M\ge 1\),

$$\begin{aligned} \sum _{m\ge M}m^{2\alpha }|{\widetilde{w}}_m|^2\le 2^{\frac{2{{\mathscr {Q}}}-1}{{\mathscr {Q}}}}\pi ^2\,{\bar{c}}^2S_M, \end{aligned}$$

thus showing that

$$\begin{aligned} \lim _{M\rightarrow +\infty }\bigg (\sum _{m\ge M}m^{2\alpha }|{\widetilde{w}}_m|^2\bigg )=0, \quad \hbox { uniformly in } (({\tilde{x}},y),{\bar{x}})\in {\mathscr {M}}. \end{aligned}$$

Hence, by Proposition 3.2, applied with the complete orthonormal system \((e_m^{(\alpha )})_{m\ge 1}\) of \({X_\alpha }\), we conclude that the image of \(\nabla _{{\tilde{x}}}\psi \) is relatively compact in \({X_\alpha }\).

In a similar way, using (11), one shows that the image of \(\nabla _{y}\psi \) is relatively compact, as well, thus ending the proof. \(\square \)

2.3 The variational setting: II

We want to define a continuous symmetric bilinear form \({{\mathcal {B}}}:E\times E\rightarrow {{\mathbb {R}}}\). We first define it on \(D\times D\), where

$$\begin{aligned} D=E\cap [C^1([0,\pi ])\times C^1([0,\pi ])]={\widetilde{C}}^1([0,\pi ])\times C^1_0([0,\pi ]). \end{aligned}$$

For each \(({\tilde{x}},y)\) and \(({\tilde{u}},v)\) in D, we set

$$\begin{aligned} {{\mathcal {B}}}(({\tilde{x}}, y),({\tilde{u}}, v))=\int _0^\pi \langle {\tilde{x}}',v\rangle -\int _0^\pi \langle {\tilde{u}},y'\rangle . \end{aligned}$$
(13)

Let us see that \({{\mathcal {B}}}:D\times D\rightarrow {{\mathbb {R}}}\) is symmetric. Indeed, since \(y(0)=y(\pi )=0\) and \(v(0)=v(\pi )=0\),

$$\begin{aligned} \int _0^\pi \langle {\tilde{x}}',v\rangle =-\int _0^\pi \langle {\tilde{x}},v'\rangle ,\qquad \int _0^\pi \langle {\tilde{u}},y'\rangle =-\int _0^\pi \langle {\tilde{u}}',y\rangle . \end{aligned}$$
(14)

Hence, \({{\mathcal {B}}}(({\tilde{x}}, y),({\tilde{u}}, v))=\int _0^\pi \langle {\tilde{u}}',y\rangle -\int _0^\pi \langle {\tilde{x}},v'\rangle ={{\mathcal {B}}}(({\tilde{u}}, v),({\tilde{x}}, y))\), proving the symmetry.

Let us now prove that, if we consider on \(D\times D\) the topology of \(E\times E\), then \({{\mathcal {B}}}:D\times D\rightarrow {{\mathbb {R}}}\) is continuous. We write

$$\begin{aligned} {\tilde{x}}(t)=\sum _{m=1}^\infty {\tilde{x}}_m \cos (mt),\qquad v(t)=\sum _{m=1}^\infty v_m\sin (mt). \end{aligned}$$

Then,

$$\begin{aligned} \bigg |\int _0^\pi \langle {\tilde{x}}',v\rangle \bigg | &=\frac{\pi }{2}\bigg |\sum _{m=1}^\infty m\langle {\tilde{x}}_m,v_m\rangle \bigg |\\ &=\frac{\pi }{2}\bigg |\sum _{m=1}^\infty \langle m^\alpha {\tilde{x}}_m,m^\beta v_m\rangle \bigg |\le \frac{\pi }{2}\Vert {\tilde{x}}\Vert _{{X_\alpha }}\Vert v\Vert _{{Y_\beta }}. \end{aligned}$$

Similarly one proves that

$$\begin{aligned} \bigg |\int _0^\pi \langle {\tilde{u}},y'\rangle \bigg |\le \frac{\pi }{2}\Vert {\tilde{u}}\Vert _{X_\alpha }\Vert y\Vert _{Y_\beta }, \end{aligned}$$

so that

$$\begin{aligned} |{{\mathcal {B}}}(({\tilde{x}}, y),({\tilde{u}}, v))|\le \frac{\pi }{2}\big (\Vert {\tilde{x}}\Vert _{{X_\alpha }}\Vert v\Vert _{{Y_\beta }}+\Vert {\tilde{u}}\Vert _{X_\alpha }\Vert y\Vert _{Y_\beta }\big )\le \pi \Vert ({\tilde{x}}, y)\Vert _E\,\Vert ({\tilde{u}}, v)\Vert _E. \end{aligned}$$

Then, since by Propositions 2.5 and 2.8 we know that D is a dense subspace of E, the function \({{\mathcal {B}}}:D\times D\rightarrow {{\mathbb {R}}}\) can be extended by continuity to \(E\times E\). We will still denote by \({{\mathcal {B}}}:E\times E\rightarrow {{\mathbb {R}}}\) this continuous (and therefore continuously differentiable) symmetric bilinear form.

Remark 2.11

It should be noticed that the equality in (13) still holds if \({\tilde{x}}\), y are in \(H^1(0,\pi )\) and \({\tilde{u}}\), v are in \(L^2(0,\pi )\). In view of the identities in (14), if \({\tilde{x}}\), y are in \(L^2(0,\pi )\) and \({\tilde{u}}\), v are in \(H^1(0,\pi )\) we can also write

$$\begin{aligned} {{\mathcal {B}}}(({\tilde{x}}, y),({\tilde{u}}, v))=\int _0^\pi \langle {\tilde{u}}',y\rangle -\int _0^\pi \langle {\tilde{x}},v'\rangle . \end{aligned}$$

Remark 2.12

When writing this paper, our first attempt was to define the function spaces by means of the classical Fourier expansions in both sines and cosines, but this led to rather technical proofs. We then realized that the systems \(\{\cos (mt)\}\) and \(\{\sin (mt)\}\), associated with \(X_\alpha \) and \(Y_\beta \) respectively, enjoy an important property: they diagonalize the bilinear form \({{\mathcal {B}}}\). For this reason, the computations now become very natural.
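
Concretely (a computation of ours illustrating the remark), for the basis functions of Propositions 2.3 and 2.6, and \(\alpha +\beta =1\),

$$\begin{aligned} {{\mathcal {B}}}\big ((e_m^{(\alpha )}e_i,0),(0,\epsilon _{m'}^{(\beta )}e_j)\big ) =\int _0^\pi (e_m^{(\alpha )})'(t)\,\epsilon _{m'}^{(\beta )}(t)\,dt\;\langle e_i,e_j\rangle =-\frac{\pi }{2}\,m^{1-\alpha -\beta }\,\delta _{mm'}\,\delta _{ij} =-\frac{\pi }{2}\,\delta _{mm'}\,\delta _{ij}, \end{aligned}$$

since \((e_m^{(\alpha )})'(t)=-m^{1-\alpha }\sin (mt)\) and \(\int _0^\pi \sin (mt)\sin (m't)\,dt=\frac{\pi }{2}\,\delta _{mm'}\): distinct frequencies do not interact, and equal frequencies are coupled through the constant \(-\frac{\pi }{2}\).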

Let \(\varphi :{\mathscr {M}}\rightarrow {{\mathbb {R}}}\) be the functional defined as

$$\begin{aligned} \varphi (({\tilde{x}},y),{\bar{x}})={\textstyle {\frac{1}{2}}}\,{{\mathcal {B}}}(({\tilde{x}}, y),({\tilde{x}}, y))-\psi (({\tilde{x}},y),{\bar{x}}). \end{aligned}$$

By Proposition 2.9, it is continuously differentiable.

Proposition 2.13

If \((({\tilde{x}},y),{\bar{x}})\in {\mathscr {M}}\) is a critical point for \(\varphi \), then \(z(t)=({\bar{x}}+{\tilde{x}}(t),y(t))\) is a solution of (1).

Proof

Let \((({\tilde{x}},y),{\bar{x}})\in E\times {{\mathbb {T}}}^N\) be a critical point of \(\varphi \). Again we treat \({{\mathbb {T}}}^N\) as lifted to \({{\mathbb {R}}}^N\). Then, for every \((({\tilde{u}},v),{\bar{u}})\in E\times {{\mathbb {T}}}^N\),

$$\begin{aligned} {{\mathcal {B}}}(({\tilde{x}}, y),({\tilde{u}}, v))=&\int _0^\pi \big \langle \partial _x H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),{\bar{u}}+{\tilde{u}}(t)\big \rangle \,dt\\ &+\int _0^\pi \big \langle \partial _y H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),v(t)\big \rangle \,dt. \end{aligned}$$

For every \(\phi \in C^1_0([0,\pi ])\), we write \(\phi (t)={\bar{\phi }}+{\tilde{\phi }}(t)\), with \({\bar{\phi }}=\frac{1}{\pi }\int _0^\pi \phi (t)\,dt\). Taking \(v=0\) and \({\bar{u}}={\bar{\phi }}\), \({\tilde{u}}={\tilde{\phi }}\), by Remark 2.11 we see that

$$\begin{aligned} \int _0^\pi \langle y(t),\phi '(t)\rangle \,dt=\int _0^\pi \big \langle \partial _x H\big (t,{\bar{x}}+{\tilde{x}}(t),y(t)\big ),\phi (t)\big \rangle \,dt. \end{aligned}$$

Hence, the equality

$$\begin{aligned} y'=-\partial _x H\big (\cdot ,{\bar{x}}+{\tilde{x}}(\cdot ),y(\cdot )\big ) \end{aligned}$$

holds in the sense of distributions. Then, by (4), y belongs to \(W^{1,\infty }(0,\pi )\).

On the other hand, taking \({\bar{u}}=0\), \({\tilde{u}}=0\) and \(v=\phi \), after noticing that \(x(t)={\bar{x}}+{\tilde{x}}(t)\) differs from \({\tilde{x}}(t)\) by a constant, we see that the equality

$$\begin{aligned} x'=\partial _y H\big (\cdot ,{\bar{x}}+{\tilde{x}}(\cdot ),y(\cdot )\big ) \end{aligned}$$

holds in the sense of distributions. Again by (4), this yields that x belongs to \(W^{1,\infty }(0,\pi )\).

Since both x(t) and y(t) are continuous, from the differential equations they satisfy we deduce that they are continuously differentiable. \(\square \)

The continuous symmetric bilinear form \({{\mathcal {B}}}:E\times E\rightarrow {{\mathbb {R}}}\) generates a bounded selfadjoint operator \(L:E\rightarrow E\) such that

$$\begin{aligned} {{\mathcal {B}}}(z,\zeta )=\langle Lz,\zeta \rangle _E,\quad \hbox { for every }(z,\zeta )\in E\times E. \end{aligned}$$

Proposition 2.14

The operator \(L:E\rightarrow E\) is invertible, with a continuous inverse.

Proof

Let us first prove that

$$\begin{aligned} \Vert Lz\Vert _E=\frac{\pi }{2}\,\Vert z\Vert _E,\quad \hbox { for every }z\in E. \end{aligned}$$
(15)

Let \(Lz=\omega \), with \(z=({\tilde{x}},y)\in E\) and \(\omega =({\tilde{p}},q)\in E\). We know that

$$\begin{aligned} {{\mathcal {B}}}(z,\zeta )=\langle \omega ,\zeta \rangle _E,\quad \hbox { for every }\zeta \in E. \end{aligned}$$

If \(\zeta =({\tilde{u}},v) \in D\), by Remark 2.11 we can write

$$\begin{aligned} \int _0^\pi \langle {\tilde{u}}',y\rangle -\int _0^\pi \langle {\tilde{x}},v'\rangle =\langle {\tilde{p}},{\tilde{u}}\rangle _{X_\alpha }+\langle q,v\rangle _{Y_\beta }. \end{aligned}$$
(16)

Taking \(v=0\) and writing the expansions

$$\begin{aligned} {\tilde{u}}(t)\sim \sum _{m=1}^\infty {\tilde{u}}_m\cos (mt),\; {\tilde{p}}(t)\sim \sum _{m=1}^\infty {\tilde{p}}_m\cos (mt),\; y(t)\sim \sum _{m=1}^\infty y_m\sin (mt), \end{aligned}$$

we get

$$\begin{aligned} -\frac{\pi }{2}\sum _{m=1}^\infty m\langle {\tilde{u}}_m,y_m\rangle = \sum _{m=1}^\infty m^{2\alpha }\langle {\tilde{p}}_m,{\tilde{u}}_m\rangle , \end{aligned}$$

whence, since the coefficients \({\tilde{u}}_m\) are arbitrary,

$$\begin{aligned} -\frac{\pi }{2}my_m= m^{2\alpha }{\tilde{p}}_m. \end{aligned}$$

Then, using the fact that \(\alpha +\beta =1\), we conclude that

$$\begin{aligned} \Vert y\Vert _{Y_\beta }=\frac{2}{\pi }\,\Vert {\tilde{p}}\Vert _{X_\alpha }. \end{aligned}$$
(17)
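Spelling out this step: the coefficient identity above gives \({\tilde{p}}_m=-\frac{\pi }{2}\,m^{1-2\alpha }y_m\), so that, expressing the norms through Fourier coefficients as in (16) and using \(2-2\alpha =2\beta \),

$$\begin{aligned} \Vert {\tilde{p}}\Vert _{X_\alpha }^2=\sum _{m=1}^\infty m^{2\alpha }|{\tilde{p}}_m|^2=\frac{\pi ^2}{4}\sum _{m=1}^\infty m^{2-2\alpha }|y_m|^2=\frac{\pi ^2}{4}\sum _{m=1}^\infty m^{2\beta }|y_m|^2=\frac{\pi ^2}{4}\,\Vert y\Vert _{Y_\beta }^2. \end{aligned}$$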

On the other hand, taking \({\tilde{u}}=0\) in (16) and proceeding in the same way as above, we find that

$$\begin{aligned} \Vert {\tilde{x}}\Vert _{X_\alpha }=\frac{2}{\pi }\,\Vert q\Vert _{Y_\beta }. \end{aligned}$$
(18)

By the definition of the norm in E, the equalities (17) and (18) imply (15).
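Indeed, writing the squared norm of \(E\) as the sum of the squared component norms in \(X_\alpha \) and \(Y_\beta \),

$$\begin{aligned} \Vert Lz\Vert _E^2=\Vert {\tilde{p}}\Vert _{X_\alpha }^2+\Vert q\Vert _{Y_\beta }^2=\frac{\pi ^2}{4}\big (\Vert y\Vert _{Y_\beta }^2+\Vert {\tilde{x}}\Vert _{X_\alpha }^2\big )=\frac{\pi ^2}{4}\,\Vert z\Vert _E^2. \end{aligned}$$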

As a consequence of (15), we have that \(\ker L=\{0\}\). Let us prove that the image of L is closed. Let \((\omega _n)_n\) be a sequence in the image of L having a limit \(\omega \in E\), and let \((z_n)_n\) be a sequence in E such that \(Lz_n=\omega _n\). By (15), for every \(m,n\) we have that

$$\begin{aligned} \Vert z_m-z_n\Vert _E=\frac{2}{\pi }\Vert Lz_m-Lz_n\Vert _E=\frac{2}{\pi }\Vert \omega _m-\omega _n\Vert _E. \end{aligned}$$

Hence, \((z_n)_n\) is a Cauchy sequence, thus converging to some \(z\in E\). Passing to the limit in \(Lz_n=\omega _n\) we obtain \(Lz=\omega \), proving that \(\omega \) belongs to the image of L. We have thus proved that \(\textrm{Im}(L)\), the image of L, is closed.

Since L is selfadjoint, with \(\ker L=\{0\}\), and its image is closed, we conclude that

$$\begin{aligned} \textrm{Im}(L)=\ker (L)^\perp =\{0\}^\perp =E, \end{aligned}$$

so that \(L:E\rightarrow E\) is bijective. Its inverse \(L^{-1}:E\rightarrow E\) is continuous, by (15). \(\square \)

We can finally apply Theorem 2.1 to obtain \(N+1\) critical points of the functional \(\varphi \). By Proposition 2.13, these critical points determine the \(N+1\) solutions (xy) of (1) we are looking for.

The proof is thus completed.

3 A symplectic approach: the case of one degree of freedom

In this section we are interested in the two-point boundary value problem defined by (1) in one degree of freedom, that is, \(N=1\). In addition, it will be assumed that the Hamiltonian function is smooth, say \(H\in C^{0,2}([a,b]\times {{\mathbb {R}}}^2)\). We still assume that A1 and A2 hold; for simplicity, let \(\tau _1=2\pi \) and \([a,b]=[0,1]\). Moreover, after a truncation argument, as explained above, it is not restrictive to assume that H also satisfies (4), with \([0,\pi ]\) replaced by [0, 1].

Given \((x_0,y_0)\in {{\mathbb {R}}}^2\), the solution of the differential system in (1) with initial conditions \(x(0)=x_0\), \(y(0)=y_0\) will be denoted by \((x(t;x_0,y_0), y(t;x_0,y_0))\). The Poincaré map is then defined as

$$\begin{aligned} {{\mathscr {P}}}:{{\mathbb {R}}}^2\rightarrow {{\mathbb {R}}}^2,\quad (x_0,y_0)\mapsto (x_1,y_1), \end{aligned}$$

with

$$\begin{aligned} x_1=x(1;x_0,y_0),\quad y_1=y(1;x_0,y_0). \end{aligned}$$

In view of A1, the identities below hold:

$$\begin{aligned} x(t;x_0+2\pi ,y_0)=x(t;x_0,y_0)+2\pi ,\quad y(t;x_0+2\pi ,y_0)=y(t;x_0,y_0), \end{aligned}$$

and we can interpret the variable x as an angle. We consider the quotient space \({{\mathbb {T}}}={{\mathbb {R}}}/2\pi {{\mathbb {Z}}}\), with projection \(p:{{\mathbb {R}}}\rightarrow {{\mathbb {T}}}\) given by \(p(\alpha )={\bar{\alpha }}=\alpha +2\pi {{\mathbb {Z}}}\).

The Poincaré map induces a diffeomorphism

$$\begin{aligned} \overline{{\mathscr {P}}}:{{\mathbb {T}}}\times {{\mathbb {R}}}\rightarrow {{\mathbb {T}}}\times {{\mathbb {R}}},\quad ({\bar{x}}_0,y_0)\mapsto ({\bar{x}}_1,y_1). \end{aligned}$$

It is well known that this map is exact symplectic, meaning that the differential form \(y_1dx_1-y_0dx_0\) is exact in the cylinder \({{\mathbb {T}}}\times {{\mathbb {R}}}\) (see, e.g. [5, Appendix 1]). As a consequence we have the identity for two-forms \(dy_1\wedge dx_1=dy_0\wedge dx_0\), and \(\overline{{\mathscr {P}}}\) preserves Haar measure on the cylinder.
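As a quick numerical illustration of this measure-preservation property, the sketch below integrates a sample system with a classical RK4 scheme and checks that the Jacobian determinant of the Poincaré map equals 1. The Hamiltonian \(H(t,x,y)=\tfrac{1}{2}y^2+\big (1+\tfrac{1}{2}\sin (2\pi t)\big )\cos x\), as well as all function names, are our own illustrative choices, not taken from the paper; the Hamiltonian is \(2\pi \)-periodic in x, as required by A1.

```python
import numpy as np

# Illustrative Hamiltonian (our own choice, 2*pi-periodic in x, as in A1):
#     H(t, x, y) = y**2 / 2 + (1 + 0.5*sin(2*pi*t)) * cos(x)

def vector_field(t, z):
    """(x', y') = (dH/dy, -dH/dx)."""
    x, y = z
    w = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)
    return np.array([y, w * np.sin(x)])

def poincare_map(x0, y0, n_steps=2000):
    """The map (x0, y0) -> (x(1), y(1)), computed with classical RK4."""
    h = 1.0 / n_steps
    z, t = np.array([x0, y0], dtype=float), 0.0
    for _ in range(n_steps):
        k1 = vector_field(t, z)
        k2 = vector_field(t + h / 2, z + h / 2 * k1)
        k3 = vector_field(t + h / 2, z + h / 2 * k2)
        k4 = vector_field(t + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

def jacobian_det(x0, y0, eps=1e-6):
    """Determinant of the Jacobian of the Poincare map, computed by
    central finite differences; area preservation means it equals 1."""
    dx = (poincare_map(x0 + eps, y0) - poincare_map(x0 - eps, y0)) / (2 * eps)
    dy = (poincare_map(x0, y0 + eps) - poincare_map(x0, y0 - eps)) / (2 * eps)
    return dx[0] * dy[1] - dx[1] * dy[0]
```

Up to the discretization and finite-difference error, `jacobian_det` returns 1 at every point, and the translation identities \(x(t;x_0+2\pi ,y_0)=x(t;x_0,y_0)+2\pi \), \(y(t;x_0+2\pi ,y_0)=y(t;x_0,y_0)\) can be checked on the numerical flow as well.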

Indeed, \(\overline{{\mathscr {P}}}\) satisfies a more subtle property. Given a Jordan curve \(\Gamma \subseteq {{\mathbb {T}}}\times {{\mathbb {R}}}\) which is \(C^1\), regular and non-contractible, the image \(\Gamma _1=\overline{{\mathscr {P}}}(\Gamma )\) enjoys the same properties. Let us fix \(y_0\in {{\mathbb {R}}}\) such that \(\Gamma \) and \(\Gamma _1\) both lie in \(\{y>y_0\}\), and let \({{\mathcal {A}}}\) and \({{\mathcal {A}}}_1\) be the bounded components of \(\{y>y_0\}\setminus \Gamma \) and \(\{y>y_0\}\setminus \Gamma _1\), respectively. Then \(\mu ({{\mathcal {A}}})=\mu ({{\mathcal {A}}}_1)\), where \(\mu \) denotes the Haar measure. To prove this, it is sufficient to apply the Stokes–Cartan Theorem to the differential form \(\eta =y\,dx\), so as to obtain

$$\begin{aligned} \mu ({{\mathcal {A}}})=\int _{{\mathcal {A}}}d\eta =\int _{\Gamma }\eta -\int _{\{y=y_0\}}\eta , \end{aligned}$$

and

$$\begin{aligned} \mu ({{\mathcal {A}}}_1)=\int _{{{\mathcal {A}}}_1}d\eta =\int _{\Gamma _1}\eta -\int _{\{y=y_0\}}\eta . \end{aligned}$$

Since \(\overline{{\mathscr {P}}}^{\,*}\eta -\eta \) is exact,

$$\begin{aligned} \int _{\Gamma _1}\eta =\int _{\Gamma }\overline{{\mathscr {P}}}^{\,*}\eta =\int _\Gamma \eta . \end{aligned}$$

From this property we deduce a well-known principle: given any Jordan \(C^1\)-curve \(\Gamma \) with the previous properties, the intersection of \(\Gamma \) with \(\Gamma _1\) contains at least two points. The proof of Theorem 1.1 for \(N=1\) and H smooth then follows from this principle, with \(\Gamma =\{({\bar{x}},0):{\bar{x}}\in {{\mathbb {T}}}\}\).
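For \(\Gamma =\{({\bar{x}},0):{\bar{x}}\in {{\mathbb {T}}}\}\), one concrete way to observe the two intersections numerically is to note that each sign change of the \(2\pi \)-periodic function \(x_0\mapsto y(1;x_0,0)\) brackets a zero, i.e. an initial point whose solution satisfies \(y(0)=0=y(1)\). The sketch below counts these sign changes over one period; the Hamiltonian is an illustrative choice of our own, not taken from the paper.

```python
import numpy as np

# Illustrative Hamiltonian (our own choice, 2*pi-periodic in x, as in A1):
#     H(t, x, y) = y**2 / 2 + (1 + 0.5*sin(2*pi*t)) * cos(x)

def flow(x0, y0, n_steps=1000):
    """RK4 integration of x' = dH/dy, y' = -dH/dx over t in [0, 1]."""
    def f(t, z):
        x, y = z
        w = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)
        return np.array([y, w * np.sin(x)])  # (dH/dy, -dH/dx)

    h = 1.0 / n_steps
    z, t = np.array([x0, y0], dtype=float), 0.0
    for _ in range(n_steps):
        k1 = f(t, z)
        k2 = f(t + h / 2, z + h / 2 * k1)
        k3 = f(t + h / 2, z + h / 2 * k2)
        k4 = f(t + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

def sign_changes_on_circle(n_samples=64):
    """Count sign changes of x0 -> y(1; x0, 0) over one period of x0.

    The small offset 0.01 avoids sampling the rest points x0 = 0 and
    x0 = pi exactly, where y(1; x0, 0) vanishes identically for this
    particular Hamiltonian.
    """
    xs = 0.01 + np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ys = np.array([flow(x0, 0.0)[1] for x0 in xs])
    # compare each sample with the next one, cyclically
    return int(np.sum(np.sign(ys) != np.sign(np.roll(ys, -1))))
```

In agreement with the principle above, at least two sign changes are found, each one corresponding to a solution of (1) in the case \(N=1\).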

It is interesting to observe that the same ideas can be applied to very general nonlinear boundary value problems. Assume that \(\sigma :{{\mathbb {T}}}\times {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) is a \(C^1\)-function such that 0 is a regular value and

$$\begin{aligned} \Gamma =\{({\bar{x}},y)\in {{\mathbb {T}}}\times {{\mathbb {R}}}:\sigma ({\bar{x}},y)=0\} \end{aligned}$$

defines a non-contractible Jordan curve. We are interested in the problem

$$\begin{aligned} \left\{ \begin{array}{ll} x'=\partial _y H(t,x,y),&{}\quad y'=-\partial _x H(t,x,y),\\ \sigma ({\bar{x}}(0),y(0))=0=\sigma ({\bar{x}}(1),y(1)).&{}\\ \end{array} \right. \end{aligned}$$
(19)

Let us keep assumption A1, with \(\tau _1=2\pi \) and \([a,b]=[0,1]\), and replace A2 by

\(A2'.\) The solutions of the differential system in (19), with initial conditions lying in \(\Gamma \), are defined on the whole time interval [0, 1].

Theorem 3.1

Under assumptions A1 and \(A2'\), problem (19) has at least two geometrically distinct solutions.

As a particular case, we can deal with a problem like

$$\begin{aligned} \left\{ \begin{array}{ll} x'=\partial _y H(t,x,y),&{}\quad y'=-\partial _x H(t,x,y),\\ y(0)=\Sigma (x(0)),&{}\quad y(1)=\Sigma (x(1)), \end{array} \right. \end{aligned}$$

where \(\Sigma :{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) is any \(2\pi \)-periodic \(C^1\)-function. Indeed, taking \(\sigma (x,y)=y-\Sigma (x)\), we can apply Theorem 3.1 and get the existence of at least two geometrically distinct solutions.