1 Introduction

Denote by \(\mathbb{N}\), \(\mathbb{Z}\), \(\mathbb{R}^{*}\), \(\mathbb{R}\) the sets of all positive integers, integers, nonnegative real numbers and real numbers, respectively.

In the past 40 years, many authors have studied the existence of periodic solutions to classical Hamiltonian systems,

$$ \ddot{x}+V^{\prime }(x)=0, \quad x\in \mathbb{R}^{N}, $$
(1)

where \(N\in \mathbb{N}\) and \(\mathbb{R}^{N}\) is the set of N-tuples of real numbers. Suppose that

(V1):

\(V\in C^{1}(\mathbb{R}^{N},\mathbb{R})\) and \(V(x)\ge 0\) for all \(x\in \mathbb{R}^{N}\);

(V2):

(AR-condition) there exist \(\alpha >2\) and \(r_{0}>0\) such that

$$ 0< \alpha V(x)\le \bigl(V^{\prime }(x),x\bigr), \quad \forall \vert x \vert \ge r_{0}; $$
(V3):

\(V(x)=o(|x|^{2})\), as \(|x|\rightarrow 0\).

In 1978, Rabinowitz (cf. [16]) proved that, for any \(T>0\), system (1) admits a T-periodic solution under the assumptions (V1)–(V3). He conjectured that such a solution has T as its minimal period; this is now called the Rabinowitz conjecture. Since then many mathematicians have devoted themselves to resolving this conjecture. Below we describe some important contributions related to it.

In 1985, Ekeland and Hofer (cf. [3]) proved that, for any \(T>0\), there exists a T-periodic solution to system (1) with prescribed minimal period T when V is strictly convex with some additional conditions. For more results in this direction, we refer to [2, 7, 8] and the references therein.

Let us mention the approach of Long (cf. [10]) in which the convexity hypothesis was replaced by the following conditions:

(V1′):

\(V\in C^{2}(\mathbb{R}^{N},\mathbb{R})\), \(V(x)\ge 0\), \(\forall x\in \mathbb{R}^{N}\);

(V4):

V is even, i.e., \(V(-x)=V(x)\), \(\forall x\in \mathbb{R}^{N}\).

By developing the Maslov-type index theory, Long showed that, under the assumptions (V1′), (V2)–(V4), system (1) has a non-constant T-periodic solution with minimal period T or \(T/3\). On the other hand, if V satisfies the conditions (V1′), (V2) and (V3), it was proved in [11] that system (1) admits a non-constant T-periodic solution with minimal period \(T/k\) for some integer k belonging to \([1,N+2]\). We refer the interested reader to [12, 13] for more results in this direction.

In 1997, Fei and Wang (cf. [5]) studied system (1) with \(V(x)=\frac{1}{2}h_{0}x\cdot x+\widetilde{V}(x)\), where \(h_{0}\) is a real positive semi-definite symmetric matrix. Assume that \(\widetilde{V}\) satisfies (V1′), (V3)–(V4) and the following assumptions:

(V2′):

There exist constants \(\mu >2\), \(r_{0}>0\), \(0\le \beta \le 2\), and \(d\ge 0\) such that

$$ \mu V(x)-V^{\prime }(x)\cdot x\le d \vert x \vert ^{\beta },\quad \forall \vert x \vert \ge r_{0}; $$
(V5):

\(V(x)/|x|^{2}\rightarrow \infty \), as \(|x|\rightarrow \infty \),

where V is replaced by \(\widetilde{V}\). Then system (1) has a T-periodic solution with minimal period \(T/k\) for some odd integer k satisfying \(1\le k\le 2(i_{T}(h_{0})+v_{T}(h_{0}))+3\). Here \(i_{T}(h_{0})\) and \(v_{T}(h_{0})\) denote the dimensions of the negative space and the null space, respectively, of a self-adjoint operator associated with \(h_{0}\). Furthermore, if V satisfies (V1′), (V2)–(V4) and its Hessian matrix \(V^{\prime \prime }(x)\) is positive semi-definite, Fei and Wang (cf. [4]) showed that system (1) admits a non-constant T-periodic solution with minimal period T. We refer to [6] for other similar results.

A different kind of hypothesis is the so-called strongly global AR-condition.

(V2′′):

(strongly global AR-condition) There exists a constant \(\theta >1\) such that

$$ 0< \theta \bigl(V^{\prime }(x),x\bigr)\le \bigl(V^{\prime \prime }(x)x,x \bigr),\quad \forall x \in \mathbb{R}^{N}\setminus \{0\}. $$
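
As a simple illustration (an example added here for orientation, not taken from the cited works), the model potential \(V(x)=\frac{1}{\alpha } \vert x \vert ^{\alpha }\) with \(\alpha >2\) satisfies (V2′′) with \(\theta =\alpha -1>1\), since a direct computation gives

$$ \bigl(V^{\prime }(x),x\bigr)= \vert x \vert ^{\alpha }\quad \mbox{and}\quad \bigl(V^{\prime \prime }(x)x,x \bigr)=(\alpha -1) \vert x \vert ^{\alpha },\quad \forall x\in \mathbb{R}^{N} \setminus \{0\}. $$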

In 2010, Xiao (cf. [25]) proved that, if V satisfies the conditions (V1′), (V2′′) and (V4), then system (1) admits a non-constant T-periodic solution with minimal period T. Under the same assumptions as in [25], Krawcewicz, Lv and Xiao (cf. [9]) showed the existence of multiple T-periodic solutions with common minimal period T. Recently, a variant of this condition, the so-called ARS condition, was introduced by Souissi in [18]. The ARS condition was used to study the existence of periodic solutions to Hamiltonian systems when it is imposed locally (cf. [18]) or globally (cf. [19, 20]). For more related results under the strongly global AR condition, we refer to [1, 14, 24] and the references therein.

In this article, we weaken the condition (V2′′). Assume that V satisfies (V1), (V4) and the following hypotheses:

(V3′):

\(V^{\prime }(x)=o(|x|)\), as \(x\rightarrow 0\) in \(\mathbb{R}^{N}\),

(V6):

there exist \(p>2\) and \(C>0\) such that \(|V^{\prime }(x)|\le C(1+|x|^{p-1})\) for all \(x\in \mathbb{R}^{N}\),

(V7):

for any \(x\in \mathbb{R}^{N}\) with \(|x|=1\), the map \(s\mapsto (V^{\prime }(sx),x)/s\) is strictly increasing on \((0,\infty )\),

(V7′):

for any \(x\in \mathbb{R}^{N}\) with \(|x|=1\), the map \(s\mapsto (V^{\prime }(sx),x)/s\) is non-decreasing on \((0,\infty )\).

The hypotheses (V7) and (V7′) are the so-called strictly monotonic condition and the non-decreasing condition, respectively. Under the strictly monotonic condition, there exists a homeomorphism between the Nehari manifold and the unit sphere of a subspace. By making use of the Nehari manifold method, we can prove that, for any \(T>0\), there exists a ground state solution to system (1). Recall that a solution is called a ground state solution if its energy is minimal among all nontrivial solutions (cf. [21]). Also, we can prove that such a periodic solution has T as its minimal period.
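
As an illustration of the new hypotheses (again an example added for orientation only), the model potential \(V(x)=\frac{1}{\alpha } \vert x \vert ^{\alpha }\) with \(\alpha >2\) satisfies (V1), (V3′) and (V4)–(V7); indeed, for \(|x|=1\) one has

$$ \frac{(V^{\prime }(sx),x)}{s}=s^{\alpha -2},\quad \forall s>0, $$

which is strictly increasing since \(\alpha >2\). A potential that coincides with a quadratic function on some annulus, while remaining suitably superquadratic elsewhere, typically satisfies (V7′) but not (V7), since the map above is then constant on the corresponding range of s.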

However, under the non-decreasing condition, such a homeomorphism no longer exists. Hence the Nehari manifold method cannot be applied directly to study the existence of T-periodic solutions. There are several approaches to dealing with the non-decreasing assumption. For example, by finding a minimizing Cerami sequence, Tang (cf. [22]) proved the existence of periodic solutions to asymptotically periodic Schrödinger equations. Here, we adopt a different approach. Inspired by [15] (see Theorem 3.1 therein), we make use of a perturbation technique to study the existence of T-periodic solutions to system (1). We show that there exists a sequence of critical points of perturbed variational functionals which converges strongly to a critical point of the original functional; this limit corresponds to a T-periodic solution to system (1). Moreover, we can prove that such a solution has T as its minimal period. Our main results are presented below.

Theorem 1.1

Assume that V satisfies (V1), (V3′) and (V4)–(V7). Then, for any given positive constant T, system (1) admits a non-constant T-periodic solution with minimal period T.

Corollary 1.1

Assume that V satisfies (V1′), (V2′′) and (V4). Then, for any given positive constant T, system (1) admits a non-constant T-periodic solution with minimal period T.

Remark 1.1

Corollary 1.1 is the main result proved in [25].

Theorem 1.2

Assume that V satisfies (V1), (V3′), (V4)–(V6) and (V7′). Then, for any given positive constant T, system (1) admits a non-constant T-periodic solution with minimal period T.

The rest of this article is organized as follows. In Sect. 2, we establish the variational functional corresponding to system (1) and state some useful lemmas. In Sect. 3, we make use of the Nehari manifold method to prove Theorem 1.1 and Corollary 1.1. In Sect. 4, we use the perturbation technique to prove Theorem 1.2.

2 Preliminaries

Given \(T>0\), let \(S_{T}={\mathbb{R}}/(T{\mathbb{Z}})\). Denote by \(\mathbb{H}^{1}=W^{1, 2}(S_{T}, \mathbb{R}^{N})\) the space of functions \(x\in L^{2}([0, T],\mathbb{R}^{N})\) having a weak derivative \(\dot{x}\in L^{2}([0, T], \mathbb{R}^{N})\), equipped with the usual norm

$$ \Vert x \Vert _{\mathbb{H}^{1}}^{2}= \int _{0}^{T}\bigl( \vert \dot{x} \vert ^{2}+ \vert x \vert ^{2}\bigr)\,dt, \quad \forall x\in \mathbb{H}^{1}, $$

where \(|\cdot |\) denotes the standard norm in \(\mathbb{R}^{N}\). Then \(\mathbb{H}^{1}\) is a Hilbert space with the inner product

$$ \langle x,y \rangle_{\mathbb{H}^{1}}= \int _{0}^{T}\bigl[(\dot{x},\dot{y})+(x,y) \bigr]\,dt, \quad \forall x, y \in \mathbb{H}^{1}, $$

where \((\cdot , \cdot )\) denotes the standard inner product in \(\mathbb{R}^{N}\). Any \(x\in \mathbb{H}^{1}\) admits a Fourier expansion

$$ x(t)=a_{0}+\sum_{k=1}^{+\infty } \biggl(a_{k}\cos \frac{2k\pi t}{T}+b_{k} \sin \frac{2k\pi t}{T}\biggr), $$

where \(a_{0}, a_{k}, b_{k}\in \mathbb{R}^{N}\), \(k=1,2, \ldots \) .
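
In terms of this expansion, Parseval's identity gives the standard formula

$$ \Vert x \Vert _{\mathbb{H}^{1}}^{2}=T \vert a_{0} \vert ^{2}+\frac{T}{2}\sum_{k=1}^{+ \infty }\biggl(1+\frac{4\pi ^{2}k^{2}}{T^{2}} \biggr) \bigl( \vert a_{k} \vert ^{2}+ \vert b_{k} \vert ^{2}\bigr), $$

which will be convenient when estimating the norms introduced below.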

The variational functional corresponding to the system (1) is

$$ \varphi (x)= \int _{0}^{T}\biggl[\frac{1}{2} \vert \dot{x} \vert ^{2}-V(x)\biggr]\,dt, \quad \forall x\in \mathbb{H}^{1}. $$
(2)

Lemma 2.1

([17])

Assume that V satisfies (V1), (V5), (V6). Then φ is continuously differentiable on \(\mathbb{H}^{1}\) and

$$ \bigl\langle \varphi ^{\prime }(x),y\bigr\rangle _{\mathbb{H}^{1}}= \int _{0}^{T}\bigl[(\dot{x}, \dot{y})- \bigl(V^{\prime }(x),y\bigr)\bigr]\,dt, \quad \forall x,y\in \mathbb{H}^{1}. $$

Set \(\phi (x)=\int _{0}^{T}V(x)\,dt\). Then ϕ is weakly continuous and \(\phi ^{\prime }:\mathbb{H}^{1}\rightarrow \mathbb{H}^{1}\) is compact.

Define a subspace \(\mathbb{E}\) of \(\mathbb{H}^{1}\) by setting

$$ \mathbb{E}=\bigl\{ x\in \mathbb{H}^{1}| x \mbox{ is odd in } t\bigr\} \equiv \Biggl\{ x\in \mathbb{H}^{1}\Big| x(t)=\sum _{k=1}^{\infty }b_{k} \sin \frac{2\pi kt}{T}, b_{k}\in \mathbb{R}^{N}\Biggr\} . $$

Obviously, \(\mathbb{E}\) is a closed subspace of \(\mathbb{H}^{1}\) and \(\mathbb{R}^{N}\cap \mathbb{E}=\{0\}\). Define an inner product \(\langle \cdot , \cdot \rangle\) on \(\mathbb{E}\) by setting

$$ \langle x,y\rangle = \int _{0}^{T}(\dot{x},\dot{y})\,dt,\quad \forall x, y \in \mathbb{E}. $$
(3)

The norm \(\|\cdot \|\) on \(\mathbb{E}\), induced by the inner product (3), is

$$ \Vert x \Vert ^{2}= \int _{0}^{T} \vert \dot{x} \vert ^{2}\,dt, \quad \forall x\in \mathbb{E}. $$

It is well known that \(\|\cdot \|_{\mathbb{H}^{1}}\) and \(\|\cdot \|\) are equivalent norms on \(\mathbb{E}\).
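
Indeed, since every element of \(\mathbb{E}\) has zero mean value, the Wirtinger-type inequality (5) below yields

$$ \Vert x \Vert ^{2}\le \Vert x \Vert _{\mathbb{H}^{1}}^{2}= \Vert x \Vert ^{2}+ \Vert x \Vert _{L^{2}}^{2} \le \biggl(1+\frac{T^{2}}{4\pi ^{2}}\biggr) \Vert x \Vert ^{2},\quad \forall x\in \mathbb{E}. $$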

Restricted to \(\mathbb{E}\), functional (2) can be rewritten as

$$ \varphi (x)= \int _{0}^{T}\biggl[\frac{1}{2} \vert \dot{x} \vert ^{2}-V(x)\biggr]\,dt= \frac{1}{2} \Vert x \Vert ^{2}-\phi (x),\quad x\in \mathbb{E}. $$
(4)

Lemma 2.2

([23, 26])

Critical points of φ restricted to \(\mathbb{E}\) are critical points of φ on the whole space \(\mathbb{H}^{1}\), which correspond to periodic solutions to system (1).

At the end of this section, we state some useful lemmas.

Lemma 2.3

There exists a constant \(c_{\infty }>0\) such that, for any \(x\in \mathbb{H}^{1}\) satisfying \(\int _{0}^{T}x(t)\,dt=0\),

$$ \Vert x \Vert _{L^{\infty }}\le c_{\infty } \Vert x \Vert ,\qquad \Vert x \Vert _{L^{2}}\le \frac{T}{2\pi } \Vert x \Vert . $$
(5)

Obviously, for any \(x\in \mathbb{E}\), \(\int _{0}^{T}x(t)\,dt=0\). Hence all elements of \(\mathbb{E}\) satisfy (5).
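
For the reader's convenience, here is a short verification of the second inequality in (5). If \(\int _{0}^{T}x(t)\,dt=0\), then \(a_{0}=0\) in the Fourier expansion of x, and Parseval's identity gives

$$ \int _{0}^{T} \vert x \vert ^{2}\,dt=\frac{T}{2}\sum_{k=1}^{+\infty }\bigl( \vert a_{k} \vert ^{2}+ \vert b_{k} \vert ^{2}\bigr)\le \frac{T^{2}}{4\pi ^{2}}\cdot \frac{T}{2}\sum_{k=1}^{+ \infty }\frac{4\pi ^{2}k^{2}}{T^{2}} \bigl( \vert a_{k} \vert ^{2}+ \vert b_{k} \vert ^{2}\bigr)= \frac{T^{2}}{4\pi ^{2}} \int _{0}^{T} \vert \dot{x} \vert ^{2}\,dt, $$

since \(k^{2}\ge 1\) for every \(k\ge 1\). The first inequality in (5) follows similarly by applying the Cauchy–Schwarz inequality to the Fourier series of x.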

Lemma 2.4

([15])

If a sequence \(\{u_{k}\}\) converges weakly to u in \(\mathbb{H}^{1}\), then \(\{u_{k}\}\) converges uniformly to u on \([0, T]\).

To prove our main results, we state a useful result which was proved in [21] (see Theorem 12 therein).

Lemma 2.5

Let \(\mathbb{F}\) be a Hilbert space and suppose that \(\Phi (x)=\frac{1}{2}\|x\|^{2}-I(x)\), where

  1. (i)

    \(I^{\prime }(x)=o(\|x\|)\) as \(x\rightarrow 0\) in \(\mathbb{F}\),

  2. (ii)

    \(s\mapsto I^{\prime }(sx)x/s\) is strictly increasing for all \(x\neq0\) and \(s>0\),

  3. (iii)

    \(I(sx)/s^{2}\rightarrow \infty \) uniformly for x on weakly compact subsets of \(\mathbb{F}\setminus \{0\}\) as \(s\to \infty \),

  4. (iv)

    \(I^{\prime }\) is completely continuous.

Then the equation \(\Phi ^{\prime }(x)=0\) has a ground state solution. Moreover, if I is even, then this equation has infinitely many pairs of solutions.

3 The strictly monotonic case

In this section, we make use of Lemma 2.5 to prove Theorem 1.1. To do this, let us define the Nehari manifold.

Given \(x\in \mathbb{E}\setminus \{0\}\), define \(g_{x}:\mathbb{R}^{*}\rightarrow \mathbb{R}\) by setting \(g_{x}(s)=\varphi (sx)\). One can easily verify that \(g_{x}\) is continuously differentiable. In particular, if \(x\in \mathbb{S}^{1}\), where \(\mathbb{S}^{1}\) denotes the unit sphere of \(\mathbb{E}\), then \(g_{x}\in C^{1}(\mathbb{R}^{*}, \mathbb{R})\).

Lemma 3.1

Assume that V satisfies (V1), (V3′) and (V5)(V7). Given \(x\in \mathbb{S}^{1}\), there exists a unique positive constant \(s_{x}\) depending on x such that

$$\begin{aligned}& g_{x}(s_{x})=\max_{s\in \mathbb{R}^{*}}g_{x}(s), \\& g_{x}^{\prime }(s)>0,\quad \forall 0< s< s_{x}, \\& g_{x}^{\prime }(s)< 0,\quad \forall s>s_{x}. \end{aligned}$$

Proof

Firstly, we will show that \(g_{x}(s)>0\) in a small interval. Thanks to (V3′), for any \(\epsilon >0\), there exists \(s_{\epsilon }>0\) such that

$$ \bigl\vert V^{\prime }(x) \bigr\vert \le \epsilon \vert x \vert ,\quad \forall \vert x \vert < s_{\epsilon }. $$

Consequently, we have

$$ V(x)\le \epsilon \vert x \vert ^{2},\quad \forall \vert x \vert < s_{\epsilon }. $$
(6)

Substituting (6) into (4), we have

$$ g_{x}(s)=\frac{1}{2}s^{2}- \int _{0}^{T}V(sx)\,dt\ge \frac{1}{2}s^{2}- \epsilon \Vert sx \Vert _{L^{2}}^{2}\ge \biggl[ \frac{1}{2}-\epsilon \frac{T^{2}}{4\pi ^{2}}\biggr]s^{2},\quad \mbox{provided } \bigl\vert sx(t) \bigr\vert < s_{\epsilon },\ \forall t\in [0,T]. $$
(7)

Take \(\epsilon _{1}\le \pi ^{2}/T^{2}\). Since \(\Vert x \Vert _{L^{\infty }}\le c_{\infty } \Vert x \Vert =c_{\infty }\) by (5), we can choose \(s_{1}>0\) so small that

$$ \bigl\vert sx(t) \bigr\vert < s_{\epsilon _{1}},\quad \forall 0< s< s_{1}, \forall t\in [0, T]. $$
(8)

Combining (7) with (8) gives

$$ g_{x}(s)\ge \frac{1}{4}s^{2}>0,\quad \forall 0< s< s_{1}. $$
(9)

Next, we will show that there exists \(s_{2}>0\) such that \(g_{x}(s)<0\), \(\forall s>s_{2}\). Denote \(\delta _{1}=\int _{0}^{T}|x(t)|^{2}\,dt/T>0\). Fixing \(\delta \in (0, \delta _{1})\), set

$$ \Omega _{1}=\bigl\{ t\in [0,T]| \bigl\vert x(t) \bigr\vert ^{2}\ge \delta \bigr\} \quad \mbox{and} \quad \Omega _{2}=[0,T] \setminus \Omega _{1}. $$

Since the average of \(|x(t)|^{2}\) over \([0,T]\) equals \(\delta _{1}>\delta \), there exists \(\delta _{2}>0\) such that \(\operatorname{meas}(\Omega _{1})\ge \delta _{2}\); indeed, \(T\delta _{1}\le c_{\infty }^{2}\operatorname{meas}(\Omega _{1})+\delta T\), so one may take \(\delta _{2}=T(\delta _{1}-\delta )/c_{\infty }^{2}\). Here and hereafter, \(\operatorname{meas}(\cdot )\) denotes the Lebesgue measure of a set. Put \(M=1/(\delta \delta _{2})\). Thanks to the condition (V5), there exists \(R_{M}>0\) such that \(V(x)\ge M|x|^{2}\) for all \(|x|\ge R_{M}\). For \(s_{2}\) large enough, one has \(|s_{2}x(t)|\ge R_{M}\) for all \(t\in \Omega _{1}\). Consequently, for all \(s>s_{2}\), we have

$$\begin{aligned} g_{x}(s) =&\varphi (sx)=\frac{1}{2} \Vert sx \Vert ^{2}- \int _{0}^{T}V(sx)\,dt \\ \le& \frac{1}{2}s^{2}- \int _{\Omega _{1}}Ms^{2} \vert x \vert ^{2}\,dt- \int _{ \Omega _{2}}V(sx)\,dt \\ \le& \frac{1}{2}s^{2}-Ms^{2} \delta \delta _{2}\le -\frac{1}{2}s^{2}< 0. \end{aligned}$$
(10)

Together, (9) and (10) imply that \(g_{x}\) attains its maximum on \((0,s_{2}]\). Hence there exists \(s_{x}\in (0, s_{2}]\) such that \(g_{x}(s_{x})=\max_{s\in \mathbb{R}^{*}}g_{x}(s)>0\). Consequently, \(g_{x}^{\prime }(s_{x})=0\).

Finally, we will show that there exists a unique \(s_{x}\) such that \(g_{x}(s_{x})=\max_{s\in \mathbb{R}^{*}}g_{x}(s)\). Calculating the derivative of \(g_{x}\), one obtains

$$ g_{x}^{\prime }(s)=s- \int _{0}^{T}\bigl(V^{\prime }(sx), x \bigr)\,dt=s\biggl[1- \int _{0}^{T} \frac{(V^{\prime }(sx), x)}{s}\,dt\biggr]. $$
(11)

Thanks to (V7), the map \(s\mapsto \int _{0}^{T}(V^{\prime }(sx), x)/s\,dt\) is strictly increasing on \((0,\infty )\) (see also Lemma 3.3 below), so \(g_{x}^{\prime }\) has a unique positive zero, namely \(s_{x}\), with \(g_{x}^{\prime }(s)>0\) for all \(s\in (0, s_{x})\) and \(g_{x}^{\prime }(s)<0\) for all \(s>s_{x}\). This finishes the proof of Lemma 3.1. □

Remark 3.1

Take \(x\in \mathbb{E}\setminus \{0\}\). Then Lemma 3.1 states that there exists a unique \(s_{x}>0\) such that \(g_{x}(s_{x})=\sup_{s\in \mathbb{R}^{*}}g_{x}(s)=\sup_{s\in \mathbb{R}^{*}}\varphi (s\|x\|\cdot x/\|x\|)\) and \(g_{x}^{\prime }(s)>0\) for all \(s\in (0, s_{x})\), \(g_{x}^{\prime }(s)<0\) for all \(s>s_{x}\).

Define the Nehari manifold by setting

$$ \mathcal{M}=\Bigl\{ s_{x}x\big| x\in \mathbb{E}\setminus \{0\}, g_{x}(s_{x})= \max_{s\in \mathbb{R}^{*}}g_{x}(s) \Bigr\} , $$

or equivalently

$$ \mathcal{M}=\Bigl\{ s_{x}x\big| x\in \mathbb{S}^{1}, g_{x}(s_{x})=\max_{s\in \mathbb{R}^{*}}g_{x}(s) \Bigr\} . $$
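
In particular (a routine consequence of Lemma 3.1, recorded here for convenience), \(\mathcal{M}\) admits the familiar description

$$ \mathcal{M}=\bigl\{ x\in \mathbb{E}\setminus \{0\}\big| \bigl\langle \varphi ^{\prime }(x),x \bigr\rangle =0\bigr\} , $$

since, for \(x\in \mathbb{E}\setminus \{0\}\), one has \(g_{x/\Vert x \Vert }^{\prime }(\Vert x \Vert )=\langle \varphi ^{\prime }(x),x\rangle /\Vert x \Vert \) and, by Lemma 3.1, this derivative vanishes exactly at the maximum point \(s_{x/\Vert x \Vert }\).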

Now we are ready to prove Theorem 1.1. To do this, put \(\mathbb{F}=\mathbb{E}\), \(\Phi (x)=\varphi (x)\) and \(I(x)=\phi (x)\). Let us check that all conditions of Lemma 2.5 hold.

Lemma 3.2

If V satisfies (V1) and (V3′), then \(\phi ^{\prime }(x)=o(\|x\|)\) as \(x\to 0\) in \(\mathbb{E}\).

Proof

Since V satisfies (V3′), for any \(\epsilon >0\), there exists \(\delta >0\) such that

$$ \bigl\vert V^{\prime }(x) \bigr\vert < \epsilon \vert x \vert ,\quad \forall \vert x \vert < \delta . $$

For \(x\in \mathbb{E}\) with \(\|x\|<\delta /{c_{\infty }}\), it follows that

$$ \bigl\vert x(t) \bigr\vert \le \Vert x \Vert _{L^{\infty }}\le c_{\infty } \Vert x \Vert < \delta , $$

where \(c_{\infty }\) is given in (5). Thanks to Lemma 2.1, for all \(\|x\|<\delta /{c_{\infty }}\), one has

$$\begin{aligned} \bigl\langle \phi ^{\prime }(x),y\bigr\rangle =& \int _{0}^{T}\bigl(V^{\prime }(x),y \bigr)\,dt\le \epsilon \int _{0}^{T}\bigl[ \vert x \vert \cdot \vert y \vert \bigr]\,dt \\ \le& \epsilon \Vert x \Vert _{L^{2}}\cdot \Vert y \Vert _{L^{2}}\le \epsilon \frac{T^{2}}{4\pi ^{2}} \Vert x \Vert \cdot \Vert y \Vert ,\quad \forall y\in \mathbb{E}. \end{aligned}$$
(12)

This implies that \(\|\phi ^{\prime }(x)\|\le \epsilon \frac{T^{2}}{4\pi ^{2}}\|x\|\). Since ϵ is arbitrary, one has \(\phi ^{\prime }(x)=o(\|x\|)\) as \(x\to 0\) in \(\mathbb{E}\). Thus (i) of Lemma 2.5 holds. □

Lemma 3.3

If V satisfies (V1), (V6) and (V7), then, for every \(x\in \mathbb{E}\setminus \{0\}\), the map \(s\mapsto \langle \phi ^{\prime }(sx), x\rangle /s\) is strictly increasing on \((0,\infty )\).

Proof

According to Lemma 2.1, for all \(x\in \mathbb{E}\setminus \{0\}\), \(s>0\), one has

$$ \frac{\langle \phi ^{\prime }(sx), x\rangle }{s}= \int _{0}^{T} \frac{(V^{\prime }(sx),x)}{s}\,dt. $$

If \(x\in \mathbb{E}\setminus \{0\}\), then \(|x(t)|\neq0\) almost everywhere on \([0,T]\). By setting \(\tau =s|x|\) and \(y=x/|x|\), one can easily observe that \(\tau >0\) and \(|y|=1\) almost everywhere on \([0, T]\). By straightforward computation, one has

$$ \frac{\langle \phi ^{\prime }(sx), x\rangle }{s}= \int _{0}^{T} \frac{(V^{\prime }(s \vert x \vert \frac{x}{ \vert x \vert }),\frac{x}{ \vert x \vert })}{s \vert x \vert } \vert x \vert ^{2}\,dt= \int _{0}^{T}\frac{(V^{\prime }(\tau y),y)}{\tau } \vert x \vert ^{2}\,dt. $$

For each fixed t with \(x(t)\neq 0\), τ is strictly increasing in s. Thanks to assumption (V7), for almost every t the integrand \((V^{\prime }(\tau y),y) \vert x \vert ^{2}/\tau \) is strictly increasing in s, and hence the map \(s\mapsto \langle \phi ^{\prime }(sx), x\rangle /s\) is strictly increasing on \((0,\infty )\). Hence (ii) of Lemma 2.5 holds. □

Lemma 3.4

If V satisfies (V5), then \(\phi (sx)/s^{2}\rightarrow \infty \) uniformly for x on weakly compact subsets of \(\mathbb{E}\setminus \{0\}\) as \(s\rightarrow \infty \).

Proof

Let \(\mathcal{X}\subset \mathbb{E}\setminus \{0\}\) be a weakly compact set and let \(\{\overline{x}_{n}\}\subset \mathcal{X}\). It suffices to show that, if \(s_{n}\rightarrow \infty \) as \(n\rightarrow \infty \), then \(\phi (s_{n}\overline{x}_{n})/s_{n}^{2}\rightarrow \infty \) along a subsequence. Passing to a subsequence, \(\overline{x}_{n}\) converges weakly to a point, denoted by \(\overline{x}_{0}\), i.e. \(\overline{x}_{n}\rightharpoonup \overline{x}_{0}\) in \(\mathbb{E}\). According to Lemma 2.4, \(\{\overline{x}_{n}\}\) converges uniformly to \(\overline{x}_{0}\) on \([0, T]\). The weak compactness of \(\mathcal{X}\) implies that \(\overline{x}_{0}\neq0\). Consequently, \(|s_{n}\overline{x}_{n}(t)|\rightarrow \infty \) as \(n\rightarrow \infty \) almost everywhere on \([0, T]\). Assumption (V5) and Fatou’s lemma yield

$$ \frac{\phi (s_{n}\overline{x}_{n})}{s_{n}^{2}}= \int _{0}^{T} \frac{V(s_{n}\overline{x}_{n})}{ \vert s_{n}\overline{x}_{n} \vert ^{2}} \vert \overline{x}_{n} \vert ^{2}\,dt\rightarrow \infty \quad \mbox{as } n\rightarrow \infty . $$

Hence (iii) of Lemma 2.5 holds. □

Now we are in a position to prove Theorem 1.1.

Proof of Theorem 1.1

We have shown in Lemma 2.1 that \(\phi ^{\prime }\) is completely continuous; thus (iv) of Lemma 2.5 holds. According to Lemmas 3.2, 3.3 and 3.4, all conditions of Lemma 2.5 are satisfied. Applying Lemma 2.5, one finds that the equation \(\varphi ^{\prime }(x)=0\) has a ground state solution \(x_{0}\), which satisfies \(\varphi (x_{0})=\inf_{x\in \mathcal{M}}\varphi (x)\). As we can see in the proof of Theorem 12 in [21], \(\varphi (x_{0})>0\). Consequently, \(x_{0}\) is not a trivial solution.

Next, we will show that \(x_{0}\) has T as its minimal period. Arguing as in [18, 24, 25], suppose, by contradiction, that \(x_{0}\) has minimal period \(T/k\), where \(k\ge 2\) is an integer. Denote \(y_{0}(t)=x_{0}(t/k)\). Obviously, \(y_{0}\in \mathbb{E}\). It follows from the definition of \(\mathcal{M}\) that there is a positive constant \(r_{y_{0}}\) such that \(r_{y_{0}}y_{0}\in \mathcal{M}\). By straightforward computation, one has

$$\begin{aligned} \inf_{x\in \mathcal{M}}\varphi (x) \le& \varphi (r_{y_{0}}y_{0})= \int _{0}^{T}\biggl[\frac{1}{2k^{2}} \biggl\vert r_{y_{0}}\dot{x}_{0}\biggl( \frac{t}{k}\biggr) \biggr\vert ^{2}-V\biggl(r_{y_{0}}x_{0} \biggl( \frac{t}{k}\biggr)\biggr)\biggr]\,dt \\ =& \int _{0}^{T}\biggl[\frac{1}{2k^{2}} \bigl\vert r_{y_{0}}\dot{x}_{0}(\tau ) \bigr\vert ^{2}-V\bigl(r_{y_{0}}x_{0}( \tau )\bigr) \biggr]\, d\tau \\ < & \int _{0}^{T}\biggl[\frac{1}{2} \bigl\vert r_{y_{0}}\dot{x}_{0}(\tau ) \bigr\vert ^{2}-V\bigl(r_{y_{0}}x_{0}( \tau )\bigr) \biggr]\, d\tau \\ =&\varphi (r_{y_{0}}x_{0})\le \varphi (x_{0})=\inf_{x\in \mathcal{M}} \varphi (x), \end{aligned}$$

which is a contradiction. Hence \(x_{0}\) has T as its minimal period. □

Proof of Corollary 1.1

We only need to check that V satisfies (V3′) and (V5)–(V7) under the hypothesis (V2′′). It is easy to check that V satisfies (V3′), (V5) and (V6). Therefore, we only verify (V7).

Set \(k(s)=(V^{\prime }(sx),x)/s\). Calculating the derivative of k, one has

$$ k^{\prime }(s)= \frac{(V^{\prime \prime }(sx)sx,x)-(V^{\prime }(sx),x)}{s^{2}}= \frac{(V^{\prime \prime }(sx)sx,sx)-(V^{\prime }(sx),sx)}{s^{3}}. $$

Thanks to the hypothesis (V2′′), applied at the point sx, one has \((V^{\prime \prime }(sx)sx,sx)-(V^{\prime }(sx),sx)\ge (\theta -1) (V^{\prime }(sx),sx)>0\), and hence \(k^{\prime }(s)>0\). Thus (V7) holds. Applying Theorem 1.1, system (1) admits a non-constant T-periodic solution with minimal period T. □

4 The non-decreasing case

In this section, we will use a perturbation technique to prove Theorem 1.2. Using the same argument as in the proof of Lemma 3.1 and Remark 3.1, one can obtain the following lemma.

Lemma 4.1

Assume that V satisfies (V1), (V3′), (V5) and (V7′). Given \(x\in \mathbb{E}\setminus \{0\}\), there exist \(s_{2}>s_{1}>0\) and at least one \(s_{x}\in [s_{1}, s_{2}]\) such that

$$\begin{aligned}& g_{x}(s)>\frac{s^{2}}{4},\quad \forall 0< s< s_{1}, \\& g_{x}(s)< 0, \quad \forall s>s_{2}, \\& g_{x}(s_{x})=\max_{s\in \mathbb{R}^{*}}g_{x}(s)= \max_{s\in \mathbb{R}^{*}}\varphi (sx). \end{aligned}$$

Remark 4.1

Since \(g_{x}^{\prime }(s)=s[1-\int _{0}^{T}\frac{(V^{\prime }(sx), x)}{s}\,dt]\) and, by (V7′), the bracket is only non-increasing in s, the function \(g_{x}\) may attain its maximum at a single point or on a whole interval.

Define the Nehari manifold by setting

$$ \mathcal{M}^{*}=\Bigl\{ s_{x}x\big| x\in \mathbb{E}\setminus \{0\}, g_{x}(s_{x})= \max_{s\in \mathbb{R}^{*}}\varphi (sx)\Bigr\} =\Bigl\{ s_{x}x\big| x \in \mathbb{S}^{1}, g_{x}(s_{x})=\max _{s\in \mathbb{R}^{*}}\varphi (sx)\Bigr\} . $$

Thanks to Remark 4.1, the point \(s_{x}\) need not be unique, so in general there is no homeomorphism between \(\mathcal{M}^{*}\) and \(\mathbb{S}^{1}\). Hence the Nehari manifold method cannot be used directly to study the existence of periodic solutions.

Inspired by [15], we will use a perturbation technique. For \(\eta \in (0,1)\), define \(V_{\eta }\) by setting

$$ V_{\eta }(x)=\frac{\eta }{p} \vert x \vert ^{p}+V(x), \quad \forall x\in \mathbb{R}^{N}, $$

where p is the exponent given in \((V6)\).
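
For later use, note that a direct differentiation of this definition gives

$$ V_{\eta }^{\prime }(x)=\eta \vert x \vert ^{p-2}x+V^{\prime }(x),\quad \forall x\in \mathbb{R}^{N}, $$

so that \(V_{\eta }(x)\rightarrow V(x)\) and \(V_{\eta }^{\prime }(x)\rightarrow V^{\prime }(x)\) for every \(x\in \mathbb{R}^{N}\) as \(\eta \rightarrow 0\).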

Lemma 4.2

Assume that (V1), (V3′), (V4)–(V6) and (V7′) hold. Then \(V_{\eta }\) satisfies (V1), (V3′), (V4)–(V7) with V and \(V^{\prime }\) being replaced by \(V_{\eta }\) and \(V^{\prime }_{\eta }\), respectively.

Proof

It is easy to check that \(V_{\eta }\) satisfies (V1), (V3′), (V4)–(V6). We only verify (V7). By straightforward computation, one obtains

$$ \frac{1}{s}\frac{d(V_{\eta }(sx))}{ds}= \frac{(V^{\prime }_{\eta }(sx),x)}{s}=\eta s^{p-2} \vert x \vert ^{p}+ \frac{(V^{\prime }(sx),x)}{s}. $$

Since \(p>2\), \(\eta s^{p-2}|x|^{p}\) is strictly increasing in s on \((0, \infty )\) for each \(x\neq 0\). Moreover, by (V7′), \((V^{\prime }(sx),x)/s\) is non-decreasing in s on \((0, \infty )\). Consequently, \((V^{\prime }_{\eta }(sx),x)/s\) is strictly increasing in s on \((0, \infty )\), i.e., \(V_{\eta }\) satisfies (V7). This finishes the proof of Lemma 4.2. □

Consider the perturbed variational functional defined on \(\mathbb{H}^{1}\) by

$$ \varphi _{\eta }(x)= \int _{0}^{T}\biggl[\frac{1}{2} \vert \dot{x} \vert ^{2}-V_{\eta }(x)\biggr]\,dt. $$
(13)

Using the same argument as in Sect. 2, \(\varphi _{\eta }\) is continuously differentiable on \(\mathbb{H}^{1}\). Restricted to \(\mathbb{E}\), the variational functional (13) can be rewritten as

$$ \varphi _{\eta }(x)=\frac{1}{2} \Vert x \Vert ^{2}- \int _{0}^{T}V_{\eta }(x)\,dt,\quad x \in \mathbb{E}, $$
(14)

and critical points of \(\varphi _{\eta }\) restricted to \(\mathbb{E}\) correspond to T-periodic solutions to the system

$$ \ddot{x}+V_{\eta }^{\prime }(x)=0, \quad x \in \mathbb{R}^{N}. $$
(15)

Fixing \(x\in \mathbb{E}\setminus \{0\}\), define the continuous function \(h_{x}:\mathbb{R}^{*}\rightarrow \mathbb{R}\) by setting \(h_{x}(r)=\varphi _{\eta }(rx)\). Since \(V_{\eta }\) satisfies the same conditions as V in Sect. 3, according to Lemma 3.1 and Remark 3.1, we conclude that the following lemma holds.

Lemma 4.3

Assume that \(V_{\eta }\) satisfies (V1), (V3′) and (V5)–(V7). Given \(x\in \mathbb{E}\setminus \{0\}\), there exists a unique positive constant \(r_{x}\) depending on x such that

$$ h_{x}(r_{x})=\max_{r\in \mathbb{R}^{*}}h_{x}(r)= \max_{r\in \mathbb{R}^{*}}\varphi _{\eta }(rx). $$

Now we can define the Nehari manifold as follows:

$$ \mathcal{M}_{\eta }=\Bigl\{ r_{x}x\big| x\in \mathbb{E}\setminus \{0\}, h_{x}(r_{x})= \max_{r\in \mathbb{R}^{*}}h_{x}(r)\Bigr\} =\Bigl\{ r_{x}x\big| x\in \mathbb{S}^{1}, h_{x}(r_{x})= \max_{r\in \mathbb{R}^{*}}h_{x}(r)\Bigr\} . $$
(16)

Denote by \(c_{\eta }\) the minimum value of \(\varphi _{\eta }\) restricted to \(\mathcal{M}_{\eta }\), i.e.

$$ c_{\eta }=\inf_{x\in \mathcal{M}_{\eta }}\varphi _{\eta }(x)= \inf_{x \in \mathbb{S}^{1}}\sup_{r\ge 0}\varphi _{\eta }(rx). $$

As follows from the proof of Theorem 12 in [21], \(c_{\eta }>0\).

Choose a sequence \(\{\eta _{n}\}\subset (0,1)\) such that \(\eta _{n}\rightarrow 0\) as \(n\rightarrow \infty \). Lemma 4.3 and Theorem 1.1 imply that there exist \(x_{\eta _{n}}\in \mathbb{S}^{1}\) and a unique \(r_{x_{\eta _{n}}}>0\) such that \(r_{x_{\eta _{n}}}x_{\eta _{n}}\in \mathcal{M}_{\eta _{n}}\) and \(\varphi _{\eta _{n}}(r_{x_{\eta _{n}}}x_{\eta _{n}})=c_{\eta _{n}}\). Hence \(r_{x_{\eta _{n}}}x_{\eta _{n}}\) is a nontrivial T-periodic solution of system (15) with \(\eta =\eta _{n}\), with minimal period T. Next we will show that \(\{r_{x_{\eta _{n}}}x_{\eta _{n}}\}\) converges strongly to a critical point of φ which corresponds to a periodic solution to system (1).

To simplify our notation, we denote \(V_{n}=V_{\eta _{n}}\), \(\varphi _{n}=\varphi _{\eta _{n}}\), \(x_{n}=x_{\eta _{n}}\), \(r_{n}=r_{x_{\eta _{n}}}\), \(\mathcal{M}_{n}=\mathcal{M}_{\eta _{n}}\) and \(c_{n}=c_{\eta _{n}}\). Since \(\{x_{n}\}\subset \mathbb{S}^{1}\) is bounded, passing to a subsequence, \(\{x_{n}\}\) converges weakly to a limit, denoted by \(x_{0}\), i.e. \(x_{n}\rightharpoonup x_{0}\) in \(\mathbb{E}\). Lemma 2.4 implies that \(\{x_{n}\}\) converges uniformly to \(x_{0}\) on \([0, T]\).

Proposition 4.1

Assume that all assumptions of Theorem 1.2 hold. Then \(\varphi _{n}^{\prime }(r_{n}x_{n})=0\) for all \(n\in \mathbb{N}\).

Proof

For each \(n\in \mathbb{N}\), since \(\varphi _{n}(r_{n}x_{n})=\inf_{x\in \mathcal{M}_{n}}\varphi _{n}(x)\), we have \(\varphi _{n}^{\prime }(r_{n}x_{n})|_{\mathcal{M}_{n}}=0\). Consequently,

$$ \bigl\langle \varphi _{n}^{\prime }(r_{n}x_{n}), y\bigr\rangle =0, \quad \forall y\in \mathcal{M}_{n}. $$

If \(x\in \mathbb{E}\setminus \{0\}\), Lemma 4.3 implies that there exists a unique \(r_{x}>0\) such that \(r_{x}x\in \mathcal{M}_{n}\). It follows that

$$ \bigl\langle \varphi _{n}^{\prime }(r_{n}x_{n}), x\bigr\rangle =\frac{1}{r_{x}}\bigl\langle \varphi _{n}^{ \prime }(r_{n}x_{n}), r_{x}x\bigr\rangle =0,\quad \forall x\in \mathbb{E} \setminus \{0\}. $$
(17)

Obviously, \(\langle \varphi _{n}^{\prime }(r_{n}x_{n}),0\rangle =0\). Thus (17) holds for all \(x\in \mathbb{E}\). Hence \(\varphi _{n}^{\prime }(r_{n}x_{n})=0\). This finishes the proof of the proposition. □

Next we will show that both \(\{c_{n}\}\) and \(\{r_{n}\}\) are bounded.

Proposition 4.2

Assume that all assumptions of Theorem 1.2 hold. Then \(\{c_{n}\}\) is a bounded sequence.

Proof

Let \(\widetilde{x}(t)=b_{1}\sqrt{T/(2\pi ^{2})}\sin (2\pi t/T)\in \mathbb{E}\) with \(b_{1}=(1,0,\ldots , 0)^{\tau }\in \mathbb{R}^{N}\), where the superscript τ denotes the transpose of a vector. By straightforward computation, one has

$$ \Vert \widetilde{x} \Vert ^{2}=1,\qquad \Vert \widetilde{x} \Vert _{L^{2}}^{2}= \frac{T^{2}}{4\pi ^{2}}. $$

So \(\widetilde{x}\in \mathbb{S}^{1}\). Since \(V_{n}\ge V\), we have \(\varphi _{n}\le \varphi \) on \(\mathbb{E}\), and it follows from the definition of \(c_{n}\) that

$$ c_{n}=\inf_{x\in \mathbb{S}^{1}}\sup_{r\ge 0} \varphi _{n}(rx) \le \sup_{r\ge 0}\varphi _{n}(r\widetilde{x})\le \sup_{r\ge 0}\varphi (r \widetilde{x}). $$

Thanks to Lemma 3.1, there exists \(r_{\widetilde{x}}>0\) such that

$$ g_{\widetilde{x}}(r_{\widetilde{x}})=\sup_{r\ge 0}g_{\widetilde{x}}(r)= \sup_{r\ge 0}\varphi (r\widetilde{x})>0. $$

Setting \(C_{5}=\sup_{r\in \mathbb{R}^{*}}\varphi (r\widetilde{x})<\infty \), we conclude that \(c_{n}\le C_{5}\) for all \(n\in \mathbb{N}\). □

Proposition 4.3

Assume that all assumptions of Theorem 1.2 hold. Then \(\{r_{n}\}\) is a bounded sequence.

Proof

Arguing indirectly, suppose that \(\{r_{n}\}\) is an unbounded sequence. Passing to a subsequence, \(r_{n}\rightarrow \infty \) as \(n\rightarrow \infty \). Consider two cases.

Case I: \(x_{0}=0\). Then \(x_{n}\) converges uniformly to 0 on \([0, T]\). Consequently, for any fixed \(\overline{r}>0\), \(\int _{0}^{T}V_{n}(\overline{r}x_{n})\,dt\rightarrow 0\) as \(n\rightarrow \infty \). Taking \(\overline{r}>\sqrt{2C_{5}+2}\), by straightforward computation, one has

$$ C_{5}\ge c_{n}=\varphi _{n}(r_{n}x_{n})= \sup_{r\ge 0} \varphi _{n}(rx_{n})\ge \varphi _{n}(\overline{r}x_{n})\rightarrow \frac{1}{2} \overline{r}^{2}>C_{5}+1, $$

which is a contradiction.

Case II: \(x_{0}\neq0\). Then \(|r_{n}x_{n}|\rightarrow \infty \) almost everywhere on \([0, T]\). The definition of \(V_{n}\) implies that

$$ 0\le \frac{\varphi _{n}(r_{n}x_{n})}{r_{n}^{2}}=\frac{1}{2} \Vert x_{n} \Vert ^{2}- \int _{0}^{T}\frac{V_{n}(r_{n}x_{n})}{r_{n}^{2}}\,dt\le \frac{1}{2}- \int _{0}^{T}\frac{V(r_{n}x_{n})}{ \vert r_{n}x_{n} \vert ^{2}} \vert x_{n} \vert ^{2}\,dt. $$
(18)

Thanks to the assumption (V5), Fatou’s lemma implies that the right-hand side of (18) tends to −∞ as \(n\rightarrow \infty \), which is a contradiction.

Consequently, \(\{r_{n}\}\) is bounded, which finishes the proof of the proposition. □

It follows from Proposition 4.3 that, passing to a subsequence, \(\{r_{n}\}\) converges to some point \(r_{0}\in \mathbb{R}\). Put \(y_{0}=r_{0}x_{0}\in \mathbb{E}\). Then \(r_{n}x_{n}\rightharpoonup y_{0}\) in \(\mathbb{E}\) as \(n\rightarrow \infty \). Next we will show that \(r_{n}x_{n}\rightarrow y_{0}\) in \(\mathbb{E}\) as \(n\rightarrow \infty \).

Lemma 4.4

Assume that all assumptions of Theorem 1.2 hold. Then \(\varphi _{n}^{\prime }(r_{n}x_{n})\rightarrow \varphi ^{\prime }(y_{0})\) in \(\mathbb{E}\) as \(n\rightarrow \infty \).

Proof

Take \(y\in \mathbb{E}\). By straightforward computation, we obtain

$$\begin{aligned}& \bigl|\bigl\langle \varphi _{n}^{\prime }(r_{n}x_{n})- \varphi ^{\prime }(y_{0}),y\bigr\rangle \bigr| \\& \quad \le \bigl|\bigl\langle \varphi _{n}^{\prime }(r_{n}x_{n})- \varphi ^{\prime }(r_{n}x_{n}),y\bigr\rangle \bigr\vert + \bigl\vert \bigl\langle \varphi ^{\prime }(r_{n}x_{n})-\varphi ^{\prime }(y_{0}),y\bigr\rangle \bigr| \\& \quad \le \eta _{n}r_{n}^{p-1} \int _{0}^{T} \vert x_{n} \vert ^{p-1}|y \vert \,dt+ \bigl\vert \langle r_{n}x_{n}-y_{0}, y\rangle \bigr\vert + \bigl\vert \bigl\langle \phi ^{\prime }(r_{n}x_{n})- \phi ^{\prime }(y_{0}),y\bigr\rangle \bigr|. \end{aligned}$$
(19)

Observe that \(\langle r_{n}x_{n}-y_{0}, y\rangle\rightarrow 0\) since \(r_{n}x_{n}\rightharpoonup y_{0}\) in \(\mathbb{E}\) and \(y\in \mathbb{E}\subset \mathbb{E}^{*}\). To complete the proof of the lemma, we only need to show that the first and third summands of (19) approach 0.

First, we find that (5) implies that \(|x_{n}(t)|\le \|x_{n}\|_{L^{\infty }}\le c_{\infty }\|x_{n}\|= c_{\infty }\). The boundedness of \(\{r_{n}\}\) and of \(\{x_{n}\}\) in \(L^{\infty }\), together with the fact that \(\eta _{n}\rightarrow 0\) as \(n\rightarrow \infty \), implies that

$$ \eta _{n}r_{n}^{p-1} \int _{0}^{T} \vert x_{n} \vert ^{p-1} \vert y \vert \,dt\rightarrow 0\quad \mbox{as } n\rightarrow \infty . $$

Now, since \(\phi ^{\prime }\) is compact and \(r_{n}x_{n}\rightharpoonup y_{0}\) in \(\mathbb{E}\), \(\phi ^{\prime }(r_{n}x_{n})\rightarrow \phi ^{\prime }(y_{0})\) in \(\mathbb{E}^{*}\) as \(n\rightarrow \infty \). Thus \(\langle \phi ^{\prime }(r_{n}x_{n})-\phi ^{\prime }(y_{0}),y\rangle \rightarrow 0\) as \(n\rightarrow \infty \). □

Remark 4.2

Since \(\varphi _{n}^{\prime }(r_{n}x_{n})=0\) for all \(n\in \mathbb{N}\), we have \(\varphi ^{\prime }(y_{0})=0\).

Now we are ready to show that \(r_{n}x_{n}\rightarrow y_{0}\) in \(\mathbb{E}\) as \(n\rightarrow \infty \). This is the content of the following lemma.

Lemma 4.5

Assume that all assumptions of Theorem 1.2 hold. Then \(\{r_{n}x_{n}\}\) converges strongly to \(y_{0}\) in \(\mathbb{E}\).

Proof

Since \(r_{n}x_{n}\rightharpoonup y_{0}\) in \(\mathbb{E}\), it suffices to show that \(\|r_{n}x_{n}\|\rightarrow \|y_{0}\|\). On the one hand, since \(\|\cdot \|\) is continuous and convex, \(\|\cdot \|\) is weakly lower semi-continuous. Consequently, one has

$$ \Vert y_{0} \Vert \le \varliminf _{n\rightarrow \infty } \Vert r_{n}x_{n} \Vert . $$

On the other hand, Proposition 4.1 and Remark 4.2 imply that \(\langle \varphi ^{\prime }(r_{n}x_{n}), r_{n}x_{n}\rangle =\langle \varphi ^{\prime }(r_{n}x_{n})-\varphi _{n}^{\prime }(r_{n}x_{n}), r_{n}x_{n}\rangle =\eta _{n}r_{n}^{p} \int _{0}^{T} \vert x_{n} \vert ^{p}\,dt\rightarrow 0=\langle \varphi ^{\prime }(y_{0}), y_{0}\rangle \); that is,

$$ \bigl\langle \varphi ^{\prime }(r_{n}x_{n}), r_{n}x_{n}\bigr\rangle \rightarrow \bigl\langle \varphi ^{ \prime }(y_{0}), y_{0}\bigr\rangle ,\quad \mbox{as } n \rightarrow \infty , $$

which is equivalent to

$$ \Vert r_{n}x_{n} \Vert ^{2}- \int _{0}^{T}\bigl(V^{\prime }(r_{n}x_{n}), r_{n}x_{n}\bigr)\,dt \rightarrow \Vert y_{0} \Vert ^{2}- \int _{0}^{T}\bigl(V^{\prime }(y_{0}), y_{0}\bigr)\,dt, \quad \mbox{as } n\rightarrow \infty . $$
(20)

Since \(\{r_{n}x_{n}\}\) converges uniformly to \(y_{0}\) on \([0, T]\), Lebesgue’s dominated convergence theorem implies that

$$ \int _{0}^{T}\bigl(V^{\prime }(r_{n}x_{n}), r_{n}x_{n}\bigr)\,dt\rightarrow \int _{0}^{T}\bigl(V^{ \prime }(y_{0}), y_{0}\bigr)\,dt, \quad \mbox{as } n\rightarrow \infty . $$
(21)

Then (20) together with (21) gives us \(\|r_{n}x_{n}\|\rightarrow \|y_{0}\|\) as \(n\rightarrow \infty \) and the result follows. □

Lemma 4.6

Assume that all assumptions of Theorem 1.2 hold. Then \(\varphi _{n}(r_{n}x_{n})\rightarrow \varphi (y_{0})\) as \(n\rightarrow \infty \).

Proof

By straightforward computation, we obtain

$$\begin{aligned} \bigl\vert \varphi _{n}(r_{n}x_{n})- \varphi (y_{0}) \bigr\vert \le& \bigl\vert \varphi _{n}(r_{n}x_{n})- \varphi (r_{n}x_{n}) \bigr\vert + \bigl\vert \varphi (r_{n}x_{n})-\varphi (y_{0}) \bigr\vert \\ \le& \frac{\eta _{n}r_{n}^{p}}{p} \int _{0}^{T} \vert x_{n} \vert ^{p}\,dt+ \bigl\vert \varphi (r_{n}x_{n})- \varphi (y_{0}) \bigr\vert . \end{aligned}$$
(22)

On the one hand, arguing as for the first summand of (19), the first summand in (22) tends to 0 as \(n\rightarrow \infty \). On the other hand, since \(r_{n}x_{n}\rightarrow y_{0}\) in \(\mathbb{E}\) by Lemma 4.5, the continuity of φ gives \(\varphi (r_{n}x_{n})-\varphi (y_{0})\rightarrow 0\) as \(n\rightarrow \infty \). Consequently, the right-hand side of (22) tends to 0 as \(n\rightarrow \infty \). Thus, the lemma is proved. □

Now we are ready to complete the proof of Theorem 1.2.

Proof of Theorem 1.2

It follows from Remark 4.2 and Lemma 4.6 that

$$ \varphi (y_{0})=\lim_{n\rightarrow \infty }\varphi _{n}(r_{n}x_{n}) \quad \mbox{and}\quad \varphi ^{\prime }(y_{0})=0. $$

Then \(y_{0}\) is a periodic solution to system (1). Next we prove that \(y_{0}\) has T as its minimal period.

Denote \(c_{0}=\inf_{x\in \mathcal{M}^{*}}\varphi (x)=\inf_{x\in \mathbb{S}^{1}}\sup_{r\in \mathbb{R}^{*}}\varphi (rx)\).

Claim

\(\varphi (y_{0})=c_{0}\).

On the one hand, as \(V_{n}(rx)\ge V(rx)\), we have \(\varphi _{n}(rx)\le \varphi (rx)\), for all \(x\in \mathbb{S}^{1}\) and \(r\in \mathbb{R}^{*}\). It follows that

$$ \inf_{x\in \mathbb{S}^{1}}\sup_{r\in \mathbb{R}^{*}} \varphi _{n}(rx)\le \inf_{x\in \mathbb{S}^{1}}\sup_{r \in \mathbb{R}^{*}} \varphi (rx). $$

Hence

$$ \varphi (y_{0})=\lim _{n\rightarrow \infty }\varphi _{n}(r_{n}x_{n})= \lim_{n\rightarrow \infty }c_{n}\le c_{0}. $$
(23)

On the other hand, since \(\varphi ^{\prime }(y_{0})=0\), setting \(z_{0}=y_{0}/\|y_{0}\|\), one has

$$ g_{z_{0}}^{\prime }\bigl( \Vert y_{0} \Vert \bigr)=\bigl\langle \varphi ^{\prime }\bigl( \Vert y_{0} \Vert z_{0}\bigr),z_{0}\bigr\rangle =\bigl\langle \varphi ^{\prime }(y_{0}),z_{0}\bigr\rangle =0. $$

Since, by (V7′), the map \(s\mapsto \int _{0}^{T}(V^{\prime }(sz_{0}),z_{0})/s\,dt\) is non-decreasing and, by the above, \(g_{z_{0}}^{\prime }( \Vert y_{0} \Vert )=0\), it follows that \(g_{z_{0}}^{\prime }\ge 0\) on \((0, \Vert y_{0} \Vert )\) and \(g_{z_{0}}^{\prime }\le 0\) on \(( \Vert y_{0} \Vert ,\infty )\). Hence \(g_{z_{0}}\) attains its maximum at \(\Vert y_{0} \Vert \), so that \(y_{0}\in \mathcal{M}^{*}\) by the definition of \(\mathcal{M}^{*}\). Consequently, we obtain

$$ c_{0}=\inf_{x\in \mathcal{M}^{*}} \varphi (x)\le \varphi (y_{0}). $$
(24)

Combining (23) with (24), we conclude that the claim holds. Arguing as in the proof of Theorem 1.1, we can prove that \(y_{0}\) has T as its minimal period. □