1 Introduction

In the sixties, a remarkable paper of Loud [8] systematically studied the perturbation of equilibria for second-order differential equations of the form

$$\begin{aligned} \ddot{x}+g(x,\dot{x})=0, \end{aligned}$$
(1)

where g is a regular function. A T-periodic perturbation (forcing term) is introduced in (1) in the following way

$$\begin{aligned} \ddot{x}+g(x,\dot{x})=\varepsilon f(t,x,\dot{x},\varepsilon ), \end{aligned}$$
(2)

with \(f(\cdot ,x,\dot{x},\varepsilon )\) T-periodic and \(\varepsilon \) a small parameter (see [1, 3] for related results and [10] when \(\varepsilon \) is not a small parameter). Starting with an equilibrium \(x\equiv x_{0}\) of (1), the author investigates the existence of a T-periodic solution of (2) (called a T-periodic continuation of \(x_{0}\)) of the form

$$\begin{aligned} x_{\varepsilon }(t)=x_{0}+\varepsilon x_{1}(t)+O(\varepsilon ^{2}), \quad \varepsilon \rightarrow 0, \end{aligned}$$
(3)

where \(x_{1}(t)\) is a T-periodic function. The existence and stability of a continuation like (3) depend on the properties of the variational equation of (1) around \(x\equiv x_{0}\), given by

$$\begin{aligned} \ddot{y}+g_{x}(x_{0},0)y+g_{\dot{x}}(x_{0},0)\dot{y}=0. \end{aligned}$$
(4)

Considering (4) as a T-periodic equation, all the possibilities for the associated Floquet multipliers were studied in [8] by means of several versions of the implicit function theorem [3, 7, 8]. See for instance the results in [7] when the rank of the associated Jacobian matrix of the nonlinear system is 0 or 1. For more examples, we refer the reader to [6] and Chapter 7 of [2].

If the Floquet multipliers are both equal to 1 (case IV in [8]), it is worth mentioning the following version of Theorems 2.6 and 2.7 therein, adapted for a conservative oscillator of pendulum type.

Theorem 1

[8] Consider the equation

$$\begin{aligned} \ddot{x}+g(x)=\varepsilon p(t), \end{aligned}$$

where \(g\in C^{2}(\mathbb {R}), p\) is a continuous \(2\pi \)-periodic function and \(\varepsilon \) is a small real parameter. Assume that there exists \(x_{0}\) such that \(g(x_{0})=0, g^{\prime }(x_{0})=n^{2}\) for some \(n\in \mathbb {N}\) and \(g^{\prime \prime }(x_{0})\ne 0\). Then, there exists a \(2\pi \)-periodic continuation \(x_{\varepsilon }(t)\) of the equilibrium \(x\equiv x_{0}\) of the unperturbed equation \((\varepsilon =0)\) if the following conditions hold

$$\begin{aligned}&\int _{0}^{2\pi }p(s)\sin ns\,\mathrm{d}s =\int _{0}^{2\pi }p(s)\cos n s \,\mathrm{d}s=0,\end{aligned}$$
(5)
$$\begin{aligned}&b_{2n}^{2}+a_{2n}^{2}-2a_{0}a_{2n} -3a_{0}^{2}\ne 0, \end{aligned}$$
(6)

where \(a_{k}, b_{k}\) are the Fourier coefficients of p(t). Moreover, if

$$\begin{aligned} b_{2n}^{2}+a_{2n}^{2}-2a_{0}a_{2n}-3a_{0}^{2}>0, \end{aligned}$$
(7)

then \(x_{\varepsilon }\) is unstable in the sense of Lyapunov.
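As an illustration (not taken from [8]), the conditions (5)–(7) can be checked numerically for a concrete forcing. The sketch below assumes the convention \(a_{k}=\frac{1}{\pi }\int _{0}^{2\pi }p(s)\cos ks\,\mathrm{d}s\), \(b_{k}=\frac{1}{\pi }\int _{0}^{2\pi }p(s)\sin ks\,\mathrm{d}s\), with \(a_{0}\) taken as the mean value of p; the sample forcing \(p(t)=\cos 2t\) with \(n=1\) is our own choice.

```python
import math

def fourier(p, k, num=4096):
    # Trapezoidal rule on [0, 2*pi]: spectrally accurate for periodic integrands.
    h = 2 * math.pi / num
    a = sum(p(i * h) * math.cos(k * i * h) for i in range(num)) * h / math.pi
    b = sum(p(i * h) * math.sin(k * i * h) for i in range(num)) * h / math.pi
    return a, b

n = 1
def p(t):
    return math.cos(2 * t)        # sample 2*pi-periodic forcing (our choice)

a_n, b_n = fourier(p, n)          # condition (5): both must vanish
a0 = fourier(p, 0)[0] / 2         # mean value of p
a2n, b2n = fourier(p, 2 * n)
crit = b2n**2 + a2n**2 - 2 * a0 * a2n - 3 * a0**2   # quantity in (6)-(7)
```

For this forcing the orthogonality condition (5) holds and the quantity in (6)–(7) equals 1 > 0, so Theorem 1 would yield an unstable continuation.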

Notice that this result does not provide any information about the existence and stability of periodic solutions for the paradigmatic n-resonant forced pendulum

$$\begin{aligned} \ddot{x}+n^{2}\sin x=\varepsilon \sin t, \end{aligned}$$
(8)

when \(n\in \mathbb {N}\), because for the equilibrium \(x\equiv 0\) of the unperturbed equation we have \(g^{\prime \prime }(0)=0\). This motivates the two main questions in this document:

  • How can one study the existence of an odd \(2\pi \)-periodic solution of (8) obtained by continuation of \(x\equiv 0\)?

  • What stability properties does the possible odd \(2\pi \)-periodic continuation have?

In order to answer these questions, we shall follow the Loudian approach, making clear the kind of implicit function theorems that must be used. Thus, for the first question our objective is twofold. The first goal is to study a special implicit function theorem at rank 0, extending the results in [7], which is very general for dynamical applications. This will be stated and proved in Sect. 2. The second goal is to obtain odd \(2\pi \)-periodic solutions for resonant oscillators of the form

$$\begin{aligned} \ddot{x}+g(x)=\varepsilon p(t), \end{aligned}$$
(9)

with p(t) an odd \(2\pi \)-periodic continuous function and g a smooth odd function that satisfies

$$\begin{aligned} g^{\prime }(0)= n^{2} \quad g^{\prime \prime \prime }(0)\ne 0, \end{aligned}$$
(10)

for some \( n\in \mathbb {N}\). This will be established and proved in Sect. 3 giving some sufficient conditions on p (Theorem 3) for obtaining one or three odd \(2\pi \)-periodic solutions for small \(\varepsilon .\)

The second main question is answered in Sect. 4 (Theorem 4), where we present a stability criterion for the continuation of (9). Moreover, we prove the existence and uniqueness of an odd \(2\pi \)-periodic continuation of (8), which is linearly stable.

The attentive reader may have noticed that the first question is a classical matter with ample references in the literature, involving several methods of nonlinear analysis at resonance. Indeed, the Lyapunov–Schmidt method, more generally known as the alternative method, is the traditional treatment for this type of problem [2]. More explicitly, notice that Eq. (8) can be rewritten as follows

$$\begin{aligned} \ddot{x}+n^{2}x=\varepsilon p(t)+\frac{n^{2}}{6}x^{3}+\,\,h.o.t.,\,\,\,p(t)=\sin t, \end{aligned}$$
(11)

and the resonant term \(n^{2}x\) is responsible for the non-invertibility of \({\mathcal {L}}: x\rightarrow \ddot{x}+n^{2}x,\) the linear part at \(x=0\) of the differential operator associated with (8) on the Banach space of \(2\pi \)-periodic continuous functions. In consequence, it is not possible to apply the Schauder fixed point theorem, and it is necessary to work on a complement of the kernel of \({\mathcal {L}}\).

The Lyapunov–Schmidt method follows this strategy and enables us to prove the existence of one, two or three odd \(2\pi \)-periodic solutions of (11) (one can apply Theorem 5.1 in [2] to the boundary value problem \(x(0)=x(\pi )=0\)). These kinds of results are not new; however, the stability properties of the periodic solutions obtained in this way are absent from the literature on bifurcations. See for instance Chapter 8 of [2] for the existence of even \(2\pi \)-periodic solutions of a Duffing equation with forcing \(A \cos t\), and also Theorem 5.1 therein for cubic nonlinearities in oscillators like (9) for the corresponding Dirichlet problems.

Our main goal is to trace a clear path toward two objectives for oscillators like (9) with odd symmetry, namely, to decide the multiplicity and the stability of odd \(2\pi \)-periodic solutions in terms of a generic odd periodic forcing p. With this purpose in mind, it will be necessary to revisit the existence of odd periodic solutions following the Loudian approach, since their linear stability will strongly depend on the first approximation \(x_1(t)\) in the asymptotic expansion (3). Unfortunately, the Lyapunov–Schmidt method does not provide such an approximation.

Finally, it is important to remark some advantages of our methodology over the conventional treatment in [2]:

  • First, we focus on the linear stability of the continued periodic solutions.

  • Second, the treatment given in [2], Chapter 8 for the Duffing equations uses many parameters (at least four), whereas we use only one, namely \(\varepsilon \). The parameter \(\varepsilon \) is the argument of the discriminant function which decides the linear stability (see Sect. 4). In [2] the resonant term is perturbed in the form \((n^{2}-\lambda _1)x\), and the number of periodic solutions depends on the sign of \(\lambda _1\) and its relation with the amplitude of the forcing (Theorem 7.3). In the present paper, the multiplicity is expressed directly by means of the forcing p (see Theorem 3).

  • Third, the framework in [2] is the implicit function theorem in Banach spaces; in contrast, our treatment is based on implicit functions in finite dimension with some degeneracy conditions (rank 0). In this sense, our approach is simpler.

  • Fourth, the implicit function theorem version used here is quite degenerate because it is assumed that all derivatives (including those with respect to the parameter) are null up to order \(m\ge 2\) at the bifurcation point. Notice that this degeneracy condition is not considered in [2]. For instance, in this book the authors have not studied the case when the first nonzero jet in the Taylor expansion contains a pure parameter term, i.e., a power of the parameter alone. So, Theorem 2 is not in [2]. Another difference with respect to [2] is that Theorem 2 does not require a priori bounds (see Lemmas 3.1, 6.1 and 7.1 therein).

Let us fix some notation to be used in the following: for any multi-index \(\alpha \in \mathbb {N}_{0}^{n+1}\) and \(x=(x_{1},x_{2},\ldots , x_{n+1})\in \mathbb {R}^{n+1}\), we define

$$\begin{aligned} |\alpha |=\sum _{i=1}^{n+1}|\alpha _{i}| \quad \text {and} \quad x^{\alpha }=\prod _{i=1}^{n+1}x_{i}^{\alpha _{i}}. \end{aligned}$$

For a function \(f:\mathbb {R}^{n+1}\rightarrow \mathbb {R}\), we define the \(\alpha \)-partial derivative \(D^{\alpha }f\) as

$$\begin{aligned} D^{\alpha }f=\frac{\partial ^{|\alpha |}f}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n+1}^{\alpha _{n+1}}}. \end{aligned}$$

In the whole document, \(\displaystyle {\left\| \cdot \right\| }\) denotes the Euclidean norm in \(\mathbb {R}^{l}\), \(D^{k}_{\varepsilon }\) denotes the kth total derivative with respect to the parameter \(\varepsilon \) and \(\partial ^{j}_{u}\) denotes the jth derivative with respect to the real variable u.

2 Implicit functions with rank 0

In this section, we present the first main result of this document, related to a special version of the implicit function theorem.

Theorem 2

Let \(F \in C^{\infty }(\mathbb {R}^{n}\times \mathbb {R};\mathbb {R}^{n})\), with \(F(0)=0\) and \(D^{\alpha }F(0)=0\) for \(|\alpha | =1,\ldots , m-1\), for a certain \(m\ge 2\). Define \(H(x,\varepsilon )\) as the homogeneous Taylor polynomial of F of degree m at the origin and let \(P(x)=H(x,1)\). Assume that there exists \(y_{0}\) such that \(P(y_{0})=0\) and \(\det DP(y_{0})\ne 0\). Then, there exist \(\delta >0\) and a unique function \(\varphi \in C^{\infty }(I_{\delta },\mathbb {R}^{n}), I_{\delta }=]-\delta ,\delta [\,\), such that for all \(\varepsilon \in I_{\delta }\):

$$\begin{aligned} \varphi (0)=0, \quad \varphi '(0)=y_{0} \quad \text {and} \quad F(\varphi (\varepsilon ),\varepsilon )=0. \end{aligned}$$

Proof

From Taylor's formula we have:

$$\begin{aligned} F(x,\varepsilon )=H(x,\varepsilon )+{\mathcal {R}}(x,\varepsilon ), \end{aligned}$$

in the \((n+1)\)-ball \(\displaystyle {B_{\rho }=\left\{ (x,\varepsilon ) \in \mathbb {R}^{n}\times \mathbb {R}: \left\| (x,\varepsilon )\right\| < \rho \right\} }\) for a certain \(\rho >0\), with

$$\begin{aligned} {\mathcal {R}}(x,\varepsilon )= \sum _{\mid \alpha \mid =m+1} \int _{0}^{1} D^{\alpha }F(t(x,\varepsilon ))\dfrac{(t-1)^{m+1}}{(m+1)!} (x,\varepsilon )^{\alpha } \mathrm{d}t. \end{aligned}$$

Notice that for each \(\varepsilon \ne 0\) we have

$$\begin{aligned} \varepsilon ^{-m}F(x,\varepsilon )=0\Leftrightarrow P\left( \dfrac{x}{\varepsilon }\right) +\varepsilon ^{-m}{\mathcal {R}}(x,\varepsilon )=0, \end{aligned}$$

by the homogeneity of H. This suggests the change of variables \(x=\varepsilon y\). Therefore, for each \(R>0\) we consider the auxiliary equation

$$\begin{aligned} G(y,\varepsilon )=P(y)+\varepsilon ^{-m}{\mathcal {R}}(\varepsilon y,\varepsilon )=0, \end{aligned}$$

in the domain \(\displaystyle {U_{R}=\left\{ \left\| y\right\|< R, |\varepsilon |< \rho /\sqrt{R^{2}+1}\right\} }\). A straightforward computation shows that \(U_{R}\) is transformed into a sub-domain of \(B_{\rho }\) via the previous change of variables. Observe that

$$\begin{aligned} \varepsilon ^{-m}{\mathcal {R}}(\varepsilon y,\varepsilon )= & {} \sum _{\mid \alpha \mid =m+1} \varepsilon ^{-m} (\varepsilon y,\varepsilon )^{\alpha }\int _{0}^{1} D^{\alpha }F(t(\varepsilon y,\varepsilon ))\dfrac{(t-1)^{m+1}}{(m+1)!} \mathrm{d}t,\\= & {} \sum _{\mid \alpha \mid =m+1} \varepsilon ^{\mid \alpha \mid -m} (y,1)^{\alpha }\int _{0}^{1} D^{\alpha }F(t\varepsilon (y,1))\dfrac{(t-1)^{m+1}}{(m+1)!}\mathrm{d}t, \end{aligned}$$

in consequence

$$\begin{aligned} G(y,\varepsilon )=P(y)+\varepsilon M(y,\varepsilon ), \end{aligned}$$
(12)

with

$$\begin{aligned} M(y,\varepsilon )= \sum _{\mid \alpha \mid =m+1} (y,1)^{\alpha }\int _{0}^{1}D^{\alpha }F(t\varepsilon (y,1)) \dfrac{(t-1)^{m+1}}{(m+1)!}\mathrm{d}t. \end{aligned}$$

From Leibniz's rule for differentiation under the integral sign, it can be proved that \(M \in C^{\infty }(U_{R})\), hence \(G\in C^{\infty }(U_{R})\). Furthermore, we can choose \(R>0\) such that \(\left\| y_{0}\right\| <R\), which implies \((y_{0},0)\in U_{R}\). Under these conditions, we can apply the standard implicit function theorem to the equation \(G(y,\varepsilon )=0\) on \(U_{R}\) at the point \((y_{0},0)\). In fact, from (12) it follows that

$$\begin{aligned} G(y_{0},0)=P(y_{0})=0, \end{aligned}$$

and

$$\begin{aligned} D_{y}G(y,\varepsilon )\Big |_{(y_{0},0)}= & {} \left( DP(y)+\varepsilon D_{y}M(y,\varepsilon )\right) \Big |_{(y_{0},0)}\\= & {} D P(y_{0}), \end{aligned}$$

which is a non-singular matrix by hypothesis. Then, there exist \(\delta >0\) and a unique function \(\psi \in C^{\infty }(I_{\delta },\mathbb {R}^{n}), I_{\delta }=]-\delta ,\delta [\,\), such that

$$\begin{aligned} \psi (0)=y_{0} \quad \text {and} \quad G(\psi (\varepsilon ),\varepsilon )=0, \quad \forall \varepsilon \in I_{\delta }. \end{aligned}$$

This implies that \(\displaystyle {F(\varepsilon \psi (\varepsilon ),\varepsilon )=0}\) for all \(\varepsilon \in I_{\delta }\). Taking \(\varphi (\varepsilon )=\varepsilon \psi (\varepsilon )\), the proof is complete. \(\square \)
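To illustrate Theorem 2 in the scalar case \(n=1\), consider the toy function \(F(x,\varepsilon )=x^{3}-\varepsilon ^{3}+x^{4}\) (our own example, with \(m=3\)): here \(P(y)=y^{3}-1\), \(y_{0}=1\) and \(DP(y_{0})=3\ne 0\), so the theorem predicts a branch \(\varphi (\varepsilon )=\varepsilon +O(\varepsilon ^{2})\). A minimal numerical check with a finite-difference Newton iteration:

```python
def F(x, eps):
    # All derivatives up to order 2 vanish at the origin (m = 3);
    # cubic part H(x, eps) = x**3 - eps**3, so P(y) = y**3 - 1 and y0 = 1.
    return x**3 - eps**3 + x**4

def newton(f, x0, steps=50, h=1e-7):
    x = x0
    for _ in range(steps):
        d = (f(x + h) - f(x - h)) / (2 * h)   # finite-difference derivative
        x -= f(x) / d
    return x

eps = 1e-3
phi = newton(lambda x: F(x, eps), eps)   # zero of F(., eps) near the predicted branch
ratio = phi / eps                        # should approach phi'(0) = y0 = 1
```

The computed ratio approaches 1 as \(\varepsilon \rightarrow 0\), in agreement with \(\varphi '(0)=y_{0}\).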

Corollary 1

Let \(F\in C^{\infty }(\mathbb {R}^{2},\mathbb {R}),F=F(x,\varepsilon )\) such that

$$\begin{aligned} F(0,0)=0, \quad DF(0,0)=0 \quad \text {and} \quad D^{2}F(0,0)=0. \end{aligned}$$

Assume that \(\displaystyle {F_{xxx}(0,0)\ne 0}\). We define

$$\begin{aligned} a=\frac{3F_{x x \varepsilon }}{F_{xxx}}, \quad b=\frac{3F_{x \varepsilon \varepsilon }}{F_{xxx}}, \quad c=\frac{F_{\varepsilon \varepsilon \varepsilon }}{F_{xxx}}, \end{aligned}$$

where all the coefficients are evaluated at (0, 0). Let \(\displaystyle {{\mathcal {D}}=\left( \frac{q}{2}\right) ^{2} +\left( \frac{p}{3}\right) ^{3}}\), with

$$\begin{aligned} p=\frac{3b-a^{2}}{3}\quad \text {and} \quad q=\frac{2a^{3}-9ab+27c}{27}. \end{aligned}$$
  1. If \({\mathcal {D}}>0\), then there exist \(\delta >0\) and a unique function \(\varphi \in C^{\infty }(I_{\delta },\mathbb {R}), I_{\delta }=]-\delta ,\delta [\,\), such that \(\varphi (0)=0\) and \(F(\varphi (\varepsilon ),\varepsilon )=0\) for all \(\varepsilon \in I_{\delta }\).

  2. If \({\mathcal {D}}<0\), then there exist three different functions \(\varphi _{i}\in C^{\infty }(I_{\delta },{\mathbb {R}}),i=1,2,3\), such that \(\varphi _{i}(0)=0\) and \(F(\varphi _{i}(\varepsilon ),\varepsilon )=0\) for all \(\varepsilon \in I_{\delta }\). In both cases, \(\varphi _{i}^{\prime }(0)\) is computed as a real root of the polynomial \(y^{3}+ay^{2}+by+c\).

Proof

For this case, the polynomial P(x) of Theorem 2 is given by

$$\begin{aligned} P(y)=H(y,1)=\dfrac{F_{xxx}(0,0)}{6}\left( y^{3}+ay^{2}+by+c\right) . \end{aligned}$$

Applying the criterion for simple real roots of a polynomial of degree three based on Cardano's formula (see [5]), together with Theorem 2, the conclusions follow directly. \(\square \)
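The root classification behind Corollary 1 can be sketched computationally. The helper below (our own illustration) evaluates \(p, q, {\mathcal {D}}\) and returns the real roots, using the standard Cardano formula when \({\mathcal {D}}>0\) and its trigonometric variant when \({\mathcal {D}}<0\); the degenerate case \({\mathcal {D}}=0\) is excluded, as in the corollary.

```python
import math

def cardano(a, b, c):
    """Real roots of y**3 + a*y**2 + b*y + c, classified by the discriminant D
    of Corollary 1 (the case D == 0 is not handled)."""
    p = (3 * b - a * a) / 3
    q = (2 * a**3 - 9 * a * b + 27 * c) / 27
    D = (q / 2) ** 2 + (p / 3) ** 3
    if D > 0:                                    # one simple real root
        cbrt = lambda v: math.copysign(abs(v) ** (1 / 3), v)
        y = cbrt(-q / 2 + math.sqrt(D)) + cbrt(-q / 2 - math.sqrt(D)) - a / 3
        return D, [y]
    # D < 0: three distinct real roots (trigonometric version of Cardano's formula)
    r = 2 * math.sqrt(-p / 3)
    theta = math.acos(3 * q / (p * r))
    return D, sorted(r * math.cos((theta - 2 * math.pi * k) / 3) - a / 3
                     for k in range(3))

D_one, roots_one = cardano(0, 0, -1)      # y**3 - 1: D > 0, single real root 1
D_three, roots_three = cardano(0, -1, 0)  # y**3 - y: D < 0, roots -1, 0, 1
```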

3 Continuation under resonance

This section is devoted to the study of the existence and multiplicity of odd \(2\pi \)-periodic solutions of the oscillator (9) that bifurcate from the equilibrium \(x\equiv 0\) of the unperturbed case (\(\varepsilon =0\)). The key idea is to establish an appropriate implicit equation in two variables, namely, the initial velocity and the parameter \(\varepsilon \), exploiting the symmetries of the oscillator (9).

Let \(x(t,\eta ,\varepsilon )\) be the solution of (9) satisfying the initial conditions

$$\begin{aligned} x(0)=0, \quad \dot{x}(0)=\eta . \end{aligned}$$
(13)

From now on, we assume that the function g is well behaved enough to guarantee that all the solutions are globally defined on \(]-\infty ,\infty [\) (for example, consider g bounded). In [4], Hamel showed that the existence of odd \(2\pi \)-periodic solutions of (9) can be reduced to the study of the boundary value problem

$$\begin{aligned} \ddot{x}+g(x)=\varepsilon p(t), \quad x(0)=x(\pi )=0, \end{aligned}$$
(14)

this follows by performing an odd and \(2\pi \)-periodic extension, exploiting the symmetries of Eq. (9) and its periodicity. The original idea of Hamel in [4] is what we know nowadays as the shooting method. Therefore, the problem (14) can be reduced to the study of the zeros of the function \(F\in C^{\infty }(\mathbb {R}^{2},\mathbb {R})\) defined by the implicit equation

$$\begin{aligned} F(\eta ,\varepsilon ):=x(\pi ,\eta ,\varepsilon )=0. \end{aligned}$$
(15)

In this frame, we have an implicit function problem at \((\eta ,\varepsilon )=(0,0)\) with rank 0. Indeed, notice that \(x_{\eta }(t,0,0)\) is the unique solution of the Cauchy problem

$$\begin{aligned} \ddot{y}+n^{2}y=0, \quad y(0)=0,\quad \dot{y}(0)=1, \end{aligned}$$

i.e., \(\displaystyle {x_{\eta }(t,0,0)=\frac{\sin nt}{n}}\); therefore \(F_{\eta }(0,0)=0\). In consequence, if we look for solutions \((\eta (\varepsilon ),\varepsilon )\) of (15) with \(\eta =\eta (\varepsilon )\) a regular function of \(\varepsilon \) such that \(\eta (0)=0\), then

$$\begin{aligned} F_{\eta }(\eta (\varepsilon ),\varepsilon )\eta ^{\prime } +F_{\varepsilon }(\eta (\varepsilon ),\varepsilon )=0, \end{aligned}$$

which involves the following necessary condition

$$\begin{aligned} F_{\varepsilon }(0,0)=\frac{(-1)^{n+1}}{n}\int _{0}^{\pi }p(s)\sin (ns)\mathrm{d}s=0, \end{aligned}$$

that is, the nth Fourier sine coefficient of p(t) must vanish. With the aim of applying Corollary 1 to \(F(\eta ,\varepsilon )\), we need to compute the derivatives of F up to the third order at (0, 0). For this purpose, we present two preliminary lemmas that will be proved in the “Appendix” section.
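The rank-0 degeneracy just derived can also be observed numerically. The sketch below (our own illustration) evaluates the shooting function \(F(\eta ,\varepsilon )=x(\pi ,\eta ,\varepsilon )\) of (15) for the pendulum nonlinearity \(g(x)=n^{2}\sin x\) with \(p(t)=\sin t\) and the sample value \(n=2\), using a hand-rolled RK4 integrator, and checks that both first-order partial derivatives of F vanish at the origin:

```python
import math

N0 = 2   # resonance order (our sample choice); p(t) = sin t

def rhs(t, x, v, eps):
    # First-order form of (9) with g(x) = N0**2 * sin(x) and p(t) = sin(t).
    return v, -N0**2 * math.sin(x) + eps * math.sin(t)

def shoot(eta, eps, steps=4000):
    """F(eta, eps) = x(pi, eta, eps), cf. (15): RK4 with x(0)=0, x'(0)=eta."""
    t, x, v = 0.0, 0.0, eta
    h = math.pi / steps
    for _ in range(steps):
        k1 = rhs(t, x, v, eps)
        k2 = rhs(t + h/2, x + h/2 * k1[0], v + h/2 * k1[1], eps)
        k3 = rhs(t + h/2, x + h/2 * k2[0], v + h/2 * k2[1], eps)
        k4 = rhs(t + h, x + h * k3[0], v + h * k3[1], eps)
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return x

# rank-0 degeneracy at the origin: both first-order partials of F vanish
d = 1e-6
F_eta = (shoot(d, 0.0) - shoot(-d, 0.0)) / (2 * d)
F_eps = (shoot(0.0, d) - shoot(0.0, -d)) / (2 * d)
```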

Lemma 1

The unique solution of the problem

$$\begin{aligned} \ddot{x}+\omega ^{2}x=f(t), \quad x(0)=\dot{x}(0)=0, \end{aligned}$$

where \(f\in C[0,T]\), and \(\omega >0\), is given by

$$\begin{aligned} x(t)=\frac{1}{\omega }\int _{0}^{t}\sin (\omega (t-s))f(s)\,\mathrm{d}s. \end{aligned}$$
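As a quick sanity check of Lemma 1 (not part of the paper's argument), one can compare the integral formula with the closed-form solution for the sample data \(\omega =2\), \(f(t)=\sin t\), for which \(x(t)=\frac{\sin t}{3}-\frac{\sin 2t}{6}\):

```python
import math

omega = 2.0
f = math.sin     # sample data (our choice): x'' + 4x = sin t, x(0) = x'(0) = 0

def x_formula(t, m=2000):
    # Simpson evaluation of (1/omega) * int_0^t sin(omega*(t - s)) f(s) ds.
    h = t / (2 * m)
    g = lambda s: math.sin(omega * (t - s)) * f(s)
    total = sum(g(2*i*h) + 4*g((2*i + 1)*h) + g((2*i + 2)*h) for i in range(m))
    return total * h / (3 * omega)

def x_exact(t):
    # closed-form solution for this particular forcing
    return math.sin(t) / 3 - math.sin(2 * t) / 6

err = max(abs(x_formula(t) - x_exact(t)) for t in (0.5, 1.5, 3.0))
```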

Lemma 2

Let \(x(t,\xi ,\eta ,\varepsilon )\) be the general solution of (9) satisfying the initial conditions

$$\begin{aligned} x(0)=\xi , \quad \dot{x}(0)=\eta . \end{aligned}$$

Then, for all \(\alpha \in \mathbb {N}^{4}_{0}\) with \(\alpha _{1}=0\) and \(\vert \alpha \vert =k,\,k=2,3\), the function \(y(t)=D^{\overline{\alpha }}x(t;0,0,0)\), where \(\overline{\alpha }=(\alpha _{2},\alpha _{3},\alpha _{4})\), satisfies the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} \ddot{y}+n^{2}y=-g^{(k)}(0)(x_{\xi },x_{\eta }, x_{\varepsilon })^{\overline{\alpha }}, \\ y(0)=\dot{y}(0)=0. \end{array} \right. \end{aligned}$$

Combining Lemmas 1 and 2, we obtain the following results for the function \(F(\eta ,\varepsilon )\):

$$\begin{aligned} F_{\eta \eta }(0,0)=F_{\eta \varepsilon }(0,0)=F_{\varepsilon \varepsilon }(0,0)=0, \end{aligned}$$
(16)

and

$$\begin{aligned} F_{\eta \eta \eta }(0,0)= & {} \kappa \int _{0}^{\pi }\sin ^{4}(ns)\,\mathrm{d}s=\frac{3\kappa \pi }{8},\nonumber \\ F_{\eta \eta \varepsilon }(0,0)= & {} \kappa \int _{0}^{\pi }\sin ^{3}(ns) \int _{0}^{s}\sin (n(s-u))p(u)\,\mathrm{d}u\,\mathrm{d}s,\nonumber \\ F_{\eta \varepsilon \varepsilon }(0,0)= & {} \kappa \int _{0}^{\pi }\sin ^{2}(ns) \left( \int _{0}^{s}\sin (n(s-u))p(u)\,\mathrm{d}u\right) ^{2}\,\mathrm{d}s,\nonumber \\ F_{\varepsilon \varepsilon \varepsilon }(0,0)= & {} \kappa \int _{0}^{\pi } \sin (ns)\left( \int _{0}^{s}\sin (n(s-u))p(u)\,\mathrm{d}u\right) ^{3}\,\mathrm{d}s, \end{aligned}$$
(17)

with \(\displaystyle {\kappa =\frac{(-1)^{n}g^{\prime \prime \prime }(0)}{n^{4}}}\ne 0\).

Now, we are able to present the second main result.

Theorem 3

Consider the n-resonant oscillator (9) with g a smooth function satisfying (10) and p an odd continuous \(2\pi \)-periodic function satisfying

$$\begin{aligned} \int _{0}^{\pi }p(s)\sin (ns)\mathrm{d}s=0. \end{aligned}$$
(18)

Let

$$\begin{aligned} a= & {} \frac{8}{\pi }\int _{0}^{\pi }\sin ^{3}(ns)\left( \int _{0}^{s} \sin (n(s-u))p(u)\,\mathrm{d}u\right) \,\mathrm{d}s,\nonumber \\ b= & {} \frac{8}{\pi }\int _{0}^{\pi }\sin ^{2}(ns)\left( \int _{0}^{s} \sin (n(s-u))p(u)\,\mathrm{d}u\right) ^{2}\,\mathrm{d}s,\nonumber \\ c= & {} \frac{8}{3\pi }\int _{0}^{\pi }\sin (ns)\left( \int _{0}^{s} \sin (n(s-u))p(u)\,\mathrm{d}u\right) ^{3}\,\mathrm{d}s. \end{aligned}$$
(19)

with \(p, q, {\mathcal {D}}\) defined as in Corollary 1. Then:

  I. If \({\mathcal {D}}>0\), there exists an odd \(2\pi \)-periodic continuation \(x_{\varepsilon }(t)\) of the equilibrium \(x\equiv 0\).

  II. If \({\mathcal {D}}<0\), there exist three different odd \(2\pi \)-periodic continuations \(x_{i,\varepsilon }(t), i=1,2,3\), of the equilibrium \(x\equiv 0\).

Moreover, if \(x_{\varepsilon }(t)\) is a continuation given by case I or II, then \(x_{\varepsilon }(t)\) is of the form (3) with \(x_{1}(t)\) the solution of the initial value problem

$$\begin{aligned} \left\{ \begin{array}{l} \ddot{y}+n^{2}y=p(t),\\ y(0)=0, \, \dot{y}(0)=y_{0}, \end{array} \right. \end{aligned}$$
(20)

where \(y_{0}\) is a real root of the polynomial

$$\begin{aligned} {{\mathcal {Q}}}(y)=y^{3}+ay^{2}+by+c. \end{aligned}$$

Proof

From the previous discussion, the existence of odd \(2\pi \)-periodic solutions of (9) reduces to the study of (14). By (15), (16) and (17), the Taylor expansion of the function \(F(\eta ,\varepsilon )\) at the origin is given by

$$\begin{aligned} F(\eta ,\varepsilon )=\frac{\kappa \pi }{16}\eta ^{3}+\frac{F_{\eta \eta \varepsilon }}{2}\eta ^{2}\varepsilon +\frac{F_{\eta \varepsilon \varepsilon }}{2}\eta \varepsilon ^{2}+\frac{F_{\varepsilon \varepsilon \varepsilon }}{6}\varepsilon ^{3}+\cdots \end{aligned}$$

This is equivalent to

$$\begin{aligned} F(\eta ,\varepsilon )=\frac{\kappa \pi }{16}\Big (\eta ^{3}+a\eta ^{2} \varepsilon + b\eta \varepsilon ^{2}+c\varepsilon ^{3}+\cdots \Big ), \end{aligned}$$

with a, b and c given by (19). Applying Corollary 1, the conclusions in (I) and (II) follow directly. Notice that,

$$\begin{aligned} x_{\varepsilon }(t)=\varepsilon x_{1}(t)+O(\varepsilon ^{2}), \quad \varepsilon \rightarrow 0, \end{aligned}$$

where \(\displaystyle {x_{1}(t)=\left. \frac{d x_{\varepsilon }(t)}{d \varepsilon }\right| _{\varepsilon =0}}\). Besides, by construction \(x_{\varepsilon }(t)=x(t,\eta (\varepsilon ),\varepsilon )\) with \(\eta (\varepsilon )\) a \(C^{\infty }\) function such that \(\eta (0)=0\). Thus

$$\begin{aligned} \left. \frac{d x_{\varepsilon }(t)}{d \varepsilon }\right| _{\varepsilon =0} = x_{\eta }(t,0,0)\eta ^{\prime }(0)+\frac{\partial x}{\partial \varepsilon }(t,0,0), \end{aligned}$$

but

$$\begin{aligned} x_{\eta }(t,0,0)=\frac{\sin nt}{n}, \quad \text {and} \quad \frac{\partial x}{\partial \varepsilon }(t,0,0)=\frac{1}{n}\int _{0}^{t}\sin (n(t-s))p(s)\,\mathrm{d}s, \end{aligned}$$

then

$$\begin{aligned} x_{1}(t)=\frac{\sin nt}{n}\eta ^{\prime }(0)+\frac{1}{n}\int _{0}^{t} \sin (n(t-s))p(s)\,\mathrm{d}s, \end{aligned}$$

So, \(x_{1}(t)\) satisfies all the required conditions with \(\eta ^{\prime }(0)=y_{0}\), as a straightforward computation shows. \(\square \)
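The coefficients (19) are ordinary quadratures, so Theorem 3 is easy to apply in practice. The sketch below evaluates a, b, c for the concrete choice \(p(t)=\sin t\) and the sample order \(n=2\) (anticipating Example 1), computing the inner integral in closed form via Lemma 1; the resulting values are \(a=-1\), \(b=11/9\), \(c=-1/3\).

```python
import math

n = 2   # resonance order (sample choice); p(t) = sin t satisfies (18) for n >= 2

def inner(s):
    # int_0^s sin(n(s-u)) sin(u) du: n times the solution formula of Lemma 1
    return (n * math.sin(s) - math.sin(n * s)) / (n**2 - 1)

def simpson(f, lo, hi, m=2000):
    # composite Simpson rule with 2*m subintervals
    h = (hi - lo) / (2 * m)
    return h / 3 * sum(f(lo + 2*i*h) + 4*f(lo + (2*i + 1)*h) + f(lo + (2*i + 2)*h)
                       for i in range(m))

# the three quadratures of (19)
a = 8 / math.pi * simpson(lambda s: math.sin(n*s)**3 * inner(s), 0.0, math.pi)
b = 8 / math.pi * simpson(lambda s: math.sin(n*s)**2 * inner(s)**2, 0.0, math.pi)
c = 8 / (3 * math.pi) * simpson(lambda s: math.sin(n*s) * inner(s)**3, 0.0, math.pi)
```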

Example 1

Consider the resonant forced pendulum

$$\begin{aligned} \ddot{x}+ n_{0}^{2}\sin x=\varepsilon \sin t. \end{aligned}$$
(21)

Notice that in this case \(p(t)=\sin t\) is an odd, \(2\pi \)-periodic function for which the orthogonality condition (18) is satisfied for all \(n_{0}\in \mathbb {N}\), \(n_{0}\ge 2\). Then, the formulas given in (17) with \(\displaystyle {\kappa =\frac{(-1)^{n_{0}+1}}{n_{0}^{2}}}\) become

$$\begin{aligned} F_{\eta \eta \eta }(0,0)= & {} \frac{3\kappa \pi }{8}, \quad F_{\eta \eta \varepsilon }(0,0)=-\frac{3}{8}\frac{\kappa \pi }{n_{0}^{2}-1},\\ F_{\eta \varepsilon \varepsilon }(0,0)= & {} \frac{1}{8}\frac{(2n_{0}^{2}+3)\kappa \pi }{(n_{0}^{2}-1)^{2}}, \quad F_{\varepsilon \varepsilon \varepsilon }(0,0)=-\frac{3}{8}\frac{(2n_{0}^{2}+1)\kappa \pi }{(n_{0}^{2}-1)^{3}}. \end{aligned}$$

In consequence, the function \(F(\eta ,\varepsilon )\) has the Taylor’s expansion

$$\begin{aligned} F(\eta ,\varepsilon )= & {} \frac{3\kappa \pi }{48}{\eta }^{3}-\frac{3}{16}\frac{\kappa \pi }{n_{0}^{2}-1}\eta ^{2}\varepsilon +\frac{1}{16}\frac{(2n_{0}^{2} +3)\kappa \pi }{(n_{0}^{2}-1)^{2}}\eta \varepsilon ^{2}\\&\quad -\frac{3}{48}\frac{(2n_{0}^{2}+1)\kappa \pi }{(n_{0}^{2}-1)^{3}}\varepsilon ^{3}+\cdots , \end{aligned}$$

and this implies that the auxiliary polynomial \({\mathcal {Q}}(y)\) of the Theorem 3 takes the form

$$\begin{aligned} {{\mathcal {Q}}}(y)={y}^{3}-\frac{3}{n_{0}^{2}-1}y^{2} +\frac{(2n_{0}^{2}+3)}{(n_{0}^{2}-1)^{2}}y -\frac{(2n_{0}^{2}+1)}{(n_{0}^{2}-1)^{3}}. \end{aligned}$$
(22)

Cardano's formula gives \({\mathcal {D}}>0\) and the roots

$$\begin{aligned} y_{0}=\dfrac{1}{n_{0}^2-1}, \quad \dfrac{(1\pm n_{0}\sqrt{2}i)}{n^{2}_{0}-1}. \end{aligned}$$
(23)

In consequence, we get an odd \(2\pi \)-periodic continuation \(x_{\varepsilon }(t)\) of the equilibrium \(x \equiv 0\) with asymptotic expansion

$$\begin{aligned} x_{\varepsilon }(t)= & {} \varepsilon \Big (\dfrac{\sin n_{0}t}{n_{0} (n_{0}^2-1)}+\int _{0}^{t}\frac{\sin (n_{0}(t-s))\sin s}{n_{0}}\,\mathrm{d}s\Big )+O(\varepsilon ^{2}),\\= & {} \frac{\varepsilon \sin t}{n_{0}^2-1}+O(\varepsilon ^{2}). \end{aligned}$$
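The computations of Example 1 can be double-checked numerically: \(y_{0}=1/(n_{0}^{2}-1)\) must be a root of the cubic (22), and the Cardano discriminant must be positive for every \(n_{0}\ge 2\). A short verification sketch:

```python
def Q(y, n0):
    """Auxiliary cubic (22) for the forced pendulum at resonance order n0."""
    m = n0**2 - 1
    return y**3 - 3/m * y**2 + (2*n0**2 + 3)/m**2 * y - (2*n0**2 + 1)/m**3

def discriminant(n0):
    # Cardano data of Corollary 1 for the coefficients of Q
    m = n0**2 - 1
    a, b, c = -3/m, (2*n0**2 + 3)/m**2, -(2*n0**2 + 1)/m**3
    p = (3*b - a*a) / 3
    q = (2*a**3 - 9*a*b + 27*c) / 27
    return (q/2)**2 + (p/3)**3

residuals = [abs(Q(1/(n0**2 - 1), n0)) for n0 in range(2, 7)]
discs = [discriminant(n0) for n0 in range(2, 7)]
```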

4 Linear stability

In order to study the linear stability of the continuation \(x_{\varepsilon }(t)\) obtained in Theorem 3, we shall consider the classical method (see [9]) based on the discriminant function \(\Delta (\varepsilon )\) defined by

$$\begin{aligned} \Delta (\varepsilon )=\phi _{1}(2\pi ,\varepsilon ) +\dot{\phi _{2}}(2\pi ,\varepsilon ), \end{aligned}$$

where \(\displaystyle {\phi _{i}(t,\varepsilon )}, i=1,2\) are the solutions (called canonical solutions) of the variational equation

$$\begin{aligned} \ddot{y}+g^{\prime }(x_{\varepsilon }(t))y=0, \end{aligned}$$
(24)

satisfying the initial conditions

$$\begin{aligned} \phi _{1}(0,\varepsilon )=\dot{\phi }_{2}(0,\varepsilon )=1, \quad \dot{\phi }_{1}(0,\varepsilon )=\phi _{2}(0,\varepsilon )=0. \end{aligned}$$
(25)

Equation (24) is a typical Hill's equation with \(2\pi \)-periodic coefficient \(\displaystyle {a_{\varepsilon }(t)=g^{\prime }(x_{\varepsilon }(t))}\). Notice that for \(\varepsilon =0\) the canonical solutions are

$$\begin{aligned} \phi _{1}(t,0)=\cos nt, \quad \phi _{2}(t,0)=\frac{\sin n t}{n}, \end{aligned}$$

therefore \(\displaystyle {\Delta (0)=2}\). On the other hand, it is a well-known fact that the Floquet multipliers \(\lambda _{1}\), \(\lambda _{2}\) of (24) satisfy \(\lambda _{1}\lambda _{2}=1\) and are related to the discriminant as follows:

  i. Elliptic case: \(\left| \Delta (\varepsilon ) \right| <2 \Leftrightarrow \lambda _1=\overline{\lambda _2}\notin \mathbb {R},\,\, |\lambda _{1,2}|=1\);

  ii. Parabolic case: \(\left| \Delta (\varepsilon ) \right| =2 \Leftrightarrow \lambda _{1}=\lambda _{2}=\pm 1\);

  iii. Hyperbolic case: \(\left| \Delta (\varepsilon ) \right| >2 \Leftrightarrow \lambda _{1,2}\in \mathbb {R}, \, \vert \lambda _{1}\vert<1<\vert \lambda _{2}\vert \).

Equation (24) is stable (i.e., all its solutions are bounded) only in the elliptic case or in the parabolic case with monodromy matrix equal to \(\pm I_{d}\), where \(I_{d}\) is the identity matrix, i.e., \(\dot{\phi _1}(2\pi ,\varepsilon )=\phi _2(2\pi ,\varepsilon )=0\).

Lemma 3

\(\displaystyle {\Delta ^{(k)}(0)=0}\), for \(k=1,2,3\). Moreover, in the case \(p(t)=\sin t\) we have

$$\begin{aligned} \Delta ^{(4)}(0)=-\frac{3\pi ^{2}}{2(n^{2}-1)^{4}}. \end{aligned}$$
(26)

Proof

Observe that \(\partial ^{k}_{\varepsilon }\phi _{i}\) satisfies

$$\begin{aligned} \partial ^{2}_{t}\left( \partial ^{k}_{\varepsilon }\phi _{i}\right) +D^{k}_{\varepsilon }\left( g^{\prime }(x_{\varepsilon }(t))\phi _{i}\right) =0 \end{aligned}$$

for all \(k\in \mathbb {N}, i=1,2\), where

$$\begin{aligned} D^{k}_{\varepsilon }\left( g^{\prime }(x_{\varepsilon }(t))\phi _{i}\right) =g^{\prime }(x_{\varepsilon }(t))\partial ^{k}_{\varepsilon }\phi _{i} +F_{k,\phi _{i}}(t,\varepsilon ), \end{aligned}$$

where

$$\begin{aligned} F_{k,\phi _{i}}(t,\varepsilon )=\sum _{j=1}^{k} \left( \begin{array}{l} k\\ j \end{array}\right) D^{j}_{\varepsilon }(g^{\prime }(x_{\varepsilon }(t))) \partial ^{k-j}_{\varepsilon }\phi _{i}(t,\varepsilon ). \end{aligned}$$

In consequence \(\displaystyle {y(t)=\partial ^{k}_{\varepsilon }}\phi _{i}(t,0)\) satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} \ddot{y}+n^{2}y+ F_{k,\phi _{i}}(t,0)=0,\\ y(0)=0, \, \dot{y}(0)=0. \end{array} \right. \end{aligned}$$
(27)

From the method of variation of parameters, we have

$$\begin{aligned} \partial ^{k}_{\varepsilon }\phi _{i}(t,0)= & {} -\frac{1}{n}\int _{0}^{t} \sin (n(t-s))F_{k,\phi _{i}}(s,0)\mathrm{d}s,\nonumber \\ \partial ^{k}_{\varepsilon }\dot{\phi }_{i}(t,0)= & {} -\int _{0}^{t} \cos (n(t-s))F_{k,\phi _{i}}(s,0)\,\mathrm{d}s. \end{aligned}$$
(28)

Since \(\displaystyle {F_{1,\phi _{i}}(t,0)=0}\), it follows from (28) that \(\displaystyle {\Delta ^{\prime }(0)=0}\). Besides, it can be proved that for \(k=2,3\)

$$\begin{aligned} F_{k,\phi _{i}}(t,0)=\mu _{k}(t)\phi _{i}(t,0), \end{aligned}$$

for a certain continuous function \(\mu _{k}(t)\). Substituting this into (28), we obtain

$$\begin{aligned} \Delta ^{(k)}(0)=-\frac{1}{n}\int _{0}^{2\pi }\sin (2n\pi )\mu _{k}(s) \mathrm{d}s=0, \quad \text {for k=2,3.} \end{aligned}$$

Now, for \(k=4\) we have the following computations

$$\begin{aligned} \mu _{4}(t):=D^{4}_{\varepsilon }(g^{\prime }(x_{\varepsilon })) \big |_{\varepsilon =0}=g^{(5)}(0)x^{4}_{1}+3g^{\prime \prime \prime }(0)\left[ \left. \frac{\mathrm{d}^{2}x_{\varepsilon }}{\mathrm{d}\varepsilon ^{2}} \right| _{\varepsilon =0}\right] ^{2}+4g^{\prime \prime \prime }(0)x_{1}\left. \frac{\mathrm{d}^{3}x_{\varepsilon }}{\mathrm{d}\varepsilon ^{3}} \right| _{\varepsilon =0}, \end{aligned}$$

where \(x_{1}(t)\) is the solution of (20) and therefore

$$\begin{aligned} F_{4,\phi _{i}}(t,0)=6g^{\prime \prime \prime }(0)x^{2}_{1}(t)\partial ^{2}_{\varepsilon }\phi _{i}(t,0) +\mu _{4}(t)\phi _{i}(t,0). \end{aligned}$$

Hence

$$\begin{aligned} \Delta ^{(4)}(0)= & {} \mathcal {I}_{1}+\mathcal {I}_{2}-\frac{1}{n} \int _{0}^{2\pi }\sin (2n\pi )\mu _{4}(s)\mathrm{d}s,\\= & {} \mathcal {I}_{1}+\mathcal {I}_{2}, \end{aligned}$$

with

$$\begin{aligned} \mathcal {I}_{1}= & {} -\frac{6g^{\prime \prime \prime }(0)}{n}\int _{0}^{2\pi } \sin (n(2\pi -s))x^{2}_{1}(s)\partial ^{2}_{\varepsilon }\phi _{1}(s,0)\,\mathrm{d}s,\\ \mathcal {I}_{2}= & {} -6g^{\prime \prime \prime }(0)\int _{0}^{2\pi } \cos (n(2\pi -s))x^{2}_{1}(s)\partial ^{2}_{\varepsilon }\phi _{2}(s,0)\,\mathrm{d}s. \end{aligned}$$

On the other hand, \(\displaystyle {\partial ^{2}_{\varepsilon }\phi _{i}(t,0)}\) satisfies for \(i=1,2\)

$$\begin{aligned} \left\{ \begin{array}{ll} \ddot{y}+n^{2}y+g^{\prime \prime \prime }(0)x_{1}^{2}(t) \phi _{i}(t,0)=0,&{}\\ y(0)=0,\,\,\dot{y}(0)=0,&{} \end{array} \right. \end{aligned}$$
(29)

therefore

$$\begin{aligned} \partial ^{2}_{\varepsilon }\phi _{i}(t,0)=-\frac{g^{\prime \prime \prime }(0)}{n}\int _{0}^{t}\sin (n(t-s))x_{1}^{2}(s)\phi _{i}(s,0)\,\mathrm{d}s. \end{aligned}$$
(30)

This implies

$$\begin{aligned} {\mathcal {I}}_{1}= & {} -6\left( \frac{g^{\prime \prime \prime }(0)}{n} \right) ^{2}\int _{0}^{2\pi }\sin ns \,x^{2}_{1}(s)\int _{0}^{s} \sin (n(s-u))\cos nu\, x_{1}^{2}(u)\,\mathrm{d}u\,\mathrm{d}s,\nonumber \\ {\mathcal {I}}_{2}= & {} 6\left( \frac{g^{\prime \prime \prime }(0)}{n} \right) ^{2}\int _{0}^{2\pi }\cos ns\, x^{2}_{1}(s)\int _{0}^{s}\sin (n(s-u))\sin nu\, x_{1}^{2}(u)\,\mathrm{d}u \,\mathrm{d}s. \end{aligned}$$
(31)

Applying Fubini's theorem on the triangle

$$\begin{aligned} \bigtriangleup _{2\pi }=\left\{ (u,s)\in \mathbb {R}^{2}: 0<u<s, 0<s<2\pi \right\} , \end{aligned}$$

it can be proved that the above integrals are equal; from now on, we denote \(\mathcal {I}=\mathcal {I}_{1}=\mathcal {I}_{2}.\)

In the particular case \(p(t)=\sin t\), we get \(\displaystyle {x_{1}(s)=\frac{\sin s}{n^{2}-1}}\). After several computations on the above double integrals, we arrive at

$$\begin{aligned} \mathcal {I}=-\frac{3\pi ^{2}}{4(n^{2}-1)^{4}}, \end{aligned}$$

and this completes the proof. \(\square \)

Now, we are able to state and prove the main result of this section.

Theorem 4

Let \(x_{\varepsilon }(t)\) be a continuation given by the Theorem 3 for the oscillator (9) and \(\displaystyle {x_{1}(t)=\left. \frac{\mathrm{d} x_{\varepsilon }(t)}{\mathrm{d} \varepsilon }\right| _{\varepsilon =0}}\). Let \(\mathcal {I}=\mathcal {I}_{1}\) with \(\mathcal {I}_{1}\) given by (31).

  1. a)

    If \(\mathcal {I}>0\) then \(x_{\varepsilon }(t)\) is unstable,

  2. b)

    If \(\mathcal {I}<0\) then \(x_{\varepsilon }(t)\) is linearly stable.

Proof

By Lemma 3 for small \(\varepsilon \), the discriminant function \(\Delta (\varepsilon )\) has the form

$$\begin{aligned} \Delta (\varepsilon )=2+\frac{\mathcal {I}}{12}\varepsilon ^{4} +O(\varepsilon ^{5}), \end{aligned}$$

thus \(\Delta (\varepsilon )\) has a strict local minimum (local maximum) at \(\varepsilon =0\) if \(\mathcal {I}>0\) (\(\mathcal {I}<0\)), respectively. In the first case \(\Delta (\varepsilon )>2\) for small \(\varepsilon \ne 0\) (hyperbolic case, instability), while in the second \(\Delta (\varepsilon )<2\) with \(\Delta (\varepsilon )\) close to 2 (elliptic case, linear stability). This finishes the proof. \(\square \)

Corollary 2

In the forced pendulum (21) with \(n_{0}\ge 2\), there exists a unique odd \(2\pi \)-periodic continuation of the lower equilibrium \(x\equiv 0\), which is linearly stable for small \(\varepsilon \).

Proof

Applying the conclusions of Example 1, Lemma 3 and Theorem 4, the proof follows directly. \(\square \)
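Corollary 2 can be illustrated numerically. The sketch below (our own experiment, with the arbitrary sample values \(n_{0}=2\) and \(\varepsilon =0.2\)) finds the odd \(2\pi \)-periodic continuation of (21) by shooting, integrates the variational equation (24) with the canonical initial conditions (25), and checks that \(\Delta (0)=2\) while \(\Delta (\varepsilon )\) drops slightly below 2, i.e., the elliptic (linearly stable) case:

```python
import math

N0, EPS = 2, 0.2   # resonance order and sample parameter value (our choice)

def rhs(t, y, eps):
    # y = (x, x', phi1, phi1', phi2, phi2'); the phi_i solve the Hill equation (24)
    x, v, p1, q1, p2, q2 = y
    a = N0**2 * math.cos(x)              # g'(x_eps(t)) for g(x) = N0**2 * sin(x)
    return [v, -N0**2 * math.sin(x) + eps * math.sin(t), q1, -a * p1, q2, -a * p2]

def integrate(eta, eps, t_end, steps=8000):
    """RK4 integration with x(0)=0, x'(0)=eta and the canonical data (25)."""
    y, t, h = [0.0, eta, 1.0, 0.0, 0.0, 1.0], 0.0, t_end / steps
    for _ in range(steps):
        k1 = rhs(t, y, eps)
        k2 = rhs(t + h/2, [u + h/2 * k for u, k in zip(y, k1)], eps)
        k3 = rhs(t + h/2, [u + h/2 * k for u, k in zip(y, k2)], eps)
        k4 = rhs(t + h, [u + h * k for u, k in zip(y, k3)], eps)
        y = [u + h/6 * (a1 + 2*a2 + 2*a3 + a4)
             for u, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Newton shooting for the odd continuation: solve x(pi, eta, EPS) = 0,
# starting from the first-order guess eta ~ EPS/(N0**2 - 1) of Example 1
eta = EPS / (N0**2 - 1)
for _ in range(8):
    f = integrate(eta, EPS, math.pi)[0]
    df = (integrate(eta + 1e-7, EPS, math.pi)[0] - f) / 1e-7
    eta -= f / df

delta0 = sum(integrate(0.0, 0.0, 2 * math.pi)[i] for i in (2, 5))  # Delta(0) = 2
yT = integrate(eta, EPS, 2 * math.pi)
delta = yT[2] + yT[5]                    # Delta(EPS) = phi1(2pi) + phi2'(2pi)
```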