1 Introduction

1.1 Non-smooth differential equations

A modern approach to differential equations with discontinuous right hand sides has arisen from applications (see [1]), and one can find in [2] an early theoretical framework for their study, enhanced by the formulation of [9]. Such equations are increasingly used in applications to describe a large variety of physical phenomena (refer to [20, 21]). Mathematically, these systems are often described by sets of piecewise-smooth ordinary differential equations whose phase space is partitioned, by a set of switching manifolds, into different regions (see [4, 11, 12]). It is common that they correspond to real electrical models, even in the presence of quite complicated singularities (see [5, 15]).

We start our analysis from a very simple model, which will appear to correspond to an instance of a distinguished class of differential equations.

Let us consider the problem of an automatic train, having to commute between two stations, and changing its direction at given times. We assume that the absolute value of the acceleration is constant, just switching its sign when the train changes its direction. We wish to identify conditions under which the train has a periodic behavior.

We assume that the variable \(y\) indicates the position of the train (algebraic distance from a given origin) and \(t\) the time and that the switching times are given by the zeros of a periodic function \(t\mapsto \sigma (t)\). Without loss of generality, we assume that \(\sigma \) is a piecewise-\({\fancyscript{C}}^{\infty }\) function, \(2\,\tau \)-periodic (\(\tau \in \mathbb R , \tau >0\)), such that \(\sigma (0)=0, \sigma (t)>0\) on \(]0,\tau [\) and \(\sigma (t)<0\) on \(]-\tau ,0[\).

Let us also introduce the sign function \(t\mapsto \mathop {\text{ sgn}}\nolimits (t)\) :

$$\begin{aligned} \mathop {\text{ sgn}}\nolimits (t)=1 \text{ if} \,t >0 \ , \ \mathop {\text{ sgn}}\nolimits (0)=0 \ , \ \mathop {\text{ sgn}}\nolimits (t)=-1 \text{ if} \,t <0. \end{aligned}$$

It is worth remarking that \(\mathop {\text{ sgn}}\nolimits \) models various electronic devices such as a physical relay, or corresponds to the output of a flip-flop device (bistable switch). Relays and bistable switches are very common in electronic items (for instance, digital clocks). We mention that a large class of autonomous systems involving \(\mathop {\text{ sgn}}\nolimits \), introduced in [2] as relay systems, have been studied. It is an important issue to find conditions for the existence of periodic solutions, since they are time-persistent phenomena. Another challenge is to give explicit formulas or algorithms to compute these solutions (see [10, 12, 13, 16]). The dynamics of non-autonomous equations are in general much more complex, especially when the system is forced by an external excitation (see the case of a simple model of a ship autopilot in [14]).

Returning to our setup, the problem of the train mentioned above corresponds to a non-autonomous second order equation which is, up to constants:

$$\begin{aligned} y^{\prime \prime }=-\mathop {\text{ sgn}}\nolimits \big (\sigma (t)\big ). \end{aligned}$$

In the case of such a simple equation of order \(2\), the solutions can be found readily, just by computing a few constants. It then appears (see Sect. 2.5) that periodic solutions correspond to a given speed at time \(0\) and that there is only one periodic solution once the position at the origin of time is fixed.

But our main goal in what follows is to show, by examining the case of order \(n\) equations, that this type of equation indeed belongs to a larger class of non-smooth differential equations whose periodic solutions can be given explicitly. Let us mention that such equations may of course be treated by means of Fourier series, but the issue here is to have explicit formulas expressing our solutions exactly with a finite number of terms.

1.2 From smooth to non-smooth order \(n\) equations

In many applications (in electronics for instance), one can easily replace a smooth parameter of a differential equation by a non-smooth one, just using semi-conductors. Mathematically, the effect is to jump from smooth dynamical systems theory to non-smooth dynamical systems theory, and the methods change drastically.

A large part of the dynamical systems theory linked to applications is concerned with periodicity: differential equations involving periodic excitations or periodic coefficient functions (refer for instance to [18]) and of course periodic solutions.

So let \(F\) be a \({\fancyscript{C}}^{\infty }\), \(2\,\tau _{F}\)-periodic function (where \(\tau _{F}\in \mathbb R , \tau _{F}>0\)) and let \(n\) be a strictly positive integer. Let us remark that the solutions of a smooth differential equation of the type

$$\begin{aligned} y^{(n)}=F(t), \end{aligned}$$

are expressed by:

$$\begin{aligned} y(t)=\varTheta _F^{[n]}(t)+P_{n-1}(t) \end{aligned}$$

where \(\varTheta _F^{[n]}\) is the \(n\)-primitive of \(F\) vanishing at \(0\) defined in Sect. 2.3, and \(P_{n-1}\) is a polynomial of degree \(n-1\). Moreover, \(\varTheta _F^{[n]}\) is periodic (of period \(2\,\tau _{F}\)) if and only if the average value of \(F\)

$$\begin{aligned} \frac{1}{2\,\tau _{F}} \int \limits _{-\tau _{F}}^{\tau _{F}} F(t) \, \text{ d}t \end{aligned}$$

is equal to \(0\) (see Lemma 9).

Under this hypothesis, there is only one periodic solution vanishing at the origin, namely \(\varTheta _F^{[n]}\). The algorithmic computation of \(\varTheta _F^{[n]}\) (involving a finite number of expressions) is in general an intractable problem. Two natural questions arise. What happens if \(F\) is no longer \({\fancyscript{C}}^{\infty }\)? Are periodic solutions given by explicit formulas?

In our approach, we will start from non-smooth differential equations which are piecewise-autonomous. The switch cuts the time into subintervals, and between two switches one has to deal with an autonomous equation. After having solved the problem for these basic differential equations, we will perturb them by a general periodic function, thus leaving the piecewise-autonomous world. After an analysis of the effects of such perturbations, we will exhibit a general class of equations for which the problem of finding the periodic solutions explicitly is solvable.

1.3 Differential equations involving sawtooth functions

We should remark that in the model proposed above, the regularity of the periodic function \(\sigma \) is not an issue. We just use the fact that its sign alternates from half-period to half-period.

It is worth introducing the sawtooth function \(\mathop {\text{ sw}}\nolimits \), whose graph is given in Fig. 1. It is defined in the following way. If \(t\in \mathbb N \), we put \(\mathop {\text{ sw}}\nolimits (t)=0\). If \(t> 0, t\notin \mathbb N \), there exists a unique \(m\in \mathbb N \) such that \(m< t <m+1\). We then define \(\mathop {\text{ sw}}\nolimits (t)=t-m\) if \(m\) is even, and \(\mathop {\text{ sw}}\nolimits (t)=t-m-1\) if \(m\) is odd. If \(t<0\), \(\mathop {\text{ sw}}\nolimits (t)=-\mathop {\text{ sw}}\nolimits (-t)\).

Let us denote by \(O(\mathbb Z )\) the set of the odd integers, and by \(\mathop {\text{ frac}}\nolimits \) the fractional part function (according to our notations, when \(t>0, t\notin \mathbb N , \mathop {\text{ frac}}\nolimits (t)=t-m\)).

The function \(\mathop {\text{ sw}}\nolimits \) is 2-periodic and coincides with the identity on \(]-1,1[\). It presents discontinuities on \(O(\mathbb Z )\), and for all \(t\in \mathbb R \setminus O(\mathbb Z )\), we have

$$\begin{aligned} \mathop {\text{ sw}}\nolimits (t)=\mathop {\text{ sgn}}\nolimits (t)\,\left(2\,\mathop {\text{ frac}}\nolimits \left(\frac{|t|+1}{2}\right)-1\right)\!. \end{aligned}$$
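For readers who wish to experiment numerically, the following minimal Python sketch (the helper names `sw_piecewise` and `sw_formula` are ours, not taken from any library) implements the piecewise definition of \(\mathop {\text{ sw}}\nolimits \) given above and checks it against the fractional-part identity on a grid that avoids the odd integers.

```python
import numpy as np

def sw_piecewise(t):
    """Sawtooth from the piecewise definition (scalar argument)."""
    if t < 0:
        return -sw_piecewise(-t)
    if t == np.floor(t):              # t in N  ->  sw(t) = 0
        return 0.0
    m = int(np.floor(t))              # m < t < m + 1
    return t - m if m % 2 == 0 else t - m - 1

def sw_formula(t):
    """Same function through the fractional-part identity (valid off the odd integers)."""
    frac = lambda x: x - np.floor(x)
    return np.sign(t) * (2.0 * frac((np.abs(t) + 1.0) / 2.0) - 1.0)

# the two expressions agree away from the jumps at the odd integers
ts = np.linspace(-4.3, 4.3, 1001)
ts = ts[np.abs(sw_formula(ts)) < 0.99]      # crude filter keeping points away from the jumps
assert all(abs(sw_piecewise(float(t)) - sw_formula(float(t))) < 1e-12 for t in ts)
```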

Observe that in electronics such a function can be produced from the periodic discharge of a capacitor.

In all that follows, the notation \(\tau \) shall represent a strictly positive real, and our basic \(2\,\tau \)-periodic function \(\sigma \) will simply be \(t\mapsto \mathop {\text{ sw}}\nolimits (t\,\tau ^{-1})\).

Fig. 1 Graph of the sawtooth function \(\mathop {\text{ sw}}\nolimits \)

The basic equation from which we start is a discontinuous equation of order \(n\):

$$\begin{aligned} y^{(n)}-\eta \, \mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits \left(t\,\tau ^{-1}\right)\big ) =0 \text{ with} \,\eta \in \mathbb R \ , \ \eta \ne 0 \ , \ n \in \mathbb N , \ n > 0. \end{aligned}$$
(1)

Notice that the function \(t\mapsto \mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t\,\tau ^{-1})\big )\) is piecewise-constant. So, on each interval on which \(\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t\,\tau ^{-1})\big )\) has a constant value, Eq. (1) appears as an autonomous differential equation.

We are interested in solutions of maximal regularity, which are \({\fancyscript{C}}^{n-1}\) at a countable set of points (including the origin), and \({\fancyscript{C}}^{\infty }\) everywhere else (in particular, on a dense set of points). Observe that, at any point \(t_0\), there are only two possibilities: either the solution is not \({\fancyscript{C}}^n\) or it is \({\fancyscript{C}}^{\infty }\).

1.4 Euler polynomials

We introduce here the Euler polynomials which will appear in the resolution of our problem. For all \(n\in \mathbb N \), the Euler polynomial \(E_n\) is a degree \(n\) polynomial defined by:

$$\begin{aligned} {E_n(t)= \sum _{k=0}^{n}\frac{1}{2^k} \left( \sum _{j=0}^{k}(-1)^{j} \binom{k}{j} \left(t+j\right)^n \right) } . \end{aligned}$$

For instance, \(E_0(t)=1, E_1(t)=t-\frac{1}{2}, E_2(t)=t^2-t\).
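As a quick sanity check of this double-sum definition, here is a short sympy sketch (the helper `euler_poly` is our own construction, not sympy's built-in) reproducing the three examples above and the derivative property \(E_n^{\prime }=n\,E_{n-1}\) recalled later in Lemma 3.

```python
import sympy as sp

t = sp.symbols('t')

def euler_poly(n):
    """Euler polynomial E_n(t) from the double sum above."""
    return sp.expand(sum(sp.Rational(1, 2**k)
                         * sum((-1)**j * sp.binomial(k, j) * (t + j)**n
                               for j in range(k + 1))
                         for k in range(n + 1)))

# the examples quoted in the text
assert euler_poly(0) == 1
assert euler_poly(1) == t - sp.Rational(1, 2)
assert euler_poly(2) == t**2 - t

# derivative property E_n'(t) = n * E_{n-1}(t) (item (a) of Lemma 3)
for n in range(1, 7):
    assert sp.expand(sp.diff(euler_poly(n), t) - n * euler_poly(n - 1)) == 0
```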

These Euler polynomials have been extensively studied by many authors, mainly in the mathematical field of Number Theory. We refer the reader to [3, 6–8] and [17] (Chapter VI) for details concerning this subject. Some of their properties are mentioned in Sect. 3.2. These polynomials will be generalized in the present work to piecewise-polynomial functions (see Sects. 2.2 and 3.2).

1.5 Finite formulas expressing periodic solutions and final remarks

As previously described, this work deals with the study of the periodic solutions of piecewise-\({\fancyscript{C}}^{\infty }\) differential equations. In the following, it appears that, up to normalization, the periodic solutions of our order \(n\) equations are expressed in a very direct way in terms of the Euler polynomial of degree \(n\).

The interest of this formulation in terms of Euler polynomials is twofold. On the one hand, although Euler polynomials are well known and have been extensively studied by many mathematicians, this is the first time that they appear in non-smooth dynamical systems theory. On the other hand, they allow us to write our solutions explicitly, without any numerical approximation due to the presence of infinite series. In this sense, this approach is conceptually different from the classical one using Fourier series theory.

A closer look at this first result reveals the need to consider a new class of piecewise-\({\fancyscript{C}}^{\infty }\) periodic functions, the periodic switched Euler polynomials, which are themselves built from Euler polynomials. Using this new class of functions, a generalized class of differential equations arises naturally, for which the problem of finding and expressing their periodic solutions finitely is solvable.

Observe that we give a characterization of the periodic solutions either by means of an explicit formula (see Theorem 1) or by means of the initial conditions in order to compute a periodic solution by numerical integration (see Corollary 1).

The paper is organized in the following way. In Sect. 2, we introduce the periodic switched Euler polynomials and present the main results. We state and prove several lemmas in Sect. 3, and finally, we prove our main results in Sect. 4.

2 Main results

2.1 Basic free equation

We state and discuss the major results which will be proved in Sect. 4. We begin our study by proving that the basic Eq. (1) has periodic solutions and that these periodic solutions have an explicit form expressed in terms of the Euler polynomial of degree \(n\). Moreover, if the value at one point is fixed, then the periodic solution is unique.

Theorem 1

For all \(y_0\in \mathbb R \), there exists a unique \(2\,\tau \)-periodic solution of class \({\fancyscript{C}}^{n-1}\) of Eq. (1) such that \(y(0)=y_0\). Furthermore, for all \(t\in ]-\tau ,\tau [\), this solution is given by

$$\begin{aligned} y(t)={\frac{\eta \,\tau ^{n}}{n!} \, \left( \big ( \mathrm{sgn}(t) \big )^{n+1} \,E_n\left(|t\,\tau ^{-1}|\right) -E_n(0) \right) +y_0} \end{aligned}$$

where \(E_n\) is the Euler polynomial of degree \(n\).

It is indeed important to have a characterization of the solution in terms of Euler polynomials. They occur in Number Theory, and a lot is known about their behavior.
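The formula can also be checked mechanically. The sympy sketch below (with the same hand-rolled `euler_poly` helper as before; it is an illustration of the statement, not part of the proof) verifies, for small \(n\), that each branch of the expression of Theorem 1 satisfies Eq. (1) away from the switching set and that the two branches glue in a \({\fancyscript{C}}^{n-1}\) way at \(0\).

```python
import sympy as sp

t, tau, eta, y0 = sp.symbols('t tau eta y_0', real=True)

def euler_poly(n, x):
    return sp.expand(sum(sp.Rational(1, 2**k)
                         * sum((-1)**j * sp.binomial(k, j) * (x + j)**n
                               for j in range(k + 1))
                         for k in range(n + 1)))

for n in range(1, 6):
    En0 = euler_poly(n, sp.Integer(0))
    # Theorem 1 on ]0, tau[ (where sgn = +1) and on ]-tau, 0[ (where sgn = -1)
    y_plus  = eta * tau**n / sp.factorial(n) * (euler_poly(n, t / tau) - En0) + y0
    y_minus = eta * tau**n / sp.factorial(n) * ((-1)**(n + 1) * euler_poly(n, -t / tau) - En0) + y0

    # each branch satisfies y^(n) = +eta, resp. -eta, i.e. Eq. (1) away from the switching set
    assert sp.simplify(sp.diff(y_plus, t, n) - eta) == 0
    assert sp.simplify(sp.diff(y_minus, t, n) + eta) == 0

    # the branches share the same derivatives up to order n-1 at t = 0 (C^{n-1} gluing)
    for j in range(n):
        assert sp.simplify(sp.diff(y_plus - y_minus, t, j).subs(t, 0)) == 0
```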

Since the restrictions of the solutions to \([0,\tau [\) (resp. \(]-\tau ,0]\)) are uniquely determined by the values at \(0\) of \((y^{(i)}, i\in [0,n-1])\), we get (by means of a recursive application of item (a), Lemma 3) the following corollary.

Corollary 1

For all \(y_0\in \mathbb R \), there exists a unique \(2\,\tau \)-periodic solution of class \({\fancyscript{C}^{n-1}}\) of Eq. (1) such that \(y(0)=y_0\). Furthermore, this solution is such that

$$\begin{aligned} y(0)=y_0 \quad \text{ and, for all } i \in [1,n-1], \quad y^{(i)}(0)=\frac{\eta \,\tau ^{n-i}}{(n-i)!}\,E_{n-i}(0). \end{aligned}$$

This corollary might be useful if one wants to compute directly the periodic solution by means of numerical integration.
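For instance, the following scipy sketch (our own setup, with the arbitrary choice \(n=3\), \(\eta =\tau =1\), \(y_0=0\)) integrates Eq. (1) over one period starting from the initial data of Corollary 1, and checks that the state returns to its initial value, i.e. that the computed solution is indeed \(2\,\tau \)-periodic.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Eq. (1) with n = 3, eta = tau = 1:   y''' = sgn(sw(t))
eta, tau = 1.0, 1.0

def sgn_sw(t):
    # sgn(sw(t)) for tau = 1: +1 on ]2k, 2k+1[, -1 on ]2k+1, 2k+2[, 0 at the integers
    return 0.0 if t == np.floor(t) else (1.0 if np.floor(t) % 2 == 0 else -1.0)

def rhs(t, Y):
    # first order system for (y, y', y'')
    return [Y[1], Y[2], eta * sgn_sw(t)]

# initial data of Corollary 1 with y_0 = 0:  y'(0) = eta*tau^2/2! * E_2(0) = 0,
#                                            y''(0) = eta*tau/1! * E_1(0) = -eta*tau/2
Y0 = [0.0, 0.0, -eta * tau / 2.0]

sol = solve_ivp(rhs, (0.0, 2.0 * tau), Y0, max_step=1e-3, rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])     # approximately [0, 0, -0.5] again: the solution is 2*tau-periodic
```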

2.2 Periodic switched Euler polynomials

For all \(n\in \mathbb N \), let us introduce (see Sect. 3.2 for more details) the non-smooth function \(\psi _n\) such that, for all \(t\in \mathbb R \),

$$\begin{aligned} \psi _n(t)=\big (\mathop {\text{ sgn}}\nolimits (t)\big )^{n+1} \,E_n\left(|t|\right). \end{aligned}$$

The restriction of \(\psi _n\) to \([0,+\infty [\) (resp. \(]-\infty ,0]\)) is a polynomial function. We call \(\psi _n\) a switched Euler polynomial of degree \(n\). The graphs of the restrictions to \([-1,1]\) of \(\psi _n\) for values of \(n\) up to \(5\) are represented in Fig. 2. Observe that \(\psi _0=\mathop {\text{ sgn}}\nolimits \) and provided \(n>0, \psi _n\) is \({\fancyscript{C}}^{n-1}\) at \(0\) (see Lemma 5), but not \({\fancyscript{C}}^{n}\) at \(0\). Our aim is to construct from \(\psi _n\) a \(2\)-periodic piecewise-\({\fancyscript{C}}^{\infty }\) function with maximal regularity.

Fig. 2 Graphs of \(\psi _0, \psi _1, \psi _2, \psi _3, \psi _4\) and \(\psi _5\) on \([-1,1]\)

From Lemma 8, provided \(\tau =1\) and \(n>0\), the periodic solution of Eq. (1) vanishing at \(0\), expressed in Theorem 1, is given exactly, for all \(t\in \mathbb R \setminus O(\mathbb Z )\), by

$$\begin{aligned} y(t)={ \frac{\eta \,}{n!} \Big ( \psi _n\big (\mathop {\text{ sw}}\nolimits \left(t\,\right)\big ) -E_n(0) \Big ) } . \end{aligned}$$

The function \(\psi _n\circ \,\mathop {\text{ sw}}\nolimits \) presents potential discontinuities at the points of \(O(\mathbb Z )\) since \(\mathop {\text{ sw}}\nolimits \) is involved. But, by Lemma 8, there exists a unique function \(\psi _n^{*}\) which is \({\fancyscript{C}}^{n-1}\) at the points of \(\mathbb Z \), \({\fancyscript{C}}^\infty \) everywhere else, and coincides with \(\psi _n\circ \,\mathop {\text{ sw}}\nolimits \) on \(\mathbb R \setminus O(\mathbb Z )\).

We call these functions \(\psi _n^{*}\) periodic switched Euler polynomials; they will play a central role in our study. Note that we also set, by convention, \(\psi _0^{*}=\psi _0\circ \,\mathop {\text{ sw}}\nolimits \) (which remains discontinuous at the points of \(\mathbb Z \)).
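A possible numerical realization (the helpers below are ours; the Euler polynomials are written out by hand up to degree \(3\)) is to evaluate \(\psi _n\circ \mathop {\text{ sw}}\nolimits \) through the formulas above, which represents \(\psi _n^{*}\) away from the odd integers; the short check at the end confirms \(2\)-periodicity at a few sample points.

```python
import numpy as np

# Euler polynomials E_0, ..., E_3, written out from the definition of Sect. 1.4
E = [lambda s: 1.0,
     lambda s: s - 0.5,
     lambda s: s**2 - s,
     lambda s: s**3 - 1.5 * s**2 + 0.25]

def sw(t):
    frac = lambda x: x - np.floor(x)
    return np.sign(t) * (2.0 * frac((abs(t) + 1.0) / 2.0) - 1.0)

def psi(n, s):
    """Switched Euler polynomial psi_n(s) = (sgn s)^(n+1) E_n(|s|)."""
    return float(np.sign(s))**(n + 1) * E[n](abs(s))

def psi_star(n, t):
    """Periodic switched Euler polynomial psi_n^* = psi_n o sw, valid off the odd integers."""
    return psi(n, sw(t))

# psi_n^* is 2-periodic (sample points chosen away from the odd integers)
for n in (1, 2, 3):
    for t in (-0.9, -0.3, 0.0, 0.4, 0.8):
        assert abs(psi_star(n, t) - psi_star(n, t + 2.0)) < 1e-12
```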

2.3 Forced equations

Because of their physical meaning, it is very common to force a free differential equation by means of a periodic deformation appearing in the right hand side (see for instance [21]). Indeed, our next goal is the study of some deformations of our basic equation of arbitrary order (1). Furthermore, we assume only that these deformations are piecewise-\({\fancyscript{C}}^\infty \). So we consider equations of the form:

$$\begin{aligned} y^{(n)}-\eta \, \mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits \left(t\,\tau ^{-1}\right)\big ) = F(t) \end{aligned}$$
(2)

where \(n\in \mathbb N , n> 0, F : \mathbb R \rightarrow \mathbb R \) is piecewise-\({\fancyscript{C}}^\infty , 2\,\tau _{F}\)-periodic (\(\tau _{F}\in \mathbb R , \tau _{F}>0\)) with null average value.

We define the \(n\)-primitive of \(F\) (vanishing at \(0\)) in the following way:

$$\begin{aligned} \left\{ \begin{array}{l} \varTheta _{F}^{[1]}(t)={ \int _{0}^t F(u)\, \text{ d}u }\\ \varTheta _{F}^{[n]}(t)={ \int _{0}^t \varTheta _{F}^{[n-1]}(u) \, \text{ d}u} \quad \text{ if } n > 1 \ . \end{array}\right. \end{aligned}$$

Note that a non-recursive expression of \(\varTheta _{F}^{[n]}\) is:

$$\begin{aligned} \varTheta _{F}^{[n]}(t)={ \int \limits _{0}^t \left( \int \limits _0^{u_{n}} \left( \int \limits _0^{u_{n-1}} \ldots \left(\int \limits _0^{u_2} F(u_{1})\, du_1 \right) \ldots du_{n-2} \right) du_{n-1} \right) du_{n} } \end{aligned}$$

and that the \(n\)th derivative of \(\varTheta _{F}^{[n]}\) is exactly \(F\).

Let us recall that we assume that \(F\) has null average value, so that \(\varTheta _{F}^{[n]}\) remains periodic (see Lemma 9).
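The recursive definition translates directly into a few lines of sympy (a sketch with our own helper `theta`, illustrated on the zero-average forcing \(F(t)=\cos (\pi \,t)\)); in this example the first two primitives are again periodic.

```python
import sympy as sp

t, u = sp.symbols('t u', real=True)

def theta(F, n):
    """n-primitive of F vanishing at 0, following the recursive definition above."""
    expr = F
    for _ in range(n):
        expr = sp.integrate(expr.subs(t, u), (u, 0, t))
    return sp.simplify(expr)

F = sp.cos(sp.pi * t)                      # 2-periodic with null average value
print(theta(F, 1))                         # sin(pi*t)/pi
print(theta(F, 2))                         # (1 - cos(pi*t))/pi**2
assert sp.simplify(sp.diff(theta(F, 2), t, 2) - F) == 0    # its 2nd derivative is F again
```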

The following result states that the forced equation has periodic solutions provided that the half-period \(\tau _F\) of \(F\) and \(\tau \) are commensurable, that is, rationally dependent.

Theorem 2

Equation (2) has periodic solutions if and only if the half-periods \(\tau _F\) of \(F\) and \(\tau \) satisfy \(\tau _{F}^{-1}\,\tau \in \mathbb N \).

In that case, for all \(y_0\in \mathbb R \), Eq. (2) has a unique periodic solution of class \({\fancyscript{C}}^{n-1}\) such that \(y(0)=y_0\). Furthermore, this solution is given, for all \(t\in \mathbb R \), by

$$\begin{aligned} y(t)= { \frac{\eta \,\tau ^{n}}{n!}\, \Big ( \psi _n^{*}\left(t\,\tau ^{-1}\right) -E_n(0) \Big )+\varTheta _{F}^{[n]}(t)+y_0} \ . \end{aligned}$$

A consequence is the existence of a countable family of periodic solutions vanishing at \(0\). Observe that the only required condition is that \(\tau _F^{-1}\,\tau \in \mathbb N \), which is completely different from the case studied in [14], where such a condition is necessary but not sufficient.

Let us denote by \(\mathbf{i }\) the complex number such that \(\mathbf{i }^2=-1\). In the case where \(F\) has a finite Fourier series, we get an explicit and computable formula for the solution, as described in the next corollary:

Corollary 2

Provided that \(F\) has a finite Fourier series of the form

$$\begin{aligned} F(t)= { \sum _{0<|k|\le m} a_k \,e^{\mathbf{i }\,k\,\pi \,\beta \,t} } \quad \mathrm{with }\,m\in \mathbb N , \beta \in \mathbb R , \quad \mathrm{and } \quad \forall k \ | \ 0<|k|\le m, \ a_k \in \mathbb C , \end{aligned}$$

and that \(\beta \,\tau \in \mathbb N \), then for all \(y_0\in \mathbb R \), Eq. (2) has a unique periodic solution of class \({\fancyscript{C}}^{n-1}\) such that \(y(0)=y_0\). Furthermore, this solution is given, for all \(t\in \mathbb R \), by

$$\begin{aligned} y(t)= { \frac{\eta \,\tau ^{n}}{n!} \Big ( \psi _n^{*}\left(t\,\tau ^{-1}\right) -E_n(0) \Big ) } +{ \frac{1}{\beta ^n\,\mathbf{i }^n\,\pi ^n}\sum _{0<|k|\le m} \frac{a_k}{k^n}\, \left( e^{\mathbf{i }\,k\,\pi \,\beta \,t} -1 \right) } +y_0. \end{aligned}$$

We illustrate in Figs. 6, 7, and 8 the periodic solutions of \(y^{\prime \prime }=\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big )+\alpha \,\cos (\pi \,\beta \,t)\) vanishing at \(0\) and their corresponding phase portraits, for different values of \((\alpha ,\beta )\).
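The closed form is easy to evaluate. Below is a numerical sketch for the case of Fig. 7, i.e. \(\eta =\tau =1\), \(y_0=0\), \((\alpha ,\beta )=(\frac{1}{3},\frac{1}{4})\) (all helper names are ours): it adds the switched part of Theorem 1 to a periodic \(2\)-primitive of the forcing, then checks by finite differences that the result satisfies the differential equation away from the integers and is \(8\)-periodic.

```python
import numpy as np

alpha, beta = 1.0 / 3.0, 0.25           # the (alpha, beta) of Fig. 7; eta = tau = 1, y_0 = 0

def sw(t):
    frac = lambda x: x - np.floor(x)
    return np.sign(t) * (2.0 * frac((np.abs(t) + 1.0) / 2.0) - 1.0)

def y(t):
    """Candidate periodic solution of y'' = sgn(sw(t)) + alpha*cos(pi*beta*t) with y(0) = 0."""
    s = sw(t)
    switched = 0.5 * np.sign(s) * (s**2 - np.abs(s))                 # (1/2)(psi_2^*(t) - E_2(0))
    forced = alpha * (1.0 - np.cos(np.pi * beta * t)) / (np.pi * beta) ** 2
    return switched + forced

ts = np.array([0.25, 0.7, 1.3, 2.6, -1.4, 3.75])
h = 1e-4
second_diff = (y(ts + h) - 2.0 * y(ts) + y(ts - h)) / h**2
rhs = np.sign(sw(ts)) + alpha * np.cos(np.pi * beta * ts)
assert np.allclose(second_diff, rhs, atol=1e-5)                      # solves the equation off Z
assert abs(y(0.0)) < 1e-12 and np.allclose(y(ts), y(ts + 8.0))       # vanishes at 0, 8-periodic
```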

2.4 Equations involving periodic switched Euler polynomials

The solutions of Eq. (2) are built in a simple manner from the Euler polynomials. On the other hand, Euler polynomials satisfy very particular differential properties: indeed, the set of Euler polynomials is closed under differentiation up to scalar factors. This leads us to consider a distinguished class of non-smooth differential equations for which the problem of finding periodic solutions is solvable.

Let us study now differential equations of the form:

$$\begin{aligned} y^{(n)}-\eta \,\psi _k^{*}\big (t\,\tau ^{-1}\big ) = F(t) \end{aligned}$$
(3)

where \(F\) is \(2\,\tau _F\)-periodic, with null average value.

Observe that the case \(F=0\) and \(k=0\) corresponds to Eq. (1), already treated in Theorem 1. The following theorem generalizes this previous result.

Theorem 3

Provided \(\tau _F^{-1}\,\tau \in \mathbb N \), for all \(y_0\in \mathbb R \), Eq. (3) has a unique periodic solution of class \({\fancyscript{C}^{n+k-1}}\) such that \(y(0)=y_0\). Furthermore, this solution is given, for all \(t\in \mathbb R \), by

$$\begin{aligned} y(t)= { \frac{\eta \,\tau ^{n}\,k!}{(n+k)!}\, \Big ( \psi _{n+k}^{*}\left(t\,\tau ^{-1}\right) -E_{n+k}(0) \Big ) } +\varTheta _{F}^{[n]}(t)+y_0. \end{aligned}$$

Some examples are represented in Figs. 3 and 4.

Fig. 3 The \(8\)-periodic solution of \(y^{\prime \prime }=\psi _1^{*}(t) +\frac{1}{20}\,\cos (\frac{\pi }{4}\,t)\) vanishing at \(0\) and its corresponding phase portrait

Fig. 4 The \(8\)-periodic solution of \(y^{\prime \prime }=\psi _2^{*}(t) +\frac{1}{20}\,\cos (\frac{\pi }{4}\,t)\) vanishing at \(0\) and its corresponding phase portrait

Fig. 5 The \(2\)-periodic solution of \(y^{\prime \prime }-\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big ) =0\) vanishing at \(0\) and its corresponding phase portrait

Observe that the functions \(F\) such that \(F(t)=\psi _l^{*}\big (t\,\tau ^{-1}\big ), l\in \mathbb N \), are themselves piecewise-\({\fancyscript{C}}^{\infty }\) periodic. Moreover, the average value of a periodic switched Euler polynomial is zero (see Lemma 10). So we can consider differential equations which involve exclusively periodic switched Euler polynomials:

$$\begin{aligned} y^{(n)}=\sum _{i=k_0}^{k_1}\, a_{i} \,\psi _{i}^{*}\big (t\,\tau ^{-1}\big ). \end{aligned}$$
(4)

Moreover, the \(n\)-primitive of a switched Euler polynomial is itself a switched Euler polynomial (see Lemma 8). By induction on the number of switched polynomials, we get:

Corollary 3

Equation (4) has a unique periodic solution of class \({\fancyscript{C}^{n+k_0-1}}\) such that \(y(0)\!=\!0\), given by:

$$\begin{aligned} y(t)={ \sum _{i=k_0}^{k_1} \frac{a_{i}\,{i}!\,\tau ^{n}}{(n+i)!}\Big (\psi _{n+i}^{*} \left(t\,\tau ^{-1}\right) -E_{n+i}(0) \Big ) }. \end{aligned}$$
Fig. 6 The \(2\)-periodic solution of \(y^{\prime \prime }=\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big )+\cos ({2\,\pi }\,t)\) vanishing at \(0\) and its corresponding phase portrait

2.5 The particular case of a second order equation

Second order equations are the most frequent in applications, due to their physical meaning. In that case, it is also possible to draw phase portraits, which are very useful for a complete understanding of the periodic solutions.

Let us consider a \(2\,\tau _{F}\)-periodic function \(F\), of null average value, and the following equation:

$$\begin{aligned} y^{\prime \prime }-\eta \, \mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t\,\tau ^{-1})\big ) = F(t) \ \text{ with} \, \eta \in \mathbb R . \end{aligned}$$
(5)

By Theorem 2, Eq. (5) has periodic solutions if and only if the half-periods \(\tau _F\) of \(F\) and \(\tau \) are such that \(\tau _{F}^{-1}\,\tau \in \mathbb N \). Then, for all \(y_0\in \mathbb R \), Eq. (5) has a unique periodic solution \(y\) such that \(y(0)=y_0\). It is defined outside \(\tau \,O(\mathbb Z )\) by

$$\begin{aligned} y(t)={ \frac{\eta \,\tau ^2}{2}\, \mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits \left(t \, \tau ^{-1}\right)\big ) \,\left(\big (\mathop {\text{ sw}}\nolimits \left(t \, \tau ^{-1}\right)\big )^2 - \big |\mathop {\text{ sw}}\nolimits \left(t \, \tau ^{-1}\right)\big | \right) +\varTheta _F^{[2]}(t)+y_0} . \end{aligned}$$
Fig. 7 The \(8\)-periodic solution of \(y^{\prime \prime }=\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big )+\frac{1}{3}\,\cos (\frac{\pi }{4}\,t)\) vanishing at \(0\) and its corresponding phase portrait

Fig. 8 The \(8\)-periodic solution of \(y^{\prime \prime }=\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big )+\frac{1}{8}\,\cos (\frac{\pi }{4}\,t)\) vanishing at \(0\) and its corresponding phase portrait

Once \(y(0)\) is fixed, this unique solution is also determined by the initial conditions:

$$\begin{aligned} { \left(y(0),y^{\prime }(0)\right)=\left(y_0,-\,\frac{\eta \,\tau }{2}\right)}. \end{aligned}$$

The solutions of different equations are illustrated in Figs. 567 and 8.

If \(F=0, y_0=0\), the solution of Eq. (5) and its derivative are given for all \(t\in ]-\tau ,\tau [\) respectively, by:

$$\begin{aligned} y(t)=\frac{\varepsilon \,\eta \,\tau ^2}{2}\left(\left(t\,\tau ^{-1}\right)^2 -\varepsilon \,t\,\tau ^{-1}\right) \ \ \text{ and} \ \ y^{\prime }(t)=\frac{\varepsilon \,\eta \,\tau }{2}\left(2\,t\,\tau ^{-1}-\varepsilon \right) \ , \text{ where} \,\varepsilon =\mathop {\text{ sgn}}\nolimits (t). \end{aligned}$$

Provided that \(\varepsilon \ne 0\), \(t\) can be eliminated from the second equation, and we get

$$\begin{aligned} y(t)=\frac{4\,\big (y^{\prime }(t)\big )^2-\eta ^2\,\tau ^2}{8\,\varepsilon \,\eta }, \end{aligned}$$

which is another description of the periodic solution of Eq. (5) (when \(F=0\)) such that \(y(0)=0\). This implies that \((y,y^{\prime })\) belongs to the semi-algebraic set

$$\begin{aligned} 8\,|\eta |\, |y|=\big |4\,\big (y^{\prime }\big )^2-\eta ^{2}\,\tau ^{2}\big |. \end{aligned}$$
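A quick numerical sanity check of this relation, for the free case \(F=0\), \(y_0=0\), with arbitrary sample values of \(\eta \) and \(\tau \) (our choice), can be done as follows; the derivative is approximated by a central difference so that only the expression of \(y\) itself is needed.

```python
import numpy as np

eta, tau = 2.0, 1.5          # arbitrary sample values

def y_free(t):
    """Periodic solution of y'' = eta*sgn(sw(t/tau)) with y(0) = 0, written on ]-tau, tau[."""
    s = t / tau
    eps = np.sign(s)
    return eps * eta * tau**2 / 2.0 * (s**2 - eps * s)

ts = np.linspace(-0.95 * tau, 0.95 * tau, 41)
ts = ts[np.abs(ts) > 0.05]                              # stay away from the kink at t = 0
h = 1e-5
yp = (y_free(ts + h) - y_free(ts - h)) / (2.0 * h)      # numerical derivative y'

lhs = 8.0 * abs(eta) * np.abs(y_free(ts))
rhs = np.abs(4.0 * yp**2 - eta**2 * tau**2)
assert np.allclose(lhs, rhs, atol=1e-6)
```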

3 Conditions for the existence of periodic solutions

In this section, we present several lemmas which will be used for proving our main results.

3.1 Reduction and resolution of the basic equation

As said in the introduction, we start the discussion from Eq. (1), and we search for the solutions on the interval \(]-\tau ,\tau [\). We reformulate the problem by means of the following lemma.

Lemma 1

  1. (a)

    \(y\) is a solution of Eq. (1) if and only if \(z\) defined by

    $$\begin{aligned} z(t)=\frac{n!}{\eta \,\tau ^n}y\big ({t}\,{\tau }\big ) \end{aligned}$$
    (6)

    is a solution of

    $$\begin{aligned} z^{(n)}-n!\, \mathrm{sgn}\big (\mathrm{sw}(t)\big ) =0. \end{aligned}$$
    (7)
  2. (b)

    If \(y\) is a \(2\,\tau \)-periodic \({\fancyscript{C}}^{n-1}\) solution of Eq. (1) vanishing at \(0\), then \(z\) defined by formula (6) is a \({\fancyscript{C}}^{n-1}\) solution of the boundary problem

    $$\begin{aligned} \left\{ \begin{array}{l} \forall t \in [-1,1], \ \ z^{(n)}(t)-n!\,\mathrm{sgn}(t)=0 \\ z(0)=0 \\ \mathrm{for\,\, all\,\, integer} \ i\in [0,n-1], \ \ z^{(i)}(1)=z^{(i)}(-1). \end{array}\right. \end{aligned}$$
    (8)

Proof

  1. (a)

    Straightforward from the computation of \(z^{(n)}\).

  2. (b)

    By item (a), Eq. (1) is equivalent to Eq. (7), which simplifies on \(]-1,1[\) to:

    $$\begin{aligned} z^{(n)}-n!\,\mathop {\text{ sgn}}\nolimits (t)=0. \end{aligned}$$
    (9)

    It is straightforward that if \(y\) is \(2\,\tau \)-periodic, then \(z\) is 2-periodic. So for all \(i\in [0,n\!-1], z^{(i)}(1)=z^{(i)}(-1+2)=z^{(i)}(-1)\).

\(\square \)

Observe that no solution of (9) can be \({\fancyscript{C}}^{n}\) at the origin. We will restrict ourselves to solutions which are \({\fancyscript{C}}^{n-1}\) everywhere, that is, of maximal regularity. In the following lemma, we establish the existence and uniqueness of such a solution of Eq. (9) vanishing at the origin and satisfying the boundary constraints.

Lemma 2

Let us consider Eq. (9) on \([-1,1]\). There exists a unique solution \(z\) of Eq. (9) such that

  • \(z(0)=0\);

  • \(z\) is \({\fancyscript{C}}^{n-1}\) on \([-1,1]\);

  • for all \(i\in [0,n-1], z^{(i)}(1)=z^{(i)}(-1)\).

Observe that outside \(0\) any solution of Eq. (9) is \({\fancyscript{C}}^{\infty }\).

Proof

We search for solutions of Eq. (9) vanishing at the origin.

Let \(z_{+}\) (resp. \(z_{-}\)) be a solution on \([0,1]\) (resp. \([-1,0]\)) vanishing at \(0\) of the equation \(z^{(n)}=n!\) (resp. \(z^{(n)}=-n!\)). It is immediate that both \(z_{+}\) and \(z_{-}\) are polynomials of degree \(n\).

We wish to construct a solution \(z\) on \([-1,1]\) such that:

$$\begin{aligned} z(t)= \left\{ \begin{array}{l} z_{+}(t) \text{ if} \,t \in ]0,1]\\ 0 \text{ if} \, t=0\\ z_{-}(t) \text{ if} \, t \in [-1,0[ \end{array} \right. \end{aligned}$$

The solution \(z_{+}\) (resp. \(z_{-}\)) is right (resp. left) differentiable at any order at \(0\). To get a \({\fancyscript{C}}^{n-1}\) solution at \(0\), we must have \(z_{+}^{(i)}(0)=z_{-}^{(i)}(0)\) for all \(i\in [0,n-1]\). This implies that the coefficients of \(z_{+}\) and \(z_{-}\) are the same, except the leading coefficients, which are \(1\) in the case of \(z_{+}\) and \(-1\) in the case of \(z_{-}\).

So a solution of Eq. (9) vanishing at \(0\) has the general form:

$$\begin{aligned} { z(t)=\mathop {\text{ sgn}}\nolimits (t)\,t^n+ \sum _{i=0}^{n-1} a_{i}\,t^{i}} \ \text{ with} \ (a_{i}, i\in [0,n-1]) \in \mathbb R ^{n} \ \text{ and} \, a_0=0 . \end{aligned}$$

Hence, for all \(t\ne 0\), for all \(j\in [0,n-1]\), the \(j\)-th derivative of \(z\) at \(t\) is:

$$\begin{aligned} { z^{(j)}(t)=\mathop {\text{ sgn}}\nolimits (t)\,\frac{n!}{(n-j)!}\, t^{n-j}+ \sum _{i=j}^{n-1} \frac{i!}{(i-j)!}\, a_{i}\,t^{i-j} . } \end{aligned}$$

Furthermore, for all \(j\in [0,n-1]\), we impose the condition \(z^{(j)}(1)=z^{(j)}(-1)\) which corresponds to the equation \(J(j)=0\) where:

$$\begin{aligned} J(j) = \frac{n!}{(n-j)!}\,\big (1+(-1)^{n-j}\big ) + \sum _{i=j}^{n-1} \frac{i!}{(i-j)!}\,\big (1-(-1)^{i-j}\big )\,a_{i}. \end{aligned}$$

Observe that \(J(n-1)=0\). Moreover, when \(i=j\), we have \((1-(-1)^{i-j})=0\). Hence, for all \(j\in [0,n-2], J(j)\) reduces to:

$$\begin{aligned} J(j) = \frac{n!}{(n-j)!}\,\big (1+(-1)^{n-j}\big ) + \sum _{i=j+1}^{n-1} \frac{i!}{(i-j)!}\,\big (1-(-1)^{i-j}\big )\,a_{i}. \end{aligned}$$

Observe that for all \(j\in [0,n-2], J(j)\) involves \((a_i, i\in [j+1,n-1])\) and the coefficient of \(a_{j+1}\) in \(J(j)\) is \(2\,(j+1)! \ne 0\).

Assume first that \(n\) is even, \(n\ge 2\). Let us solve the system:

$$\begin{aligned} \left\{ J(j)=0 ,\ \text{ for} \text{ all} \, j \in [0, n-2] \right\} \end{aligned}$$
(10)

with respect to the unknowns \((a_i, i\in [1,n-1])\).

Let us collect the equations of the linear system (10) as follows:

  • we consider the subsystem of all equations corresponding to even indices of derivation:

    $$\begin{aligned} \left\{ J(2j)=0 ,\ \text{ for} \text{ all} \, j \in \left[0, \frac{n}{2}-1\right] \right\} . \end{aligned}$$
    (11)

    If one writes

    $$\begin{aligned} \tilde{J}(j) = \sum _{i=j+1}^{n-1} \frac{i!}{(i-j)!}\,\big (1-(-1)^{i-j}\big )\,a_{i}, \end{aligned}$$

    system (11) is equivalent to

    $$\begin{aligned} \left\{ \begin{array}{l} \tilde{J}(0)=-2\\ \tilde{J}(2)= -2\,n\,(n-1)\\ \ldots \\ { \tilde{J}(2j) = -\frac{2\,n!}{(n-2j)!}} \text{ for} \text{ all} \,0 \le j \le \frac{n}{2}-1\\ \ldots \\ \tilde{J}(n-2) =-{n!} \end{array}\right. \end{aligned}$$
    (12)

    For all \(j\in [0,\frac{n}{2}-1]\), the coefficient of the variable \(a_{2j+1}\) present in \(\tilde{J}(2j)\) is \(2\,(2j+1)! \ne 0\). The right hand side of (12) is nonzero. So (12) is a triangular system of \(\frac{n}{2}\) equations in \(\frac{n}{2}\) unknowns. Since it is non-singular and non-homogeneous, it has a unique non-trivial solution with respect to the unknowns \((a_{2j+1},{j\in [0,\frac{n}{2}-1]})\). The conclusion is the same for (11).

  • we consider the subsystem of all equations corresponding to odd indices of derivation:

    $$\begin{aligned} \left\{ J(2j+1)=0 ,\ \text{ for all } j \in \Big [0,\frac{n}{2}-2\Big ] \right\} . \end{aligned}$$
    (13)

    System (13) is also a triangular system, of \(\frac{n}{2}-1\) equations, and \(\frac{n}{2}-1\) unknowns. This system is also non-singular (for all \(j\in [1,\frac{n}{2}-1]\), the coefficient of \(a_{2j}\) in the equation \(J(2j-1)=0\) is \(2\,(2j)! \ne 0\)), but its right hand side is trivial. So it has a unique trivial solution with respect to the unknowns \((a_{2j}, {j\in [1,\frac{n}{2}-1]})\).

Since (10) is equivalent to (11) and (13), it has a unique solution. Moreover, all the even coefficients are zero. Note that the polynomial part of degree \(n-1\) of the solution \(z\) is an odd polynomial.

The case \(n\) odd is similar. The subsystem corresponding to even derivatives has a unique trivial solution, while the subsystem corresponding to odd derivatives has a unique non-trivial one. In that case, the polynomial part of degree \(n-1\) of the solution \(z\) is an even polynomial. \(\square \)
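The construction in this proof can also be reproduced symbolically. The sympy sketch below (our own code, for the even value \(n=6\)) builds the equations \(J(j)=0\), solves the resulting linear system for \((a_1,\ldots ,a_{n-1})\), and checks that the branch \(t^n+\sum a_i\,t^i\) of the solution \(z\) coincides on \(]0,1[\) with \(E_n(t)-E_n(0)\), in agreement with Lemma 7 below.

```python
import sympy as sp

t = sp.symbols('t')
n = 6                                      # an even example
a = sp.symbols(f'a1:{n}')                  # the unknowns a_1, ..., a_{n-1}  (a_0 = 0)

def J(j):
    # J(j) in the reduced form valid for j <= n-2 (the i = j term vanishes)
    return (sp.factorial(n) / sp.factorial(n - j) * (1 + (-1)**(n - j))
            + sum(sp.factorial(i) / sp.factorial(i - j) * (1 - (-1)**(i - j)) * a[i - 1]
                  for i in range(j + 1, n)))

sol = sp.solve([J(j) for j in range(n - 1)], a, dict=True)[0]

# the positive branch of z, compared with E_n(t) - E_n(0)
z_plus = t**n + sum(sol.get(a[i - 1], 0) * t**i for i in range(1, n))
E_n = sp.expand(sum(sp.Rational(1, 2**k)
                    * sum((-1)**j * sp.binomial(k, j) * (t + j)**n for j in range(k + 1))
                    for k in range(n + 1)))
assert sp.expand(z_plus - (E_n - E_n.subs(t, 0))) == 0
```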

3.2 Euler polynomials and periodic switched Euler polynomials

Let us recall some well-known properties of the Euler polynomials \(E_n\), already defined in Sect. 1.4.

Lemma 3

For all \(t\in \mathbb R \), for all \(n\in \mathbb N \):

  1. (a)

    \(E_n^{\prime }(t)=n\,E_{n-1}(t), \ n\ge 1\)

  2. (b)

    \(E_n(1-t)=(-1)^n\, E_n(t), n\ge 0\)

  3. (c)

    \(E_n(1+t)+E_n(t)=2\,t^n, n\ge 0\)

  4. (d)

    \((-1)^n\,E_n(-t)=2\,t^n-E_n(t), n\ge 0\).

Proof

Refer to [19] for (c), and to [3] for (a), (b).

Observe that (d) is a direct consequence of (b) (changing \(t\) into \(-t\)) and (c). \(\square \)

This implies that Euler polynomials corresponding to even indices vanish at both \(0\) and \(1\):

Lemma 4

If \(n\ge 1\) then \(E_{2n}(1)=E_{2n}(0)=0\).

Proof

By item (d), Lemma 3, \((-1)^{2n}\,E_{2n}(0)=-E_{2n}(0)\). So \(E_{2n}(0)=-E_{2n}(0)\), and \(E_{2n}(0)=0\). By item (b), Lemma 3, \(E_{2n}(1)=(-1)^{2n}\, E_{2n}(0)\), which implies \(E_{2n}(1)=0\). \(\square \)

We investigate now some properties of a piecewise-polynomial function built from the Euler polynomial. For all \(n\in \mathbb N \), let \(\psi _n\) be defined by

$$\begin{aligned} \psi _n(t)= \left\{ \begin{array}{l} E_n(t) \ \text{ if} \ t > 0 \\ 0 \ \text{ if} \ t = 0\\ (-1)^{n+1} E_n(-t) \ \text{ if} \ t < 0 . \end{array}\right. \end{aligned}$$

Observe that \(\psi _0=\mathop {\text{ sgn}}\nolimits \).

Lemma 5

For all \(n\in \mathbb N , n>0, \psi _n\) is \({\fancyscript{C}}^{n-1}\) at \(0, {\fancyscript{C}}^{\infty }\) everywhere else.

Proof

Let us call \((\psi _{n})_{+}\) (resp. \((\psi _n)_{-}\)) the restriction of \(\psi _n\) to \(]0,+\infty [\) (resp. \(]-\infty ,0[\)). The function \((\psi _n)_{+}\) (resp. \((\psi _n)_{-}\)) is right (resp. left) differentiable at any order at \(0\). And \(\psi _n\) is a \({\fancyscript{C}}^{n-1}\) function at \(0\) if and only if, for all \(j\in [0,n-1], (\psi _n)_{+}^{(j)}(0)=(\psi _n)_{-}^{(j)}(0)\).

For \(j=0\), we have \(E_n(0)\) on the right and \((-1)^{n+1}\,E_n(0)\) on the left. These quantities are equal if and only if \(E_n(0) \, (1+ (-1)^n)=0\). This is trivial if \(n\) is odd, and results from Lemma 4 if \(n\) is even.

For all \(j\in [1,n\!-\!1]\), for \(t\!<\!0\), the derivation chain rule gives \(\psi _{n}^{(j)}(t)=(-1)^{n+1+j}\, E_n^{(j)}(-t) \) . So,

$$\begin{aligned} \psi _{n}^{(j)}(0^{-})\!=\!(-1)^{n+1+j}\, E_n^{(j)}(0^{+})=(-1)^{n+1+j}\, E_n^{(j)}(0). \end{aligned}$$

But, for all \(j\in [1,n-1]\), we have

$$\begin{aligned} E_n^{(j)}(t)=\frac{n!}{(n-j)!}\,E_{n-j}(t). \end{aligned}$$

Hence, for all \(j\in [1,n-1], (\psi _n)_{-}^{(j)}(0^{-})=(\psi _n)_{+}^{(j)}(0^{+}) \) if and only if

$$\begin{aligned} (-1)^{n+1+j}\,\frac{n!}{(n-j)!}\,E_{n-j}(0)=\frac{n!}{(n-j)!}\,E_{n-j}(0) \end{aligned}$$

which leads to

$$\begin{aligned} \left(1+\left(-1\right)^{n+j}\right)\,E_{n-j}\left(0\right)=0. \end{aligned}$$
(14)

Condition (14) is trivially satisfied if \(n\) and \(j\) have different parity. And if \(n\) and \(j\) have the same parity, \(n-j\) is even with \(n-j\ge 1\). So \(n-j\ge 2\), and by Lemma 4, \(E_{n-j}(0)=0\). For all \(j\in [1,n-1]\), condition (14) is satisfied. Hence, \(\psi _n\) is \({\fancyscript{C}}^{n-1}\) at \(0\). It is \({\fancyscript{C}}^{\infty }\) outside \(0\) since it coincides on an open interval with a polynomial function. \(\square \)

Lemma 6

For all \(n\ge 1\), for all \(j\in [0,n-1], \psi _n^{(j)}(1)=\psi _n^{(j)}(-1)\).

Proof

If \(j=0, \psi _n(1)=\psi _n(-1)\) if and only if \(E_{n}(1)=(-1)^{n+1}\,E_{n}(1)\). If \(n\) is odd, this relation holds. If \(n\) is even, \(n\ge 2\), by Lemma 4, \(E_{n}(1)=0\) and the above relation still holds.

As in Lemma 5, we derive that for all \(j\in [1,n-1], \psi _n^{(j)}(1)=\psi _n^{(j)}(-1)\) if and only if

$$\begin{aligned} \frac{n!}{(n-j)!}\,E_{n-j}(1)=(-1)^{n+1+j}\,\frac{n!}{(n-j)!}\,E_{n-j}(1) \end{aligned}$$

which leads to \( \left(1+ \left(-1\right)^{n+j}\right)\, E_{n-j}\left(1\right)=0 \ . \) If \(n\) and \(j\) have the same parity, we use the equality \(E_{n-j}(1)=0\) established in Lemma 4. \(\square \)

In the following lemma, we show that the Euler polynomial \(E_n\) is involved in the periodic solutions of Eq. (1).

Lemma 7

Let \(y\) be a periodic solution of Eq. (1), of period \(2\,\tau , \tau \in \mathbb R , \tau > 0\), such that \(y(0)=0\). Then, for all \(t\in [-\tau ,\tau ]\)

$$\begin{aligned} y(t)={ \frac{\eta \,\tau ^{n}}{n!} \,\left( \big (\mathop {\text{ sgn}}\nolimits \left(t\right)\big )^{n+1} E_n\left(|t\,\tau ^{-1}|\right)-E_n\left(0\right) \right)} . \end{aligned}$$

Proof

We start by using Lemma 1, item (a), and passing from Eq. (1) to Eq. (7).

For \(n\in \mathbb N , n>0\), let us consider \(\psi _n\). By Lemma 5, \(\psi _n\) is \({\fancyscript{C}}^{\infty }\) outside \(0\) and \({\fancyscript{C}}^{n-1}\) at \(0\). For all \(t > 0\), we can apply recursively rule (a), Lemma 3, and we get \(\psi _n^{(n)}(t)=E^{(n)}_n(t)=n!\, E_{0}(t)=n!\), while for all \(t<0\), the chain rule gives \(\psi _n^{(n)}(t)=(-1)^{n+1}\,(-1)^{n}\,E^{(n)}_n(-t)=(-1)^{2n+1}\, n!=-n!\). So for all \(t\ne 0, \psi _n^{(n)}(t)=\mathop {\text{ sgn}}\nolimits (t)\, n!\).

By Lemma 6, for all \(j\in [0,n-1], \psi _n\) satisfies \(\psi _n^{(j)}(1)=\psi _n^{(j)}(-1)\).

Let us denote \(u=\psi _n-E_n(0)\). We have \(u(0)=0\), and \(u\) is a solution of the boundary problem (8). Consequently, by Lemma 2, \(u\) is the unique solution of (8). By Lemma 1, item (b), the function \(z\) associated with \(y\) also solves (8), so \(z=u\); transporting this identity back through formula (6) yields the announced expression for \(y\). \(\square \)

We can give a global form of the solution of Eq. (7), valid on the whole of \(\mathbb R \), as described in the next lemma.

Lemma 8

For all \(n\in \mathbb N , n>0\), there exists a unique \({\fancyscript{C}}^{n-1}\) function \(\psi _n^{*}\) such that, for all \(t\in \mathbb R \setminus O(\mathbb Z )\),

$$\begin{aligned} \psi _n^{*}(t)={ \Big (\mathrm{sgn}\big (\mathrm{sw}\left(t\right)\big )\Big )^{n+1} E_n\big (|\mathop {\text{ sw}}\nolimits (t)|\big )} . \end{aligned}$$

Moreover, \(\psi _n^{*}-E_n(0)\) is the unique \(2\)-periodic \({\fancyscript{C}}^{n-1}\) solution of Eq. (7) vanishing at \(0\). And for all \(k\in \mathbb N , k\le n\),

$$\begin{aligned} \psi _{n}^{*(k)}={ \frac{n!}{(n-k)!}\psi _{n-k}^{*} } . \end{aligned}$$

Proof

Observe that, for \(t\in ]-1,1[\), \(\psi _n^{*}\) is expressed by the formula:

$$\begin{aligned} \psi _n^{*}(t)={ \Big (\mathop {\text{ sgn}}\nolimits (t)\Big )^{n+1} E_n\big (|t|\big )}=\psi _n(t), \end{aligned}$$

so that \(\psi _n^{*}-E_n(0)\) is a solution of Eq. (7) vanishing at \(0\) on this interval. Using Lemma 6, at every point of \(O(\mathbb Z )\), the formula valid on \(\mathbb R \setminus O(\mathbb Z )\) can be extended on both sides. The statement about the derivatives of \(\psi _{n}^{*}\) is a direct consequence of item (a), Lemma 3. The rest of the proof is straightforward. \(\square \)

By convention, we define \(\psi _0^{*}=\psi _0\circ \,\mathop {\text{ sw}}\nolimits \).
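The derivative relation of Lemma 8 can be checked numerically as well; the following sketch (helpers written by hand, with Euler polynomials up to degree \(3\)) verifies the first-order instance \((\psi _n^{*})^{\prime }=n\,\psi _{n-1}^{*}\) by central differences at a few points away from the integers.

```python
import numpy as np

E = [lambda s: 1.0,
     lambda s: s - 0.5,
     lambda s: s**2 - s,
     lambda s: s**3 - 1.5 * s**2 + 0.25]

def sw(t):
    frac = lambda x: x - np.floor(x)
    return np.sign(t) * (2.0 * frac((abs(t) + 1.0) / 2.0) - 1.0)

def psi_star(n, t):
    """psi_n^* away from the odd integers (and away from 0 for n = 0)."""
    s = sw(t)
    return float(np.sign(s))**(n + 1) * E[n](abs(s))

h = 1e-6
for n in (1, 2, 3):
    for t in (0.3, 0.8, 1.6, -0.45, 2.2):
        d = (psi_star(n, t + h) - psi_star(n, t - h)) / (2.0 * h)
        assert abs(d - n * psi_star(n - 1, t)) < 1e-5
```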

We then show that the \(n\)-primitive of a function is periodic if and only if its average value (defined in Sect. 1.2) is zero:

Lemma 9

Let \(F\) be a periodic piecewise-\({\fancyscript{C}}^\infty \) function. For all \(n\in \mathbb N , n>0\), \(\varTheta _{F}^{[n]}\) is periodic if and only if the average value of \(F\) is equal to \(0\).

Proof

Since \(F\) is periodic piecewise-\({\fancyscript{C}}^\infty \), it equals its Fourier series almost everywhere. So for almost every \(t\),

$$\begin{aligned} F(t)= {a_0+ \sum _{k\ne 0} a_k \,e^{\mathbf{i }\,k\,\pi \,\beta \,t} } \quad \text{ with} \,\beta \in \mathbb R , \text{ and} \,\forall k \in \mathbb Z , \ a_k \in \mathbb C , \end{aligned}$$

and the average value of \(F\) is the constant term \(a_0\). Under \(n\)-fold integration from \(0\), the term \(a_0\) contributes to \(\varTheta _{F}^{[n]}\) the monomial \(\frac{a_0}{n!}\,t^{n}\), which is periodic if and only if \(a_0=0\). \(\square \)

And indeed, the functions \(\psi _n^{*}\) have null average value:

Lemma 10

For all \(n\in \mathbb N \), the average value of \(\psi _n^{*}\) on any interval of length \(2\) is zero.

Proof

Without loss of generality, let us integrate on the interval \([-1,1]\):

$$\begin{aligned} {\frac{1}{2}\int _{-1}^{1} \psi _{n}^{*}\left( t \right) \,\text{ d}t}&= {\frac{1}{2} \left(\,\,\int _{-1}^{0} (-1)^{n+1} E_n(-t) \,\text{ d}t+ \int _{0}^{1} E_n(t) \,\text{ d}t \right)}\\&= {\frac{\left(1+(-1)^{n+1}\right)}{2} \int _{0}^{1} E_n(t) \,\text{ d}t}. \end{aligned}$$

If \(n\) is even, the result is zero.

Let us now assume \(n\) odd. From item (a), Lemma 3, a primitive of \((n+1)\, E_n\) is \(E_{n+1}\), so

$$\begin{aligned} \frac{1}{2}\int _{-1}^{1} \psi _{n}^{*}\left( t \right) \,\text{ d}t={ \frac{1}{n+1} \big [ E_{n+1}(t) \big ]_{0}^{1}=\frac{1}{n+1} \left( E_{n+1}(1) -E_{n+1}(0) \right)}. \end{aligned}$$

But from item (b), Lemma 3, we have \(E_{n+1}(1)=(-1)^{n+1}E_{n+1}(0)=E_{n+1}(0)\), and the average value is also zero. \(\square \)

4 Proofs of the main results

4.1 Proof of Theorem 1

Proof

By Lemma 1, item (a), we replace Eq. (1) by Eq. (7). By Lemma 7, Eq. (7) has a unique \(2\)-periodic \({\fancyscript{C}}^{n-1}\) solution vanishing at \(0\).

It is straightforward to see that \(z\) is a solution of Eq. (7) vanishing at \(0\) if and only if \(\tilde{z}=z+z_0\) (\(z_0\in \mathbb R \)) is a solution of Eq. (7) such that \(\tilde{z}(0)=z_0\).\(\square \)

4.2 Proof of Theorem 2

Proof

As in the proof of Theorem 1, we consider the same unknown \(z\) [refer to Lemma 1, item (a)] and replace Eq. (2) by an equivalent equation

$$\begin{aligned} z^{(n)} -n!\,\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big )=\frac{n!}{\eta }\,F(t\,\tau ) . \end{aligned}$$
(15)

We first consider solutions \(z\) of Eq. (15) such that \(z(0)=0\).

For all \(t\), we have \(z{}^{(n)}(t+2)=z{}^{(n)}(t)\), so \(F(t\,\tau +2\,\tau )=F(t\,\tau )\). This implies that \(\tau _F^{-1}\,\tau \in \mathbb N \).

Up to changing \(F\), we can assume that \(\eta =\tau =1\). By Theorem 1, \(z\) defined on \(]-1,1[\) by \(z(t)=\psi _n(t)-E_n(0)+n!\,\varTheta _F^{[n]}(t)\) is a solution of Eq. (15) on \(]-1,1[\). It follows that \(t\mapsto \psi _n^{*}(t)-E_n(0)+n!\,\varTheta _F^{[n]}(t)\) is a 2-periodic solution of Eq. (15) vanishing at \(0\).

Assume that \(t\mapsto u(t)\) is another 2-periodic solution of Eq. (15), such that \(u(0)=0\). Then, \(u-n!\,\varTheta _F^{[n]}\) is a 2-periodic solution of Eq. (7) vanishing at \(0\). By Theorem 1, \(z\) is unique, so \(u-n!\,\varTheta _F^{[n]}=z\). It follows that Eq. (15) has a unique 2-periodic solution vanishing at \(0\).

It is straightforward to deduce that \(z\) is a solution of Eq. (15) vanishing at \(0\) if and only if \(u=z+z_0\) (\(z_0\in \mathbb R \)) is a solution of Eq. (15) such that \(u(0)=z_0\).\(\square \)

4.3 Proof of Theorem 3

Proof

The first parts of the proofs of Theorem 3 and Theorem 2 are the same.

Let us assume that \(F=0\) and \(n>0\).

As in Theorem 2, we replace the initial equation by an equivalent equation

$$\begin{aligned} z^{(n)}-\frac{(n+k)!}{k!}\,\psi _k^{*}(t) = 0 \end{aligned}$$
(16)

by involving \(z\) such that

$$\begin{aligned} z(t)=\frac{(n+k)!}{\eta \,k!\,\tau ^n}y(t\,\tau ). \end{aligned}$$

Let us split the resolution of Eq. (16) between the intervals \(]0,1[\) and \(]-1,0[\).

For \(t\in ]0,1[\), we have \(k!\, z^{(n)}(t)=(n+k)!\, E_k(t)\). Hence \(z^{(n+k)}(t)=(n+k)!\).

On the interval \(]-1,0[\), we have \(k!\, z^{(n)}(t)=(n+k)!\, (-1)^{k+1} E_k(-t)\).

So, \(z^{(n+k)}(t)=(-1)^{2k+1} \, (n+k)!= -(n+k)!\).

It follows that \(z\) must be a 2-periodic solution, vanishing at \(0\), of

$$\begin{aligned} z^{(n+k)}-(n+k)! \,\mathop {\text{ sgn}}\nolimits \big (\mathop {\text{ sw}}\nolimits (t)\big )=0. \end{aligned}$$
(17)

By Theorem 1, Eq. (17) has a unique 2-periodic solution vanishing at \(0\), namely,

$$\begin{aligned} t\mapsto \psi _{n+k}^{*}(t) -E_{n+k}(0). \end{aligned}$$

Conversely, it is immediate to check that the function above is a 2-periodic solution of Eq. (16) vanishing at \(0\).

Finally, the proof in the case \(F\ne 0\) is similar to the proof of Theorem 2. \(\square \)