
Oscillators with Second-Order Dynamics

Chapter in Statistical Physics of Synchronization, part of the book series SpringerBriefs in Complexity.

Abstract

In the first section, we introduce the generalized Kuramoto model with inertia and noise, and discuss in turn its connection to electrical power distribution networks, its interpretation as a long-range interacting system driven out of equilibrium, and its dynamics written in a dimensionless form convenient for further analysis. In the second section, we discuss our recent numerical results on the nonequilibrium phase transitions exhibited by the model in the stationary state for unimodal frequency distributions: the system shows a phase transition between a synchronized and an incoherent stationary state. The third section is devoted to an analytical treatment of the observed phase transitions, in which we discuss the incoherent stationary state and its linear stability, as well as the synchronized stationary state of the dynamics. In the fourth section, we take up the issue of comparing and interpreting simulation results for a finite system vis-à-vis our analytical results derived in the thermodynamic limit, thereby providing insights into the (slow) relaxation properties of the dynamics.


Notes

  1. That inertia may have a significant effect on relaxation properties was already seen in Chap. 1 for the case of a single oscillator.

  2. Note that with \(\sigma =0\), all the oscillators have the same natural frequency equal to \(\langle \omega \rangle \), and consequently, the need to group the oscillators based on their natural frequencies, as was done while defining the density \(f(\theta ,v,\omega ,t)\), is no longer there. As a result, one has the stationary-state single-oscillator density \(f_\mathrm{st}(\theta ,v)\) defined as the fraction of oscillators that have angle \(\theta \) and angular velocity v in the stationary state.

  3. Obtaining analytical forms of large-deviation functionals for many-body interacting systems turns out to be a rather formidable task, and only limited success for very specific model systems has been achieved until now [32].

  4. Slow relaxation is a hallmark of long-range interactions, and mean-field interaction is the extreme limit of long-range interaction [15].

References

  1. H. Tanaka, A.J. Lichtenberg, S. Oishi, Phys. Rev. Lett. 78, 2104 (1997)
  2. J.A. Acebrón, R. Spigler, Phys. Rev. Lett. 81, 2229 (1998)
  3. J.A. Acebrón, L.L. Bonilla, R. Spigler, Phys. Rev. E 62, 3437 (2000)
  4. J.A. Acebrón, L.L. Bonilla, C.J.P. Vicente, F. Ritort, R. Spigler, Rev. Mod. Phys. 77, 137 (2005)
  5. S. Gupta, A. Campa, S. Ruffo, Phys. Rev. E 89, 022123 (2014)
  6. S. Gupta, A. Campa, S. Ruffo, J. Stat. Mech. Theory Exp. R08001 (2014)
  7. M. Komarov, S. Gupta, A. Pikovsky, EPL 106, 40003 (2014)
  8. S. Olmi, A. Navas, S. Boccaletti, A. Torcini, Phys. Rev. E 90, 042905 (2014)
  9. A. Campa, S. Gupta, S. Ruffo, J. Stat. Mech. Theory Exp. P05011 (2015)
  10. J. Barré, D. Métivier, Phys. Rev. Lett. 117, 214102 (2016)
  11. P.H. Chavanis, Eur. Phys. J. B 87, 120 (2014)
  12. M. Antoni, S. Ruffo, Phys. Rev. E 52, 2361 (1995)
  13. A. Campa, T. Dauxois, S. Ruffo, Phys. Rep. 480, 57 (2009)
  14. F. Bouchet, S. Gupta, D. Mukamel, Physica A 389, 4389 (2010)
  15. A. Campa, T. Dauxois, D. Fanelli, S. Ruffo, Physics of Long-Range Interacting Systems (Oxford University Press, Oxford, 2014)
  16. Y. Levin, R. Pakter, F.B. Rizzato, T.N. Teles, F.P.C. Benetti, Phys. Rep. 535, 1 (2014)
  17. S. Gupta, S. Ruffo, Int. J. Mod. Phys. A 32, 1741018 (2017)
  18. B. Ermentrout, J. Math. Biol. 29, 571 (1991)
  19. H. Sakaguchi, Prog. Theor. Phys. 79, 39 (1988)
  20. R. Livi, P. Politi, Nonequilibrium Statistical Physics: A Modern Perspective (Cambridge University Press, Cambridge, 2017)
  21. G. Filatrella, A.H. Nielsen, N.F. Pedersen, Eur. Phys. J. B 61, 485 (2008)
  22. M. Rohden, A. Sorge, M. Timme, D. Witthaut, Phys. Rev. Lett. 109, 064101 (2012)
  23. N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group (Addison-Wesley, Reading, 1992)
  24. H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications (Springer, Berlin, 1996)
  25. N.N. Lebedev, Special Functions and their Applications (Dover, New York, 1972)
  26. G.H. Hardy, Divergent Series (Chelsea, New York, 1991)
  27. K. Huang, Statistical Mechanics (Wiley, New York, 1987)
  28. L. Casetti, S. Gupta, Eur. Phys. J. B 87, 91 (2014)
  29. T.N. Teles, S. Gupta, P. Di Cintio, L. Casetti, Phys. Rev. E 92, 020101(R) (2015)
  30. S. Gupta, L. Casetti, New J. Phys. 18, 103051 (2016)
  31. K. Binder, Rep. Prog. Phys. 50, 783 (1987)
  32. H. Touchette, Phys. Rep. 478, 1 (2009)
  33. H.A. Kramers, Physica 7, 284 (1940)
  34. R.B. Griffiths, C.Y. Weng, J.S. Langer, Phys. Rev. 149, 301 (1966)
  35. C.W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences (Springer, Berlin, 1983)
  36. R.I. McLachlan, P. Atela, Nonlinearity 5, 541 (1992)
  37. V.I. Smirnov, A Course of Higher Mathematics, Vol. 3, Part 2: Complex Variables, Special Functions (Pergamon Press, Oxford, 1964), Chap. 1, Sec. 22

Correspondence to Shamik Gupta.

Appendices

Appendix 1: Proof that the Dynamics (3.16) Does Not Satisfy Detailed Balance

In this appendix, we prove that the dynamics (3.16) does not satisfy detailed balance unless \(g(\omega )=\delta (\omega )\), that is, unless \(\sigma \) is zero. For simplicity, we discuss the proof here for the case of two distinct natural frequencies, that is, for a particular bimodal \(g(\omega )\) made of two distinct delta peaks. Note that we need at least two different frequencies to have a non-zero \(\sigma \) for the underlying distribution.

Consider a given realization of \(g(\omega )\) in which there are \(N_{1}\) oscillators with frequencies \(\omega _{1}\) and \(N_{2}\) oscillators with frequencies \(\omega _{2}\), with \(N_{1}+N_{2}=N\). Define the N-oscillator distribution function \(f_{N}(\theta _{1},v_{1},\dots ,\theta _{N_{1}},v_{N_{1}},\theta _{N_{1}+1},v_{N_{1}+1},\dots ,\theta _{N},v_{N},t)\) as the probability density at time t to observe the system around the values \(\{\theta _{i},v_{i}\}_{1\le i\le N}\). In the following, we prefer to use the shorthand notations \(z_{i}\equiv (\theta _{i},v_{i})\) and \(\mathbf {z}=(z_{1},z_{2},\dots ,z_{N})\). Note that \(f_{N}\) satisfies the normalization \(\int \Big (\prod _{i=1}^{N}\mathrm{{d}}z_{i}\Big )f_{N}(\mathbf {z},t)=1\). We assume (i) that \(f_{N}\) is symmetric with respect to permutations of dynamical variables within the same group of oscillators, and (ii) that \(f_N\), together with the derivatives \(\partial f_{N}/\partial v_{i} ~\forall ~ i\), vanish on the boundaries of the phase space.

The evolution of \(f_{N}\) follows a Fokker-Planck equation that may be straightforwardly derived from the equations of motion (3.16):

$$\begin{aligned} \frac{\partial f_{N}}{\partial t}= & {} -\sum _{i=1}^{N}\Big [v_{i}\frac{\partial f_{N}}{\partial \theta _{i}}-\frac{1}{\sqrt{m}}\frac{\partial (v_{i}f_{N})}{\partial v_{i}}\Big ]-\sigma \sum _{j=1}^{N}\Big (\varOmega ^{T}\Big )_{j}\frac{\partial f_{N}}{\partial v_{j}}+\frac{T}{\sqrt{m}}\sum _{i=1}^{N}\frac{\partial ^{2}f_{N}}{\partial v_{i}^{2}}\nonumber \\&-\frac{1}{2N}\sum _{i,j=1}^{N}\sin (\theta _{j}-\theta _{i})\Big [\frac{\partial f_{N}}{\partial v_{i}}-\frac{\partial f_{N}}{\partial v_{j}}\Big ]. \end{aligned}$$
(3.56)

Here, the \(N\times 1\) column vector \(\varOmega \) is such that its first \(N_{1}\) entries equal \(\omega _{1}\) and the following \(N_{2}\) entries equal \(\omega _{2}\), and where the superscript T denotes matrix transpose operation: \(\varOmega ^T\equiv \left[ \omega _{1} ~\omega _{1} \dots ~\omega _{1}~\omega _{2}\dots ~\omega _{2}\right] \).

In order to prove that the dynamics (3.16) does not satisfy detailed balance unless \(\sigma =0\), we first rewrite the Fokker-Planck equation (3.56) as

$$\begin{aligned}&\frac{\partial f_{N}(\mathbf {x})}{\partial t}=-\sum _{i=1}^{2N}\frac{\partial (A_{i}(\mathbf {x})f_{N}(\mathbf {x}))}{\partial x_{i}}+\frac{1}{2}\sum _{i,j=1}^{2N}\frac{\partial ^{2}(B_{i,j}(\mathbf {x})f_{N}(\mathbf {x}))}{\partial x_{i}\partial x_{j}}, \end{aligned}$$
(3.57)

where we have

$$\begin{aligned} x_i=\left\{ \begin{array}{ll} \theta _{i};i=1,2,\dots ,N, \\ v_{i-N};i=N+1,\dots ,2N, \end{array} \right. \end{aligned}$$
(3.58)

and

$$\begin{aligned} \mathbf {x}&=\{x_{i}\}_{1\le i\le 2N}. \end{aligned}$$
(3.59)

In Eq. (3.57), the drift vector \(A_{i}(\mathbf {x})\) is given by

$$\begin{aligned} A_{i}(\mathbf {x}) =\left\{ \begin{array}{ll} v_{i};i=1,2,\dots ,N, \\ -\frac{1}{\sqrt{m}}v_{i-N} +\frac{1}{N}\sum _{j=1}^{N}\sin (\theta _{j}-\theta _{i-N})\\ +\sigma \Big (\varOmega ^{T}\Big )_{i-N}; i=N+1,\dots ,2N, \end{array} \right. \end{aligned}$$
(3.60)

while the diffusion matrix is

$$\begin{aligned} B_{i,j}(\mathbf {x}) =\left\{ \begin{array}{ll} \frac{2T}{\sqrt{m}}\delta _{ij};i,j>N, \\ 0, ~\mathrm{Otherwise.} \end{array} \right. \end{aligned}$$
(3.61)

It may be shown that the dynamics described by the Fokker-Planck equation of the form (3.57) satisfies detailed balance if and only if the following conditions are satisfied [35]:

$$\begin{aligned}&\epsilon _{i}\epsilon _{j}B_{i,j}(\epsilon \mathbf {x}) =B_{i,j}(\mathbf {x}),\end{aligned}$$
(3.62)
$$\begin{aligned}&\epsilon _{i}A_{i}(\epsilon \mathbf {x})f_{N}^{s}(\mathbf {x}) =-A_{i}(\mathbf {x})f_{N}^{s}(\mathbf {x})+\sum _{j=1}^{2N}\frac{\partial B_{i,j}(\mathbf {x})f_{N}^{s}(\mathbf {x})}{\partial x_{j}}, \end{aligned}$$
(3.63)

where \(f_{N}^{s}(\mathbf {x})\) is the stationary solution of Eq. (3.57). Here, \(\epsilon _{i}=\pm 1\) is a constant that denotes the parity with respect to time reversal of the variables \(x_{i}\)s: Under time reversal, the latter transform as \(x_{i} \rightarrow \epsilon _{i}x_{i}\), where \(\epsilon _{i}=-1\) or \(+1\) depending on whether \(x_{i}\) is odd or even under time reversal. In our case, \(\theta _{i}\)s are even, while \(v_{i}\)s are odd.

On using Eq. (3.61), we find that the condition (3.62) is trivially satisfied for our model. In order to check the other condition, we formally solve Eq. (3.63) for \(f_{N}^{s}(\mathbf {x})\), and ask if the solution solves Eq. (3.57) in the stationary state. From Eq. (3.63), we see that for \(i=1,2,\dots ,N\), the condition is obtained as

$$\begin{aligned} \epsilon _{i}A_{i}(\epsilon \mathbf {x})f_{N}^{s}(\mathbf {x})&=-A_{i}(\mathbf {x})f_{N}^{s}(\mathbf {x}), \end{aligned}$$
(3.64)

which on using Eq. (3.60) is obviously satisfied. For \(i=N+1,\dots ,2N\), we have

$$\begin{aligned} v_{k}f_{N}^{s}(\mathbf {x})&=-T\,\frac{\partial f_{N}^{s}(\mathbf {x})}{\partial v_{k}};\,\,\,\,\,\,\,\,k=i-N. \end{aligned}$$
(3.65)

Solving Eq. (3.65), we get

$$\begin{aligned} f_{N}^{s}(\mathbf {x})&\propto d(\theta _{1},\theta _{2},\dots ,\theta _{N})\exp \Big [-\frac{1}{2T}\sum _{k=1}^{N}v_{k}^{2}\Big ], \end{aligned}$$
(3.66)

where \(d(\theta _{1},\theta _{2},\dots ,\theta _{N})\) is a yet undetermined function. Substituting Eq. (3.66) into Eq. (3.57), and requiring that it is a stationary solution, we arrive at the conclusion that \(\sigma \) has to be equal to zero, and that the factor \(d(\theta _{1},\theta _{2},\dots ,\theta _{N})\) is given by \(d(\theta _{1},\theta _{2},\dots ,\theta _{N})=\exp \left( -1/(2NT)\sum _{i,j=1}^{N}\left[ 1-\cos (\theta _{i}-\theta _{j})\right] \right) \). Thus, for \(\sigma =0\), when the dynamics reduces to that of the Brownian mean-field model, we get the stationary solution as

$$\begin{aligned} f_{N,\sigma =0}^{s}(\mathbf {z}) \propto \exp \Big [-\frac{H}{T}\Big ], \end{aligned}$$
(3.67)

where H is the Hamiltonian (expressed in terms of the dimensionless variables introduced in the main text). The demonstration of the lack of detailed balance for \(\sigma \ne 0\) obviously extends to any distribution \(g(\omega )\).
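As an independent sanity check (ours, not part of the original text), one can verify numerically that for \(\sigma =0\) the Gibbs state (3.67) makes the right-hand side of the Fokker-Planck equation (3.56) vanish. The sketch below does this for N = 2 oscillators using finite-difference derivatives; all parameter values and names are our arbitrary choices.

```python
import math

m, T, N = 1.3, 0.7, 2     # arbitrary positive parameters; sigma = 0 throughout
h = 1e-4                  # finite-difference step

def f(x):
    """Candidate stationary state (3.67): exp(-H/T), x = (theta_1, theta_2, v_1, v_2)."""
    th, v = x[:N], x[N:]
    H = 0.5 * sum(vi * vi for vi in v) \
        + sum(1.0 - math.cos(th[i] - th[j])
              for i in range(N) for j in range(N)) / (2.0 * N)
    return math.exp(-H / T)

def d1(i, x):
    """Central first derivative of f in component i."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

def d2(i, x):
    """Central second derivative of f in component i."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - 2.0 * f(x) + f(xm)) / (h * h)

def fp_rhs(x):
    """Right-hand side of the Fokker-Planck equation (3.56) with sigma = 0."""
    th, v = x[:N], x[N:]
    total = 0.0
    for i in range(N):
        total += -v[i] * d1(i, x)                              # streaming term
        total += (f(x) + v[i] * d1(N + i, x)) / math.sqrt(m)   # friction, d(v_i f)/dv_i
        total += (T / math.sqrt(m)) * d2(N + i, x)             # diffusion term
    for i in range(N):
        for j in range(N):                                     # interaction term
            total -= math.sin(th[j] - th[i]) \
                     * (d1(N + i, x) - d1(N + j, x)) / (2.0 * N)
    return total
```

At any phase-space point, `fp_rhs` should vanish up to discretization error, confirming stationarity of (3.67) at \(\sigma =0\).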

Appendix 2: Simulation Details for the Dynamics (3.16)

In this appendix, we describe a method to simulate the dynamics (3.16) for given values of \(m,T,\sigma \) (note that we have dropped the overbars appearing in Eq. (3.16) so as not to overload the notation), and for a given realization of \(\omega _i\)’s. We employ a numerical integration scheme discussed in Refs. [5, 6]. Suppose we want to simulate the dynamics over a time interval \([0,\mathcal {T}]\). Let us first choose a time step size \(0<\varDelta t \ll 1\). Next, we set \(t_n=n\varDelta t\) as the n-th time step of the dynamics, where \(n=0,1,2,\ldots ,N_t\), and \(N_t=\mathcal {T}/\varDelta t\). In our numerical scheme, we first discard at every time step the effect of the noise (i.e., consider \(1/\sqrt{m}=0\)), and employ a fourth-order symplectic algorithm to integrate the resulting symplectic part of the dynamics [36]. This is followed by adding the effects of noise to the dynamical evolution through implementing an Euler-like first-order algorithm to update the dynamical variables. To summarize, one step of the numerical scheme accounting for evolution between times \(t_n\) and \(t_{n+1}=t_n+\varDelta t\) involves the following updates of the dynamical variables for \(i=1,2,\ldots ,N\): For the symplectic part, we have, for \(k=1,\ldots ,4\),

$$\begin{aligned}&v_i\Big (t_{n}+\frac{k\varDelta t}{4}\Big )=v_i\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )+b(k)\varDelta t\Big [r\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )\nonumber \\&\sin \Big \{\psi \Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )-\theta _i\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )\Big \}+\sigma \omega _i\Big ]; \nonumber \\&r\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )=\sqrt{r_x^2+r_y^2},\psi \Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )=\tan ^{-1}\frac{r_y}{r_x}, \nonumber \\&r_x=\frac{1}{N}\sum _{j=1}^N \cos \Big [\theta _j\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )\Big ],r_y=\frac{1}{N}\sum _{j=1}^N \sin \Big [\theta _j\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )\Big ], \nonumber \\ \end{aligned}$$
(3.68)
$$\begin{aligned}&\theta _i\Big (t_{n}+\frac{k\varDelta t}{4}\Big )=\theta _i\Big (t_n+\frac{(k-1)\varDelta t}{4}\Big )+a(k)\varDelta t ~v_i\Big (t_n+\frac{k\varDelta t}{4}\Big ), \end{aligned}$$
(3.69)

where the constants a(k)’s and b(k)’s are obtained from Ref. [36] as

$$\begin{aligned}&a(1) = 0.5153528374311229364, ~a(2) = -0.085782019412973646, \nonumber \\&a(3) = 0.4415830236164665242, ~a(4) = 0.1288461583653841854, \nonumber \\&b(1) = 0.1344961992774310892, ~b(2) = -0.2248198030794208058, \nonumber \\&b(3) = 0.7563200005156682911, ~b(4) = 0.3340036032863214255. \nonumber \\ \end{aligned}$$
(3.70)

At the end of the updates (3.68) and (3.69), we have the set \(\{\theta _i(t_{n+1}),v_i(t_{n+1})\}\). We then include the effect of the stochastic noise by keeping the values of the \(\theta _i(t_{n+1})\)’s unchanged, but by updating \(v_i(t_{n+1})\)’s as

$$\begin{aligned} v_i(t_{n+1}) \rightarrow v_i(t_{n+1})\Big [1-\frac{1}{\sqrt{m}} \varDelta t\Big ]+\sqrt{2\varDelta t\frac{T}{\sqrt{m}}}\varDelta X(t_{n+1}). \end{aligned}$$
(3.71)

Here, \(\varDelta X\) is a Gaussian distributed random number with zero mean and unit variance.
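The update rules (3.68)–(3.71) can be sketched in plain Python as follows. This is our illustrative implementation, not the authors' code, and the function and variable names are ours. We use the fourth-order coefficients of McLachlan and Atela [36], for which \(a(2)\) and \(b(2)\) are negative (with these signs, \(\sum _k a(k)=\sum _k b(k)=1\)).

```python
import math, random

# Fourth-order symplectic coefficients of McLachlan and Atela [36];
# note the negative signs of A[1] and B[1] (i.e., a(2) and b(2)).
A = [0.5153528374311229364, -0.085782019412973646,
     0.4415830236164665242, 0.1288461583653841854]
B = [0.1344961992774310892, -0.2248198030794208058,
     0.7563200005156682911, 0.3340036032863214255]

def order_parameter(theta):
    """r and psi with r*exp(i*psi) = (1/N) * sum_j exp(i*theta_j)."""
    N = len(theta)
    rx = sum(math.cos(t) for t in theta) / N
    ry = sum(math.sin(t) for t in theta) / N
    return math.hypot(rx, ry), math.atan2(ry, rx)

def step(theta, v, omega, m, T, sigma, dt, rng=random):
    """One update t_n -> t_n + dt of the dynamics (3.16), in place."""
    N = len(theta)
    # deterministic part, integrated with the symplectic scheme (3.68)-(3.69)
    for k in range(4):
        r, psi = order_parameter(theta)
        for i in range(N):
            v[i] += B[k] * dt * (r * math.sin(psi - theta[i]) + sigma * omega[i])
        for i in range(N):
            theta[i] += A[k] * dt * v[i]
    # stochastic part: Euler-like update (3.71) of the velocities only
    for i in range(N):
        v[i] = v[i] * (1.0 - dt / math.sqrt(m)) \
               + math.sqrt(2.0 * dt * T / math.sqrt(m)) * rng.gauss(0.0, 1.0)
```

A quick consistency check: with \(T=0\) and \(\sigma =0\), a perfectly synchronized state stays synchronized, the angles advance by \(v\varDelta t\), and the velocities are only damped by the friction factor.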

Appendix 3: Derivation of the Kramers Equation

In this appendix, we derive the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy equations for the dynamics (3.16) for any number N of oscillators. This will then allow us to derive, in the limit \(N \rightarrow \infty \), the Kramers equation (3.21) discussed in the main text. Again, as in Appendix 1, we first discuss the derivation of the BBGKY equations for a bimodal \(g(\omega )\) made of two distinct delta peaks, and then generalize the derivation to a general \(g(\omega )\). Our starting point is the Fokker-Planck equation, Eq. (3.56). To proceed, we follow the standard procedure [27], which was also invoked in Chap. 2, and define the so-called reduced distribution function \(f_{s_{1},s_{2}}\), with \(s_1=0,1,2,\dots ,N_{1}\) and \(s_2=0,1,2,\dots ,N_{2}\), as

$$\begin{aligned}&f_{s_{1},s_{2}}(z_{1},z_{2},\dots ,z_{s_{1}},z_{N_{1}+1},\dots ,z_{N_{1}+s_{2}},t)=\nonumber \\&\frac{N_{1}!}{(N_{1}-s_{1})!N_{1}^{s_{1}}}\frac{N_{2}!}{(N_{2}-s_{2})!N_{2}^{s_{2}}}\int \mathrm{{d}}z_{s_{1}+1}\dots \mathrm{{d}}z_{N_{1}}\mathrm{{d}}z_{N_{1}+s_{2}+1}\dots \mathrm{{d}}z_{N}f_{N}(\mathbf {z},t). \end{aligned}$$
(3.72)

Note that the following normalizations hold for the single-oscillator distribution functions: \(\int \mathrm{{d}}z_{1}f_{1,0}(z_{1},t)=1\), and \(\int \mathrm{{d}}z_{N_{1}+1}f_{0,1}(z_{N_{1}+1},t)=1\).

Using Eq. (3.56) in Eq. (3.72), we get the BBGKY hierarchy equations for oscillators with frequencies \(\omega _{1}\) as

$$\begin{aligned}&\frac{\partial f_{s,0}}{\partial t} +\sum _{i=1}^{s}\Big [v_{i}\frac{\partial f_{s,0}}{\partial \theta _{i}}-\frac{1}{\sqrt{m}}\frac{\partial }{\partial v_{i}}(v_{i}f_{s,0})\Big ]+\sigma \sum _{i=1}^{s}\omega _{1}\frac{\partial f_{s,0}}{\partial v_{i}}\nonumber \\&-\frac{T}{\sqrt{m}}\sum _{i=1}^{s}\frac{\partial ^{2}f_{s,0}}{\partial v_{i}^{2}}= -\frac{1}{2N}\sum _{i,j=1}^{s}\sin (\theta _{j}-\theta _{i})\Big [\frac{\partial f_{s,0}}{\partial v_{i}}-\frac{\partial f_{s,0}}{\partial v_{j}}\Big ]\nonumber \\&-\frac{N_{1}}{N}\sum _{i=1}^{s}\int \mathrm{{d}}z_{s+1}\sin (\theta _{s+1}-\theta _{i})\frac{\partial f_{s+1,0}}{\partial v_{i}}\nonumber \\&- \frac{N_{2}}{N}\int \mathrm{{d}}z_{N_{1}+1}\sum _{i=1}^{s}\sin (\theta _{N_{1}+1}-\theta _{i})\frac{\partial f_{s,1}}{\partial v_{i}}, \end{aligned}$$
(3.73)

and similar equations for \(f_{0,s}\) for oscillators of frequencies \(\omega _{2}\). The first equations of the hierarchy are

$$\begin{aligned}&\frac{\partial f_{1,0}(\theta ,v,t)}{\partial t} +v\,\frac{\partial f_{1,0}(\theta ,v,t)}{\partial \theta }-\frac{1}{\sqrt{m}}\frac{\partial }{\partial v}(vf_{1,0}(\theta ,v,t))\nonumber \\&+\sigma \omega _{1}\frac{\partial f_{1,0}(\theta ,v,t)}{\partial v}-\frac{T}{\sqrt{m}}\frac{\partial ^{2}f_{1,0}(\theta ,v,t)}{\partial v^{2}}\nonumber \\&=-\frac{N_{1}}{N}\int \mathrm{{d}}\theta '\mathrm{{d}}v'\sin (\theta '-\theta )\frac{\partial f_{2,0}(\theta ,v,\theta ',v',t)}{\partial v}\nonumber \\&-\frac{N_{2}}{N}\int \mathrm{{d}}\theta '\mathrm{{d}}v'\sin (\theta '-\theta )\frac{\partial f_{1,1}(\theta ,v,\theta ',v',t)}{\partial v}, \end{aligned}$$
(3.74)

and

$$\begin{aligned}&\frac{\partial f_{0,1}(\theta ,v,t)}{\partial t} +v\,\frac{\partial f_{0,1}(\theta ,v,t)}{\partial \theta }-\frac{1}{\sqrt{m}}\frac{\partial }{\partial v}(vf_{0,1}(\theta ,v,t))\nonumber \\&+\sigma \omega _{2}\frac{\partial f_{0,1}(\theta ,v,t)}{\partial v}-\frac{T}{\sqrt{m}}\frac{\partial ^{2}f_{0,1}(\theta ,v,t)}{\partial v^{2}}\nonumber \\&=-\frac{N_{2}}{N}\int \mathrm{{d}}\theta '\mathrm{{d}}v'\sin (\theta '-\theta )\frac{\partial f_{0,2}(\theta ,v,\theta ',v',t)}{\partial v}\nonumber \\&-\frac{N_{1}}{N}\int \mathrm{{d}}\theta '\mathrm{{d}}v'\sin (\theta '-\theta )\frac{\partial f_{1,1}(\theta ,v,\theta ',v',t)}{\partial v}. \end{aligned}$$
(3.75)

In the limit of large N, we may write

$$\begin{aligned} g(\omega )= \Big [\frac{N_{1}}{N}\delta (\omega -\omega _{1})+\frac{N_{2}}{N}\delta (\omega -\omega _{2})\Big ], \end{aligned}$$
(3.76)

and express Eqs. (3.74) and (3.75) in terms of \(g(\omega )\).

To generalize Eqs. (3.74) and (3.75) to the case of a continuous \(g(\omega )\), we denote for this case the single-oscillator distribution function as \(f(\theta ,v,\omega ,t)\). The first equation of the hierarchy is then obtained as

$$\begin{aligned}&\frac{\partial f(\theta ,v,\omega ,t)}{\partial t} +v\,\frac{\partial f(\theta ,v,\omega ,t)}{\partial \theta }-\frac{1}{\sqrt{m}}\frac{\partial }{\partial v}(vf(\theta ,v,\omega ,t))\nonumber \\&+\sigma \omega \frac{\partial f(\theta ,v,\omega ,t)}{\partial v}-\frac{T}{\sqrt{m}}\frac{\partial ^{2}f(\theta ,v,\omega ,t)}{\partial v^{2}}\nonumber \\&= -\int \mathrm{{d}}\omega 'g(\omega ')\int \mathrm{{d}}\theta '\mathrm{{d}}v'\sin (\theta '-\theta )\frac{\partial f(\theta ,v,\theta ',v',\omega ,\omega ',t)}{\partial v}. \end{aligned}$$
(3.77)

In the continuum limit \(N\rightarrow \infty \), we may neglect two-oscillator correlations (as we have done in Chap. 2) and approximate \(f(\theta ,v,\theta ',v',\omega ,\omega ',t)\) as

$$\begin{aligned} f(\theta ,v,\theta ',v',\omega ,\omega ',t) =f(\theta ,v,\omega ,t)f(\theta ',v',\omega ',t), \end{aligned}$$
(3.78)

neglecting terms that are sub-dominant in N. Substituting Eq. (3.78) into Eq. (3.77) reduces the latter to the Kramers equation (3.21), thereby achieving the goal of this appendix.
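For reference, carrying out this substitution of the factorization (3.78) into Eq. (3.77) gives the closed kinetic equation for the single-oscillator density, which should coincide, up to notation, with the Kramers equation (3.21) of the main text:

```latex
\frac{\partial f(\theta ,v,\omega ,t)}{\partial t}
+v\,\frac{\partial f}{\partial \theta }
-\frac{1}{\sqrt{m}}\frac{\partial (vf)}{\partial v}
+\sigma \omega \,\frac{\partial f}{\partial v}
-\frac{T}{\sqrt{m}}\frac{\partial ^{2}f}{\partial v^{2}}
=-\frac{\partial f(\theta ,v,\omega ,t)}{\partial v}
\int \mathrm{d}\omega '\,g(\omega ')\int \mathrm{d}\theta '\,\mathrm{d}v'\,
\sin (\theta '-\theta )\,f(\theta ',v',\omega ',t).
```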

Appendix 4: Nature of Solutions of Eq. (3.39)

In this appendix, we analyze in detail the nature of solutions of Eq. (3.39). We rewrite the equation as

$$\begin{aligned}&F(\lambda ;m,T,\sigma ) \equiv \frac{e^{mT}}{2T}\sum _{p=0}^\infty \frac{\left( -mT\right) ^p \left( 1+\frac{p}{mT}\right) }{p!}\nonumber \\&\times \int \frac{g(\omega )\mathrm{{d}}\omega }{1+\frac{p}{mT}+\frac{\lambda }{T\sqrt{m}} + \mathrm{i}\frac{\sigma \omega }{T}} - 1 = 0, \end{aligned}$$
(3.79)

where \(g(\omega )\) is unimodal. The incoherent state will be unstable if there is a \(\lambda \) with a positive real part that satisfies the above eigenvalue equation. We will now prove that depending on the parameters appearing in the above equation, there can be at most one such \(\lambda \) that can be only real. Moreover, for the case of a Gaussian \(g(\omega )\), we will obtain the general shape of the surface in the \((m,T,\sigma )\) space that defines the instability region of the incoherent state.

Considering m and T strictly positive, we multiply for convenience the numerator and denominator of the integrand in Eq. (3.79) by mT, to arrive at

$$\begin{aligned}&F(\lambda ;m,T,\sigma ) = \frac{e^{mT}}{2T}\sum _{p=0}^\infty \frac{\left( -mT\right) ^p \left( p+mT\right) }{p!}\nonumber \\&\times \int \frac{g(\omega )\mathrm{{d}}\omega }{mT+p+\sqrt{m}\lambda + \mathrm{i}\sigma m \omega } - 1 = 0. \end{aligned}$$
(3.80)

We now look for purely imaginary solutions of this equation. Separating the last equation into real and imaginary parts, we have

$$\begin{aligned}&\mathrm{Re} \left[ F(\mathrm{i}\mu ;m,T,\sigma )\right] = \frac{e^{mT}}{2T}\sum _{p=0}^\infty \frac{\left( -mT\right) ^p}{p!}\nonumber \\&\times \int \mathrm{{d}}\omega \, g(\omega ) \frac{\left( p+mT\right) ^2}{\left( p+mT\right) ^2+\left( m\sigma \omega +\sqrt{m}\mu \right) ^2} - 1 = 0, \end{aligned}$$
(3.81)
$$\begin{aligned}&\mathrm{Im} \left[ F(\mathrm{i}\mu ;m,T,\sigma )\right] = -\frac{e^{mT}}{2T} \sum _{p=0}^\infty \frac{\left( -mT\right) ^p}{p!}\nonumber \\&\times \int \mathrm{{d}}\omega \, g(\omega ) \frac{\left( p+mT\right) \left( m\sigma \omega + \sqrt{m}\mu \right) }{\left( p+mT\right) ^2+\left( m\sigma \omega + \sqrt{m}\mu \right) ^2}= 0. \end{aligned}$$
(3.82)

In the second equation above, let us make the change of variables \(m\sigma \omega + \sqrt{m}\mu = m\sigma x\), and exploit the parity in x of the sum. We get

$$\begin{aligned}&\mathrm{Im} \left[ F(\mathrm{i}\mu ;m,T,\sigma )\right] =\nonumber \\&-m\sigma \int _0^\infty \mathrm{{d}}x\Big \{ \left[ g\left( x-\frac{\mu }{\sqrt{m}\sigma }\right) -g\left( -x-\frac{\mu }{\sqrt{m}\sigma }\right) \right] \nonumber \\&\times x \sum _{p=0}^\infty \frac{\left( -mT\right) ^p}{p!} \frac{p+mT}{\left( p+mT\right) ^2+m^2\sigma ^2 x^2} \Big \} = 0, \end{aligned}$$
(3.83)

where it may be shown that the sum on the right-hand side is positive definite for any finite \(\sigma \). Furthermore, for the class of \(g(\omega )\) considered in the main text, one may see that the term in square brackets is positive (respectively, negative) definite for \(\mu >0\) (respectively, for \(\mu <0\)). As a result, the last equation is never satisfied for \(\mu \ne 0\) and finite, and therefore, the eigenvalue equation does not admit purely imaginary solutions (the proof holds also for the particular case \(g(\omega ) = \delta (\omega )\), as may be checked).
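As a numerical illustration (ours, not from the book), Eq. (3.82) can be evaluated for a Gaussian \(g(\omega )\) by truncating the series in p and integrating over \(\omega \) with a trapezoidal rule; one finds \(\mathrm{Im}\left[ F(\mathrm{i}\mu )\right] \) negative for \(\mu >0\), positive for \(\mu <0\), and zero only at \(\mu =0\), in line with the sign argument above. Parameter values below are arbitrary.

```python
import math

def im_F(mu, m, T, sigma, pmax=60, wmax=10.0, n=4000):
    """Eq. (3.82) for Gaussian g(omega): series truncated at pmax, trapezoidal rule in omega."""
    dw = 2.0 * wmax / n
    total, term = 0.0, 1.0            # term = (-mT)^p / p!
    for p in range(pmax):
        a = p + m * T
        acc = 0.0
        for k in range(n + 1):
            w = -wmax + k * dw
            wt = 0.5 if k in (0, n) else 1.0     # trapezoidal weights
            g = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
            u = m * sigma * w + math.sqrt(m) * mu
            acc += wt * g * a * u / (a * a + u * u)
        total += term * acc * dw
        term *= -m * T / (p + 1)
    return -math.exp(m * T) / (2.0 * T) * total
```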

Fig. 3.10: The loop in the complex F-plane, (b), corresponding to the loop in the complex \(\lambda \)-plane, (a), as determined by the function \(F(\lambda )\) given in Eq. (3.80).

We may also conclude that there can be at most one solution with positive real part. In fact, consider in the complex \(\lambda \)-plane the loop depicted in Fig. 3.10, panel (a), where the points A and C represent \(\mathrm{Im}\,\lambda \rightarrow \pm \infty \), respectively, and the radius of the arc extends to \(\infty \). Due to the sign properties of \(\mathrm{Im} \left[ F(\mathrm{i}\mu ;m,T,\sigma )\right] \) just described, we obtain correspondingly in the complex-\(F(\lambda )\) plane the loop represented schematically in Fig. 3.10, panel (b). The point \(F=-1\) in panel (b) is obtained for \(\lambda \) in panel (a) at the points A and C and on the whole of the arc extending to infinity. The position of the point B in the complex-F plane is determined by the value of F(0), which is given by

$$\begin{aligned}&F(0;m,T,\sigma ) = \frac{e^{mT}}{2T}\sum _{p=0}^\infty \frac{\left( -mT\right) ^p}{p!}\nonumber \\&\times \int \mathrm{{d}}\omega \, g(\omega ) \frac{\left( p+mT\right) ^2}{\left( p+mT\right) ^2+\left( m\sigma \omega \right) ^2} - 1. \end{aligned}$$
(3.84)

From the well-known theorem of complex analysis on the number of zeros of an analytic function in a given domain of the complex plane (the argument principle) [37], we arrive at the result that for \(F(0;m,T,\sigma )>0\), there is one and only one solution of the eigenvalue equation with positive real part; on the other hand, for \(F(0;m,T,\sigma )<0\), there is no such solution. When the single solution with positive real part exists, it is necessarily real, since a complex solution would imply the existence of its complex conjugate. The value of \(F(0;m,T,\sigma )\) may be seen to equal \(1/(2T)-1\) for \(\sigma = 0\). For positive \(\sigma \), the value depends on the specific form of the distribution function \(g(\omega )\). However, one can prove that the value is always smaller than \(1/(2T)-1\), consistent with the physically reasonable fact that if the incoherent state is stable for \(\sigma =0\), which happens for \(T>\frac{1}{2}\), it is all the more stable for \(\sigma > 0\).

The surface delimiting the region of instability in the \((m,T,\sigma )\) phase space is implicitly defined by Eq. (3.84) (i.e., \(F(0;m,T,\sigma )=0\)), which in principle can be solved to obtain the threshold value of \(\sigma \) (denoted by \(\sigma ^\mathrm{inc}\)) as a function of (mT): \(\sigma ^\mathrm{inc}=\sigma ^\mathrm{inc}(m,T)\). On physical grounds, we expect that the latter is a single-valued function, and that for any given value of m, it is a decreasing function of T for \(0\le T \le 1/2\), reaching 0 for \(T=1/2\). We are able to prove these facts analytically for the class of unimodal distribution functions \(g(\omega )\) considered in the main text that includes the Gaussian case. However, we can prove for any \(g(\omega )\) that \(\sigma ^\mathrm{inc}(m,T)\) tends to 0 for \(m\rightarrow \infty \). This is done using the integral representation

$$\begin{aligned}&\sum _{p=0}^\infty \frac{\left( -mT\right) ^p}{p!} \frac{\left( p+mT\right) ^2}{\left( p+mT\right) ^2+\left( m\sigma \omega \right) ^2}= e^{-mT}\nonumber \\&- \left( m\sigma \omega \right) \int _0^\infty \mathrm{{d}}t \, \exp \left[ -mT \left( t + e^{-t}\right) \right] \sin \left( m\sigma \omega t \right) . \end{aligned}$$
(3.85)

For \(\sigma >0\) and \(m\rightarrow \infty \), one may see that the second term on the right-hand side of the last equation tends to \(e^{-mT}\). We thus obtain, by examining Eq. (3.84), that \(F(0;m\rightarrow \infty ,T>0,\sigma >0)=-1\). Combined with the fact that \(F(0;m,T,0)=1/(2T)-1\), this shows that \(\sigma ^\mathrm{inc}(m\rightarrow \infty ,0\le T \le \frac{1}{2})=0\).
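The representation (3.85) is easy to check numerically. The following sketch (ours, not from the book) compares the truncated series on the left with the integral on the right, computed by a simple trapezoidal rule, for arbitrary values of mT and \(m\sigma \omega \):

```python
import math

def lhs_series(mT, b, pmax=200):
    """Left-hand side of Eq. (3.85): sum_p (-mT)^p/p! * (p+mT)^2 / ((p+mT)^2 + b^2)."""
    s, term = 0.0, 1.0        # term = (-mT)^p / p!
    for p in range(pmax):
        s += term * (p + mT) ** 2 / ((p + mT) ** 2 + b * b)
        term *= -mT / (p + 1)
    return s

def rhs_integral(mT, b, tmax=60.0, n=200000):
    """Right-hand side: exp(-mT) - b * int_0^inf exp[-mT(t + e^{-t})] sin(b t) dt."""
    dt = tmax / n
    acc = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0     # trapezoidal weights
        acc += w * math.exp(-mT * (t + math.exp(-t))) * math.sin(b * t)
    return math.exp(-mT) - b * acc * dt
```

Here b plays the role of \(m\sigma \omega \); the two sides agree to high accuracy for the tested parameter values.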

We now turn to the Gaussian case, \(g(\omega )=1/\sqrt{2\pi } \exp \left[ -\omega ^2/2\right] \). Denoting with a subscript g this case, and using Eq. (3.85), we have

$$\begin{aligned}&F_g(0;m,T,\sigma ) = \frac{1}{2T}- 1-\frac{e^{mT}}{2T\sqrt{2\pi }}\int \mathrm{{d}}\omega \, e^{-\frac{\omega ^2}{2}} \left( m\sigma \omega \right) \nonumber \\&\int _0^\infty \mathrm{{d}}t \, \exp \left[ -mT \left( t + e^{-t}\right) \right] \sin \left( m\sigma \omega t \right) . \end{aligned}$$
(3.86)

The integral in \(\omega \) may be easily performed: making the change of variable \(m\sigma t=y\), we arrive at the equation

$$\begin{aligned}&F_g(0;m,T,\sigma ) = \frac{1}{2T}-1\nonumber \\&- \frac{1}{2T}\int _0^\infty \mathrm{{d}}y \, y e^{-\frac{y^2}{2}}\exp \left[ mT \left( 1 - \frac{y}{m\sigma } - e^{-\frac{y}{m\sigma }}\right) \right] . \end{aligned}$$
(3.87)

The equation \(F_g(0;m,T,\sigma )=0\) implicitly defines the function \(\sigma ^\mathrm{inc}(m,T)\), which we can show to be a single-valued function with the properties \(\frac{\partial \sigma ^\mathrm{inc}}{\partial m}<0\) and \(\frac{\partial \sigma ^\mathrm{inc}}{\partial T}<0\). We show these by explicitly computing the partial derivatives of \(F_g(0;m,T,\sigma )\) with respect to \(\sigma \) and m, and by adopting a suitable strategy to evaluate the behavior with respect to changes in T.

We begin by computing the derivative with respect to \(\sigma \). From Eq. (3.87), we obtain

$$\begin{aligned}&\frac{\partial }{\partial \sigma }F_g(0;m,T,\sigma )= - \frac{1}{2\sigma ^2}\int _0^\infty \mathrm{{d}}y \, y^2 e^{-\frac{y^2}{2}} \left( 1- e^{-\frac{y}{m\sigma }} \right) \nonumber \\&\times \exp \left[ mT \left( 1 - \frac{y}{m\sigma } - e^{-\frac{y}{m\sigma }}\right) \right] , \end{aligned}$$
(3.88)

which is clearly negative. Secondly, the derivative with respect to m gives

$$\begin{aligned}&\frac{\partial }{\partial m}F_g(0;m,T,\sigma ) = - \frac{1}{2}\int _0^\infty \mathrm{{d}}y \, y e^{-\frac{y^2}{2}}\nonumber \\&\times \left( 1- e^{-\frac{y}{m\sigma }} - \frac{y}{m\sigma }e^{-\frac{y}{m\sigma }} \right) \exp \left[ mT \left( 1 - \frac{y}{m\sigma } - e^{-\frac{y}{m\sigma }}\right) \right] . \end{aligned}$$
(3.89)

This derivative is negative, since \(1-e^{-x} -xe^{-x}\) is positive for \(x>0\). From the implicit function theorem, we then obtain the result \(\frac{\partial \sigma ^\mathrm{inc}}{\partial m}<0\). The study of the behavior with respect to a change in T is more complicated. Since we are considering \(T>0\), we multiply Eq. (3.87) by 2T to obtain

$$\begin{aligned}&2T F_g(0;m,T,\sigma ) = 1 - 2T \nonumber \\&- \int _0^\infty \mathrm{{d}}y \, y e^{-\frac{y^2}{2}}\exp \left[ mT \left( 1 - \frac{y}{m\sigma } - e^{-\frac{y}{m\sigma }}\right) \right] . \end{aligned}$$
(3.90)

Consider the integral on the right-hand side:

$$\begin{aligned} \int _0^\infty \mathrm{{d}}y \, y e^{-\frac{y^2}{2}} \exp \left[ mT \left( 1 - \frac{y}{m\sigma } - e^{-\frac{y}{m\sigma }}\right) \right] ; \end{aligned}$$
(3.91)

Since \(1-x-e^{-x}\) is negative for \(x>0\), we conclude that the first T derivative of this expression is negative, while its second T derivative is positive. Then the right-hand side of Eq. (3.90) can vanish for \(T>0\) for at most one value of T. Furthermore, since for fixed y and m the value of \(y/(m\sigma )\) decreases as \(\sigma \) increases, we conclude that the T value for which \(F_g(0;m,T,\sigma )=0\) decreases with increasing \(\sigma \) at fixed m. This concludes the proof. Furthermore, from what we have seen before, we have \(\sigma ^\mathrm{inc}(m,1/2)=0\) and \(\lim _{m\rightarrow \infty }\sigma ^\mathrm{inc}(m,T)=0\) for \(0\le T \le 1/2\).
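The monotonicity properties just derived are easy to verify numerically. The following Python sketch (ours, not part of the original analysis; the quadrature grid, the cutoff at \(y=10\) and the bracketing interval are illustrative choices) evaluates Eq. (3.87) by trapezoidal quadrature and locates \(\sigma ^\mathrm{inc}(m,T)\) by bisection, using the fact that \(F_g\) is decreasing in \(\sigma \), Eq. (3.88):

```python
import math

def F_g(sigma, m, T, y_max=10.0, n_steps=4000):
    # Evaluate F_g(0; m, T, sigma) of Eq. (3.87) by trapezoidal quadrature.
    # The exponent mT(1 - x - e^{-x}) is <= 0 for x >= 0, so no overflow occurs.
    h = y_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        y = i * h
        x = y / (m * sigma)
        f = y * math.exp(-0.5 * y * y) * math.exp(m * T * (1.0 - x - math.exp(-x)))
        total += 0.5 * f if i in (0, n_steps) else f
    return 1.0 / (2.0 * T) - 1.0 - h * total / (2.0 * T)

def sigma_inc(m, T, lo=1e-3, hi=50.0, tol=1e-8):
    # Bisection on F_g(sigma) = 0; since F_g decreases monotonically in sigma
    # (Eq. 3.88), a single sign change brackets the unique root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F_g(mid, m, T) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For \(T<1/2\), one has \(F_g \rightarrow 1/(2T)-1>0\) as \(\sigma \rightarrow 0\) and \(F_g \rightarrow 1/(2T)-1-1/(2T)=-1\) as \(\sigma \rightarrow \infty \), so the bracket always contains the unique root; increasing either m or T shifts the root to smaller \(\sigma \), consistent with \(\partial \sigma ^\mathrm{inc}/\partial m<0\) and \(\partial \sigma ^\mathrm{inc}/\partial T<0\).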

From the above analysis, it should be evident that the proof is not restricted to the Gaussian case, but would work for any \(g(\omega )\) such that

$$\begin{aligned} \beta \int \mathrm{{d}}x \, g(x) x \sin (\beta x), \end{aligned}$$
(3.92)

is positive for any \(\beta \). However, on physical grounds, we are led to assume that the same conclusions hold for any unimodal \(g(\omega )\).
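For the standard Gaussian, the positivity of the expression (3.92) can be confirmed directly, since the integral then has the closed form \(\beta ^2 e^{-\beta ^2/2}\) (a standard Gaussian integral). A short numerical check (ours, purely illustrative; grid and cutoff are convenience choices):

```python
import math

def expr_392(beta, x_max=10.0, n_steps=20000):
    # Numerically evaluate beta * int dx g(x) x sin(beta x) for the standard
    # Gaussian g(x) = e^{-x^2/2} / sqrt(2 pi), by trapezoidal quadrature.
    # The closed form is beta^2 e^{-beta^2/2}, manifestly positive.
    h = 2.0 * x_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        x = -x_max + i * h
        f = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) * x * math.sin(beta * x)
        total += 0.5 * f if i in (0, n_steps) else f
    return beta * h * total
```

Since the integrand and all its derivatives are negligible at the truncation points, the trapezoidal rule is here accurate to near machine precision.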

Appendix 5: Solution of the System of Eqs. (3.46)

In this appendix, we give details of the solution, Eqs. (3.47)–(3.53), to the system of Eqs. (3.46).

Let us first consider Eq. (3.46) with \(n=0\), from which we obtain that \(c_{1,k}(\theta ,\omega )\) is independent of \(\theta \) for each k. Next, consider Eq. (3.46) for \(k=0\) and \(n=2,3,\dots \). Since we have \(c_{n,0}(\theta ,\omega )=0\) for \(n>0\), we find that \(c_{n,1}(\theta ,\omega )=0\) for \(n>1\). Next, Eq. (3.46) for \(k=1\) and \(n=3,4,\dots \) gives \(c_{n,2}(\theta ,\omega )=0\) for \(n>2\); for \(k=2\) and \(n=4,5,\dots \), we get \(c_{n,3}(\theta ,\omega )=0\) for \(n>3\), and so on. We are thus led to conclude that \(c_{n,k}(\theta ,\omega )=0~\forall ~k<n\). Figure 3.11 displays the coefficients \(c_{n,k}\) in a matrix that is seen to be upper triangular on the basis of the result just obtained. Hence, to obtain all the non-zero elements of the matrix, we should consider Eq. (3.46) for \(n=1,2,\dots \) and \(k \ge n-1\), or, equivalently, for \(k=0,1,2,\dots \) and \(n=1,2,\dots ,k+1\). To this end, we will first obtain the elements of the main diagonal, \(c_{n,n}(\theta ,\omega )\), then the elements of the first upper diagonal, \(c_{n,n+1}(\theta ,\omega )\), then the elements of the second upper diagonal, \(c_{n,n+2}(\theta ,\omega )\), and so on.

Fig. 3.11 Flow diagram for the evaluation of the expansion coefficients \(c_{n,k}(\theta ,\omega )\), \(n,k=0,1,2,\ldots ,6\), by using Eq. (3.46). Starting from the main diagonal, arrows and different colors denote subsequent flows (see text). The elements below the main diagonal are all zero. ©SISSA Medialab Srl. Reproduced by permission of IOP Publishing. All rights reserved. https://doi.org/10.1088/1742-5468/2015/05/P05011

Let us begin by studying the case of \(n=1\) and \(k=0\); we have

$$\begin{aligned} \sqrt{T}\frac{\partial c_{0,0}(\theta ,\omega )}{\partial \theta } +\sqrt{2T}\frac{\partial c_{2,0}(\theta ,\omega )}{\partial \theta } +\sqrt{T} a(\theta ,\omega ) c_{0,0}(\theta ,\omega ) + c_{1,1}(\omega )=0. \end{aligned}$$
(3.93)

Here, we have \(c_{2,0}(\theta ,\omega )=0\), while \(c_{1,1}(\omega )\) is independent of \(\theta \). We thus end up with a first-order differential equation for \(c_{0,0}(\theta ,\omega )\) with an unknown constant. The condition \(c_{0,0}(\theta ,\omega )=c_{0,0}(\theta +2\pi ,\omega )\) fixes the value of this constant, and we get

$$\begin{aligned} c_{0,0}(\theta ,\omega )= & {} c_{0,0}(0,\omega )e^{- g(\theta ,\omega )}\left[ 1+\left( e^{g(2\pi ,\omega )}-1\right) \frac{\int _0^\theta \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}{\int _{0}^{2\pi } \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}\right] , \end{aligned}$$
(3.94)
$$\begin{aligned} c_{1,1}(\omega )= & {} \sqrt{T}\frac{c_{0,0}(0,\omega )\left( 1-e^{g(2\pi ,\omega )}\right) }{\int _{0}^{2\pi } \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}, \end{aligned}$$
(3.95)

where \(g(\theta ,\omega )=\int _0^\theta \mathrm{{d}}\theta 'a(\theta ',\omega )\), and \(c_{0,0}(0,\omega )\) has to be fixed at the end by the normalization of \(b_0(\theta ,\omega )\). Now that we have determined \(c_{0,0}(\theta ,\omega )\) and \(c_{1,1}(\omega )\), we may obtain recursively the main diagonal elements by considering Eq. (3.46) for \(n=2,3,\dots \) and \(k=n-1\); we get

$$\begin{aligned} \sqrt{nT}\frac{\partial c_{n-1,n-1}(\theta ,\omega )}{\partial \theta } +\sqrt{(n+1)T}\frac{\partial c_{n+1,n-1}(\theta ,\omega )}{\partial \theta } \nonumber \\ +\sqrt{nT} a(\theta ,\omega ) c_{n-1,n-1}(\theta ,\omega ) + nc_{n,n}(\theta ,\omega )=0. \end{aligned}$$
(3.96)

Since we have \(c_{n+1,n-1}(\theta ,\omega )=0\), we get

$$\begin{aligned} c_{n,n}(\theta ,\omega )=-\sqrt{\frac{T}{n}}\left[ \frac{\partial c_{n-1,n-1}(\theta ,\omega )}{\partial \theta } + a(\theta ,\omega )c_{n-1,n-1}(\theta ,\omega ) \right] \end{aligned}$$
(3.97)

for \(n=2,3,\dots \). In particular, for \(n=2\), the first term within the square brackets is absent as \(c_{1,1}(\omega )\) is independent of \(\theta \). Let us note that all the functions \(c_{n,n}(\theta ,\omega )\) are proportional to \(c_{0,0}(0,\omega )\).
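As an illustration of Eqs. (3.94)-(3.95), the following Python sketch (ours; the drift \(a(\theta ,\omega )=\omega -\sin \theta \) is an arbitrary stand-in for the actual \(a(\theta ,\omega )\) of the text, and the grid size is a convenience choice) builds \(c_{0,0}(\theta ,\omega )\) and \(c_{1,1}(\omega )\) on a \(\theta \)-grid and checks the \(2\pi \)-periodicity that fixed the unknown constant:

```python
import math

N = 2000                      # number of theta grid intervals on [0, 2 pi]
H = 2.0 * math.pi / N

def a(theta, omega):
    # Illustrative drift only; NOT the a(theta, omega) of the chapter.
    return omega - math.sin(theta)

def g_table(omega):
    # g(theta) = int_0^theta a(theta', omega) dtheta', cumulative trapezoid.
    g = [0.0] * (N + 1)
    for i in range(1, N + 1):
        g[i] = g[i - 1] + 0.5 * H * (a((i - 1) * H, omega) + a(i * H, omega))
    return g

def c00_c11(omega, T, c000=1.0):
    # Eq. (3.94) for c_{0,0}(theta, omega) and Eq. (3.95) for c_{1,1}(omega),
    # with c_{0,0}(0, omega) = c000 (normalization is fixed later in the text).
    g = g_table(omega)
    eg = [math.exp(v) for v in g]
    cum = [0.0] * (N + 1)     # cumulative integral of e^{g} from 0 to theta_i
    for i in range(1, N + 1):
        cum[i] = cum[i - 1] + 0.5 * H * (eg[i - 1] + eg[i])
    I2pi = cum[N]
    c00 = [c000 * math.exp(-g[i]) * (1.0 + (math.exp(g[N]) - 1.0) * cum[i] / I2pi)
           for i in range(N + 1)]
    c11 = math.sqrt(T) * c000 * (1.0 - math.exp(g[N])) / I2pi
    return c00, c11
```

Evaluating Eq. (3.94) at \(\theta =2\pi \) gives \(c_{0,0}(2\pi ,\omega )=c_{0,0}(0,\omega )e^{-g(2\pi ,\omega )}e^{g(2\pi ,\omega )}=c_{0,0}(0,\omega )\), which the grid computation reproduces.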

Now, we determine the elements of the first upper diagonal. Consider Eq. (3.46) for \(n=1\) and \(k=1\):

$$\begin{aligned} \sqrt{T}\frac{\partial c_{0,1}(\theta ,\omega )}{\partial \theta } +\sqrt{2T}\frac{\partial c_{2,1}(\theta ,\omega )}{\partial \theta }+\sqrt{T} a(\theta ,\omega ) c_{0,1}(\theta ,\omega ) + c_{1,2}(\omega )=0. \end{aligned}$$
(3.98)

The last equation has exactly the same structure as Eq. (3.93), since \(c_{2,1}(\theta ,\omega )=0\), and \(c_{1,2}(\omega )\) is a constant independent of \(\theta \). Now, we use the fact that \(c_{0,k}(0,\omega )=0\) for \(k\ge 1\), so that the solution of Eq. (3.98) is simply \(c_{0,1}(\theta ,\omega )=c_{1,2}(\omega )\equiv 0\). Next, by considering Eq. (3.46) for \(n=2,3,\dots \) and \(k=n\), and proceeding similarly, we find that all the functions \(c_{n,n+1}(\theta ,\omega )\), i.e., the elements of the first upper diagonal of Fig. 3.11, vanish.

Our next task is to determine the elements of the second upper diagonal, which we begin by considering Eq. (3.46) for \(n=1\) and \(k=2\):

$$\begin{aligned} \sqrt{T}\frac{\partial c_{0,2}(\theta ,\omega )}{\partial \theta } +\sqrt{2T}\frac{\partial c_{2,2}(\theta ,\omega )}{\partial \theta }+\sqrt{T} a(\theta ,\omega ) c_{0,2}(\theta ,\omega ) + c_{1,3}(\omega )=0. \end{aligned}$$
(3.99)

In the above equation, \(c_{2,2}(\theta ,\omega )\) is known from Eq. (3.97). Then, from the requirement of periodicity of \(c_{0,2}(\theta ,\omega )\), and on using \(c_{0,2}(0,\omega )=0\), we arrive at the solutions

$$\begin{aligned} c_{0,2}(\theta ,\omega )= & {} \sqrt{2}\frac{\int _0^{2\pi } \mathrm{{d}}\theta ' \frac{\partial c_{2,2}(\theta ',\omega )}{\partial \theta '} e^{g(\theta ',\omega )}}{\int _0^{2\pi } \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}e^{-g(\theta ,\omega )}\int _0^\theta \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}\nonumber \\&-\sqrt{2}e^{-g(\theta ,\omega )}\int _0^\theta \mathrm{{d}}\theta ' \frac{\partial c_{2,2}(\theta ',\omega )}{\partial \theta '} e^{g(\theta ',\omega )}, \end{aligned}$$
(3.100)
$$\begin{aligned} c_{1,3}(\omega )= & {} -\sqrt{2T}\frac{\int _0^{2\pi } \mathrm{{d}}\theta ' \frac{\partial c_{2,2}(\theta ',\omega )}{\partial \theta '} e^{g(\theta ',\omega )}}{\int _0^{2\pi } \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}. \end{aligned}$$
(3.101)

Again, these functions are proportional to \(c_{0,0}(0,\omega )\). Now that we have determined \(c_{0,2}\) and \(c_{1,3}\), we obtain recursively the elements of the second upper diagonal, i.e., the functions \(c_{n,n+2}\), from Eq. (3.46) by considering \(n=2,3,\dots \) and \(k=n+1\):

$$\begin{aligned} \sqrt{nT}\frac{\partial c_{n-1,n+1}(\theta ,\omega )}{\partial \theta } +\sqrt{(n+1)T}\frac{\partial c_{n+1,n+1}(\theta ,\omega )}{\partial \theta }\nonumber \\ +\sqrt{nT} a(\theta ,\omega ) c_{n-1,n+1}(\theta ,\omega ) + nc_{n,n+2}(\theta ,\omega )=0. \end{aligned}$$
(3.102)

With the main diagonal elements already determined, we get

$$\begin{aligned} c_{n,n+2}(\theta ,\omega )= & {} -\sqrt{\frac{T}{n}}\left[ \frac{\partial c_{n-1,n+1}(\theta ,\omega )}{\partial \theta }+ a(\theta ,\omega )c_{n-1,n+1}(\theta ,\omega )\right] \nonumber \\&-\frac{\sqrt{(n+1)T}}{n}\frac{\partial c_{n+1,n+1}(\theta ,\omega )}{\partial \theta }, \end{aligned}$$
(3.103)

for \(n=2,3,\dots \). In particular, for \(n=2\), the first term within the square brackets is absent, as \(c_{1,3}(\omega )\) is independent of \(\theta \). Also, note that these functions are proportional to \(c_{0,0}(0,\omega )\).

We now show that the elements of the third upper diagonal vanish. Considering Eq. (3.46) for \(n=1\) and \(k=3\), we get

$$\begin{aligned} \sqrt{T}\frac{\partial c_{0,3}(\theta ,\omega )}{\partial \theta } +\sqrt{2T}\frac{\partial c_{2,3}(\theta ,\omega )}{\partial \theta }+\sqrt{T} a(\theta ,\omega ) c_{0,3}(\theta ,\omega ) + c_{1,4}(\omega )=0. \end{aligned}$$
(3.104)

Here, \(c_{2,3}\) has previously been found to vanish identically, so that the solution of the last equation is simply \(c_{0,3}(\theta ,\omega )=c_{1,4}(\omega )\equiv 0\). Then, considering Eq. (3.46) for \(n=2,3,\dots \) and \(k=n+2\), we conclude that all the elements of the third upper diagonal, \(c_{n,n+3}\), vanish.

By now, the procedure for determining the coefficients \(c_{n,k}\) should be clear. All the elements of the upper diagonals of odd order vanish; equivalently, in the portion of each row above the main diagonal, every second element vanishes, i.e., \(c_{n,n+1+2k}\equiv 0\) for \(n,k=0,1,2,\dots \). All the nonvanishing elements are proportional to \(c_{0,0}(0,\omega )\). The expressions for the main diagonal elements are given by Eqs. (3.94), (3.95) and (3.97). On the basis of the analysis above, we may write down the general expressions for the nonvanishing non-diagonal elements as

$$\begin{aligned} c_{0,2k}(\theta ,\omega )= & {} \sqrt{2}\frac{\int _0^{2\pi } \mathrm{{d}}\theta ' \frac{\partial c_{2,2k}(\theta ',\omega )}{\partial \theta '} e^{g(\theta ',\omega )}}{\int _0^{2\pi } \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}e^{-g(\theta ,\omega )}\int _0^\theta \mathrm{{d}}\theta ' e^{g(\theta ',\omega )} \nonumber \\&-\sqrt{2}e^{-g(\theta ,\omega )}\int _0^\theta \mathrm{{d}}\theta ' \frac{\partial c_{2,2k}(\theta ',\omega )}{\partial \theta '} e^{g(\theta ',\omega )}, \end{aligned}$$
(3.105)
$$\begin{aligned} c_{1,1+2k}(\omega )= & {} -\sqrt{2T}\frac{\int _0^{2\pi } \mathrm{{d}}\theta ' \frac{\partial c_{2,2k}(\theta ',\omega )}{\partial \theta '} e^{g(\theta ',\omega )}}{\int _0^{2\pi } \mathrm{{d}}\theta ' e^{g(\theta ',\omega )}}, \end{aligned}$$
(3.106)
$$\begin{aligned} c_{2,2+2k}(\theta ,\omega )= & {} -\sqrt{\frac{T}{2}} a(\theta ,\omega ) c_{1,1+2k}(\omega )-\frac{\sqrt{3T}}{2}\frac{\partial c_{3,1+2k}(\theta ,\omega )}{\partial \theta }, \end{aligned}$$
(3.107)
$$\begin{aligned} c_{n,n+2k}(\theta ,\omega )= & {} -\sqrt{\frac{T}{n}}\left[ \frac{\partial c_{n-1,n-1+2k}(\theta ,\omega )}{\partial \theta } +a(\theta ,\omega )c_{n-1,n-1+2k}(\theta ,\omega )\right] \nonumber \\&-\frac{\sqrt{(n+1)T}}{n}\frac{\partial c_{n+1,n-1+2k}(\theta ,\omega )}{\partial \theta } \,\,\,\,\,\, n \ge 3, \end{aligned}$$
(3.108)

with \(k=1,2,\dots \).

Appendix 6: Convergence Properties of the Expansion (3.45)

In this appendix, we discuss the convergence properties of the expansion (3.45) involved in obtaining the density \(n(\theta )\), see Eq. (3.54). To this end, let us first consider an asymptotic power series in the real variable x given by

$$\begin{aligned} A(x)=\sum _{k=0}^\infty a_kx^k. \end{aligned}$$
(3.109)

We define the partial sum

$$\begin{aligned} A_n(x) \equiv \sum _{k=0}^n a_k x^k. \end{aligned}$$
(3.110)

Then, the series being asymptotic means that, at any given \(x\ne 0\), one has \(|A_n(x)| \rightarrow \infty \) as \(n \rightarrow \infty \). In this case, one employs the so-called Borel summation method to sum the series, by defining the Borel transform of A(x) as [26]

$$\begin{aligned} {\mathscr {B}}A(t) \equiv \sum _{k=0}^\infty \frac{a_k}{k!}t^k. \end{aligned}$$
(3.111)
Fig. 3.12 Density \(n(\theta )\) in the dynamics (3.16) for a Gaussian \(g(\omega )\), and with \(m=0.25\), \(T=0.25\), \(\sigma =0.295\). Panel a refers to theoretical predictions using the Borel summation method with \(k_\mathrm{trunc}=38\), while panel b refers to estimates obtained by using direct summation with \(k_\mathrm{trunc}=22\)

If \({\mathscr {B}}A(t)\) converges for any positive t, or, if it converges for sufficiently small t to an analytic function that can be analytically continued to all \(t>0\), and if the integral

$$\begin{aligned} \int _0^\infty \mathrm{{d}}t~\exp (-t) {\mathscr {B}}A(tx) \end{aligned}$$
(3.112)

exists and equals \(A_B(x)\) (here, the subscript B stands for Borel), we say that the Borel sum of the series on the right-hand side of Eq. (3.109) is \(A_B(x)\). One may observe that if the original series converges, i.e., if \(\lim _{n \rightarrow \infty }A_n(x) = A(x) < \infty \), then one has \(A_B(x) = A(x)\). Applying the above formalism to Eq. (3.45), we get

$$\begin{aligned} b_{0B}(\theta ,\omega )= & {} \int _0^\infty \mathrm{{d}}t~\exp (-t) \sum _{k=0}^\infty \frac{c_{0,k}(\theta ,\omega )}{k!}(t\sqrt{m})^k \nonumber \\= & {} \frac{1}{\sqrt{m}}\int _0^\infty \mathrm{{d}}y~\exp (-y/\sqrt{m}) \sum _{k=0}^\infty \frac{c_{0,k}(\theta ,\omega )}{k!}y^k. \end{aligned}$$
(3.113)

The last integral has to be computed numerically: one truncates the series at a certain order \(k=k_\mathrm{trunc}\) and extends the integral over y up to a value \(y_M\) chosen such that the integrand is negligible for \(y > y_M\). However, contrary to what happens for the original series, we found that the sum appearing in the last integral converges, at least for all values of y up to \(y_M\) that are needed to compute the integral. We do not know the function to which our Borel transform converges, nor the corresponding radius of convergence; nevertheless, our numerical results show that the series is Borel summable. Figure 3.12a shows the result of computing the density

$$\begin{aligned} n(\theta )=\int _{-\infty }^\infty \mathrm{{d}}\omega ~ g(\omega )b_{0B}(\theta ,\omega ) \end{aligned}$$
(3.114)

for the same conditions as in Fig. 3.4, truncating the sum in Eq. (3.113) at \(k_\mathrm{trunc}=38\); the plot coincides with the one shown in Fig. 3.4a. On the other hand, summing the series (3.45) for \(n=0\) directly, without resorting to the Borel summation method, and then computing the density \(n(\theta )\), we obtain the result shown in Fig. 3.12b: instabilities are already evident at truncation order \(k_\mathrm{trunc}=22\), and they get worse with further increase of the truncation order. In this regard, the reader is referred to Table 3.1, where we list, as a function of m, the truncation order \(k_\mathrm{max}\) up to which one observes perfect agreement of the density \(n(\theta )\) between theory and simulations, for the same representative \((\sigma ,T)\equiv (0.295,0.25)\) as in Fig. 3.4. We conclude from the analysis presented in this appendix that the series (3.45) is asymptotic, but is effectively summable by the Borel summation method.
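The mechanism at play can be illustrated on a textbook example rather than on the series (3.45) itself: Euler's series \(\sum _k (-1)^k k!\,x^k\) is asymptotic, with partial sums that eventually blow up, yet its Borel transform sums to \(1/(1+t)\), and the Borel integral of the form (3.112) converges for all \(x>0\). A minimal Python sketch (ours, for illustration only):

```python
import math

def partial_sum(x, n):
    # Direct partial sum A_n(x) of Euler's series sum_k (-1)^k k! x^k;
    # at fixed x != 0 it diverges as n grows.
    return sum((-1) ** k * math.factorial(k) * x ** k for k in range(n + 1))

def borel_sum(x, t_max=60.0, n_steps=200000):
    # The Borel transform of Euler's series is sum_k (-1)^k t^k = 1/(1+t),
    # so the Borel sum is A_B(x) = int_0^inf e^{-t} / (1 + x t) dt,
    # evaluated here by trapezoidal quadrature.
    h = t_max / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        t = i * h
        f = math.exp(-t) / (1.0 + x * t)
        total += 0.5 * f if i in (0, n_steps) else f
    return h * total
```

At \(x=0.1\), the partial sums first settle near \(0.9156\) and then diverge, while the Borel sum returns the finite value directly; this is the same qualitative behavior as the direct versus Borel-summed evaluations compared in Fig. 3.12.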

Table 3.1 For the dynamics (3.16) with a Gaussian \(g(\omega )\), the table shows the maximum truncation order \(k_\mathrm{max}\) in the computation of the density \(n(\theta )\), as a function of m at the representative \((\sigma ,T)\equiv (0.295,0.25)\) of Fig. 3.4, for which one observes perfect agreement of the density \(n(\theta )\) between theory and simulations. The agreement worsens on increasing the truncation order beyond \(k_\mathrm{max}\)


Copyright information

© 2018 The Author(s)


Cite this chapter

Gupta, S., Campa, A., Ruffo, S. (2018). Oscillators with Second-Order Dynamics. In: Statistical Physics of Synchronization. SpringerBriefs in Complexity. Springer, Cham. https://doi.org/10.1007/978-3-319-96664-9_3
