Abstract
The tools of weakly coupled phase oscillator theory have had a profound impact on the neuroscience community, providing insight into a variety of network behaviours ranging from central pattern generation to synchronisation, as well as predicting novel network states such as chimeras. However, there are many instances where this theory is expected to break down, say in the presence of strong coupling, or must be carefully interpreted, as in the presence of stochastic forcing. There are also surprises in the dynamical complexity of the attractors that can robustly appear—for example, heteroclinic network attractors. In this review we present a set of mathematical tools that are suitable for addressing the dynamics of oscillatory neural networks, broadening from a standard phase oscillator perspective to provide a practical framework for further successful applications of mathematics to understanding network dynamics in neuroscience.
1 Introduction
Coupled oscillator theory is now a pervasive part of the theoretical neuroscientist’s toolkit for studying the dynamics of models of biological neural networks. Undoubtedly this technique originally arose in the broader scientific community through a fascination with understanding synchronisation in networks of interacting heterogeneous oscillators, and can be traced back to the work of Huygens on “an odd kind of sympathy” between coupled pendulum clocks [1]. Subsequently the theory has been developed and applied to the interaction between organ pipes [2], phase-locking phenomena in electronic circuits [3], the analysis of brain rhythms [4], chemical oscillations [5], cardiac pacemakers [6], circadian rhythms [7], flashing fireflies [8], coupled Josephson junctions [9], rhythmic applause [10], animal flocking [11], fish schooling [12], and behaviours in social networks [13]. For an overview of the application of coupled phase oscillator theory to areas as diverse as vehicle coordination, electric power networks, and clock synchronisation in decentralised networks see the recent survey article by Dörfler and Bullo [14].
Given the widespread nature of oscillations in neural systems it is no surprise that the science of oscillators has found such ready application in neuroscience [15]. This has proven especially fruitful for shedding light on the functional role that oscillations can play in feature binding [16, 17], cognition [18], memory processes [19], odour perception [20, 21], information transfer mechanisms [22], interlimb coordination [23, 24], and the generation of rhythmic motor output [25]. Neural oscillations also play an important role in many neurological disorders, such as excessive synchronisation during seizure activity in epilepsy [26, 27], tremor in patients with Parkinson’s disease [28] or disruption of cortical phase synchronisation in schizophrenia [29]. As such it has proven highly beneficial to develop methods for the control of (de)synchronisation in oscillatory networks, as exemplified by the work of Tass et al. [30, 31] for therapeutic brain stimulation techniques. From a transformative technology perspective, oscillatory activity is increasingly being used to control external devices in brain–computer interfaces, in which subjects can control an external device by changing the amplitude of a particular brain rhythm [32].
Neural oscillations can emerge in a variety of ways, including intrinsic mechanisms within individual neurons or by interactions between neurons. At the single neuron level, subthreshold oscillations can be seen in membrane voltage as well as rhythmic patterns of action potentials. Both can be modelled using the Hodgkin–Huxley conductance formalism, and analysed mathematically with dynamical systems techniques to shed light on the mechanisms that underlie various forms of rhythmic behaviour, including tonic spiking and bursting (see e.g. [33]). The high dimensionality of biophysically realistic single neuron models has also encouraged the use of reduction techniques, such as the separation of time scales recently reviewed in [34, 35], or the use of phenomenological models, such as FitzHugh–Nagumo (FHN) [36], to regain some level of mathematical tractability. This has proven especially useful when studying the response of single neurons to forcing [37], itself a precursor to understanding how networks of interacting neurons can behave. When mediated by synaptic interactions, the repetitive firing of presynaptic neurons can cause oscillatory activation of postsynaptic neurons. At the level of neural ensembles, synchronised activity of large numbers of neurons gives rise to macroscopic oscillations, which can be recorded with a microelectrode embedded within neuronal tissue as a voltage change referred to as a local field potential (LFP). These oscillations were first observed outside the brain by Hans Berger in 1924 [38] in electroencephalogram (EEG) recordings, and have given rise to the modern classification of brain rhythms into frequency bands for alpha activity (8–13 Hz) (recorded from the occipital lobe during relaxed wakefulness), delta (1–4 Hz), theta (4–8 Hz), beta (13–30 Hz) and gamma (30–70 Hz).
The latter rhythm is often associated with cognitive processing, and it is now common to link large scale neural oscillations with cognitive states, such as awareness and consciousness. For example, from a practical perspective the monitoring of brain states via EEG is used to determine depth of anaesthesia [39]. Such macroscopic signals can also arise from interactions between different brain areas, the thalamocortical loop being a classic example [40]. Neural mass models (describing the coarse-grained activity of large populations of neurons and synapses) have proven especially useful in understanding EEG rhythms [41], as well as in augmenting the dynamic causal modelling framework (driven by large scale neuroimaging data) for understanding how event-related responses result from the dynamics of coupled neural populations [42].
One very influential mathematical technique for analysing networks of neural oscillators, whether they be built from single neuron or neural mass models, has been that of weakly coupled oscillator theory, as comprehensively described by Hoppensteadt and Izhikevich [43]. In the limit of weak coupling between limit-cycle oscillators, invariant manifold theory [44] and averaging theory [45] can be used to reduce the dynamics to a set of phase equations in which the relative phase between oscillators is the relevant dynamical variable. This approach has been applied to neural behaviour ranging from that seen in small rhythmic networks [46] up to the whole brain [47]. Despite the powerful tools and widespread use afforded by this formalism, it does have a number of limitations (such as assuming the persistence of the limit cycle under coupling) and it is well to remember that there are other tools from the mathematical sciences relevant to understanding network behaviour. In this review, we set the weakly coupled oscillator formalism within a variety of other techniques ranging from symmetric bifurcation theory and the groupoid formalism through to more “physics-based” approaches for obtaining reduced models of large networks. This highlights the regimes where the standard formalism is applicable, and provides a set of complementary tools where it is not. These are especially useful when investigating systems with strong coupling, or ones for which the rate of attraction to a limit cycle is slow.
In Sect. 2 we review some of the key mathematical models of oscillators in neuroscience, ranging from single neuron to neural mass, as well as introduce the standard machinery for describing synaptic and gap-junction coupling. We then present in Sect. 3 an overview of some of the more powerful mathematical approaches to understanding the collective behaviour in coupled oscillator networks, mainly drawn from the theory of symmetric dynamics. We touch upon the master stability function approach and the groupoid formalism for handling coupled cell systems. In Sect. 4 we review some special cases where it is either possible to say something about the stability of the globally synchronous state in a general setting, or that of phase-locked states for strongly coupled networks of integrate-and-fire neurons. The challenge of the general case is laid out in Sect. 5, where we advocate the use of phase–amplitude coordinates as a starting point for either direct network analysis or network reduction. To highlight the importance of dynamics off cycle we discuss the phenomenon of shear-induced chaos. In the same section we review the reduction to the standard phase-only description of an oscillator, covering the well-known notions of isochrons and phase response curves. The construction of phase interaction functions for weakly coupled phase oscillator networks is covered in Sect. 6, together with tools for analysing phase-locked states. Moreover, we go beyond standard approaches and describe the emergence of turbulent states in continuum models with nonlocal coupling. Another example of something more complicated than a periodic attractor is that of a heteroclinic attractor, and these are the subject of Sect. 7. The subtleties of phase reduction in the presence of stochastic forcing are outlined in Sect. 8. The search for reduced descriptions of very large networks is the topic of Sect. 9, where we cover recent results for Winfree networks that provide an exact mean-field description in terms of a complex order parameter. This approach makes use of the Ott–Antonsen ansatz that has also found application to chimera states, and which we discuss in a neural context. In Sect. 10 we briefly review some examples where the mathematics of this review have been applied, and finally in Sect. 11 we discuss some of the many open challenges in the field of neural network dynamics.
We will assume the reader has familiarity with the following:

The basics of nonlinear differential equation descriptions of dynamical systems such as linear stability and phaseplane analysis.

Ideas from the qualitative theory of differential equations/dynamical systems such as asymptotic stability, attractors and limit cycles.

Generic codimension-one bifurcation theory of equilibria (saddle node, Hopf) and of periodic orbits (saddle node of limit cycles, heteroclinic, torus, flip).
There are a number of texts that cover this material very well in the context of neuroscience modelling, for example [48, 49]. We include a glossary of some abbreviations that are introduced in the text.
2 Neurons and Neural Populations as Oscillators
Nonlinear ionic currents, mediated by voltage-gated ion channels, play a key role in generating membrane potential oscillations and action potentials. There are many ordinary differential equation (ODE) models for voltage oscillations, ranging from biophysically detailed conductance-based models through to simple integrate-and-fire (IF) caricatures. This style of modelling has also proved fruitful at the population level, for tracking the averaged activity of a near-synchronous state. In all these cases bifurcation analysis is especially useful for classifying the types of oscillatory (and possibly resonant) behaviour that are possible. Here we give a brief overview of some of the key oscillator models encountered in computational neuroscience, as well as models for electrical and chemical coupling necessary to build networks.
2.1 The Hodgkin–Huxley Model and Its Planar Reduction
The work of Hodgkin and Huxley in elucidating the mechanism of action potentials in the squid giant axon is one of the major breakthroughs of dynamical modelling in physiology [50], and see [51] for a review. Their work underpins all modern electrophysiological models, exploiting the observation that cell membranes behave much like electrical circuits. The basic circuit elements are (1) the phospholipid bilayer, which is analogous to a capacitor in that it accumulates ionic charge as the electrical potential across the membrane changes; (2) the ionic permeabilities of the membrane, which are analogous to resistors in an electronic circuit; and (3) the electrochemical driving forces, which are analogous to batteries driving the ionic currents. These ionic currents are arranged in a parallel circuit. Thus the electrical behaviour of cells is based upon the transfer and storage of ions such as potassium (K^{+}) and sodium (Na^{+}).
Our goal here is to illustrate, by exploiting specific models of excitable membrane, some of the concepts and techniques which can be used to understand, predict, and interpret the excitable and oscillatory behaviours that are commonly observed in single cell electrophysiological recordings. We begin with the mathematical description of the Hodgkin–Huxley model.
The standard dynamical system for describing a neuron as a spatially isopotential cell with membrane potential V is based upon conservation of electric charge, so that \(C \,{\mathrm{d}V}/{\mathrm{d}t} = -I_{\mathrm{ion}} + I\),
where C is the cell capacitance, I the applied current and \(I_{\mathrm{ion}}\) represents the sum of individual ionic currents: \(I_{\mathrm{ion}} = g_{\mathrm{K}}(V-V_{\mathrm{K}}) + g_{\mathrm{Na}}(V-V_{\mathrm{Na}}) + g_{L}(V-V_{L})\).
In the Hodgkin–Huxley (HH) model the membrane current arises mainly through the conduction of sodium and potassium ions through voltage-dependent channels in the membrane. The contribution from other ionic currents is assumed to obey Ohm’s law (and is called the leak current). The \(g_{\mathrm{K}}\), \(g_{\mathrm{Na}}\) and \(g_{L}\) are conductances (reciprocal resistances). The great insight of Hodgkin and Huxley was to realise that \(g_{\mathrm {K}}\) depends upon four activation gates: \(g_{\mathrm{K}} = \overline{g}_{\mathrm {K}} n^{4}\), whereas \(g_{\mathrm{Na}}\) depends upon three activation gates and one inactivation gate: \(g_{\mathrm{Na}} = \overline{g}_{\mathrm{Na}} m^{3} h\). Here the gating variables all obey equations of the form \(\tau_{X}(V)\,{\mathrm{d}X}/{\mathrm{d}t} = X_{\infty}(V) - X\), \(X \in \{m,n,h\}\).
The gating variables X take values between 0 and 1 and approach the asymptotic values \(X_{\infty}(V)\) with time constants \(\tau_{X}(V)\). These six functions are obtained from fits with experimental data. It is common practice to write \(\tau_{X}(V) = 1/(\alpha_{X}(V) + \beta_{X}(V))\), \(X_{\infty}(V) =\alpha_{X}(V) \tau_{X}(V)\), where α and β have the interpretation of opening and closing channel transition rates, respectively. The details of the HH model are provided in the Appendix for completeness. A numerical bifurcation diagram of the model in response to constant current injection is shown in Fig. 1, illustrating that oscillations can emerge in a Hopf bifurcation with increasing drive.
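As a concrete numerical illustration of these definitions, the sketch below evaluates \(\tau_{X}\) and \(X_{\infty}\) from opening and closing rates. The rate functions used are the standard squid-axon fits (in the modern convention with rest near −65 mV); they are quoted here as an assumption for illustration, not taken from the text above.

```python
import math

# Standard Hodgkin-Huxley rate functions (squid axon, modern convention,
# V in mV); alpha = opening rate, beta = closing rate, both in 1/ms.
def alpha_beta(V):
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    return {"n": (an, bn), "m": (am, bm), "h": (ah, bh)}

def tau_and_inf(V):
    # tau_X = 1/(alpha_X + beta_X) and X_inf = alpha_X * tau_X, as in the text
    out = {}
    for X, (a, b) in alpha_beta(V).items():
        tau = 1.0 / (a + b)
        out[X] = (tau, a * tau)
    return out

gates = tau_and_inf(-65.0)   # evaluate at a typical resting potential
# tau_m is much smaller than tau_n and tau_h, which is what justifies the
# replacement m -> m_inf(V) in the planar reduction discussed below.
print(gates)
```

Evaluating at rest shows \(\tau_{m}\) an order of magnitude smaller than \(\tau_{n}\) and \(\tau_{h}\), the observation (i) exploited in the planar reduction.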
The mathematical forms chosen by Hodgkin and Huxley for the functions \(\tau_{X}\) and \(X_{\infty}\), \(X \in\{m,n,h\}\), are all transcendental functions (i.e. involve exponentials). Both this and the high dimensionality of the model make analysis difficult. However, considerable simplification is attained with the following observations: (i) \(\tau_{m}(V)\) is small for all V so that the variable m rapidly approaches its equilibrium value \(m_{\infty}(V)\), and (ii) the equations for h and n have similar time courses and may be slaved together. This has been put on a more formal footing by Abbott and Kepler [52] by expressing both n and h as functions of a scalar quantity \(U(t)\). They obtain a reduced planar model (of the full Hodgkin–Huxley model) in \((V,U)\) coordinates under the replacement \(m \rightarrow m_{\infty}(V)\) and \(X = X_{\infty}(U)\) for \(X \in\{ n , h \}\) with a prescribed choice of dynamics for U (such that \(\mathrm{d}^{2} V/ \mathrm{d} t^{2}\) is similar in the two models for a fixed V). The phase plane and nullclines for this system are shown in Fig. 2.
For zero external input the fixed point is stable and the neuron is said to be excitable. When a positive external current is applied the lowvoltage portion of the V nullcline moves up whilst the highvoltage part remains relatively unchanged. For sufficiently large constant external input the intersection of the two nullclines falls within the portion of the V nullcline with positive slope. In this case the fixed point is unstable and the system may support a limit cycle. If an emergent limit cycle is stable then a train of action potentials will be produced and the system is referred to as being oscillatory. Action potentials may also be induced in the absence of an external current for synaptic stimuli of sufficient strength and duration. This simple planar model captures all of the essential features of the original HH model yet is much easier to understand from a geometric perspective. Indeed the model is highly reminiscent of the famous FHN model, in which the voltage nullcline is taken to be a cubic function. Both models show the onset of repetitive firing at a nonzero frequency as observed in the HH model (when an excitable state loses stability via a subcritical Hopf bifurcation). However, unlike real cortical neurons they cannot fire at arbitrarily low frequency. This brings us to consider modifications of the original HH formalism to accommodate bifurcation mechanisms from excitable to oscillatory behaviours that can respect this experimental observation.
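The excitable and oscillatory regimes described above are easy to explore numerically with the FHN caricature. The sketch below uses the textbook parameter choice \(a=0.7\), \(b=0.8\), \(\epsilon=0.08\); these values, and the two drive levels, are our illustrative assumptions rather than quantities fixed by the text.

```python
def simulate_fhn(I, T=400.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Euler-integrate the FitzHugh-Nagumo model
       dv/dt = v - v**3/3 - w + I,   dw/dt = eps*(v + a - b*w),
    and return the voltage trace."""
    v, w = -1.2, -0.6
    vs = []
    for _ in range(int(T / dt)):
        # simultaneous update: both right-hand sides use the old (v, w)
        v, w = v + dt * (v - v**3 / 3.0 - w + I), w + dt * eps * (v + a - b * w)
        vs.append(v)
    return vs

rest = simulate_fhn(0.0)   # excitable regime: trajectory settles to a fixed point
spik = simulate_fhn(0.5)   # oscillatory regime: a stable limit cycle (tonic firing)

# peak-to-peak voltage amplitude over the final quarter of each run
ptp = lambda x: max(x[-10000:]) - min(x[-10000:])
print(ptp(rest), ptp(spik))
```

For zero drive the late-time amplitude is essentially zero (a stable rest state), while for the larger drive the voltage traces out large-amplitude relaxation oscillations.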
2.2 The Cortical Model of Wilson
Many of the properties of real cortical neurons can be captured by making the equation for the recovery variable of the FHN equations quadratic (instead of linear). We are thus led to the cortical model of Wilson [53]:
where \(0< a<1\) and \(\beta,\gamma, C>0\). Here v is like the membrane potential V, and w plays the role of a gating variable. In addition to the single fixed point of the FHN model it is possible to have another pair of fixed points, as shown in Fig. 3(left). As I increases two fixed points can annihilate at a saddle node on an invariant circle (SNIC) bifurcation at \(I=I_{c}\) [48]. In the neighbourhood of this global bifurcation the firing frequency scales like \(\sqrt{I-I_{c}}\). For large enough I there is only one fixed point on the middle branch of the cubic, as illustrated in Fig. 3(right). In this instance an oscillatory solution occurs via the same mechanism as for the FHN model.
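The square-root frequency scaling at a SNIC can be checked against the quadratic integrate-and-fire normal form \(\mathrm{d}v/\mathrm{d}t = I + v^{2}\) (here \(I_{c}=0\)), whose period approaches \(\pi/\sqrt{I}\) as the reset and threshold are pushed towards \(\mp\infty\). This normal form is the standard stand-in for dynamics near a SNIC, used here as an illustration rather than a computation with the Wilson model itself.

```python
import math

def qif_period(I, v_reset=-50.0, v_th=50.0, dt=1e-4):
    """Time for dv/dt = I + v**2 to run from v_reset up to v_th (Euler steps).
    For large |v_reset| and v_th this approaches the SNIC-limit period pi/sqrt(I)."""
    v, t = v_reset, 0.0
    while v < v_th:
        v += dt * (I + v * v)
        t += dt
    return t

T1 = qif_period(1.0)    # close to pi
T2 = qif_period(0.25)   # close to 2*pi: halving sqrt(I) doubles the period
print(T1, T2)
```

Quartering the distance to the bifurcation doubles the period, i.e. the frequency scales like \(\sqrt{I-I_{c}}\), consistent with the scaling law quoted above.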
2.3 Morris–Lecar with Homoclinic Bifurcation
A SNIC bifurcation is not the only way to achieve a low firing rate in a single neuron model. It is also possible to achieve this via a homoclinic bifurcation, as is possible in the Morris–Lecar (ML) model [54]. This was originally developed as a model for barnacle muscle fibre. Morris and Lecar introduced a set of two coupled ODEs incorporating two ionic currents: an outward, non-inactivating potassium current and an inward, non-inactivating calcium current. Assuming that the calcium currents operate on a much faster time scale than the potassium current, they formulated the following two-dimensional system: \(C \,{\mathrm{d}V}/{\mathrm{d}t} = I - g_{L}(V-V_{L}) - g_{\mathrm{K}} w (V-V_{\mathrm{K}}) - g_{\mathrm{Ca}} m_{\infty}(V) (V-V_{\mathrm{Ca}})\), \({\mathrm{d}w}/{\mathrm{d}t} = \lambda(V) (w_{\infty}(V) - w)\),
with \(m_{\infty}(V) = 0.5(1+\tanh[(V-V_{1})/V_{2}])\), \(w_{\infty}(V) = 0.5(1+ \tanh[(V-V_{3})/V_{4}])\), \(\lambda(V) = \phi\cosh[(V-V_{3})/(2V_{4})]\), where \(V_{1}, \ldots, V_{4} \) and ϕ are constants. Here w represents the fraction of K^{+} channels open, and the Ca^{2+} channels respond to V so rapidly that we assume instantaneous activation. Here \(g_{L}\) is the leakage conductance, \(g_{\mathrm{K}}\), \(g_{\mathrm{Ca}}\) are the potassium and calcium conductances, \(V_{L}\), \(V_{\mathrm{K}}\), \(V_{\mathrm{Ca}}\) are corresponding reversal potentials, \(m_{\infty}(V)\), \(w_{\infty}(V)\) are voltage-dependent gating functions and \(\lambda(V)\) is a voltage-dependent rate. Referring to Fig. 4, as I decreases the periodic orbit grows in amplitude, it comes closer to a saddle point and the period increases such that near the homoclinic bifurcation, where the orbit collides with the saddle at \(I=I_{c}\), the frequency of oscillation scales as \(1/{|\log(I-I_{c})|}\).
2.4 IntegrateandFire
Although conductance-based models like that of Hodgkin and Huxley provide a level of detail that helps us understand how the kinetics of channels (with averaged activation and inactivation variables) can underlie action-potential generation, their high dimensionality is a barrier to studies at the network level. The goal of a network-level analysis is to predict emergent computational properties in populations and recurrent networks of neurons from the properties of their component cells. Thus simpler (lower-dimensional and hopefully mathematically tractable) models are more appealing—especially if they fit single neuron data.
A one-dimensional nonlinear IF model takes the form \({\mathrm{d}v}/{\mathrm{d}t} = f(v) + I\), for some nonlinear function f,
such that \(v(t)\) is reset to \(v_{{R}}\) just after reaching the value \(v_{\mathrm{th}}\). In other words we seek a piecewise discontinuous solution \(v(t)\) of (1) such that
Firing times are defined iteratively according to
Real cortical data can be very accurately fit using \(f(v) = ( v_{L} - v + \kappa \,\mathrm{e}^{(v-v_{\kappa})/\kappa} )/\tau\),
with \(v_{{L}}=-68.5\mbox{ mV}\), \(\tau=3.3\mbox{ ms}\), \(v_{\kappa}= -61.5\mbox{ mV}\) and \(\kappa=4\mbox{ mV}\) [55]. There are many varieties of nonlinear IF model, with the quadratic one [56] being well known as a precursor for the planar Izhikevich spiking model [57], itself capable of generating a wide variety of firing patterns, including bursting and chattering as well as regular spiking. For a more thorough discussion of IF models and the challenges of analysing nonsmooth dynamical systems we refer the reader to [58].
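A minimal simulation sketch of such a nonlinear IF model is given below, assuming the exponential integrate-and-fire form \(\mathrm{d}v/\mathrm{d}t = (v_{L} - v + \kappa\,\mathrm{e}^{(v-v_{\kappa})/\kappa})/\tau + I\) with the quoted fitted parameters; the threshold, reset and drive values are illustrative assumptions of ours, not values from [55].

```python
import math

def simulate_eif(I, T=200.0, dt=0.01,
                 v_L=-68.5, tau=3.3, v_kappa=-61.5, kappa=4.0,
                 v_th=-30.0, v_R=-70.0):
    """Euler simulation of an exponential integrate-and-fire neuron
       dv/dt = (v_L - v + kappa*exp((v - v_kappa)/kappa))/tau + I
    with reset v -> v_R on reaching v_th. Returns the list of firing times (ms)."""
    v = v_L
    spikes = []
    for k in range(int(T / dt)):
        v += dt * ((v_L - v + kappa * math.exp((v - v_kappa) / kappa)) / tau + I)
        if v >= v_th:            # threshold crossing: record firing time and reset
            spikes.append(k * dt)
            v = v_R
    return spikes

quiet = simulate_eif(0.0)    # subthreshold: settles to rest, no firing
driven = simulate_eif(3.0)   # suprathreshold drive: tonic firing
print(len(quiet), len(driven))
```

The reset rule implements the piecewise discontinuous solution described above: the smooth flow is interrupted each time \(v\) reaches \(v_{\mathrm{th}}\), generating the iteratively defined firing times.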
2.5 Neuronal Coupling
At a synapse presynaptic firing results in the release of neurotransmitters that cause a change in the membrane conductance of the postsynaptic neuron. This postsynaptic current may be written \(I_{s}(t) = g_{s} s(t) (V_{s} - V(t))\) where \(V(t)\) is the voltage of the postsynaptic neuron, \(V_{s}\) is the membrane reversal potential and \(g_{s}\) is a constant conductance. The variable s corresponds to the probability that a synaptic receptor channel is in an open conducting state. This probability depends on the presence and concentration of neurotransmitter released by the presynaptic neuron. The sign of \(V_{s}\) relative to the resting potential (which without loss of generality we may set to zero) determines whether the synapse is excitatory (\(V_{s} >0\)) or inhibitory (\(V_{s} < 0\)).
The effect of some synapses can be described with a function that fits the shape of the postsynaptic response due to the arrival of an action potential at the presynaptic release site. The postsynaptic response \(s(t)\) would then be given by \(s(t) = \eta(t-T)\), \(t \geq T\) where T is the arrival time of a presynaptic action potential and \(\eta(t)\) fits the shape of a realistic postsynaptic response (with \(\eta(t)=0\) for \(t<0\)). A common (normalised) choice for \(\eta(t)\) is a difference of exponentials: \(\eta(t) = \frac{\alpha\beta}{\beta-\alpha} ( \mathrm{e}^{-\alpha t} - \mathrm{e}^{-\beta t} )\),
or the alpha function \(\alpha^{2} t \mathrm {e}^{-\alpha t}\) obtained from (2) in the limit \(\beta\rightarrow\alpha\). The postsynaptic response arising from a train of action potentials is given by
where \(T_{m}\) denotes the arrival time of the mth action potential at a synapse.
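For concreteness, the normalised kernel, its alpha function limit and the summed response to a spike train can be sketched as follows; the rate constants and firing times are arbitrary illustrative choices.

```python
import math

def eta_diff_exp(t, alpha, beta):
    """Normalised difference of exponentials (unit area), zero for t < 0."""
    if t < 0.0:
        return 0.0
    return alpha * beta / (beta - alpha) * (math.exp(-alpha * t) - math.exp(-beta * t))

def eta_alpha(t, alpha):
    """Alpha function: the beta -> alpha limit of eta_diff_exp."""
    return 0.0 if t < 0.0 else alpha**2 * t * math.exp(-alpha * t)

def synaptic_drive(t, spike_times, alpha, beta):
    """Summed postsynaptic response s(t) = sum_m eta(t - T_m)."""
    return sum(eta_diff_exp(t - T, alpha, beta) for T in spike_times if T <= t)

# the kernel has unit area (crude Riemann sum over [0, 40])
dt = 1e-3
area = sum(eta_diff_exp(k * dt, 1.0, 2.0) for k in range(40000)) * dt

# as beta -> alpha the difference of exponentials approaches the alpha function
err = max(abs(eta_diff_exp(0.1 * t, 1.0, 1.001) - eta_alpha(0.1 * t, 1.0))
          for t in range(200))

s = synaptic_drive(5.0, [1.0, 2.0, 4.0], 1.0, 2.0)
print(area, err, s)
```

The near-zero discrepancy `err` is the numerical counterpart of the \(\beta\rightarrow\alpha\) limit mentioned above.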
Interestingly even purely inhibitory synaptic interactions between non-oscillatory neurons can create oscillations at the network level, and can give rise to central pattern generators of “half-centre” type [59]. To see this we need only consider a pair of (reduced) Hodgkin–Huxley neurons with mutual reciprocal inhibition mediated by an alpha function synapse with a negative reversal potential. The phenomenon of anode break excitation (whereby a neuron fires an action potential in response to termination of a hyperpolarising current) can underlie a natural antiphase rhythm, and is best understood in terms of the phase plane shown in Fig. 2. In this case inhibition will effectively move the voltage nullcline down, and the system will equilibrate to a new hyperpolarised state. Upon release of inhibition the fixed point will move to a higher value, though to reach this new state the trajectory must jump to the right hand branch (since now \({\mathrm {d}V}/{\mathrm {d}t}>0\)). An example is shown in Fig. 5.
Gap junctions differ from chemical synapses in that they allow for direct communication between cells. They are typically formed from the juxtaposition of two hemichannels (connexin proteins) and allow the free movement of ions or molecules across the intercellular space separating the plasma membrane of one cell from another. As well as being found in the neocortex, they occur in many other brain regions, including the hippocampus, inferior olivary nucleus in the brain stem, the spinal cord, and the thalamus [60]. Without the need for receptors to recognise chemical messengers, gap junctions are much faster than chemical synapses at relaying signals. The synaptic delay for a chemical synapse is typically in the range 1–100 ms, while the synaptic delay for an electrical synapse may be only about 0.2 ms.
It is common to view the gap junction as nothing more than a channel that conducts current according to a simple ohmic model. For two neurons with voltages \(v_{i}\) and \(v_{j}\) the current flowing into cell i from cell j is given by \(I_{\mathrm{gap}}(v_{i},v_{j})=g (v_{j}-v_{i})\), where g is the constant strength of the gap-junction conductance. They are believed to promote synchrony between oscillators (e.g. see [61]), though the story is more subtle than this as we shall discuss in Sect. 4.
2.6 Neural Mass Models
As well as supporting oscillations at the single neuron level, brain tissue can also generate oscillations at the tissue level. Rather than model this using networks built from single neuron models, it has proven especially useful to develop low-dimensional models to mimic the collection of thousands of near identical interconnected neurons with a preference to operate in synchrony. These are often referred to as neural mass models, with state variables that track coarse-grained measures of the average membrane potential, firing rates or synaptic activity. They have proven especially useful in the description of human EEG power spectra [62], as well as resting brain state activity [63] and mesoscopic brain oscillations [64].
In many neural population models, such as the well-known Wilson–Cowan model [65], it is assumed that the interactions are mediated by firing rates rather than action potentials (spikes) per se. To see how this might arise we rewrite (3) in the equivalent form using a sum of Dirac δ-functions, \(Q s = \sum_{m} \delta(t-T_{m})\).
Identifying the right hand side of (4) as a train of presynaptic spikes motivates a phenomenological rate model of the form \(Q s = f\),
with f identified as a firing rate and Q identified as the differential operator \((1+\alpha^{-1} \mathrm{d}/\mathrm {d}t)(1+\beta^{-1} \mathrm{d}/\mathrm{d} t)\). At the network level it is then common practice to close this system of equations by specifying f to be a function of presynaptic activity. A classic example is the Jansen–Rit model [66], which describes a network of interacting pyramidal neurons (P), inhibitory interneurons (I) and excitatory interneurons (E), and has been used to model both normal and epileptic patterns of cortical activity [67]. This can be written in the form
which is a realisation of the structure suggested by (5), with the choice
and \(Q_{a} = (1+\beta_{a}^{-1}\mathrm{d}/\mathrm{d}t)^{2} \beta_{a}/A_{a}\) for \(a \in \{E,I\}\). Here A is an external input. When this is a constant we obtain the bifurcation diagram shown in Fig. 6. Oscillations emerge via Hopf bifurcations and it is possible for a pair of stable periodic orbits to coexist. One of these has a frequency in the alpha band and the other is characterised by a lower frequency and higher amplitude. Recently a network of such modules, operating in the alpha range and with additive noise, has been used to investigate mechanisms of cross-frequency coupling between brain areas [68]. Neural mass models have also previously been used to model brain resonance phenomena [69], for modelling of epileptic seizures [70–72], and they are very popular in the neuroimaging community for model-driven EEG/fMRI fusion [73].
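In practice the operator Q is unpacked into a pair of first-order ODEs for numerical integration. As a consistency check (a sketch with arbitrary α and β), the impulse response of \(Q^{-1}\), obtained by solving \(Qs=\delta(t)\) (equivalently, integrating the homogeneous equation with initial data \(s(0)=0\), \(s'(0)=\alpha\beta\)), recovers the difference-of-exponentials synaptic kernel.

```python
import math

def impulse_response(alpha, beta, T=10.0, dt=1e-4):
    """Solve Q s = 0 for t > 0 with impulse initial data s(0)=0, s'(0)=alpha*beta,
    where Q = (1 + alpha^{-1} d/dt)(1 + beta^{-1} d/dt). Expanding Q and writing
    p = ds/dt gives the first-order system
        s' = p,   p' = -alpha*beta*s - (alpha + beta)*p.
    Returns a list of (t, s(t)) samples from an Euler integration."""
    s, p = 0.0, alpha * beta
    out = []
    for k in range(int(T / dt)):
        out.append((k * dt, s))
        s, p = s + dt * p, p + dt * (-alpha * beta * s - (alpha + beta) * p)
    return out

alpha, beta = 1.0, 2.0
num = impulse_response(alpha, beta)
# compare against the closed-form difference-of-exponentials kernel eta(t)
err = max(abs(s - alpha * beta / (beta - alpha)
              * (math.exp(-alpha * t) - math.exp(-beta * t))) for t, s in num)
print(err)
```

The small discrepancy confirms that the second-order filter defined by Q is exactly the synaptic machinery of Sect. 2.5 in operator form.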
Now that we have introduced some oscillator models for neurons and neural populations it is appropriate to consider the set of tools for analysing their behaviour at the network level.
3 Dynamical Systems Approaches to Collective Behaviour
We give a brief overview of some dynamical systems approaches, concepts and techniques that can be used to understand collective behaviour that spontaneously appears in coupled dynamical system models used for neuroscience modelling. We do not give a complete review of this area but try to highlight some of the approaches and how they interact; some examples of applications of these to neural systems are given in later sections.
In the artificial neural network literature, a distinction is made between recurrent and feedforward networks; see for example [74]. A feedforward network is one that is coupled but contains no feedback loops—i.e. there are no directed loops, while a recurrent network does contain feedback loops. We note that the methodologies discussed in this section (including constraints from symmetries and groupoid structures) may be applied to networks regardless of whether they are feedforward or recurrent. In this review we will later mostly discuss examples that are recurrent, though there are many interesting and relevant questions for feedforward networks as these often appear as models for “input–output” processes in neural systems.
3.1 Synchrony and Asynchrony
A set of N coupled nonlinear systems represented by ODEs can be written \({\mathrm{d}x_{i}}/{\mathrm{d}t} = F_{i}(x_{1},\ldots,x_{N})\), \(i=1,\ldots,N\),
where \(x_{i}\in \mathbb {R}^{d}\) represent the state space of the individual systems whose evolution is affected both by the current state of the system and by the states of those coupled to that system. In the mathematics literature, the \(x_{i}\) are often called “cells” though we note that the \(x_{i}\) may include degrees of freedom in the coupling as well as variables such as membrane potential that reflect the state of the “biological cell”. Note there is potential for notational confusion here: to clarify this we write
One of the most important observations concerning the collective dynamics of coupled nonlinear systems relates to whether the collection behaves as one or not—whether there is an attracting synchronous state, or whether more complex spatiotemporal patterns such as generalised synchrony (also called clustering) appear. There is a very large literature, even when restricted to applications of synchrony, and we cannot hope to do the whole area justice here. We refer in particular to [75, 76]. Various applications of synchrony of neural models are discussed, for example, in [77–85] while there is a large literature (e.g. [17]) discussing the role of synchrony in neural function. Other work looks for example at synchronisation of groups of networks [86] and indeed synchrony can be measured experimentally [87] in groups of neurons using dynamic patch clamping.
We discuss some of the types of behaviour that can emerge in the collective dynamics and the response of partially synchronised states to external forcing. In many cases a system of N coupled dynamical systems can be written in the form \({\mathrm{d}x_{i}}/{\mathrm{d}t} = f_{i}(x_{i}) + \epsilon g_{i}(x_{1},\ldots,x_{N})\),
where each system is parametrised by \(x_{i}\in \mathbb {R}^{d}\), with intrinsic dynamics determined by \(f_{i}\); \(\epsilon=0\) corresponds to decoupling the systems and the functions \(g_{i}\) represent drive from other systems on the ith system. Many approaches start with the less general case of additive interactions \({\mathrm{d}x_{i}}/{\mathrm{d}t} = f_{i}(x_{i}) + \epsilon \sum_{j} w_{ij} G_{ij}(x_{i},x_{j})\),
which can be justified for systems where the collective interaction between systems can be broken into “pairwise” interactions \(G_{ij}\) that are summed according to some linear weights \(w_{ij}\) (some of which may be zero) that represent the strengths of the individual couplings, with ϵ the overall coupling strength of the network. Note that there is clearly no unique way to write the system in this form; more specifically, one can without loss of generality choose \(\epsilon=1\), \(w_{ij}=1\) by suitable choice of \(G_{ij}\). On the other hand it can be useful to be able to e.g. modulate the strength of the coupling across the whole network independently of changing individual coupling strengths. Nonetheless, the special, fully symmetric, case \(G_{ij}=G\) is of particular interest as an example where there is full connectivity.
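A concrete instance of this additive pairwise form is the Kuramoto model, with \(x_{i}=\theta_{i}\in S^{1}\), \(G_{ij}(\theta_{i},\theta_{j})=\sin(\theta_{j}-\theta_{i})\) and \(w_{ij}=1/N\). The sketch below (with an arbitrary illustrative frequency spread) shows the hallmark collective behaviour: for strong coupling the phase-coherence order parameter grows close to one.

```python
import math, cmath

def kuramoto(eps, N=50, T=50.0, dt=0.01):
    """Euler simulation of dtheta_i/dt = omega_i + (eps/N) sum_j sin(theta_j - theta_i).
    Returns the final Kuramoto order parameter R = |<exp(i*theta)>|."""
    omega = [-0.5 + i / (N - 1.0) for i in range(N)]    # frequencies spread over [-0.5, 0.5]
    theta = [2.0 * math.pi * i / N for i in range(N)]   # evenly spread initial phases
    for _ in range(int(T / dt)):
        # mean-field rewriting of the pairwise sum:
        #   sum_j sin(theta_j - theta_i) = N * R * sin(psi - theta_i),
        # where R * exp(i*psi) is the complex order parameter
        z = sum(cmath.exp(1j * th) for th in theta) / N
        R, psi = abs(z), cmath.phase(z)
        theta = [th + dt * (w + eps * R * math.sin(psi - th))
                 for th, w in zip(theta, omega)]
    z = sum(cmath.exp(1j * th) for th in theta) / N
    return abs(z)

r_strong = kuramoto(5.0)   # strong coupling: near-synchrony, R close to 1
r_free = kuramoto(0.0)     # uncoupled: phases disperse, R stays small
print(r_strong, r_free)
```

The mean-field rewriting in the comment is exactly the step that makes large-N reductions of this pairwise-coupled form possible, a theme taken up for Winfree networks later in the review.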
3.2 Clusters, Exact and Generalised Synchrony
If one has a notion of synchrony between the systems of (7), it is possible to discuss certain generalised forms of synchrony, including clustering according to mutual synchrony. Caution needs to be exercised whenever discussing synchrony—there are many distinct notions of synchrony that may be appropriate in different contexts and, in particular, synchrony is typically a property of a particular solution at a particular point in time rather than a property of the system as a whole.
An important case of synchrony is exact synchrony: we say \(x_{i}(t)\) and \(x_{j}(t)\) are exactly synchronised if \(x_{i}(t)=x_{j}(t)\) for all t. Generalised synchrony is, as the name suggests, much more general and corresponds to there simply being a functional relationship of the form \(x_{i}(t)=F(x_{j}(t))\). Another related notion is that of clustering, where different groups of oscillators are exactly synchronised but there is no exact synchrony between the groups. For oscillators, phases can be used to define additional notions such as phase and frequency synchrony: see Sect. 6.1.
Although we focus mostly on individual units with simple (periodic) dynamics, if the units have more complex dynamics (such as “chaotic oscillators”) one can understand synchrony of the cells by analysing the linearised differences between oscillators, and there is a sizeable literature on this; see the review [88], or [89] for clusters in a system of globally coupled bistable oscillators. In the case of two linearly (diffusively) coupled identical chaotic oscillators
$$\frac{\mathrm{d}x_{1}}{\mathrm{d}t}=f(x_{1})+\epsilon(x_{2}-x_{1}),\qquad \frac{\mathrm{d}x_{2}}{\mathrm{d}t}=f(x_{2})+\epsilon(x_{1}-x_{2}),$$
where \((x_{1},x_{2})\in \mathbb {R}^{2d}\), if the individual oscillator \({\mathrm {d}x}/{\mathrm {d}t}=f(x)\) has a chaotic attractor \(A\subset \mathbb {R}^{d}\) then the coupled system will have an attracting exactly synchronised attractor \(\tilde{A}=\{(x,x) : x\in A\}\) only if the coupling ϵ is sufficiently large in relation to the maximal Lyapunov exponent of the synchronous state [88].
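This threshold behaviour can be observed numerically. The following sketch couples two identical Rössler oscillators diffusively (an illustrative choice of chaotic system, whose maximal Lyapunov exponent is roughly 0.07) and compares the asymptotic synchronisation error for strong versus zero coupling; the forward-Euler scheme and all parameter values are assumptions made for illustration only.

```python
# Numerical sketch: two diffusively coupled identical Rossler oscillators,
#   dx_1/dt = f(x_1) + eps*(x_2 - x_1),  dx_2/dt = f(x_2) + eps*(x_1 - x_2).
# The forward-Euler scheme and all parameter values are illustrative.

def f(v):  # Rossler vector field with classic parameters (0.2, 0.2, 5.7)
    x, y, z = v
    return (-y - z, x + 0.2 * y, 0.2 + z * (x - 5.7))

def max_sync_error(eps, dt=0.005, steps=40000, tail=5000):
    """Largest |x_1 - x_2| component over the final `tail` Euler steps."""
    a, b = [1.0, 1.0, 1.0], [1.5, 0.5, 1.0]
    err = 0.0
    for n in range(steps):
        fa, fb = f(a), f(b)
        na = [a[i] + dt * (fa[i] + eps * (b[i] - a[i])) for i in range(3)]
        nb = [b[i] + dt * (fb[i] + eps * (a[i] - b[i])) for i in range(3)]
        a, b = na, nb
        if n >= steps - tail:
            err = max(err, max(abs(a[i] - b[i]) for i in range(3)))
    return err

# Strong coupling synchronises the pair; uncoupled copies stay incoherent.
print(max_sync_error(1.0) < 1e-6, max_sync_error(0.0) > 0.5)
```

Here the transverse contraction rate for strong coupling (roughly \(2\epsilon\) for this diffusive scheme) comfortably exceeds the maximal Lyapunov exponent, so the synchronised state attracts.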
3.3 Networks, Motifs and Coupled Cell Dynamics
We focus now on the dynamics of pairwise coupled networks such as (8), as this form is assumed in most cases. Under the additional assumption that the coupling between the oscillators is of the same type and either present or absent, one can consider uniformly weighted coupling of the form
$$\frac{\mathrm{d}x_{i}}{\mathrm{d}t}=f_{i}(x_{i})+\epsilon\sum_{j=1}^{N}A_{ij}\,g(x_{i},x_{j}),$$
where g is fixed and \(A_{ij}\) is an adjacency matrix of the graph of interactions, i.e. \(A_{ij}=1\) if there is a link from j to i and 0 otherwise, or, more generally,
$$\frac{\mathrm{d}x_{i}}{\mathrm{d}t}=f_{i}(x_{i})+\epsilon\sum_{j=1}^{N}g_{ij}A_{ij}\,g(x_{i},x_{j}),$$
where \(g_{ij}>0\) represents the strength of coupling and \(A_{ij}\) the adjacency matrix. It is clear that the coupling structure as represented in \(A_{ij}\) will influence the possible dynamics on the network, and to make further progress it is useful to restrict to particular network structures. Some important literature on network structures is reviewed for example in [90], while [75] reviews work on synchrony in complex networks up to the time of its publication; recent work includes for example [91]. For a recent review of the application of complex networks to neurological disorders in the brain, see [92].
It is interesting to try to understand the effect of network structure on synchrony, so we briefly outline some basic graph-theoretic measures of network structure. The in-degree of node i is the number of incoming connections (i.e. \(d_{\mathrm{in}}(i)=\sum_{j} A_{ij}\)), while the out-degree is the number of outgoing connections (i.e. \(d_{\mathrm{out}}(i)=\sum_{j} A_{ji}\)), and the distribution of these degrees is often used to characterise a large graph. A scale-free network is a large network where the distribution of in- (or out-) degrees scales as a power of the degree. This can be contrasted with highly structured homogeneous networks (for example on a lattice) where the degree may be the same at each node. Other properties commonly examined include the clustering properties and path lengths within the graph. There are also various measures of centrality that help one to determine the most important nodes in a graph—for example, the betweenness centrality of a node is the probability that it lies on the shortest path between two uniformly randomly chosen nodes [90]. As expected, the most central nodes are typically the most important if one wishes to achieve synchrony in a network.
Other basic topological properties of networks that are relevant to their dynamics include, for example, the following, most of which are mentioned in [75, 90]: The network is undirected if \(A_{ij}=A_{ji}\) for all i, j, otherwise it is directed. We say nodes j and i in the network \(A_{ij}\) are path-connected if for some n there is a path from j to i, i.e. \((A^{n})_{ij}\neq0\) for some n. The network is strongly connected if each pair i, j is path-connected in both directions, while it is weakly connected if the network obtained by replacing \(A_{ij}\) with \(\max (A_{ij},A_{ji})\) (i.e. making the network undirected) is strongly connected. In the terminology of artificial neural networks, a strongly connected network is recurrent, while one that is not strongly connected must have some feedforward connections between groups of nodes. There is a potential source of confusion in that strong and weak connectivity are properties of a directed network, while strong and weak coupling are properties of the coupling strengths for a given network.
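These graph-theoretic notions are straightforward to compute directly from an adjacency matrix. The sketch below (illustrative, using the text's convention that \(A_{ij}=1\) denotes a link from j to i) computes in- and out-degrees and tests strong connectivity by reachability.

```python
# Sketch: basic structural measures for a directed graph given its
# adjacency matrix A, with A[i][j] = 1 for a link from j to i (as in the text).

def in_degree(A, i):       # number of incoming connections of node i
    return sum(A[i])

def out_degree(A, i):      # number of outgoing connections of node i
    return sum(row[i] for row in A)

def reachable(A, j, i):    # is there a directed path from j to i?
    N = len(A)
    seen, stack = {j}, [j]
    while stack:
        u = stack.pop()
        for v in range(N):
            if A[v][u] and v not in seen:   # link u -> v
                seen.add(v)
                stack.append(v)
    return i in seen

def strongly_connected(A):
    N = len(A)
    return all(reachable(A, j, i) for i in range(N) for j in range(N))

# A directed 3-cycle is strongly connected; deleting one link leaves a
# feedforward chain that is only weakly connected.
ring = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
chain = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
print(strongly_connected(ring), strongly_connected(chain))  # -> True False
```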
The diameter of a network is the maximum, over all pairs of nodes, of the length of the shortest path between them. Other properties of the adjacency matrix are discussed, for example, in [93], where spectral properties of graph Laplacians are linked to the problem of determining the stability of synchronised states. Other work we mention is that of Pecora et al. [94, 95] on synchronisation in coupled oscillator arrays (and see Sect. 4.1), while [96] explores the recurrent appearance of synchrony in networks of pulse-coupled oscillators (and see Sect. 4.2).
Finally, we mention network motifs—these are subgraphs that are “more prevalent” than others within some class of graphs. More precisely, given a network one can look at the frequency with which a small subgraph appears relative to some standard class of graphs (for example Erdős–Rényi random graphs); if a certain subgraph appears more often than expected, this characterises an important property of the graph [97]. Such analysis has been used in systems biology (for transcription or protein interaction networks) and has been applied to study the structure of neural systems (see for example [98, 99]) and its implications for the dynamics. Motifs have also been used to organise the analysis of the dynamics of small assemblies of coupled cells; see for example [100, 101].
3.4 Weak and Strong Coupling
Continuing with systems of the form (7) or (8), if the coupling parameter ϵ is, in some sense, small, we refer to the system as “weakly coupled”. Mathematically, the weak-coupling approximation is very helpful because it allows one to use various types of perturbation theory to investigate the dynamics [43]. For coupling of limit-cycle oscillators it allows one to greatly reduce the dimension of the phase space. Nonetheless, many dynamical effects (e.g. “oscillator death”, where the oscillations in one or more oscillators are completely suppressed by the action of the network [102]) cannot occur in the weak-coupling limit, and, moreover, real biological systems often have “strong coupling”. We will return to this topic to discuss oscillator behaviour in Sect. 4.3. One can sometimes use additional structure, such as weak dissipation and weak coupling of the oscillators, to perform a semi-analytic reduction to phase oscillators; see for example [103, 104].
3.5 Synchrony, Dynamics and Time Delay
An area of intense interest is the role of time delay in the collective dynamics of coupled systems. In the neural context it is natural to include propagation delay between neurons explicitly, for example in models such as
$$\frac{\mathrm{d}x_{i}}{\mathrm{d}t}=f_{i}\bigl(x_{i}(t)\bigr)+\epsilon \sum_{j=1}^{N} w_{ij}G_{ij}\bigl(x_{i}(t),x_{j}(t-\tau)\bigr),$$
where the delay time \(\tau>0\) represents the time of propagation of the signal from one neuron to another. This presents a number of serious mathematical challenges, not least because delayed dynamical systems are infinite-dimensional: one must specify the initial condition over a time interval \(t\in[t_{0}-\tau,t_{0}]\) in order to have any chance of uniquely defining the future dynamics; this means that for a state to be linearly stable, an infinite number of eigenvalues need to have real part less than zero.
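The role of the history segment can be seen in even the simplest delay equation. The sketch below integrates \(\mathrm{d}x/\mathrm{d}t=-x(t-\tau)\) by forward Euler, holding the past τ time units in a buffer; for this scalar equation the origin is linearly stable for \(\tau<\pi/2\) and oscillatorily unstable beyond, so the two delays chosen give decay and growth respectively. The scheme, horizon and initial history are illustrative choices.

```python
# Sketch: forward-Euler integration of dx/dt = -x(t - tau). The state is the
# whole history segment of length tau, held here in a bounded deque. For this
# equation the origin is stable for tau < pi/2 and unstable beyond; the
# time step, horizon and initial history below are illustrative.
from collections import deque

def peak_amplitude(tau, t_end=60.0, dt=0.01):
    lag = int(round(tau / dt))
    history = deque([1.0] * (lag + 1), maxlen=lag + 1)  # constant initial history
    x, peak, n = 1.0, 0.0, int(t_end / dt)
    for k in range(n):
        x = x + dt * (-history[0])      # history[0] is x(t - tau)
        history.append(x)
        if k >= n - 1000:               # record |x| over the final 10 time units
            peak = max(peak, abs(x))
    return peak

print(peak_amplitude(0.5), peak_amplitude(3.0))  # decay vs. growing oscillation
```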
Nonetheless, much can be learned about the stability, control and bifurcation of dynamically synchronous states in the presence of delay; see for example [84, 105–108], and the volume [109], which includes a number of contributions by authors working in this area. There are also well-developed numerical tools such as DDE-BIFTOOL [110, 111] that allow continuation, stability and bifurcation analysis of coupled systems with delays. For an application of these techniques to the study of a Wilson–Cowan neural population model with two delays we refer the reader to [112].
3.6 A Short Introduction to Symmetric Dynamics
Although no system is ever truly symmetric, in practice many models have a high degree of symmetry.^{Footnote 1} Indeed, many real-world networks that have grown (e.g. giving rise to tree-like structures) are expected to be well approximated by models that have large symmetry groups [113].
Symmetric (more precisely, equivariant) dynamics provides a number of powerful mathematical tools that one can use to understand emergent properties of systems of the form
$$\frac{\mathrm{d}x}{\mathrm{d}t}=f(x) \tag{9}$$
with \(x\in \mathbb {R}^{N}\). We say (9) is equivariant under the action of a group Γ if and only if \(f(gx)=gf(x)\) for all \(g\in \varGamma\) and \(x\in \mathbb {R}^{N}\). There is a well-developed theory of dynamics with symmetry; in particular see [114–116]. These provide methods that help in a number of ways:

Description: one can identify symmetries of networks and dynamic states to help classify and differentiate between them.

Bifurcation: there is a well-developed theory of bifurcation with symmetry to help understand the emergence of dynamically interesting (symmetry-broken) states from higher-symmetry states.

Stability: bifurcation with symmetry often gives predictions about possible bifurcation scenarios that include information about stability.

Generic dynamics: symmetries and invariant subspaces can provide a powerful structure with which one can understand more complex attractors such as heteroclinic cycles.

Design: one can use symmetries to systematically build models and test hypotheses.
The types of symmetries that are often most relevant for mathematical modelling of finite networks of neurons are the permutation groups, i.e. the symmetric groups and their subgroups. Nonetheless, continuum models of neural systems may have continuous symmetries that influence the dynamics and can be used as a tool to understand the dynamics; see for example [117].
3.7 Permutation Symmetries and Oscillator Networks
We review some aspects of equivariant dynamics that have proven useful in coupled systems relevant to neural dynamics—see for example [118, 119]. In doing so we mostly discuss dynamics that respects some symmetry group of permutations of the systems. The full permutation symmetry group (or simply, the symmetric group) on N objects, \(S_{N}\), is defined to be the set of all possible permutations of N objects. Formally, it is the set of permutations \(\sigma:\{1,\ldots,N\}\rightarrow\{1,\ldots,N\}\) (invertible maps of this set). To determine the effects of the symmetry, one must know not only the group but also its action on phase space. If this action is linear then it is a representation of the group. The representation of the symmetry group is critical to the structure of the stability, bifurcations and generic dynamics that are equivariant with the symmetry.
For example, if each system is characterised by a single real variable, one can view the action of the permutations on \(\mathbb {R}^{N}\) as a permutation matrix
$$(M_{\sigma})_{ij}=\begin{cases} 1 & \text{if } i=\sigma(j),\\ 0 & \text{otherwise}, \end{cases}$$
for each \(\sigma\in\varGamma\); note that \(M_{\sigma}M_{\rho }=M_{\sigma\rho }\) for any \(\sigma,\rho\in\varGamma\). Table 1 lists some commonly considered examples of symmetry groups used in coupled oscillator network models.
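The matrix representation and the composition rule \(M_{\sigma}M_{\rho}=M_{\sigma\rho}\) can be checked directly; the following minimal sketch uses 0-based permutations written as lists.

```python
# Sketch: permutation matrices (M_sigma)_{ij} = 1 if i = sigma(j), 0 otherwise,
# and a check of the representation property M_sigma M_rho = M_{sigma o rho}.
# Permutations are written 0-based as lists, sigma[j] = sigma(j).

def perm_matrix(sigma):
    N = len(sigma)
    return [[1 if i == sigma[j] else 0 for j in range(N)] for i in range(N)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def compose(sigma, rho):  # (sigma o rho)(j) = sigma(rho(j))
    return [sigma[rho[j]] for j in range(len(rho))]

sigma, rho = [1, 2, 0], [1, 0, 2]   # a 3-cycle and a transposition
assert matmul(perm_matrix(sigma), perm_matrix(rho)) == perm_matrix(compose(sigma, rho))
print(perm_matrix(rho))   # -> [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
```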
More generally, for (7) equivariance under the action of Γ means that for all \(\sigma\in\varGamma\), \(x\in \mathbb {R}^{Nd}\) and \(i=1,\ldots,N\) we have
$$f_{\sigma(i)}(x_{i})+\epsilon g_{\sigma(i)}(\sigma x)=f_{i}(x_{i})+\epsilon g_{i}(x_{1},\ldots,x_{N}),\quad\text{where } (\sigma x)_{j}:=x_{\sigma^{-1}(j)}.$$
A simple consequence of this is: if Γ acts transitively on \(\{1,\ldots,N\}\) (i.e. if for any i and j there is a \(\sigma\in \varGamma\) such that \(\sigma(j)=i\)) then all oscillators are identical, i.e. \(f_{i}(x_{i})=F(x_{i})\) for some function F.
The presence of symmetries means that solutions can be grouped together into families—given any x, the set \(\varGamma x:= \{ gx : g\in\varGamma\}\) is the group orbit of x, and all points on this group orbit behave dynamically in the same way.
3.8 Invariant Subspaces, Solutions and Symmetries
Given a point \(x\in \mathbb {R}^{N}\) (for simplicity we consider the case \(d=1\) below) we define the isotropy subgroup (or simply the symmetry) of x under the action of Γ on \(\mathbb {R}^{N}\) to be
$$\varSigma_{x}:=\{\sigma\in\varGamma : \sigma x=x\}.$$
This is a subgroup of Γ, and the set of these subgroups forms a lattice (the isotropy lattice) under inclusion. Isotropy subgroups are dynamically important in that for any trajectory \(x(t)\) we have \(\varSigma_{x(0)}=\varSigma_{x(t)}\) for all t. A converse problem is to characterise the set of all points with a certain symmetry. If H is a subgroup (or, more generally, a subset) of Γ then the fixed-point space of H is defined to be
$$\operatorname {Fix}(H):=\bigl\{x\in \mathbb {R}^{N} : \sigma x=x \text{ for all } \sigma\in H\bigr\}.$$
Because all \(x\in \operatorname {Fix}(H)\) have symmetry H, these subspaces are dynamically invariant; see Fig. 7, which illustrates this principle. Typical points \(x\in \operatorname {Fix}(H)\) have \(\varSigma_{x}=H\); however, some points may have more symmetry. More precisely, if \(H\subset K\) are isotropy subgroups then \(\operatorname {Fix}(H)\supset \operatorname {Fix}(K)\), and the partial ordering of the isotropy subgroups corresponds to a partial ordering of the fixed-point subspaces.
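A convenient computational handle on \(\operatorname{Fix}(H)\) for a permutation action is the averaging projector \(P=\frac{1}{|H|}\sum_{g\in H}M_{g}\): its image is \(\operatorname{Fix}(H)\), and since the trace of a projector equals its rank, \(\operatorname{trace}(P)=\dim\operatorname{Fix}(H)\), which for a permutation action is the number of orbits of H. A sketch using exact rational arithmetic:

```python
# Sketch: dim Fix(H) for a permutation action, via the averaging projector
# P = (1/|H|) sum_{g in H} M_g. trace(P) = rank(P) = dim Fix(H), and
# trace(M_g) is just the number of points fixed by g.
from fractions import Fraction

def fix_dimension(H, N):
    """H: list of permutations (0-based lists) forming a subgroup acting on R^N."""
    trace = Fraction(0)
    for g in H:
        trace += sum(1 for j in range(N) if g[j] == j)  # trace of M_g
    return trace / len(H)

# Z_4 acting on R^4 by cyclic shifts: Fix is the diagonal, dimension 1.
Z4 = [[0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2]]
print(fix_dimension(Z4, 4))   # -> 1
```

For the subgroup generated by the double transposition \((1\,2)(3\,4)\) the orbits are \(\{1,2\}\) and \(\{3,4\}\), giving a two-dimensional fixed-point subspace of cluster states.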
One can identify isotropy subgroups that are conjugate in the group: we say \(H_{1}\) and \(H_{2}\) are conjugate subgroups of Γ if there is a \(g\in\varGamma\) such that \(gH_{1}=H_{2}g\). If this is the case, note that
$$\operatorname {Fix}(H_{2})=g\operatorname {Fix}(H_{1}),$$
meaning that the fixed-point spaces of conjugate subgroups (and the dynamics on them) are in some sense equivalent.
Identifying symmetries up to conjugacy allows a considerable reduction of the number of cases one needs to consider; note that conjugate subgroups must have fixed-point subspaces of the same dimension, on which essentially the same dynamics will occur.
Fixed-point subspaces are often used (implicitly or explicitly) to reduce the dimension of the system and thus make it more tractable. As an example, to determine the existence of an exactly synchronised solution one need only suppose \(x_{i}(t)=x(t)\) for all i and determine whether there is such a solution \(x(t)\) of the system (7).
For periodic orbits, the relation between the symmetries of points on the orbit and the symmetries of the orbit as an invariant set is as follows. Suppose P is a periodic orbit (which can be viewed as a “loop” in phase space \(\mathbb {R}^{N}\)). Let K denote the symmetries that fix all points on P (the “spatial symmetries”) and H the symmetries that fix P as a set (the “spatiotemporal symmetries”); note that K will be a normal subgroup of H. Finally, let
$$L_{K}:=\bigcup_{\gamma\in\varGamma\setminus K} \operatorname {Fix}(\gamma)\cap \operatorname {Fix}(K)$$
be the set of points in \(\operatorname {Fix}(K)\) that have strictly more symmetry than K.
Theorem 3.1
(Theorem 3.4 in [116])
Consider ODEs on \(\mathbb {R}^{N}\) with a given finite symmetry group Γ. There is a periodic orbit P with spatial symmetries K and spatiotemporal symmetries H if and only if all of the following hold:

\(H/K\) is cyclic,

K is an isotropy subgroup,

\(\dim \operatorname {Fix}(K)\geq2\),

H fixes a connected component of \(\operatorname {Fix}(K)\setminus L_{K}\), where \(L_{K}\) is defined as above.
One way of saying this is that the only possible spatiotemporal symmetries of periodic orbits are cyclic extensions of isotropy subgroups. Further theory, outlined in [116], shows that one can characterise possible symmetries of chaotic attractors; these may include a much wider range of spatiotemporal symmetries \((H,K)\) including some that do not satisfy the hypotheses of Theorem 3.1. This means that the symmetries of attractors may contain dynamical information about the attractor.
3.9 Symmetries and Linearisation
The presence of symmetries for an ODE will clearly constrain the system in many ways, one of the most important being the effect on the linear stability of solutions. For a given equivariant ODE of the form (9), suppose we have an equilibrium \(x\in \mathbb {R}^{N}\) with isotropy subgroup \(\varSigma_{x}\) under the action of Γ on \(\mathbb {R}^{N}\). Then \(\varSigma_{x}\) constrains the Jacobian \(D f_{x}\) of the ODE because this must commute with the action of \(\varSigma_{x}\); more precisely,
$$Df_{x}(gv)=g\,Df_{x}(v)$$
for any \(g\in\varSigma_{x}\) and \(v\in \mathbb {R}^{N}\). This means that eigenvalues of the linearisation will be multiple whenever the corresponding eigenspaces are mapped around in a nontrivial way by the action of the isotropy subgroup.
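The following sketch illustrates the forced multiplicity in the simplest case: a Jacobian commuting with the natural \(S_{3}\) permutation action on \(\mathbb{R}^{3}\) must have the form \(aI+b(\mathbf{1}-I)\), where \(\mathbf{1}\) is the all-ones matrix, and the eigenvalue \(a-b\) is then automatically double. The values of a and b are illustrative.

```python
# Sketch: a Jacobian that commutes with the S_3 permutation action on R^3
# must have the form J = a*I + b*(ones - I); the eigenvalue a - b is then
# forced to be (at least) double. Values of a and b are illustrative.

def commutes(J, sigma):
    N = len(J)
    M = [[int(i == sigma[j]) for j in range(N)] for i in range(N)]
    MJ = [[sum(M[i][k] * J[k][j] for k in range(N)) for j in range(N)] for i in range(N)]
    JM = [[sum(J[i][k] * M[k][j] for k in range(N)) for j in range(N)] for i in range(N)]
    return MJ == JM

a, b = -1.0, 0.25
J = [[a if i == j else b for j in range(3)] for i in range(3)]

# J commutes with transpositions generating S_3 ...
assert commutes(J, [1, 0, 2]) and commutes(J, [0, 2, 1])
# ... and J - (a - b)*I = b*ones has rank one, so the eigenvalue a - b has a
# two-dimensional eigenspace (the remaining eigenvalue a + 2b has
# eigenvector (1, 1, 1)).
K = [[J[i][j] - (a - b) * (i == j) for j in range(3)] for i in range(3)]
assert all(K[i] == K[0] for i in range(3))
print("eigenvalues:", a + 2 * b, "and", a - b, "(double)")
```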
Fortunately, there is a well-developed theory that enables one to characterise exactly the structure of the Jacobians of such maps—this involves splitting the action into a number of isotypic components according to irreducible representations, the minimal invariant subspaces under the action of the group. We do not have space here to go into this in detail, but refer the reader to [116] and references therein. This characterisation can be extended to nonlinear terms of vector fields, and to more general invariant sets (such as periodic orbits) in addition to equilibria.
3.10 Bifurcations with Symmetry and Genericity
Bifurcations of ODEs can be classified and analysed by codimension according to methods described, for example, in the texts [45, 123]. This involves the following steps:

(a)
Identification of the marginally unstable modes (the directions that are losing stability: for equilibria, this corresponds to the eigenspace of the Jacobian where the eigenvalues have zero real part).

(b)
Reduction to a centre manifold parametrised by the marginally unstable modes (generically this is one- or two-dimensional when only one parameter is varied).

(c)
Study of the dynamics of the normal form for the bifurcation under generic assumptions on the normal form coefficients.
Indeed, the only generic codimension-one local bifurcations (i.e. the only one-parameter bifurcations of equilibria that will not split into a number of simpler bifurcations under some small perturbation of the system) are the saddle-node (also called fold, or limit point) and the Hopf (also called Andronov–Hopf) bifurcation. More complicated bifurcations can appear generically at higher codimension. This classification by codimension has enabled the development of a powerful set of numerical tools for the analysis of such systems, not just for local bifurcations of equilibria but also for some global bifurcations (in particular, periodic orbit and homoclinic bifurcations). Particular packages that do this include AUTO [124], MatCont [125], CONTENT [126] and XPPAUT [127] (which includes an implementation of AUTO).
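For concreteness, the saddle-node can be examined through its normal form \(\mathrm{d}x/\mathrm{d}t=\mu-x^{2}\); the short sketch below counts and classifies the equilibria on either side of the bifurcation at \(\mu=0\).

```python
# Sketch: the saddle-node normal form dx/dt = mu - x^2, the generic
# codimension-one bifurcation of equilibria. Equilibria solve mu = x^2:
# none for mu < 0, a coincident pair at mu = 0, and a stable/unstable
# pair for mu > 0.
import math

def equilibria(mu):
    if mu < 0:
        return []
    r = math.sqrt(mu)
    return sorted({-r, r})          # coincident at mu = 0

def stability(x_star):              # sign of d/dx (mu - x^2) = -2x
    return "stable" if -2 * x_star < 0 else "unstable"

assert equilibria(-0.1) == []
assert equilibria(0.0) == [0.0]
xs = equilibria(0.25)
assert xs == [-0.5, 0.5]
assert [stability(x) for x in xs] == ["unstable", "stable"]
```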
If we restrict to symmetry-preserving perturbations, a much wider range of bifurcations can appear at low codimension—this is because, as described above, the symmetry can cause a marginal mode of instability in one direction to appear simultaneously in many other directions, meaning that (a), (b) and (c) above must be replaced by
 (a′):

Identification of the marginally unstable modes (as discussed in Sect. 3.9, symmetry means there can generally be several of these that will become unstable at the same time).
 (b′):

Reduction to a centre manifold parametrised by the marginally unstable modes (these are preserved by the action of the symmetries and may be of dimension greater than two even for one-parameter bifurcations).
 (c′):

Study of the dynamics of the normal form for the symmetric bifurcation under generic assumptions on the normal form coefficients (the symmetries mean that some coefficients may be zero, some are constrained to be equal while others may be forced to satisfy nontrivial and sometimes obscure algebraic relationships).
These factors conspire to make symmetric bifurcations rich and interesting in behaviour—even in codimension one it is possible for heteroclinic cycles or chaos to bifurcate directly from high-symmetry solutions. However, the same factors mean that many features cannot be captured by numerical path-following packages such as those listed above—the degeneracies mean that many branches may emanate from the bifurcation, and it is generally a challenge to identify all of these. Essentially, bifurcation theory needs to be developed in the context of the particular group action. Examples of some consequences of this for weakly coupled oscillator networks with symmetries are considered in Sect. 6.
3.11 Robust Heteroclinic Attractors, Cycles and Networks
The presence of symmetries in a dynamical system can cause highly nontrivial dynamics even away from bifurcation points. Of particular interest are robust invariant sets that consist of networks of equilibria (or periodic orbits, or more general invariant sets) connected via heteroclinic connections that are preserved under small enough perturbations that respect the symmetries [128]. These structures may be cycles or, more generally, networks. They can be robust to perturbations that preserve the symmetries and indeed they can be attracting [116, 129]. We are particularly interested in the attracting case, in which case we call these invariant sets heteroclinic attractors; trajectories approaching such attractors show a typical intermittent behaviour, with long periods close to the dynamics of an unstable saddle-type invariant set interspersed with switches between different behaviours.
In higher-dimensional systems, heteroclinic attractors may have subtle structures such as “depth two connections” [130], “cycling chaos” where there are connections between chaotic saddles [116, 131, 132], and “winnerless competition” [133, 134]. Related dynamical structures are found in the literature on attractors that show “chaotic itinerancy” or “slow switching”. Such complex attractors can readily appear in neural oscillator models in the presence of symmetries and have been used to model various dynamics that contribute to the function of neural systems; we consider this, along with some examples, in Sect. 7.
3.12 Groupoid and Related Formalisms
Some less restrictive structures found in coupled dynamical networks also have many of the features of symmetric networks (including invariant subspaces, bifurcations that appear to be degenerate, and heteroclinic attractors) but without necessarily having the symmetries.
One approach [135] has been to use a structure of groupoids—these are mathematical structures that satisfy some, but not all, of the axioms of a group and can be useful in understanding the constraints on the dynamics of coupled cell systems of the form (6). A groupoid is similar to a group except that the composition of two elements in a groupoid is not always defined, and the inverse of a groupoid element may only be locally defined. This formalism can be used to describe the permutations of inputs of cells as in [135, 136].
Suppose that we have (6) with cells \(\mathcal {C}=\{1,\ldots,N\}\) and suppose that there are connections \(\mathcal {E}\), i.e. pairs of cells \((i,j)\) in \(\mathcal{C}\) such that cell i appears in the argument of the dynamics of cell j. We say
$$I(j):=\bigl\{i\in\mathcal{C} : (i,j)\in\mathcal{E}\bigr\}$$
is the input set of cell j, and there is a natural equivalence relation \(\sim_{I}\) defined by \(j\sim_{I} k\) if there is a bijection (an input automorphism)
$$\beta: I(j)\cup\{j\}\rightarrow I(k)\cup\{k\}$$
with \(\beta(j)=k\) such that for all \(i\in I(j)\) we have \((i,j)\sim_{E} (\beta(i),k)\). The set \(B(j,k)\) of input automorphisms of this type is defined for each such pair of cells, and the set of all input automorphisms has the structure of a groupoid [135].
Given a coupling structure of this type, an admissible vector field is a vector field on the product space of all cells that respects the coupling structure, and this generalises the idea of an equivariant vector field in the presence of a symmetry group acting on the set of cells. The dynamical consequences of this have a similar flavour to the consequences one can find in symmetric systems except that fewer cases have been worked out in detail, and there are many open questions.
To illustrate, consider the system of three cells,
where \(g(x,y,z)=g(x,z,y)\); this is discussed in some detail in Sect. 5 of [136]. This system has the coupling structure shown in Fig. 8. In spite of there being no exact symmetries in the system, there is a nontrivial invariant subspace where \(x_{1}=x_{2}\). In the approach of [136], the dynamically invariant subspaces can be understood in terms of balanced colourings of the graph, in which the cells are grouped in such a way that the inputs are respected—this corresponds to an admissible pattern of synchrony.
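A small simulation can make the mechanism concrete. The wiring used below is a hypothetical three-cell network chosen only to be consistent with the text (it is not necessarily the network of Fig. 8): cells 1 and 2 each receive the other cell and cell 3, while cell 3 receives cell 1 and itself, so swapping cells 1 and 2 is not a symmetry of the network; nevertheless the input symmetry \(g(x,y,z)=g(x,z,y)\) forces \(x_{1}=x_{2}\) to be invariant. The particular g is also an illustrative choice.

```python
# Sketch: a hypothetical three-cell network (an assumption consistent with the
# text, not necessarily the network of Fig. 8) in which the input symmetry
# g(x, y, z) = g(x, z, y) forces the invariant subspace x_1 = x_2 even though
# swapping cells 1 and 2 is not a symmetry of the network. The function g and
# the Euler scheme are illustrative choices.
import math

def g(x, y, z):                       # symmetric in its two inputs y and z
    return -x + math.sin(y) + math.sin(z)

def rhs(x1, x2, x3):
    # cells 1, 2 receive {other cell, cell 3}; cell 3 receives {cell 1, itself}
    return (g(x1, x2, x3), g(x2, x1, x3), g(x3, x1, x3))

def run(state, steps=5000, dt=0.01):
    for _ in range(steps):
        d = rhs(*state)
        state = tuple(state[i] + dt * d[i] for i in range(3))
    return state

# Starting with x_1 = x_2, the trajectory never leaves the invariant subspace.
x = run((0.7, 0.7, 0.1))
print(x[0] - x[1])   # -> 0.0
```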
The invariant subspaces that are forced to exist by this form of coupling structure have been called polydiagonals in this formalism, which correspond to clustering of the states. For every polydiagonal one can associate a quotient network by identifying cells that are synchronised, to give a smaller network. As in the symmetric case the existence of an invariant subspace does not guarantee that it contains any attracting solutions. Some work has been done to understand generic symmetry breaking bifurcations in such networks—see for example [138], or spatially periodic patterns in lattice networks [139]. Variants of this formalism have been developed to enable different coupling types between the same cells to be included.
Periodic orbits in such networks can also have interesting structures associated with the presence of invariant subspaces. The so-called rigid phase conjecture [136, 140], recently proved in [141], states that if there is a periodic orbit in the network such that two cells have a rigid phase relation between them (i.e. one that is preserved for all small enough structure-preserving perturbations) then this must be forced by a \(\mathbb {Z}_{n}\) symmetry, either of the network itself or of some quotient network.
An alternative formalism for discussing asymmetric coupled cell networks has been developed in [137, 142–144] that also allows one to identify invariant subspaces. Each cell has one output and several inputs that may be of different types. These papers concentrate on the questions: (a) When are two-cell networks formally equivalent (i.e. when can the dynamics of one network be found in the other, under a suitable choice of cell)? (b) How can one construct larger coupled cell systems with desired properties by “inflating” a smaller system S, such that the larger system has S as a quotient? (c) What robust heteroclinic attractors exist in such systems?
4 Coupled LimitCycle Oscillators
In addition to variants on the systems in Sect. 2, we mention that several nonlinear “conceptual” planar limit-cycle oscillators are studied as archetypes of nonlinear oscillators. These include, for example, the van der Pol and Stuart–Landau oscillators
$$\ddot{x}+\lambda\bigl(x^{2}-1\bigr)\dot{x}+\omega^{2}x=0,\qquad \frac{\mathrm{d}z}{\mathrm{d}t}=(\lambda+\mathrm{i}\omega)z-(a_{1}+\mathrm{i}a_{2})\vert z\vert ^{2}z,$$
where x is real, z is complex and the coefficients λ, ω and \(a_{j}\) are real constants.
If the coupling between two or more limit-cycle oscillators is relatively large, it can affect not only the phases but also the amplitudes, and a general theory of strongly interacting oscillators is likely to be no less complicated than a general theory of nonlinear systems. However, the theory of weak coupling is relatively well developed (see Sect. 5 and Sect. 6), and specific progress for strong coupling can sometimes be made for special choices of neuron model. Examples where one can do this include IF (see Sect. 4.3), piecewise linear models such as McKean [146], caricatures of FHN and ML [147], and singularly perturbed relaxation oscillators with linear [148] or fast threshold modulation coupling [149].
For linear coupling of planar oscillators, much is known about the general case [150, 151]. If the linear coupling is proportional to the difference between two state variables it is referred to as “diffusive”, and otherwise it is called “direct”. The difference between the two cases is most strongly manifest when considering the mechanism of oscillator death (see Sect. 3.4). The diffusive case is more natural in a neuroscience context as it can be used to model electrical gap-junction coupling (which depends on voltage differences). The existence of synchronous states in networks of identical units is inherited from the properties of the underlying single neuron model, since diffusive coupling vanishes on such states, though the stability of these solutions will depend upon the pattern of gap-junction connectivity.
Gap junctions are primarily believed to promote synchrony, though this is not always the case and they can also lead to robust stable asynchronous states [152], as well as “bursting” generated by cyclic transitions between coherent and incoherent network states [153]. For work on gap junctions and their role in determining network dynamics see for example [147, 154–158].
4.1 Stability of the Synchronised State for Complex Networks of Identical Systems
There is one technique specific to the analysis of the synchronous state, valid for a quite large class of network models of strongly coupled identical systems, namely the master stability function (MSF) approach. For networks of coupled systems or oscillators with identical components, the MSF approach of Pecora and Carroll [159] can be used to determine the stability of the synchronous state in terms of the eigenstructure of the network connectivity matrix. To introduce the MSF formalism it is convenient to consider N nodes (oscillators)^{Footnote 2} and let \(x_{i} \in \mathbb {R}^{d}\) be the d-dimensional vector of dynamical variables of the ith node with isolated (uncoupled) dynamics \({\mathrm {d}x_{i}}/{\mathrm {d}t} = {F}({x}_{i})\), with \(i=1,\ldots, N\). The output for each node is described by a vector function \({H} \in \mathbb {R}^{d}\) (which is not necessarily linear). For example, for a three-dimensional system with \({x}=(x^{(1)},x^{(2)},x^{(3)})\) and linear coupling occurring only through the \(x^{(3)}\)-component we would set \({H}({x}) = (0,0,x^{(3)})\). For a given adjacency matrix \(A_{ij}\), associated set of connection strengths \(g_{ij}\) and global coupling strength σ, the network dynamics of N coupled identical systems, to which the MSF formalism applies, is specified by
$$\frac{\mathrm{d}x_{i}}{\mathrm{d}t}={F}(x_{i})+\sigma\sum_{j=1}^{N}\mathcal{G}_{ij}{H}(x_{j}).$$
Here the matrix \(\mathcal{G}\) with blocks \(\mathcal{G}_{ij}\) has the graph-Laplacian structure \(\mathcal{G}_{ij} = A_{ij}g_{ij} - \delta _{ij} \sum_{k}A_{ik} g_{ik}\). The \(N-1\) constraints \({x}_{1}(t)={x}_{2}(t)=\cdots={x}_{N}(t) = s(t)\) define the (invariant) synchronisation manifold, with \({s}(t)\) a solution in \(\mathbb {R}^{d}\) of the uncoupled system, namely \({\mathrm {d}s}/{\mathrm {d}t}={F}({s})\). Although we will not discuss this in detail here, we assume that the asymptotic behaviour of \(s(t)\) is such that averages along trajectories converge, i.e. the behaviour of \(s(t)\) for the uncoupled system is governed by a natural ergodic (Sinai–Ruelle–Bowen) measure for the dynamics.
To assess the stability of this state we perform a linear stability analysis expanding a solution as \({x}_{i}(t) = {s}(t) +\delta{x}_{i}(t)\) to obtain the variational equation
Here \(D {F}({s})\) and \(D {H}({s})\) denote the Jacobian of \({F}({s})\) and \({H}({s})\) around the synchronous solution, respectively. The variational equation has a block form that can be simplified by projecting δx into the eigenspace spanned by the (right) eigenvectors of the matrix \(\mathcal{G}\). This yields a set of N decoupled equations in the block form
where \({\xi}_{l}\) is the lth (right) eigenmode associated with the eigenvalue \(\lambda_{l}\) of \(\mathcal{G}\) (and \(D {F}({s})\) and \(D {H}({s})\) are independent of the block label l). Since \(\sum_{j} \mathcal{G}_{ij}=0\) there is always a zero eigenvalue, say \(\lambda_{1}=0\), with corresponding eigenvector \((1,1,\ldots,1)\), describing a perturbation parallel to the synchronisation manifold. The other \(N-1\) transverse eigenmodes must damp out for synchrony to be stable. For a general matrix \(\mathcal{G}\) the eigenvalues \(\lambda_{l}\) may be complex, which brings us to consideration of the system
\[ \frac{\mathrm{d}\xi}{\mathrm{d}t} = \bigl[ DF(s) - \alpha\, DH(s) \bigr] \xi , \quad \alpha\in\mathbb{C} . \qquad (10) \]
For given \(s(t)\), the MSF is defined as the function which maps the complex number α to the greatest Lyapunov exponent of (10). The synchronous state of the system of coupled oscillators is stable if the MSF is negative at \(\alpha=\sigma\lambda_{l}\) where \(\lambda_{l}\) ranges over the eigenvalues of the matrix \(\mathcal{G}\) (excluding \(\lambda_{1}=0\)).
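In practice the MSF is evaluated numerically by integrating (10) along a trajectory \(s(t)\) and extracting the greatest Lyapunov exponent. The sketch below assumes x-coupled Rössler nodes with parameter values \(a=b=0.2\), \(c=9\) and \(H(x)=(x^{(1)},0,0)\); these modelling choices are illustrative and are not prescribed by the text.

```python
import numpy as np

a, b, c = 0.2, 0.2, 9.0            # Rossler parameters (assumed example)
DH = np.diag([1.0, 0.0, 0.0])      # Jacobian of H(x) = (x1, 0, 0)

def F(s):                          # isolated node dynamics ds/dt = F(s)
    x, y, z = s
    return np.array([-y - z, x + a*y, b + z*(x - c)])

def DF(s):                         # Jacobian of F along the trajectory
    x, y, z = s
    return np.array([[0.0, -1.0, -1.0],
                     [1.0,   a,   0.0],
                     [  z, 0.0, x - c]])

def rk4(f, y, dt):
    k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
    return y + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

def msf(alpha, dt=0.01, t_trans=100.0, t_avg=300.0):
    """Greatest Lyapunov exponent of xi' = [DF(s) - alpha*DH] xi, cf. (10)."""
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(int(t_trans/dt)):     # let s(t) settle onto the attractor
        s = rk4(F, s, dt)
    def joint(y):                        # flow s and the variational vector xi
        s_, xi_ = y[:3], y[3:]
        return np.concatenate([F(s_), (DF(s_) - alpha*DH) @ xi_])
    xi = np.array([1.0, 0.0, 0.0])
    total, n = 0.0, int(t_avg/dt)
    for _ in range(n):
        y = rk4(joint, np.concatenate([s, xi]), dt)
        s, xi = y[:3], y[3:]
        norm = np.linalg.norm(xi)        # renormalise to avoid overflow
        total += np.log(norm)
        xi /= norm
    return total/(n*dt)

msf_at_zero = msf(0.0)   # ~ largest Lyapunov exponent of the node (positive)
msf_at_one = msf(1.0)    # inside the stability window for this coupling
```

For x-coupled Rössler nodes the MSF is negative on an interval of α, so synchrony is predicted to be stable when all \(\sigma\lambda_{l}\) (\(l\geq2\)) fall inside that window.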
For a ring of identical (or near-identical) coupled periodic oscillators in which the connections have randomly heterogeneous strength, Restrepo et al. [160] have used the MSF method to determine the possible patterns at the desynchronisation transition that occurs as the coupling strengths are increased. Interestingly, they demonstrate Anderson localisation of the modes of instability, and show that this could organise waves of desynchronisation that would spread to the whole network. For a further discussion of the use of the MSF formalism in the analysis of synchronisation of oscillators on complex networks we refer the reader to [75, 161], and for the use of this formalism in a nonsmooth setting see [162]. This approach has recently been extended to cover the case of cluster states by making extensive use of tools from computational group theory to determine admissible patterns of synchrony [163] (and see also Sect. 3.12) in unweighted networks.
4.2 Pulse-Coupled Oscillators
Another example of a situation in which analysis of network dynamics can be carried out without the need for any reduction or assumption is that of pulse-coupled oscillators, in which interactions between neurons are mediated by instantaneous “kicks” of the voltage variable.
Networks of N identical oscillators with global (all-to-all) strong pulse coupling were first studied by Mirollo and Strogatz [164]. They assumed that each oscillator is defined by a state variable v and is of integrate-and-fire type with threshold \(v_{\mathrm{th}}=1\) and reset value \(v_{R}=0\). When oscillator i in the network fires, the instantaneous pulsatile coupling pulls all other oscillators \(j \neq i\) up by a fixed amount ϵ or to firing, whichever is less, i.e.
\[ v_{j} \mapsto \min(1, v_{j} + \epsilon) \quad \text{for all } j \neq i . \]
Mirollo and Strogatz assume that the coupling is excitatory (\(\epsilon >0\)). If m oscillators fire simultaneously then the remaining \(N-m\) oscillators are pulled up by mϵ, or to firing threshold.
In the absence of coupling each oscillator has period Δ and there is a natural phase variable \(\phi(t) =t/\Delta\mod1\) such that \(\phi=0\) when \(v=0\) and \(\phi=1\) when \(v=1\). Mirollo and Strogatz further assume that the dynamics of each (uncoupled) oscillator is governed by \(v(t)=f(\phi)\) where f is a smooth function satisfying \(f(0)=0\), \(f(1)=1\), \(f^{\prime}(\phi)>0\) and \(f^{\prime\prime}(\phi)<0\) for all \(\phi\in[0,1]\). Because of these hypotheses on f, it is invertible with inverse \(\phi=g(v)\).
Note that for the leaky (linear) integrate-and-fire model (LIF), where \(\tau\, {\mathrm{d}v}/{\mathrm{d}t} = -v + I\) between firing events,
we have \(f(\phi) = I(1-\mathrm {e}^{-\Delta\phi/\tau})\) where \(\Delta= \tau\ln (I/(I-1))\), for \(I>1\), which satisfies the above conditions. However, quadratic IF models do not satisfy the concavity assumption.
If an oscillator is pulled up to firing threshold due to the coupling and firing of a group of m oscillators which have already synchronised then the oscillator is ‘absorbed’ into the group and remains synchronised with the group for all time. (Here synchrony means firing at the same time.) Since there are now more oscillators in the synchronised group, the effect of the coupling on the remaining oscillators is increased and this acts to rapidly pull more oscillators into synchronisation. Mirollo and Strogatz [164] proved that for pulsatile coupling and f satisfying the conditions above, the set of initial conditions for which the oscillators do not all become synchronised has zero measure. Here we briefly outline the proof for two pulsecoupled oscillators. See Mirollo and Strogatz [164] for the generalisation of this proof to populations of size N.
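The absorption process is easy to observe in a minimal event-driven simulation. The sketch below assumes the LIF choice of f given above, with illustrative values of I and ϵ that are not taken from the text.

```python
import numpy as np

I = 1.3                                  # bias current (assumed value), I > 1
DELTA = np.log(I/(I - 1.0))              # uncoupled period (tau = 1)

def f(phi):                              # voltage as a function of phase
    return I*(1.0 - np.exp(-DELTA*phi))

def g(v):                                # inverse phase map, phi = g(v)
    return -np.log(1.0 - v/I)/DELTA

def simulate(phases, eps=0.1, max_firings=20000):
    """Iterate firing events; return True once all oscillators fire together."""
    phases = np.asarray(phases, dtype=float).copy()
    for _ in range(max_firings):
        if np.allclose(phases, phases[0]):
            return True                  # fully synchronised
        phases += 1.0 - phases.max()     # advance time to the next firing event
        firing = np.isclose(phases, 1.0)
        m = int(firing.sum())            # size of the simultaneously firing group
        v = f(phases[~firing]) + m*eps   # pulsatile kick: up by m*eps ...
        kicked = np.where(v >= 1.0, 1.0, g(np.minimum(v, 1.0 - 1e-12)))
        phases[~firing] = kicked         # ... or to threshold (absorption)
        phases[np.isclose(phases, 1.0)] = 0.0   # reset everyone at threshold
    return bool(np.allclose(phases, phases[0]))

rng = np.random.default_rng(3)
synced = simulate(rng.random(10))        # random initial phases synchronise
```

Successive absorptions enlarge the firing group, so its kick grows and the remaining oscillators are pulled in ever faster, in line with the argument above.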
Consider two oscillators labelled A and B with \(\phi_{A}\) and \(v_{A}\) denoting, respectively, the phase and state of oscillator A and similarly for oscillator B. Suppose that oscillator A has just fired so that \(\phi_{A}=0\) and \(\phi_{B}=\phi\) as in Fig. 9(a). The return map \(R(\phi)\) is the phase of B immediately after A next fires. It can be shown that the return map has a unique, repelling fixed point:
Oscillator B reaches threshold when \(\phi_{A}=1-\phi\) and an instant later B fires and the pulsatile coupling makes \(v_{A}= \min(1, \epsilon + f(1-\phi))\). If \(v_{A}=1\) then the oscillators have synchronised. Assuming that \(v_{A}= \epsilon+ f(1-\phi)<1\), then \(\phi_{A} = g(\epsilon+ f(1-\phi))\) and after one firing the system has moved from \((\phi_{A}, \phi_{B})= (0, \phi)\) to \((h(\phi), 0)\), where \(h(\phi) = g(\epsilon+ f(1-\phi))\) is the firing map (see Fig. 9(c)). The return map is given by a further application of the firing map so that \(R(\phi)=h(h(\phi))\). The assumption that \(\epsilon+ f(1-\phi)<1\) is satisfied when \(\epsilon\in[0,1)\) and \(\phi\in(\delta, 1)\), where \(\delta=1- g(1-\epsilon)\). Thus the domain of h is \((\delta,1)\) and the domain of R is \((\delta, h^{-1}(\delta ))\). Since h is monotonically decreasing, \(\delta< h^{-1}(\delta)\) for \(\epsilon<1\) and the interval is nonempty. Thus on the whole of \([0,1)\) the return map is defined as
\[ R(\phi) = \begin{cases} h(h(\phi)), & \phi\in\bigl(\delta, h^{-1}(\delta)\bigr), \\ 0, & \text{otherwise} . \end{cases} \]
Since the points 0 and 1 are identified, if \(\phi<\delta\) or \(\phi >h^{1}(\delta)\) then the two oscillators will become synchronised.
It can be shown that almost all initial conditions eventually become synchronised since (i) R has a unique fixed point \(\overline{\phi} \in(\delta, h^{-1}(\delta))\) and (ii) this fixed point is unstable (i.e. \(\vert R^{\prime}(\overline{\phi})\vert >1\)). To see that R has a unique fixed point, observe that fixed points ϕ̅ are roots of \(F(\phi)\equiv\phi-h(\phi)\). Now \(F(\delta)=\delta-1<0\) and \(F(h^{-1}(\delta))= h^{-1}(\delta )-\delta>0\), so F has a root in \((\delta, h^{-1}(\delta))\), and this root is unique since \(F^{\prime}(\phi)=1-h^{\prime}(\phi)>2\).
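The fixed-point argument can be checked numerically. Under the same assumed LIF choice of f (with illustrative values of I and ϵ), the sketch below locates the root of \(F(\phi)=\phi-h(\phi)\) by bisection and confirms that the fixed point of R is repelling.

```python
import numpy as np

I, eps = 1.3, 0.1                         # illustrative values, I > 1, eps < 1
DELTA = np.log(I/(I - 1.0))
f = lambda phi: I*(1.0 - np.exp(-DELTA*phi))
g = lambda v: -np.log(1.0 - v/I)/DELTA    # inverse of f
h = lambda phi: g(eps + f(1.0 - phi))     # firing map
R = lambda phi: h(h(phi))                 # return map

delta = 1.0 - g(1.0 - eps)                # lower end of the domain of h
hinv_delta = 1.0 - g(f(delta) - eps)      # h^{-1}(delta), from h(phi) = delta

# F(phi) = phi - h(phi) is strictly increasing (F' > 2), so bisection works
lo, hi = delta + 1e-9, hinv_delta - 1e-9
for _ in range(100):
    mid = 0.5*(lo + hi)
    if mid - h(mid) < 0.0:
        lo = mid
    else:
        hi = mid
fp = 0.5*(lo + hi)                        # the unique fixed point of R
Rprime = (R(fp + 1e-6) - R(fp - 1e-6))/2e-6   # central-difference derivative
```

Since h is decreasing and \(h(\overline{\phi})=\overline{\phi}\), one has \(R'(\overline{\phi}) = h'(\overline{\phi})^{2} > 1\), which the numerical derivative confirms.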
Extensions to the framework of Mirollo and Strogatz include the introduction of a time delay in the transmission of pulses and the consideration of inhibitory coupling. It has been observed that delays have a dramatic effect on the dynamics in the case of excitatory coupling. Considering first a pair of oscillators, Ernst et al. [165] demonstrate analytically that inhibitory coupling with delays gives stable in-phase synchronisation, while for excitatory coupling synchronisation with a phase lag occurs. As the number of globally coupled oscillators increases, so does the number of attractors which can exist for both excitatory and inhibitory coupling.
In the presence of delays many different cluster state attractors can coexist. The dynamics settle down onto a periodic orbit with clusters reaching threshold and sending pulses alternately [165–167]. Under the addition of weak noise when the coupling is inhibitory, the dynamics stay near this periodic orbit indicating that all cluster state attractors are stable [167]. However, the collective behaviour shows a marked difference when the coupling is excitatory. In this case, weak noise is sufficient to drive the system away from the periodic orbit and results in persistent switching between unstable (Milnor) attractors.
These dynamics are somewhat akin to heteroclinic switching, and the relationship between networks of unstable attractors and robust heteroclinic cycles has been addressed by a number of authors [168–170]. In particular, Broer et al. [170] highlight a situation in which there is a bifurcation from a network of unstable attractors to a heteroclinic cycle within a network of pulse-coupled oscillators with delays and inhibitory coupling. They note that the model used in previous work [165–167] is locally noninvertible, since the original phase of an oscillator cannot be recovered once it has received an input which takes it over threshold, causing the phase to be reset. Kirst and Timme [170] employ a framework in which suprathreshold activity is partially reset, such that \(v_{j}(t^{+})= \mathcal{R}(v_{j}(t)-1)\) if \(v_{j}>1\), with a reset function \(\mathcal {R}(v)=cv\), \(c \in[0,1]\), which ensures that the flow becomes locally time invertible when \(c>0\). They demonstrate that for \(c=0\) (where the locally noninvertible dynamics is recovered), the system has a pair of periodic orbits \(A_{1}\) and \(A_{2}\), which are unstable attractors enclosed by the basin of each other. When \(c>0\), \(A_{1}\) and \(A_{2}\) are nonattracting saddles with a heteroclinic connection from \(A_{1}\) to \(A_{2}\). Furthermore, there is a continuous bifurcation from the network of two unstable attractors when \(c=0\) to a heteroclinic two-cycle when \(c>0\).
For an interesting dynamical systems perspective on the differences between “kick” synchronisation (in pulsatile coupled systems) and “diffusive” synchronisation [171], and the lack of mathematical work on the former problem, see [172]. For example, restrictions on the dynamics of symmetrically coupled systems of oscillators when the coupling is time-continuous can be circumvented for pulsatile coupling, leading to more complex network dynamics [173].
In the real world of synaptic interactions, however, pulsatile kicks are probably the exception rather than the rule, and the biology of neurotransmitter release and uptake is better modelled with a distributed delay process, giving rise to a postsynaptic potential with a finite rise and fall time. For spike-time event-driven synaptic models, described in Sect. 2.5, analysis at the network level is hard for a general conductance-based model (given the usual expectation that the single neuron model will be high-dimensional and nonlinear), though far more tractable for LIF networks, especially when the focus is on phase-locked states [174–176]. Indeed in this instance many results can be obtained in the strongly coupled regime [177], without recourse to any approximation or reduction.
4.3 Synaptic Coupling in Networks of IF Neurons
The instantaneous reaction of one neuron to the firing of another, as in the pulse-coupled neurons above, does not account for the role of synapses in the transmission of currents. Bressloff and Coombes [177] consider a network of N identical LIF neurons that interact via synapses by transmitting spike trains to one another. Let \(v_{i}(t)\), the state of neuron i at time t, evolve according to
\[ \frac{\mathrm{d}v_{i}}{\mathrm{d}t} = -v_{i} + I_{i} + X_{i}(t) , \qquad (11) \]
where \(I_{i}\) is a constant external bias and \(X_{i}(t)\) is the total synaptic current into the cell. As before, we supplement this with the reset condition that whenever \(v_{i}=1\) neuron i fires and is reset to \(v_{i}=0\). The synaptic current \(X_{i}(t)\) is generated by the arrival of spikes from other neurons j and can be taken to have the form
\[ X_{i}(t) = \epsilon \sum_{j=1}^{N} w_{ij} \sum_{m\in\mathbb{Z}} J\bigl(t - T_{j}^{m}\bigr) , \]
where \(\epsilon w_{ij}\) represents the weight of the connection from the jth neuron to the ith neuron, with ϵ characterising the overall strength of synaptic interactions, \(T_{j}^{m}\) denotes the sequence of firing times of the jth neuron and \(J(t)\) determines the time course of the postsynaptic response to a single spike. A biologically motivated choice for \(J(t)\) is
\[ J(t) = \eta(t)\varTheta(t), \qquad \eta(t) = \alpha^{2} t \mathrm{e}^{-\alpha t} , \]
where \(\varTheta(t)=1\) if \(t>0\) and zero otherwise. Here η is an alpha function (see also Sect. 2.5) and the maximum synaptic response occurs at a nonzero delay \(t=\alpha^{1}\). Note that in the limit of large inverse rise time α, \(J(t)\) approximates a delta function (more like pulse coupling).
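Assuming the standard alpha function \(\eta(t)=\alpha^{2} t \mathrm{e}^{-\alpha t}\) (a common choice consistent with the stated peak at \(t=\alpha^{-1}\)), a quick numerical check confirms both the peak location and the unit area that underlies the delta-function limit.

```python
import numpy as np

alpha = 2.0                               # inverse rise time (assumed value)
J = lambda t: alpha**2 * t * np.exp(-alpha*t) * (t > 0)   # alpha function

dt = 1e-4
t = np.arange(0.0, 20.0, dt)
peak_time = t[np.argmax(J(t))]            # maximum response at t = 1/alpha
area = float(np.sum(J(t))*dt)             # total area 1, so J -> delta(t)
                                          # as alpha -> infinity
```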
Bressloff and Coombes [177] show that the behaviour of the network of oscillators differs depending on the strength of the coupling. This is another instance in which information is lost through making weak-coupling assumptions. To see this one may integrate (11) between \(T_{i}^{n}\) and \(T_{i}^{n+1}\), exploiting the linearity of the equations and solving with variation of parameters, to obtain a map of the firing times. Since the drive \(X_{i}(t)\) depends upon all previous firing events of the other neurons, this is an implicit map that relates all the network firing events to one another. It is convenient to introduce the set of interspike intervals (ISIs) \(\Delta_{ij}^{n,m}= T_{i}^{n}-T_{j}^{m}\), so that we may write the firing map in the convenient form
where \(T_{i}^{n+1}> T_{j}^{m}\) for all j, m, and
Phase-locked solutions may be found, for an arbitrary coupling strength ϵ, using the ansatz \(T_{j}^{m}=(m-\phi_{j}) \Delta\) for some self-consistent ISI Δ and constant phases \(\phi_{j}\). Substitution into (12) yields
where
and \(P(t)=P(t+\Delta)\). Choosing one of the phases, say \(\phi_{1}\), as a reference, (13) then provides a set of N equations for the unknown period Δ and the remaining \(N-1\) relative phases \(\phi_{j}-\phi_{1}\).
In order to investigate the linear stability of phase-locked solutions of Eq. (13), we consider perturbations of the firing times, which we write in the form \(T_{j}^{m} =(m-\phi_{j}) \Delta+ \delta_{j}^{m}\). Linearisation of the firing map (12) gives an explicit map for these perturbations that can be written as
where the partial derivatives of \(F_{ij}\) are evaluated at the phase-locked state \((\Delta_{ii}^{n+1,n}, \Delta _{ij}^{n,m})=(\Delta, (n-m)\Delta+(\phi_{j}-\phi_{i})\Delta)\). For solutions of the form \(\delta _{j}^{m} = \lambda^{m}\delta_{j}\) this reduces to
where \(P_{i}=I_{i}-1+\epsilon\sum_{j} W_{ij} P((\phi_{j}-\phi_{i})\Delta)\), \(H_{ij}(\lambda) = W_{ij} G((\phi_{j}-\phi_{i})\Delta,\lambda) - \delta _{ij} \sum_{k} W_{ik}G((\phi_{k}-\phi_{i})\Delta,1)\) and
One solution to Eq. (14) is \(\lambda=1\) with \(\delta _{i} = \delta\) for all \(i=1,\ldots,N\). This reflects the invariance of the dynamics with respect to uniform phase shifts in the firing times. Thus the condition for linear stability of a phase-locked state is that all remaining solutions λ of Eq. (14) lie within the unit disc. For \(\lambda-1 \sim O(\epsilon)\), and ϵ small, we may expand (14) as
where \(\mathcal{H}_{ij} (\varPhi) = H_{ij}(1) = [W_{ij} K'(\phi_{j}-\phi_{i}) - \delta_{ij} \sum_{k} W_{ik}K'(\phi_{k}-\phi_{i})]/\Delta\) and we have used the result that \(G(\phi,1)=K'(\phi)/\Delta\). Suppose for simplicity that \(I_{i}=I>1\) for all i, so that \(\Delta= \ln [I/(I-1)] + O(\epsilon)\). Then the weak-coupling stability condition is that \(\operatorname {Re}\lambda_{p} <0\), where \(\lambda_{p}\), \(p=1,\ldots,N\), are the eigenvalues of the \(N \times N\) matrix with elements \(\epsilon \mathcal{H}_{ij}(\varPhi)\).
As an explicit example let us consider the synchronous state (\(\phi _{i}=\phi\) for all i). From (13) we see that this is guaranteed to exist for the choice \(\sum_{j} W_{ij}=\gamma\) and \(I_{i}=I[1-\epsilon K(0) \gamma]\) for some constant \(I>1\), which sets the period as \(\Delta= \ln[I/(I-1)]\). Using the result that \(K'(\phi )=\Delta K(\phi) + \Delta P(\phi\Delta)/I\), the spectral problem for the synchronous state then takes the form
We may diagonalise this equation in terms of the eigenvalues of the weight matrix, denoted by \(\nu_{p}\) with \(p=1,\ldots, N\) (and we note that γ is the eigenvalue with eigenvector \((1,1,\ldots,1)\) corresponding to a uniform phase shift). Looking for nontrivial solutions then gives the set of spectral equations \(\mathcal{E}_{p}(\lambda)=0\), where
We may use (15) to determine bifurcation points defined by \(\vert \lambda \vert =1\). For sufficiently small ϵ, solutions to (15) will either be in a neighbourhood of the real solution \(\lambda=1\) or in a neighbourhood of one of the poles of \(G(0,\lambda)\). Since the latter lie in the unit disc, the stability of the synchronous solution (for weak coupling) is determined by \(\epsilon K'(0)[\operatorname {Re}\nu_{p}-\gamma] <0\). For strong coupling the characteristic equation has been solved numerically in [177] to illustrate the possibility of Hopf bifurcations (\(\lambda=\mathrm {e}^{i \theta}\), \(\theta\neq0\), \(\theta \neq\pi\)) with increasing \(\vert \epsilon \vert \), leading to oscillator death in a globally coupled inhibitory network for a sufficiently slow synapse and bursting behaviour in asymmetrically coupled networks. Bifurcation diagrams illustrating these phenomena are shown in Fig. 10. To see how these phenomena can occur from a more analytical perspective it is useful to consider the Fourier representation \(J(t) = (2 \pi )^{-1} \int_{-\infty}^{\infty}\mathrm{d}\omega\widetilde{J}(\omega ) \mathrm {e}^{i \omega t}\), where \(\widetilde{J}(\omega)=\alpha^{2}/(\alpha+i \omega )^{2}\), so that G may easily be evaluated with \(\lambda=\mathrm {e}^{z}\) as
from which it is easy to see a pole at \(z=-\alpha\Delta\). This suggests writing z in the form \(z=-\alpha(1+\kappa_{p}) \Delta\) and expanding the spectral equation in powers of α to find a solution. For small α we find from (16) that \(G(0,\mathrm {e}^{z}) = \alpha(1+\kappa_{p}) (1-\mathrm {e}^{-\Delta })/(\kappa_{p}^{2} \Delta)\). Balancing terms of order α then gives \(\kappa^{2}_{p}=\epsilon\nu_{p} (1-\mathrm {e}^{-\Delta})/(\Delta^{2}(I-1))\), where we use the result that \(G(0,1) = O(\alpha^{2})\). Thus for small α, namely slow synaptic currents, we have \(K'(0)=0\), so that a weak-coupling analysis would predict neutral stability (consistent with the notion that a set of IF neurons with common constant input would frequency lock with an arbitrary set of phases). However, our strong-coupling analysis predicts that the synchronous solution will only be stable if \(\operatorname {Re}z^{\pm}_{p}<0\), with \(z^{\pm}_{p}=-[1\pm\kappa_{p}] \alpha \Delta\). Introducing the firing rate function \(f=1/\Delta\), \(z^{\pm}_{p}\) can then be written succinctly as
Thus for an asymmetric network (with at least one complex conjugate pair of eigenvalues) it may occur that, as \(\vert \epsilon \vert \) is increased, a pair of eigenvalues determined by \(z^{\pm}_{p}\) crosses the imaginary axis into the right-half complex plane, signalling a discrete Hopf bifurcation in the firing times. For a symmetric network with real eigenvalues an instability may occur as some \(\kappa_{p} \in \mathbb {R}\) increases through 1, signalling a breakdown of frequency locking. The above results (valid for slow synapses) can also be obtained using a firing rate approach, as described in [177].
The results above, albeit valid for strong coupling, apply only to LIF networks. To obtain more general results for networks of limit-cycle oscillators it is useful to consider a reduction to phase models.
5 Reduction of Limit-Cycle Oscillators to Phase–Amplitude and Phase Models
Consider a system of the form
\[ \frac{\mathrm{d}x}{\mathrm{d}t} = f(x) + \epsilon g(x,t), \quad x\in\mathbb{R}^{n}, \qquad (17) \]
such that for \(\epsilon=0\) the system possesses a periodic orbit \(\varGamma = \{u(t) : t\in[0,T)\}\)
with minimal period \(T>0\) (such that \(u(t)=u(t+T)\) for all \(t\in \mathbb {R}\) but \(u(t)\neq u(t+s)\) for \(0< s< T\)). We will assume that Γ is attracting and hyperbolic, i.e. linearly stable, so that there is one zero Floquet exponent and the others have negative real part. Then we say that (17) is a limit-cycle oscillator. We will reduce this to a description that involves a phase that lives on a topological circle, which can be thought of as the interval \([0,T)\) with the topology that comes from identifying the ends of the interval: ϵ and \(T-\epsilon\) are close for ϵ small. This circle is sometimes called the one-torus \(\mathbb {T}\).
There are a number of conventions used in the literature to represent this phase. We discuss these conventions before looking at general reduction methods. One convention is to define a phase ϑ modulo T such that
\[ \frac{\mathrm{d}\vartheta}{\mathrm{d}t} = 1 , \]
where \(\vartheta(u(t))=t\) interpreted modulo T and such that \(\vartheta(u_{0})=0\) for some chosen point \(u_{0}\in\varGamma\). Another convention is to define a phase θ modulo 2π such that
\[ \frac{\mathrm{d}\theta}{\mathrm{d}t} = \omega , \]
where \(\omega=2\pi/T\) is the angular frequency of the oscillator. Again we pick a point \(u_{0}\in\varGamma\) and require that \(\theta=0\) at \(u=u_{0}\).
In the remainder of the article we will use ϑ and θ to indicate which convention we are using for phase; in the special case \(T=2\pi\) both conventions coincide and we use θ. Typically one of these descriptions is more convenient than the other, but both can in principle be adapted to any phase reduction. Table 2 summarises some features of the conventions commonly used for phase variables.
We now review some techniques of reduction which can be employed to study the dynamics of (17) when \(\epsilon\neq0\), so that the perturbations may take the dynamics away from the limit cycle. In doing so we will reduce, for example, to an ODE for \(\vartheta(t)\) taken modulo T. Clearly any solution of an ODE must be continuous in t, and typically \(\vartheta(t)\) will be unbounded in t, growing at a rate that corresponds to the frequency of the oscillator. Strictly speaking, the coordinate we are referring to in this case is on the lift of the circle \(\mathbb {T}\) to a covering space \(\mathbb {R}\), and for any phase \(\vartheta\in[0,T)\) there are infinitely many lifts to \(\mathbb {R}\) given by \(\vartheta+ kT\) for \(k\in \mathbb {Z}\). However, in common with most of the literature in this area, we will not make a notational difference between whether the phase is understood on the unit cell, e.g. \(\theta \in [0,2\pi)\), or on the lift, e.g. \(\theta\in \mathbb {R}\) modulo 2π.
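The distinction between the phase on the unit cell and on its lift is easy to see numerically: wrapping a continuous phase into \([0,2\pi)\) and then unwrapping it recovers the original unbounded coordinate. The snippet below is a simple illustration, not tied to any particular oscillator.

```python
import numpy as np

T = 2*np.pi
t = np.linspace(0.0, 30.0, 3001)
theta_lift = 1.7*t                    # continuous phase on the lift (unbounded)
theta_cell = np.mod(theta_lift, T)    # representative on the unit cell [0, 2pi)
recovered = np.unwrap(theta_cell)     # reconstruct a continuous lift from samples
```

`np.unwrap` adds the appropriate multiple of 2π at each jump, which is exactly the freedom \(\vartheta + kT\), \(k\in\mathbb{Z}\), described above.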
5.1 Isochronal Coordinates
Consider (17) with \(\epsilon=0\). The asymptotic (or latent) phase of a point \(x_{0}\) in the basin of attraction of the limit cycle Γ of period T is the value of \(\vartheta(x_{0})\) such that
\[ \lim_{t\to\infty} \bigl\Vert x(t) - u\bigl(t+\vartheta(x_{0})\bigr) \bigr\Vert = 0 , \]
where \(x(t)\) is a trajectory starting at \(x_{0}\). Thus if \(u(t)\) and \(x(t)\) are trajectories on and off the limit cycle, respectively, they have the same asymptotic phase if the distance between \(u(t)\) and \(x(t)\) vanishes as \(t\to\infty\). The locus of all points with the same asymptotic phase is called an isochron. Thus an isochron extends the notion of phase off the cycle (within its basin of attraction). Isochrons can also be interpreted as the leaves of the stable manifold of a hyperbolic limit cycle. They fully specify the dynamics in the absence of perturbations [178].
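For a planar model written in polar coordinates as \(\dot{r}=r(1-r^{2})\), \(\dot{\phi}=\omega+c(1-r^{2})\) (an assumed example in the spirit of the decoupled plane-polar models of [179]), the asymptotic phase has the closed form \(\vartheta(r_{0},\phi_{0})=\phi_{0}-c\ln r_{0}\), which can be checked against a long forward integration.

```python
import numpy as np

omega, c = 1.0, 0.5                       # assumed parameter values

def rhs(u):                               # polar vector field for (r, phi)
    r, phi = u
    return np.array([r*(1.0 - r**2), omega + c*(1.0 - r**2)])

def rk4_step(u, dt):
    k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
    return u + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

r0, phi0 = 2.0, 0.3                       # a point off the cycle r = 1
u = np.array([r0, phi0])
dt, n = 1e-3, 30000
for _ in range(n):
    u = rk4_step(u, dt)

theta_numeric = u[1] - omega*(n*dt)       # phi(t) - omega*t -> asymptotic phase
theta_exact = phi0 - c*np.log(r0)         # closed-form asymptotic phase
```

The isochrons of this model are the logarithmic spirals \(\phi - c\ln r = \text{const}\), so the radial decay rate and the shear parameter c together fix how phase is assigned off the cycle.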
There are very few instances where the isochrons can be computed in closed form (though see the examples in [179] for plane-polar models where the radial variable decouples from the angular one). Computing the isochron foliation of the basin of attraction of a limit cycle is a major challenge, since it requires knowledge of the limit cycle itself, and therefore this can only be done in special cases or numerically.
One computationally efficient method for numerically determining the isochrons is backward integration; however, this is unstable, and in particular for strongly attracting limit cycles the trajectories determined by backward integration may quickly diverge to infinity. See Izhikevich [48] for a MATLAB code which determines smooth curves approximating isochrons. Other methods include the continuation-based algorithm introduced by Osinga and Moehlis [180], the geometric approach of Guillamon and Huguet to find high-order approximations to isochrons in planar systems [181], quadratic and higher-order approximations [182, 183], and the forward integration method using the Koopman operator and Fourier averages as introduced by Mauroy and Mezić [184]. This latter method is particularly appealing, and given its novelty we describe the technique below.
The Koopman operator approach for constructing isochrons for a T-periodic orbit focuses on tracking observables (or measures on a state space) rather than the identification of invariant sets. The Koopman operator \(K^{t}\) is defined by \((K^{t} z)(x) = z (\varPhi_{t}(x))\), where \(z: \mathbb {R}^{n} \rightarrow \mathbb {R}\) is some observable of the state space and \(\varPhi_{t}(x)\) denotes the flow evolved for a time t, starting at a point x. The Fourier average of an observable z is defined as
\[ \widehat{z}(x;\omega) = \lim_{T'\to\infty} \frac{1}{T'} \int_{0}^{T'} z\bigl(\varPhi_{t}(x)\bigr) \mathrm{e}^{-i\omega t} \,\mathrm{d}t . \qquad (18) \]
For a fixed x, (18) is equivalent to a Fourier transform of the (time-varying) observable computed along a trajectory. Hence, for a dynamics with a stable limit cycle (of frequency \(2\pi/T\)), it is clear that the Fourier average can be nonzero only for the frequencies \(\omega_{n}= 2 \pi n/ T\), \(n \in \mathbb {Z}\). The Fourier averages are eigenfunctions of the Koopman operator, so that
\[ \widehat{z}\bigl(\varPhi_{t}(x);\omega_{n}\bigr) = \mathrm{e}^{i\omega_{n} t}\, \widehat{z}(x;\omega_{n}) . \]
Perhaps rather remarkably, the isochrons are level sets of \(\widehat {z}(x;\omega_{n})\) for almost all observables. The only restriction is that the first Fourier coefficient of the observable, evaluated along the limit cycle over one period, is nonzero. An example of the use of this approach is shown in Fig. 11, where we plot the isochrons of a Stuart–Landau oscillator.
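A sketch of the method for the same assumed planar model as above (\(\dot{r}=r(1-r^{2})\), \(\dot{\phi}=\omega+c(1-r^{2})\), whose isochrons are the spirals \(\phi-c\ln r=\text{const}\)): compute the Fourier average of the observable \(z=r\cos\phi\) at the cycle frequency by forward integration; its argument should be (approximately) constant on an isochron and change between isochrons.

```python
import numpy as np

omega, c = 1.0, 0.5                       # assumed parameter values

def rhs(u):                               # polar vector field for (r, phi)
    r, phi = u
    return np.array([r*(1.0 - r**2), omega + c*(1.0 - r**2)])

def rk4_step(u, dt):
    k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
    return u + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

def fourier_avg(r0, phi0, periods=60, dt=0.01):
    """Finite-time approximation of the Fourier average at frequency omega."""
    u, acc, t = np.array([r0, phi0]), 0.0 + 0.0j, 0.0
    n = periods*int(round(2*np.pi/(omega*dt)))
    for _ in range(n):
        r, phi = u
        acc += r*np.cos(phi)*np.exp(-1j*omega*t)*dt   # observable z = x
        u = rk4_step(u, dt)
        t += dt
    return acc/t

angdiff = lambda a, b: (a - b + np.pi) % (2*np.pi) - np.pi
z1 = fourier_avg(2.0, 0.3)
z2 = fourier_avg(1.5, 0.3 - c*np.log(2.0) + c*np.log(1.5))  # same isochron
z3 = fourier_avg(2.0, 0.8)                                  # different isochron
```

Here the argument of the Fourier average recovers the asymptotic phase, so the level sets of \(\widehat{z}\) trace out the isochron foliation without any backward-time integration.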
5.2 Phase–Amplitude Models
An alternative (non-isochronal) framework for studying oscillators with an attracting limit cycle is to make a transformation to a moving orthonormal coordinate system around the limit cycle, where one coordinate gives the phase on the limit cycle while the other coordinates give a notion of distance from the limit cycle. It has long been known in the dynamical systems community how to construct such a coordinate transformation; see [186] for a discussion. The importance of considering the effects of both the phase and the amplitude interactions of neural oscillators has been highlighted by several authors, including Ermentrout and Kopell [187] and Medvedev [188], and this is especially pertinent when considering phenomena such as oscillator death (see Sect. 3.4). Phase–amplitude descriptions have already successfully been used to find equations for the evolution of the energies (amplitudes) and phases of weakly coupled, weakly dissipative networks of nonlinear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [103, 189, 190]. Lee et al. [191] use the notion of phases and amplitudes of large networks of globally coupled Stuart–Landau oscillators to investigate the effects of a spread in the amplitude growth parameter (units oscillating with different amplitudes and some not oscillating at all) and the effect of a homogeneous shift in the nonlinear frequency parameter.
We now discuss the phase–amplitude coordinate transformation detailed by Hale [186], some of the situations where it has been employed and other areas in which a phase–amplitude description of oscillators is necessary to reveal the dynamics of the system. Consider again the system (17) with \(\epsilon=0\), which has an attracting hyperbolic periodic orbit. Make a transformation to a moving orthonormal coordinate system as follows. Choose one axis of the coordinate system to be in the direction of the unit tangent vector along the periodic orbit, ξ, given by
\[ \xi(\vartheta) = \frac{f(u(\vartheta))}{\vert f(u(\vartheta)) \vert } . \]
The remaining coordinate axes can be expressed as the columns of an \(n\times(n-1)\) matrix ζ. We can then write an arbitrary point x in terms of its phase \(\vartheta\in[0,T)\) and its amplitude ρ:
\[ x = u(\vartheta) + \zeta(\vartheta)\rho . \]
Here the vector of amplitudes \(\rho\in \mathbb {R}^{n-1}\) measures the (Euclidean) distance from the limit cycle and allows us to consider points away from it.
Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system for \(\vartheta\in[0,T)\) and \(\rho\in \mathbb {R}^{n-1}\):
\[ \frac{\mathrm{d}\vartheta}{\mathrm{d}t} = 1 + f_{1}(\vartheta,\rho), \qquad \frac{\mathrm{d}\rho}{\mathrm{d}t} = A(\vartheta)\rho + f_{2}(\vartheta,\rho), \]
where
and Df is the Jacobian of the vector field f, evaluated along the periodic orbit u. The technical details needed to specify the orthonormal coordinates forming ζ can be found in the appendix of [192]. It is straightforward to show that \(f_{1}(\vartheta,\rho) \rightarrow0\) as \(\rho\rightarrow0\), \(f_{2}(\vartheta,0) = 0\) and that \(\partial f_{2} (\vartheta,0)/\partial\rho= 0\). In the above, \(A(\vartheta)\) describes the ϑ-dependent rate of attraction to, or repulsion from, the cycle, and \(f_{1}\) captures the shear present in the system, that is, whether the speed of ϑ increases or decreases depending on the distance from the cycle. A precise definition of shear is given in [193] and will be discussed further in Sect. 5.3.
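For a planar model given in polar coordinates as \(\dot{r}=r(1-r^{2})\), \(\dot{\phi}=\omega+c(1-r^{2})\) (assumed for illustration), the transformation is explicit: \(\vartheta=\phi/\omega\) and \(\rho=r-1\). The properties stated above (the phase velocity is 1 on the cycle, the amplitude dynamics vanish there, and the shear term depends on ρ) can then be checked directly.

```python
import numpy as np

omega, c = 1.0, 0.5                       # assumed parameter values

def theta_rho_dot(theta, rho):
    """Transformed vector field: theta = phi/omega, rho = r - 1."""
    r = 1.0 + rho
    rdot = r*(1.0 - r**2)                 # amplitude dynamics
    phidot = omega + c*(1.0 - r**2)       # angular dynamics
    return phidot/omega, rdot             # (theta_dot, rho_dot)

td_on, rd_on = theta_rho_dot(0.7, 0.0)    # on the cycle: expect (1, 0)
eps_ = 1e-6                               # central difference for A(theta)
A_num = (theta_rho_dot(0.7, eps_)[1] - theta_rho_dot(0.7, -eps_)[1])/(2*eps_)
f1 = theta_rho_dot(0.7, 0.3)[0] - 1.0     # shear term off the cycle
```

For this model the attraction rate is constant, \(A(\vartheta)=-2\), while the nonzero shear term \(f_{1}=(c/\omega)(1-(1+\rho)^{2})\) makes the phase velocity depend on the distance from the cycle.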
Some caution must be exercised when applying this transformation, as it will break down when the determinant of the Jacobian of the transformation vanishes. This never occurs on the cycle (where \(\rho=0\)) but it may do so for some \(\vert\rho\vert =k>0\), setting an upper bound on how far from the limit cycle these phase–amplitude coordinates can be used to describe the system. In [192] it is noted that for the planar ML model the value of k can be relatively small for some values of ϑ, and that breakdown occurs where the orbit has high curvature. In higher-dimensional systems this issue would be less problematic.
Similarly, the coordinate transformation can be applied to driven systems (i.e. (17) with \(\epsilon\neq0\)) where ϵ is not necessarily small. In this case the dynamics in \((\vartheta, \rho)\) coordinates, where \(\vartheta\in[0,T)\) and \(\rho\in \mathbb {R}^{n1}\), are
with
and \(\mathrm{I}_{n}\) is the \(n\times n\) identity matrix. Here h and B describe the effect of the perturbations on ϑ and ρ, respectively. For planar models, \(B=\mathrm{I}_{2}\).
Medvedev [188] has employed this phase–amplitude description to determine conditions for stability of the synchronised state in a network of identical oscillators with separable linear coupling. Medvedev [194] has also used the framework to consider the effects of white noise on the synchronous state, identifying the types of linear coupling operators which lead to synchrony in a network of oscillators provided that the strength of the interactions is sufficiently strong.
5.3 Dynamics of Forced Oscillators: Shear-Induced Chaos
Since phase–amplitude coordinates can capture dynamics a finite distance away from the limit cycle (and additionally have the advantage over isochronal coordinates of being defined outside of the basin of attraction of the limit cycle), they can be used to model dynamical phenomena in driven systems where the perturbations necessarily push the dynamics away from the limit cycle. There is no need to make any assumptions about the strength of the forcing ϵ.
The phase–amplitude description of a forced oscillator is able to detect the presence of other structures in the phase space. For example if the system were multistable, phase–amplitude coordinates would track trajectories near these other structures and back again, should another perturbation return the dynamics to the basin of attraction of the limit cycle. These coordinates would also detect the presence of other nonattracting invariant structures such as saddles in the unperturbed flow. Orbits passing near the saddle will remain there for some time and forcing may act to move trajectories near this saddle before returning to the limit cycle. It may also be the case that the forcing acts to create trapping regions if the forcing is strong compared to the attraction to the limit cycle.
Another dynamical phenomenon which phase–amplitude coordinates are able to capture is the occurrence of shear-induced chaos. Shear in a system near a limit cycle Γ is the differential in components of the velocity tangent to the limit cycle as one moves further from Γ in phase space. Shear forces within the system act to speed up (or slow down) trajectories further away from the limit cycle compared to those closer to it. This phenomenon is illustrated in Fig. 12.
As we show below, when an impulsive force is applied (the system is kicked) a ‘bump’ in the image of Γ is produced. If there is sufficient shear in the system then the bump is folded and stretched as it is attracted back to the limit cycle. Such folding can potentially lead to the formation of horseshoes and strange attractors. However, if the attraction to the limit cycle is large compared to the shear strength or size of the kick then the bumps will dissipate before any significant stretching occurs.
Shear-induced chaos is most commonly discussed in the context of discrete time kicking of limit cycles. Wang and Young [195–197] prove rigorous results in the case of periodically kicked limit cycles with long relaxation times. Their results provide details of the geometric mechanism for producing chaos. Here we briefly review some of these results. More detailed summaries can be found in [198] and [199].
Let \(\varPhi_{t}\) be the flow of the unperturbed system, which has an attracting hyperbolic limit cycle Γ. We can think of a kick as a mapping κ. The dynamics of the system with periodic kicking every τ units of time can be obtained by iterating the map \(F_{\tau}=\varPhi_{\tau}\circ\kappa\). Provided that there is a neighbourhood \(\mathcal{U}\) of Γ such that points in \(\mathcal{U}\) remain in the basin of attraction of Γ under kicks and τ is long enough that the kicked points can return to \(\mathcal{U}\), then \(\varGamma _{\tau}= \bigcap_{n\geq0} F_{\tau}^{n}(\mathcal{U})\) is an attractor for the periodically kicked system. If the kicks are weak then \(\varGamma_{\tau}\) is generally expected to be a slightly perturbed version of Γ and we should expect fairly regular behaviour. In this case \(\varGamma_{\tau}\) is an invariant circle. To obtain more complicated dynamics it is necessary to break the invariant circle. This idea is illustrated by the following linear shear model, considered in [196]:
where \((\vartheta, y) \in[0,T)\times\mathbb{R} = S^{1} \times\mathbb {R}\) are coordinates in the phase space, \(\lambda, \sigma, A>0\) are constants and \(H:S^{1} \to\mathbb{R}\) is a non-constant smooth function. The unforced system (\(A=0\)) has a limit cycle \(\varGamma=S^{1} \times\{0\}\). If the quantity
is sufficiently large, then it is possible to show that there is a positive measure set \(\mathcal{T}\subset\mathbb{R}^{+}\) such that for all \(\tau\in\mathcal{T}\), \(\varGamma_{\tau}\) is a strange attractor of \(F_{\tau}\) [197]. How large the quantity must be depends on the function \(H(\vartheta)\). Since \(H(\vartheta)\) is non-constant (to create the bumps), the larger the shear σ and the kick amplitude A, the more folding will occur as the bump is attracted back to the limit cycle. Also note that weaker limit cycles (smaller λ) create more favourable conditions for shear-induced chaos, as the shear has longer to act before the bump is attracted back to the limit cycle.
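The kick-and-relax mechanism is easy to simulate. The sketch below assumes the standard form of the linear shear model, \(\dot{\vartheta} = 1 + \sigma y\), \(\dot{y} = -\lambda y\), with kicks \(y \to y + A H(\vartheta)\); parameter values are purely illustrative. It iterates \(F_{\tau}=\varPhi_{\tau}\circ\kappa\) on points initially on Γ.

```python
import math

# A sketch of the periodically kicked linear shear model (cf. Eq. (23)), assuming
# the standard form dtheta/dt = 1 + sigma*y, dy/dt = -lambda*y, with kicks
# y -> y + A*H(theta). Parameter values are illustrative only.
T = 1.0  # period of the angular variable theta in [0, T)

def flow(theta, y, t, lam, sigma):
    """Exact unforced flow Phi_t: y decays, theta advances faster for larger y (shear)."""
    theta_new = (theta + t + (sigma * y / lam) * (1.0 - math.exp(-lam * t))) % T
    return theta_new, y * math.exp(-lam * t)

def kick(theta, y, A, H):
    """Instantaneous kick kappa acting in the y direction."""
    return theta, y + A * H(theta)

def F_tau(theta, y, tau, lam, sigma, A, H):
    """One iterate of the kicked system, F_tau = Phi_tau o kappa."""
    th, yy = kick(theta, y, A, H)
    return flow(th, yy, tau, lam, sigma)

H = lambda th: math.sin(2 * math.pi * th / T)     # non-constant kick profile
pts = [(i / 200.0, 0.0) for i in range(200)]      # points on the limit cycle Gamma
for _ in range(5):
    pts = [F_tau(th, y, tau=10.0, lam=0.1, sigma=3.0, A=0.5, H=H) for th, y in pts]
```

Because the kick amplitude is bounded and y contracts by \(e^{-\lambda\tau}\) between kicks, all iterates stay near Γ, while the non-constant H combined with the large ratio σ/λ reorders points in ϑ: the precursor of the stretch-and-fold mechanism described above.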
For a more general system with periodic forcing the shear may not appear explicitly as a parameter. To elucidate what kind of kicks may cause shear-induced chaos in this case we appeal to the isochrons of the system. Suppose, as illustrated in Fig. 12, that a section \(\varGamma _{0}\) of the limit cycle is kicked upwards with the end points held fixed and assume \(\tau= np\) for some \(n,p \in\mathbb{Z}^{+}\). Since the isochrons are invariant under the action of \(\varPhi_{T}\), during relaxation the flow moves each point of the kicked curve \(\kappa(\varGamma_{0})\) back towards Γ along the isochrons. In Fig. 12 we can clearly see the effect of the shear with a fold forming.
From Fig. 12 one can see that kicks along isochrons or in directions roughly parallel to the isochrons will not produce strange attractors, nor will kicks that carry one isochron onto another. The cause of the stretching and folding is the variation in how far points \(x \in\varGamma\) are moved by κ in the direction transverse to the isochrons (i.e. the ordering of points in terms of asymptotic phase is altered by the action of the kick). Lin and Young [198] emphasise that the occurrence of shear-induced chaos depends on the interplay between the geometries of the kicks and the dynamical structures of the unforced system.
In the case of the linear shear model above, given by Eq. (23), the isochrons of the unforced system are simply the lines with slope \(-\lambda/\sigma\) in \((\vartheta, y)\) coordinates. Variation in kick distances in directions transverse to these isochrons is guaranteed with any non-constant function H, with greater variation given by larger values of \(\sigma/\lambda\) and A.
Beyond the rigorous results proved by Wang and Young [195–198] concerning periodically kicked limit cycles of the linear shear model and for supercritical Hopf bifurcations, Ott and Stenlund [193] prove that shear-induced chaos may exist near general limit cycles. In addition, Lin and Young [198] have carried out numerical studies including random kicks at times given by a Poisson process and systems driven by white noise. They also consider forcing of a pair of coupled oscillators. In all cases, shear-induced chaos occurs when the shearing and amplitude of the forcing are large enough to overcome the effects of damping.
Lin et al. [200] demonstrate that the ML model can exhibit shear-induced chaos near the homoclinic bifurcation when periodically forced, by plotting images of the periodic orbit under successive applications of the kick map and calculating the maximum Lyapunov exponent. They also emphasise that the phenomenon of shear-induced chaos cannot be detected by perturbative techniques such as those outlined in Sect. 5.4 and Sect. 5.5 below.
Wedgwood et al. [192] go on to show that the phase–amplitude description is well suited to understanding the behaviour of neural oscillators in response to external stimuli. They consider a number of neural oscillator models in addition to a generic planar model and examine the response of the system to periodic pulsatile forcing by taking \(x\in \mathbb {R}^{2}\),
and a variety of choices for \(f(x)\) in (17). Their numerical simulations indicate that for both the FHN and the ML models the behaviour remains qualitatively similar when the relevant functions \(f_{1}(\vartheta, \rho)\) are replaced by σρ, \(f_{2}(\vartheta, \rho)\) is dropped and \(A(\vartheta)\) is replaced with −λ (for a wide range of \(\sigma, \lambda>0\)). They then focus on the effect of the form of the forcing function in the phase–amplitude coordinates. Evaluating the functions \(h(\vartheta, \rho)\) and \(B(\vartheta, \rho)\) of Eqs. (19) and (22), respectively, for the particular model and denoting the first component of h as \(P_{1}\) and the first component of ξ as \(P_{2}\), they find that forcing each system at the same ratio of the natural frequency of the underlying periodic orbit and implementing the choices above gives the following ODE on \(\vartheta \in[0,T)\), \(\rho\in \mathbb {R}^{n-1}\):
By developing a stroboscopic map for this system, Wedgwood et al. [192] numerically evaluate the largest Lyapunov exponent which is found to be positive for the Morris–Lecar model but negative for the FHN model.
The analysis of the behaviour of generic networks of oscillators within a phase–amplitude framework is a challenging open problem, but such a description would allow for greater accuracy (compared to the phase-only methods traditionally used and described below) in elucidating a richer variety of the complex dynamics of oscillator networks.
5.4 Phase Oscillator Models
To obtain a phase description, one can consider the limit of strong attraction [102] in Eqs. (20)–(21). However, it is more appealing to consider a phase variable with a uniform rotation rate that assigns a phase coordinate \(\vartheta \in[0,T)\) to each point \(x \in\varGamma\) according to
This reduction to a phase description gives a simple dynamical system, albeit one that cannot describe the evolution of trajectories away from the limit cycle. However, the phase-reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response curve (iPRC). This can be written, for a given ODE model, as the solution to an adjoint equation. For a given high-dimensional conductance-based model this can be solved numerically, though for some normal form descriptions closed-form solutions are also known [201].
The iPRC at a point on the cycle is equal to the gradient of the (isochronal) phase at that point. Writing this vector quantity as Q means that weak forcing \(\epsilon g(t)\) in the original high-dimensional model transforms as \({\mathrm {d}\vartheta}/{\mathrm {d}t}=1+\epsilon \langle Q, g \rangle\), where \(\langle\cdot,\cdot\rangle\) defines the standard inner product and ϵ is a small parameter. For periodic forcing such equations can be readily analysed, and questions relating to synchronisation, mode-locking and Arnol’d tongues can be thoroughly explored [76]. Moreover, this approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry; see for example [43].
However, the assumption that phase alone is enough to capture the essentials of neural response is made more for mathematical convenience than on physiological grounds. Indeed, for the popular type I ML firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [200], as just highlighted in Sect. 5.3 (since strong interactions will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold).
5.5 Phase Response
It is common practice in neuroscience to characterise a neuronal oscillator in terms of its phase response to a perturbation [202]. This gives rise to the notion of a so-called phase response curve (PRC), which for a real neuron can be determined experimentally [203–205], and can also be related to the post-stimulus time histogram [206]. Following [201], consider a dynamical system \({\mathrm {d}x}/{\mathrm {d}t}=f(x)\), \(x\in \mathbb {R}^{N}\) with a T-periodic solution \(u(t)=u(t+T)\) and introduce an infinitesimal perturbation \(\Delta x_{0}\) to the trajectory \(u(t)\) at time \(t=0\). This perturbation evolves according to the linearised equation of motion:
Here \(Df(u)\) denotes the Jacobian of f evaluated along u. Introducing a time-independent isochronal phase shift Δϑ as \(\vartheta(u(t)+\Delta u(t))-\vartheta(u(t))\), we have to first order in Δx that
where \(\langle\cdot,\cdot\rangle\) defines the standard inner product, and \(Q=\nabla_{u} \vartheta\) is the gradient of ϑ evaluated at \(u(t)\). Taking the time-derivative of (25) gives
Since the above equation must hold for arbitrary perturbations, we see that the gradient \(Q=\nabla_{u} \vartheta\) satisfies the linear equation
subject to the boundary conditions
The first condition simply guarantees that \({\mathrm {d}\vartheta}/{\mathrm {d}t}=1\) (at any point on the periodic orbit), and the second enforces periodicity. Note that the notion of phase response can also be extended to timedelayed systems [207, 208].
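In practice this boundary-value problem is often solved by integrating the adjoint equation backwards in time, which damps out the non-periodic component of Q. The sketch below is a hypothetical worked example (not taken from the text): it uses the planar Hopf-like normal form \(\dot{x} = x(1-x^2-y^2) - y\), \(\dot{y} = y(1-x^2-y^2) + x\), whose limit cycle is \(u(t)=(\cos t, \sin t)\) and whose iPRC is known to be \(Q(t)=(-\sin t, \cos t)\).

```python
import math

# Backward integration of the adjoint equation dQ/dt = -Df(u(t))^T Q for the
# planar system dx/dt = x(1-x^2-y^2) - y, dy/dt = y(1-x^2-y^2) + x, which has
# the limit cycle u(t) = (cos t, sin t) and known iPRC Q(t) = (-sin t, cos t).

def jacobian_on_cycle(t):
    """Df evaluated at u(t) = (cos t, sin t), simplified using x^2 + y^2 = 1."""
    c, s = math.cos(t), math.sin(t)
    return [[-2 * c * c, -1 - 2 * c * s],
            [1 - 2 * c * s, -2 * s * s]]

T, dt, n_periods = 2 * math.pi, 5e-4, 3
steps = int(round(n_periods * T / dt))
Q = [1.0, 1.0]                      # arbitrary initial data at t = n_periods*T
for k in range(steps, 0, -1):
    J = jacobian_on_cycle(k * dt)
    # Euler step backwards in time: Q(t - dt) = Q(t) + dt * Df^T Q(t)
    Q = [Q[0] + dt * (J[0][0] * Q[0] + J[1][0] * Q[1]),
         Q[1] + dt * (J[0][1] * Q[0] + J[1][1] * Q[1])]
```

After a few periods the spurious mode has decayed and Q at \(t=0\) is close to \((0,1)\); the normalisation \(\langle Q, f(u)\rangle = 1\) holds automatically here because this inner product is conserved by the adjoint flow and equals 1 for the initial data chosen.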
In general, Eq. (26) must be solved numerically to obtain the iPRC, say, using the adjoint routine in XPPAUT [127] or MatCont [209]. However, for the case of a nonlinear IF model, defined by (1), the PRC is given explicitly by
where the period of oscillation is found by solving \(\varPsi(v_{\mathrm{th}})=T\) (and the response function is valid for finite perturbations). For example, for the quadratic IF (QIF) model, obtained from (1) with the choice \(f(v)=v^{2}\), and taking the limits \(v_{\mathrm{th}} \rightarrow\infty\) and \(v_{{R}} \rightarrow-\infty\), we find \(Q(\vartheta) = \sin^{2}(\vartheta\pi/T)/I\) with \(T=\pi\tau/\sqrt{I}\), recovering the iPRC expected of an oscillator close to a SNIC bifurcation [210, 211]. For a comprehensive discussion of iPRCs for various common oscillator models see the excellent survey article by Brown et al. [201]. The iPRC for planar piecewise linear IF models can also be computed explicitly [58], although we do not discuss this further here. In Fig. 13 we show the numerically computed iPRCs for the Hodgkin–Huxley model, the Wilson cortical model and the Morris–Lecar model previously discussed in Sect. 2. For completeness we note that the iPRC for a planar oscillator close to a supercritical Hopf bifurcation has an adjoint with shape \((-\sin(2 \pi t/T),\cos(2 \pi t/T))\) for an orbit shape \((\cos (2 \pi t/T), \sin(2 \pi t/T))\).
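The QIF result is easy to verify numerically. Taking \(\tau=1\) in the QIF model (an assumption made here for simplicity), the phase of a state v is \(\vartheta(v) = (\arctan(v/\sqrt{I}) + \pi/2)/\sqrt{I}\), and a finite-difference phase response reproduces the stated iPRC:

```python
import math

# Check the stated QIF iPRC Q(theta) = sin^2(theta*pi/T)/I against a direct
# finite-difference phase response, taking tau = 1 so that dv/dt = v^2 + I.
I = 2.0
T = math.pi / math.sqrt(I)          # period, T = pi*tau/sqrt(I) with tau = 1

def theta_of_v(v):
    """Phase of state v: v runs from -infinity (theta = 0) to +infinity (theta = T)."""
    return (math.atan(v / math.sqrt(I)) + math.pi / 2) / math.sqrt(I)

def v_of_theta(th):
    return math.sqrt(I) * math.tan(math.sqrt(I) * th - math.pi / 2)

def Q_analytic(th):
    return math.sin(th * math.pi / T) ** 2 / I

errs = []
for th in (0.3 * T, 0.5 * T, 0.8 * T):
    v = v_of_theta(th)
    # perturb v by a small amount and measure the resulting phase advance
    Q_fd = (theta_of_v(v + 1e-6) - theta_of_v(v)) / 1e-6
    errs.append(abs(Q_fd - Q_analytic(th)))
```

Both expressions equal \(1/(I+v^2)\), so the discrepancies collected in `errs` are at the finite-difference error level.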
Having defined the phase in some neighbourhood of the limit cycle, we can write (24) as
For \({\mathrm {d}x}/{\mathrm {d}t}=f(x)\) this gives \(\langle\nabla_{x} \vartheta, f(x) \rangle = 1\). We now consider the effect of a small external periodic force on the self-sustained oscillations as in (17), with \(g(x,t)=g(x,t+\Delta)\), where in general Δ is different from T. For weak perturbations (\(\epsilon\ll1\)) the state point will leave Γ, though it will stay in some neighbourhood, which we denote by \(\mathcal{U}\). We extend the phase off cycle using isochronal coordinates so that \(\langle\nabla_{x} \vartheta, f(x) \rangle= 1\) holds for any point \(x \in\mathcal{U}\). For a perturbed oscillator, in these coordinates we have
As a first approximation we evaluate the right hand side of the above on the limit cycle to get an equation for the phase dynamics:
The phase dynamics (28) is still very hard to analyse, and as such it is common to resort to a further simplification, namely averaging. First let us introduce a rotating phase \(\psi=\vartheta - Tt/\Delta\), in which case (28) becomes
If both ϵ and δ are small then \({\mathrm {d}\psi}/{\mathrm {d}t} \simeq0\) and ψ evolves slowly. Thus we may set \(\psi(s) \simeq\psi (t)\) for \(s \in[t, t+T]\). Averaging the above over one period gives
where we have used the result that \(I(\psi+s,t)=I(\psi+s+T,t+T)\). The function \(H(\psi)\) is Tperiodic and can be written as a Fourier series \(H(\psi) = \sum_{n} H_{n} \mathrm {e}^{2 \pi i n \psi/T}\), with the simplest example of an averaged phase dynamics being
which is called the Adler equation [212]. If we denote the minimum and maximum of H by \(H_{\mathrm{min}}\) and \(H_{\mathrm{max}}\), respectively, then for a phase-locked \(1:1\) state defined by \(\frac{\mathrm{d}}{\mathrm{d}t} {\psi}=0\) we require \(\epsilon H_{\mathrm{min}} < \delta< \epsilon H_{\mathrm{max}}\). In this case there are two fixed points \(\psi_{\pm}\) defined by \(\epsilon H(\psi_{\pm}) = \delta\). One of these is unstable (say \(\psi_{-}\), so that \(\epsilon H'(\psi _{-})>0\)) and the other stable (\(\psi_{+}\), with \(\epsilon H'(\psi_{+})<0\)). This gives rise to a rotating solution with constant rotation frequency so that \(\vartheta=(1+\delta)t+ \psi_{+}\). The two solutions coalesce in a saddle-node bifurcation when \(\delta= \epsilon H_{\mathrm{min}}\) and \(\delta= \epsilon H_{\mathrm{max}}\) (or equivalently when \(H'(\psi _{\pm})=0\)). In the case of the Adler model the parameter region for phase locking is given explicitly by a triangular wedge defined by \(\delta= \pm\epsilon\)—a so-called Arnol’d tongue. Outside of this tongue solutions drift (they cannot lock to the forcing period) according to (29), and the system evolves quasiperiodically. We treat weakly coupled phase oscillators in Sect. 6.
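The locking boundary can be seen in a direct simulation. The sketch below assumes the representative form \(\mathrm{d}\psi/\mathrm{d}t = -\delta + \epsilon\sin\psi\) of the Adler equation (so \(H_{\mathrm{min}}=-1\), \(H_{\mathrm{max}}=1\) and locking requires \(|\delta|<\epsilon\)); the parameter values are illustrative.

```python
import math

# Forward-Euler simulation of an Adler-type phase equation
#   dpsi/dt = -delta + eps*sin(psi).
# Inside the tongue (|delta| < eps) psi converges to a stable fixed point;
# outside it psi drifts without bound (quasiperiodic dynamics).
def simulate(eps, delta, psi0=0.1, dt=0.01, steps=200_000):
    psi = psi0
    for _ in range(steps):
        psi += dt * (-delta + eps * math.sin(psi))
    return psi

psi_locked = simulate(eps=0.5, delta=0.2)   # inside the Arnol'd tongue
psi_drift = simulate(eps=0.5, delta=0.8)    # outside: no 1:1 locking
```

In the locked case the final phase satisfies \(\epsilon\sin\psi = \delta\) with \(\epsilon\cos\psi < 0\) (the stable root); in the drifting case the phase decreases at mean rate \(-\sqrt{\delta^2-\epsilon^2}\).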
6 Weakly Coupled Phase Oscillator Networks
The theory of weakly coupled oscillators [5, 213] is now a standard tool of dynamical systems theory and has been used by many authors to study oscillatory neural networks; see for example [213–217]. The book by Hoppensteadt and Izhikevich provides a very comprehensive review of this framework [43], which can also be adapted to study networks of relaxation oscillators (in some singular limit) [146, 218].
Consider, for illustration, a system of interacting limit-cycle oscillators (8). Following the method in Sect. 5.5, similar to (29) we obtain the network’s phase dynamics in the form
where the frequency \(\omega_{i}\) allows for the fact that oscillators are not identical and, for this reason, we will assume that \(\theta_{i}\in [0,2\pi)\). Precisely this form of network model was originally suggested by Winfree to describe populations of coupled oscillators. The Winfree model [219] assumes a separation of time scales so that an oscillator can be solely characterised by its phase on cycle (fast attraction to cycle) and is described by the network equations,
describing a globally coupled network with a biologically realistic PRC R and pulsatile interaction function P. Using a mixture of analysis and numerics, Winfree found that for large N there was a transition to macroscopic synchrony at a critical value of the heterogeneity of the population. Following this, Kuramoto [5] introduced a simpler model with interactions mediated by phase differences, and showed how the transition to collective synchronisation could be understood from a more mathematical perspective. For an excellent review of the Kuramoto model see [220] and [221].
The natural way to obtain a phase-difference model from (30) is, as in Sect. 5.5, to average over one period of oscillation. For simplicity let us assume that all the oscillators are identical, and \(\omega_{i}=\omega\) for all i, in which case we find that
where
The 2π-periodic function H is referred to as the phase interaction function (or coupling function). If we write complex Fourier series for Q and G as
respectively, then
with \(H_{n} = \langle Q_{n}, G_{n} \rangle\). Note that a certain caution has to be exercised in applying averaging theory. In general, one can only establish that a solution of the unaveraged equations is ϵ-close to a corresponding solution of the averaged system for times of \(O(\epsilon^{-1})\). No such problem arises in the case of hyperbolic fixed points corresponding to phase-locked states.
When describing a piece of cortex or a central pattern generator circuit with a set of oscillators, the biological realism of the model typically resides in the phase interaction function. The simplest example is \(H(\psi)=\sin(\psi)\), which when combined with a choice of global coupling defines the wellknown Kuramoto model [5]. However, to model realistic neural networks one should calculate (33) directly, using knowledge of the single neuron iPRC and the form of interaction. As an example consider a synaptic coupling, described in Sect. 2.5, that can be written in the form \(G(u(\psi)) = \sum_{m} \eta(\psi+m 2\pi)\), and a single neuron model for which the iPRC in the voltage variable v is given by R (say experimentally or from numerical investigation). In this case
If instead we were interested in diffusive (gap junction) coupling then we would have
For the HH model \(R(t)\) is known to have a shape like \(\sin(t)\) for a spike centred on the origin (see Fig. 13). Making the further choice that \(\eta(t)=\alpha^{2} t \mathrm {e}^{-\alpha t}\), then (35) can be evaluated as
In the particular case of two oscillators with reciprocal coupling and \(\omega=1\) then
and we define \(\varphi:=\theta_{2}(t)-\theta_{1}(t)\). A phase-locked solution has a constant phase difference φ that is a zero of the (odd) function
A given phase-locked state is then stable provided that \(K'(\varphi)<0\). Note that by symmetry both the in-phase (\(\varphi=0\)) and the anti-phase (\(\varphi =\pi\)) states are guaranteed to exist. For the form of the phase interaction function given by (36), the stability of the synchronous solution is governed by the sign of \(K'(0)\):
Thus for inhibitory coupling (\(\epsilon< 0\)) synchronisation will occur if \(1/\alpha> 1\), namely when the synapse is slow (\(\alpha\rightarrow0\)). It is also a simple matter to show that the anti-synchronous solution (\(\varphi =\pi \)) is stable for a sufficiently fast synapse (\(\alpha \rightarrow\infty\)). It is also possible to develop a general theory for the existence and stability of phase-locked states in larger networks than just a pair.
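The dependence on the synaptic rate α can be checked numerically. The sketch below computes the phase interaction function by averaging, assuming (as in the text) an iPRC \(R(t)=\sin t\) and the alpha-function synapse \(\eta(t)=\alpha^2 t e^{-\alpha t}\), and the convention \(K(\varphi)=\epsilon[H(-\varphi)-H(\varphi)]\) for the phase-difference dynamics; the sign conventions are assumptions, but the qualitative claim tested, that \(K'(0)\) changes sign as the synapse is made faster, is robust to them.

```python
import math

def eta(t, alpha):
    """Alpha-function synapse eta(t) = alpha^2 * t * exp(-alpha*t) for t >= 0."""
    return alpha ** 2 * t * math.exp(-alpha * t) if t >= 0 else 0.0

def P(t, alpha, n_wraps=60):
    """2*pi-periodised synaptic input, P(t) = sum_m eta(t + 2*pi*m)."""
    return sum(eta(t + 2 * math.pi * m, alpha) for m in range(n_wraps))

def Hfun(psi, alpha, n=800):
    """Phase interaction function by averaging, assuming the iPRC R(t) = sin t."""
    dt = 2 * math.pi / n
    return sum(math.sin(i * dt) * P(i * dt + psi, alpha) for i in range(n)) \
        * dt / (2 * math.pi)

def K_prime_0(eps, alpha, h=0.1):
    """Slope at zero of the odd function K(phi) = eps*(H(-phi) - H(phi))."""
    return eps * (Hfun(-h, alpha) - Hfun(h, alpha)) / h

slow = K_prime_0(eps=-1.0, alpha=0.2)   # slow inhibitory synapse
fast = K_prime_0(eps=-1.0, alpha=5.0)   # fast inhibitory synapse
```

Under these conventions one finds analytically \(H(\psi) = \operatorname{Re}[i c_1 e^{i\psi}]\) with \(c_1 = \alpha^2/(2\pi(\alpha+i)^2)\), so \(H'(0)\) changes sign exactly at \(\alpha = 1\), matching the slow/fast dichotomy described above.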
6.1 Phase, Frequency and Mode Locking
Now suppose we have a general population of \(N\geq2\) coupled phase oscillators
described by phases \(\theta_{j}\in \mathbb {R}/2\pi \mathbb {Z}\). For a particular continuous choice of phases \(\theta(t)\) for the trajectory one can define the frequency of the jth oscillator as
This limit will converge under fairly weak assumptions on the dynamics [222], though it may vary for different attractors in the same system, for different oscillators j and in some cases it may vary even for different trajectories within the same attractor.
We say two oscillators j and k are phase locked with ratio \((n:m)\) for \((n,m)\in \mathbb {Z}^{2}\setminus\{(0,0)\}\) with no common factors of n and m, if there is an \(M>0\) such that
for all \(t>0\). The oscillators are frequency locked with ratio \((n:m)\) if
If we say they are simply phase (or frequency) locked, without explicit mention of the \((n:m)\) ratio, we are using the convention that they are \((1:1)\) phase (or frequency) locked. The definition of \(\varOmega_{j}\) means that if two oscillators are phase locked then they are frequency locked. The converse is not necessarily the case: two oscillators may be frequency locked but not phase locked if the phase difference \(n\theta_{j}(t)-m\theta_{k}(t)\) grows sublinearly with t.
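These definitions are straightforward to check on lifted (non-wrapped) phase trajectories. The sketch below uses synthetic trajectories, chosen purely for illustration: \(\theta_1(t)=t+0.1\sin t\) and \(\theta_2(t)=2t+0.3\) are \((2:1)\) phase locked with \(\varOmega_1=1\), \(\varOmega_2=2\), but not \((1:1)\) locked.

```python
import math

def omega(theta, dt):
    """Estimate Omega = lim theta(t)/t from a finite lifted phase trajectory."""
    return theta[-1] / (dt * (len(theta) - 1))

def phase_locked(th_j, th_k, n, m, bound):
    """(n:m) phase locking: |n*theta_j(t) - m*theta_k(t)| <= bound at all samples."""
    return all(abs(n * a - m * b) <= bound for a, b in zip(th_j, th_k))

dt, steps = 0.01, 100_000
ts = [i * dt for i in range(steps)]
th1 = [t + 0.1 * math.sin(t) for t in ts]   # frequency 1, with a bounded wobble
th2 = [2 * t + 0.3 for t in ts]             # frequency 2
```

Here \(2\theta_1 - \theta_2 = 0.2\sin t - 0.3\) stays bounded, whereas \(\theta_1 - \theta_2\) grows linearly, so the \((1:1)\) test fails.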
For the special case of globally coupled networks (\(w_{ij}=1/N\) for the system (32)), the system is \(S_{N} \times \mathbb {T}\) equivariant. By topological arguments, maximally symmetric solutions describing synchronous, splay, and a variety of cluster states exist generically for weak coupling [118]. The system (32) with global coupling is in itself an interesting subject of study in that it is of arbitrarily high dimension N but is effectively determined by the single function \(H(\varphi)\), which is computable from a single pair of oscillators. The system (and variants thereof) has been productively studied in thousands of papers since the seminal work of Kuramoto [5].
6.2 Dynamics of General Networks of Identical Phase Oscillators
The collective dynamics of phase oscillators have been investigated for a range of regular network structures, including linear arrays and rings with uni- or bi-directional coupling, e.g. [118, 120, 213, 223], and hierarchical networks [224]. In some cases the systems can be usefully investigated in terms of permutation symmetries of (32) with global coupling, for example \(\mathbb {Z}_{N}\) or \(\mathbb {D}_{N}\) for uni- or bi-directionally coupled rings. In other cases a variety of approaches have been developed and adapted to particular structures, though these have not in all cases been specifically applied to oscillator networks; some of these approaches are discussed in Sect. 3.3.
We recall that the form of the coupling in (32) is special in the sense that it assumes the interactions between two oscillators are independent of any third—this is called pairwise coupling [118, 120]. If there are degeneracies such as
which can appear when some of the Fourier components of H are zero, this can lead to degeneracies in the dynamics. For example [225], Theorem 7.1 in [118] shows that if H satisfies (37) for some \(m\geq2\) and N is a multiple of m then (32), with global coupling, will have m-dimensional invariant tori in phase space that are foliated by neutrally stable periodic orbits. This degeneracy will disappear on including either non-pairwise coupling or small but non-zero Fourier components in the expansion of H, which, as noted in [226], will typically be the case for interacting oscillators even if they are near a Hopf bifurcation.
We examine in more detail some of the phase-locked states that can arise in weakly coupled networks of identical phase oscillators described by (32). We define a \(1:1\) phase-locked solution to be of the form \(\theta_{i}(t)=\varphi_{i}+ \varOmega t\), where \(\varphi_{i}\) is a constant phase and Ω is the collective frequency of the coupled oscillators. Substitution into the averaged system (32) gives
for \(i=1,\ldots,N\). These N equations determine the collective frequency Ω and \(N-1\) relative phases, with the latter independent of ϵ.
It is interesting to compare the weak-coupling theory for phase-locked states with the analysis of LIF networks from Sect. 4.3. Equation (13) has an identical structure to that of Eq. (38) (for \(I_{i} = I\) for all i), so that the classification of solutions using group theoretic methods is the same in both situations. There are, however, a number of significant differences between the phase-locking equations (38) and (13). First, Eq. (13) is exact, whereas Eq. (38) is valid only to \(O(\epsilon)\) since it is derived under the assumption of weak coupling. Second, the collective period of oscillations Δ must be determined self-consistently in Eq. (13).
In order to analyse the local stability of a phase-locked solution \(\varPhi=(\varphi_{1},\ldots,\varphi_{N})\), we linearise the system by setting \(\theta_{i}(t)={\varphi}_{i}+\varOmega t+ \widetilde{\theta}_{i}(t)\) and expand to first order in \(\widetilde{\theta}_{i}\):
where
and \(H'(\varphi) = \mathrm{d}H(\varphi)/\mathrm{d}\varphi\). One of the eigenvalues of the Jacobian \(\widehat{\mathcal {H}}\) is always zero, and the corresponding eigenvector points in the direction of the uncoupled flow, that is, \((1,1,\ldots,1)\). The phase-locked solution will be stable provided that all other eigenvalues have a negative real part. We note that the Jacobian has the form of a graph-Laplacian mixing both anatomy and dynamics, namely it is the graph-Laplacian of the matrix with components \(w_{ij}H'(\varphi_{j}-\varphi_{i})\).
6.2.1 Synchrony
Synchrony (more precisely, exact phase synchrony), where \(\theta _{1}=\theta_{2}= \cdots= \theta_{N-1}=\theta_{N} = \varOmega t+ t_{0}\) for some fixed frequency Ω, is a classic example of a phase-locked state. Substitution into (32), describing a network of identical oscillators, shows that this solution has symmetry \(S_{N}\) and that Ω must satisfy the condition
One way for this to be true for all i is if \(H(0)=0\), which is the case, say, for \(H(\theta)=\sin(\theta)\) or for diffusive coupling, which is linear in the difference between two state variables so that \(H(0)=0\). The existence of synchronous solutions is also guaranteed if \(\sum_{j=1}^{N} w_{ij}\) is independent of i. This would be the case for global coupling where \(w_{ij}=1/N\), so that the system has permutation symmetry.
If the synchronous solution exists then the Jacobian is given by \(\epsilon H'(0)\mathcal{L}\) where \(\mathcal{L}\) is the graph-Laplacian with components \(\mathcal{L}_{ij}=\delta_{ij}\sum_{k} w_{ik}-w_{ij}\). We note that \(\mathcal{L}\) has one zero eigenvalue, with eigenvector \((1,1,\ldots,1,1)\). Hence if all the other eigenvalues of \(\mathcal{L}\) lie on one side of the imaginary axis then stability is solely determined by the sign of \(\epsilon H'(0)\). This would be the case for a weighted connectivity matrix with all positive entries, since the graph-Laplacian in this instance would be positive semidefinite. For example, for global coupling we have \(\mathcal{L}_{ij} = \delta_{ij}-N^{-1}\), and the (\(N-1\) degenerate) eigenvalue is +1. Hence the synchronous solution will be stable provided \(\lambda= \epsilon H'(0)<0\).
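The spectral structure of this graph-Laplacian is easy to verify for global coupling. The sketch below (pure Python, with \(N=6\) an arbitrary illustrative choice) builds \(\mathcal{L}_{ij}=\delta_{ij}\sum_k w_{ik}-w_{ij}\) for \(w_{ij}=1/N\) and confirms that \((1,\ldots,1)\) lies in its kernel while every zero-sum vector is an eigenvector with eigenvalue +1.

```python
# Graph-Laplacian in the synchrony Jacobian, L_ij = delta_ij*sum_k w_ik - w_ij,
# evaluated for global coupling w_ij = 1/N. N = 6 is an illustrative choice.
N = 6
w = [[1.0 / N] * N for _ in range(N)]

L = [[(sum(w[i]) if i == j else 0.0) - w[i][j] for j in range(N)] for i in range(N)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

kernel_test = matvec(L, [1.0] * N)            # expect the zero vector
v = [1.0, -1.0, 2.0, -2.0, 0.5, -0.5]         # an arbitrary zero-sum vector
eig_test = matvec(L, v)                       # expect v back (eigenvalue +1)
```

Since all non-zero eigenvalues coincide here, the sign of \(\epsilon H'(0)\) alone then decides the stability of synchrony, as stated in the text.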
6.2.2 Asynchrony
Another example of a phase-locked state is the purely asynchronous solution whereby all phases are uniformly distributed around the unit circle. This is sometimes referred to as a splay state, a discrete rotating wave with \(\mathbb{Z}_{N}\) symmetry, or a splay-phase state, and can be written \({\mathrm {d}\theta_{i}}/{\mathrm {d}t}=\varOmega\) with \(\theta_{i+1}-\theta_{i}=2\pi/N\) ∀i. Like the synchronous solution it will be present, but not necessarily stable, in networks with global coupling, with an emergent frequency that depends on H:
In this case the Jacobian takes the form
where \(\varGamma=\sum_{k} H'(2\pi k/N)\) and \(A_{n}=H'(2\pi n/N)\). Hence the eigenvalues are given by \(\lambda_{p} = \epsilon[\nu_{p} - \varGamma]/N\), \(p=0,\ldots,N-1\), where \(\nu_{p}\) are the eigenvalues of \(A_{nm}\): \(\sum_{m} A_{nm} a _{m}^{p} = \nu_{p} a_{n}^{p}\), where \(a_{n}^{p}\) denote the components of the pth eigenvector. This has solutions of the form \(a_{n}^{p} = \mathrm {e}^{2 \pi i np/N}\) so that \(\nu_{p} = \sum_{m} A_{m} \mathrm {e}^{2 \pi i m p/N}\), giving
and the splay state will be stable if \(\operatorname {Re}(\lambda_{p}) < 0\) \(\forall p \neq0\).
In the large-N limit (\(N \rightarrow\infty\)) we have the useful result that (for global coupling) network averages may be replaced by time averages:
for some 2π-periodic function \(F(t)=F(t+2\pi)\) (which can be established using a simple Riemann sum argument), with a Fourier series \(F(t) = \sum_{n} F_{n} \mathrm {e}^{i n t}\). Hence in the large-N limit the collective frequency of a splay state (global coupling) is given by \(\varOmega= \omega+ \epsilon H_{0}\), with eigenvalues
Hence a splay state is stable if \(-p \epsilon \operatorname {Im}H_{p} < 0\) \(\forall p \neq0\), where we have used the fact that since \(H(\theta)\) is real, then \(\operatorname {Im}H_{-p}= -\operatorname {Im}H_{p}\). As an example consider the case \(H(\theta) = \theta(\pi-\theta )(\theta- 2 \pi)\) for \(\theta\in[0,2\pi)\) and \(\epsilon>0\) (where H is periodically extended outside \([0,2\pi)\)). It is straightforward to calculate the Fourier coefficients (34) as \(H_{n}=6i/n^{3}\), so that \(-p \epsilon \operatorname {Im}H_{p} = -6 \epsilon/p^{2} < 0\) \(\forall p \neq0\). Hence the asynchronous state is stable. If we flip any one of the coefficients \(H_{m} \rightarrow -H_{m}\) then \(\operatorname {Re}\lambda_{m} >0\) and the splay state will develop an instability to an eigenmode that will initially destabilise the system in favour of an m-cluster; see Fig. 14.
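The quoted Fourier coefficients for this example are easy to confirm. The sketch below discretises the coefficient integral, using the convention \(H(\theta)=\sum_n H_n e^{in\theta}\), and checks both \(H_n = 6i/n^3\) and the negativity of \(-p\epsilon\operatorname{Im}H_p\) for \(\epsilon>0\).

```python
import cmath, math

def Hfun(theta):
    """The example interaction function H(theta) = theta*(pi-theta)*(theta-2*pi)."""
    return theta * (math.pi - theta) * (theta - 2 * math.pi)

def H_coeff(n, n_grid=20000):
    """Fourier coefficient H_n in the convention H(theta) = sum_n H_n exp(i*n*theta)."""
    d = 2 * math.pi / n_grid
    return sum(Hfun(k * d) * cmath.exp(-1j * n * k * d) for k in range(n_grid)) \
        * d / (2 * math.pi)

coeffs = {p: H_coeff(p) for p in (1, 2, 3)}
```

The computed coefficients are purely imaginary (the cubic is odd about θ = π), reproducing \(6i/n^3\) to within the quadrature error.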
6.2.3 Clusters for Globally Coupled Phase Oscillators
For reviews of the stability of cluster states (in which subsets of the oscillator population synchronise, with oscillators belonging to different clusters behaving differently) we refer the reader to [118, 120, 227]; here we use the notation of [228]. If a group of N oscillators is neither fully synchronised nor desynchronised it may be clustered. We say \(\mathcal {A}=\{ A_{1},\ldots,A_{M}\}\) is an M-cluster partition of \(\{1,\ldots,N\}\) if
where \(A_{p}\) are pairwise disjoint sets (\(A_{p}\cap A_{q}=\emptyset\) if \(p \neq q\)). Note that if \(a_{p}=\vert A_{p}\vert \) then
One can refer to this as a cluster of type \((a_{1},\ldots,a_{M})\). It is possible to show that any clustering can be realised as a stable periodic orbit of the globally coupled phase oscillator system [228] for a suitable choice of phase interaction function; more precisely, there is a coupling function H for the system (32), with global coupling, such that for any N and any given M-cluster partition \(\mathcal {A}\) of \(\{1,\ldots, N\}\) there is a linearly stable periodic orbit realising that partition (and all permutations of it). See also [229], where the authors consider clustering in this system with \(H(\varphi)=\sin M\varphi\). More generally, there are very many invariant subspaces corresponding to spatiotemporal clustering that we can characterise in the following theorem.
Theorem 6.1
(Theorem 3.1 in [118])
The subsets of \(\mathbb {T}^{N}\) that are invariant for (32), with global coupling, because of symmetries of \(S_{N}\times \mathbb {T}\) correspond to isotropy subgroups in the conjugacy class of
where \(N=mk\), \(k=k_{1}+\cdots+k_{\ell}\) and \(\times_{s}\) denotes the semidirect product. The points with this isotropy have ℓm clusters that are split into ℓ groups of m clusters of size \(k_{i}\). The clusters within these groups are cyclically permuted by a phase shift of \(2\pi/m\). The number of isotropy subgroups in this conjugacy class is \(N!/[m(k_{1}!\cdots k_{\ell}!)]\).
It is a nontrivial problem to discover which of these subspaces contain periodic solutions. Note that the in-phase case corresponds to \(\ell =m=1\), \(k_{1}=N\), while the splay phase corresponds to \(\ell=k_{1}=1\), \(m=N\). The stability of several classes of these solutions can be computed in terms of properties of \(H(\varphi)\); see for example Sect. 6.2.1 and Sect. 6.2.2, and for other classes of solution [118, 120, 228].
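The counting formula in the theorem is simple to tabulate. The sketch below implements the quoted expression \(N!/[m(k_1!\cdots k_\ell!)]\) directly (taking the formula on trust from the theorem) and reproduces the in-phase case (\(\ell=m=1\), \(k_1=N\), count 1) and the splay case (\(\ell=k_1=1\), \(m=N\), count \((N-1)!\)).

```python
from math import factorial

def isotropy_count(N, m, ks):
    """Number of isotropy subgroups in the conjugacy class, N!/[m*(k1!*...*kl!)],
    as quoted in the theorem; requires N = m*(k1 + ... + kl)."""
    assert N == m * sum(ks)
    denom = m
    for k in ks:
        denom *= factorial(k)
    return factorial(N) // denom

in_phase = isotropy_count(5, 1, [5])   # l = m = 1, k1 = N: full synchrony
splay = isotropy_count(5, 5, [1])      # l = k1 = 1, m = N: splay states
```

For instance, \(N=4\), \(m=2\), \((k_1,k_2)=(1,1)\) gives \(4!/[2\cdot 1!\cdot 1!] = 12\) conjugate isotropy subgroups.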
6.2.4 Generic Loss of Synchrony in Globally Coupled Identical Phase Oscillators
Bifurcation properties of the globally coupled oscillator system (32) on varying a parameter that affects the coupling \(H(\varphi)\) are surprisingly complicated because of the symmetries present in the system; see Sect. 3.6. In particular, the high multiplicity of the eigenvalues for loss of stability of the synchronous solution means:

Path-following numerical bifurcation packages such as AUTO, CONTENT, MatCont or XPPAUT need to be used with great care when applied to problems with \(N\geq3\) identical oscillators; they will typically not be able to find all the solution branches emanating from a state that loses stability.

A large number of branches with a range of symmetries may generically be involved in the bifurcation; indeed, there are branches with symmetries corresponding to all possible two-cluster states \(S_{k} \times S_{N-k}\).

Local bifurcations may have global bifurcation consequences owing to the presence of connections that are facilitated by the nontrivial topology of the torus [118, 230].

Branches of degenerate attractors such as heteroclinic attractors may appear at such bifurcations for \(N\geq4\) oscillators.
Hansel et al. [231] consider the system (32) with global coupling and phase interaction function of the form
for \((r,\alpha)\) fixed parameters; detailed bifurcation scenarios in the cases \(N=3,4\) are shown in [232]. As an example, Fig. 15 shows regions of stability of synchrony and splay-phase solutions, as well as robust heteroclinic attractors, discussed later in Sect. 7.
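The linear stability boundary of synchrony in such systems can be checked directly. The sketch below is our own construction and rests on two stated assumptions: the global coupling convention \(\mathrm{d}\theta_{i}/\mathrm{d}t=\omega+(\epsilon/N)\sum_{j}H(\theta_{j}-\theta_{i})\), for which the transverse eigenvalue of the synchronous state is \(-\epsilon H'(0)\), and a Hansel-type two-harmonic interaction \(H(\varphi)=-\sin(\varphi+\alpha)+r\sin 2\varphi\), a common choice in this literature. It compares the eigenvalue prediction with a direct simulation of a small perturbation of synchrony:

```python
import math
import random

def H(phi, r, alpha):
    # assumed two-harmonic (Hansel-type) phase interaction function
    return -math.sin(phi + alpha) + r * math.sin(2.0 * phi)

def Hprime0(r, alpha):
    # with eps > 0, synchrony is linearly stable iff H'(0) > 0 in this convention
    return -math.cos(alpha) + 2.0 * r

def sync_deviation(r, alpha, N=5, eps=1.0, T=30.0, dt=0.01, delta=1e-3, seed=1):
    """Max deviation from the mean phase after evolving a small perturbation."""
    random.seed(seed)
    theta = [delta * random.uniform(-1.0, 1.0) for _ in range(N)]
    for _ in range(int(T / dt)):
        d = [eps / N * sum(H(theta[j] - theta[i], r, alpha) for j in range(N))
             for i in range(N)]
        theta = [t + dt * x for t, x in zip(theta, d)]
    mean = sum(theta) / N
    return max(abs(t - mean) for t in theta)
```

For example, \(r=1\), \(\alpha=0.5\) gives \(H'(0)\approx1.12>0\) and the perturbation decays, whereas \(r=0\), \(\alpha=0.5\) gives \(H'(0)\approx-0.88<0\) and the perturbation grows to an O(1) desynchronised or clustered state.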
6.3 Phase Waves
The phase-reduction method has been applied to a number of important biological systems, including the study of travelling waves in chains of weakly coupled oscillators that model processes such as the generation and control of rhythmic activity in central pattern generators (CPGs) underlying locomotion [233, 234] and peristalsis in vascular and intestinal smooth muscle [213]. Related phase models have been motivated by the observation that synchronisation and waves of excitation can occur during sensory processing in the cortex [235]. In the former case the focus has been on dynamics on a lattice, whilst in the latter continuum models have been preferred. We now present examples of both types of model, focusing on phase wave solutions [236].
Phase Waves: A Lattice Model
The lamprey is an eel-like vertebrate which swims by generating travelling waves of neural activity that pass down its spinal cord. The spinal cord contains about 100 segments, each of which is a simple half-centre neural circuit capable of generating alternating contraction and relaxation of the body muscles on either side of the body during swimming. In a seminal series of papers, Ermentrout and Kopell carried out a detailed study of the dynamics of a chain of weakly coupled limit-cycle oscillators [213, 237, 238], motivated by the known physiology of the lamprey spinal cord. They considered N phase oscillators arranged on a chain with nearest-neighbour anisotropic interactions, as illustrated in Fig. 16, and identified a travelling wave as a phase-locked state with a constant phase difference between adjacent segments. The intersegmental phase differences are defined as \(\varphi_{i} = \theta_{i+1} - \theta_{i}\). If \(\varphi_{i} < 0\) then the wave travels from head to tail, whilst for \(\varphi_{i}>0\) the wave travels from the tail to the head. For a chain we set \(W_{ij} = \delta_{i-1,j}W_{-} + \delta_{i+1,j}W_{+}\) to obtain
where \(\theta_{i}\in \mathbb {R}/2\pi \mathbb {Z}\). Pairwise subtraction and substitution of \(\varphi_{i} = \theta_{i+1} - \theta_{i}\) leads to an \((N-1)\)-dimensional system for the phase differences,
for \(i=1, \ldots, N-1\), with boundary conditions \(H(\varphi_{0}) = 0 = H(\varphi_{N})\), where \(\Delta\omega_{i} = \omega_{i+1} - \omega_{i}\). There are at least two different mechanisms that can generate travelling wave solutions.
The first is based on the presence of a gradient of frequencies along the chain, that is, \(\Delta\omega_{i}\) has the same sign for all i, with the wave propagating from the high frequency region to the low frequency region. This can be established explicitly in the case of an isotropic, odd interaction function, \(W_{\pm}=1\) and \(H(-\varphi) = -H(\varphi)\), where we have
The fixed points \(\varPhi=(\varphi_{1},\ldots,\varphi_{N-1})\) satisfy the matrix equation \({ H} ( \varPhi) = { A}^{-1} { D}\), where \({ H} (\varPhi) = (H(\varphi_{1}), \ldots, H(\varphi_{N-1}))^{\top}\), \({ D} = (\Delta\omega_{1}, \ldots, \Delta\omega_{N-1})^{\top}\), and A is a tridiagonal matrix with elements \(A_{ii}=2\), \(A_{i,i+1}=A_{i+1,i}=-1\). For the sake of illustration suppose that \(H(\varphi) = \sin(\varphi+\sigma)\). Then a solution Φ will exist if every component of \({ A}^{-1} { D}\) lies between ±1. Let \(a_{0} = \max_{i}\{ \vert ({ A}^{-1} { D})_{i} \vert \}\). If \(a_{0} < 1\) then for each \(i=1,\ldots,N-1\) there are two distinct solutions \(\varphi_{i}^{\pm}\) in the interval \([0, 2\pi)\) with \(H'(\varphi_{i}^{-} ) > 0\) and \(H'(\varphi_{i}^{+} ) < 0\). In other words, there are \(2^{N-1}\) phase-locked solutions. Linearising about a phase-locked solution and exploiting the structure of the matrix A, it can be proven that only the solution \(\varPhi^{-}=(\varphi_{1}^{-}, \ldots,\varphi_{N-1}^{-})\) is stable. Assuming that the frequency gradient is monotonic, this solution corresponds to a stable travelling wave. When the gradient becomes too steep to allow phase locking (i.e. \(a_{0} > 1\)), two or more clusters of oscillators (frequency plateaus) tend to form that oscillate at different frequencies. Waves produced by a frequency gradient do not have a constant speed or, equivalently, constant phase lags along the chain.
Constant speed waves in a chain of identical oscillators can be generated by considering phase-locked solutions defined by \(\varphi_{i} = \varphi\) for all i, with a collective period of oscillation Ω determined using \({\mathrm {d}\theta_{1}}/{\mathrm {d}t} = \varOmega\) to give \(\varOmega=\omega_{1} + W_{+} H(\varphi)\). The steady-state equations are then \(\Delta\omega_{1} +W_{-} H(-\varphi) =0\), \(\Delta\omega_{N-1} - W_{+}H(\varphi) =0 \) and \(\Delta\omega_{i} =0\) for \(i=2,\ldots, N-2\). Thus, a travelling wave solution requires that all frequencies must be the same except at the ends of the chain. One travelling wave solution is given by \(\Delta \omega_{N-1}=0\) with \(\Delta\omega_{1} = -W_{-} H(-\varphi)\) and \(H(\varphi) = 0\). For the choice \(H(\varphi)=\sin(\varphi+\sigma)\) we have \(\varphi =-\sigma\) and \(\Delta\omega_{1} = -W_{-} \sin(2 \sigma)\). If \(2 \sigma< \pi\) then \(\Delta\omega_{1} =\omega_{2} - \omega_{1}<0\), so \(\omega_{1}\) must be larger than \(\omega_{2}\), and hence all the remaining \(\omega_{i}\), for a forward travelling wave to exist. Backward swimming can be generated by setting \(\Delta\omega_{1}=0\) and solving in a similar fashion.
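The frequency-gradient construction is straightforward to verify numerically. The following sketch is our own illustration; it assumes the isotropic odd-coupling form of the phase-difference equations described above, with \(H(\varphi)=\sin(\varphi+\sigma)\) and the tridiagonal matrix A with diagonal 2 and off-diagonal −1. It solves \(H(\varPhi)={A}^{-1}{D}\) on the stable branch and confirms that the result is a fixed point of the phase-difference dynamics:

```python
import math

def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(rhs)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0] if n > 1 else 0.0
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def chain_locked_state(omega, sigma=0.1):
    """Stable phase-locked differences for a chain with H(phi) = sin(phi + sigma)."""
    N = len(omega)
    D = [omega[i + 1] - omega[i] for i in range(N - 1)]  # frequency differences
    n = N - 1
    target = solve_tridiagonal([-1.0] * n, [2.0] * n, [-1.0] * n, D)  # A^{-1} D
    if max(abs(x) for x in target) >= 1.0:
        raise ValueError("frequency gradient too steep: no phase-locked state")
    # stable branch: the solution with H'(phi) = cos(phi + sigma) > 0
    return [math.asin(x) - sigma for x in target]
```

For a monotone gradient with frequencies decreasing from head to tail, every \(\varphi_{i}\) comes out negative, so the wave travels from head to tail, and the residuals of the locking equations vanish to machine precision.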
Phase Waves: A Continuum Model
There is solid experimental evidence for electrical waves in awake and aroused vertebrate preparations, as well as semi-intact and active invertebrate preparations, as nicely described by Ermentrout and Kleinfeld [235]. Moreover, these authors argue convincingly for the use of phase models in understanding waves seen in cortex, and speculate that such waves may serve to label simultaneously perceived features in the stimulus stream with a unique phase. More recently it has been found that cortical oscillations can propagate as travelling waves across the surface of the motor cortex of monkeys (Macaca mulatta) [239]. Given that, to a first approximation, the cortex is often viewed as being built from a dense reciprocally interconnected network of cortico-cortical axonal pathways, of which there are typically 10^{10} in a human brain, it is natural to develop a continuum phase model, along the lines described by Crook et al. [240]. These authors further incorporated axonal delays into their model to explain the seemingly contradictory result that synchrony is stable for short range excitatory coupling, but unstable for long range. To see how a delay-induced instability may arise we consider a continuum model of identical phase oscillators with space-dependent delays
where \(x \in D \subseteq \mathbb {R}\) and \(\theta\in \mathbb {R}/2\pi \mathbb {Z}\). This model is naturally obtained as the continuum limit of (32) for a network arranged along a line, with communication delays \(\tau(x,y)\) set by the speed v of the action potential that mediates the long range interaction between points x and y in the tissue. Here we have used the result that for weak coupling delays manifest themselves as phase shifts. The function W sets the anatomical connectivity pattern. It is convenient to assume that interactions are homogeneous and translation invariant, so that \(W(x,y) = W(x-y)\) with \(\tau(x,y)=\vert x-y\vert /v\), and we either assume periodic boundary conditions or take \(D=\mathbb {R}\).
Following [240] we construct travelling wave solutions of Eq. (40) for \(D=\mathbb {R}\) of the form \(\theta(x,t) = \varOmega t + \beta x\), with the frequency Ω satisfying the dispersion relation
When \(\beta=0\), the solution is synchronous. To explore the stability of the travelling wave we linearise (40) about \(\varOmega t + \beta x\) and consider perturbations of the form \(\mathrm {e}^{\lambda t} \mathrm {e}^{ipx}\), to find that travelling phase wave solutions are stable if \(\operatorname {Re}\lambda(p) <0\) for all \(p \neq0\), where
Note that the neutrally stable mode \(\lambda(0) =0\) represents constant phase shifts. For example, for the case \(H(\theta)=\sin\theta\) we have
where
and \(\beta_{\pm}= \pm\beta - 1/v\). A plot of the region of wave stability for the choice \(W(y)=\exp(-\vert y\vert )/2\) and \(\epsilon>0\) in the \((\beta,v)\) plane is shown in Fig. 17. Note that the synchronous solution \(\beta=0\) is unstable for small values of v. For a discussion of more realistic forms of phase interaction, describing synaptic interactions, see [241].
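The delay-induced instability of synchrony at small v can be reproduced by direct quadrature. The sketch below is our own; it assumes the linearisation of (40) about \(\theta=\varOmega t+\beta x\) in the form \(\operatorname {Re}\lambda(p) = \epsilon\int W(s)\cos(\beta s - \vert s\vert /v)(\cos ps - 1)\,\mathrm{d}s\) for \(H(\theta)=\sin\theta\), and evaluates it for the exponential kernel \(W(y)=\exp(-\vert y\vert )/2\):

```python
import math

def re_lambda(p, beta, v, eps=1.0, L=40.0, n=100000):
    """Trapezoidal estimate of Re lambda(p) for W(s) = exp(-|s|)/2 and H = sin."""
    h = 2.0 * L / n
    total = 0.0
    for k in range(n + 1):
        s = -L + k * h
        w = 0.5 * math.exp(-abs(s))
        val = w * math.cos(beta * s - abs(s) / v) * (math.cos(p * s) - 1.0)
        total += (0.5 if k in (0, n) else 1.0) * val
    return eps * h * total

def synchrony_stable(v, ps=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Synchrony (beta = 0) is stable if Re lambda(p) < 0 on the sampled modes."""
    return all(re_lambda(p, 0.0, v) < 0.0 for p in ps)
```

Only a sample of wavenumbers is checked here (a full test needs all p). For a slow conduction velocity such as \(v=0.5\) one finds an unstable band of modes, e.g. \(\operatorname {Re}\lambda(2) \approx 0.33 > 0\), while for \(v=10\) the sampled modes are all stable, consistent with the delay-induced instability of synchrony at small v.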
6.3.1 Phase Turbulence
For appropriate choice of the anatomical kernel and the phase interaction function, continuum models of the form (40) can also support weak turbulent solutions reminiscent of those seen in the Kuramoto–Sivashinsky (KS) equation. The KS equation generically describes the dynamics near long wavelength primary instabilities in the presence of appropriate (translational, parity and Galilean) symmetries, and it is of the form
where \(\alpha,\beta,\gamma>0\). For a further discussion of this model see [76]. For the model (40) with decaying excitatory coupling and purely sinusoidal phase coupling, simulations on a large domain show a marked tendency to generate phase slips and spatiotemporal pattern shedding, resulting in a loss of spatial continuity of \(\theta(x,t)\). However, Battogtokh [242] has shown that a mixture of excitation and inhibition with higher harmonics in the phase interaction can counteract this tendency and allow the formation of turbulent states. To see how this can arise, consider an extension of (40) that allows for a mixing of spatial scales and nonlinearities in the form
where we drop the consideration of axonal delays. Using the analysis of Sect. 6.3 the synchronous wave solution will be stable provided \(\lambda(p) < 0\) for all \(p \neq0\) where
After introducing the complex function \(z(x,t) = \exp(i \theta(x,t))\) and writing the phase interaction functions as Fourier series of the form \(H_{\mu}(\theta) = \sum_{n} H_{n}^{\mu} \mathrm {e}^{i n \theta}\) then we can rewrite (42) as
where
The form above is useful for computational purposes, since \(\psi_{n}^{\mu}\) can easily be computed using a fast Fourier transform (exploiting its convolution structure). Battogtokh [242] considered the choice \(M=3\) with \(H_{1}(\theta)=H_{2}(\theta)=\sin(\theta+ \alpha)\), \(H_{3}(\theta)=\sin(2\theta+ \alpha)\) and \(W_{\mu}(x)=\gamma_{\mu}\exp(-\gamma_{\mu}\vert x\vert )/2\) with \(\gamma _{2}=\gamma_{3}\). In this case \(\widehat{W}_{\mu}(p) = \gamma_{\mu}^{2}/(\gamma _{\mu}^{2}+p^{2})\), so that
By choosing a mixture of positive and negative coupling strengths the spectrum can easily show a band of unstable wavenumbers from zero up to some maximum, as shown in Fig. 18. Indeed, this shape of spectrum is guaranteed when \(\sum_{\mu}\epsilon _{\mu}H_{\mu}'(0)>0\) and \(\sum_{\mu}\epsilon_{\mu}H_{\mu}'(0)/\gamma_{\mu}^{2}<0\). Similarly the KS equation (41) has a band of unstable wavenumbers between zero and one (with the most unstable wavenumber at \(1/\sqrt{2}\)). For the case that all the spatial scales \(\gamma _{\mu}^{-1}\) are small compared to the system size we may develop a long wavelength argument to obtain local models for \(\psi _{n}^{\mu}\). To explain this we first construct the Fourier transform \(\widehat{\psi}_{n}^{\mu}(p,t) = \widehat{W}_{\mu}(p) f_{n}(p,t)\), where \(f_{n}\) is the Fourier transform of \(z^{n}\), and use the expansion \(\widehat {W}_{\mu}(p) \simeq1-\gamma_{\mu}^{-2} p^{2} +\gamma_{\mu}^{-4} p^{4} - \cdots\). After inverse Fourier transforming we find
Noting that \(H_{1}^{1}=H_{1}^{2}=H_{2}^{3}=\exp(i \alpha)/(2i) \equiv\varGamma\), with all other Fourier coefficients zero, (43) yields
where \(\varOmega=\omega+ \sum_{\mu} \epsilon_{\mu}H_{\mu}(0)\). Expanding (44) to second order gives \(\theta_{t} = \varOmega - \alpha\theta_{xx} +\beta(\theta_{x})^{2}\), where \(\alpha=-\sum_{\mu}\epsilon_{\mu}H_{\mu}'(0) \gamma_{\mu}^{-2}\) and \(\beta =\sum_{\mu}\epsilon_{\mu}H_{\mu}''(0) \gamma_{\mu}^{-2}\). Going to higher order yields fourth-order terms and we recover an equation of KS type, with the coefficient of \(-\theta_{xxxx}\) given by \(\gamma=-\sum_{\mu}\epsilon_{\mu}H_{\mu}'(0) \gamma_{\mu}^{-4}\), so that to this order \(\lambda(p)=\alpha p^{2}-\gamma p^{4}\). To generate phase turbulence we thus require \(\alpha>0\), which is also a condition required to generate a band of unstable wavenumbers, and \(\beta,\gamma>0\). A direct simulation of the model with the parameters for Fig. 18, for which \(\alpha,\beta,\gamma>0\), shows the development of a phase turbulent state. This is represented in Fig. 19, where we plot the absolute value of the complex function \(\varPsi= (\epsilon_{1} W_{1}+\epsilon_{2} W_{2} )\otimes z + \epsilon_{3} W_{3} \otimes z^{2}\).
7 Heteroclinic Attractors
In addition to dynamically simple periodic attractors with varying degrees of clustering, the emergent dynamics of coupled phase oscillator systems such as (32) can be remarkably complex, even in the case of global coupling, and similar effects can appear in a wide range of coupled systems. In the case of globally coupled phase oscillators, the dynamical complexity depends only on the phase interaction function H and the number of oscillators N. Chaotic dynamics [243] can appear in four or more globally coupled phase oscillators for phase interaction functions of sufficient complexity. We focus now on attractors that are robust and apparently prevalent in many such systems: robust heteroclinic attractors.
In a neuroscience context such attractors have been investigated under several related names, including slow switching [231, 244–246], where the system evolves towards an attractor that displays slow switching between cluster states on a time scale determined by the noise, heteroclinic networks [137, 232, 247] and winnerless competition [248–250]. The effects can be seen in “microscale” models of small numbers of neurons or in “macroscale” models of cognitive function. In all these cases there are a number of possible patterns of activity that compete with each other, but such that each pattern is unstable to some perturbations that take it to another pattern; this can be contrasted with winner-takes-all competition, where there is attraction to an asymptotically stable pattern.
These attractors obtain their dynamical structure from the presence of invariant subspaces for the dynamics that allow the possibility of robust connections between saddles. These subspaces may be symmetry-induced fixed-point subspaces, spaces with a given clustering, subspaces forced by multiplicative coupling or subspaces forced by assumptions on the nature of the coupling. In all cases, there will be a number of dynamically simple nodes, usually equilibria or periodic orbits, say \(x_{i}\), \(i=1,\ldots,k\), each of which is of saddle type. These nodes have unstable manifolds that, within some invariant subspace, limit to other nodes within the network, usually because there is a robust (transverse) saddle-to-sink connection between the nodes within some invariant subspace; see [251]. It can be verified that such heteroclinic networks can be attracting if, in some sense, the rate of expansion away from the nodes is not as strong as the rate of contraction towards the nodes; see [129] for some precise results in this direction. Figure 20 illustrates some of the ingredients for a robust heteroclinic attractor.
7.1 Robust Heteroclinic Attractors for Phase Oscillator Networks
Hansel et al. [231] considered the dynamics of (32) with global coupling and phase interaction function of the form (39) for \((r,\alpha)\) fixed parameters. For large N, they find an open region in parameter space where typical attractors are heteroclinic cycles that show slow switching between states in which the clustering is into two clusters of macroscopic size. This dynamics is examined in more depth in [244], where simulations from typical initial conditions show a long and intermittent transient to a two-cluster state that, surprisingly, is unstable. This is a paradox, because only a low-dimensional subset of initial conditions (the stable manifold) should converge to a saddle. The resolution of this paradox is a numerical effect: as the dynamics approaches the heteroclinic cycle where the connection is in a clustered subspace, there can be numerical rounding into the subspace. For weak perturbation of the system by additive noise, [244] find that the heteroclinic cycle is approximated by a roughly periodic transition around the cycle whose approximate period scales with the logarithm of the noise amplitude.
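The numerical rounding effect is easy to demonstrate in double precision. In the minimal sketch below (our own illustration: just two identical, mutually attracting oscillators with \(H(\varphi)=\sin\varphi\)), the true phase difference decays like \(\mathrm{e}^{-2t}\) and so is astronomically small but never zero, yet the computed difference collapses to the resolution of the floating-point grid, after which the integrator cannot leave the synchronised subspace:

```python
import math

def final_phase_difference(T=60.0, dt=0.01, omega=1.0, delta0=0.1):
    """Euler integration of two identical, mutually attracting phase oscillators."""
    th1, th2 = 0.0, delta0
    for _ in range(int(T / dt)):
        d1 = omega + math.sin(th2 - th1)
        d2 = omega + math.sin(th1 - th2)
        th1 = math.fmod(th1 + dt * d1, 2.0 * math.pi)
        th2 = math.fmod(th2 + dt * d2, 2.0 * math.pi)
    return abs(th2 - th1)
```

The returned difference is pinned at, or within a few ulps of, zero, illustrating how a simulation can appear to converge to a clustered state that is actually a saddle; adding small noise, as in [244], is one way to counteract this artefact.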
The bifurcations that give rise to heteroclinic attractors in this system on varying \((r, \alpha)\) are quite complex even for small N. As discussed in [232], one can only find attracting robust heteroclinic attractors in (32), (39) for \(N\geq4\): in this case Fig. 15 shows a region where robust heteroclinic attractors of the type illustrated in Fig. 21 appear. A trajectory approaching such a network will spend much of its time near a cluster state with two groups of two oscillators each. Each time there is a connection between the states, one of the groups will break clustering for a short time, and over a longer period the clusters will alternate between breaking up and keeping together. Qualitatively similar heteroclinic attractors can be found, for example, in coupled Hodgkin–Huxley type limit-cycle oscillators with delayed synaptic coupling, as detailed in [251] and illustrated in Fig. 22.
The heteroclinic attractors that appear for \(N>4\) can have more complex structures. For \(N=5\) this is investigated in [253, 254] for (32), (39) and in [252] for the phase interaction function
where α, β and r are parameters. Figure 23 illustrates a trajectory of (32) with global coupling and phase interaction function (45) as a raster plot for five phase oscillators with parameters \(r=0.2\), \(\alpha=1.8\), \(\beta=2.0\) and \(\omega=1.0\), with the addition of noise of amplitude 10^{−5}. Observe that there is a dynamic change of clustering over the course of the time series, with a number of switches taking place between cluster states of the type
where y, g and b represent different relative phases (coloured “yellow”, “green” and “blue” in Fig. 24), and other symmetrically related cluster states. One can check that the group orbit of states with this clustering gives 30 symmetrically related cluster states for the system; details of the connections are shown in Fig. 24. All cluster states connect together to form a single large heteroclinic network that is an attractor for typical initial conditions [252]. Figure 23 illustrates the possible switchings between phase differences near one particular cluster state for the heteroclinic network in this case. The dynamics of this system can be used for encoding a variety of inputs, as discussed in [255], where it is shown that a constant heterogeneity of the natural frequencies between oscillators in this system leads to a spatiotemporal encoding of the heterogeneities. The sequences produced by the system can be used by a similar system with adaptive dynamics to learn a spatiotemporally encoded state [228].
Robust heteroclinic attractors also appear in a range of coupled phase oscillator models where the coupling is not global (all-to-all) but still preserves enough invariant subspaces for the connections to remain robust. For example, [222] study the dynamics of a network “motif” of four coupled phase oscillators and find heteroclinic attractors that are “ratchets”, i.e. robust heteroclinic networks that wind preferentially around the phase space in one direction; this means that under the influence of small perturbations, phase slips can appear in only one direction.
7.2 Winnerless Competition and Other Types of Heteroclinic Attractor
Heteroclinic connections play a key role in describing winnerless competition dynamics [248, 249]. This dynamics is usually associated with a firing rate model of coupled neurons of “Lotka–Volterra” form
where \(x_{i}(t)\) is an instantaneous firing rate of a neuron or group of neurons, \(\lambda_{i}\) is a self-excitation term and \(A_{ij}\) represents coupling between the \(x_{i}\), though it is also possible to consider generalisations [134]. These models were originally developed for predator–prey interactions of species in an ecosystem. Having found wide use in population ecology, they have more recently been applied to evolutionary game dynamics [256], including applications in economics. Since the seminal paper of May [257] it has been recognised that (46) can have robust heteroclinic attractors for an open set of parameter choices \(\lambda_{i}\) and \(A_{ij}\), as long as at least three species are involved. Indeed, the system can show a wide range of dynamics, such as “winner-takes-all” equilibria, where there is a stable equilibrium with \(x_{i}>0\) and \(x_{j}=0\) for \(j\neq i\) (i is the “winner”), “cooperative” equilibria, where several of the \(x_{i}\) are nonzero, as well as nontrivial periodic or chaotic dynamics. The particular case of “winnerless competition” gives attractors consisting of chains or networks of saddles joined by connections that are robust because the absence of a species (or, in our case, the lack of firing of one neuron or group of neurons), \(x_{i}=0\), is preserved by the dynamics of the model (46).
The simplest case of this appears in a ring of \(N=3\) coupled neurons with dynamics
for \(\alpha+\beta>2\) and \(0<\alpha<1\) [257], corresponding to cyclic inhibition of the neurons in one direction around the ring and cyclic excitation in the other direction. This “rock–paper–scissors” type of dynamics leads to winnerless competition, which has been applied in a variety of more complex models of neural systems. The local behaviour near connections in such heteroclinic attractors has been called a “stable heteroclinic channel” [250] and used to model a variety of low-level and high-level neural behaviours, including random sequence generation, information dynamics, encoding of odours and working memory. We refer the reader to the reviews [133, 134, 250].
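A minimal simulation of such a ring shows each neuron winning in turn. The sketch below is our own illustration: the parameter values and the small regularising floor, which plays the role of the noise that sets a finite switching period, are our choices rather than values from [257]:

```python
def winnerless(T=300.0, dt=0.01, a=0.5, b=2.0, floor=1e-6):
    """Euler integration of the N = 3 winnerless-competition ring
    dx_i/dt = x_i (1 - x_i - a x_{i+1} - b x_{i-1}), indices mod 3,
    with a + b > 2 and 0 < a < 1; `floor` mimics a small noise level."""
    x = [0.8, 0.1, 0.05]
    peaks = [0.0, 0.0, 0.0]  # running maximum of each firing rate
    for _ in range(int(T / dt)):
        dx = [x[i] * (1.0 - x[i] - a * x[(i + 1) % 3] - b * x[(i - 1) % 3])
              for i in range(3)]
        x = [max(floor, xi + dt * di) for xi, di in zip(x, dx)]
        peaks = [max(p, xi) for p, xi in zip(peaks, x)]
    return x, peaks
```

Each component's running maximum approaches one, i.e. every neuron is transiently the “winner”. Without the floor the residence times near each saddle lengthen on every pass, which is the signature of an attracting heteroclinic cycle.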
Analogous behaviour has been found in a range of other coupled systems, for example [137], and in delayed pulse-coupled oscillators [167–170, 245, 258]. Recent work has also considered an explicit constructive approach to heteroclinic networks, realising arbitrary directed networks as a heteroclinic attractor of a coupled cell system [247] or as a closely related excitable network attractor that appears at a bifurcation of the heteroclinic attractor [259].
More complex but related dynamical behaviour has been studied under the names of “chaotic itinerancy” (see for example [260]), “cycling chaos” [131], “networks of Milnor attractors” [261] and “heteroclinic cycles for chaotic oscillators” [262]. It has been suggested that these and similar models are useful for modelling of the functional behaviour of neural systems [263].
Because heteroclinic attractors are quite singular in their dynamical behaviour (averages of observables need not converge, and there is a great deal of sensitivity of the long-term dynamics to noise and system heterogeneity), it is important to consider the effect of noise and/or heterogeneities on the dynamics. This leads to a finite average transition time between states, determined by the level of noise and/or heterogeneity (which may be due to inputs to the system) and the local dynamics; see for example [264]. Another useful feature of heteroclinic attractors is that they allow one to model the “input–output” response of the system to a variety of inputs.
8 Stochastic Oscillator Models
Noise is well known to play a constructive role in the neural encoding of natural stimuli, leading to increased reliability or regularity of neuronal firing in single neurons [265, 266] and across populations [267]. From a mathematical perspective it is natural to consider how noise may affect the reduction to a phase oscillator description. Naively, one may simply consider the addition of noise to a deterministic phase oscillator model to generate a stochastic differential equation. Indeed, models of this type have been studied extensively at the network level to understand noise-induced first- and second-order phase transitions, and new phenomena such as noise-induced synchrony [268–270] or asynchrony [271], and noise-induced turbulence [272]. We refer the reader to the review by Lindner [273] for a comprehensive discussion. More recently, Schwabedal and Pikovsky have extended the foundations of deterministic phase descriptions to irregular, noisy oscillators (based on the constancy of mean first return times) [274], Ly and Ermentrout [275] and Nakao et al. [276] have built analytical techniques for studying weak noise forcing, and Moehlis has developed techniques to understand the effect of white noise on the period of an oscillator [277].
At the network level (global coupling) a classic paper examining the role of external noise in IF populations, using a phase description, is that of Kuramoto [278], who analysed the onset of collective oscillations. Without recourse to a phase reduction, it is worth mentioning that Medvedev has been pioneering a phase–amplitude approach to studying the effects of noise on the synchronisation of coupled stochastic limit-cycle oscillators [194, 279], and that Newhall et al. have developed a Fokker–Planck approach to understanding cascade-induced synchrony in stochastically driven IF networks with pulsatile coupling and Poisson spike-train external drive [280]. More recent work on pairwise synchrony in networks of heterogeneous coupled noisy phase oscillators receiving correlated and independent noise can be found in [281]. However, note that even in the absence of synaptic coupling, two or more neural oscillators may become synchronised by virtue of the statistical correlations in their noisy input streams [282–284].
8.1 Phase Reduction of a Planar System with State-Dependent Gaussian White Noise
For clarity of exposition let us consider the phase reduction of a planar system described by \({\mathrm {d}x}/{\mathrm {d}t}=F(x)+p(x) \xi(t)\), where \(\xi(t)\) is white Gaussian noise such that \(\langle\xi(t) \rangle=0\) and \(\langle\xi(t) \xi(s) \rangle= 2D \delta(t-s)\), where \(\langle \cdot \rangle\) denotes averaging over realisations of ξ, and \(D>0\) scales the noise intensity. We employ a Stratonovich interpretation of the noise (such that the chain rule of ordinary calculus holds). In the absence of noise we shall assume that the system has a stable \(2 \pi/\omega\)-periodic limit-cycle solution, with a phase that satisfies \(\mathrm{d}{\theta }/{\mathrm{d}t} = \omega\). For weak noise perturbations the state point will leave the cycle, though it will stay in some neighbourhood, which we denote by \(\mathcal{U}\). To extend the notion of phase off the cycle we let θ be a smooth function of x such that \(\langle\nabla_{x} \theta , F(x) \rangle=\omega\) holds for any point \(x \in\mathcal{U}\). We shall denote the other coordinate in \(\mathcal{U}\) by ρ, and assume that there exists a smooth coordinate transformation \(x \rightarrow(\theta, \rho)\). For a noise-perturbed oscillator, in the new coordinates we have
where we have introduced \(h=\langle\nabla_{x} \theta, p \rangle\), \(f=\langle\nabla_{x} \rho, F \rangle\) and \(g=\langle\nabla_{x} \rho, p \rangle\). Note that the full coordinate transformation \((\theta,\rho)=(\theta (x),\rho(x))\) is not prescribed here, though it is common to use one such that ρ can be interpreted as a distance from the cycle. Thus, although results may be formally developed for the system (47), they cannot be directly interpreted in terms of a given model until the full coordinate transformation taking one from \(x \in \mathbb {R}^{2}\) to \((\theta,\rho)\) is given.
One can transform (47) into a stochastic phase–amplitude equation in the Itō sense, where it reads
where the subscripts θ and ρ denote partial derivatives with respect to θ and ρ, respectively. Using the Itō form we may construct a Fokker–Planck equation for the timedependent probability distribution \(Q(t,\theta,\rho)\) as
with periodic boundary condition \(Q(t,0,\rho)=Q(t,2 \pi,\rho)\).
When \(D=0\) the steady state distribution is given by \(Q_{0}(\theta,\rho )=\delta(\rho)/(2 \pi)\). For small noise amplitude D we expect \(Q_{0}\) to still localise near \(\rho=0\) [285], say over a region \(-\rho_{c} < \rho< \rho _{c}\). In this case it is natural to make the approximation that for large t, \(Q=0=\partial Q /\partial\rho\) at \(\rho=\pm\rho_{c}\). Now introduce the marginal distribution
Following [286] we can integrate (48) over the interval \(I=[-\rho_{c},\rho_{c}]\) and generate a Fokker–Planck equation for P. The last three terms in (48) vanish after integration due to the boundary conditions, so that we are left with
We now expand about \(\rho=0\) to give \(h(\theta,\rho)=Z(\theta)+\rho h_{\rho}(\theta,0)+\cdots\) and \(g(\theta,\rho)=g(\theta,0) +\rho g_{\rho}(\theta,0)+\cdots\), where \(Z(\theta)\) is identified as the phase response \(\langle\nabla_{x} \theta, p \vert _{\rho=0} \rangle\). In the limit of small D where \(Q_{0} \simeq\delta(\rho)/(2 \pi)\) we note that, for an arbitrary function \(R(\theta)\),
Making use of the above gives the Fokker–Planck equation:
where \(Y(\theta) = h_{\rho}(\theta,0)g(\theta,0)\). Hence, the corresponding Itō equation is
while the Stratonovich version is
Equations (50) and (51) are the stochastic phase oscillator descriptions for a limit cycle driven by weak white noise. These make it clear that naively adding noise to the phase description misses not only multiplication by the iPRC but also an additional term \(Y(\theta)\) that contains information regarding the amplitude response of the underlying limit-cycle oscillator.
We are now in a position to calculate the steady state probability distribution \(P_{0}(\theta)\) and use this to calculate the moments of the phase dynamics. Consider (49) with the boundary condition \(P(t,0)=P(t,2 \pi)\), and set \(P_{t}=0\). Adopting a Fourier representation for \(P_{0}\), Z and Y as \(P_{0}(\theta ) = \sum_{n} P_{n} \mathrm {e}^{in \theta}\), \(Z(\theta) = \sum_{n} Z_{n} \mathrm {e}^{in \theta }\), \(Y(\theta) = \sum_{n} Y_{n} \mathrm {e}^{in \theta}\), allows us to obtain a set of equations for the unknown amplitudes \(P_{l}\) as
for some constant K. For \(D=0\) we have \(P_{0}=K\), and after enforcing normalisation we may set \(K=1/(2 \pi)\). For small D we may then substitute \(P_{0}\) into (52) and work to next order in D to obtain an approximation for the remaining amplitudes, \(l \neq0\), in the form
Using this we may reconstruct the distribution \(P_{0}(\theta)\) for small D as
The mean frequency of the oscillator is defined by \({\varOmega} = \lim_{T \rightarrow\infty} T^{-1} \int_{0}^{T} \theta'(t) \,\mathrm{d}t\). This can be calculated by replacing the time average with the ensemble average. For an arbitrary 2π-periodic function R we set \(\lim_{T \rightarrow \infty} T^{-1} \int_{0}^{T} R(t) \,\mathrm{d}t = \int_{0}^{2 \pi} R(\theta) P_{0}(\theta ) \,\mathrm{d}\theta\). Using (53) and (50) we obtain
where we have used the fact that \(\langle Z(\theta) \xi(t) \rangle= \langle Z(\theta) \rangle\langle\xi(t) \rangle=0\). We may also calculate the phase diffusion D̃ as
where we use the fact that \(\langle\xi(t) \rangle=0\) and \(\langle \xi (t) \xi(s) \rangle= 2D \delta(t-s)\). This recovers a well-known result of Kuramoto [5].
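These formulas can be checked numerically. Below is a minimal Euler–Maruyama sketch of the Itō equation (50), using the illustrative choices \(Z(\theta)=\sin\theta\), \(Y(\theta)=\cos\theta\) and \(\omega=1\) (assumptions for demonstration, not derived from a particular oscillator); since \(\langle Y \rangle = 0\) for this choice, the ensemble mean frequency should remain close to ω for small D.

```python
import numpy as np

def simulate_phase(omega=1.0, D=0.01, T=200.0, dt=0.01,
                   n_paths=200, seed=0):
    """Euler-Maruyama for the Ito phase equation
    dtheta = [omega + D*Y(theta)] dt + Z(theta) dW,
    with <xi(t)xi(s)> = 2D delta(t-s), so dW has variance 2D*dt.
    Z(theta) = sin(theta) and Y(theta) = cos(theta) are illustrative."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_paths)           # unwrapped phases, theta(0) = 0
    amp = np.sqrt(2.0 * D * dt)
    for _ in range(int(T / dt)):
        Z = np.sin(theta)
        Y = np.cos(theta)
        theta += (omega + D * Y) * dt + amp * Z * rng.standard_normal(n_paths)
    return theta / T                    # per-path time-averaged frequency

freqs = simulate_phase()
Omega = freqs.mean()                    # ensemble estimate of the mean frequency
```

Replacing Z and Y with the iPRC and amplitude-response data of a specific oscillator gives the corresponding first-order corrections to Ω.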
8.2 Phase Reduction for Noise with Temporal Correlation
A recent paper by Teramae et al. [287] shows that when one considers noise described by an Ornstein–Uhlenbeck (OU) process with a finite correlation time then this can interact with the attraction time scale of the limit cycle and give fundamentally different results when compared to Gaussian white noise (which has a zero correlation time). This observation has also been independently made in [286]. Both approaches assume weak noise, though [286] makes no assumptions about relative time scales, and is thus a slightly more general approach than that of [287]. Related work by Goldobin et al. [288] for noise \(\eta(t)\) with zero mean \(\langle\eta(t) \rangle= 0\) and prescribed autocorrelation function \(C(\tau)=\langle\eta(\tau) \eta (0) \rangle\) yields the reduced Stratonovich phase equation,
where
where λ is the average (linearised) rate of attraction to the limit cycle. Note that for \(C(\tau)=2 D \delta(\tau)\), \(\widetilde{Y}(\theta) = {Y}(\theta)\) and (54) reduces to (51) as expected. To lowest order in the noise strength the steady state probability distribution will simply be \(P_{0}(\theta)=1/(2 \pi)\). Therefore to lowest noise order the mean frequency is determined from an ensemble average as
where the last term comes from using the Itō form of (54) and the subscript 0 notation is defined as in (53). The phase-diffusion coefficient at lowest noise order is given by
Let us now consider the example of OU noise so that \(C(\tau)=D \gamma \exp(-\gamma \vert \tau \vert )\). Furthermore let us take the simultaneous limit \(\gamma\rightarrow\infty\) (zero correlation time scale) and \(\lambda \rightarrow\infty\) (infinitely fast attraction), such that the ratio \(\alpha=\lambda/\gamma\) is constant. In this case we have from (55) that
Hence, when the correlation time of the noise is much smaller than the decay time constant we have \(\alpha=0\) and we recover the result for white Gaussian noise. In the other extreme, \(\alpha\rightarrow\infty\), where the amplitude of the limit cycle decays much faster than the correlation time of the noise, Ỹ vanishes and the reduced phase equation is simply \({\mathrm {d}\theta}/{\mathrm {d}t}=\omega+ Z(\theta) \xi (t)\), as would be obtained using the standard phase-reduction technique without paying attention to the stochastic nature of the perturbation.
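The correlation structure used here is easy to reproduce in simulation. The sketch below samples an OU process with \(C(\tau)=D\gamma \mathrm{e}^{-\gamma|\tau|}\) via its exact one-step update and checks the stationary variance \(C(0)=D\gamma\) and the exponential decay; the parameter values are arbitrary illustrations.

```python
import numpy as np

def ou_noise(D=0.5, gamma=2.0, dt=0.01, n=200_000, seed=1):
    """Sample an OU process eta with C(tau) = D*gamma*exp(-gamma*|tau|),
    using the exact one-step update of the stationary process."""
    rng = np.random.default_rng(seed)
    var = D * gamma                       # stationary variance C(0)
    a = np.exp(-gamma * dt)               # one-step decay factor
    kicks = np.sqrt(var * (1.0 - a * a)) * rng.standard_normal(n - 1)
    eta = np.empty(n)
    eta[0] = np.sqrt(var) * rng.standard_normal()
    for k in range(n - 1):
        eta[k + 1] = a * eta[k] + kicks[k]
    return eta

eta = ou_noise()
var_hat = eta.var()                       # should be close to D*gamma = 1
lag = 50                                  # lag tau = 0.5, so corr ~ exp(-1)
corr_hat = np.mean(eta[:-lag] * eta[lag:]) / var_hat
```

Since \(\int_{-\infty}^{\infty} C(\tau)\,\mathrm{d}\tau = 2D\), the limit \(\gamma\rightarrow\infty\) at fixed D recovers the white-noise correlation \(2D\delta(\tau)\).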
9 Low-Dimensional Macroscopic Dynamics and Chimera States
The self-organisation of large networks of coupled neurons into macroscopic coherent states, such as observed in phase locking, has inspired a search for equivalent low-dimensional dynamical descriptions. However, the mathematical step from microscopic to macroscopic dynamics has proved elusive in all but a few special cases. For example, neural mass models of the type described in Sect. 2.6 only track mean activity levels and not the higher-order correlations of an underlying spiking model. Only in the thermodynamic limit of a large number of neurons firing asynchronously (producing null correlations) are such rate models expected to provide a reduction of the microscopic dynamics. Even here the link from spike to rate is often phenomenological rather than rigorous. Unfortunately only in some rare instances has it been possible to analyse spiking networks directly (usually under some restrictive assumption such as global coupling) as in the spike-density approach [289], which makes heavy use of the numerical solution of coupled PDEs. Recently however, exact results for globally pulse-coupled oscillators described by the Winfree model [219] have been obtained by Pazó and Montbrió [290], making use of the Ott–Antonsen (OA) ansatz. The OA ansatz was originally used to find solutions on a reduced invariant manifold of the Kuramoto model [278], and essentially assumes that the distribution of phases has a simple unimodal shape, capable of describing synchronous (peaked) and asynchronous (flat) distributions, though not clustered states (multi-peak phase distributions); see below for a more detailed discussion. The major difference between the Winfree and Kuramoto phase oscillator models is that the former has interactions described by a phase-product structure and the latter a phase-difference structure.
9.1 Ott–Antonsen Reduction for the Winfree Model
The Winfree model is described in Sect. 6 as a model for weakly globally pulse-coupled biological oscillators, and can support incoherence, frequency locking, and oscillator death when \(P(\theta)=1+\cos\theta\) and \(R(\theta)=\sin\theta\) [291]. We note, however, that the same model is exact when describing nonlinear IF models described by a single voltage equation, and that in this case we do not have to restrict attention to weak coupling. Indeed the OA ansatz has proved equally successful in describing both the Winfree network with a sinusoidal PRC [290] and a network of theta neurons [292]. A theta neuron is formally equivalent to a QIF neuron with infinite positive threshold and infinite negative reset values. The PRC of a QIF neuron can be computed using (27), and for the case described by (1) with \(\tau=1\) and \(f(v)=v^{2}\) and infinite threshold and reset it takes the form \(R(\theta)=a (1-\cos \theta)\) with \(a=1/\sqrt{I}\) for \(\theta\in[0,2 \pi)\) (which is the shape expected for an oscillator near a SNIC bifurcation). We shall now focus on this choice of PRC and a pulsatile coupling that we write in the form
where we have introduced a convenient Fourier representation for the periodic function P. We now consider the large N limit in (31) and let \(\rho (\theta | \omega,t) \,\mathrm{d}\theta\) be the fraction of oscillators with phases between θ and \(\theta+ \mathrm{d}\theta\) and natural frequency ω at time t. The dynamics of the density ρ is governed by the continuity equation (expressing the conservation of oscillators)
where the mean-field drive is \(h(t)= \lim_{N \rightarrow\infty} N^{-1}\sum_{j} P(\theta_{j})\). Boundary conditions are periodic in the probability flux, namely \(\rho (0 | \omega,t) v (0,t) = \lim_{\theta\rightarrow2 \pi} \rho(\theta | \omega,t) v(\theta,t)\). A further reduction in dimensionality is obtained for the choice that the distribution of frequencies is described by a Lorentzian \(g(\omega )\) with
for fixed Δ and \(\omega_{0}\) (controlling the width and mean of the distribution, respectively), which has simple poles at \(\omega_{\pm}=\omega_{0} \pm i \Delta\).
A generalised set of order parameters is defined as
The integration over ω can be done using a contour in the lower half complex plane so that \(Z_{m}(t) = \langle \mathrm {e}^{i m \theta}, \rho(\theta | \omega_{-}, t) \rangle\), where we have introduced the inner product \(\langle u(\theta), v (\theta) \rangle= (2 \pi)^{-1}\int_{0}^{2 \pi } \overline{u(\theta)} v(\theta) \,\mathrm{d}\theta\). The OA ansatz assumes that the density ρ can be written in a restricted Fourier representation as
where cc stands for complex conjugate. Substitution into the continuity equation (57) and balancing terms in \(\mathrm {e}^{i m \theta}\) show that α must obey
Moreover, using the inner product structure of \(Z_{m}\) we easily see that \(Z_{m}(t) = [\alpha(\omega_{-},t)]^{m}\). Thus the Kuramoto order parameter \(Z_{1} \equiv R \mathrm {e}^{i \varPsi}\) is governed by (59) with \(\omega =\omega_{-}\), yielding
To calculate the mean-field drive h we note that it can be written as
Hence, we have explicitly that \(h=h(R,\varPsi)\) with
The planar system of equations defined by (60) and (61) can be readily analysed using numerical bifurcation analysis.
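Although (60) and (61) refer to the Winfree network, the flavour of such reduced planar systems is easily conveyed with the best-known OA reduction, that of the Kuramoto model with Lorentzian frequencies, where the order parameter obeys \(\dot{z} = (i\omega_{0}-\Delta)z + (K/2)(z-\bar{z}z^{2})\) and the stationary amplitude satisfies \(R^{2} = 1-2\Delta/K\) for \(K>2\Delta\). A forward-Euler sketch (parameter values illustrative):

```python
import numpy as np

def oa_kuramoto(K=2.0, Delta=0.5, omega0=1.0, z0=0.1 + 0.0j,
                T=100.0, dt=0.001):
    """Forward-Euler integration of the OA-reduced Kuramoto model
    dz/dt = (i*omega0 - Delta)*z + (K/2)*(z - conj(z)*z**2)."""
    z = z0
    for _ in range(int(T / dt)):
        dz = (1j * omega0 - Delta) * z + 0.5 * K * (z - np.conj(z) * z ** 2)
        z = z + dt * dz
    return z

z = oa_kuramoto()
R = abs(z)   # should approach sqrt(1 - 2*Delta/K) = sqrt(0.5) for K=2, Delta=0.5
```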
Recent work by Montbrió et al. [293] has shown that QIF networks can also be analysed without having to invoke the OA ansatz. Indeed they show that choosing a Lorentzian distribution for the voltages also leads to a reduced mean-field description that tracks the population firing rate and the mean membrane voltage. Interestingly they also relate these variables back to the complex Kuramoto order parameter via a conformal mapping. A treatment of QIF networks with gap-junction coupling has also recently been achieved by Laing [294], using the OA ansatz, showing how this can destroy certain spatiotemporal patterns, such as localised “bumps”, and create others, such as travelling waves and spatiotemporal chaos. The OA ansatz has also proved remarkably useful in understanding nontrivial solutions such as chimera states (where a subpopulation of oscillators synchronises in an otherwise incoherent sea).
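The structure underlying the ansatz is easy to test numerically: the restricted Fourier representation (58) corresponds to a wrapped Cauchy distribution of phases, for which the generalised order parameters satisfy \(Z_{m}=\alpha^{m}\). The sketch below verifies this identity by quadrature (the value of α is arbitrary, and the convention \(Z_{m}=\int_{0}^{2\pi} \mathrm{e}^{im\theta}\rho(\theta)\,\mathrm{d}\theta\) is used for illustration):

```python
import numpy as np

def wrapped_cauchy(theta, r, psi):
    """Phase density implied by the OA ansatz with alpha = r*exp(i*psi):
    a wrapped Cauchy (Poisson kernel) distribution."""
    return (1 - r ** 2) / (2 * np.pi * (1 + r ** 2 - 2 * r * np.cos(theta - psi)))

def order_parameter(m, r, psi, n=4096):
    """Z_m computed by periodic trapezoid quadrature of e^{im theta} * rho."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    rho = wrapped_cauchy(theta, r, psi)
    return np.sum(np.exp(1j * m * theta) * rho) * (2 * np.pi / n)

alpha = 0.6 * np.exp(0.3j)
Z1 = order_parameter(1, 0.6, 0.3)   # should equal alpha
Z2 = order_parameter(2, 0.6, 0.3)   # should equal alpha**2
```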
9.2 Chimera States
Phase or cluster synchronised states in systems of identical coupled oscillators have distinct limitations as descriptions of neural systems where not just phase but also frequency clearly play a part in the processing, computation and output of information. Indeed, one might expect that for any coupled oscillator system that is homogeneous (in the sense that any oscillators can be essentially replaced by any other by a suitable permutation of the oscillators), the only possible dynamical states are homogeneous in the sense that the oscillators behave in either a coherent or an incoherent way. This expectation, however, is not justified—there can be many dynamical states that cannot easily be classified as coherent or incoherent, but that seem to have a mixture of coherent and incoherent regions. Such states have been given the name “chimera state” by Abrams and Strogatz [295, 296] and have been the subject of intensive research over the past five years. For reviews of chimera state dynamics we refer the reader to [297, 298].
Kuramoto and Battogtokh [299, 300] investigated continuum systems of oscillators of the form
where \(\theta(x,t)\) represents the phase at location \(x \in D \subseteq \mathbb {R}\), the kernel \(G(u)=\kappa\exp(-\kappa \vert u\vert )/2\) represents a nonlocal coupling and ω, α, κ are constants. Interestingly this model is precisely in the form presented in Sect. 6.3 as Eq. (40) for an oscillatory model of cortex, although here there are no space-dependent delays and the interaction function is \(H(\theta) = \sin(\theta-\alpha)\). Kuramoto and Battogtokh found, for a range of parameters near \(\alpha=\pi/2\) and carefully selected initial conditions, that the oscillators can split into two regions in x, one region which is frequency synchronised (or coherent) while the other region shows a nontrivial dependence of frequency on location x. An example of a chimera state is shown in Fig. 25.
Note that a discretisation of (62) to a finite set of N coupled oscillators is
where \(\theta_{i}\in[0,2\pi)\) represents the phase at location \(i=1,\ldots ,N\) and the coupling matrix \(K_{ij}= G(\vert i-j\vert /N)/N\) is the discretised interaction kernel (assuming a domain of length 1). Using different kernels, \(G(u) = \exp(\kappa\cos(2\pi u))\) and an approximation \(G(u) = 1 + \kappa\cos(2\pi u)\) for small κ, Abrams and Strogatz [295] identified similar behaviour and [296] discussed a limiting case of parameters such that the continuum system provably has chimera solutions. The OA reduction discussed in Sect. 9.1 allows an exact reduction of oscillator networks of this form and in the continuum limit this can give a solvable low-order system whose solutions include a variety of chimera states [297]. It is useful to note that when \(\alpha=\pi/2\), pure cosine coupling results in an integrable system [301], such that disordered initial states will remain disordered. Thus α determines a balance between spontaneous order and permanent disorder.
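A minimal simulation of the discretised ring (63) is sketched below, with an exponential kernel and a Kuramoto–Battogtokh-style localised-disorder initial condition; the parameter values are illustrative, and finding a genuine chimera typically requires careful tuning of α, the kernel and the initial data, together with longer runs.

```python
import numpy as np

def ring_sim(N=256, kappa=4.0, alpha=1.457, omega=0.0,
             T=50.0, dt=0.05, seed=2):
    """Discretised nonlocal ring:
    dtheta_i/dt = omega - sum_j K_ij sin(theta_i - theta_j + alpha),
    K_ij = G(|i-j|/N)/N with G(u) = kappa*exp(-kappa*u)/2, periodic distance."""
    rng = np.random.default_rng(seed)
    x = np.arange(N) / N
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 1.0 - d)                    # distance on the ring
    K = kappa * np.exp(-kappa * d) / (2.0 * N)
    # localised disorder in the initial phases (Kuramoto-Battogtokh style)
    theta = 6.0 * np.exp(-30.0 * (x - 0.5) ** 2) * (rng.random(N) - 0.5)
    for _ in range(int(T / dt)):
        dtheta = omega - (K * np.sin(theta[:, None] - theta[None, :] + alpha)).sum(axis=1)
        theta = theta + dt * dtheta
    # local order parameter over windows of 2w+1 neighbours
    w = 8
    z = np.exp(1j * theta)
    R_loc = np.abs(np.array([z[np.arange(i - w, i + w + 1) % N].mean()
                             for i in range(N)]))
    return theta, R_loc

theta, R_loc = ring_sim()
```

Regions with `R_loc` near 1 are locally coherent; a chimera shows a plateau of high `R_loc` coexisting with a region of depressed `R_loc`.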
However, it seems that chimera states are much more “slippery” in finite oscillator systems than in the continuum limit. In particular, Wolfrum and Omel’chenko [302] note that for finite approximations of the ring (62) by N oscillators, with a mixture of local and nearest-R-neighbour coupling corresponding to (63) with a particular choice of coupling matrix \(K_{ij}\), chimera states apparently only exist as transients. However, the lifetime of the typical transient apparently grows exponentially with N. Thus, at least for some systems of the form (63), chimeras appear to be a type of chaotic saddle. This corresponds to the fact that the boundaries between the regions of coherent and incoherent oscillation fluctuate apparently randomly over a long time scale. These fluctuations lead to wandering of the incoherent region as well as change in size of the region. Eventually these fluctuations appear to result in typical collapse to a fully coherent oscillation [302].
Although this appears to be the case for chimeras for (63), there are networks, such as coupled groups of oscillators [303] or two-dimensional lattices [304], where chimera attractors can appear. It is not clear what will cause a chimera to be transient or not, or indeed exactly what types of chimera-like states can appear in finite oscillator networks. A suggestion of [305] is that robust neutrally stable chimeras may be due to the special type of single-harmonic phase interaction function used in (62), (63).
More recent work includes investigations of chimeras (or chimera-like states) in chemical [306] or mechanical oscillator networks [307]; chimeras in systems of coupled oscillators other than phase oscillators have been investigated in many papers, for example in Stuart–Landau oscillators [299, 308, 309], Winfree oscillators [290] and models with inertia [310]. Other recent work includes discussion of feedback control to stabilise chimeras [311], investigations of chimeras with multiple patches of incoherence [312], and multi-cluster and travelling chimera states [313].
In a neural context chimeras have also been found in pulse-coupled LIF networks [314], and hypothesised to underlie coordinated oscillations in unihemispheric slow-wave sleep, whereby one brain hemisphere appears to be inactive while the other remains active [315].
10 Applications
We briefly review a few examples where mathematical frameworks are being applied to neural modelling questions. These cover functional and structural connectivity in neuroimaging, central pattern generators (CPGs) and perceptual rivalry. There are many other applications we do not review, for example deep brain stimulation protocols [316] or modelling of epileptic seizures where network structures play a key role [71].
10.1 Functional and Structural Connectivity in Neuroimaging
Functional connectivity (FC) refers to the temporal synchronisation of neural activity in spatially remote areas. It is widely believed to be significant for the integrative processes in brain function. Anatomical or structural connectivity (SC) plays an important role in determining the observed spatial patterns of FC. However, there is clearly a role to be played by the dynamics of the neural tissue. Even in a globally connected network we would expect this to be the case, given our understanding of how synchronised solutions can lose stability for weak coupling. Thus it becomes useful to study models of brain-like systems built from neural mass models (such as the Jansen–Rit model of Sect. 2.6), and ascertain how the local behaviour of the oscillatory node dynamics can contribute to global patterns of activity. For simplicity, consider a network of N globally coupled identical Wilson–Cowan [65] neural mass models:
for \(i=1,\ldots, N\). Here \((x_{i},y_{i}) \in \mathbb {R}^{2}\) represents the activity in each of a pair of coupled neuronal population models, f is a sigmoidal firing rate given by \(f(x) = 1/(1+\mathrm {e}^{-x})\) and \((P,Q)\) represent external drives. The strength of connections within a local population is prescribed by the coefficients \(c_{1},\ldots, c_{4}\), which we choose as \(c_{1} = c_{2} = c_{3} = 10\) and \(c_{4} = 2\) as in [43]. For \(\epsilon=0\) it is straightforward to analyse the dynamics of a local node and find the bifurcation diagram in the \((P,Q)\) plane as shown in Fig. 26. Moreover, for \(\epsilon\ll1\) we may invoke weak-coupling theory to describe the dynamics of the full network within the oscillatory regime bounded by the two Hopf curves shown in Fig. 26. From the theory of Sect. 6.2.1 we would expect the synchronous solution to be stable if \(\epsilon H'(0) >0\). Taking \(\epsilon>0\) we can consider \(H'(0)\) as a proxy for the robustness of synchrony. The numerical construction of this quantity, as in [317], predicts that there will be regions in the \((P,Q)\) plane associated with a breakdown of FC (where \(H'(0) <0\)), as indicated by points a and c in Fig. 26. This highlights the role that local node dynamics can have on emergent network dynamics. Moreover, we see that simply by tuning the local dynamics to be deeper within the existence region for oscillatory solutions we can, at least for this model, enhance the degree of FC. It would be interesting to explore this simple model further for more realistic brain-like connectivities, along the lines described in [318].
Moreover, given that this would preclude the existence of the synchronous state by default (since we would neither have \(H(0)=0\) nor would \(\sum_{j} w_{ij}\) be independent of i) then it would be opportune to explore the use of the recent ideas in [163, 319] to understand how the system could organise into a regime of remote synchronisation whereby pairs of nodes with the same network symmetry could synchronise. For related work on Wilson–Cowan networks with some small dynamic noise see [320], though here the authors construct a phase oscillator network by linearising around an unstable fixed point, rather than use the notion of phase response.
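For readers wishing to experiment, a sketch of the globally coupled Wilson–Cowan network is given below. The placement of the coupling term in the excitatory input and the values of \((P,Q)\) are illustrative assumptions and are not tuned to the bifurcation diagram of Fig. 26.

```python
import numpy as np

def wc_network(N=8, eps=0.05, P=1.0, Q=-4.0, T=100.0, dt=0.01, seed=4):
    """Globally coupled Wilson-Cowan nodes (sketch):
    x_i' = -x_i + f(c1*x_i - c2*y_i + P + (eps/N)*sum_j x_j),
    y_i' = -y_i + f(c3*x_i - c4*y_i + Q),
    with f(u) = 1/(1 + exp(-u)). Coupling placement and (P, Q) illustrative."""
    c1, c2, c3, c4 = 10.0, 10.0, 10.0, 2.0
    f = lambda u: 1.0 / (1.0 + np.exp(-u))
    rng = np.random.default_rng(seed)
    x = rng.random(N)
    y = rng.random(N)
    for _ in range(int(T / dt)):
        coupling = eps * x.mean()
        dx = -x + f(c1 * x - c2 * y + P + coupling)
        dy = -y + f(c3 * x - c4 * y + Q)
        x, y = x + dt * dx, y + dt * dy
    return x, y

x, y = wc_network()
```

Sweeping \((P,Q)\) through the oscillatory region and measuring a phase-coherence order parameter across the nodes would give a direct numerical counterpart of the \(H'(0)\) prediction.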
10.2 Central Pattern Generators
CPGs are (real or notional) neural subsystems that are implicated in the generation of spatiotemporal patterns of activity [321], in particular for driving relatively autonomous activities such as locomotion [322–324] or involuntary activities such as heartbeat, respiration or digestion [325]. These systems are assumed to be behind the creation of the range of walking or running patterns (gaits) that appear in different animals [326]. The analysis of phase locking provides a basis for understanding the behaviour of many CPGs; for a nice overview see the review articles by Marder and Bucher [327] and Hooper [328].
In some cases, such as the leech (Hirudo medicinalis) heart or Caenorhabditis elegans locomotion, the neural circuitry is well studied. For more complex neural systems and in more general cases CPGs are still a powerful conceptual tool to construct notional minimal neural circuitry needed to undertake a simple task. In this notional sense they have been extensively investigated to design control circuits for actuators for robots; see for example the review [329]. Recent work in this area includes robots that can reproduce salamander walking and swimming patterns [330]. Since the control of motion of autonomous “legged” robots is still a very challenging problem in realtime control, one hope of this research is that nature’s solutions (for example, how to walk stably on two legs) will help inspire robotic ways of doing this.
CPGs are called upon to produce one or more rhythmic patterns of actuation; in the particular problem of locomotion, a likely CPG is one that will produce the range of observed rhythms of muscle actuation, and ideally the observed transitions between them. For an early discussion of design principles for modelling CPGs, see [46]. This is an area of modelling where consideration of symmetries as in Sect. 3.6 has been usefully applied to constrain the models. For example, [331] examine models for generating the gaits in a range of vertebrate animals, from those with two legs (humans) through those with four (quadrupeds such as horses may use a wide range of gaits: walk, trot, pace, canter, gallop) to larger numbers of legs (myriapods such as centipedes). Insects make use of six legs for locomotion while other invertebrates such as centipedes and millipedes have a large number of legs that are to some extent independently actuated. As an example, [332] consider a schematic CPG of 2n oscillators for animals with n legs, as shown in Fig. 27(a). The authors use symmetry arguments and Theorem 3.1 to draw a number of model-independent conclusions from the CPG structure.
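The role of symmetry can be illustrated with a toy phase model: a unidirectional ring of N oscillators with a phase lag built into the coupling supports a stable rotating wave with successive lags of \(2\pi/N\), loosely reminiscent of a quadruped walk when \(N=4\). This sketch is a schematic illustration, not the CPG model of [332].

```python
import numpy as np

def ring_cpg(N=4, omega=1.0, K=1.0, gamma=np.pi / 2,
             T=200.0, dt=0.01, seed=3):
    """Unidirectional ring of N phase oscillators:
    dtheta_i/dt = omega + K*sin(theta_{i+1} - theta_i - gamma).
    With gamma = pi/2 the rotating wave with successive lags of pi/2
    is a stable phase-locked state (since cos(lag - gamma) > 0)."""
    rng = np.random.default_rng(seed)
    theta = np.arange(N) * (np.pi / 2) + 0.3 * rng.standard_normal(N)
    for _ in range(int(T / dt)):
        diff = np.roll(theta, -1) - theta          # theta_{i+1} - theta_i
        theta = theta + dt * (omega + K * np.sin(diff - gamma))
    return np.mod(np.roll(theta, -1) - theta, 2 * np.pi)

lags = ring_cpg()   # all four successive lags should settle near pi/2
```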
One can also view CPGs as a window into more fundamental problems of how small groups of neurons coordinate to produce a range of spatiotemporal patterns. In particular, it is interesting to see how the observable structure of the connections influences the range and type of dynamical patterns that can be produced. For example, [333] consider simple three-cell “motif” networks of bursters and classify a range of emergent spatiotemporal patterns in terms of the coupling parameters. Detailed studies [334] investigate properties such as multistability and bifurcation of different patterns and the influence of inhomogeneities in the system. This is done by investigating return maps for the burst timings relative to each other.
The approach of [135, 136] discussed in Sect. 3.12 provides an interesting framework to discuss CPG dynamics in cases where the connection structure is given but not purely related to symmetries of the network. For example, [141] use that formalism to understand possible spatiotemporal patterns that arise in lattices, while [100] relates synchrony properties of small motif networks to spectral properties of the adjacency matrix.
10.3 Perceptual Rivalry
Many neural systems process information—they need to produce outputs that depend on inputs. If the system effectively has no internal degrees of freedom then this will give a functional relationship between output and input so that any temporal variation in the output corresponds to a temporal variation of the input. However, this is not the case for all but the simplest systems and often outputs can vary temporally unrelated to the input. A particularly important and well-studied system that is a model for autonomous temporal output is perceptual rivalry, where conflicting information input to a neural system results, not in a rejection or merging of the information, but in an apparently random “flipping” between possible “rival” states (or percepts) of perception. This nontrivial temporal dynamics of the perception appears even in the absence of a temporally varying input. The best studied example of this type is binocular rivalry, where conflicting inputs are simultaneously made to each eye. It is widely reported by subjects that perception switches from one eye to the other, with a mean frequency that depends on a number of factors such as the contrast of the image [335]. More general perceptual rivalry, often used in “optical illusions” such as ambiguous figures—the Rubin vase, the Necker cube—shows similar behaviour, with percepts shifting temporally between possible interpretations.
Various approaches [336] have been taken to construct nonlinear dynamical models of the generation of a temporal shifting between possible percepts, such as competition models [337], bifurcation models, models based on neural circuitry [338], and conceptual models [339] based on network structures [340] or on heteroclinic attractors [341].
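A minimal competition-style model of the kind cited above couples two rate variables through mutual inhibition and adds slow adaptation; alternation then arises without any time-varying input. The sketch below uses illustrative parameters (not fitted to any experiment or to a specific model from the literature).

```python
import numpy as np

def rivalry(I=0.8, beta=1.2, g=1.0, tau_a=20.0, T=300.0, dt=0.01):
    """Two rate variables with mutual inhibition and slow adaptation:
    u1' = -u1 + f(I - beta*u2 - g*a1),  tau_a*a1' = -a1 + u1,
    and symmetrically for (u2, a2); f is a steep sigmoid. Returns the
    number of dominance switches observed over the run."""
    f = lambda v: 1.0 / (1.0 + np.exp(-v / 0.05))
    u1, u2, a1, a2 = 0.6, 0.4, 0.0, 0.0
    dom = []                                    # sampled dominance signal
    for k in range(int(T / dt)):
        du1 = -u1 + f(I - beta * u2 - g * a1)
        du2 = -u2 + f(I - beta * u1 - g * a2)
        da1 = (-a1 + u1) / tau_a
        da2 = (-a2 + u2) / tau_a
        u1, u2 = u1 + dt * du1, u2 + dt * du2
        a1, a2 = a1 + dt * da1, a2 + dt * da2
        if k % 10 == 0:
            dom.append(1.0 if u1 > u2 else -1.0)
    dom = np.array(dom)
    return int(np.sum(dom[1:] != dom[:-1]))

switches = rivalry()   # several alternations are expected over T = 300
```

Here the release mechanism (adaptation eventually undermines the dominant percept, since \(g > I\) is possible as \(a_{1}\to1\)) sets the dominance durations, in the spirit of the competition models cited above.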
11 Discussion
As with any review we have had to leave out many topics that will be of interest to the reader. In particular we have confined ourselves to “cell”- and “system-level” dynamics rather than “subcellular” behaviour of neurons. We briefly mention some other active areas of mathematical research relevant to the science of rhythmic neural networks. Perhaps the most obvious area that we have not covered in any depth is that of single unit (cell or population) forcing, which itself is a rather natural starting point for gaining insights into network behaviour and how best to develop mathematical tools for understanding response [342, 343]. For a general perspective on mode-locked responses to periodic forcing see [344] and [76], and for the role of spatially correlated input in generating oscillations in feedforward neuronal networks see [345]. Other interesting recent work includes uncovering some surprising nonlinear dynamical properties of feedforward networks [346, 347].
For a more recent discussion of the importance of mode-locking in auditory neuroscience see [348, 349], and in motor systems, see [350]. However, it is well to note that not much is known about nonlinear systems with three or more interacting frequencies [351], as opposed to periodically forced systems where the notions of the Farey tree and the devil’s staircase have proven especially useful. We have also painted the notion of synchrony with a broad mathematical brush, and not discussed more subtle notions of envelope locking that may arise between coupled bursting neurons (where the within-burst patterns may desynchronise) [352]. This is especially relevant to studies of synchronised bursting [353] and the emergence of chaotic phenomena [354]. Indeed, we have said very little about coupling between systems that are chaotic, such as described in [355], the emergence of chaos in networks [356, 357] or chaos in symmetric networks [243].
The issue of chaos is also relevant to notions of reliability, where one is interested in the stability of spike trains against fluctuations. This has often been discussed in relation to stochastic oscillator forcing rather than fluctuations arising deterministically in a high-dimensional setting [267, 358–360]. Of course, the sparsity of firing in cortex means that it may not even be appropriate to treat neurons as oscillators. However, some of the ideas developed for oscillators can be extended to excitable systems, as described in [361–363]. As well as this it is important to point out that neurons are not point processors, and have an extensive dendritic tree, which can also contribute significantly to emergent rhythms as described in [241, 364], as well as couple strongly to glial cells. Although the latter do not fire spikes, they do show oscillations of membrane potential [365]. At the macroscopic level it is also important to acknowledge that the amplitude of different brain waves can be significantly affected by neuromodulation [366], say, through release of norepinephrine, serotonin and acetylcholine, and the latter is also thought to be able to modulate the PRC of a single neuron [367].
This review has focussed mainly on the embedding of weakly coupled oscillator theory within a slightly wider framework. This is useful in setting out some of the neuroscience-driven challenges for the mathematical community in establishing inroads into a more general theory of coupled oscillators. Heterogeneity is one issue that we have mainly sidestepped, and we recall that the weakly coupled oscillator approach requires frequencies of individual oscillators to be close. This can have a strong effect on emergent network dynamics [368], and it is highly likely that even a theory with heterogeneous phase response curves [369] will have little bearing on real networks. The equation-free coarse-graining approach may have merit in this regard, though it is a numerically intensive technique [370].
We suggest a good project for the future is to develop a theory of strongly coupled heterogeneous networks based upon the phase–amplitude coordinate system described in Sect. 5.2, with the challenge to develop a reduced network description in terms of a set of phase–amplitude interaction functions, and an emphasis on understanding the new and generic phenomena associated with nontrivial amplitude dynamics (such as clustered phase–amplitude chaos and multiple attractors). To achieve this one might further tap into recent ideas for classifying emergent dynamics based upon the group of structural symmetries of the network. This can be computed as the group of automorphisms for the graph describing the network. For many real-world networks, this can be decomposed into direct and wreath products of symmetric groups [113]. This would allow for the use of tools from computational group theory [371] and open up a way to classify the generic forms of behaviour that a given network may exhibit using the techniques of equivariant bifurcation theory.
Notes
Indeed, the human brain consists of the order of \(10^{11}\) neurons, but only of the order of 100–1000 cell types (http://neuromorpho.org), meaning there is a very high replication of cells that differ only in their location and exact morphology.
In this section we assume little about the dynamics of the nodes—they may be “chaotic oscillators”.
Abbreviations
DDE: Delay differential equation
IF: Integrate and fire (model for neural oscillator)
iPRC: Infinitesimal phase response curve
ISI: Interspike interval
FHN: FitzHugh–Nagumo equation (model for neural oscillator)
HH: Hodgkin–Huxley equation (model for neural oscillator)
LIF: Leaky integrate and fire (model for neural oscillator)
ML: Morris–Lecar equation (model for neural oscillator)
MSF: Master stability function
ODE: Ordinary differential equation
PDE: Partial differential equation
PRC: Phase response curve
QIF: Quadratic integrate and fire (model for neural oscillator)
SDE: Stochastic differential equation
SHC: Stable heteroclinic channel
SNIC: Saddle node on an invariant circle (bifurcation)
WLC: Winnerless competition
References
Huygens C. Oeuvres complètes de Christiaan Huygens. La Haye: Martinus Nijhoff; 1888. doi:10.5962/bhl.title.21031.
Rayleigh JWSB. The theory of sound. Vol. 2. London: Macmillan; 1896.
Van Der Pol B. Forced oscillations in a circuit with nonlinear resistance. Lond Edinb Dublin Philos Mag J Sci, Ser 7. 1927;3:65–80.
Wiener N. Cybernetics; or control and communication in the animal and the machine; 1948.
Kuramoto Y. Chemical oscillations, waves and turbulence. Heidelberg: Springer; 1984.
Michaels DC, Matyas EP, Jalife J. Mechanisms of sinoatrial pacemaker synchronization: a new hypothesis. Circ Res. 1987;61:704–14.
Liu C, Weaver DR, Strogatz SH, Reppert SM. Cellular construction of a circadian clock: period determination in the suprachiasmatic nuclei. Cell. 1997;91:855–60.
Ermentrout GB. An adaptive model for synchrony in the firefly Pteroptyx malaccae. J Math Biol. 1991;29:571–85.
Wiesenfeld K, Colet P, Strogatz SH. Frequency locking in Josephson arrays: connection with the Kuramoto model. Phys Rev E. 1998;57:1563–9.
Néda Z, Ravasz E, Vicsek T, Brechet Y, Barabási AL. Physics of the rhythmic applause. Phys Rev E. 2000;61:6987–92.
Ha SY, Jeong E, Kang MJ. Emergent behaviour of a generalized Vicsek-type flocking model. Nonlinearity. 2010;23:3139–56.
Paley DA, Leonard NE, Sepulchre R, Grunbaum D, Parrish JK. Oscillator models and collective motion. IEEE Control Syst Mag. 2007;27:89–105.
Assenza S, Gutiérrez R, Gómez-Gardeñes J, Latora V, Boccaletti S. Emergence of structural patterns out of synchronization in networks with competitive interactions. Sci Rep. 2011;1:99.
Dörfler F, Bullo F. Synchronization in complex networks of phase oscillators: a survey. Automatica. 2014;50:1539–64.
Velazquez JLP. Brain research: a perspective from the coupled oscillators field. NeuroQuantology. 2006;4:55–165.
Singer W. Synchronization of cortical activity and its putative role in information processing and learning. Annu Rev Physiol. 1993;55:349–74.
Uhlhaas PJ, Pipa G, Lima B, Melloni L, Neuenschwander S, Nikolic D, Lin SC. Neural synchrony in cortical networks: history, concept and current status. Front Integr Neurosci. 2009;3:17.
Wang XJ. Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev. 2010;90:1195–268.
Fell J, Axmacher N. The role of phase synchronization in memory processes. Nat Rev Neurosci. 2011;12:105–18.
Wehr M, Laurent G. Odour encoding by temporal sequences of firing in oscillating neural assemblies. Nature. 1996;384:162–6.
Laurent G, Stopfer M, Friedrich RW, Rabinovich MI, Volkovskii A, Abarbanel HD. Odor encoding as an active, dynamical process: experiments, computation, and theory. Annu Rev Neurosci. 2001;24:263–97.
Buzsáki G, Draguhn A. Neuronal oscillations in cortical networks. Science. 2004;304:1926–9.
Haken H, Kelso JAS, Bunz H. A theoretical model of phase transitions in human hand movements. Biol Cybern. 1985;51:347–56.
Kelso JAS. Dynamic patterns: the self-organization of brain and behaviour. Cambridge: MIT Press; 1995.
Stein PSG, Grillner S, Selverston AI, Stuart DG, editors. Neurons, networks and motor behavior. Cambridge: MIT Press; 1999.
Milton J, Jung P, editors. Epilepsy as a dynamic disease. Berlin: Springer; 2003.
Coombes S, Terry JR. The dynamics of neurological disease: integrating computational, experimental and clinical neuroscience. Eur J Neurosci. 2012;36:2118–20.
Titcombe MS, Edwards R, Beuter A. Mathematical modelling of Parkinsonian tremor. Nonlinear Stud. 2004;11(3):363–84.
Bressler SL. Cortical coordination dynamics and the disorganization syndrome in schizophrenia. Neuropsychopharmacology. 2003;28:35–9.
Tass PA. Phase resetting in medicine and biology: stochastic modelling and data analysis. Berlin: Springer; 1999.
Tass PA, Hauptmann C, Popovych OV. Development of therapeutic brain stimulation techniques with methods from nonlinear dynamics and statistical physics. Int J Bifurc Chaos. 2006;16:1889–911.
Rao RPN. Brain–computer interfacing: an introduction. Cambridge: Cambridge University Press; 2013.
Coombes S, Bressloff PC, editors. Bursting: the genesis of rhythm in the nervous system. Singapore: World Scientific; 2005.
Rinzel J, Huguet G. Nonlinear dynamics of neuronal excitability, oscillations, and coincidence detection. Commun Pure Appl Math. 2013;66:1464–94.
Kuehn C. Multiple time scale dynamics. Cham: Springer; 2015.
FitzHugh R. Impulses and physiological states in theoretical models of nerve membrane. Biophys J. 1961;1:445–66.
Coombes S, Osbaldestin AH. Period adding bifurcations and chaos in a periodically stimulated excitable neural relaxation oscillator. Phys Rev E. 2000;62:4057–66.
Berger H. Über das Elektrenkephalogramm des Menschen. Arch Psychiatr Nervenkrankh. 1929;87:527–70.
Voss L, Sleigh J. Monitoring consciousness: the current status of EEG-based depth of anaesthesia monitors. Best Pract Res Clin Anaesthesiol. 2007;21:313–25.
Sherman M, Guillery RW. Functional connections of cortical areas: a new view from the thalamus. Cambridge: MIT Press; 2013.
Wright JJ, Liley DTJ. Dynamics of the brain at global and microscopic scales: neural networks and the EEG. Behav Brain Sci. 1996;19:285–320.
David O, Friston KJ. A neural mass model for MEG/EEG: coupling and neuronal dynamics. NeuroImage. 2003;20:1743–55.
Hoppensteadt FC, Izhikevich EM. Weakly connected neural networks. New York: Springer; 1997.
Fenichel N. Persistence and smoothness of invariant manifolds for flows. Indiana Univ Math J. 1971;21:193–226.
Guckenheimer J, Holmes P. Nonlinear oscillations, dynamical systems, and bifurcations of vector fields. New York: Springer; 1990. (Applied mathematical sciences; vol. 42).
Kopell N. Toward a theory of modelling central pattern generators. In: Neural control of rhythmic movements in vertebrates. New York: Wiley; 1988.
Cabral J, Hugues E, Sporns O, Deco G. Role of local network oscillations in resting-state functional connectivity. NeuroImage. 2011;57:130–9.
Izhikevich E. Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge: MIT Press; 2006.
Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. New York: Springer; 2010.
Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.
Rinzel J. Electrical excitability of cells, theory and experiment: review of the Hodgkin–Huxley foundation and an update. Bull Math Biol. 1990;52:3–23.
Abbott LF, Kepler TB. Model neurons: from Hodgkin–Huxley to Hopfield. In: Garrido L, editor. Statistical mechanics of neural networks. Berlin: Springer; 1990. p. 5–18. (Lecture notes in physics; vol. 368).
Wilson HR. Simplified dynamics of human and mammalian neocortical neurons. J Theor Biol. 1999;200:375–88.
Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophys J. 1981;35:193–213.
Badel L, Lefort S, Berger TK, Petersen CCH, Gerstner W, Richardson MJE. Extracting nonlinear integrateandfire models from experimental data using dynamic I–V curves. Biol Cybern. 2008;99:361–70.
Latham PE, Richmond BJ, Nelson PG, Nirenberg S. Intrinsic dynamics in neuronal networks I. Theory. J Neurophysiol. 2000;83:808–27.
Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw. 2003;14:1569–72.
Coombes S, Thul R, Wedgwood KCA. Nonsmooth dynamics in spiking neuron models. Physica D. 2012;241:2042–57.
Brown TG. On the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression and a theory of the evolution of function in the nervous system. J Physiol (Lond). 1914;48:18–46.
Connors BW, editor. Gap junctions in the mammalian brain. Boca Raton: CRC Press; 2005.
Torben-Nielsen B, Segev I, Yarom Y. The generation of phase differences and frequency changes in a network model of inferior olive subthreshold oscillations. PLoS Comput Biol. 2012;8:e1002580.
Liley DTJ, Cadusch PJ, Dafilis MP. A spatially continuous mean field theory of electrocortical activity. Netw Comput Neural Syst. 2002;13:67–113.
Deco G, Jirsa VK, McIntosh AR. Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat Rev Neurosci. 2011;12:43–56.
Young CK, Eggermont JJ. Coupling of mesoscopic brain oscillations: recent advances in analytical and theoretical perspectives. Prog Neurobiol. 2009;89:61–78.
Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.
Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern. 1995;73:357–66.
Touboul J, Wendling F, Chauvel P, Faugeras O. Neural mass activity, bifurcations, and epilepsy. Neural Comput. 2011;23:3232–86.
Jedynak M, Pons AJ, Garcia-Ojalvo J. Cross-frequency transfer in a stochastically driven mesoscopic neuronal model. Front Comput Neurosci. 2015;9:14.
Spiegler A, Knösche TR, Schwab K, Haueisen J, Atay FM. Modeling brain resonance phenomena using a neural mass model. PLoS Comput Biol. 2011;7(12):e1002298.
Breakspear M, Roberts JA, Terry JR, Rodrigues S, Mahant N, Robinson PA. A unifying explanation of primary generalized seizures through nonlinear brain modeling and bifurcation analysis. Cereb Cortex. 2006;16:1296–313.
Terry JR, Benjamin O, Richardson MP. Seizure generation: the role of nodes and networks. Epilepsia. 2012;53(9):166–9.
Chowdhury FA, Woldman W, FitzGerald TH, Elwes RD, Nashef L, Terry JR, Richardson MP. Revealing a brain network endophenotype in families with idiopathic generalised epilepsy. PLoS ONE. 2014;9:e110136.
Valdes-Sosa PA, Sanchez-Bornot JM, Sotero RC, Iturria-Medina Y, Aleman-Gomez Y, Bosch-Bayard J, Carbonell F, Ozaki T. Model driven EEG/fMRI fusion of brain oscillations. Hum Brain Mapp. 2009;30:2701–21.
Gençay R, Liu T. Nonlinear modelling and prediction with feedforward and recurrent networks. Physica D. 1997;108:119–34.
Arenas A, Díaz-Guilera A, Kurths J, Moreno Y, Zhou C. Synchronization in complex networks. Phys Rep. 2008;469:93–153.
Pikovsky A, Rosenblum M, Kurths J. Synchronization. Cambridge: Cambridge University Press; 2001. (Cambridge nonlinear science series; vol. 12).
Brette R. Computing with neural synchrony. PLoS Comput Biol. 2012;8:e1002561.
Brunel N, Hakim V. Sparsely synchronized neuronal oscillations. Chaos. 2008;18:015113.
Casagrande V, Mikhailov A. Birhythmicity, synchronization, and turbulence in an oscillatory system with nonlocal inertial coupling. Physica D. 2005;205:154–69.
Deville REL, Peskin CS. Synchrony and asynchrony for neuronal dynamics defined on complex networks. Bull Math Biol. 2012;74:769–802.
Nowotny T, Huerta R, Rabinovich MI. Neuronal synchrony: peculiarity and generality. Chaos. 2008;18:037119.
Parga N, Abbott LF. Network model of spontaneous activity exhibiting synchronous transitions between up and down states. Front Neurosci. 2007;1:57–66.
Pinsky PF, Rinzel J. Synchrony measures for biological neural networks. Biol Cybern. 1995;137:129–37.
Popovych OV, Hauptmann C, Tass P. Control of neuronal synchrony by nonlinear delayed feedback. Biol Cybern. 2006;95:69–85.
Rubin J, Terman D. Synchronized activity and loss of synchrony among heterogeneous conditional oscillators. SIAM J Appl Dyn Syst. 2002;1:146–74.
Sorrentino F, Ott E. Network synchronization of groups. Phys Rev E. 2007;76:056114.
Reyes AD. Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro. Nat Neurosci. 2003;6:593–9.
Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU. Complex networks: structure and dynamics. Phys Rep. 2006;424:175–308.
Nekorkin VI, Voronin ML, Velarde MG. Clusters in an assembly of globally coupled bistable oscillators. Eur Phys J B. 1999;9:533–43.
Newman MEJ. Networks: an introduction. Oxford: Oxford University Press; 2010.
Do AL, Gross T. Self-organization in continuous adaptive networks. Copenhagen: River; 2012.
Stam CJ. Modern network science of neurological disorders. Nat Rev Neurosci. 2014;15:683–95.
Atay FM, Biyikoglu T, Jost J. Network synchronization: spectral versus statistical properties. Physica D. 2006;224:35–41.
Pecora L, Carroll T, Johnson G, Mar D, Fink KS. Synchronization stability in coupled oscillator arrays: solution for arbitrary configurations. Int J Bifurc Chaos. 2000;10:273–90.
Pecora L, Barahona M. Synchronization of oscillators in complex networks. Chaos Complex Lett. 2005;1:61–91.
Rothkegel A, Lehnertz K. Recurrent events of synchrony in complex networks of pulse-coupled oscillators. Europhys Lett. 2011;95:38001.
Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U. Network motifs: simple building blocks of complex networks. Science. 2002;298:824–7.
Sporns O, Kötter R. Motifs in brain networks. PLoS Biol. 2004;2:e369.
Sporns O, Honey CJ, Kötter R. Identification and classification of hubs in brain networks. PLoS ONE. 2007;2(10):e1049.
Kamei H. The existence and classification of synchronybreaking bifurcations in regular homogeneous networks using lattice structures. Int J Bifurc Chaos. 2009;19:3707–32.
Benjamin O, Fitzgerald THB, Ashwin P, Tsaneva-Atanasova K, Chowdhury F, Richardson MP, Terry JR. A phenomenological model of seizure initiation suggests network structure may explain seizure frequency in idiopathic generalised epilepsy. J Math Neurosci. 2012;2:1.
Ermentrout GB, Kopell N. Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math. 1990;50:125–46.
Ashwin P, Dangelmayr G. Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst. 2000;15:263–86.
Sepulchre JA, MacKay RS. Localized oscillations in conservative or dissipative networks of weakly coupled autonomous oscillators. Nonlinearity. 1997;10:679–713.
Campbell SA. Time delays in neural systems. In: Handbook of brain connectivity. Berlin: Springer; 2007. (Understanding complex systems).
Hövel P, Dahlem MA, Schöll E. Control of synchronization in coupled neural systems by time-delayed feedback. Int J Bifurc Chaos. 2010;20:813–5.
Dhamala M, Jirsa V, Ding M. Enhancement of neural synchrony by time delay. Phys Rev Lett. 2004;92:074104.
Levnajić Z, Pikovsky A. Phase resetting of collective rhythm in ensembles of oscillators. Phys Rev E. 2010;82:056202.
Atay FM. Complex time-delay systems. Berlin: Springer; 2010. (Understanding complex systems).
Engelborghs K, Luzyanina T, Roose D. Numerical bifurcation analysis of delay differential equations using DDE-BIFTOOL. ACM Trans Math Softw. 2002;28:1–21.
Sieber J, Engelborghs K, Luzyanina T, Samaey G, Roose D. DDE-BIFTOOL manual: bifurcation analysis of delay differential equations. arXiv:1406.7144 (2014).
Coombes S, Laing C. Delays in activitybased neural networks. Philos Trans R Soc A. 2009;367:1117–29.
MacArthur BD, Sanchez-Garcia RJ, Anderson JW. Symmetry in complex networks. Discrete Appl Math. 2008;156:3525–31.
Golubitsky M, Schaeffer DG. Singularities and groups in bifurcation theory. Vol. I. New York: Springer; 1985. (Applied mathematical sciences; vol. 51).
Golubitsky M, Stewart I, Schaeffer DG. Singularities and groups in bifurcation theory. Vol. II. New York: Springer; 1988. (Applied mathematical sciences; vol. 69).
Golubitsky M, Stewart I. The symmetry perspective: from equilibrium to chaos in phase space and physical space. Basel: Birkhäuser Verlag; 2002. (Progress in mathematics; vol. 200).
Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC. What geometric visual hallucinations tell us about the visual cortex. Neural Comput. 2002;14(3):473–91.
Ashwin P, Swift JW. The dynamics of n weakly coupled identical oscillators. J Nonlinear Sci. 1992;2:69–108.
Bressloff PC, Coombes S. Symmetry and phase-locking in a ring of pulse-coupled oscillators with distributed delays. Physica D. 1999;126:99–122.
Brown E, Holmes P, Moehlis J. Globally coupled oscillator networks. In: Kaplan E, Marsden J, Sreenivasan K, editors. Perspectives and problems in nonlinear science: a celebratory volume in honor of Larry Sirovich. New York: Springer; 2003. p. 183–215.
Othmer HG, Scriven LE. Instability and dynamic pattern in cellular networks. J Theor Biol. 1971;32:507–37.
Dionne B, Golubitsky M, Stewart I. Coupled cells with internal symmetry. I. Wreath products. Nonlinearity. 1996;9:559–74.
Kuznetsov YA. Elements of applied bifurcation theory. 2nd ed. New York: Springer; 1998. (Applied mathematical sciences; vol. 112).
Krauskopf B, Osinga HM, Galan-Vioque J, editors. Numerical continuation methods for dynamical systems: path following and boundary value problems. Heidelberg: Springer; 2007.
Dhooge A, Govaerts W, Kuznetsov YA. MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans Math Softw. 2003;29:141–64.
Kuznetsov YA, Levitin VV, Skovoroda AR. Continuation of stationary solutions to evolution problems in CONTENT. Amsterdam, The Netherlands: Centrum voor Wiskunde en Informatica; 1996. Report No.: AMR9611.
Ermentrout GB. Simulating, analyzing, and animating dynamical systems: a guide to XPPAUT for researchers and students. Philadelphia: SIAM; 2002.
Krupa M. Robust heteroclinic cycles. J Nonlinear Sci. 1997;7:129–76.
Krupa M, Melbourne I. Asymptotic stability of heteroclinic cycles in systems with symmetry. II. Proc R Soc Edinb, Sect A. 2004;134:1177–97.
Ashwin P, Field M. Heteroclinic networks in coupled cell systems. Arch Ration Mech Anal. 1999;148:107–43.