Journal of Nonlinear Science, Volume 24, Issue 2, pp 201–242

Some Joys and Trials of Mathematical Neuroscience


    • Department of Mechanical and Aerospace Engineering and Program in Applied and Computational Mathematics, Princeton University
    • Princeton Neuroscience Institute, Princeton University

DOI: 10.1007/s00332-013-9191-4

Cite this article as:
Holmes, P. J Nonlinear Sci (2014) 24: 201. doi:10.1007/s00332-013-9191-4


I describe the basic components of the nervous system—neurons and their connections via chemical synapses and electrical gap junctions—and review the model for the action potential produced by a single neuron, proposed by Hodgkin and Huxley (HH) over 60 years ago. I then review simplifications of the HH model and extensions that address bursting behavior typical of motoneurons, and describe some models of neural circuits found in pattern generators for locomotion. Such circuits can be studied and modeled in relative isolation from the central nervous system and brain, but the brain itself (and especially the human cortex) presents a much greater challenge due to the huge numbers of neurons and synapses involved. Nonetheless, simple stochastic accumulator models can reproduce both behavioral and electrophysiological data and offer explanations for human behavior in perceptual decisions. In the second part of the paper I introduce these models and describe their relation to an optimal strategy for identifying a signal obscured by noise, thus providing a norm against which behavior can be assessed and suggesting reasons for suboptimal performance. Accumulators describe average activities in brain areas associated with the stimuli and response modes used in the experiments, and they can be derived, albeit non-rigorously, from simplified HH models of excitatory and inhibitory neural populations. Finally, I note topics excluded due to space constraints and identify some open problems.


Accumulator · Averaging · Central pattern generator · Decision making · Bifurcation · Drift-diffusion process · Mean field reduction · Optimality · Phase reduction · Speed-accuracy tradeoff



1 Introduction

Neuroscience is currently generating much excitement and some hyperbole (a recent review of a popular book referred to “neuromania” (McGinn 2013)). This is largely due to recent advances in experimental techniques and associated methods for analysis of “big data.” Striking examples are the CLARITY method that allows imaging of entire neural circuits and captures subcellular structural detail (Chung et al. 2013), and Connectomics (Seung 2012), which aims to determine neural connectivity and hence function at the cellular level. In announcing the US Brain Initiative in April 2013, President Obama spoke of “giving scientists the tools they need to get a dynamic picture of the brain in action and better understand how we think and how we learn and how we remember” (Insel et al. 2013). Such tools are not solely experimental (Abbott 2008). Computational approaches already play a substantial rôle in neuroscience (De Schutter 2008, 2009), and they are becoming more ambitious: the European Blue Brain project (Markram 2006) proposes to simulate all the cells and most of the synapses in an entire brain, thereby hoping to “challenge the foundations of our understanding of intelligence and generate new theories of consciousness.”

In this article I have a more modest goal: to show how mathematical models and their analyses are contributing to our understanding of some small parts of brains and central nervous systems. I will describe how reductions of biophysically based models of single cells and circuits to low-dimensional dynamical systems can reveal mechanisms that might otherwise remain hidden in massive data analyses and computer simulations. In this regard mathematics does not merely enable numerical simulation and motivate experiments, it provides an analytical complement without which they can lose direction and lack explanatory power.

Mathematical treatments of the nervous system began in the mid 20th century. An early example is Norbert Wiener’s “Cybernetics,” published in 1948 and based on work with the Mexican physiologist Arturo Rosenblueth (Wiener 1948). Wiener introduced ideas from dissipative dynamical systems, symmetry groups, statistical mechanics, time series analysis, information theory, and feedback control. He also discussed the relationship between digital computers (then in their infancy) and neural circuits, a theme that John von Neumann subsequently addressed in a book published in the year following his death (von Neumann 1958). While developing one of the first programmable digital computers (JONIAC, built at the Institute for Advanced Study in Princeton in the late 1940s), von Neumann “tried to imitate some of the known operations of the live brain” (von Neumann 1958, Preface). In developing cybernetics, Wiener drew on von Neumann’s earlier works in analysis, ergodic theory, computation and game theory, as well as his own studies of Brownian motion (a.k.a. Wiener processes). Some of these ideas appear in Sect. 4 of the present paper.

These books (Wiener 1948; von Neumann 1958) were directed at the brain and nervous system in toto, although much of the former was based on detailed studies of heart and leg muscles in animals. The first cellular-level mathematical model of a single neuron was developed in the early 1950s by the British physiologists Hodgkin and Huxley (1952d). This work, which won them the Nobel Prize in Physiology or Medicine in 1963, grew out of a long series of experiments on the giant axon of the squid Loligo by themselves and others, as noted in Sect. 2 (also see Huxley’s obituary (Mackey and Santillán 2013)). Since their pioneering work, mathematical neuroscience has grown into a subdiscipline, served worldwide by courses long and short (e.g. Kopell et al. 2009, Whittington et al. 2009), textbooks (e.g. Wilson 1999, Dayan and Abbott 2001, Keener and Sneyd 2009, Ermentrout and Terman 2010, Gabbiani and Cox 2010), and review articles (recent examples include Wang 2010, Kopell et al. 2010, McCarthy et al. 2012, Deco et al. 2013). The number of mathematical models must now exceed the catalogue of brain areas by several orders of magnitude. I can present only a few examples here, inevitably biased toward my own interests.

Models come in two broad types: empirical (also called descriptive or phenomenological) and mechanistic. The former ignore (possibly unknown) anatomical structure and physiology, and seek to reproduce input–output or stimulus–response relationships of the system under study. Mechanistic models attempt to describe structure and function in some detail, reproducing observed behaviors by appropriate choice of model components and parameters and thereby revealing the mechanisms responsible for those behaviors. Models reside on a continuum from molecular to organismal scales, and many are not easily classifiable, but one common feature is nonlinearity. Unlike much of physical science and engineering, biology is inherently nonlinear. For example, the functions that describe the opening of ion channels in response to transmembrane voltage, or that characterize the dependence of neural firing rate on input current, are typically bounded above and below, and are often modeled by sigmoids.

The first part of this article covers mechanistic models, beginning in Sect. 2 with the Hodgkin–Huxley (HH) equations for the generation and propagation of a single action potential (AP, or spike); it then discusses dimensional reductions that are easier to analyze and extensions of HH to describe neurons that emit bursts of spikes, and introduces models for synapses. Section 3 considers small neural circuits found in central pattern generators for locomotion, and shows how HH models of them can be simplified to phase oscillators. While mathematical methods such as averaging and dimensional reduction via time scale separation are used to simplify coupled sets of HH equations in these cases, the models are all based on cellular biophysiology.

In Sect. 4 I change scale to introduce empirical models of activity in brain areas that may contain millions of neurons. Focusing on simple binary decisions in which a noisy stimulus must be identified, I show how a pair of competing nonlinear stochastic accumulators can model the integration of noisy evidence toward a threshold, triggering a response. Linearizing and considering a limiting case, this model reduces to a scalar drift-diffusion (DD) process, which is in turn a continuum limit of the sequential probability ratio test (SPRT). The SPRT is known to be optimal in that it renders decisions of specified accuracy in the shortest possible time. The tractability of the DD process allows one to derive an explicit optimal speed-accuracy tradeoff, against which human and animal behavior can be assessed. Behavioral experiments reveal both approximations to and deviations from optimality, and further analyses of the model and data suggest three potential reasons for the latter: avoidance of errors, poor time estimation, and minimization of the cost of cognitive control.

Section 5 sketches computations based on mean-field theory which start with pools of spiking neurons having distinct “tuning curves” that respond differently to the two stimuli and lead to stochastic accumulator models like those of Sect. 4. While this is neither rigorous nor as complete as the reduction methods of Sect. 3, it provides further support for such models by connecting them to simplified neuron models of HH type. It also suggests a fourth, physiological reason for suboptimality, namely, nonlinear dynamics. Section 6 contains a brief discussion, provides references to some of the many topics omitted due to space limitations, and notes some open problems.

2 The Components: Neurons, Synapses and the Hodgkin–Huxley Equations

The basic components of the nervous system are neurons: electrically active cells that can generate and propagate signals over distance. These signals are action potentials (APs, or spikes): voltage fluctuations of \(\mathcal{O}(100)\) mV across the cell membrane, each lasting 1–5 msec. Structurally, neurons come in many shapes and sizes, but all share three basic features: a soma, or cell body; dendrites, multiply branching extensions that receive signals from other neurons; and an axon, a cable-like extension, possibly branched, along which APs propagate to other neurons.1 The connections between axons and dendrites are called synapses; they may be electrical, communicating voltage differences, or chemical, releasing neurotransmitters upon the arrival of an AP from the presynaptic cell. Functionally, neurons are either excitatory or inhibitory, tending respectively to increase or depress the transmembrane voltage of the postsynaptic cells to which they connect. In this section we describe models for single neurons and for synapses.

2.1 The Hodgkin–Huxley Equations

As noted above, following years of beautiful and painstaking experiments reported in an impressive series of papers (Hodgkin et al. 1949, 1952; Hodgkin and Huxley 1952a,b,c), Hodgkin and Huxley created the first mathematical model for the AP (Hodgkin and Huxley 1952d). This work gained them the Nobel Prize in Physiology or Medicine in 1963, shared with J.C. Eccles (for his work on synapses and the discovery of excitatory and inhibitory postsynaptic potentials: see Sect. 2.5). They used the giant axon of a squid, part of the animal’s escape reflex system. The cell’s size allowed them to thread a silver wire through it, equalizing voltages along the axon, thus removing spatial variations and allowing them to describe its dynamics in terms of nonlinear ordinary differential equations (ODEs):
$$\begin{aligned} C_m\frac{\mathrm{d}v}{\mathrm{d}t} &= - \bar{g}_{\mathrm{K}} n^4(v-v_{\mathrm{K}}) - \bar{g}_{\mathrm{Na}}m^3 h(v-v_{\mathrm{Na}}) - \bar{g}_\mathrm{L}(v-v_{\mathrm{L}}) + I , \quad\quad \mathrm{(1a)}\\ \frac{\mathrm{d}m}{\mathrm{d}t} &= \alpha_m(v) (1-m) - \beta_m(v) m , \quad\quad \mathrm{(1b)}\\ \frac{\mathrm{d}n}{\mathrm{d}t} &= \alpha_n(v) (1-n) - \beta_n(v) n , \quad\quad \mathrm{(1c)}\\ \frac{\mathrm{d}h}{\mathrm{d}t} &= \alpha_h(v) (1-h) - \beta_h(v) h . \quad\quad \mathrm{(1d)} \end{aligned}$$
(A term \(\frac{\partial^{2} v}{\partial x^{2}}\) was subsequently added to (1a) to model propagation of the AP along the axon (Hodgkin and Huxley 1952d), creating a reaction–diffusion equation.) I now briefly describe the electro-chemical mechanisms encoded in the ODEs (1a)–(1d); for further details and historical notes, see Keener and Sneyd (2009, Sect. 5.1) and Hodgkin and Huxley (1952d).

Before starting it is important to know that ionic transport across cell membranes occurs through ion-specific channels and pores. It is driven passively by concentration and potential differences and by active pumps that exchange sodium for potassium and remove calcium from the cell. The Nernst–Planck equation, from biophysics, relates transmembrane flux, concentration and potential differences for each ionic species, and allows one to compute equilibrium conditions consistent with zero flux (Keener and Sneyd 2009, Sect. 2.6). At this resting potential, sodium concentrations are higher outside the cell than inside, while potassium concentrations are higher inside it.
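The equilibrium (Nernst) potentials mentioned above follow from the relation \(E = (RT/zF)\ln([C]_{\mathrm{out}}/[C]_{\mathrm{in}})\). A minimal sketch, using textbook squid-axon concentrations (illustrative values, not taken from this article):

```python
import math

def nernst(c_out, c_in, z=1, T=293.15):
    """Nernst equilibrium potential in mV for an ion of valence z at temperature T (K)."""
    R, F = 8.314, 96485.0          # gas constant J/(mol K), Faraday constant C/mol
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

# Textbook concentrations (mM) for squid axon -- assumed illustrative values:
E_Na = nernst(440.0, 50.0)   # sodium: higher outside  -> positive reversal potential
E_K  = nernst(20.0, 400.0)   # potassium: higher inside -> negative reversal potential
print(f"E_Na = {E_Na:.1f} mV, E_K = {E_K:.1f} mV")
```

The signs match the text: the sodium reversal potential is positive and the potassium one negative relative to rest, so sodium current flows inward and potassium current outward near resting potential.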

Hodgkin and Huxley had noted that during a spike the initial inward current was followed by an outward current. They hypothesized that the former was due to sodium ions (Na+) flowing in from their higher extracellular concentration, and that the outward current was due to potassium ions (K+) leaving the cell. They also included a passive leak current, due primarily to chloride ions (Cl−). These three currents appear in (1a) as \(\bar{g}_{\mathrm{Na}}m^{3} h(v-v_{\mathrm{Na}})\), \(\bar{g}_{\mathrm{K}} n^{4}(v-v_{\mathrm{K}})\), and \(\bar{g}_{\mathrm{L}}(v-v_{\mathrm{L}})\) μA/cm2 respectively, along with an externally applied current I; the equation expresses Kirchhoff’s current law for the rate of change of transmembrane voltage v in the circuit of Fig. 1. The barred parameters in the sodium and potassium conductances denote constant values that multiply time-dependent functions of n(t), m(t) and h(t) to form “dynamical” conductances \(\bar{g}_{\mathrm{K}} n^{4}\) and \(\bar{g}_{\mathrm{Na}}m^{3} h\). Voltage dependencies of the ionic currents are also characterized by the Nernst reversal potentials vK = −12 mV, vNa = 115 mV and vL = 10.6 mV; as the name suggests, the currents change direction as v crosses these values.
Fig. 1

Equivalent circuit for the giant axon of squid, from Hodgkin and Huxley (1952d, Fig. 1). Leak conductance is constant, but sodium and potassium conductances vary, indicated by variable resistors. Batteries represent reversal potentials, transmembrane capacitance is Cm μF/cm2 and applied current is I μA/cm2

The rôle of each ionic species was revealed by experiments in which all but one active species were removed and the transmembrane voltage held constant and then stepped from one value to another, while the current I(t) required to maintain that voltage was recorded. This voltage clamp method determined each ionic conductance as a function of voltage. Moreover, by examining transient responses following steps of given sizes, Hodgkin and Huxley could fit sigmoids to the six functions αm(v),…,βh(v) across the relevant voltage range. (Note that (1b)–(1d) are linear for fixed v.) They postulated a single gating variable n(t)∈[0,1] to describe potassium activation and noted that while conductance dropped sharply from higher levels following a downward step in v, it rose gently from zero after a step increase. This led to the fourth power in the potassium conductance \(\bar{g}_{\mathrm{K}} n^{4}\) (cf. Hodgkin and Huxley (1952d, Fig. 1)). Sodium dynamics proved more complicated, involving a rapid increase in conductance followed by a slower decrease (Hodgkin and Huxley 1952d, Figs. 2 and 3), a non-monotonic response that required two variables m(t), h(t)∈[0,1] to describe activation and inactivation, producing the m3h term in \(\bar{g}_{\mathrm{Na}}m^{3} h\).

The resulting forms of the α and β functions are
$$\begin{aligned} & \alpha_m(v) = 0.1\frac{25-v}{\exp (\frac{25-v}{10} )-1} , \quad\quad \beta_m (v)= 4\exp \biggl(\frac{-v}{18} \biggr) , \end{aligned}$$
$$\begin{aligned} & \alpha_h(v) = 0.07 \exp \biggl(\frac{-v}{20} \biggr) ,\quad \quad \beta_h(v) = \frac{1}{\exp (\frac{30-v}{10} )+1} , \end{aligned}$$
$$\begin{aligned} & \alpha_n(v) = 0.01\frac{10-v}{\exp (\frac{10-v}{10} )-1} ,\quad\quad \beta_n(v) = 0.125 \exp \biggl(\frac{-v}{80} \biggr) , \end{aligned}$$
and the conductances are \(\bar{g}_{\mathrm{Na}} = 120\), \(\bar{g}_{\mathrm{K}} = 36\), and \(\bar{g}_{\mathrm{L}} = 0.3\) mSiemens/cm2. To emphasize the equilibrium value \(n_\infty(v)\) at which n remains constant, and the time scale τn(v), the gating equations may be rewritten as follows:
$$\begin{aligned} &\frac{dn}{dt} = \frac{n_\infty(v)-n}{\tau_n(v)} , \quad\mbox{where} \\ &\quad n_\infty(v) = \frac{\alpha_n (v)}{\alpha_n (v)+\beta_n (v)} , \tau_n(v) = \frac{1}{\alpha_n (v)+\beta_n (v)} , \end{aligned}$$
with analogous expressions for m and h; see Hodgkin and Huxley (1952d, Fig. 6) for graphs of αn(v), βn(v), n∞(v), etc.
Figure 2 shows time courses of voltage and gating variables during a typical AP, obtained by numerical solution of (1a)–(1d). Note the four phases:
  1. Rapid increase in v and m as sodium conductance rises and v approaches the sodium Nernst potential: the brief depolarized phase of the AP.
  2. At higher voltages h decreases, lowering sodium conductance, while n increases, raising potassium conductance and driving v down towards the potassium potential.
  3. During the ensuing refractory period m falls quickly to its resting value, but n stays high and h remains low because their equations have longer time constants, thus holding v down (hyperpolarized) and preventing further APs.
  4. As n and h return to values that allow an AP, the cell enters its recovery phase.
Fig. 2

Time courses of (a) membrane voltage and (b) gating variables during an action potential and the subsequent refractory and recovery periods: m solid, n dash-dotted and h dashed. The voltage scale has been shifted so that resting potential is at 0 mV. Note the differing timescales and the approximate anticorrelation of n(t) and h(t)

The variables m,n and h can be interpreted as probabilities that gates in the corresponding ionic channels are open, and the exponents in the conductances as the numbers of gates that must be open. It is now known that potassium channels contain tetrameric structures that must cooperate for ions to flow, in agreement with the empirical n4 fit.
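The AP time courses of Fig. 2 can be reproduced by direct numerical integration of (1a)–(1d) with the rate functions (2)–(4) and the conductances and reversal potentials given above. A minimal forward-Euler sketch; the membrane capacitance \(C_m = 1\) μF/cm² is the standard HH value (not stated explicitly in the text), and the applied current I = 10 μA/cm² is an illustrative choice large enough to produce repetitive spiking:

```python
import math

# Rate functions from Eqs. (2)-(4); v in mV (resting potential = 0), rates in 1/ms.
# The 0/0 limits at v = 25 and v = 10 are handled explicitly.
def a_m(v):
    x = 25.0 - v
    return 1.0 if abs(x) < 1e-7 else 0.1 * x / (math.exp(x / 10.0) - 1.0)

def b_m(v): return 4.0 * math.exp(-v / 18.0)

def a_h(v): return 0.07 * math.exp(-v / 20.0)

def b_h(v): return 1.0 / (math.exp((30.0 - v) / 10.0) + 1.0)

def a_n(v):
    x = 10.0 - v
    return 0.1 if abs(x) < 1e-7 else 0.01 * x / (math.exp(x / 10.0) - 1.0)

def b_n(v): return 0.125 * math.exp(-v / 80.0)

# Parameters from the text (conductances in mS/cm^2, potentials in mV)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
v_Na, v_K, v_L = 115.0, -12.0, 10.6
I = 10.0                                     # applied current, uA/cm^2 (assumed value)

# Start at the resting equilibrium of the gating variables
v = 0.0
m = a_m(v) / (a_m(v) + b_m(v))
h = a_h(v) / (a_h(v) + b_h(v))
n = a_n(v) / (a_n(v) + b_n(v))

dt, spikes, prev_v, v_max = 0.01, 0, v, v    # forward Euler, dt in ms
for _ in range(int(50.0 / dt)):              # 50 ms of simulated time
    dv = (-g_K * n**4 * (v - v_K) - g_Na * m**3 * h * (v - v_Na)
          - g_L * (v - v_L) + I) / C_m       # Eq. (1a)
    dm = a_m(v) * (1.0 - m) - b_m(v) * m     # Eq. (1b)
    dn = a_n(v) * (1.0 - n) - b_n(v) * n     # Eq. (1c)
    dh = a_h(v) * (1.0 - h) - b_h(v) * h     # Eq. (1d)
    v, m, n, h = v + dt * dv, m + dt * dm, n + dt * dn, h + dt * dh
    if prev_v < 50.0 <= v:                   # upward crossing of 50 mV = one spike
        spikes += 1
    prev_v, v_max = v, max(v_max, v)
print(spikes, round(v_max, 1))
```

The run exhibits the phases listed above: each spike peaks near the sodium Nernst potential and is followed by a hyperpolarized refractory interval before the next one.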

2.2 Two-Dimensional Reductions of HH

I now introduce two simplifications of the Hodgkin–Huxley equations: a reduction of HH due to Hodgkin and Huxley (1952d, Figs. 4–5, 7–8 and 9–10) and, independently, Krinsky and Kokoz (1973), and the FitzHugh–Nagumo (FN) equations (FitzHugh 1961; Nagumo et al. 1962; see also Rinzel 1985). Examining the behavior of the HH state variables, we see that m(t) changes relatively rapidly because its timescale τm=1/(αm+βm)≪τn,τh in the relevant voltage range (cf. (2)–(5) and Fig. 2). We may therefore assume that it is almost always equilibrated, so that \(\dot{m} \approx0\), implying that
$$ m(t) \approx m_\infty(v) = \frac{\alpha_m (v)}{\alpha_m (v)+\beta _m (v)} , $$
cf. (5). Moreover, as Fig. 2(b) shows, n(t) and h(t) are approximately anti-correlated in that throughout the AP and recovery phase their sum remains almost constant: h + n ≈ a. Thus m and h may be replaced by m∞(v) and a − n and dropped as state variables, reducing the system to
$$\begin{aligned} C_m\frac{\mathrm{d}v}{\mathrm{d}t} =& - \bar{g}_{\mathrm{K}} n^4(v-v_{\mathrm{K}}) - \bar{g}_{\mathrm{Na}}m_\infty(v)^3 (a-n) (v-v_{\mathrm{Na}}) - \bar{g}_{\mathrm{L}}(v-v_{\mathrm{L}})+I , \end{aligned}$$
$$\begin{aligned} \tau_n(v) \frac{\mathrm{d}n}{\mathrm{d}t} =& n_\infty(v) - n . \end{aligned}$$
This reduction to a planar system can be made rigorous by use of geometric singular perturbation methods (Jones 1994; Wilson 1999, Chaps. 8–9).
Phase-plane methods (Hirsch et al. 2004) reveal the phase portrait of (7a)–(7b). The \(\dot{n} = 0\) nullcline, on which solutions move vertically in Fig. 3, can be written explicitly, but the \(\dot{v} = 0\) nullcline, for horizontal motion, demands solution of a quartic polynomial, which can be done numerically to yield the phase portrait of Fig. 3. Fixed points lie at the intersections of these nullclines. The left-hand plot, for I=0, features a sink near v=0 along with a saddle and a source. In the right-hand plot, for I=15, a limit cycle has appeared. Figure 3 displays the spiking threshold vth at a local minimum of the \(\dot{v} = 0\) nullcline. When the leftmost fixed point lies to the left of vth it is stable, as for I=0. In this excitable state spikes can occur due to perturbations that push v past vth, but absent further perturbations the state returns to the sink. When the fixed point moves to the right of vth (I=15) it loses stability and solutions repeatedly cross threshold, yielding periodic spiking in the manner of a relaxation oscillator.
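The quartic just mentioned is explicit: setting the right side of (7a) to zero and collecting powers of n gives \(-\bar g_{\mathrm{K}}(v-v_{\mathrm{K}})\,n^4 + \bar g_{\mathrm{Na}} m_\infty^3(v)(v-v_{\mathrm{Na}})\,n + [I - a\,\bar g_{\mathrm{Na}} m_\infty^3(v)(v-v_{\mathrm{Na}}) - \bar g_{\mathrm{L}}(v-v_{\mathrm{L}})] = 0\), which can be solved numerically at each v. A sketch; the constant a ≈ 0.8 is an assumed illustrative value, since the article does not specify it:

```python
import math
import numpy as np

# HH parameters from the text; the constant a in h ~ a - n is an assumed
# illustrative value -- the article does not give one explicitly.
g_K, g_Na, g_L = 36.0, 120.0, 0.3
v_K, v_Na, v_L = -12.0, 115.0, 10.6
a, I = 0.8, 0.0

def m_inf(v):
    """Equilibrium sodium activation m_inf(v) from the alpha/beta fits (Eq. 2)."""
    x = 25.0 - v
    am = 1.0 if abs(x) < 1e-7 else 0.1 * x / (math.exp(x / 10.0) - 1.0)
    bm = 4.0 * math.exp(-v / 18.0)
    return am / (am + bm)

def n_on_v_nullcline(v):
    """Real roots n in [0,1] of the quartic defining the v-nullcline of (7a)."""
    m3 = m_inf(v) ** 3
    coeffs = [-g_K * (v - v_K), 0.0, 0.0,          # n^4, n^3, n^2
              g_Na * m3 * (v - v_Na),              # n^1
              I - a * g_Na * m3 * (v - v_Na) - g_L * (v - v_L)]  # n^0
    roots = np.roots(coeffs)
    return [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0]

print(n_on_v_nullcline(0.0))   # one admissible root near the resting state
```

Sweeping v over the relevant range and plotting the admissible roots traces out the cubic-like \(\dot{v} = 0\) nullcline of Fig. 3.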
Fig. 3

Phase planes of the reduced HH equations (7a)–(7b), showing nullclines \(\dot{v} = 0, \dot{n} = 0\) (bold) for I=0 (a) and I=15 (b). Orbits flow to the left above \(\dot{v} = 0\) and to the right below it; diamonds at ends of orbit segments indicate flow direction. Approximately horizontal components correspond to fast flows and solutions move slowly near the slow manifold \(\dot{v} = 0\)

Here, to illustrate the rich dynamics that a planar system with nonlinear nullclines can exhibit, we have chosen I values for which (7a)–(7b) has three fixed points; for others, it has only one (as do the original H–H equations) (Guckenheimer and Holmes 1983, Sect. 2.1).

The FN equations preserve this qualitative structure, replacing the complicated sigmoids of (2)–(4) by cubic and linear functions:
$$\begin{aligned} \dot{v} &= \frac{1}{\tau_v} \biggl( v - \frac{v^3}{3} - r + I \biggr) , \end{aligned}$$
$$\begin{aligned} \dot{r} &= \frac{1}{\tau_r} ( -r + 1.25 v + 1.5) , \end{aligned}$$
(Rinzel 1985). Timescales are normally chosen so that \(\tau_{v} \ll\tau_{r} = \mathcal{O}(1)\) to preserve the relaxation oscillation with fast rise and fall in v, but the relative durations of the depolarized and hyperpolarized episodes are approximately equal, unlike the HH dynamics of Fig. 2. The reason for this becomes clear when we examine the nullclines shown in Fig. 4.
Fig. 4

Phase planes of the FitzHugh–Nagumo equations (8a)–(8b), showing nullclines and indicating fast and slow flows for τv=0.1, τr=1.25, and I=0 (left) and I=1.5 (right). At left, all orbits approach a sink; at right a limit cycle encircles a source. Orbits shown as in Fig. 3

First note that the basic behavior of the reduced HH equations (7a)–(7b) is preserved: for low I (left), there is a stable sink, while for higher I (right), there is a stable limit cycle. However, unlike Fig. 3, the cubic \(\dot{v} = 0\) nullcline is symmetric about v=0, so the slow orbit segments are of similar duration. Moreover, since the slope of the \(\dot{r} = 0\) nullcline (1.25) exceeds the maximum slope of the \(\dot{v} = 0\) nullcline (1), (8a)–(8b) has a single fixed point for all I. It can be shown that this fixed point loses stability in a supercritical Hopf bifurcation (Wilson 1999, Sect. 8.3) as I increases, creating the limit cycle, and that the limit cycle vanishes in a second Hopf bifurcation at a higher value of I, where the fixed point restabilizes, corresponding to persistent depolarization of the neuron. This bifurcation sequence also occurs for the full HH equations, but there the first Hopf bifurcation is subcritical: an unstable limit cycle shrinks onto the stable hyperpolarized fixed point. The unstable cycle appears in a saddle-node bifurcation of periodic orbits (Guckenheimer and Holmes 1983, Sect. 3.4), along with a stable limit cycle, at a slightly lower I. The FN simplification loses both quantitative and fine qualitative detail, but it remains popular among applied mathematicians due to its analytical tractability.
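These claims are easy to check numerically. Substituting the linear nullcline r = 1.25v + 1.5 into the cubic shows that for I = 0 the unique fixed point is (v, r) = (−1.5, −0.375), a sink, while for I = 1.5 it sits at (0, 1.5) on the unstable middle branch, surrounded by the limit cycle. A forward-Euler sketch of (8a)–(8b) with the parameter values of Fig. 4:

```python
def simulate_fn(I, v0=0.0, r0=0.0, T=50.0, dt=0.001, tau_v=0.1, tau_r=1.25):
    """Integrate the FitzHugh-Nagumo equations (8a)-(8b) by forward Euler,
    returning the v-trace over the second half of the run (transient discarded)."""
    v, r, trace = v0, r0, []
    n = int(T / dt)
    for k in range(n):
        dv = (v - v**3 / 3.0 - r + I) / tau_v    # Eq. (8a)
        dr = (-r + 1.25 * v + 1.5) / tau_r       # Eq. (8b)
        v, r = v + dt * dv, r + dt * dr
        if k > n // 2:
            trace.append(v)
    return trace

rest  = simulate_fn(I=0.0)    # converges to the sink at v = -1.5
cycle = simulate_fn(I=1.5)    # settles onto the stable limit cycle

print(round(rest[-1], 3))                    # fixed point at v = -1.5
print(round(max(cycle) - min(cycle), 2))     # oscillation amplitude of the cycle
```

The resting trace flattens out at the analytically computed fixed point, while the I = 1.5 trace oscillates with a large amplitude set by the outer branches of the cubic nullcline.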

2.3 Integrate-and-Fire Models

Integrate-and-fire (IF) models effect a further reduction to a single ODE by replacing the spike dynamics with a stereotypic AP description inserted when v exceeds a threshold vth, followed by reset to a resting potential vr, possibly after a fixed refractory period. The model was first introduced in 1907 in studying the sciatic nerve of leg muscles in frogs (Lapicque 1907; see Brunel and van Rossum 2007), but further studies came decades later (Stein 1965; Knight 1972a,b). The linear IF model retains only the leak and applied currents of (1a) and is written
$$ C \dot{v} = - \bar{g}_{\mathrm{L}}(v - v_{\mathrm{L}}) + I , \quad\mbox{for} \ v \in[v_{r}, v_{\mathrm{th}}) . $$
A delta function δ(t−tk) is inserted and the voltage reset if v reaches vth at t=tk, making (9) a hybrid dynamical system (Back et al. 1993; Guckenheimer and Johnson 1995). Without resets, all solutions would approach the sink at \(v_{\mathrm{ss}} = v_{\mathrm{L}} + I / \bar{g}_{\mathrm{L}}\), as they do for vss ≤ vth; but if vss > vth, repetitive spiking occurs as shown in Fig. 5.
Fig. 5

Periodic spiking in a leaky IF model for vss>vth, including a refractory period. Trajectory of v(t) without threshold shown dashed and marked vss (in red) (Color figure online)

The decelerating subthreshold voltage profile of the linear IF model differs from the accelerating profile characteristic of more realistic models (cf. Fig. 2(a)). This can be repaired by using nonlinear functions, common choices being quadratic (Ermentrout and Kopell 1986; Latham et al. 2000; Latham and Brunel 2003) or exponential (Fourcaud-Trocmé et al. 2003; Fourcaud-Trocmé and Brunel 2005); the reset upon reaching threshold then prevents orbits from escaping to infinity in finite time. See Izhikevich (2004) and Burkitt (2006a,b) for model reviews.

Interspike intervals and hence firing rates are easily computed for scalar IF models, but it is difficult to obtain explicit results for all but the simplest multi-unit circuits because one must compute threshold arrival times for every cell and paste together the intervening orbit segments to obtain the flow map. Nonetheless, IF models are in wide use for large-scale numerical simulations of cortical circuits; an example appears below in Sect. 5.
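For the leaky model (9) with constant input and instantaneous reset, for example, integrating the linear ODE between reset and threshold gives the interspike interval \(T = \tau \ln[(v_{ss}-v_r)/(v_{ss}-v_{th})]\), with \(\tau = C/\bar g_{\mathrm{L}}\). A sketch comparing this closed form with a direct simulation; all parameter values are illustrative, not from the article:

```python
import math

C, g_L, v_L = 1.0, 0.1, 0.0     # capacitance, leak conductance, leak potential (assumed)
v_r, v_th, I = 0.0, 15.0, 2.0   # reset, threshold (mV), applied current (assumed)
tau = C / g_L                    # membrane time constant, ms
v_ss = v_L + I / g_L             # would-be steady state (here 20 mV > v_th, so it spikes)

# Closed-form interspike interval from integrating Eq. (9) between v_r and v_th:
T_exact = tau * math.log((v_ss - v_r) / (v_ss - v_th))

# Numerical check by forward Euler up to the first threshold crossing
v, t, dt = v_r, 0.0, 1e-4
while v < v_th:
    v += dt * (-g_L * (v - v_L) + I) / C
    t += dt
print(round(T_exact, 3), round(t, 3))   # both close to 13.863 ms
```

With a refractory period Δ the firing rate is simply 1/(T + Δ), which is why scalar IF models yield explicit rate formulas while multi-unit circuits, requiring threshold crossing times for every cell, generally do not.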

2.4 A Model for Bursting Neurons

As well as reducing the HH equations, one can augment them by adding further ionic species. Incorporating slow processes such as calcium (Ca++) release introduces long time scales that can interact with the medium and short timescales of periodic APs to produce bursts of spikes followed by refractory periods. This is characteristic of motoneurons, and more generally of cells involved in generating rhythmic activity. Let c(t) denote a slow gating variable governed by
$$ \tau_c(v) \dot{c} = \epsilon\bigl(c_{\infty}(v) - c\bigr) , \quad \epsilon\ll1 , $$
and for simplicity suppose that the medium scale ionic dynamics has been reduced to a single variable n, as in (7a)–(7b). The voltage equation analogous to (7a) now contains an ionic current depending on c, but since \(\dot{c} = \mathcal{O}(\epsilon)\), we may appeal to perturbation methods (Chay and Keizer 1983; Sherman et al. 1988) and regard c as a “frozen” parameter. Changes in c can cause bifurcations in the two-dimensional (v,n) system that lead from quiescence (a stable fixed point), to periodic spiking, as in Fig. 3, and the slow dynamics of (10) can drive the full system periodically between these states.
Figure 6 shows an example from Ghigliazza and Holmes (2004b, Fig. 11). For small c the (v,n) system has a source surrounded by a stable limit cycle, and for high c a single sink, which continues to the lower saddle-node bifurcation point. The upper saddle node creates the source and a saddle point. Below the \(\dot{c} = 0\) nullcline, c decreases, moving the state along the lower, stable branch of equilibria during the refractory period. At the lower saddle node, the state jumps to the limit cycle, which lies above \(\dot{c} = 0\), so that c now increases. However, before reaching the upper saddle node, the limit cycle collides with the saddle and vanishes in a homoclinic loop bifurcation (Ghigliazza and Holmes 2004b, Fig. 11), and the cycle repeats.
Fig. 6

Left: A branch of equilibria (red) for the “frozen c” system containing two saddle-nodes (SN) and a Hopf bifurcation (H). The \(\dot{v}=0\) and \(\dot{c}=0\) nullclines and a typical bursting orbit are projected onto the (c,v) plane. Right: The voltage time history exhibiting periodic bursts. Adapted from Holmes (2013) (Color figure online)

More on bursting mechanisms and their classification via the fast subsystem’s bifurcations as the slow c variable drifts back and forth can be found in Ghigliazza and Holmes (2004b) and (Guckenheimer and Holmes 1983, Sect. 6.1).
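A compact way to see this fast-slow mechanism in action is the three-variable Hindmarsh–Rose model, a standard burster of the same qualitative type. It is not the model of Ghigliazza and Holmes; the equations and parameter values below are the usual textbook choices, used here purely for illustration:

```python
# Hindmarsh-Rose equations: x is voltage-like, y a fast recovery variable,
# z the slow adaptation variable playing the role of c in Eq. (10).
I, r, s, x_r = 2.0, 0.001, 4.0, -1.6    # standard textbook parameters
x, y, z = -1.6, -11.8, 2.0              # start near the quiescent branch
dt, spikes, prev_x = 0.01, 0, x
z_min, z_max = z, z
for _ in range(int(3000.0 / dt)):       # long run so several slow cycles occur
    dx = y - x**3 + 3.0 * x**2 + I - z  # fast voltage dynamics
    dy = 1.0 - 5.0 * x**2 - y           # fast recovery
    dz = r * (s * (x - x_r) - z)        # slow drift, rate r << 1 (cf. Eq. 10)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if prev_x < 1.0 <= x:               # upward crossing = one spike
        spikes += 1
    prev_x = x
    z_min, z_max = min(z_min, z), max(z_max, z)
print(spikes, round(z_max - z_min, 2))
```

Plotting x(t) shows clusters of spikes separated by quiescent intervals, while z drifts slowly up during each burst and down during each refractory period, exactly the back-and-forth passage through the fast subsystem's bifurcations described above.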

2.5 Neural Connectivity: Synapses and Gap Junctions

Synapses are structures that allow communication of signals between neurons. They come in two types, electrical and chemical. The former provide fast, bidirectional communication via direct contact of cytoplasm in distinct cells through gap junctions, small protein structures where the cells make close contact. They are generally modeled as linear resistors, so that the voltage equations for cells i and j become
$$\begin{aligned} C_i \dot{v}_i &= - I_{i,\mathrm{ion}}( \ldots) + I_i + \bar{g}_{\mathrm{gap}} (v_j - v_i) , \end{aligned}$$
$$\begin{aligned} C_j \dot{v}_j &= - I_{j,\mathrm{ion}}( \ldots) + I_j + \bar{g}_{\mathrm{gap}} (v_i - v_j) , \end{aligned}$$
where \(\bar{g}_{\mathrm{gap}}\) is the gap-junction conductance and Ii,ion(…) denotes the internal ionic currents of cell i. Electrical synapses appear in escape reflexes: e.g., the tail-flip giant neuron in goldfish connects to sensors via a gap junction, allowing rapid responses to threatening stimuli. Gap junctions can also connect groups of small cells, causing them to spike together, as in the synchronization of ink release in certain marine snails.

Chemical synapses involve the release of neurotransmitter from a presynaptic neuron and its reception at a postsynaptic neuron. The cells are separated by synaptic clefts between boutons, protrusions on the presynaptic axon that contain vesicles of neurotransmitter molecules, and postsynaptic dendritic spines. After an AP arrives, calcium influx causes vesicles to fuse with the cell membrane and release their contents, which diffuse across the synaptic cleft to reach postsynaptic receptors that open ion channels and generate excitatory or inhibitory postsynaptic potentials (EPSPs, IPSPs). A single EPSP is usually too small to drive a hyperpolarized postsynaptic cell across threshold, but multiple EPSPs can evoke a spike. IPSPs drive its voltage down to delay or prevent spiking.

Acetylcholine (ACh) and the amino acids glutamate and γ-aminobutyric acid (GABA) are major neurotransmitters, as are the monoamines dopamine (DA), norepinephrine (NE) and serotonin (SE). Their effects are determined by ionotropic and metabotropic receptors; the former open channels quickly, while the latter act via a slower cascade of messengers. GABA activates both ionotropic and metabotropic inhibitory receptors, and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartic acid (NMDA) are excitatory ionotropic receptor types for glutamate, with AMPA exhibiting significantly faster activation and deactivation than NMDA.

Chemical synapses are considerably slower than gap junctions, but allow more complicated behavior. They exhibit synaptic plasticity which is crucial to learning, since it allows connections among cells (and hence brain areas) to weaken or strengthen in response to experience. They can amplify signals by releasing large numbers of neurotransmitter molecules, which open many ion channels and thereby depolarize a much larger cell than is possible with gap junctions. Neurotransmitter and receptor time constants span two orders of magnitude and their interaction can lead to reverberations that sustain neural activity in working memory: see Wilson (1999, Chap. 10) and Sect. 5 below.

The effects of neurotransmitter arrival can be modeled as a current that depends on the probability, Ps, of postsynaptic ion channels being open. This process, and the closure of channels as the transmitter unbinds from receptors, can be modeled like the gating variables in the HH equations:
$$ \frac{\mathrm{d}P_{\mathrm{s}}}{\mathrm{d}t} = \alpha_{\mathrm{s}} (1-P_{\mathrm{s}})- \beta_{\mathrm{s}} P_{\mathrm{s}} , $$
where αs and βs determine the rates at which channels open and close, effectively encoding the neurotransmitter time scales: see Keener and Sneyd (2009, Chap. 9) and Wang (1999, 2010), Wong and Wang (2006). Opening is typically faster than closure, so αs≫βs; βs is often assumed constant, but αs depends on neurotransmitter concentration in the synaptic cleft, and thus on the presynaptic voltage vi. Again, a sigmoid provides an acceptable model:
$$ \alpha_{\mathrm{s}} (v_i) = \frac{\bar{\alpha}_{\mathrm{s}} C_{\mathrm{NT},\mathrm{max}}}{1 + \exp[-k_{\mathrm{pre}} (v_i - v^{\mathrm{pre}}_{\mathrm{syn}})]} , $$
Destexhe et al. (1999, p. 15), where CNT,max represents the maximal neurotransmitter concentration, \(v^{\mathrm{pre}}_{\mathrm{syn}}\) sets the voltage at which vesicles begin to open, kpre sets the “sharpness” of the switch, and the scale factor \(\bar{\alpha}_{\mathrm{s}}\) allows one to lump the effects of all the synapses between the two cells.
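A minimal numerical sketch of (12)–(13) (with illustrative, unfitted parameter values and a crude square pulse standing in for the presynaptic AP) shows Ps rising sharply while transmitter is present and decaying at rate βs afterwards:

```python
import numpy as np

# Forward-Euler sketch of the gating equation (12) with the sigmoidal
# opening rate (13).  All parameter values are illustrative, not fitted.
alpha_bar, C_max = 1.0, 1.0    # lumped opening-rate scale
k_pre, v_syn_pre = 0.5, 2.0    # sigmoid sharpness and half-activation (mV)
beta_s = 0.2                   # constant closing rate (1/ms)

def alpha_s(v):
    """Opening rate alpha_s(v_i) of Eq. (13)."""
    return alpha_bar * C_max / (1.0 + np.exp(-k_pre * (v - v_syn_pre)))

def simulate(T=40.0, dt=0.01, ap_start=5.0, ap_dur=1.0):
    """Integrate dP_s/dt = alpha_s(v)(1 - P_s) - beta_s*P_s."""
    n = int(T / dt)
    Ps = np.zeros(n)
    for i in range(1, n):
        t = i * dt
        # square-pulse caricature of the presynaptic action potential
        v = 50.0 if ap_start <= t < ap_start + ap_dur else -65.0
        Ps[i] = Ps[i - 1] + dt * (alpha_s(v) * (1 - Ps[i - 1])
                                  - beta_s * Ps[i - 1])
    return Ps

Ps = simulate()
print(round(Ps.max(), 3))  # peak open probability during the pulse
```

Because αs is near zero at rest and near its maximum during the pulse, the trace exhibits exactly the rectangular-pulse behavior exploited in the piecewise-exponential simplification discussed below.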
As for the internal ionic currents in the HH model, the postsynaptic current in cell j due to an AP in cell i involves a reversal potential, \(v^{\mathrm{post}}_{\mathrm{syn}}\), and is scaled by a maximal conductance, \(\bar{g}_{\mathrm{syn}}\), so that the voltage equation for the postsynaptic cell is
$$ C \dot{v_j} = - I_{j,\mathrm{ion}}(\ldots) + I_j - \bar{g}_{\mathrm{syn}} P_{\mathrm{s}} \bigl(v_j - v^{\mathrm{post}}_{\mathrm{syn}}\bigr) . $$
Equation (14) and the analogous voltage equations for all presynaptic cells, with their associated full or reduced gating equations, are solved together with (12). See Dayan and Abbott (2001, p. 180) for examples.
This model can be simplified by noting that the rapid rise and fall of the AP vi, acting via (13), makes αs(vi) behave like a rectangular pulse with the duration of the AP and height \(\bar{\alpha}_{\mathrm{s}} C_{\mathrm{NT},\mathrm{max}}\). Equation (12) may then be solved explicitly during and following the AP and the resulting exponentials matched to produce a piecewise-smooth rising and falling pulse. Alternatively, this may be approximated as a sum of two exponentials or as an “alpha” function:
$$ P_{\mathrm{s}}(t) = \frac{P_{\mathrm{max}}t}{\tau_{\mathrm{s}}} \exp \biggl( 1-\frac{t}{\tau_{\mathrm{s}}} \biggr) , \quad t \ge0 , $$
which starts at zero, rises to a peak Pmax at t=τs, and then decays back to zero with time constant τs. For further discussions of synaptic mechanisms, see (Destexhe et al. 1999; Dayan and Abbott 2001) and Ghigliazza and Holmes (2004a).
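The stated properties of the alpha function (zero at t=0, peak Pmax at t=τs, decay with time constant τs) are immediate to check numerically; τs and Pmax below are illustrative:

```python
import numpy as np

# The "alpha" function P_s(t) = P_max*(t/tau_s)*exp(1 - t/tau_s), t >= 0.
tau_s, P_max = 2.0, 0.8   # illustrative time constant and peak value

def alpha_fn(t):
    return P_max * (t / tau_s) * np.exp(1.0 - t / tau_s)

t = np.linspace(0.0, 20.0, 2001)       # grid step 0.01
Ps = alpha_fn(t)
i_peak = int(np.argmax(Ps))
print(round(float(t[i_peak]), 3), round(float(Ps[i_peak]), 3))  # 2.0 0.8
```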

As noted in Sect. 2.1, Hodgkin and Huxley modeled the propagation of APs along an axon by adding a diffusive spatial term to (1a) (Hodgkin and Huxley 1952d); see also Keener and Sneyd (2009, Chap. 8). More complex geometries including branching dendrites and axons are often represented by multiple compartments (sometimes in the hundreds). This leads to large sets of ODEs for each cell, but allows one to capture subtle effects that influence intercellular communication. Not only do dendrite sizes affect their conductances (Johnston and Wu 1997, Chaps. 11–15) and transmission delays occur in dendritic trees, but EPSPs arriving at nearby synapses interact to produce less excitation than their sum predicts (Rall et al. 1967). Nonlinear interactions due to shunting inhibition that changes membrane conductance can also reduce excitatory currents (Rall 1959). See London and Häusser (2005) for reviews of such “dendritic computations.”

3 Central Pattern Generators and Phase Reduction

Central pattern generators (CPGs) are networks in the spinal cords of vertebrates and invertebrate thoracic ganglia, capable of generating muscular activity in the absence of sensory feedback (Cohen et al. 1988; Getting 1988; Pearson 2000; Ijspeert 2008). CPGs drive many rhythmic actions, including locomotion, scratching, whisking (e.g. in rats), moulting (in insects), chewing and digestion. The stomatogastric ganglion in lobster is perhaps the best-understood example (Marder 2000; Marder and Bucher 2007). Experiments are typically done in isolated in vitro preparations, with sensory and higher brain inputs removed (Wilson 1999, Chaps. 12–13), but it is increasingly acknowledged that an integrative approach, including muscle and body-limb dynamics, environmental reaction forces and proprioceptive feedback, is needed to fully understand their function (Chiel and Beer 1997; Holmes et al. 2006; Tytell et al. 2011). Indeed, without reaction forces, animals would go nowhere! CPGs nonetheless provide examples of neural networks capable of generating interesting behaviors, but small enough to allow the study of detailed biophysically based models.

After introducing a phase reduction method, which is particularly useful for such systems and applies to any ODE with a hyperbolic limit cycle, and showing how averaging theory leads to systems of coupled phase oscillators, I describe a model of an insect CPG. For an early review of CPG models that use phase reduction and averaging, see Kopell (1988); cf. (Cohen et al. 1988; Grillner 1999).

3.1 Phase Reduction and Phase Response Curves

Phase reduction was originally developed by Malkin (1949, 1956), and independently, with biological applications in mind, by Winfree (2001). For extensive treatments, see Hoppensteadt and Izhikevich (1997) and Ermentrout and Terman (2010). Consider a system
$$ \dot{{\bf{x}}} = {\bf{f}}({\bf{x}}) + \epsilon{\bf{g}}({\bf{x}}, \ldots) ;\quad {\bf{x}}\in\mathbb{R}^{n} , \quad 0 \le\epsilon\ll1 , \ n \ge2 , $$
where \({\bf{g}}({\bf{x}}, \ldots)\) represents external inputs (e.g., (16) might be an ODE of HH type (1a)–(1d) or a bursting neuron model (Sect. 2.4)). Suppose that (16) possesses a stable hyperbolic limit cycle Γ0 of period T0 for ϵ=0, and let \({\bf{x}}_{0}(t)\) denote a solution lying in Γ0. Invariant manifold theory Hoppensteadt and Izhikevich (1997, Chap. 9) guarantees that, in a neighborhood U of Γ0, the state space splits into a phase variable φ∈[0,2π) along the closed curve Γ0 and a smooth foliation of transverse isochrons Ermentrout and Terman (2010, Chap. 8). Each isochron is an (n−1)-dimensional manifold Mφ with the property that any two solutions starting on the same leaf \(M_{\varphi_{i}}\) are mapped by the flow to another leaf \(M_{\varphi_{j}}\) and hence approach Γ0 with equal asymptotic phases as t→∞: see Fig. 7. For points \({\bf{x}}\in U\), phase is therefore defined by a smooth function \(\varphi({\bf{x}})\) and the leaves Mφ∩U are labeled by the inverse function \({\bf{x}}(\varphi)\). Moreover, this structure persists for small ϵ>0, so Γ0 perturbs to a nearby limit cycle Γϵ.
Fig. 7

The direct method for computing the PRC, showing the geometry of isochrons, the effect of the perturbation at \({\bf{x}}^{*}\) that results in a jump to a new isochron, recovery to the limit cycle, and the resulting phase shift. Adapted from (Hirsch et al. 1977; Guckenheimer and Holmes 1983)

The phase coordinate \(\varphi({\bf{x}})\) is chosen so that progress around the limit cycle occurs at constant speed when ϵ=0:
$$ \dot{\varphi}\bigl({\bf{x}}(t)\bigr) \big|_{{\bf{x}}\in\varGamma_0} = \frac{\partial\varphi({\bf{x}}(t))}{\partial{\bf{x}}} \cdot{\bf {f}}\bigl({\bf{x}}(t)\bigr) \bigg|_{{\bf{x}}\in\varGamma_0} = \frac{2 \pi}{T_0} \stackrel{\rm {def}}{=}\omega_0 . $$
Applying the chain rule, using (16) and (17), we obtain the scalar phase equation:
$$ \dot{\varphi} = \frac{\partial\varphi({\bf{x}})}{\partial{\bf{x}}} \cdot \dot{{\bf{x}}} = \omega_0 + \epsilon\frac{\partial\varphi}{\partial {\bf{x}}} \cdot{\bf{g}}\bigl({\bf{x}}_0(\varphi), \ldots\bigr) \big|_{\varGamma_0(\varphi )} + \mathcal{O}\bigl(\epsilon^2\bigr) . $$
The assumption that coupling and external influences are weak (ϵ≪1) allows approximation of their effects by evaluating \({\bf{g}}({\bf{x}}, \ldots)\) along Γ0.
For spiking-neuron models in which inputs enter only via the voltage equation (e.g., (1a)–(1d)), the only nonzero component in the vector \(\frac{\partial\varphi}{\partial{\bf{x}}}\) is \(\frac{\partial \varphi}{\partial V} = \frac{\partial\varphi}{\partial{\bf{x}}} \cdot \frac{\partial{\bf{x}}}{\partial V} \stackrel{\rm{def}}{=}z(\varphi )\). This phase response curve (PRC) describes how an impulsive perturbation advances or retards the next spike as a function of the phase during the cycle at which it acts. PRCs may be calculated using a finite-difference approximation to the derivative:
$$ z(\varphi) = \frac{\partial\varphi}{\partial V} = \lim_{\Delta V \rightarrow0} \biggl[ \frac{\varphi({\bf{x}}^* + (\Delta V,0)^{\rm T}) - \varphi({\bf{x}}^*)}{\Delta V} \biggr] , $$
where the numerator \(\Delta\varphi= [\varphi({\bf{x}}^{*} + (\Delta V,0)^{\rm T}) - \varphi({\bf{x}}^{*})]\) describes the change in phase due to the delta function perturbation V→V+ΔV at \({\bf{x}}^{*} \in\varGamma_{0}\): see Fig. 7. PRCs may also be found from adjoint equations (Guckenheimer 1975).
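As a concrete illustration (a toy planar oscillator, not an HH model), the “radial isochron clock” ṙ=r(1−r²), φ̇=1 has radial isochrons, so the PRC for a horizontal (voltage-like) kick is z(φ)=−sinφ; the finite-difference recipe (19) recovers this:

```python
import numpy as np

# Direct-method PRC for the radial isochron clock, written in Cartesian
# coordinates:  x' = x(1-r^2) - y,  y' = y(1-r^2) + x.  Isochrons are
# radial, so the analytic PRC for a kick in x is z(phi) = -sin(phi).

def flow(x, y, T, dt=1e-3):
    """Crude forward-Euler integration for time T."""
    for _ in range(int(T / dt)):
        r2 = x * x + y * y
        x, y = x + dt * (x * (1 - r2) - y), y + dt * (y * (1 - r2) + x)
    return x, y

def prc(phi, dV=1e-4, T=20.0):
    """Finite-difference phase shift per unit kick of size dV in x."""
    x0, y0 = np.cos(phi), np.sin(phi)        # point on the limit cycle
    xp, yp = flow(x0 + dV, y0, T)            # kicked trajectory
    xu, yu = flow(x0, y0, T)                 # reference trajectory
    dphi = np.arctan2(yp, xp) - np.arctan2(yu, xu)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return dphi / dV

for phi in (0.0, np.pi / 2, np.pi):
    print(round(float(prc(phi)), 3))  # approximately -sin(phi)
```

For conductance-based models the same finite-difference recipe applies, with the kick delivered to the voltage component; the adjoint formulation is usually cheaper in that setting.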

3.2 Weak Coupling, Averaging and Half-Center Oscillators

A common structure appearing in CPG models is the half-center oscillator: a pair of units, often identical and hence bilaterally (or reflection-) symmetric and sometimes each containing several neurons, connected via mutual inhibition to produce an alternating rhythm (Hill et al. 2001; Doloc-Mihu and Calabrese 2011). See Brown et al. (2004) and (Ermentrout and Terman 2010, Chap. 8) for examples. Phase reduction provides a simple expression of this architectural subunit, which can be written as a system on the torus:
$$\begin{aligned} \dot{\varphi_1} &= \omega_0 + \epsilon \bigl[ \delta_1 + z_1(\varphi_1) h_1( \varphi_1, \varphi_2)\bigr] \stackrel{\rm{def}}{=} \omega_0 + \epsilon H_1(\varphi_1, \varphi_2) , \end{aligned}$$
$$\begin{aligned} \dot{\varphi_2} &= \omega_0 + \epsilon \bigl[ \delta_2 + z_2(\varphi_2) h_2( \varphi_2, \varphi_1)\bigr] \stackrel{\rm{def}}{=} \omega_0 + \epsilon H_2(\varphi_2, \varphi_1) . \end{aligned}$$
Here small frequency differences ϵδj are allowed and the \(\mathcal{O}(\epsilon^{2})\) terms neglected. Transformation to slow phases ψi=φi−ω0t removes the common frequency ω0 and puts (20a)–(20b) in a form to which the averaging theorem for periodically forced ODEs can be applied (Ermentrout and Terman 2010, Sect. 9.6):
$$\begin{aligned} \dot{\psi_1} &= \epsilon H_1(\psi_1 + \omega_0 t, \psi_2 + \omega _0 t) , \end{aligned}$$
$$\begin{aligned} \dot{\psi_2} &= \epsilon H_2(\psi_2 + \omega_0 t, \psi_1 + \omega _0 t) . \end{aligned}$$
Recalling that the common period of the uncoupled oscillators is T0, the averages of the terms on the RHS of (21a)–(21b) are
$$ \overline{H_i(\psi_i, \psi_j)} = \frac{1}{T_0} \int_0^{T_0} H_i( \psi_i + \omega_0 t, \psi_j + \omega_0 t) \, \mathrm{d}t . $$
Changing variables by setting τ=ψj+ω0t, so that \(dt = \frac{d \tau}{\omega_{0}} = \frac{T_{0} \, \mathrm{d} \tau}{2 \pi}\), and using the fact that the Hi are 2π-periodic, the integral of (22) becomes
$$ \overline{H_i(\psi_i, \psi_j)} = \frac{1}{2 \pi} \int_0^{2 \pi} H_i( \psi_i - \psi_j + \tau, \tau) \, \mathrm{d} \tau\stackrel{ \rm{def}}{=} \overline{H_i(\psi_i - \psi_j)} ; $$
hence the averaged system (up to \(\mathcal{O}(\epsilon^{2})\)) is
$$ \dot{\psi}_1 = \epsilon\overline{H_1( \psi_1 - \psi_2)} , \quad\quad \dot{\psi}_2 = \epsilon \overline{H_2(\psi_2 - \psi_1)} . $$
The functions \(\overline{H_{i}(\psi_{i} - \psi_{j})}\) are 2π-periodic and depend only on the phase difference θ=ψ1−ψ2. Equations (24) may therefore be subtracted to yield
$$ \dot{\theta} = \epsilon\bigl[\overline{H_1(\theta)} - \overline{H_2(-\theta)}\bigr] \stackrel{\rm{def}}{=}\epsilon G( \theta) . $$
Phase reduction and averaging have simplified a system with at least two voltage variables, associated gating variables, and possibly additional synaptic variables, to a flow on the circle.
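The key property of the average (23), namely that it depends only on the phase difference ψi−ψj, can be checked numerically for any 2π-periodic coupling function; the H below is an arbitrary illustrative choice:

```python
import numpy as np

# Numerical check of (22)-(23): averaging a 2*pi-periodic coupling
# function along the common rotation leaves a function of the phase
# difference alone.
def H(a, b):
    return np.sin(a - b) + 0.3 * np.cos(a) * np.sin(2.0 * b)

def Hbar(psi_i, psi_j, m=20000):
    """Discretized version of the average (22) over one period."""
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return float(np.mean(H(psi_i + t, psi_j + t)))

# Shifting both phases by the same amount leaves the average unchanged;
# the cross term of unequal harmonics averages out, leaving sin(psi_i - psi_j).
print(round(Hbar(0.7, 0.2), 6), round(Hbar(2.0, 1.5), 6))  # equal values
```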

For mutually symmetric coupling between identical units, h2(φ2,φ1)=h1(φ1,φ2) in (20a)–(20b). Integration preserves this symmetry under permutation of φ1 and φ2, so that the averaged functions satisfy \(\overline{H_{2}(-\theta)} = \overline{H_{1}(\theta)} \stackrel{\rm{def}}{=}\overline{H(\theta )}\). In this case, since H is 2π-periodic, \(G(\pi) = \overline{H(\pi)} - \overline{H(-\pi)} = \overline{H(\pi)} - \overline{H(\pi)} = 0\) and \(G(0) = \overline{H(0)} - \overline{H(0)} = 0\). Equation (25) therefore has fixed points at θ=0,π, corresponding to in-phase and anti-phase solutions, regardless of the precise form of \(\overline{H}\). Moreover, G is odd and its derivative G′(θ) is even (see Fig. 9 below). Additional fixed points θe such that G(θe)=0 may also exist, depending on \(\overline{H}\). Nonsymmetric pairs generally do not have exact in- and anti-phase solutions.

Under the averaging theorem (Guckenheimer and Holmes 1983, Sect. 4.1–2), hyperbolic fixed points of (24) correspond to T0-periodic orbits of the original system (20a)–(20b). Since θ=ψ1−ψ2, fixed points θe of (25) appear as circles in the toroidal phase space of (24), and their linearization necessarily has a zero eigenvalue with eigenvector \((1,1)^{\rm{T}}\). This lack of hyperbolicity derives from the transformation ψi=φi−ω0t, so the circles are T0-periodic orbits in the original φi variables (note that φ1−φ2=θe). Provided that the other eigenvalue is nonzero, with eigenvector transverse to \((1,1)^{\rm{T}}\), it follows that the original system has a periodic orbit whose phases maintain the difference θe to \(\mathcal{O}(\epsilon)\). The dynamics are necessarily only neutrally stable to perturbations that equally advance or retard the phases of both units.
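For a concrete feel for (25), take the illustrative odd coupling G(θ)=sinθ, which has G′(0)>0>G′(π) (the situation of Fig. 9(b) below); the phase difference then locks at the anti-phase value θ=π from generic initial conditions:

```python
import math

# Scalar phase-difference dynamics (25) with the illustrative coupling
# G(theta) = sin(theta): theta = 0 is unstable, theta = pi is stable.
eps, dt, n = 0.1, 0.01, 200000

def run(theta0):
    theta = theta0
    for _ in range(n):
        theta += dt * eps * math.sin(theta)
    return theta % (2.0 * math.pi)

for theta0 in (0.5, 2.0, 5.0):
    print(round(run(theta0), 4))   # all lock at pi ~ 3.1416
```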

3.3 A CPG Model for Insect Locomotion

This section describes a CPG model for hexapedal locomotion (Ghigliazza and Holmes 2004b) motivated by experiments on the cockroach Periplaneta americana (Pearson and Iles 1970, 1973; Pearson 1972; Pearson and Fourtner 1975). It uses bursting neuron and synapse models of the types described in Sects. 2.4 and 2.5, for which PRCs and averaged coupling functions were computed numerically (Ghigliazza and Holmes 2004a) to derive a phase-reduced model. The network, including motoneurons, is shown in Fig. 8.
Fig. 8

A model for the cockroach CPG. Units 1, 2, 3 (resp. 4, 5, 6), activating the left (resp. right) tripods, are coupled through mutually inhibitory synapses and modulate each leg’s hip and knee extensor (resp. flexor) motoneurons via inhibitory (resp. excitatory) synapses, shown as filled circles and semi-arcs, respectively; only right front leg motoneurons shown here. Tonic drive is applied to all units by constant external currents. See (Ghigliazza and Holmes 2004b,a) for further details. Figure adapted from (Pearson and Iles 1970, 1973; Pearson 1972; Pearson and Fourtner 1975)

Cockroaches run over much of their speed range with a double tripod gait, in which left front, left rear and right middle legs (the L tripod) alternate with right front, right rear and left middle legs (the R tripod) to provide stance support. Motoneurons activating depressor and extensor muscles that drive the power stroke during stance are alternately active in the L and R tripods, and neighboring legs on the same side (ipsilateral) and across the body (contralateral) operate approximately in anti-phase. In Fig. 8 cells 1, 2, 3 drive the L tripod and 4, 5, 6 drive the R tripod. Extensors spike persistently to support the animal’s weight when standing still: they must be deactivated to swing the leg in running; in contrast, flexors must shut off during stance. As proposed in (Ghigliazza and Holmes 2004a), during its burst a single CPG interneuron can simultaneously inhibit an extensor and excite a flexor; during the interneuron’s refractory phase the extensor can resume spiking and the excitable flexor remain hyperpolarized and inactive. The model simplifies by allowing excitatory and inhibitory synapses on the same axon; in reality at least one disynaptic path would be necessary.

Little is known about architecture and neuron types in the cockroach, but representation of each leg unit by a single bursting cell, as in Fig. 8, is certainly minimal. For example, hemisegments of lamprey spinal cord each contain three different cell types as well as motoneurons (Buchanan and Grillner 1987; Hellgren et al. 1992; Wallen et al. 1992), and in stick insects separate oscillators with multiple interneurons have been identified for each joint on a single leg (Daun-Gruhn et al. 2009; Daun-Gruhn and Toth 2011).

In Fig. 8 one-way paths connect CPG interneurons to motoneurons, so the basic stepping rhythm is determined by the six CPG units, which may be studied in isolation. The reduced phase CPG model is
$$\begin{aligned} \dot{\psi_1} =& \bar{g}_{\mathrm{syn}}H(\psi_1 - \psi_4) + \bar {g}_{\mathrm{syn}} H(\psi_1 - \psi_5) , \\ \dot{\psi_2} =& \frac{\bar{g}_{\mathrm{syn}}}{2} H(\psi_2 - \psi _4) + \bar{g}_{\mathrm{syn}}H(\psi_2 - \psi_5) + \frac{\bar{g}_{\mathrm{syn}}}{2} H(\psi_2 - \psi_6) , \\ \dot{\psi_3} =& \bar{g}_{\mathrm{syn}}H(\psi_3 - \psi_5) + \bar {g}_{\mathrm{syn}} H(\psi_3 - \psi_6) , \\ \dot{\psi_4} =& \bar{g}_{\mathrm{syn}}H(\psi_4 - \psi_1) + \bar {g}_{\mathrm{syn}} H(\psi_4 - \psi_2) , \\ \dot{\psi_5} =& \frac{\bar{g}_{\mathrm{syn}}}{2} H(\psi_5 - \psi _1) + \bar{g}_{\mathrm{syn}}H(\psi_5 - \psi_2) + \frac{\bar{g}_{\mathrm{syn}}}{2} H(\psi_5 - \psi_3) , \\ \dot{\psi_6} =& \bar{g}_{\mathrm{syn}}H(\psi_6 - \psi_2) + \bar {g}_{\mathrm{syn}} H(\psi_6 - \psi_3) , \end{aligned}$$
where all cells are identical and connection strengths are chosen so that the net effect on each cell from its presynaptic neighbors is the same. The middle leg cells 2 and 5 receive three inputs, and front and hind leg cells receive two, hence the ipsilateral connections from front and hind to middle are of strength \(\bar{g}_{\mathrm{syn}}/ 2\).
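The tripod solutions discussed below can be checked by direct integration of (26); this sketch uses the illustrative coupling H(x)=sin x with \(\bar{g}_{\mathrm{syn}}=1\), chosen so that H′(π)<0 as in Fig. 9(a). A perturbed double-tripod state relaxes back to ψ1=ψ2=ψ3 and ψ4=ψ5=ψ6=ψ1+π:

```python
import numpy as np

# Forward-Euler integration of the six-unit phase model (26) with the
# illustrative coupling H(x) = sin(x) and g_syn = 1 (so H'(pi) < 0).
# conn[i] lists the (source, weight) pairs for 0-indexed unit i.
conn = {0: [(3, 1.0), (4, 1.0)],
        1: [(3, 0.5), (4, 1.0), (5, 0.5)],
        2: [(4, 1.0), (5, 1.0)],
        3: [(0, 1.0), (1, 1.0)],
        4: [(0, 0.5), (1, 1.0), (2, 0.5)],
        5: [(1, 1.0), (2, 1.0)]}

rng = np.random.default_rng(0)
psi = np.array([0.0, 0.0, 0.0, np.pi, np.pi, np.pi])
psi += 0.1 * rng.standard_normal(6)           # perturb the tripod state
dt = 0.01
for _ in range(5000):
    dpsi = np.array([sum(w * np.sin(psi[i] - psi[j]) for j, w in conn[i])
                     for i in range(6)])
    psi = psi + dt * dpsi

d = (psi - psi[0] + np.pi) % (2.0 * np.pi) - np.pi  # phases relative to unit 1
print(np.round(d, 3))   # ~[0, 0, 0, +/-pi, +/-pi, +/-pi]
```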
The PRC is a complicated function with multiple sign changes caused by the burst of spikes (not shown here, see Ghigliazza and Holmes (2004b,a)), but the integral required by averaging yields fairly simple functions H(−θ) and H(θ): Fig. 9. Their subtraction produces an odd function G(θ) with zeroes at θ=0 and π, as noted in Sect. 3.2, that is also remarkably close to a simple sinusoid, as assumed in an earlier phase oscillator model for the lamprey CPG (Cohen et al. 1982), although this was not justified for some 25 years (Várkonyi et al. 2008).
Fig. 9

(a) The coupling function \(\bar{g}_{\mathrm{syn}}H_{ji}(\theta)\) (solid) for an inhibitory synapse; \(\bar{g}_{\mathrm{syn}}H_{ji}(-\theta)\) also shown (dash-dotted). (b) The phase difference coupling function \(\bar{g}_{\mathrm{syn}}G(\theta) = \bar{g}_{\mathrm{syn}}[H_{ji}(\theta) - H_{ji}(-\theta)]\). Note that G(0)=G(π)=0 and \(\bar{g}_{\mathrm{syn}}G'(0) > 0 > \bar{g}_{\mathrm{syn}} G'(\pi)\). From Ghigliazza and Holmes (2004a)

Seeking L-R tripod solutions of the form ψ1=ψ2=ψ3≡ψL(t), ψ4=ψ5=ψ6≡ψR(t), (26) collapse to the pair of ODEs
$$ \dot{\psi}_{\mathrm{L}} = 2 \bar{g}_{\mathrm{syn}}H(\psi_{\mathrm{L}} - \psi_{\mathrm{R}}) \quad {\text{and}} \quad \dot{\psi}_{\mathrm{R}} = 2 \bar{g}_{\mathrm{syn}}H(\psi_{\mathrm{R}} - \psi_{\mathrm{L}}) , $$
and the arguments used in Sect. 3.2 may be applied to conclude that ψR=ψL+π and ψR=ψL are fixed points of (27), independent of the precise form of H. For this argument to hold, note that the sums on the right-hand sides of the first three and last three equations of (26) must be identical when evaluated on the tripod solutions; hence, net inputs to each cell from its synaptic connections must be equal. Also, since \(\bar{g}_{\mathrm{syn}}> 0\) and G′(0)>0>G′(π) (Fig. 9(b)), we expect the in-phase solution to be unstable and the anti-phase one to be stable. To confirm this in the full six-dimensional phase space we compute the 6×6 Jacobian matrix:
$$ \bar{g}_{\mathrm{syn}}\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 2H' & 0 & 0 & -H' & -H' & 0 \\ 0 & 2H' & 0 & -H'/2 & -H' & -H'/2 \\ 0 & 0 & 2H' & 0 & -H' & -H' \\ -H' & -H' & 0 & 2H' & 0 & 0 \\ -H'/2 & -H' & -H'/2 & 0 & 2H' & 0 \\ 0 & -H' & -H' & 0 & 0 & 2H' \end{array} \right ] , $$
with derivatives H′ evaluated at appropriate phase differences π or 0. The anti-phase tripod solution ψLψR=π has one zero eigenvalue with eigenvector \((1,1,1,1,1,1)^{\rm{T}}\); the remaining eigenvalues and eigenvectors are
$$\begin{aligned} \lambda =& \bar{g}_{\mathrm{syn}}H'(\pi): \ (1,0,-1,1,0,-1)^{\rm{T}} , \\ \lambda =& 2\bar{g}_{\mathrm{syn}}H'(\pi), \ m = 2: \ (1,-1,1,0,0,0)^{\rm{T}} \ {\rm{and}} \ (0,0,0,-1,1,-1)^{\rm{T}} , \\ \lambda =& 3\bar{g}_{\mathrm{syn}}H'(\pi): \ (1,0,-1,-1,0,1)^{\rm{T}} , \\ \lambda =& 4\bar{g}_{\mathrm{syn}}H'(\pi): \ (1,1,1,-1,-1,-1)^{\rm{T}} . \end{aligned}$$
Since \(\bar{g}_{\mathrm{syn}}H'(\pi) < 0\) (Fig. 9(a)), this proves asymptotic stability with respect to perturbations that disrupt the tripod phase relationships; moreover, the system recovers fastest from perturbations that disrupt the relative phasing of the L and R tripods (\(\lambda= 4\bar{g}_{\mathrm{syn}}H'(\pi)\): last entry of (29)). Since \(\bar{g}_{\mathrm{syn}}H'(0) > 0\) (Fig. 9(a)), the pronking gait with all legs in phase (ψL(t)≡ψR(t)) is unstable.
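The spectrum (29) is easily confirmed numerically from the Jacobian (28); the sketch below works in units of \(\bar{g}_{\mathrm{syn}}H'(\pi)\):

```python
import numpy as np

# Eigenvalues of the 6x6 Jacobian (28) at the anti-phase tripod solution,
# with the common factor g_syn*H'(pi) scaled out (that factor is negative
# per Fig. 9(a), so the positive entries below are stable directions).
J = np.array([
    [ 2.0,  0.0,  0.0, -1.0, -1.0,  0.0],
    [ 0.0,  2.0,  0.0, -0.5, -1.0, -0.5],
    [ 0.0,  0.0,  2.0,  0.0, -1.0, -1.0],
    [-1.0, -1.0,  0.0,  2.0,  0.0,  0.0],
    [-0.5, -1.0, -0.5,  0.0,  2.0,  0.0],
    [ 0.0, -1.0, -1.0,  0.0,  0.0,  2.0],
])
vals = np.sort(np.linalg.eigvals(J).real)
print(np.round(vals, 6))   # [0, 1, 2, 2, 3, 4] in units of g_syn*H'(pi)
```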

This CPG model was created in the absence of information on coupling strengths among different hemisegments, and symmetry assumptions were made for mathematical convenience, allowing the reduction to a pair of tripod oscillators. Recent experiments support bilateral symmetry, but indicate that descending connections are stronger than ascending ones (Fuchs et al. 2011). Similar rostral-caudal asymmetries have been identified in the lamprey spinal cord (Hagevik and McClellan 1994). The model is currently being modified to fit the data.

In introducing this section it was noted that integrated neuro-mechanical models are needed to better understand the rôle of CPGs in producing locomotion. Examples of these for the cockroach appear in (Kukillaya and Holmes 2007, 2009) and, with proprioceptive feedback, in (Kukillaya et al. 2009; Proctor and Holmes 2010). Models for lamprey swimming can be found in (Tytell et al. 2010).

4 Models of Perceptual Decisions

I now move to a different topic and scale to consider decision making, specifically two-alternative forced-choice (2AFC) tasks and stochastic accumulator models that describe average activities of large groups of cortical neurons. These belong to a general class of connectionist neural networks (Rumelhart and McClelland 1986), which, while not directly connected to cellular-level descriptions such as the HH equations, are still biologically plausible. Specifically, in nonhuman primates performing perceptual decisions, extracellular recordings in oculomotor regions such as the lateral intraparietal area (LIP), frontal eye fields, and superior colliculus show that spike rates evolve like sample paths of a stochastic process, rising to a threshold prior to response (Schall 2001; Gold and Shadlen 2001; Shadlen and Newsome 2001; Roitman and Shadlen 2002; Ratcliff et al. 2003, 2006; Mazurek et al. 2003). In special cases accumulators reduce to drift-diffusion (DD) processes, which have been used to model reaction time distributions and error rates in 2AFC tasks for over 50 years, e.g. (Stone 1960; Laming 1968; Ratcliff 1978; Ratcliff et al. 1999). Subsequently, Sect. 5 sketches how biophysically based neural networks can be reduced to nonlinear leaky competing accumulators (LCAs), providing a path from biological detail to tractable models. For a broad review of decision models, see (Busemeyer and Townsend 1993; Smith and Ratcliff 2004).

4.1 Accumulators and Drift-Diffusion Processes

In the simplest LCA a pair of units with activity levels (x1,x2) represent pools of neurons selectively responsive to two stimuli (Usher and McClelland 2001). These mutually inhibit via functions that express neural activity (e.g., short-term firing rates) in terms of inputs that include constant currents representing mean stimulus levels and i.i.d. Wiener processes modeling noises that pollute the stimuli and/or enter the local circuit from other brain regions, as described by the stochastic differential equations:
$$\begin{aligned} \mathrm{d}x_1 =& \bigl[- \gamma x_1 - \beta f(x_2) + \mu_1\bigr] \, \mathrm{d}t + \sigma \, \mathrm{d}W_1 , \end{aligned}$$
$$\begin{aligned} \mathrm{d}x_2 =& \bigl[- \gamma x_2 - \beta f(x_1) + \mu_2\bigr] \, \mathrm{d}t + \sigma\, \mathrm{d}W_2 , \end{aligned}$$
where γ,β are leak and inhibition rates and μj,σ the means and standard deviation of the noisy stimuli. A decision is reached when the first xj(t) exceeds a threshold xj,th. See (Rumelhart and McClelland 1986; Grossberg 1988) for background on related connectionist networks, and (Bogacz et al. 2006) on the equivalence of different integrator models. For stochastic ODEs, see Arnold (1974) and Gardiner (1985).
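A minimal Euler–Maruyama sketch of (30)–(31), with an illustrative linear response f(x)=x in place of the sigmoid and made-up parameter values, shows the threshold-crossing behavior:

```python
import numpy as np

# Two-unit LCA (30)-(31) with f(x) = x; gamma = beta is the balanced
# case that reduces to a drift-diffusion process.  Illustrative values.
rng = np.random.default_rng(4)
gamma = beta = 1.0               # leak and inhibition rates
mu1, mu2, sig = 1.2, 1.0, 0.3    # stimulus means and noise level
x_th, dt = 1.0, 2e-3             # decision threshold and time step

def trial(t_max=50.0):
    x1 = x2 = 0.0
    t = 0.0
    while x1 < x_th and x2 < x_th and t < t_max:
        n1, n2 = rng.standard_normal(2)
        dx1 = (-gamma * x1 - beta * x2 + mu1) * dt + sig * np.sqrt(dt) * n1
        dx2 = (-gamma * x2 - beta * x1 + mu2) * dt + sig * np.sqrt(dt) * n2
        x1, x2, t = x1 + dx1, x2 + dx2, t + dt
    return (1 if x1 >= x2 else 2), t

choices, rts = zip(*(trial() for _ in range(150)))
print(round(float(np.mean([c == 1 for c in choices])), 2),
      round(float(np.mean(rts)), 2))   # choice-1 fraction, mean decision time
```

Since μ1>μ2, unit 1 wins on most trials; in the balanced case γ=β the difference x1−x2 evolves as the DD process derived next.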
The function characterizing neural response is typically sigmoidal:
$$ f(x) = \frac{1}{1 + \exp[-g(x - b)]} , $$
or piecewise linear (Usher and McClelland 2001). With appropriate gain g and bias b, (30)–(31) without noise (σ=0) can have one or two stable equilibria, separated by a saddle point. In the noisy system these correspond to “choice attractors,” and if γ and β are sufficiently large, a one-dimensional, attracting curve exists that contains the equilibria and orbits connecting them: see Doya and Shadlen (2012) and (Usher and McClelland 2001). Hence, after rapid transients decay following stimulus onset, the dynamics relax to that of a nonlinear Ornstein–Uhlenbeck (OU) process (Grossberg 1988; Usher and McClelland 2001). The dominant terms are found by linearizing (30)–(31) and subtracting to yield an equation for the difference x=x1−x2:
$$ \mathrm{d}x = \bigl[(\mu_1 - \mu_2) + (\beta- \gamma) x\bigr] \, \mathrm{d}t + \sigma\,\mathrm{d}W . $$
If leak and inhibition are balanced (β=γ), and initial data are unbiased, appropriate when stimuli appear with equal probability and have equal reward values, (33) becomes a DD process
$$ \mathrm{d}x = A \, \mathrm{d}t +\sigma\, \mathrm{d}W ; \quad x(0) = 0 , $$
where A=μ1−μ2 denotes the drift rate. Responses are given when x first crosses a threshold ±xth; if A>0 then crossing of +xth corresponds to a correct response and crossing −xth to an incorrect one. Here x is the logarithmic likelihood ratio Miller and Fumarola (2012), measuring the difference in evidence accumulated for the two options. The error rate and mean decision time are
$$ p({\mathrm{err}}) = \frac{1}{{1 + \exp(2 \eta\theta )}}\quad\mbox{and}\quad \langle {\mathit{DT}} \rangle= \theta \biggl[ \frac{{\exp(2\eta\theta) - 1}}{{\exp(2 \eta\theta) + 1}} \biggr] , $$
Gardiner (1985), Arnold (1974) and (Usher and McClelland 2001; Brown et al. 2005). Here the three parameters A,σ and xth reduce to two: η≡(A/σ)2, signal-to-noise ratio (SNR), having units of inverse time, and θ≡|xth/A|, threshold-to-drift ratio, the decision time for noise-free drift x(t)=At. Accuracy may be adjusted by changing xth or the initial condition x(0), see Sect. 4.7 below.
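The closed forms (35) are easy to verify by direct simulation of (34); the sketch below uses illustrative values A=σ=xth=1, for which p(err)=1/(1+e²)≈0.119 and ⟨DT⟩=tanh(1)≈0.762:

```python
import numpy as np

# Euler-Maruyama paths of the DD process (34) run to threshold +/- x_th,
# compared with the first-passage formulas (35).
rng = np.random.default_rng(1)
A, sigma, x_th = 1.0, 1.0, 1.0
eta, theta = (A / sigma) ** 2, abs(x_th / A)

def simulate(n_trials=5000, dt=1e-3, t_max=20.0):
    x = np.zeros(n_trials)
    dec_time = np.zeros(n_trials)
    err = np.zeros(n_trials, dtype=bool)
    alive = np.ones(n_trials, dtype=bool)
    t, sqdt = 0.0, np.sqrt(dt)
    while alive.any() and t < t_max:
        t += dt
        x[alive] += A * dt + sigma * sqdt * rng.standard_normal(alive.sum())
        hit = alive & (np.abs(x) >= x_th)
        err[hit] = x[hit] < 0.0      # lower threshold = incorrect response
        dec_time[hit] = t
        alive &= ~hit
    return err, dec_time

err, dts = simulate()
p_err = 1.0 / (1.0 + np.exp(2.0 * eta * theta))   # analytic error rate
mean_dt = theta * np.tanh(eta * theta)            # analytic mean DT
print(round(float(np.mean(err)), 3), round(p_err, 3))
print(round(float(np.mean(dts)), 3), round(mean_dt, 3))
```

The small residual discrepancies come from Monte Carlo noise and the Euler discretization's threshold overshoot.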

The DD process (34) is a continuum limit of the sequential probability ratio test, which, for statistically stationary signal detection tasks, yields decisions of specified average accuracy in the shortest possible time (Wald and Wolfowitz 1948). This property leads to an optimal speed-accuracy tradeoff that maximizes reward rate, enabling the experiments and analyses described below.

4.2 An Optimal Speed-Accuracy Tradeoff

If SNR and the mean delay between each response and next stimulus onset (response-to-stimulus interval DRSI) remain constant across each block of trials and block durations are fixed, optimality corresponds to maximizing reward rate: average accuracy divided by average trial duration:
$$ \mathit{RR} = \frac{1 - p({\mathrm{err}}) }{\langle \mathit{DT} \rangle+ T_0 + D_{\mathrm{RSI}}} . $$
Here T0 is the part of reaction time due to non-decision-related sensory and motor processing. Since T0 and η also typically remain (approximately) constant for each participant, we may substitute (35) into (36) and maximize RR for fixed η,T0 and DRSI, obtaining a unique threshold-to-drift ratio θ=θop for each pair (η,Dtot):
$$ \exp(2\eta\theta_{\mathrm{op}}) - 1 = 2\eta(D_{\mathrm{tot}} - \theta_{\mathrm{op}}) ,\quad \mbox{where } D_{\mathrm{tot}} = T_0 + D_{\mathrm{RSI}} . $$
Inverting the relationships (35) to obtain
$$ \theta= \frac{\langle \mathit{DT} \rangle}{1 - 2 p({\mathrm{err}})} \quad \mbox{and} \quad \eta= \frac{1 - 2 p({\mathrm{err}})}{2 \langle \mathit{DT} \rangle} \log \biggl[ {\frac{{1 - p({\mathrm{err}}) }}{p({\mathrm{err}})}} \biggr] , $$
the parameters θop,η in (37) can be replaced by the performance measures, p(err) and 〈DT〉, yielding a unique, parameter-free relationship describing the speed-accuracy tradeoff that maximizes RR:
$$ \frac{\langle \mathit{DT} \rangle}{D_{\mathrm{tot}}} = \biggl[ \frac{1}{p({\mathrm{err}}) \log [ \frac{1 - p({\mathrm{err}})}{p({\mathrm{err}})} ]} + \frac{1}{1 - 2 p({\mathrm{err}})} \biggr]^{-1} . $$
Equation (39) defines an optimal performance curve (OPC) (Gold and Shadlen 2002; Bogacz et al. 2006): Fig. 10(a). Different points on the OPC represent θop’s and corresponding speed-accuracy trade-offs for different values of difficulty (η) and timing (Dtot): lower or higher thresholds, associated with faster or slower responses, yield lower rewards (diamonds in Fig. 10(a)). The OPC’s shape is intuitively understood by noting that very noisy stimuli (η≈0) contain little information, so that, if they are equally likely, it is optimal to choose at random, giving p(err)=0.5 and 〈DT〉=0 (SNR=0.1 at the right of Fig. 10(a)). As η→∞, stimuli become so easily discriminable that both 〈DT〉 and p(err) approach zero (SNR=100). Intermediate SNRs require some integration of evidence (SNR=1,10). Being parameter free, the OPC can be used to compare performance with respect to optimality across conditions, tasks, and individuals, irrespective of differences in difficulty or timing.
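The chain (35)–(39) can be checked numerically: solve the transcendental condition (37) for θop by bisection (η and Dtot below are illustrative), then confirm that the resulting pair (p(err), ⟨DT⟩/Dtot) lies on the OPC (39):

```python
import numpy as np

# Solve (37) for theta_op and verify the parameter-free OPC (39).
eta, D_tot = 1.0, 2.0

def resid(th):
    return np.exp(2.0 * eta * th) - 1.0 - 2.0 * eta * (D_tot - th)

lo, hi = 0.0, D_tot            # resid(0) < 0 < resid(D_tot)
for _ in range(100):           # bisection to machine precision
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if resid(mid) < 0.0 else (lo, mid)
theta_op = 0.5 * (lo + hi)

p_err = 1.0 / (1.0 + np.exp(2.0 * eta * theta_op))   # from (35)
mean_dt = theta_op * np.tanh(eta * theta_op)         # from (35)
opc = 1.0 / (1.0 / (p_err * np.log((1.0 - p_err) / p_err))
             + 1.0 / (1.0 - 2.0 * p_err))            # RHS of (39)
print(round(mean_dt / D_tot, 6), round(opc, 6))      # the two agree
```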
Fig. 10

(a) The optimal performance curve (OPC) of (39) relates mean normalized decision time 〈DT〉/Dtot to error rate p(err). Triangles and circles mark hypothetical performances under eight different task conditions; diamonds mark suboptimal performances resulting from thresholds at ±1.25θop for SNR=1 and Dtot=2, respectively more accurate but too slow (upper diamond), and faster but less accurate (lower diamond); both reduce RR by ≈1.3 %. (b) OPC (curve) and data from 80 human participants (boxes) sorted according to total rewards accrued over all conditions. White: all participants; lightest: lowest 10 % excluded; medium: lowest 50 % excluded; darkest: lowest 70 % excluded. Vertical lines show standard errors. From (Brown et al. 2005; Roxin and Ledberg 2008)

4.3 Experimental Evidence: Failures to Optimize

Two 2AFC experiments (Bogacz et al. 2010) tested whether humans optimize reward rate in accord with the OPC. In the first, 20 participants viewed motion stimuli (Bogacz et al. 2006, Appendix) and were rewarded for correct responses. Trials were grouped in 7-minute blocks with different DRSI’s fixed through each block. In the second experiment, 60 participants discriminated whether the majority of 100 locations on a static display were filled with stars or empty. Two difficulty conditions were used in 4-minute blocks. Participants were told to maximize total earnings, and practice blocks were administered prior to testing.

Mean decision times 〈DT〉 were estimated by fitting the DD model to reaction time distributions; the 0–50 % error-rate range was divided into 10 bins, and 〈DT〉/Dtot was computed for each bin by averaging over those results and conditions with error rates in that bin. This yields the open (tallest) bars in Fig. 10(b); the shaded bars derive from similar analyses restricted to subgroups of participants ranked by their total rewards accrued over all different conditions. The top 30 % group performs close to the OPC, achieving near-optimal performance, but a majority of participants are significantly suboptimal due to longer decision times. This suggests two possibilities:
  (1) Participants seek to optimize another criterion, such as accuracy, instead of, or as well as, maximizing reward.

  (2) Participants seek to maximize reward, but systematically fail due to constraint(s) on performance and/or other cognitive factors.

We now address these in turn.


4.4 A Preference for Accuracy?

A substantial literature suggests that humans favor accuracy over speed in reaction time tasks (e.g., Gold and Shadlen 2002; Bogacz et al. 2006). This could explain the observations in Fig. 10(b), since longer decision times typically lead to greater accuracy. Participants may seek to maximize accuracy in addition to rewards, which can be expressed in at least two ways (Bogacz et al. 2006; Zacksenhouse et al. 2010):
$$ \mathit{RA} = \mathit{RR} - \frac{q p({\mathrm{err}})}{D_{\mathrm{tot}}} , \quad \mbox{or} \quad \mathit{RR}_m = \frac{ (1 - p({\mathrm{err}})) - q p({\mathrm{err}}) }{\langle \mathit{DT} \rangle+ D_{\mathrm{tot}}} . $$
The first (reward + accuracy, RA) subtracts a fraction of error rate from RR; the second (modified reward rate, RRm) penalizes errors by reducing previous winnings. In both the parameter q∈(0,1) specifies a weight placed on accuracy. Increasing q drives the OPC upward (Bogacz et al. 2006), consistent with the empirical observations of Fig. 10(b), suggesting that participants assume that errors are explicitly penalized.
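In code, with RR = (1 − p(err))/(⟨DT⟩ + Dtot) (the q = 0 case of RRm), the two objectives of (40) read as follows (a minimal sketch; argument names are ours):

```python
def reward_rate(p_err, dt, d_tot):
    # RR: fraction of correct responses per unit time; equals RR_m with q = 0
    return (1.0 - p_err) / (dt + d_tot)

def ra(p_err, dt, d_tot, q):
    # Reward + accuracy (RA): RR minus a weighted error rate
    return reward_rate(p_err, dt, d_tot) - q * p_err / d_tot

def rr_m(p_err, dt, d_tot, q):
    # Modified reward rate (RR_m): each error deducts q from winnings
    return ((1.0 - p_err) - q * p_err) / (dt + d_tot)
```

Both penalties reduce to RR at q = 0 and decrease linearly in q for fixed error rate and delays, so maximizing either favors lower error rates, and hence longer decision times, than maximizing RR alone.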

However, alternative accounts of the data preserve the assumption of reward maximization. Specifically, timing uncertainty may degrade RR estimates, systematically causing longer decision times, or participants may allow for costs of cognitive control required for changing parameters, especially if these yield small increases in RR (cf. diamonds in Fig. 10(a)).

4.5 Robust Decisions in the Face of Uncertainty?

In the analyses of Sects. 4.2–4.4 an objective function is maximized, given known task parameters. However, accurate values may not be available: RR depends on inter-trial delays and SNR, both of which may be hard to estimate. Information-gap theory (Bogacz et al. 2006, 2010) assumes that parameters lie in a bounded uncertainty set and uses a maximin strategy to optimize against the worst-case scenario.

Interval timing studies (Britten et al. 1993) indicate that time estimates are normally distributed around the true duration with a standard deviation proportional to it (Bogacz et al. 2010). This prompted the assumption (Myung and Busemeyer 1989) that the estimated delay Dtot lies in a set \(U_{p}(\alpha_{p}; \tilde{D}_{\mathrm{tot}}) = \{ D_{\mathrm{tot}} > 0: | D_{\mathrm{tot}} - \tilde{D}_{\mathrm{tot}} | \le\alpha_{p} \tilde{D}_{\mathrm{tot}} \}\), of size proportional to the actual delay \(\tilde{D}_{\mathrm{tot}}\), with uncertainty αp analogous to the coefficient of variation in scalar expectancy theory (Maddox and Bohil 1998; Bohil and Maddox 2003). Instead of the optimal threshold of (37), the maximin strategy selects the threshold θMM that maximizes the worst possible RR for \(D_{\mathrm{tot}} \in U_{p}(\alpha_{p}; \tilde{D}_{\mathrm{tot}})\), yielding a one-parameter family of maximin performance curves (MMPCs) (Bogacz et al. 2006):
$$ \frac{\langle \mathit{DT} \rangle}{D_{\mathrm{tot}}} = \frac{(1 + \alpha_p) \tilde{D}_{\mathrm{tot}}}{D_{\mathrm{tot}}} \biggl[ {\frac{1}{{p({\mathrm{err}})\log ( {\frac{{1 - p({\mathrm{err}}) }}{p({\mathrm{err}})}} )}} + \frac{1}{{1 - 2 p({\mathrm{err}})}}} \biggr]^{ - 1} . $$
Like the functions of (40), the MMPCs of (41) predict longer mean decision times than the OPC (39), of which they are scaled versions. Uncertain SNRs can be treated similarly, yielding families of MMPCs that differ from both the OPC and (41), rising to peaks at progressively smaller p(err) as uncertainty increases. An alternative strategy yields robust-satisficing performance curves (RSPCs) (Bogacz et al. 2006, Fig. 13) that provide poorer fits and are not described here.
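For an unbiased delay estimate (\(\tilde{D}_{\mathrm{tot}} = D_{\mathrm{tot}}\)), (41) is simply the OPC scaled by (1 + αp); a minimal, self-contained sketch (function names are ours):

```python
import numpy as np

def opc(p):
    # bracketed term of (41): the optimal performance curve (39)
    return 1.0 / (1.0 / (p * np.log((1.0 - p) / p)) + 1.0 / (1.0 - 2.0 * p))

def mmpc(p, alpha_p, d_ratio=1.0):
    """Maximin performance curve (41) for timing uncertainty alpha_p;
    d_ratio = D~_tot / D_tot (1 for an unbiased delay estimate)."""
    return (1.0 + alpha_p) * d_ratio * opc(p)
```

Each MMPC for delay uncertainty thus lies a constant factor above the OPC, predicting uniformly longer normalized decision times at every error rate, and αp = 0 recovers the OPC itself.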
Figure 11 shows fits of the data to the parameter-free OPC, to the functions RA and RRm of (40), to MMPCs for timing uncertainty and SNR, and to RSPCs for timing uncertainty. While there is little difference among fits to the top 30 %, data from the middle 60 % and lowest 10 % subgroups exhibit patterns that distinguish among the theories. Maximum likelihood computations show that MMPCs for uncertainties in delays provide the best fits, followed by RSPCs for uncertainties in delays and RA (Buhusi and Meck 2005). Thus, greater accuracy can emerge as a consequence of maximizing RR under uncertainty rather than as an objective of optimization.
Fig. 11

Comparisons of performance curves with mean normalized decision times (with SE bars) for three groups of participants sorted by total rewards acquired. Different curves are identified by line style and gray scale in the key, in which \({\rm maximin}_{\rm D}\) and \({\rm maximin}_{\mathrm{SNR}}\) refer to maximin performance curves (MMPCs) for uncertainty in total delay (41) and noise variance respectively, \({\rm robust}_{\rm D}\) to robust-satisficing performance curves (RSPCs) for uncertainty in total delay, RA and RRm denote the accuracy-weighted objective functions of (40), and RR the OPC (39). Note different vertical axis scales in upper and lower panels. From (Ben-Haim 2006)

4.6 Practice, Timing Uncertainty, or the Cost of Control?

To test whether deviations from the OPC are better explained by an emphasis on accuracy or by timing uncertainty, a 2AFC experiment was conducted (Gibbon 1977) with motion stimuli encompassing a range of discriminabilities (moving dots with 0 %, 4 %, 8 %, 16 % and 32 % coherences, fixed in each block; Zacksenhouse et al. 2010), and interval timing tests were administered in parallel (Gibbon 1977). Seventeen participants completed at least 13 sessions in each condition, increasing the likelihood of achieving optimal performance by providing extensive training and allowing the study of practice effects (Zacksenhouse et al. 2010). There were four main findings.

First, average performance converges toward the OPC with increasing experience. Figure 12(a) shows mean normalized decision times (dots) for five error bins averaged over sessions 1, 2–5, 6–9 and 10–13. Performance during the final two sets of sessions is indistinguishable from the OPC for higher coherences, but decision times remain significantly above the OPC for 0 and 4 % coherences.
Fig. 12

Mean normalized decision times (dots) grouped by coherence vs. error proportions for sessions 1 (highest points and curve, red), 2–5 (second highest points and curve, blue), 6–9 (green) and 10–13 (pink). Data points and fitted curves for sessions 6–13 are very similar. (a) Performance compared with OPC (lowest curve, black) and best-fitting MMPCs for each coherence condition. Performance converges toward the OPC, but DTs remain high for \(p(\rm{err}) > 0.35\). (b) Performance compared with DD fits to single threshold for all coherences. Solid and dotted horizontal lines connect model fits and dotted vertical lines connect data points from different sessions having the same coherence. Fits connected by solid lines exclude 0 and 4 % coherences; fits connected by dotted lines include all coherences. Single threshold fits capture longer DTs at high error rates better than OPC and MMPCs. Panel (b) adapted from (Zacksenhouse et al. 2010) (Color figure online)

Second, the accuracy-weighted objective function RRm of (40) outperforms the OPC in fitting decision times, with accuracy weight decreasing monotonically through sessions 1–9 and thereafter remaining at q≈0.2 (not shown here, see Zacksenhouse et al. (2010)), suggesting that participants may initially favor accuracy, but that this diminishes with practice.

Third, timing inaccuracy throughout all but the first session, independently assessed by participants’ coefficients of variation in a peak-interval task, is significantly correlated with their distances from the OPC Zacksenhouse et al. (2010). Moreover, this provides a better account of deviations from the OPC than weighting accuracy by the parameter q in RRm, supporting the hypothesis that humans can learn to maximize rewards by devaluing accuracy, with a deviation from optimality inversely related to their timing ability. However, even after long practice, MMPCs based on timing uncertainty fail to capture performance for the two lowest coherences (Fig. 12(a)), suggesting that other factors may be involved, including the cost of cognitive control (Zacksenhouse et al. 2010).

To test this fourth possibility, Balci et al. (2011) computed the single optimal threshold over all coherence conditions. Figure 12(b) shows that the resulting curve fits the full range of data for later sessions (6–13), suggesting that, given practice, participants adopted such a threshold. Rewards for this single threshold differed little from those for thresholds optimized for each coherence condition, suggesting that participants may seek one threshold that does best over all conditions, avoiding estimation of coherences and control of thresholds from block to block. Control costs are discussed in (Britten et al. 1993).
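The single-threshold computation can be sketched using the standard DD expressions for error rate and mean decision time at drift a, noise c and thresholds ±θ, namely p(err) = 1/(1 + e^{2aθ/c²}) and ⟨DT⟩ = (θ/a) tanh(aθ/c²) (Bogacz et al. 2006). The drifts and delay below are illustrative stand-ins, not the experimental values:

```python
import numpy as np

def er_dt(a, theta, c=1.0):
    # standard DD error rate and mean decision time for drift a,
    # noise c, and symmetric thresholds +/- theta
    er = 1.0 / (1.0 + np.exp(2.0 * a * theta / c**2))
    dt = (theta / a) * np.tanh(a * theta / c**2)
    return er, dt

def mean_rr(theta, drifts, d_tot=2.0):
    # reward rate averaged over blocks with different drifts (coherences)
    return np.mean([(1.0 - er) / (dt + d_tot)
                    for er, dt in (er_dt(a, theta) for a in drifts)])

drifts = [0.2, 0.4, 0.8, 1.6]            # four illustrative coherences
thetas = np.linspace(0.05, 5.0, 500)
best = thetas[np.argmax([mean_rr(th, drifts) for th in thetas])]
```

The maximizing threshold is a compromise, higher than optimal for the easiest condition and lower than optimal for the hardest, so it spares the observer from estimating coherence and retuning the threshold block by block, at little cost in total reward.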

4.7 Prior Expectations and Trial-to-Trial Adjustments

Given prior information on the probabilities of observing each stimulus in a 2AFC task, a DD process can be optimized by appropriately shifting the initial condition x(0); rewards that favor one response over the other can be treated similarly (Buhusi and Meck 2005). Comparisons of these predictions with human behavioral data (Dutilh et al. 2009; Petrov et al. 2011) found that participants achieved 97–99 % of maximum reward. A related study of monkeys used a fixed stimulus presentation period that obviated the need for a speed-accuracy tradeoff, but in which differences in rewards for the two responses were signaled before each trial and motion coherences varied randomly between trials. The animals achieved 98 % and 99.5 % of maximum rewards (Balci et al. 2011), and fits of LIP recordings to an accumulator model (Balci et al. 2011) indicated that this was also achieved by shifting initial conditions. Human behavioral studies revealed similar near-optimal shifts in response to biased rewards (Balci et al. 2011, Fig. 9).
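A minimal Euler–Maruyama simulation of such a starting-point shift (all parameter values are illustrative):

```python
import numpy as np

def dd_trial(a, c, theta, x0, dt=1e-3, t_max=100.0, rng=None):
    """One drift-diffusion trial with drift a, noise c, thresholds
    +/- theta, and biased starting point x0; returns (choice, time)."""
    rng = rng or np.random.default_rng()
    x, t, step = x0, 0.0, c * np.sqrt(dt)
    while abs(x) < theta and t < t_max:
        x += a * dt + step * rng.standard_normal()
        t += dt
    return (1 if x >= theta else -1), t

# Shifting x(0) toward the more probable (or better rewarded) boundary
# speeds and biases responses to that alternative:
rng = np.random.default_rng(0)
hits = sum(dd_trial(1.0, 1.0, 1.0, 0.5, rng=rng)[0] == 1
           for _ in range(200))
```

For stimuli with prior probability π of the first alternative, a standard result (cf. Bogacz et al. 2006) gives the RR-optimizing shift x(0) = (c²/2a) log(π/(1 − π)); here we simply fix x0 = 0.5 to exhibit the effect.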

Humans also exhibit adjustment effects in response to repetition and alternation patterns that necessarily occur, given stimuli chosen with equal probability (Balci et al. 2011). Accumulator models developed in (Posner and Snyder 1975) indicate that this too is due to initial condition shifts, presumably reflecting expectations that patterns will persist even when stimuli are purely random. Indeed, pattern recognition is advantageous in natural situations, allowing prior beliefs to adapt to match stationary or slowly changing environments (Balci et al. 2011).

The work described in this section depends on simple models that at first reproduce previous data, then predict outcomes of new experiments, and finally admit analyses and modifications that account for differences between predictions and the new data. The explicit OPC expression (39) is crucial here; it seems unlikely that such predictions could readily emerge from computational simulations alone. These models are certainly useful, but they do not immediately connect to cellular-level descriptions such as those of Sects. 2–3. We now discuss this connection.

5 Connecting the Levels

The accumulator models of Sect. 4 address optimality constraints at the systems level, but they are too abstract to identify mechanisms or constraints arising from underlying neural circuits. To do this the abstract models must be related to biophysical aspects of neural function. For example, spiking-neuron models can be reduced in dimension by averaging over populations of cells Holmes and Cohen (2014), allowing them to include the effects of synaptic time constants and neurotransmitters that can change cellular excitability and synaptic efficacy (Bogacz et al. 2006), effectively adjusting gains g in frequency–current functions (32) Simen et al. (2009). I describe one such reduction in this section. For a review of spiking models for decision making, see (Feng et al. 2009).

5.1 Reduction of a Spiking Model to Accumulators

In (Soetens et al. 1985) the network model of a microcircuit in area LIP (Cho et al. 2002; Jones et al. 2002; Gao et al. 2009; Goldfarb et al. 2012) was extended to simulate the effects of NE release on excitatory (AMPA, NMDA) and inhibitory (GABA) receptors, showing that co-modulation can tune speed and accuracy to provide good performance over a substantial parameter range. The network contains 2000 leaky IF neurons in four groups: two stimulus-selective populations each containing 240 excitatory (pyramidal) cells, a non-selective pool of 1120 excitatory cells, and an inhibitory population of 400 interneurons; see Fig. 13. The state variables are transmembrane voltages vj(t) and synaptic activities sAMPA,j(t), sNMDA,j(t) and sGABA,j(t), described by the following ODEs:
$$\begin{aligned} C_j \frac{\mathrm{d}v_j}{\mathrm{d}t} =& - g_{L}(v_j - v_{L}) + I_{\mathrm{syn},j}(t) , \end{aligned}$$
$$\begin{aligned} \frac{\mathrm{d}s_{\mathrm{type},j}}{\mathrm{d}t} =& -\frac{s_{\mathrm{type},j}}{T_{\mathrm{type}}} + \sum_{l} \delta\bigl(t - t_{j}^{l}\bigr) . \end{aligned}$$
Here \(I_{\mathrm{syn},j}(t) = -\sum_{\mathrm{type},k} g_{\mathrm{type},k}\, s_{\mathrm{type},k} (v_j - v_{\mathrm{type}})\), where type = AMPA, NMDA, GABA, or AMPA-ext, vtype is the reversal potential and Ttype the time constant for that synapse type, k ranges over all of cell j’s presynaptic neurons, and superscripts l index times \(t_{j}^{l}\) at which the jth cell crosses a threshold vth, emits a delta function, and is reset to vr for a refractory period τref, cf. Sect. 2.3 and Fig. 5. The NMDA dynamics require two ODEs to model fast rise followed by slow decay (Yu and Cohen 2009), cf. Sect. 2.5:
$$\begin{aligned} \frac{\mathrm{d}s_{\mathrm{NMDA}, j}(t)}{\mathrm{d}t} = & -\frac{s_{\mathrm{NMDA}, j}(t)}{\tau _{\mathrm{NMDA},\mathrm{decay}}} + \alpha x_{j}(t) \bigl[1-s_{\mathrm{NMDA}, j}(t)\bigr] , \end{aligned}$$
$$\begin{aligned} \frac{\mathrm{d}x_{j}(t)}{\mathrm{d}t} = & -\frac{x_{j}(t)}{\tau_{\mathrm{NMDA},\mathrm{rise}}} + \sum _{l} \delta\bigl(t-t_{j}^{l}\bigr) . \end{aligned}$$
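A forward-Euler sketch of one such cell, (42), driving a single exponentially decaying synaptic trace, (43). All parameter values are illustrative stand-ins, not those of the model (units: ms, mV, nA, µS, nF):

```python
def lif_synapse(i_ext, t_stop=500.0, dt=0.01,
                c_m=0.5, g_l=0.025, v_l=-70.0,
                v_th=-50.0, v_r=-55.0, t_ref=2.0, tau_s=2.0):
    """Leaky IF cell (42) feeding an exponentially decaying synaptic
    trace (43); returns (spike times, final trace value)."""
    v, s, refrac = v_r, 0.0, 0.0
    spikes = []
    for k in range(int(t_stop / dt)):
        t = k * dt
        s -= dt * s / tau_s                 # ds/dt = -s/tau + spike deltas
        if refrac > 0.0:                    # hold at reset after a spike
            refrac -= dt
            continue
        v += dt * (-g_l * (v - v_l) + i_ext) / c_m
        if v >= v_th:                       # threshold crossing: emit a
            v, refrac = v_r, t_ref          # delta, reset, go refractory
            s += 1.0
            spikes.append(t)
    return spikes, s

spikes, trace = lif_synapse(0.6)   # suprathreshold drive: regular spiking
```

With these values the steady-state voltage vL + I/gL = −46 mV exceeds threshold, so the cell fires periodically; reducing the drive below (vth − vL)gL = 0.5 nA silences it, the discrete analogue of the critical current in the f–I curves discussed below.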
Fig. 13

(a) The IF network of (Gao et al. 2011) contains three populations of excitatory cells; one is non-selective, and each selective population has relatively stronger self-excitation and responds preferentially to one stimulus. Interneurons provide overall inhibition. Excitatory and inhibitory synapses, denoted by filled and open ovals, respectively, are all to all. (b) Stimuli excite both selective populations, but inhibition typically suppresses one; a decision is made when the averaged firing rate of the first population crosses threshold. Adapted from (Gao et al. 2011)

External inputs sAMPA-ext,j(t), modeled by OU processes driven by Gaussian noise of mean μ and standard deviation σ, enter all cells:
$$ \mathrm{d}s_{\mathrm{AMPA}\mbox{-}\mathrm{ext},j} = -\frac{(s_{\mathrm{AMPA}\mbox{-}\mathrm{ext},j} - \mu)\, \mathrm{d}t}{\tau_{\mathrm{AMPA}}} + \sigma \, \mathrm{d}W_j . $$
Stimuli are represented by modified mean inputs μ(1±E) to the selective cells with appropriately adjusted variances σj, where E∈[0,1] denotes stimulus discriminability (E=1 for high SNR; E=0 for zero SNR). Neuromodulation is represented by multiplying excitatory and inhibitory conductances gAMPA,k, gNMDA,k and gGABA,k by factors γE, γI. Eliminating irrelevant stype,j’s (excitatory neurons do not release GABA, inhibitory neurons do not release AMPA or NMDA), (42)–(46) constitute a 9200-dimensional hybrid, stochastic dynamical system that is analytically intractable and computationally demanding.
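Equation (46) is an Ornstein–Uhlenbeck process, easily sampled by Euler–Maruyama; its stationary standard deviation is σ√(τ/2). Values below are illustrative:

```python
import numpy as np

def ou_input(mu, sigma, tau=2.0, t_stop=2000.0, dt=0.1, seed=0):
    """Euler-Maruyama sample path of the external input (46):
    ds = -(s - mu) dt / tau + sigma dW."""
    rng = np.random.default_rng(seed)
    n = int(t_stop / dt)
    s = np.empty(n)
    s[0] = mu
    for k in range(n - 1):
        s[k + 1] = s[k] - (s[k] - mu) * dt / tau \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
    return s

s = ou_input(mu=1.0, sigma=0.2)
# sample mean ~ mu; sample std ~ sigma * sqrt(tau / 2) = 0.2 here
```

Shifting μ to μ(1 ± E) for the two selective populations, as in the text, biases these fluctuating inputs without changing their correlation time τ.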

Following the mean-field method of (Brunel and Wang 2001; Wong and Wang 2006; Eckhoff et al. 2011; Deco et al. 2013), the network is first reduced to four populations by assuming a fixed average voltage \(\bar{v} = (v_{r}+v_{\mathrm{th}}) / 2\) to estimate synaptic currents. These are multiplied by the appropriate numbers Nj of presynaptic cells in each population and by averaged synaptic variables \(\bar{s}_{\mathrm{type}, j}\), and summed to create input currents Itype,j to each population (the index j∈{1,2,3,4} now denotes the population). Individual, evolving cell voltages are replaced by population-averaged, time-dependent firing rates determined by frequency–current (f–I) curves φj(Isyn,j), analogous to the input–output function of (32). This yields an 11-dimensional system described by 4 firing rates νj(t), one inhibitory population-averaged synaptic variable \(\bar{s}_{\mathrm{GABA}}(t)\), and two variables \(\bar{s}_{\mathrm{AMPA},j}(t)\) and \(\bar{s}_{\mathrm{NMDA},j}(t)\) for each excitatory subpopulation (6 in all).

Further reduction to two populations relies on time scale separation (Berridge and Waterhouse 2003; Aston-Jones and Cohen 2005; Sara 2009). Time constants for AMPA and GABA are fast (TAMPA=2 ms, TGABA=5 ms), while that for NMDA decay is slow (TNMDA=100 ms); \(\bar{s}_{\mathrm{AMPA},j}(t)\) and \(\bar{s}_{\mathrm{GABA},j}(t)\) therefore rapidly approach quasi-steady states, as in Rinzel’s reduction of HH in Sect. 2.2. This eliminates 3 ODEs for the excitatory populations and 1 for the inhibitory population. Firing rates likewise track values set by the f–I curves, since their relaxation is governed by the fast time constant TAMPA:
$$ \frac{\mathrm{d} \nu_j}{\mathrm{d}t} = \frac{-(\nu_j - \varphi_j(I_{\mathrm{syn},j}))}{T_{\mathrm{AMPA}}} , $$
so νj(t)≈φj(Isyn,j(t)) for the non-selective and interneuron populations and the ODEs for ν3 and νI drop out. Also, with stimuli on, the non-selective population typically has a less variable firing rate than the selective populations, so that \(\bar{s}_{\mathrm{NMDA},3}\) can be fixed, leaving four ODEs for the synaptic variables and firing rates of the selective populations:
$$\begin{aligned} \frac{\mathrm{d} \bar{s}_j}{\mathrm{d}t} =& -\frac{\bar{s}_j}{T_{\mathrm{NMDA}}}+0.641(1-\bar{s}_j) \frac{\nu_j}{1000} , \end{aligned}$$
$$\begin{aligned} \frac{\mathrm{d}\nu_j}{\mathrm{d}t} =& -\frac{\nu_j-\varphi_j(I_{\mathrm{syn},j})}{T_{\mathrm{pop}f}} ; \quad j = 1, 2 , \end{aligned}$$
where we write \(\bar{s}_{\mathrm{NMDA},j} = \bar{s}_{j}\) for brevity.2 The \(\bar{s}_{j}(t)\)’s and the firing rates νj(t) correspond to the activity levels xj(t) in the LCA (30)–(31), and white noise is added as in those SDEs.

To complete the reduction, currents must be estimated self-consistently. This is complicated by the fact that Isyn,j in (49) contains terms that depend on both \(\bar{s}_{j}\) and φj(Isyn,j), so that the vector field is defined recursively. Ideally, we seek relationships of the form \(I_{\mathrm{syn},j} = \alpha_{j1} \bar{s}_{1} +\alpha_{j2} \bar{s}_{2} +\beta_{j1} \nu_{1} + \beta_{j2} \nu_{2} + I_{\mathrm{const},j}\), as in (Servan-Schreiber et al. 1990). Piecewise-smooth f–I curves help here Wang (2008), since they predict critical currents beyond which firing rates rise linearly. The parameters γE,γI enter via the AMPA, NMDA and GABA components of the currents Iconst,j and coefficients αjk,βjk. See Eckhoff et al. (2009, 2011) for details.
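With currents of the linear form above and a piecewise-linear f–I curve, the reduced system (48)–(49) can be integrated directly. All weights, gains, thresholds and stimulus values below are our illustrative assumptions, and the ν-dependent current terms are dropped for simplicity; this is a sketch, not the calibrated model:

```python
import numpy as np

T_NMDA, T_POP = 100.0, 2.0          # ms; cf. the time constants above

def phi(i_syn, gain=60.0, i_c=0.3):
    """Piecewise-linear f-I curve: silent below a critical current,
    linear above it (cf. the sub-/super-threshold pieces in Fig. 16)."""
    return gain * np.maximum(i_syn - i_c, 0.0)

def reduced_2pop(stim, w_self=0.35, w_cross=-0.25,
                 t_stop=2000.0, dt=0.1):
    """Euler integration of (48)-(49): two selective populations with
    self-excitation and mutual inhibition; returns final (s1, s2)."""
    s = np.array([0.1, 0.1])
    nu = np.zeros(2)
    for _ in range(int(t_stop / dt)):
        i_syn = w_self * s + w_cross * s[::-1] + stim
        nu += dt * (phi(i_syn) - nu) / T_POP          # Eq. (49)
        s += dt * (-s / T_NMDA + 0.641 * (1.0 - s) * nu / 1000.0)  # (48)
    return s

s = reduced_2pop(np.array([0.32, 0.28]))   # population 1 weakly favored
```

With a symmetric stimulus this deterministic skeleton is bistable and added noise would select the winner; the small 0.32 vs. 0.28 asymmetry (a nonzero E) reliably drives s1 up while s2 decays, mimicking the winner-take-all choice attractors and asymmetric basins of Fig. 16.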

Computations of reward rates over the neuromodulation plane verify that (48)–(49) capture key properties of the full spiking model: Fig. 14. Bifurcation diagrams (Fig. 15) reveal that up to nine fixed points, including four sinks, can coexist, as shown in the phase-plane projections of Fig. 16. The linear and approximately parabolic components of each nullcline derive from sub- and super-threshold parts of the f–I curves φj(Isyn,j), as described in Eckhoff et al. (2009). The top two and bottom left panels of Fig. 16 show the case (γE,γI)=(1,1) (on the RR ridge of Fig. 14(b)) without stimulus, with identical stimuli (E=0, SNR=0), and with E=0.128 (SNR>0). States with both \(\bar{s}_{j}\)’s low represent lack of response, possibly awaiting stimulus appearance. With stimuli present, the basin of attraction of the low-low sink shrinks, allowing noisy trials to reach the choice attractors \(\bar{s}_{1} \gg \bar{s}_{2}\) and \(\bar{s}_{2} \gg\bar{s}_{1}\), as in Eckhoff et al. (2009); the basin of the correct attractor is larger for E>0 (bottom left panel). Finally, for (γE,γI)=(2,1.2) (lower right region in Fig. 14(b) and Fig. 16, bottom right panel), a state with \(\bar{s}_{1}, \bar{s}_{2}\) both high corresponds to impulsive behavior in which near-random choices occur.
Fig. 14

Contour maps of RR over the (γE,γI)-neuromodulation plane for the full network (a) and reduced 2-population system (b). Inhibition dominates at upper left where excitatory cells rarely exceed threshold, at lower right excitation dominates and thresholds are rapidly crossed, giving high error rates; a ridge of high RRs separates these low RR regions. From Eckhoff et al. (2009) (Color figure online)
Fig. 15

Bifurcation diagrams for the noise-free 2-population model for (γE,γI)=(1,1) with E=0 (a) and E=0.128 (b), showing \(\bar{s}_{1}\) vs. stimulus strength μ. Pitchfork bifurcations and pairs of saddle-nodes occur for E=0 (a), finite coherence breaks the symmetry (b). Only the upper- and lower-most branches are stable, cf. Fig. 16
Fig. 16

Dynamics of the 2-population model projected onto (s1, s2) plane, with \(\dot{\bar{s}}_{1} = 0\) nullcline vertical line and curve with maximum at top (orange), \(\dot{\bar{s}}_{2} = 0\) nullcline horizontal line and curve with maximum at right (green), sinks as filled circles, saddles with one- and two-dimensional stable manifolds as open triangles and open circles. From top left clockwise, nullclines, fixed points, and sample trial paths are shown without stimulus at (γE,γI)=(1,1), with stimulus and E=0 (symmetric) at (γE,γI)=(1,1), without stimulus at (γE,γI)=(2,1.2), and with stimulus for E=0.128 at (γE,γI)=(1,1). From Eckhoff et al. (2009, 2011) (Color figure online)

5.2 Physiological Constraints to Optimality?

The global dynamics of the reduced system (48)–(49), with its multiple attractors (Fig. 16), differs qualitatively and quantitatively from the optimal DD process of (34). While orbits that approach a saddle point and drift along its unstable manifold can approximate one-dimensional DD dynamics, acceleration away from the saddle and deceleration toward an attractor cause suboptimal integration. Moreover, even if an attracting one-dimensional subspace exists, deviations from it effectively blur the decision thresholds, and firing rate bounds preclude negative and arbitrarily high activations, preventing perfect subtraction as in (33) (Wang 2002; Wong and Wang 2006). Adjustments in baseline activity and gain can keep accumulator states in near-linear dynamical ranges (Eckhoff et al. 2011), but the fact that nonlinear dynamics emerge from a biophysically based model suggests physiological obstructions to optimality, especially when task conditions span a wide range (Brunel and Wang 2001; Renart et al. 2003; Wong and Wang 2006).

In this regard it is worth noting that choice attractors such as those of Fig. 16, which can persist in the absence of stimuli (Fig. 15(a)), have been identified with short-term working memory states (Jones 1994; Guckenheimer and Holmes 1983). Working memory is clearly important for delayed-response tasks such as those of Wong and Wang (2006) and in many other aspects of our lives. It is therefore plausible that cortical circuits have evolved to allow flexible multi-attractor dynamics that are inconsistent with optimal strategies in artificially simple tasks such as 2AFC.

6 Discussion: Some Omissions and Open Problems

In this article I have outlined some mathematical models in neuroscience that draw on dynamical systems theory. Specifically, Sects. 2–3 describe mechanistic, cellular-scale models for the generation of action potentials (spikes) and for electrical and chemical synapses, discuss methods for reducing their dimension and hence simplifying their analysis, and illustrate with a model of a central pattern generator for legged locomotion. This represents the dominant approach to modeling among neuroscientists and applied mathematicians. In contrast, Sect. 4 takes a high-level perspective, using stochastic ODEs and a drift-diffusion (DD) model to represent accumulation of evidence in brain areas and to predict a strategy for maximizing reward rate in binary perceptual choice tasks, enabled by the simplicity of the DD model. Such models exemplify the connectionist neural networks in wide use among cognitive psychologists. They can be justified empirically by their ability to fit behavioral data, and, more directly, from observations of spike rates in animal studies. They can also be derived, albeit non-rigorously, from cellular-scale spiking models, as sketched in Sect. 5.

These models illustrate the range of scales and models in mathematical neuroscience, but as noted in Sect. 1, many important ideas and approaches are missing from this account. While spinal and thoracic CPGs generate functional motor rhythms, brains exhibit cortical oscillations over a wide frequency range (2–100 Hz), as detected via electroencephalography (scalp electrodes) and extracellular electrodes recording local field potentials (Eckhoff et al. 2011). There is much debate about the mechanisms and functions of such oscillations Eckhoff et al. (2011), including their rôles in “binding” different sensory modalities Eckhoff et al. (2011) and in diseased states Eckhoff et al. (2011), and their generation by and effects on spikes from individual cells Eckhoff et al. (2011). Further modeling, with phase oscillators as well as HH-type equations, could shed light on these cortical rhythms.

The study of how organisms learn about and adapt to changing environments is a major area in which reinforcement learning (RL) Eckhoff et al. (2011) and the extended notion of hierarchical reinforcement learning Eckhoff et al. (2011) draw on studies of the dopamine neuromodulation system Wong and Wang (2006) to propose discrete dynamical updates following rewarded behaviors. Dimension reduction can occur in RL (van Ravenzwaaij et al. 2012, Figs. 5–7). Goal-directed planning and searching also employ iterative models in which different strategies are explored (Servan-Schreiber et al. 1990; Cohen et al. 1990). On the mechanistic level, cortical rhythms also may be important in learning (Deco et al. 2013).

Probabilistic ideas are also widely used, perhaps more widely than those from dynamical systems. An influential subculture considers probabilistic computations based on Bayes’ rule (Usher and Cohen 1999; Wang 1999; Renart et al. 2003; Wong and Wang 2006; Deco et al. 2013) that offer normative accounts of task performance as dynamic updating of priors. Models have been developed for sensori-motor control, e.g. Zhang and Barash (2004), Feng et al. (2009), Rorie et al. (2010), Gao et al. (2011), and proposed to describe stimulus identification, decision making and learning (Buzsaki 2006; Baker 2007; Fries et al. 2007). These models are mostly empirical, but there is increasing evidence that the brain can code probability distributions and perform Bayesian computations, possibly via cortico-basal ganglia circuits (Kopell et al. 2009; Whittington et al. 2009; Wang 2010). Information theory (Gray et al. 1989; Gray and Singer 1995; Fries 2005), originally suggested by Wiener as a descriptor for sensory receptors (McCarthy et al. 2011, 2012; Uhlhaas and Singer 2010), has also been used to analyze spike trains and quantify their information content in attempts to understand neural coding (Wang 2010) as well as learning (Sutton 1988; Sutton and Barto 1998). More generally, probabilistic methods including hidden Markov models are useful in analyzing multi-electrode recordings in terms of transitions among brain states, e.g. (Botvinick et al. 2009; Botvinick 2012).

In closing I note some open problems, focusing on ones that arise from the models discussed in Sects. 2–5. A persistent difficulty is in identifying brain areas and neural substrates in which specific “computations” are done. Excepting some specialized sensory and sensori-motor circuits, such as those dedicated to reflexive actions, most computations appear to activate multiple brain areas. For example, while neural firing rates in area LIP are similar to DD processes (Schultz et al. 1997, 2000), pharmacological inactivation of LIP may not deprive an animal of the ability to identify stimuli and respond appropriately (Swinehard and Abbott 2006) (although it may slow responses). Computational models involving multiple brain areas have been constructed for over 30 years (Solway and Botvinick 2012), but there have been few attempts to analyze them mathematically (an exception is (Torta et al. 2009), which shows that the multi-layer network of (Bayes 1763) can be reduced to a DD process with time-varying input). Moreover, until recently, simultaneous neural recordings from multiple areas have not been available to constrain multi-area models.

Here are some specific suggestions to pursue:
  • Improved theory and analytical methods for hybrid dynamical systems, especially (large) networks of integrate-and-fire cells.

  • Better descriptions of, and methods for extracting, macroscopic activity states from averages over large cell populations.

  • Nested sets of models for simple tasks, fitted to electrophysiological and behavioral data.

  • Further use of time scale separation in creating and analyzing models of cognitive dynamics.

  • Analyses of iterative learning algorithms as dynamical systems.

The joys celebrated herein are chiefly in the works of others, the trials come in my attempts to understand them, and to contribute new ideas.


Biologists refer to dendrites and axons as processes: confusing terminology for a mathematician!


Division by 103 accommodates the conventional units of millivolts, nanoamps, and nanosiemens for conductances.



This article draws upon work done since 2000, variously supported by NSF under DMS-0101208 and EF-0425878, AFOSR under FA9550-07-1-0537 and FA9550-07-1-0528, NIH under P50 MH62196, the US-Israel Binational Science Foundation under BSF 2011059, and the J. Insley Pyne Fund of Princeton University. The material in Sects. 23 was adapted from notes for a course taught at Princeton since 2006 to which Philip Eckhoff contributed much. An extended version of Sect. 4 appears in Kording et al. (2004), Kording and Wolpert (2006), Wolpert (2007). The author thanks Fuat Balci for providing Fig. 12 and the anonymous reviewers for their suggestions, and gratefully acknowledges the contributions of many other collaborators, not all of whose work could be cited here.

Copyright information

© Springer Science+Business Media New York 2013