Abstract
We study the reliability of layered networks of coupled “type I” neural oscillators in response to fluctuating input signals. Reliability means that a signal elicits essentially identical responses upon repeated presentations, regardless of the network’s initial condition. We study reliability on two distinct scales: neuronal reliability, which concerns the repeatability of spike times of individual neurons embedded within a network, and pooled-response reliability, which concerns the repeatability of total synaptic outputs from a subpopulation of the neurons in a network. We find that neuronal reliability depends strongly both on the overall architecture of a network, such as whether it is arranged into one or two layers, and on the strengths of the synaptic connections. Specifically, for the type of single-neuron dynamics and coupling considered, single-layer networks are found to be very reliable, while two-layer networks lose their reliability with the introduction of even a small amount of feedback. As expected, pooled responses for large enough populations become more reliable, even when individual neurons are not. We also study the effects of noise on reliability, and find that noise that affects all neurons similarly has much greater impact on reliability than noise that affects each neuron differently. Qualitative explanations are proposed for the phenomena observed.
Notes
“A priori” here refers to one’s best guess based on system parameters alone, without further knowledge of the dynamics. For example, as the system evolves, each neuron will, in time, acquire a mean frequency, which is likely to be different from its intrinsic frequency. But before studying the dynamics of the system—that is, a priori—there is no information on how the two will differ. So we take its a priori value to be equal to its intrinsic frequency.
Inspection of raster plots at N = 1000, for example, clearly shows substantial trial-to-trial variability, with an interesting effect: the variability of single neurons tends to wax and wane over time, and this waxing and waning is differently timed for different neurons.
In this paper, we use the Itô interpretation throughout.
References
Arnold, L. (2003). Random dynamical systems. New York: Springer.
Averbeck, B., Latham, P. E., & Pouget, A. (2006). Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5), 358–366.
Aviel, Y., Mehring, C., Abeles, M., & Horn, D. (2003). On embedding synfire chains in a balanced network. Neural Computation, 15, 1321–1340.
Bair, W., Zohary, E., & Newsome, W. T. (2001). Correlated firing in macaque visual area MT: Time scales and relationship to behavior. Journal of Neuroscience, 21(5), 1676–1697.
Banerjee, A. (2006). On the sensitive dependence on initial conditions of the dynamics of networks of spiking neurons. Journal of Computational Neuroscience, 20, 321–348.
Banerjee, A., Seriès, P., & Pouget, A. (2008). Dynamical constraints on using precise spike timing to compute in recurrent cortical networks. Neural Computation, 20, 974–993.
Baxendale, P. H. (1992). Stability and equilibrium properties of stochastic flows of diffeomorphisms. In Progress in Probability (Vol. 27). Boston: Birkhäuser.
Bazhenov, M., Rulkov, N., Fellous, J., & Timofeev, I. (2005). Role of network dynamics in shaping spike timing reliability. Physical Review E, 72, 041903.
Berry, M., Warland, D., & Meister, M. (1997). The structure and precision of retinal spike trains. PNAS, 94, 5411–5416.
Bertschinger, N., & Natschläger, T. (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16, 1413–1436.
Börgers, C., Epstein, S., & Kopell, N. (2005). Background gamma rhythmicity and attention in cortical local circuits: A computational study. PNAS, 102, 7002–7007.
Bruno, R. M., & Sakmann, B. (2006). Cortex is driven by weak but synchronously active thalamocortical synapses. Science, 312, 1622–1627.
Bryant, H. L., & Segundo, J. P. (1976). Spike initiation by transmembrane current: A white-noise analysis. Journal of Physiology, 260, 279–314.
de Ruyter van Steveninck, R., Lewen, G., Strong, S., Koberle, R., & Bialek, W. (1997). Reproducibility and variability in neuronal spike trains. Science, 275, 1805–1808.
Doiron, B., Chacron, M. J., Maler, L., Longtin, A., & Bastian, J. (2003). Inhibitory feedback required for network burst responses to communication but not to prey stimuli. Nature, 421, 539–543.
Douglas, E., & Martin, K. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419–451.
Eckmann, J.-P., & Ruelle, D. (1985). Ergodic theory of chaos and strange attractors. Reviews of Modern Physics, 57, 617–656.
Ermentrout, G. B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8, 979–1001.
Ermentrout, G. B., & Kopell, N. (1984). Frequency plateaus in a chain of weakly coupled oscillators, I. SIAM Journal on Mathematical Analysis, 15, 215–237.
Faisal, A. A., Selen, L. P. J., & Wolpert, D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9, 292–303.
Hodgkin, A. (1948). The local electric changes associated with repetitive action in a non-medullated axon. Journal of Physiology, 107, 165–181.
Hunter, J., Milton, J., Thomas, P., & Cowan, J. (1998). Resonance effect for neural spike time reliability. Journal of Neurophysiology, 80, 1427–1438.
Johnston, D., & Wu, S. (1997). Foundations of cellular neurophysiology. Cambridge: MIT.
Kandel, E., Schwartz, J., & Jessell, T. (1991). Principles of neural science (4th ed.). New York: McGraw-Hill.
Kara, P., Reinagel, P., & Reid, R. C. (2000). Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron, 27, 636–646.
Kifer, Y. (1986). Ergodic theory of random transformations. Boston: Birkhäuser.
Koch, C. (1999). Biophysics of computation: Information processing in single neurons. Oxford: Oxford University Press.
Kunita, H. (1990). Stochastic flows and stochastic differential equations. Cambridge studies in advanced mathematics (Vol. 24). Cambridge: Cambridge University Press.
Lampl, I., Reichova, I., & Ferster, D. S. (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.
Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. (2000). Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83, 808–827.
Le Jan, Y. (1987). Équilibre statistique pour les produits de difféomorphismes aléatoires indépendants. Annales de l’Institut Henri Poincaré Probabilités et Statistiques, 23(1), 111–120.
Ledrappier, F., & Young, L.S. (1988). Entropy formula for random transformations. Probability Theory and Related Fields, 80, 217–240.
Lin, K. K., Shea-Brown, E., & Young, L.-S. (2009a). Reliability of coupled oscillators. Journal of Nonlinear Science (in press).
Lin, K. K., Shea-Brown, E., & Young, L.-S. (2009b). Reliability of layered neural oscillator networks. Communications in Mathematical Sciences (in press).
Lu, T., Liang, L., & Wang, X. (2001). Temporal and rate representations of time-varying signals in the auditory cortex of awake primates. Nature Neuroscience, 4, 1131–1138.
Maei, H. R., & Latham, P. E. (2005). Can randomly connected networks exhibit long memories? Preprint, Gatsby Computational Neuroscience Unit.
Mainen, Z., & Sejnowski, T. (1995). Reliability of spike timing in neocortical neurons. Science, 268, 1503–1506.
Mazurek, M., & Shadlen, M. (2002). Limits to the temporal fidelity of cortical spike rate signals. Nature Neuroscience, 5, 463–471.
Murphy, G., & Rieke, F. (2007). Network variability limits stimulusevoked spike timing precision in retinal ganglion cells. Neuron, 52, 511–524.
Pakdaman, K., & Mestivier, D. (2001). External noise synchronizes forced oscillators. Physical Review E, 64, 030901–030904.
Perkel, D., & Bullock, T. (1968). Neural coding. Neurosciences Research Program Bulletin, 6, 221–344.
Pikovsky, A., Rosenblum, M., & Kurths, J. (2001). Synchronization: A universal concept in nonlinear sciences. Cambridge: Cambridge University Press.
Reyes, A. (2003). Synchronydependent propagation of firing rate in iteratively constructed networks in vitro. Nature Neuroscience, 6, 593–599.
Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1996). Spikes: Exploring the neural code. Cambridge: MIT.
Rinzel, J., & Ermentrout, G. B. (1998). Analysis of neural excitability and oscillations. In C. Koch, & I. Segev (Eds.), Methods in neuronal modeling (pp. 251–291). Cambridge: MIT.
Ritt, J. (2003). Evaluation of entrainment of a nonlinear neural oscillator to white noise. Physical Review E, 68, 041915–041921.
Seriès, P., Latham, P. E., & Pouget, A. (2004). Tuning curve sharpening for orientation selectivity: Coding efficiency and the impact of correlations. Nature Neuroscience, 7, 1129–1135.
Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18, 3870–3896.
Shepherd, G. (2004). The synaptic organization of the brain. Oxford: Oxford University Press.
Teramae, J., & Fukai, T. (2007). Reliability of temporal coding on pulse-coupled networks of oscillators. arXiv:0708.0862v1 [nlin.AO].
Teramae, J., & Tanaka, D. (2004). Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators. Physical Review Letters, 93, 204103.
Terman, D., Rubin, J., Yew, A., & Wilson, C. J. (2002). Activity patterns in a model for the subthalamopallidal network of the basal ganglia. Journal of Neuroscience, 22, 2963–2976.
van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.
van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.
Vogels, T., & Abbott, L. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25, 10786–10795.
Winfree, A. (2001). The geometry of biological time. New York: Springer.
Zhou, C., & Kurths, J. (2003). Noise-induced synchronization and coherence resonance of a Hodgkin–Huxley model of thermally sensitive neurons. Chaos, 13, 401–409.
Zohary, E., Shadlen, M. N., & Newsome, W. T. (1994). Correlated neuronal discharge rate and its implication for psychophysical performance. Nature, 370, 140–143.
Acknowledgements
We thank David Cai, Anne-Marie Oswald, Alex Reyes, and John Rinzel for their helpful discussions of this material. We acknowledge a Career Award at the Scientific Interface from the Burroughs Wellcome Fund (E.S.B.), and a grant from the NSF (L.S.Y.).
Action Editor: David Terman
Appendices
Appendix A Review of random dynamical systems theory
In this appendix, we review some relevant mathematical theory that justifies the use of Lyapunov exponents in determining the reliability of a system. “Reliability” here refers exclusively to “neuronal reliability”. As these results are very general and can potentially be used elsewhere, we will present them in a context considerably more general than the system defined by Eq. (1).
Consider a stochastic differential equation (SDE) of the form

$$dx_t \;=\; a(x_t)\,dt \;+\; \sum_{i=1}^{k} b_i(x_t)\,dW^i_t\,. \qquad (7)$$

Here x _{ t } ∈ M, where M is a compact Riemannian manifold of any dimension d ≥ 1, a(·) and the b _{ i }(·) are smooth functions on M, and \((W^1_t, \cdots, W^k_t)\) is a k-dimensional standard Brownian motion. In general, the equation is assumed to be of Stratonovich type, but when \(M\,=\,{\mathbb T}^N \equiv {\mathbb{S}}^1 \times {\mathbb{S}}^1 \times \cdots \times {\mathbb{S}}^1\), we have the choice between the Itô and Stratonovich integrals.^{Footnote 5} Equation (1) is a special case of this setting with \(M={\mathbb T}^N\).
1.1 A.1 Stochastic flows associated with SDEs (see e.g. Kunita 1990; Baxendale 1992)
In general, one fixes an initial condition x _{0} and looks at the distribution of x _{ t } for t > 0. Under fairly general conditions, these distributions converge to the unique stationary measure μ, the density of which is given by the Fokker–Planck equation. For our purposes, however, this is not the most relevant viewpoint. Since reliability is about a system’s reaction to a single realization of Brownian motion at a time, and concerns the simultaneous evolution of all or large ensembles of initial conditions, of relevance to us are not the distributions of x _{ t } but flow maps of the form \(F_{t_1,t_2;\omega}\). Here t _{1} < t _{2} are two points in time, ω is a sample Brownian path, and \(F_{t_1,t_2;\omega}(x_{t_1})=x_{t_2}\), where x _{ t } is the solution of (7) corresponding to ω. A well-known theorem states that such stochastic flows of diffeomorphisms are well defined if the functions a(x) and b(x) in Eq. (7) are sufficiently smooth (see Kunita 1990). More precisely, the maps \(F_{t_1,t_2;\omega}\) are well defined for almost every ω, and they are invertible, smooth transformations with smooth inverses. Moreover, \(F_{t_1,t_2;\omega}\) and \(F_{t_3,t_4;\omega}\) are independent for t _{1} < t _{2} < t _{3} < t _{4}. These results allow us to treat the evolution of systems described by (7) as compositions of random, i.i.d., smooth maps.
Since reliability questions involve one ω at a time, the stationary measure μ, which gives the steadystate distribution averaged over all ω, is not the object of direct interest. Of relevance are the sample measures {μ _{ ω }}, which are the conditional measures of μ given the past. More precisely, we think of ω as defined for all t ∈ ( − ∞ , ∞ ) and not just for t > 0. Then μ _{ ω } describes what one sees at t = 0 given that the system has experienced the input defined by ω for all t < 0. Two useful facts about these sample measures are

(a) (F _{ − t,0;ω })_{*} μ→μ _{ ω } as t → ∞, where (F _{ − t,0;ω })_{*} μ is the measure obtained by transporting μ forward by F _{ − t,0;ω }, and

(b) the family {μ _{ ω }} is invariant in the sense that \((F_{0,t;\omega})_*(\mu_\omega) = \mu_{\sigma_t(\omega)}\), where σ _{ t } (ω) is the time-shift of the sample path ω by t.
Thus in the context of a reliability study, if our initial distribution is given by a probability density ρ and we apply the stimulus corresponding to ω, then the distribution at time t is (F _{0,t;ω })_{*} ρ. For t sufficiently large, one expects in most situations that (F _{0,t;ω })_{*} ρ is very close to (F _{0,t;ω })_{*} μ, which by (a) above is essentially given by \(\mu_{\sigma_t(\omega)}\). The time-shift of ω by t is necessary because, by definition, μ _{ ω } is the conditional distribution of μ at time 0.
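This flow-map viewpoint can be made concrete numerically: evolve a whole ensemble of initial conditions under one shared realization ω of the noise, i.e., apply \(F_{0,t;\omega}\) to an initial distribution. In the sketch below, the scalar circle SDE and the phase response function `z` are hypothetical stand-ins chosen for illustration (they are not Eq. (1) or Eq. (7)); the point is the collapse of the push-forward distribution expected in a reliable regime.

```python
import numpy as np

# Sketch of the flow-map viewpoint: apply F_{0,t;omega} to a whole ensemble
# of initial conditions by evolving them under ONE shared noise realization.
# The scalar circle SDE and the phase response z are hypothetical stand-ins
# for illustration; they are not Eq. (1) or Eq. (7) of the text.

rng = np.random.default_rng(2)

def z(th):  # illustrative type-I-like phase response
    return 1.0 - np.cos(th)

dt, T, eps, freq = 1e-3, 200.0, 0.5, 1.0
th = rng.uniform(0.0, 2.0 * np.pi, size=500)   # ensemble of initial phases
for dW in rng.normal(0.0, np.sqrt(dt), size=int(T / dt)):
    # every ensemble member receives the SAME increment dW (one omega)
    th = (th + (freq + z(th)) * dt + eps * z(th) * dW) % (2.0 * np.pi)

# When lambda_max < 0, (F_{0,t;omega})_* rho collapses toward a point mass
# (a random sink); the circular concentration R = |mean(exp(i*theta))|
# then approaches 1.
R = float(np.abs(np.exp(1j * th).mean()))
print(R)
```

Repeating the experiment with a different noise seed moves the collapse point (the sample measure μ _{ ω } depends on ω), but the concentration itself persists.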
1.2 A.2 Lyapunov exponents of random dynamical systems (see e.g. Arnold 2003)
The fact that the evolution of systems described by (7) can be represented as compositions of random, i.i.d., smooth maps allows us to tap into a large part of dynamical systems theory, namely the theory of random dynamical systems (RDS). Many of the techniques for analyzing smooth deterministic systems have been extended to this random setting, including the notion of Lyapunov exponents. For the stochastic flows above, the largest Lyapunov exponent is defined to be

$$\lambda_{\max}(x,\omega) \;=\; \lim_{t\to\infty} \frac{1}{t}\,\log \big\| D_x F_{0,t;\omega} \big\|\,.$$
These numbers are known to be defined for μ-a.e. x ∈ M and a.e. ω. Moreover, they are nonrandom, i.e., they do not depend on ω, and when μ is ergodic, λ _{max} does not depend on x either; i.e., λ _{max} is equal to a single (fixed) number for almost every initial condition in the phase space and for almost every sample path.
Numerical calculation of Lyapunov exponents
Lyapunov exponents can be computed numerically by solving the variational equations associated with Eq. (7); the largest Lyapunov exponent λ _{max} is given by the logarithmic growth rate of a typical tangent vector. This is what we have done in this paper, using the Euler method for SDEs to solve the variational equations.
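The procedure can be sketched in a few lines. The model below is an illustrative scalar phase SDE on the circle with a type-I-like phase response z(θ) = 1 − cos θ, not the paper's Eq. (1); it shows the mechanics of integrating the variational equation alongside the state and renormalizing the tangent vector.

```python
import numpy as np

# Hedged sketch, NOT the paper's Eq. (1): estimate lambda_max for a scalar
# phase SDE on the circle,
#     d(theta) = [omega + z(theta)] dt + eps * z(theta) dW_t   (Ito),
# with an illustrative phase response z(theta) = 1 - cos(theta). The
# variational equation dv = z'(theta) v dt + eps * z'(theta) v dW_t is
# integrated alongside by the Euler method for SDEs; lambda_max is the
# time-averaged logarithmic growth rate of the tangent vector v.

rng = np.random.default_rng(0)

def z(th):       # phase response curve (illustrative choice)
    return 1.0 - np.cos(th)

def dz(th):      # derivative of z
    return np.sin(th)

def lyapunov_max(omega=1.0, eps=0.5, dt=1e-3, T=1000.0):
    n = int(T / dt)
    dWs = rng.normal(0.0, np.sqrt(dt), size=n)
    th, v, log_growth = 0.1, 1.0, 0.0
    for dW in dWs:
        dth = (omega + z(th)) * dt + eps * z(th) * dW
        v += dz(th) * v * dt + eps * dz(th) * v * dW
        th = (th + dth) % (2.0 * np.pi)
        log_growth += np.log(abs(v))  # accumulate growth, then renormalize
        v = 1.0 if v > 0 else -1.0
    return log_growth / T

print(lyapunov_max())
```

For this choice of model and parameters the estimate comes out small and negative, consistent with a reliable (trajectory-collapsing) regime; a positive value would instead signal unreliability.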
1.3 A.3 Implications of the sign of λ _{max}
The two results below are in fact valid in greater generality, but let us restrict ourselves to the SDE setting at the beginning of this subsection.
Theorem 1
Let μ be an ergodic stationary measure of the RDS defined by Eq. (7).

(1)
(Random sinks) (Le Jan 1987) If λ _{ max } < 0, then for a.e. ω , μ _{ ω } is supported on a finite set of points.

(2)
(Random strange attractors) (Ledrappier and Young 1988) If μ has a density and λ _{ max } > 0, then for a.e. ω , μ _{ ω } is a random SRB measure.
In Part (1) of the theorem, if in addition to λ _{ max } < 0, mild conditions (on the relative motions of two points) are assumed, then almost surely μ _{ ω } is supported on a single point (Baxendale 1992). From the discussion above, μ _{ ω } being supported on a single point corresponds to the collapse of trajectories starting from almost all initial conditions to a single trajectory. In the context of Eq. (1), this is exactly what it means for the system to be neuronally reliable, as explained in Section 3.1.
The conclusion of Part (2) requires clarification: In deterministic dynamical systems theory, SRB measures are natural invariant measures that describe the asymptotic dynamics of chaotic dissipative systems (in the same way that Liouville measures are the natural invariant measures for Hamiltonian systems). SRB measures are typically singular. They are concentrated on unstable manifolds, which are families of curves, surfaces, etc. that wind around in a complicated way in the phase space (Eckmann and Ruelle 1985). Part (2) of Theorem 1 generalizes these ideas to random dynamical systems. Here, random (meaning ω-dependent) SRB measures live on random unstable manifolds, which are complicated families of curves, surfaces, etc. that evolve with time. In particular, in a system with random SRB measures, different initial conditions lead to very different outcomes at time t when acted on by the same stimulus; this is true for all t > 0, however large. In the context of a reliability study, therefore, it is natural to regard the distinctive geometry of random SRB measures as a signature of unreliability.
We do not claim here that, mathematically, the results in Theorem 1 apply to Eq. (1). To formally apply these results, conditions of ergodicity, existence of an invariant density, etc., have to be verified. Evidence—both analytic and numerical—points to an affirmative answer when the coupling constants a, a _{ i }, a _{ff}, etc., are nonzero.
Appendix B A 2D toy model of twolayer networks
We provide here more detail on how two-neuron models can be used to shed light on two-layer networks, as suggested in Section 5.2. Specifically, we will explain how the shapes of the phase distributions P _{12} and P _{21} are predicted.
Consider a system consisting of two neurons whose dynamics obey Eq. (1). For definiteness, we set ω _{1} = ω _{2} = 1, A _{ff} = 2.8, and A _{fb} = 0.8 to mimic the parameters in the two-layer networks considered in Section 5.2, with neurons 1 and 2 representing layers 1 and 2 in the two-layer system. The phase space of this system is the 2-torus, which we identify with the square [0,1]^{2} with periodic boundary conditions; the coordinates are denoted by (θ _{1},θ _{2}). In Fig. 12, we show a few trajectories of the undriven system, i.e., with ε = 0. Away from the edges, they are northeasterly with slope 1; near the edges, they are bent due to the coupling. We now turn on the stimulus, setting ε = 2.5 as in Section 5.2. Because only neuron 1 hears the stimulus, the stimulus perturbs trajectories only in the θ _{1} direction. When the stimulus is turned on, trajectories will, for the most part, continue to go in roughly the same directions as those shown in Fig. 12, but they become “wriggly”, driven randomly to the left and right by the white-noise stimulus.
We will refer to the top and bottom edges of the square (which are identified with each other) as Σ. Every time a trajectory crosses Σ, neuron 2 spikes, and the location in Σ tells us the phase of neuron 1 when this spiking occurs. We view Σ as a cross-section to the flow, and consider the induced return map Φ: Σ→Σ. In the case of two neurons with feedback, the distribution of trajectories of Φ on Σ tells us the phase distribution of neuron 1 when it receives a synaptic input from neuron 2. In our analogy with the two-layer system, this is the analog of P _{21}. Similarly, the distribution of returns to the left and right edges identified with each other (which we call Σ′) represents the phases of neuron 2 when it receives an input from neuron 1, i.e., the distribution analogous to P _{12}.
To understand these distributions, let us view the return dynamics to Σ as the result of two “moves” in succession (this is not entirely accurate but will suffice for present purposes): The first is the return map for the flow with ε = 0, and the second is a “smearing”, obtained by, e.g., taking a convolution with a Gaussian, to simulate the perturbations experienced by the trajectory between returns to Σ.
The return dynamics of the undriven flow are very simple: From the geometry of the flowlines, one sees that starting from any point in Σ, there is a leftward displacement due to the fact that the upward kicks along the vertical edges are stronger than the rightward kicks along the horizontal edges (i.e., A _{ff} > A _{fb}). This leftward displacement is quite substantial away from θ _{1} = 0, reaching a maximum at θ _{1} ≈ 0.75. Because of the character of the phase response function, this displacement is very small (but strictly positive) near θ _{1} ≈ 0. It is so weak there that with ε = 0, all trajectories spend most of their time near the diagonal, with only brief excursions in between. In other words, when ε = 0, the phase distributions on Σ peak sharply at θ _{1} = 0.
With ε = 2.5, the “smearing” is nontrivial. Immediately, one sees that it causes the distribution to be more spread out. It is also easy to see that some concentration near θ _{1} = 0 will be retained, only that the peak will be more rounded. We now explain why one should expect the peak to be shifted to the right: Suppose we start with a roughly constant distribution on an interval centered at θ _{1} = 0. Since the return map is nearly the identity in this region, we may assume it does not change this distribution substantially. Next we take a convolution, which causes the distribution to have wider support. Now the part of the distribution that is pushed to the right of the original interval will get pushed back in when we apply the return map of the undriven flow again, due to the leftward displacement discussed earlier, whereas the part that is pushed to the left will be pushed further away from θ _{1} = 0 by the ε = 0 dynamics. The result is an obvious asymmetry in the distribution, one that is reinforced in subsequent iterations. (The argument we have presented does not constitute a proof, but a proof is probably not out of reach.)
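The two-move iteration above can be mimicked numerically. In the sketch below, the leftward-displacement profile of the return map is hypothetical, chosen only to nearly vanish near θ _{1} = 0 and be substantial in the interior, as described in the text; the smearing is a Gaussian convolution. Iterating an ensemble of phases then exhibits the rounded, right-skewed peak near θ _{1} = 0 argued for above.

```python
import numpy as np

# Numerical sketch of the "two moves" iteration. The leftward-displacement
# profile below is hypothetical: it nearly vanishes near theta_1 = 0 (mod 1)
# and is substantial in the interior, qualitatively matching the undriven
# return map described in the text; the smearing is a Gaussian convolution.

rng = np.random.default_rng(1)

def return_map(th):
    # hypothetical leftward displacement, ~0 near th = 0 and th = 1
    return (th - 0.15 * np.sin(np.pi * th) ** 2) % 1.0

th = rng.uniform(0.0, 1.0, size=100_000)  # ensemble of phases on Sigma
for _ in range(200):
    th = return_map(th)                                       # move 1
    th = (th + rng.normal(0.0, 0.03, size=th.size)) % 1.0     # move 2

# Expect a rounded peak near theta_1 = 0, skewed to the right: mass kicked
# rightward is pushed back toward 0, mass kicked leftward drifts away.
hist, edges = np.histogram(th, bins=50, range=(0.0, 1.0), density=True)
peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
right_mass = float(np.mean((th > 0.0) & (th < 0.25)))
left_mass = float(np.mean(th > 0.75))
print(peak, right_mass > left_mass)
```

The qualitative outcome (peak near 0, more mass to its right than to its left) is insensitive to the exact displacement profile, which is the point of the argument in the text.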
Once we have convinced ourselves of the geometry of P _{21}, it is easy to see from the northeasterly direction of the flowlines that if P _{21} peaks to the right of θ _{1} = 0 on Σ, then P _{12} must peak just below θ _{2} = 0 on Σ′. This completes the explanation promised in Section 5.2.
We finish with a couple of remarks that may be illuminating:

(1) The phenomenon described above occurs for both excitatory and inhibitory couplings: with A _{ff} < 0 and A _{fb} < A _{ff}, the displacement of the return map Φ: Σ→Σ is to the right. But in the inhibitory situation, there are other forces shaping the phase distribution, making the picture there more complicated. (In case the reader wonders how to interpret our reasoning in the pure-feedforward case: statements about phase distributions on Σ are valid, except that spikings of neuron 2 do not impact neuron 1!)

(2) We may also represent the single-layer system by two neurons. Here by far the biggest difference is that both neurons receive the stimulus in the same way, and that translates into perturbations that are in the direction of the diagonal. Such perturbations are not very effective in spreading out distributions, especially when the trajectories are concentrated near the diagonal. These observations provide a geometric understanding for the material in Section 4.2.
A detailed analysis of the twoneuron model with different emphasis is carried out in Lin et al. (2009a).
Lin, K.K., Shea-Brown, E. & Young, L.-S. Spike-time reliability of layered neural oscillator networks. J Comput Neurosci 27, 135–160 (2009). https://doi.org/10.1007/s10827-008-0133-3