Abstract
We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.
References
Amit, D., & Brunel, N. (1997a). Dynamics of a recurrent network of spiking neurons before and following learning. Network, 8, 373–404.
Amit, D., & Brunel, N. (1997b). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183–208.
Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7), 1621–1671.
Brunel, N., & Latham, P. (2003). Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Computation, 15, 2281–2306.
Deger, M., Helias, M., Boucsein, C., Rotter, S. (2012). Statistical properties of superimposed stationary spike trains. Journal of Computational Neuroscience, 32(3), 443–463.
Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8, 979–1001.
Ermentrout, B., & Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM Journal on Applied Mathematics, 46, 233–253.
Gutkin, B., & Ermentrout, B. (1998). Dynamics of membrane excitability determine interspike interval variability: a link between spike generation mechanisms and cortical spike train statistics. Neural Computation, 10, 1047–1065.
Hansel, D., & Mato, G. (2001). Existence and stability of persistent states in large neuronal networks. Biophysical Reviews and Letters, 86, 4175–4178.
Hertz, J. (2010). Cross-correlations in high-conductance states of a model cortical network. Neural Computation, 22(2), 427–447.
Koch, C. (1998). Biophysics of computation: information processing in single neurons (Computational Neuroscience), 1st edn. Oxford University.
Latham, P. (2002). Associative memory in realistic neuronal networks. Advances in neural information processing systems (Vol. 14). Cambridge: MIT.
Latham, P., & Nirenberg, S. (2004). Computing and stability in cortical networks. Neural Computation, 16, 1385–1412.
Latham, P., Richmond, B., Nelson, P., Nirenberg, S. (2000a). Intrinsic dynamics in neuronal networks. I. theory. Journal of Neurophysiology, 83, 808–827.
Latham, P., Richmond, B., Nirenberg, S., Nelson, P. (2000b). Intrinsic dynamics in neuronal networks. II. experiment. Journal of Neurophysiology, 83, 828–835.
Lerchner, A., Sterner, G., Hertz, J., Ahmadi, M. (2006a). Mean field theory for a balanced hypercolumn model of orientation selectivity in primary visual cortex. Network, 17(2), 131–150.
Lerchner, A., Ursta, C., Hertz, J., Ahmadi, M., Ruffiot, P., Enemark, S. (2006b). Response variability in balanced cortical networks. Neural Computation, 18(3), 634–659.
Rappel, W.J., & Karma, A. (1996). Noise-induced coherence in neural networks. Physical Review Letters, 77(15), 3256–3259.
Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., Harris, K.D. (2010). The asynchronous state in cortical circuits. Science, 327(5965), 587–590.
Rice, S. (1954). Mathematical analysis of random noise. In Selected papers on noise and stochastic processes (pp. 130–294). Dover.
Roudi, Y., & Latham, P. (2007). A balanced memory network. PLoS Computational Biology, 3, 1679–1700.
Salinas, E. (2003). Background synaptic activity as a switch between dynamical states in a network. Neural Computation, 15, 1439–1475.
Shiino, M., & Fukai, T. (1992). Self-consistent signal-to-noise analysis and its application to analogue neural networks with asymmetric connections. Journal of Physics A, 25, L375–L381.
Shiino, M., & Fukai, T. (1993). Self-consistent signal-to-noise analysis of the statistical behavior of analog neural networks and enhancement of the storage capacity. Physical Review E, 48, 867–897.
Shriki, O., Hansel, D., Sompolinsky, H. (2003). Rate models for conductance-based cortical neuronal networks. Neural Computation, 15, 1809–1841.
Tuckwell, H. (1988). Introduction to theoretical neurobiology (Vol. 2). Cambridge: Cambridge University.
van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.
Walsh, J. (1981). A stochastic model of neural response. Advances in Applied Probability, 13, 231–281.
Wilson, H., & Cowan, J. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12, 1–24.
Acknowledgments
We thank Nicolas Brunel for helping initiate the project and for critical reading of the manuscript. We thank Peter Dayan for productive discussions. P.E.L. and A.G-B. were supported by the Gatsby Charitable Foundation. We also acknowledge the hospitality of the Kavli Institute for Theoretical Physics, where a portion of this work was performed.
Conflict of interest
The authors declare that they have no conflict of interest.
Action Editor: Brent Doiron
Appendices
Appendix A: Statistics of the synaptic drive
In the main text we approximated \(\bar{h}_{Li}\) as a Gaussian random variable with respect to the index, i, and the right hand side of Eq. (2.16) as Gaussian white noise. With this approximation, all we need are the variance of \(\bar{h}_{Li}\) and the covariance of the right hand side of Eq. (2.16). Here we compute those quantities.
We start with the variance of \(\bar{h}_{Li}\), Eq. (2.12). To isolate the index-independent and index-dependent terms, we write
where \(\epsilon J_{LM}\) is the population averaged value of \(J_{LM}^{ij}\) (see Eq. (2.7)) and \(\delta J_{LM}^{ij} \equiv J_{LM}^{ij} - \epsilon J_{LM}\) represents the index-dependent fluctuations around that average (sometimes referred to as the quenched noise). Making this substitution, using Eq. (2.15a) for the mean firing rate, and recalling that \(\epsilon = K_M/N_M\), Eq. (2.12) becomes
where \(h_L\) is given in Eq. (2.13b) and the sum is over M = E, I and X. The last term in this expression is the sum of a large number of variables. The weights inside the sum are truly random, so if the firing rates and the weights are sufficiently weakly correlated, this sum is a Gaussian random variable with respect to the index, i. Here we assume they are, although this is clearly an approximation: the firing rates, \(\nu_{Mj}\), are functions of the connection strengths, and so the variables inside the sum are not quite independent. In practice, however, this is a good approximation, especially if \(\epsilon\) (which is a measure of the sparseness of the connectivity; see Eq. (2.7)) is small, something that tends to reduce correlations. Given this approximation, and the fact that, by construction, the mean is zero, all we need is the variance. This variance (plus the variance of \(\delta \mu_{Li}\), which, by construction, is \(\Delta_{\mu_L}^{2}\); see Eq. (2.8)) is given by
where, as in Eq. (2.13a), we use \(\Delta^{2}_{h_{L}}\) for the total variance. When j ≠ j′ or M ≠ M′, in the large K limit the sum is approximately zero; when j = j′ and M = M′, the sum over i is just the variance of \(J_{LM}^{ij}\). Thus, using Eq. (2.7) for the variance of \(J_{LM}^{ij}\), Eq. (A.3) becomes, after a small amount of algebra,
where \(\nu_{M}^{2}\) is the second moment of the firing rate (Eq. (2.15b)).
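The central limit argument behind this Gaussian approximation is easy to verify numerically. The sketch below uses hypothetical parameters and draws the weight fluctuations independently of the rates, which is exactly the independence assumption made above; it checks that the quenched sum is approximately Gaussian across the index i, with variance \(N\,\mathrm{var}(J)\,\overline{\nu^2}\):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical check: with weight fluctuations dJ_ij drawn independently
# of the rates nu_j, the sum over j of dJ_ij * nu_j is, across the index
# i, approximately Gaussian with zero mean and variance N*var(J)*<nu^2>.
N = 2000                      # presynaptic neurons
var_J = 1.0 / N               # scale var(J) ~ 1/N so the sum stays O(1)
nu = rng.lognormal(0.0, 1.0, size=N)       # heterogeneous rates

# Non-Gaussian (uniform) weight fluctuations with variance var_J,
# so any Gaussianity of the sum comes from the CLT, not the inputs.
a = np.sqrt(3.0 * var_J)
dJ = rng.uniform(-a, a, size=(2000, N))    # dJ_ij, zero mean

s = dJ @ nu                                # one quenched sum per index i
predicted_var = N * var_J * np.mean(nu**2)

# Fraction of samples within one standard deviation; ~0.683 if Gaussian.
frac = np.mean(np.abs(s - s.mean()) < s.std())
print(s.mean(), s.var(), predicted_var, frac)
```

The empirical variance should match the prediction to within sampling error, and the one-standard-deviation fraction should sit near the Gaussian value of 0.683.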
We next compute the covariance of the right hand side of Eq. (2.16). Using \(C_{LL'}^{ii'}(\tau )\) to denote the covariance between neuron i of type L and neuron \(i'\) of type \(L'\) at times separated by τ, we have
The angle brackets represent an average over the distribution of spike times. Real neurons have a nontrivial correlational structure; if nothing else, there is a refractory period. However, we ignore that and make the approximation that the neurons are Poisson. In that case, as shown by Rice (1954), and as is relatively easy to derive, the average over the distribution of spikes yields
where \(\delta_{ij}\) is the Kronecker delta (\(\delta_{ij} = 1\) if \(i = j\) and 0 otherwise). Thus, Eq. (A.5) becomes
Assuming, as usual, that the connection strengths are approximately independent of the firing rates, we may average the connection strengths and firing rates separately. Using Eq. (2.7) for the distribution of connection strengths, we have
An important observation is that \(C_{LL'}^{ii'}(\tau )\) is nonzero even when \(i \neq i'\) and/or \(L \neq L'\). Thus, the driving terms for different neurons are correlated; this in turn implies that spike times are correlated across neurons. This would seem to imply that our independence approximation is badly violated. However, as shown by Renart et al. (2010) and Hertz (2010), for balanced networks operating in the asynchronous regime, correlations between excitatory and inhibitory neurons largely cancel, leaving the mean correlation on the order of \(1/N\). Thus, in large networks the independence approximation tends to work relatively well. This means we can focus on the autocorrelation, \(C_{LL}^{ii}\), which is somewhat simpler than the full covariance,
This expression leads to Eqs. (2.17) and (2.18).
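The delta-correlation of Poisson spike trains used in this derivation can be checked numerically. The sketch below (hypothetical parameters) bins a Poisson spike train and verifies that the autocovariance of the binned rate is \(\nu/\Delta t\) at zero lag and approximately zero at nonzero lags, the discrete counterpart of \(\nu\,\delta(\tau)\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: a Poisson train of rate nu, binned at width dt.
nu, dt, T = 10.0, 1e-3, 200.0
counts = rng.poisson(nu * dt, size=int(T / dt))
s = counts / dt - nu                  # mean-subtracted binned rate

def autocov(x, lag):
    """Empirical autocovariance of x at a given integer lag."""
    n = len(x) - lag
    return np.dot(x[:n], x[lag:]) / n

# Multiplying by dt turns the zero-lag bin, whose height scales as 1/dt,
# into the area under the delta function, which should equal nu.
c0 = autocov(s, 0) * dt    # should be ~ nu
c5 = autocov(s, 5) * dt    # should be ~ 0
print(c0, c5)
```

As dt shrinks, the zero-lag autocovariance grows as \(\nu/\Delta t\) while its area stays fixed at ν, which is the sense in which the Poisson train is delta-correlated.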
Appendix B: Transforming from the quadratic integrate and fire neuron to the θ-neuron
For quadratic integrate and fire neurons, action potentials are emitted when the voltage reaches + ∞, at which point the voltage is reset to − ∞. Integrating to infinity, however, poses a problem numerically. To get around this, we make the change of variables
This moves the points at \(V_{Li} = \pm \infty\) to \(\theta_{Li} = \pm \pi\), and also removes the singularities at \(\pm \infty\). Inserting this into Eq. (2.6a) we see that \(\theta_{Li}\) evolves according to
A spike is emitted when \(\theta_{Li} = \pi\), at which point it is reset to \(-\pi\).
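A minimal sketch of this scheme, assuming the canonical θ-neuron form \(d\theta/dt = (1-\cos\theta) + (1+\cos\theta)I\) with constant drive I (Eq. (2.6a) contains synaptic and noise terms omitted here); for suprathreshold constant drive the exact firing rate of the equivalent quadratic integrate and fire neuron is \(\sqrt{I}/\pi\), which the simulation should reproduce:

```python
import numpy as np

def theta_neuron_spikes(I, T=100.0, dt=1e-3, theta0=-np.pi):
    """Euler-integrate dtheta/dt = (1 - cos th) + (1 + cos th)*I.

    A spike is recorded when theta reaches pi, after which theta is
    reset to -pi.  Unlike the equivalent quadratic integrate and fire
    neuron, no variable ever diverges, so the integration is routine.
    """
    theta = theta0
    spikes = []
    for k in range(int(T / dt)):
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
        if theta >= np.pi:          # spike: the QIF voltage reached +infinity
            spikes.append(k * dt)
            theta = -np.pi          # reset: the QIF voltage jumps to -infinity
    return spikes

# With I = 1 the exact rate is sqrt(1)/pi ~ 0.318.
spikes = theta_neuron_spikes(I=1.0, T=100.0)
rate = len(spikes) / 100.0
print(rate)
```

The reset happens at a finite value of θ, which is the whole point of the change of variables: the infinite spike and reset voltages of the quadratic integrate and fire neuron become ordinary finite boundary conditions.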
Appendix C: White noise approximation to external input
To speed up the simulations, we use Gaussian white noise instead of actual spike trains for the external input (the term with M = X in Eq. (2.6b)). To do that, we make the replacement
where \(\eta_{LXi}\) is a zero mean, unit variance Gaussian random variable with respect to the index, i, \(\xi_{LXi}(t)\) is Gaussian white noise, and we assumed that all the external neurons have the same firing rate, \(\nu_X\) (which allowed us to replace \(\left(\nu_X^2\right)^{1/2}\) with \(\nu_X\)); see Eqs. (2.13b), (2.14) and (2.18).
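This diffusion approximation can be checked numerically: over a bin of width dt, a superposition of K Poisson inputs of rate \(\nu_X\) and weight J has mean \(JK\nu_X\,dt\) and variance \(J^2 K \nu_X\,dt\), and the Gaussian surrogate is built to reproduce both. A sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: K external inputs, each Poisson at rate nu_X,
# with synaptic weight J, binned at resolution dt over a total time T.
K, nu_X, J, dt, T = 1000, 5.0, 0.2, 1e-3, 50.0
nbins = int(T / dt)

# Exact input: a superposition of K independent Poisson trains, whose
# total count per bin is Poisson with mean K * nu_X * dt.
counts = rng.poisson(K * nu_X * dt, size=nbins)
drive_poisson = J * counts

# White-noise surrogate: same mean, and per-bin variance
# J**2 * K * nu_X * dt (Campbell's theorem for shot noise).
drive_gauss = (J * K * nu_X * dt
               + J * np.sqrt(K * nu_X * dt) * rng.standard_normal(nbins))

print(drive_poisson.mean(), drive_gauss.mean())  # both ~ J*K*nu_X*dt
print(drive_poisson.var(), drive_gauss.var())    # both ~ J**2*K*nu_X*dt
```

Because only the first two moments of the drive enter the mean field equations, matching them is sufficient; the surrogate is also far cheaper, since it avoids generating and delivering individual external spikes.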
Grabska-Barwińska, A., Latham, P.E. How well do mean field theories of spiking quadratic-integrate-and-fire networks work in realistic parameter regimes?. J Comput Neurosci 36, 469–481 (2014). https://doi.org/10.1007/s10827-013-0481-5