How well do mean field theories of spiking quadratic-integrate-and-fire networks work in realistic parameter regimes?

Abstract

We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.

References

  • Amit, D., & Brunel, N. (1997a). Dynamics of a recurrent network of spiking neurons before and following learning. Network, 8, 373–404.

  • Amit, D., & Brunel, N. (1997b). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.

  • Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183–208.

  • Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7), 1621–1671.

  • Brunel, N., & Latham, P. (2003). Firing rate of the noisy quadratic integrate-and-fire neuron. Neural Computation, 15, 2281–2306.

  • Deger, M., Helias, M., Boucsein, C., Rotter, S. (2012). Statistical properties of superimposed stationary spike trains. Journal of Computational Neuroscience, 32(3), 443–463.

  • Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8, 979–1001.

  • Ermentrout, B., & Kopell, N. (1986). Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM Journal on Applied Mathematics, 46, 233–253.

  • Gutkin, B., & Ermentrout, B. (1998). Dynamics of membrane excitability determine interspike interval variability: a link between spike generation mechanisms and cortical spike train statistics. Neural Computation, 10, 1047–1065.

  • Hansel, D., & Mato, G. (2001). Existence and stability of persistent states in large neuronal networks. Physical Review Letters, 86, 4175–4178.

  • Hertz, J. (2010). Cross-correlations in high-conductance states of a model cortical network. Neural Computation, 22(2), 427–447.

  • Koch, C. (1998). Biophysics of computation: information processing in single neurons (Computational Neuroscience), 1st edn. Oxford University Press.

  • Latham, P. (2002). Associative memory in realistic neuronal networks. In Advances in neural information processing systems (Vol. 14). Cambridge: MIT Press.

  • Latham, P., & Nirenberg, S. (2004). Computing and stability in cortical networks. Neural Computation, 16, 1385–1412.

  • Latham, P., Richmond, B., Nelson, P., Nirenberg, S. (2000a). Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83, 808–827.

  • Latham, P., Richmond, B., Nirenberg, S., Nelson, P. (2000b). Intrinsic dynamics in neuronal networks. II. Experiment. Journal of Neurophysiology, 83, 828–835.

  • Lerchner, A., Sterner, G., Hertz, J., Ahmadi, M. (2006a). Mean field theory for a balanced hypercolumn model of orientation selectivity in primary visual cortex. Network, 17(2), 131–150.

  • Lerchner, A., Ursta, C., Hertz, J., Ahmadi, M., Ruffiot, P., Enemark, S. (2006b). Response variability in balanced cortical networks. Neural Computation, 18(3), 634–659.

  • Rappel, W.J., & Karma, A. (1996). Noise-induced coherence in neural networks. Physical Review Letters, 77(15), 3256–3259.

  • Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., Harris, K.D. (2010). The asynchronous state in cortical circuits. Science, 327(5965), 587–590.

  • Rice, S. (1954). Mathematical analysis of random noise. In Selected papers on noise and stochastic processes (pp. 130–294). Dover.

  • Roudi, Y., & Latham, P. (2007). A balanced memory network. PLoS Computational Biology, 3, 1679–1700.

  • Salinas, E. (2003). Background synaptic activity as a switch between dynamical states in a network. Neural Computation, 15, 1439–1475.

  • Shiino, M., & Fukai, T. (1992). Self-consistent signal-to-noise analysis and its application to analogue neural networks with asymmetric connections. Journal of Physics A, 25, L375–L381.

  • Shiino, M., & Fukai, T. (1993). Self-consistent signal-to-noise analysis of the statistical behavior of analog neural networks and enhancement of the storage capacity. Physical Review E, 48, 867–897.

  • Shriki, O., Hansel, D., Sompolinsky, H. (2003). Rate models for conductance-based cortical neuronal networks. Neural Computation, 15, 1809–1841.

  • Tuckwell, H. (1988). Introduction to theoretical neurobiology (Vol. 2). Cambridge: Cambridge University Press.

  • van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.

  • Walsh, J. (1981). A stochastic model of neural response. Advances in Applied Probability, 13, 231–281.

  • Wilson, H., & Cowan, J. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12, 1–24.

Acknowledgments

We thank Nicolas Brunel for helping initiate the project and for critical reading of the manuscript. We thank Peter Dayan for productive discussions. P.E.L. and A.G-B. were supported by the Gatsby Charitable Foundation. We also acknowledge the hospitality of the Kavli Institute for Theoretical Physics, where a portion of this work was performed.

Conflict of interest

The authors declare that they have no conflict of interest.

Author information

Corresponding author

Correspondence to Agnieszka Grabska-Barwińska.

Additional information

Action Editor: Brent Doiron

Appendices

Appendix A: Statistics of the synaptic drive

In the main text we approximated $\bar{h}_{Li}$ as a Gaussian random variable with respect to index, $i$, and the right hand side of Eq. (2.16) as Gaussian white noise. With this approximation, all we need are the variance of $\bar{h}_{Li}$ and the covariance of the right hand side of Eq. (2.16). Here we compute those quantities.

We start with the variance of $\bar{h}_{Li}$, Eq. (2.12). To isolate the index-independent and index-dependent terms, we write

$$ J_{LM}^{ij} = \epsilon J_{LM} + \delta J_{LM}^{ij} $$
(A.1)

where $\epsilon J_{LM}$ is the population averaged value of $J_{LM}^{ij}$ (see Eq. (2.7)) and $\delta J_{LM}^{ij} \equiv J_{LM}^{ij} - \epsilon J_{LM}$ represents the index-dependent fluctuations around that average (sometimes referred to as the quenched noise). Making this substitution, using Eq. (2.15a) for the mean firing rate, and recalling that $\epsilon = K_M/N_M$, Eq. (2.12) becomes

$$ \bar{h}_{Li} = {h_{L}} + \delta \mu_{Li} + \sum\limits_{M,j} \frac{\tau_m}{K_{M}^{1/2}} \, \delta J_{LM}^{ij} \, \nu_{Mj} $$
(A.2)

where $h_L$ is given in Eq. (2.13b) and the sum is over $M = E$, $I$ and $X$. The last term in this expression is the sum of a large number of variables. The weights inside the sum are truly random, so if the firing rates and the weights are sufficiently weakly correlated, this sum is a Gaussian random variable with respect to index, $i$. Here we assume they are, although this is clearly an approximation: the firing rates, $\nu_{Mj}$, are functions of the connection strengths, and so the variables inside the sum are not quite independent. However, in practice this is a good approximation, especially if $\epsilon$ (which is a measure of the sparseness of the connectivity; see Eq. (2.7)) is small, something that tends to reduce correlations. Given this approximation, and the fact that, by construction, the mean is zero, all we need is the variance. This variance (plus the variance of $\delta\mu_{Li}$, which, by construction, is $\Delta_{\mu_L}^{2}$; see Eq. (2.8)) is given by

$$ \Delta^{2}_{h_{L}} = \Delta_{\mu_{L}}^{2}+ \sum\limits_{M,M', j,j'} \frac{\tau_{m}^{2}}{(K_{M} K_{M'})^{1/2}}\, \nu_{Mj} \nu_{M'j'} \frac{1}{N_{L}} \sum\limits_i \delta J_{LM}^{ij} \delta J_{LM'}^{ij'} $$
(A.3)

where, as in Eq. (2.13a), we use $\Delta^{2}_{h_{L}}$ for the total variance. When $j \ne j'$ or $M \ne M'$, in the large $K$ limit the sum is approximately zero; when $j = j'$ and $M = M'$, the sum over $i$ is just the variance of $J_{LM}^{ij}$. Thus, using Eq. (2.7) for the variance of $J_{LM}^{ij}$, Eq. (A.3) becomes, after a small amount of algebra,

$$ \Delta^{2}_{h_{L}} = \Delta_{\mu_{L}}^{2}+ \sum\limits_M J_{LM}^{2} \left(1 + \Delta^{2} - \epsilon \right) \tau_{m}^{2} \nu_{M}^{2} $$
(A.4)

where \(\nu_{M}^{2}\) is the second moment of the firing rate (Eq. (2.15b)).
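To illustrate Eq. (A.4), the following minimal sketch estimates the variance of the quenched term in Eq. (A.2) by direct sampling and compares it to the closed-form connectivity term. It assumes (consistent with our reading of Eq. (2.7), though this detail is an assumption here) that a connection is present with probability $\epsilon$ and, when present, has strength $J(1 + \Delta z)$ with $z$ standard normal; all numerical values are illustrative.

```python
import numpy as np

# Monte Carlo check of the quenched-noise variance, Eq. (A.4), for a single
# population. Assumed connection statistics: present with probability eps,
# strength J*(1 + Delta*z) with z standard normal.
rng = np.random.default_rng(0)
N, eps, J, Delta, tau_m = 2000, 0.1, 1.0, 0.5, 0.02
K = eps * N
nu = rng.gamma(shape=2.0, scale=5.0, size=N)       # heterogeneous rates (Hz)

# Random weights and their quenched fluctuations dJ = J_ij - eps*J.
mask = rng.random((N, N)) < eps
Jij = mask * J * (1.0 + Delta * rng.standard_normal((N, N)))
dJ = Jij - eps * J

# Quenched part of the input, the last term of Eq. (A.2): one value per i.
h_quenched = (tau_m / np.sqrt(K)) * dJ @ nu

var_sim = h_quenched.var()
var_theory = J**2 * (1.0 + Delta**2 - eps) * tau_m**2 * np.mean(nu**2)
print(f"simulated {var_sim:.4f}  vs  Eq. (A.4) quenched term {var_theory:.4f}")
```

Under these assumptions the variance of a single weight is $\epsilon J^2(1+\Delta^2-\epsilon)$, which, after summing over the $N$ presynaptic neurons and using $\epsilon = K/N$, reproduces the connectivity term in Eq. (A.4).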

We next compute the covariance of the right hand side of Eq. (2.16). Using $C_{LL'}^{ii'}(\tau)$ to denote the covariance between neuron $i$ of type $L$ and neuron $i'$ of type $L'$ at times separated by $\tau$, we have

$$\begin{array}{@{}rcl@{}} C_{LL'}^{ii'}(\tau) &=& \sum\limits_{M,M', j,j'} \frac{\tau_{m}^{2} }{(K_{M} K_{M'})^{1/2}} \, J_{LM}^{ij} J_{L'M'}^{i'j'}\\ &&\times \left\langle \sum\limits_{l,l'} \left[\delta\left(t-t_{j}^{l}\right) - \nu_{Mj}\right] \left[\delta\left(t + \tau -t_{j'}^{l'}\right) - \nu_{M'j'}\right] \right\rangle. \end{array} $$
(A.5)

The angle brackets represent an average over the distribution of spike times. Real neurons have a nontrivial correlational structure; if nothing else, there is a refractory period. However, we ignore that and make the approximation that the neurons are Poisson. In that case, as shown by Rice (1954), and as is relatively easy to derive, the average over the distribution of spikes yields

$$ \left\langle \sum\limits_{l,l'} \left[\delta\left(t-t_{j}^{l}\right) - \nu_{Mj} \right] \left[\delta\left(t + \tau-t_{j'}^{l'}\right) - \nu_{M'j'}\right] \right\rangle = \nu_{Mj} \delta(\tau) \delta_{jj'} \delta_{MM'} $$
(A.6)

where $\delta_{ij}$ is the Kronecker delta ($\delta_{ij} = 1$ if $i = j$ and 0 otherwise). Thus, Eq. (A.5) becomes

$$ C_{LL'}^{ii'}(\tau) = \delta(\tau) \sum\limits_{M,j} \frac{\tau_{m}^{2}}{K_{M}} \, J_{LM}^{ij} J_{L'M}^{i'j} \nu_{Mj}. $$
(A.7)

Assuming, as usual, that the connection strengths are approximately independent of the firing rates, we may average the connection strengths and firing rates separately. Using Eq. (2.7) for the distribution of connection strengths, we have

$$\begin{array}{rll} C_{LL'}^{ii'}(\tau) &=& \delta(\tau)\tau_{m}^{2} \sum\limits_{M} \frac{\nu_{M}}{K_{M}}\, \sum\limits_j J_{LM}^{ij} J_{L'M}^{i'j} \\ &=&\delta(\tau)\tau_{m}^{2} \sum\limits_M J_{LM}^{2} \left[ \epsilon (1-\delta_{ii'} \delta_{LL'}) + \left(1+\Delta^{2}\right) \delta_{ii'} \delta_{LL'} \right] {\nu_{M}}. \end{array} $$
(A.8)

An important observation is that $C_{LL'}^{ii'}(\tau)$ is nonzero even when $i \ne i'$ and/or $L \ne L'$. Thus, the driving terms for different neurons are correlated; this in turn implies that spike times are correlated across neurons. This would seem to imply that our independence approximation is badly violated. However, as shown by Renart et al. (2010) and Hertz (2010), for balanced networks operating in the asynchronous regime, correlations between excitatory and inhibitory neurons largely cancel, leaving the mean correlation on the order of $1/N$. Thus, in large networks the independence approximation tends to work relatively well. This means we can focus on the autocorrelation, $C_{LL}^{ii}$, which is somewhat simpler than the full covariance,

$$ C_{LL}^{ii}(\tau) = \delta(\tau) \tau_{m}^{2} \sum_M J_{LM}^{2} \left(1+\Delta^{2}\right) {\nu_{M}}. $$
(A.9)

This expression leads to Eqs. (2.17) and (2.18).
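As a quick numerical sanity check of Eq. (A.9): integrating the autocorrelation over a small bin $dt$ predicts that the binned increment of the synaptic drive has variance $\tau_m^2 J^2 (1+\Delta^2)\,\nu\, dt$ for a single population. The sketch below verifies this for Poisson inputs, under the same assumed connection statistics as the previous sketch; all values are illustrative.

```python
import numpy as np

# Binned-increment check of Eq. (A.9), single population. The delta-function
# autocorrelation implies variance tau_m^2 * J^2 * (1 + Delta^2) * nu * dt
# per bin of width dt.
rng = np.random.default_rng(1)
K, J, Delta, tau_m = 500, 1.0, 0.5, 0.02
nu, dt, nbins = 10.0, 1e-3, 20000

counts = rng.poisson(nu * dt, size=(nbins, K))        # spikes per bin, K inputs
weights = J * (1.0 + Delta * rng.standard_normal(K))  # realized strengths

# Binned increment of the drive: (tau_m / sqrt(K)) * sum_j J_j * n_j(t).
increments = (tau_m / np.sqrt(K)) * counts @ weights

var_sim = increments.var()
var_theory = tau_m**2 * J**2 * (1.0 + Delta**2) * nu * dt
print(f"simulated {var_sim:.3e}  vs  Eq. (A.9) prediction {var_theory:.3e}")
```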

Appendix B: Transforming from the quadratic integrate and fire neuron to the θ-neuron

For quadratic integrate and fire neurons, action potentials are emitted when the voltage reaches $+\infty$, at which point the voltage is reset to $-\infty$. Integrating to infinity, however, poses a problem numerically. To get around this, we make the change of variables

$$ V_{Li} = \bar{v} + (V_{th} - V_r) \tan(\theta_{Li}/2). $$
(B.1)

This moves the points at $V_{Li} = \pm\infty$ to $\theta_{Li} = \pm\pi$, and also removes the singularities at $\pm\infty$. Inserting this into Eq. (2.6a) we see that $\theta_{Li}$ evolves according to

$$\tau_m \frac{d \theta_{Li}}{dt} = (1-\cos \theta_{Li}) + (1 + \cos \theta_{Li})(\mu_L + \mu_{Li} + h_{Li}). $$
(B.2)

A spike is emitted when $\theta_{Li} = \pi$, at which point it is reset to $-\pi$.
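As a concrete illustration, here is a minimal forward-Euler integration of Eq. (B.2) for one neuron with constant suprathreshold drive; the drive value, time step, and membrane time constant are illustrative, not the parameters used in the paper.

```python
import numpy as np

# Minimal forward-Euler integration of the theta-neuron dynamics, Eq. (B.2),
# with a constant drive mu standing in for mu_L + mu_Li + h_Li.
tau_m, dt, T = 0.01, 1e-4, 1.0           # seconds
mu = 0.5                                  # mu > 0: repetitive firing
theta, spike_times = -np.pi, []

for step in range(int(T / dt)):
    dtheta = ((1 - np.cos(theta)) + (1 + np.cos(theta)) * mu) / tau_m
    theta += dt * dtheta
    if theta >= np.pi:                    # spike at theta = pi ...
        spike_times.append(step * dt)
        theta -= 2 * np.pi                # ... and wrap back to -pi

print(f"{len(spike_times)} spikes in {T} s")
```

For constant drive $\mu > 0$, Eq. (B.2) can be integrated in closed form to give a firing rate of $\mu^{1/2}/(\pi \tau_m)$, about 22.5 Hz for the values above, which the simulation should approximately reproduce.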

Appendix C: White noise approximation to external input

To speed up the simulations, we use Gaussian white noise instead of actual spike trains for the external input (the term with M = X in Eq. (2.6b)). To do that, we make the replacement

$$\begin{array}{@{}rcl@{}} &&\frac{\tau_m}{K_{X}^{1/2}}\sum\limits_{j,l} J_{LX}^{ij} \, \delta\left(t-t_{Xj}^{l}\right)\rightarrow \; \\ &&J_{LX} \tau_m \overline{\nu}_{X}\left( K_{X}^{1/2} + ( 1 + \Delta^{2} - \epsilon )^{1/2} \eta_{LXi} + \left[\frac{1 + \Delta^{2}}{\tau_m \overline{\nu}_{X}} \right]^{1/2} \xi_{LXi}(t) \right) \end{array} $$
(C.1)

where $\eta_{LXi}$ is a zero mean, unit variance Gaussian random variable with respect to index, $i$, $\xi_{LXi}(t)$ is Gaussian white noise, and we assumed that all the external neurons have the same firing rate, $\overline{\nu}_X$ (which allowed us to replace $\left(\nu_X^{2}\right)^{1/2}$ with $\overline{\nu}_X$); see Eqs. (2.13b), (2.14) and (2.18).
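For concreteness, a sketch of how the replacement in Eq. (C.1) might look in a discretized simulation: each neuron receives a constant mean term, a frozen per-neuron offset $\eta_{LXi}$ drawn once, and temporal white noise $\xi_{LXi}(t)$ redrawn each step, scaled so that its integral over a step has variance $dt$. All parameter values are illustrative.

```python
import numpy as np

# Sketch of the external-input replacement, Eq. (C.1), per time step.
rng = np.random.default_rng(2)
N, K_X = 1000, 800
J_LX, Delta, eps = 0.2, 0.5, 0.1
tau_m, nu_X, dt = 0.02, 10.0, 1e-4

eta = rng.standard_normal(N)              # quenched: fixed for the whole run

def external_drive_step(dt):
    """External input to each neuron, integrated over one time step dt."""
    xi = rng.standard_normal(N) / np.sqrt(dt)   # discretized white noise
    return J_LX * tau_m * nu_X * (
        np.sqrt(K_X)
        + np.sqrt(1.0 + Delta**2 - eps) * eta
        + np.sqrt((1.0 + Delta**2) / (tau_m * nu_X)) * xi
    ) * dt

drive = external_drive_step(dt)
print(f"mean {drive.mean():.3e}, std {drive.std():.3e}")
```

This avoids drawing $K_X$ Poisson spike trains per neuron on every time step, which is where the speedup comes from.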

About this article

Cite this article

Grabska-Barwińska, A., & Latham, P.E. (2014). How well do mean field theories of spiking quadratic-integrate-and-fire networks work in realistic parameter regimes? Journal of Computational Neuroscience, 36, 469–481. https://doi.org/10.1007/s10827-013-0481-5
