
Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks

Published in the Journal of Computational Neuroscience.

Abstract

Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand, homogeneous dynamics characterized by weak correlations between neurons, and on the other, total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous “multiple firing events” (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.


Notes

  1. Donsker’s theorem states that the fluctuations of an empirical CDF about its theoretical CDF converge to Gaussian random variables with zero mean and certain variance. The sequence of independent Gaussian random variables can be formulated in terms of a standard Brownian bridge, a continuous-time stochastic process on the unit interval, conditioned to begin and end at zero.
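As a numerical illustration of the theorem (our own sketch, not part of the paper): for Uniform(0, 1) samples, the scaled empirical-CDF fluctuation \(\sqrt{n}(F_n(t)-t)\) at a fixed t should have mean zero and variance t(1 − t), the variance of a Brownian bridge at t:

```python
import numpy as np

# Numerical illustration of Donsker's theorem for Uniform(0,1) samples:
# the scaled fluctuation sqrt(n) * (F_n(t) - t) of the empirical CDF
# should have mean 0 and variance t*(1 - t), like a Brownian bridge.
rng = np.random.default_rng(0)
n, reps, t = 2000, 2000, 0.5

samples = rng.uniform(size=(reps, n))
F_n_at_t = (samples <= t).mean(axis=1)   # empirical CDF at t, one value per replicate
fluct = np.sqrt(n) * (F_n_at_t - t)      # Donsker-scaled fluctuations

print(fluct.mean())   # close to 0
print(fluct.var())    # close to t*(1-t) = 0.25
```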

References

  • Amari, S. (1974). A method of statistical neurodynamics. Kybernetik, 14, 201–215.

  • Amit, D., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.

  • Anderson, J., Carandini, M., & Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. Journal of Neurophysiology, 84, 909–926.

  • Battaglia, D., & Hansel, D. (2011). Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex. PLoS Computational Biology, 7(10), e1002176.

  • Benayoun, M., Cowan, J., van Drongelen, W., & Wallace, E. (2010). Avalanches in a stochastic model of spiking neurons. PLoS Computational Biology, 6(7), e1000846.

  • Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J., Diesmann, M., Morrison, A., Goodman, P., Harris Jr., F., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. Journal of Computational Neuroscience, 23(3), 349–398.

  • Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.

  • Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11, 1621–1671.

  • Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926–1929.

  • Cai, D., Rangan, A., & McLaughlin, D. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. Proceedings of the National Academy of Sciences, 102(16), 5868–5873.

  • Cai, D., Tao, L., & Rangan, A. (2006). Kinetic theory for neuronal network dynamics. Communications in Mathematical Sciences, 4, 97–127.

  • Cai, D., Tao, L., Shelley, M., & McLaughlin, D. (2004). An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proceedings of the National Academy of Sciences, 101(20), 7757–7762.

  • Cardanobile, S., & Rotter, S. (2010). Multiplicatively interacting point processes and applications to neural modeling. Journal of Computational Neuroscience, 28, 267–284.

  • Churchland, M.M., et al. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience, 13(3), 369–378.

  • DeVille, R., & Peskin, C. (2008). Synchrony and asynchrony in a fully stochastic neural network. Bulletin of Mathematical Biology, 70(6), 1608–1633.

  • DeWeese, M., & Zador, A. (2006). Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. Journal of Neuroscience, 26(47), 12206–12218.

  • Donsker, M. (1952). Justification and extension of Doob's heuristic approach to the Kolmogorov–Smirnov theorems. Annals of Mathematical Statistics, 23(2), 277–281.

  • Durbin, J. (1985). The first-passage density of a continuous Gaussian process to a general boundary. Journal of Applied Probability, 22, 99–122.

  • Durbin, J., & Williams, D. (1992). The first-passage density of the Brownian motion process to a curved boundary. Journal of Applied Probability, 29, 291–304.

  • Eggert, J., & van Hemmen, J. (2001). Modeling neuronal assemblies: theory and implementation. Neural Computation, 13, 1923–1974.

  • Fusi, S., & Mattia, M. (1999). Collective behavior of networks with linear integrate and fire neurons. Neural Computation, 11, 633–652.

  • Gerstner, W. (1995). Time structure of the activity in neural network models. Physical Review E, 51, 738–758.

  • Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states and locking. Neural Computation, 12, 43–89.

  • Hansel, D., & Sompolinsky, H. (1996). Chaos and synchrony in a model of a hypercolumn in visual cortex. Journal of Computational Neuroscience, 3, 7–34.

  • Knight, B. (1972). Dynamics of encoding in a population of neurons. Journal of General Physiology, 59, 734–766.

  • Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., & Rotter, S. (2008). Correlations and population dynamics in cortical networks. Neural Computation, 20, 2185–2226.

  • Krukowski, A., & Miller, K. (2000). Thalamocortical NMDA conductances and intracortical inhibition can explain cortical temporal tuning. Nature Neuroscience, 4, 424–430.

  • Lampl, I., Reichova, I., & Ferster, D. (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.

  • Lei, H., Riffell, J., Gage, S., & Hildebrand, J. (2009). Contrast enhancement of stimulus intermittency in a primary olfactory network and its behavioral significance. Journal of Biology, 8, 21.

  • Mazzoni, A., Broccard, F., Garcia-Perez, E., Bonifazi, P., Ruaro, M., & Torre, V. (2007). On the dynamics of the spontaneous activity in neuronal networks. PLoS One, 2(5), e439.

  • Murthy, A., & Humphrey, A. (1999). Inhibitory contributions to spatiotemporal receptive-field structure and direction selectivity in simple cells of cat area 17. Journal of Neurophysiology, 81, 1212–1224.

  • Newhall, K., Kovačič, G., Kramer, P., & Cai, D. (2010). Cascade-induced synchrony in stochastically driven neuronal networks. Physical Review E, 82, 041903.

  • Nykamp, D., & Tranchina, D. (2000). A population density approach that facilitates large scale modeling of neural networks: analysis and application to orientation tuning. Journal of Computational Neuroscience, 8, 19–50.

  • Omurtag, A., Knight, B., & Sirovich, L. (2000). On the simulation of a large population of neurons. Journal of Computational Neuroscience, 8, 51–63.

  • Petermann, T., Thiagarajan, T., Lebedev, M., Nicolelis, M., Chialvo, D., & Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences, 106, 15921–15926.

  • Rangan, A., & Cai, D. (2007). Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 22(1), 81–100.

  • Rangan, A., & Young, L. (2012). A network model of V1 with collaborative activity. PNAS, submitted.

  • Rangan, A., & Young, L. (2013a). Dynamics of spiking neurons: between homogeneity and synchrony. Journal of Computational Neuroscience, 34(3), 433–460. doi:10.1007/s10827-012-0429-1.

  • Rangan, A., & Young, L. (2013b). Emergent dynamics in a model of visual cortex. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0445-9.

  • Renart, A., Brunel, N., & Wang, X. (2004). Mean field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In Computational Neuroscience: A Comprehensive Approach.

  • Riffell, J., Lei, H., & Hildebrand, J. (2009). Neural correlates of behavior in the moth Manduca sexta in response to complex odors. Proceedings of the National Academy of Sciences, 106, 19219–19226.

  • Riffell, J., Lei, H., Christensen, T., & Hildebrand, J. (2009). Characterization and coding of behaviorally significant odor mixtures. Current Biology, 19, 335–340.

  • Samonds, J., Zhou, Z., Bernard, M., & Bonds, A. (2005). Synchronous activity in cat visual cortex encodes collinear and cocircular contours. Journal of Neurophysiology, 95, 2602–2616.

  • Sillito, A. (1975). The contribution of inhibitory mechanisms to the receptive field properties of neurons in the striate cortex of the cat. Journal of Physiology, 250, 305–329.

  • Singer, W. (1999). Neuronal synchrony: a versatile code for the definition of relations? Neuron, 24, 49–65.

  • Sompolinsky, H., & Shapley, R. (1997). New perspectives on the mechanisms for orientation selectivity. Current Opinion in Neurobiology, 7, 514–522.

  • Sun, Y., Zhou, D., Rangan, A., & Cai, D. (2010). Pseudo-Lyapunov exponents and predictability of Hodgkin–Huxley neuronal network dynamics. Journal of Computational Neuroscience, 28, 247–266.

  • Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. Network, 4, 259–284.

  • Wilson, H., & Cowan, J. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12, 1–24.

  • Wilson, H., & Cowan, J. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13, 55–80.

  • Wörgötter, F., & Koch, C. (1991). A detailed model of the primary visual pathway in the cat: comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity. Journal of Neuroscience, 11, 1959–1979.

  • Yu, Y., & Ferster, D. (2010). Membrane potential synchrony in primary visual cortex during sensory stimulation. Neuron, 68, 1187–1201.

  • Yu, S., Yang, H., Nakahara, H., Santos, G., Nikolic, D., & Plenz, D. (2011). Higher-order interactions characterized in cortical activity. Journal of Neuroscience, 31, 17514–17526.

  • Zhang, J., Rangan, A., Cai, D., et al. (in preparation). A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony.

  • Zhou, D., Sun, Y., Rangan, A., & Cai, D. (2008). Network induced chaos in integrate-and-fire neuronal ensembles. Physical Review E, 80(3), 031918.


Acknowledgments

The authors would like to thank David Cai for useful discussions. J. Z. is partially supported by NSF grant DMS-1009575, K. N. is supported by the Courant Institute. D. Z. is supported by Shanghai Pujiang Program (Grant No. 10PJ1406300), NSFC (Grant No. 11101275 and No. 91230202), as well as New York University Abu Dhabi Research Grant G1301. A. R. is supported by NSF Grant DMS-0914827.

Corresponding author

Correspondence to Katherine Newhall.


Action Editor: Gaute T. Einevoll

Appendixes

1.1 Appendix A: Firing events in I&F networks

In order to accurately simulate the dynamics of Eq. (1a), we need to resolve the neurons’ firing sequence correctly despite the fact that, within a single MFE, all participating neurons fire at the same instant of time. The algorithm used must produce an MFE magnitude consistent with the cascade condition (6) in Sec. 3.1. We present one such algorithm here, in which the neuron with the highest voltage fires first and the voltages of the remaining neurons are then updated. We also emphasize that a neuron that has already fired in an MFE does not receive input from neurons firing after it within the same MFE; because the neurons have a short refractory period, no neuron can fire more than once in an MFE.

To resolve the firing sequence of an MFE, we start from the sets \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) of the excitatory and inhibitory neuronal voltages at the time one excitatory voltage is above threshold, indicating that this neuron is about to fire. Then, the following algorithm is used:

  1. Find the \(k_E\) excitatory and \(k_I\) inhibitory voltages within the sets \(\{v_j\}\) and \(\{w_j\}\) that are above the threshold, \(V_T\).

  2. If both \(k_E\) and \(k_I\) are zero, stop; the MFE has ended. Otherwise, find the neuron with the largest voltage, \(V_{\max}\), out of the \(k_E\) excitatory and \(k_I\) inhibitory voltages above the threshold.

  3. The neuron with voltage \(V_{\max}\) fires; it is reset to \(V_R\). If it is of type E (I), it is removed from the list \(\{v_j\}\) (\(\{w_j\}\)) and the remaining voltages of neurons that have not fired are updated by adding \(S^{EE}\) to (subtracting \(S^{EI}\) from) the voltages in \(\{v_j\}\) and adding \(S^{IE}\) to (subtracting \(S^{II}\) from) the voltages in \(\{w_j\}\).

  4. If either set \(\{v_j\}\) or \(\{w_j\}\) is non-empty, return to Step 1. Otherwise, stop.

The MFE magnitudes for the excitatory and inhibitory populations are obtained by subtracting the number of neurons that did not fire in the MFE from the total numbers of neurons in the two populations,

$$m_{E} = N_{E} - \#\left\{v_{j}\right\} \quad\text{and}\quad m_{I} = N_{I} - \#\left\{w_{j}\right\} , $$
(36)

where # denotes the number of elements in the set. Note that the sets in Eq. (36) only include the voltages of non-firing neurons, as detailed in Step 3 of the algorithm.
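The steps above can be sketched as follows (a minimal Python illustration in our own notation, not the authors’ simulation code; the reset of fired neurons to \(V_R\) is assumed to be handled by the surrounding time-stepping scheme, so fired neurons are simply removed from the lists):

```python
def resolve_mfe(v, w, VT, S_EE, S_EI, S_IE, S_II):
    """Sketch of the Appendix A cascade: fire suprathreshold neurons in
    order of decreasing voltage until none remain above threshold VT.
    v, w: lists of excitatory/inhibitory voltages.
    Returns (remaining v, remaining w, m_E, m_I), where the magnitudes
    follow Eq. (36): m_E = N_E - #{v_j}, m_I = N_I - #{w_j}."""
    v, w = list(v), list(w)
    NE, NI = len(v), len(w)
    while True:
        above_E = [x for x in v if x >= VT]
        above_I = [x for x in w if x >= VT]
        if not above_E and not above_I:
            break                        # Step 2: the MFE has ended
        vmax_E = max(above_E, default=float("-inf"))
        vmax_I = max(above_I, default=float("-inf"))
        if vmax_E >= vmax_I:             # Step 3: an excitatory neuron fires
            v.remove(vmax_E)             # fired neurons receive no further input
            v = [x + S_EE for x in v]    # excitation raises E voltages
            w = [x + S_IE for x in w]    # ... and I voltages
        else:                            # an inhibitory neuron fires
            w.remove(vmax_I)
            v = [x - S_EI for x in v]    # inhibition lowers E voltages
            w = [x - S_II for x in w]    # ... and I voltages
    return v, w, NE - len(v), NI - len(w)
```

For example, with \(V_T = 1\) and all coupling strengths 0.5, starting from excitatory voltages (1.05, 0.6) and one inhibitory voltage 0.4, the first firing pushes both remaining neurons over threshold and the entire network fires.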

1.2 Appendix B: Derivation of the geometric method

Starting from condition (6) in the main text, we describe here how to obtain a single condition, and therefore the MFE magnitude described by the single intersection point of G(v) defined in Eq. (14) and the line l(v) in Eq. (5). From the last points v < V T and w < V T where condition (6) is satisfied,

$$V_{T} - v = N_{E} S^{EE} \left(1-F_{E}(v)\right) - N_{I}S^{EI}\left(1-F_{I}(w)\right),$$
(37)
$$ V_{T} - w = N_{E} S^{IE} \left(1-F_{E}(v)\right) - N_{I} S^{II}\left(1-F_{I}(w)\right), $$
(38)

we have

$$\frac{V_{T} - v}{N_ES^{EE}} = 1-F_{E}(v) - \frac{N_{I}}{N_{E}}\frac{S^{EI}}{S^{EE}}(1-F_{I}(w)),$$
(39)
$$ \frac{V_{T} - w}{N_ES^{IE}} = 1-F_{E}(v) - \frac{N_{I} S^{II}}{N_ES^{IE}}(1-F_{I}(w)). $$
(40)

Subtracting Eq. (40) from Eq. (39) we have

$$\frac{V_{T} - v}{N_{E}S^{EE}} = \frac{V_{T} - w}{N_{E}S^{IE}} + \frac{N_{I} S^{II}}{N_{E}S^{IE}}(1-F_{I}(w)) - \frac{N_{I}}{N_{E}}\frac{S^{EI}}{S^{EE}}(1-F_{I}(w)),$$

which can be written as

$$ v = V_{T} - (V_{T} -w)\frac{S^{EE}}{S^{IE}} - \delta N_{I} (1-F_{I}(w)), $$
(41)

where \(\delta = S^{II}S^{EE}/S^{IE} - S^{EI}\). The first two terms on the RHS suggest the transformation of the inhibitory voltages defined in Eq. (7) in the main text,

$$ \hat{w} = V_{T} - (V_{T} - w) \frac{S^{EE}}{S^{IE}}. $$
(42)

This, together with the fact that the empirical CDF satisfies

$$\begin{array}{rll} F_{I}(w) &=& \int_{-\infty}^{w} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}} \delta \left(\omega^{\prime} - w_{j}\right)d\omega^{\prime}\\ & = &\int_{-\infty}^{\hat{w}} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}}\delta \left(z - \hat{w}_{j}\right) dz \equiv \hat{F}_{I}(\hat{w}) \end{array} $$
(43)

allows Eq. (41) to be written as

$$ v = \hat{w} - \delta N_{I} \left(1-\hat{F}_{I}(\hat{w})\right). $$
(44)
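The invariance (43) of the empirical CDF can be checked directly: since the map (42) is monotonically increasing (the coupling ratio \(S^{EE}/S^{IE}\) is positive), the number of transformed samples below a transformed point equals the number of original samples below the original point. A minimal numerical check (our own notation, with invented parameter values):

```python
import random

# Check Eq. (43): the empirical CDF is invariant under the monotone
# transformation (42), w_hat = V_T - (V_T - w) * S_EE / S_IE.
random.seed(1)
V_T, S_EE, S_IE = 1.0, 0.6, 0.4
w = [random.uniform(-1.0, 1.0) for _ in range(500)]

def transform(x):
    # Eq. (42); strictly increasing since S_EE / S_IE > 0
    return V_T - (V_T - x) * S_EE / S_IE

w_hat = [transform(x) for x in w]

def ecdf(samples, x):
    # empirical CDF: fraction of samples at or below x
    return sum(s <= x for s in samples) / len(samples)

w0 = 0.3                              # arbitrary evaluation point
lhs = ecdf(w, w0)                     # F_I(w0)
rhs = ecdf(w_hat, transform(w0))      # F_I_hat(w0_hat): should agree
```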

Next, we must consider the two cases of δ > 0 and δ < 0 separately.

Case 1

If δ > 0, we further transform \(\hat {w}\) by

$$ \bar{w} = \hat{w} - \delta N_{I} \left(1- \hat{F}_{I}(\hat{w})\right), $$
(45)

appearing as Eq. (9) in the main text. Then, Eq. (44) is simply

$$ v = \bar{w} $$
(46)

and we substitute this into Eq. (39), together with Eq. (43) to obtain

$$ \frac{V_{T}- \bar{w}}{N_{E}S^{EE}} = 1- F_{E}(\bar{w}) - \frac{N_{I}}{N_{E}} \frac{S^{EI}}{S^{EE}} \left(1-\hat{F}_{I}(\hat{w})\right). $$
(47)

What we want to do next is replace the function of \(\hat {w}\) by a function of \(\bar {w}\). Suppose we can invert the transformation \(\bar {w}= \hat {w} - \delta N_{I} \left (1-\hat {F}_{I}(\hat {w})\right )\) to obtain

$$\hat{w} = h(\bar{w}). $$

(Note that \(h^{-1}(x) = x - \delta N_{I}(1-\hat {F}_{I}(x))\), so \(\left [h^{-1}(x)\right ]' = 1+\delta N_{I}\hat {F}'_{I}(x)>0\) for \(\delta > 0\). Therefore, the transformation h is monotonically increasing and \(\hat{w} = h(\bar{w})\) is uniquely defined.) Then we have

$$\hat{F}_{I}(\hat{w}) = \int_{-\infty}^{\hat{w}} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}} \delta\left(z-h(\bar{w}_j)\right)dz. $$

Changing the integration to the variable y defined by y = h − 1(z), we have

$$ \hat{F}_{I}(\hat{w}) = \int_{-\infty}^{\bar{w}} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}} \delta\left(h(y)-h(\bar{w}_j)\right)h'(y)dy. $$
(48)

Taylor expanding h(y) and keeping only the linear term, we have \(h(y)-h(\bar {w}) = C_{0}(y-\bar {w})\) and \(h'(y) = C_{0}\), resulting in

$$\begin{array}{rll} \hat{F}_{I}(\hat{w}) &\approx& \int_{-\infty}^{\bar{w}} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}} \delta\left(C_{0}(y-\bar{w})\right)C_0dy \\ &=& \frac{C_{0}}{|C_0|}\int_{-\infty}^{\bar{w}} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}} \delta(y-\bar{w})dy \equiv \bar{F}_{I}(\bar{w}). \end{array} $$
(49)

Substituting Eq. (49) into Eq. (47), we have

$$ \frac{V_{T} - \bar{w}}{N_ES^{EE}} = 1- F_{E}(\bar{w}) - \frac{N_IS^{EI}}{N_ES^{EE}} \left(1-\bar{F}_{I}(\bar{w})\right), $$
(50)

corresponding to Eq. (12) in the main text, where we defined \(\bar {v}=v\) so that \(\bar {F}_{E}(v) = F_{E}(v)\). Solving this for \(\bar {w}\) is equivalent to finding the intersection point between the line \(l(v) = 1+\frac {1}{N_ES^{EE}}(v-V_T)\) and the function

$$G(v) = F_{E}(v) + \frac{N_{I}}{N_{E}} \frac{S^{EI}}{S^{EE}} \left(1-\bar{F}_{I}(v)\right) \textrm{ for } \delta \ge 0.$$

Case 2

If δ < 0, then \(\bar {w}\) defined in Eq. (45) may be larger than V T , which we would like to avoid. Rather, we solve Eq. (44) for \(\hat {w}\),

$$ \hat{w} = v + \delta N_{I} \left(1-\hat{F}_{I}(\hat{w})\right) $$
(51)

and transform v to (Eq. (10) in the main text)

$$ \bar{v} = v + \delta N_{I} \left(1-\hat{F}_{I}(\hat{w})\right) $$
(52)

so that Eq. (51) becomes

$$ \bar{v}= \hat{w}. $$
(53)

Using Eq. (53) to rewrite Eq. (52) in terms of \(\bar {v}\) and solving for v gives

$$ v = \bar{v} - \delta N_{I} \left(1-\hat{F}_{I}(\bar{v})\right). $$
(54)

Using Eq. (54) and that \(F_{I}(w) = \hat {F}_{I}(\hat {w})\), Eq. (40) becomes

$$ \frac{V_{T} - \bar{v}}{N_ES^{EE}} = 1- F_{E}(v) - \frac{N_IS^{II}}{N_{E} S^{IE}} \left(1-\hat{F}_{I}(\bar{v})\right), $$
(55)

corresponding to Eq. (12) in the main text, where we defined \(\bar {w}=\hat {w}\), so that \(\bar {F}_{I}(v) = \hat {F}_{I}(v)\).

Now, what we want to do is replace the function of v by a function of \(\bar {v}\). If we denote Eq. (54) by

$$v = g(\bar{v}), $$

then change the integration variable in

$$F_{E}(v) = \int_{-\infty}^{v} \frac{1}{N_{E}} \sum\limits_{j=1}^{N_{E}} \delta(z-v_j)dz $$

to y using z = g(y) and \(v_{j} = g(\bar {v}_j)\), we have that

$$\begin{array}{rll} F_{E}(v) &=& \int_{-\infty}^{\bar{v}} \frac{1}{N_{E}} \sum\limits_{j=1}^{N_{E}} \delta\left(g(y)-g(\bar{v}_j)\right)g'(y)dy \\ &\approx& \int_{-\infty}^{\bar{v}}\frac{1}{N_{E}} \sum\limits_{j=1}^{N_{E}} \delta \left(y-\bar{v}_{j}\right)dy\equiv \bar{F}_{E}(\bar{v}) \end{array} $$

following the same argument used to derive Eq. (49). Now, Eq. (55) becomes

$$ \frac{V_{T} - \bar{v}}{N_ES^{EE}} = 1-\bar{F}_{E}(\bar{v}) - \frac{N_{I}}{N_{E}}\frac{S^{II}}{S^{IE}}\left(1-\hat{F}_{I}(\bar{v})\right). $$
(56)

Solving this for \(\bar {v}\) is equivalent to finding the intersection point between the line \(l(v) = 1 + \frac {1}{N_ES^{EE}} (v-V_T)\) and the function

$$G(v) = \bar{F}_{E}(v) + \frac{N_{I}}{N_{E}} \frac{S^{II}}{S^{IE}} \left(1-\hat{F}_{I}(v)\right) \textrm{ for } \delta < 0. $$

Combining these two cases, we arrive at the MFE magnitude defined via the intersection of the function G(v) defined in Eq. (14) in the main text, and the line l(v) defined in Eq. (5) in the main text.
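To make the geometric construction concrete, the following sketch (our own illustration; smooth Gaussian CDFs stand in for the empirical CDFs, all parameter values are invented, and the \(\delta \ge 0\) branch of G is used) locates the last intersection point below threshold by a downward scan followed by bisection:

```python
import math

# Illustrative intersection of G(v) and l(v), delta >= 0 branch, with
# smooth Gaussian CDFs standing in for F_E and F_I_bar.
N_E, N_I = 100, 25
S_EE, S_EI = 0.05, 0.04
V_T = 1.0

def Phi(x, mu=0.0, sigma=0.3):
    """CDF of a Gaussian with mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def G(v):   # Eq. (14), delta >= 0 branch
    return Phi(v) + (N_I / N_E) * (S_EI / S_EE) * (1.0 - Phi(v))

def l(v):   # Eq. (5)
    return 1.0 + (v - V_T) / (N_E * S_EE)

# Scan downward from V_T for the first point where G >= l (the paper
# takes the last point below threshold where the cascade condition
# holds), then refine the bracket by bisection.
step = 1e-4
lo = hi = None
for i in range(1, 40001):
    v = V_T - i * step
    if G(v) - l(v) >= 0.0:
        lo, hi = v, v + step        # G - l >= 0 at lo, < 0 at hi
        break
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if G(mid) - l(mid) >= 0.0:
        lo = mid
    else:
        hi = mid
v_star = 0.5 * (lo + hi)            # intersection point
```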

1.3 Appendix C: Analytical formula of first passage time

In this appendix, we explain how to reformulate the passage of a generic two-dimensional anisotropic Brownian motion across a moving boundary as a one-dimensional first-passage problem, following the viewpoint used in Durbin’s papers (Durbin 1985; Durbin and Williams 1992).

We define x = (x, y) to be a two-dimensional anisotropic Brownian motion with linear drift, given by the solution to

$$\begin{array}{l} dx = \alpha_{x}dt+\beta_{x}dW_{t}^{X} \\ dy =\alpha_{y}dt+\beta_{y}dW_{t}^{Y} \end{array}$$

with initial point \(\mathbf{x}(0) = 0\) for simplicity. If \(\beta_{x} \neq \beta_{y}\), we define the isotropic version of this process via \(\mathbf {z}=( \hat {x},\hat {y})\), with \(\hat {x}=x/\beta _{x}\), \(\hat {y}=y/\beta _{y}\), i.e., \(\mathbf{z} = \beta^{-1}\mathbf{x}\), with

$$\beta = \left[ \begin{array}{cc} \beta_{x} & 0 \\ 0 & \beta_{y} \end{array}\right]. $$

We will also define

$$\bar{\beta} = \left[\begin{array}{ccc} \beta_{x} & 0 & 0 \\ 0& \beta_{y} & 0 \\ 0& 0 & 1 \end{array} \right] \text{.} $$

We consider the first crossing density of x to a linear boundary \( \mathcal {A}\) described by the normal constraint

$$\bar{\eta}\cdot ( x,y,t) =k $$

where \(\bar {\eta }=\left [\bar {\eta }_{x},\bar {\eta }_{y},\bar {\eta }_{t}\right ]^{\intercal }\) is the unit normal to \(\mathcal {A}\) in (x, y, t) space and k is the constant ‘offset’ of the linear plane (i.e., the distance between \(\mathcal {A}\) and the origin (0, 0, 0)). We also define \(\eta _{\perp }=\left [ \bar {\eta }_{x},\bar {\eta }_{y} \right ]^{\intercal }\) (not a unit vector) to be the projection of \(\bar {\eta }\) onto the x, y plane. So we know the process (x(t), y(t)) crosses \(\mathcal {A}\) when

$$\bar{\eta}\cdot ( x\left( t) ,y( t) ,t\right) =k $$

or where

$$( \bar{\beta}\bar{\eta}) \cdot \left( \hat{x},\hat{y} ,t\right) =k $$

or when

$$\frac{\beta \eta_{\perp }}{\left\vert \beta \eta_{\perp }\right\vert }\cdot ( \hat{x},\hat{y}) =\frac{k-\bar{ \eta}_{t}t}{\left\vert \beta \eta_{\perp }\right\vert } . $$

That is, x crosses \(\mathcal {A}\) when z crosses \(\bar {\beta } ^{-1}\mathcal {A}\) (i.e., the boundary perpendicular to \(\bar {\beta }\bar {\eta } \)). Now z is isotropic Brownian motion (plus a linear drift), so z can be described in terms of \(z_{\parallel }\) and \(z_{\perp }\), its components along the directions parallel and perpendicular to the boundary \(\bar {\beta }^{-1}\mathcal {A}\). Each of the processes \(z_{\parallel }\) and \(z_{\perp }\) is an independent Brownian motion (plus a linear drift). So the probability of z hitting the rescaled boundary \(\bar {\beta }^{-1}\mathcal {A}\) at time t is just the probability of \(z_{\perp }\) reaching the moving (scalar) boundary \( \left [ \bar {\beta }^{-1}\mathcal {A}\right ]_{\perp }=\left ( k-\bar {\eta } _{t}t\right ) /\left \vert \beta \eta _{\perp }\right \vert \) at time t. That is to say

$$\begin{array}{rll} P( \mathbf{x}\text{ crosses }\mathcal{A}\text{ at time }t) &= &P(\mathbf{z}\text{ crosses }\bar{\beta}^{-1}\mathcal{A}\text{ at time }t)\\ & =&P\left( z_{\perp }\text{ crosses }\left[ \bar{\beta}^{-1}\mathcal{A}\right]_{\perp }\right.\\ &&\hspace*{13pt} \left.\text{ at time }t{\vphantom{\bar{\beta}^{-1}\mathcal{A}}}\right) \text{.} \end{array} $$

According to Durbin (1985), this probability is simply

$$\begin{array}{rll} P&\left( z_{\perp }\text{ crosses }\left[ \bar{\beta}^{-1}\mathcal{A}\right]_{\perp }\text{ at time }t\right)\\ &=\left[ \text{distance of }z_{\perp } \text{ to }\left[ \bar{\beta}^{-1}\mathcal{A}\right]_{\perp }\text{ at time }0\right] \frac{1}{t} \\ &\times \left[ \text{density of }z_{\perp }\text{ on }\left[ \bar{\beta}^{-1}\mathcal{A}\right]_{\perp }\text{ at time }t \right] \\ &=\frac{k}{\left\vert \beta \eta_{\perp }\right\vert }\frac{1}{t}\cdot \left[ \text{density of }z_{\perp }\text{ on }\left[ \bar{\beta}^{-1} \mathcal{A}\right]_{\perp }\text{ at time }t\right] \text{.} \end{array}$$
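In one dimension this formula can be checked explicitly (our own sketch, with arbitrarily chosen constants): for standard Brownian motion and the linear boundary \(a + bt\) with a, b > 0, Durbin’s expression gives the first-passage density \(p(t) = (a/t)\,\varphi_{t}(a+bt)\), where \(\varphi_{t}\) is the N(0, t) density, and integrating p over all t recovers the classical total crossing probability \(e^{-2ab}\):

```python
import math

# 1-D check of Durbin's first-passage formula for standard Brownian
# motion W(t) and the receding linear boundary a + b*t (a, b > 0):
#     p(t) = (a / t) * phi_t(a + b*t),
# where phi_t is the N(0, t) density.  Integrating p(t) over all t
# should give the total crossing probability exp(-2*a*b).
a, b = 1.0, 0.5

def p(t):
    density = math.exp(-(a + b * t) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)
    return (a / t) * density

# Midpoint-rule quadrature on [0, 200]; the integrand vanishes rapidly
# at both endpoints, so the truncation error is negligible.
T, n = 200.0, 400000
h = T / n
total = sum(p((i + 0.5) * h) for i in range(n)) * h

expected = math.exp(-2.0 * a * b)   # classical crossing probability
```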

Now if z crosses at time t at a particular point \(\bar {\beta } ^{-1}a( t) \), then two things must happen. First, \(z_{\perp }\) must equal \(\left [ \bar {\beta }^{-1}a( t) \right ]_{\perp }\), which is also \(\left ( k-\bar {\eta }_{t}t\right ) /\left \vert \beta \eta _{\perp }\right \vert \), at time t. Second, \(z_{\parallel }\) must equal \(\left [ \bar {\beta } ^{-1}a( t) \right ]_{\parallel }\) at time t, where \(\left [ \bar {\beta }^{-1}a( t) \right ]_{\parallel }\) is the component of \(\bar {\beta }^{-1}a( t)\) along the boundary (perpendicular to \(\eta _{\perp }\)). Thus, the probability that the process z crosses at time t at a particular point \(\bar {\beta }^{-1}a( t) \) on \(\bar {\beta }^{-1} \mathcal {A}\) is given by

$$\begin{array}{@{}rcl@{}} &&P( t\text{, }\mathbf{z}\text{ crosses at }\bar{\beta}^{-1}a\left( t) \right)\\ && =P\left( z_{\perp }\text{ crosses }\left[ \bar{\beta}^{-1}\mathcal{A} \right]_{\perp }\text{ at time }t\right) \\ && \times P\left( z_{\parallel }\text{ is equal to }[ \bar{\beta}^{-1}a( t) ]_{\parallel } \text{ at time }t\right) \text{.} \end{array} $$

Now, as we discussed above,

$$\begin{array}{rll} P\left( z_{\perp }\text{ crosses }[ \bar{\beta}^{-1}\mathcal{A}]_{\perp }\text{ at time }t\right) \\ && = \frac{k}{\left\vert \beta \eta_{\perp }\right\vert }\frac{1}{t}\cdot \left[ \text{density of }z_{\perp }\text{ on } [ \bar{\beta}^{-1}a( t) ]_{\perp }\text{ at time }t \right] \text{.} \end{array}$$

For the second term, we have

$$\begin{array}{rll} P\left( z_{\parallel }\text{ is equal to }\left[ \bar{\beta}^{-1}a\left( t\right) \right]_{\parallel }\text{ at time }t\right) \\ && =\left[ \text{density of }z_{\parallel }\text{ on }[ \bar{\beta}^{-1}a( t) ]_{\parallel }\text{ at time }t\right] \text{,} \end{array}$$

and together, since z and z are independent,

$$\begin{array}{rll} P ( t\text{, }\mathbf{z}\text{ crosses at }\bar{\beta}^{-1}a\left( t) \right) \\ && =\frac{k}{\left\vert \beta \eta_{\perp }\right\vert }\frac{1}{t} \cdot \left[ \text{density of }\mathbf{z}\text{ on }\left[ \bar{\beta} ^{-1}a( t) \right] \text{ at time }t\right] \text{.} \end{array}$$

As we discussed above, if z crosses at time t at a point \(\bar {\beta } ^{-1}a( t) \), then x crosses at time t at a(t). However, the density \( p\left ( t\text {, }\mathbf {z}\text { crosses at }\bar {\beta } ^{-1}a( t) \right )\) is not equivalent to the density \(p( t\text{, }\mathbf{x}\text{ crosses at }a( t))\). Rather, we have

$$\begin{array}{rll} P\left({\vphantom{\bar{\beta} ^{-1}a( t) ,\bar{\beta}^{-1}}} t\text{, }\mathbf{z}\text{ crosses the line-segment}\right.\\ &&\hspace*{13pt} \left.\left[ \bar{\beta} ^{-1}a( t) ,\bar{\beta}^{-1}( a\left( t) +\delta \cdot \bar{\eta}_{\parallel }\right) \right] \right)\\ =P\left( t\text{, }\mathbf{x} \text{ crosses the line-segment }\left[ a( t) ,a( t) +\delta \cdot \bar{\eta}_{\parallel }\right] \right) \text{,} \end{array} $$

where \(\bar {\eta }_{\parallel }=\left [ -\bar {\eta }_{y},\bar {\eta }_{x},0 \right ]^{\intercal }\), and \(\eta _{\parallel }=\left [ -\bar {\eta }_{y},\bar {\eta }_{x}\right ]^{\intercal }\) (not a unit vector) lies along \(\mathcal {A}\) and lies in the x-y plane, and is perpendicular to η . Comparing the lengths of these line segments, we see that length \(\left | \bar {\beta }^{-1}a( t) ,\bar {\beta } ^{-1}( a\left ( t) +\delta \cdot \bar {\eta }_{\parallel }\right ) \right | =\delta \cdot \left \vert \beta ^{-1}\eta _{\parallel }\right \vert \) and length \(\left | a( t) ,a( t) +\delta \cdot \bar {\eta }_{\parallel }\right |=\delta \cdot \left \vert \eta _{\parallel }\right \vert \text {.} \) Using this relationship we can conclude the density

$$\begin{array}{rll} p\left( t\text{, }\mathbf{z}\text{ crosses at }\bar{\beta} ^{-1}a( t) \right) \frac{\left\vert \beta^{-1}\eta_{\parallel }\right\vert }{\left\vert \eta_{\parallel }\right\vert }\\ && = p( t\text{, }\mathbf{x}\text{ crosses at }a( t) ) . \end{array} $$

This in turn implies that the probability that the process x crosses \(\mathcal {A}\) at time t at a particular point a(t) on \( \mathcal {A}\) is given by the density

$$\begin{array}{l} p( t,\mathbf{x}\text{ crosses at }\mathbf{a}) =\frac{ \left\vert \beta^{-1}\eta_{\parallel }\right\vert }{\left\vert \eta_{\parallel }\right\vert }\frac{k}{\left\vert \beta \eta_{\perp }\right\vert }\frac{1}{t} \\ \times \left[ \text{density of }\mathbf{z}\text{ on } [ \bar{\beta}^{-1}a( t) ] \text{ at time }t\right] \text{.} \end{array}$$

Finally, since

$$\begin{array}{rll} \left[ \text{density of }\mathbf{z}\text{ on }\left[ \bar{\beta}^{-1}a\left( t\right) \right] \text{ at time }t\right] \left\vert \beta \right\vert^{-1} \\ \qquad\qquad =\left[ \text{density of }\mathbf{x}\text{ on }a( t) \text{ at time }t\right], \end{array}$$

where \(\left\vert \beta \right\vert\) is the determinant of \(\beta\), the density becomes

$$\begin{array}{rll} p( t,\mathbf{x}\text{ crosses at }\mathbf{a}) =\frac{ \left\vert \beta^{-1}\eta_{\parallel }\right\vert }{\left\vert \eta_{\parallel }\right\vert }\frac{k}{\left\vert \beta \eta_{\perp }\right\vert }\frac{1}{t}\\ && \times \left\vert \beta \right\vert \left[ \text{density of }\mathbf{x}\text{ on }a( t) \text{ at time }t\right] . \end{array}$$

Using the language described in Section 3.3, the above can be written as

$$\begin{array}{rll} p ( t,\mathbf{a}|0,\mathbf{x}\left( 0) \right) =\frac{1}{t-0}\left( \widehat{\mathcal{A}}_{a( t) }|_{0}-\mathbf{x}( 0) \right) \cdot \eta_{\perp } \\ && \times \frac{\left\vert \beta^{-1}\eta_{\parallel }\right\vert }{\left\vert \beta \cdot \eta_{\perp }\right\vert }\frac{ \left\vert \beta \right\vert }{\left\vert \eta_{\perp }\right\vert \left\vert \eta_{\parallel }\right\vert }f\left( t,\mathbf{a}|0,\mathbf{x}\left( 0\right) \right) \text{,} \end{array}$$

leading to

$$\begin{array}{rll} p(t,\mathbf{a}) &=& \lim_{s\rightarrow t^{-}}\frac{f( t,\mathbf{a}) }{t-s} \int_{\partial\Omega (s) }\left(\widehat{\mathcal{A}}_{a(t) }|_{s}-\mathbf{x}\right) \cdot n_{\perp } \\ &\times & \frac{\left\vert \beta^{-1}n_{\parallel }\right\vert}{\left\vert \beta n_{\perp}\right\vert}\frac{\left\vert \beta \right\vert}{\left\vert n_{\perp}\right\vert \left\vert n_{\parallel}\right\vert}g\left(s,\mathbf{x}|t,\mathbf{a}\right) d\mathbf{x}, \end{array} $$
(57)

where f(t, a | 0, x(0)) is the (unconditioned) density of the process x at (t, a); see details in Appendix B.

We can now fill in the geometry for the specific problem at hand. The points (x, y) on the surface \(\mathcal {A}\) at time t were given in Eq. (23) and come from the constraint \(\bar {G}(t) = \bar {l}(t)\). The vector

$$\eta = \left[-\gamma \text{, }\alpha \text{, }\alpha \bar{p}_{I}( t) -\gamma\bar{p}_{E}( t) +\frac{1}{N_{E}S ^{EE}}\right] $$

is normal to the surface in 3D space at the point (x, y, t). The 2-component normal to \(\mathcal {A}\), restricted to the x-y plane, is

$$n_{\perp }=\frac{1}{\left\vert \eta \right\vert }\eta_{\perp }=\frac{1}{ \left\vert \eta \right\vert }[ -\gamma \text{, }\alpha ], $$

and the 2-component vector perpendicular to \(n_{\perp }\) is

$$n_{\parallel }=\frac{1}{\left\vert \eta \right\vert }\eta_{\parallel }=\frac{1}{\left\vert \eta \right\vert }[ \alpha ,\gamma ] $$

where | η | is the length of the vector η. One can also think of \(\mathcal {A}\) as a line parallel to − γx + αy in the x-y plane, with a time-dependent offset of \(k/N_{E}+\gamma \bar {f}_{E}( t) -\alpha \bar {f}_{I}( t) -\frac {1}{ N_{E}S^{EE}}t\). So, around any point (a(t), t) = (a x , a y , t) on \(\mathcal {A}\), the point

$$\widehat{\mathcal{A}}_{a( t) }|_{s} = \mathbf{a} - \mathbf{u}\; \nu_{n}(t) (t-s) $$

is on the linearized boundary tangent to \(\mathcal {A}\) at the point a(t). The factor

$$ \nu_{n}( t) =\sqrt{\gamma^{2}+\alpha^{2}}\left( \alpha \bar{\rho} _{I}( t) -\gamma \bar{\rho}_{E}( t) +\frac{1}{N_{E}S^{EE}}\right)^{-1} $$
(58)

can be thought of as the speed at which the boundary propagates in the normal direction as s (and hence t − s) changes, and the unit vector in the x-y plane \( \mathbf {u} = \frac {[-\gamma ,\alpha ] }{\sqrt {\gamma ^{2}+\alpha ^{2}}} \) is perpendicular to \(\widehat {\mathcal {A}}\) and therefore parallel to \(n_{\perp }\). Therefore,

$$\frac{\widehat{\mathcal{A}}_{a( t) }|_{s}-\mathbf{a}}{t-s} =-\mathbf{u}\;\nu_{n}(t) \text{.} $$
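The geometric quantities above can be sketched numerically. The following minimal Python sketch computes the in-plane normal n ⊥, the tangent vector n ∥, and the normal speed ν n of Eq. (58); all parameter values below are hypothetical and chosen purely for illustration, and the orthogonality of n ⊥ and n ∥ serves as a built-in check:

```python
import math

# Hypothetical parameter values, for illustration only
gamma, alpha = 0.8, 1.2      # boundary coefficients in -gamma*x + alpha*y
rho_E, rho_I = 0.4, 0.3      # stand-ins for rho_bar_E(t), rho_bar_I(t)
N_E, S_EE = 100, 0.02        # population size and coupling strength

# 3D normal eta to the surface A at the point (x, y, t)
eta_t = alpha * rho_I - gamma * rho_E + 1.0 / (N_E * S_EE)
eta = [-gamma, alpha, eta_t]
norm = math.sqrt(sum(c * c for c in eta))

# In-plane normal and tangent vectors, as in the text
n_perp = [-gamma / norm, alpha / norm]
n_par = [alpha / norm, gamma / norm]

# Normal speed of the moving boundary, Eq. (58)
nu_n = math.sqrt(gamma**2 + alpha**2) / eta_t
```

Here n_perp · n_par = 0 by construction, and ν n stays finite as long as the combination α ρ̄ I − γ ρ̄ E + 1/(N E S EE) is bounded away from zero.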

We now change variables to the \(n_{\perp }\) and \(n_{\parallel }\) directions by first changing to variables in which the stochastic process (ϕ E (t), ϕ I (t)) obeys isotropic Brownian motion in 2D. This is done with the matrix

$$\beta =\left(\begin{array}{cc}\sqrt{\frac{\bar{p}_{E}(t)}{N_{E}}} & 0 \\ 0 & \sqrt{\frac{\bar{p}_{I}(t)}{N_{I}}}\end{array}\right)$$

and requires the change of variables term given by

$$\begin{array}{@{}rcl@{}} n_{\perp }\frac{\left\vert \beta^{-1}n_{\parallel }\right\vert }{\left\vert \beta n_{\perp }\right\vert }\frac{\left\vert \beta \right\vert }{\left\vert n_{\perp }\right\vert \left\vert n_{\parallel }\right\vert } = \mathbf{u}\sqrt{1+1/\nu_{n}^{2}} = \mathbf{u}\; \zeta \text{,} \end{array} $$

where \(\zeta =\sqrt {1+1/\nu _{n}^{2}}\).
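The role of β can be checked with a quick Monte Carlo sketch: independent Gaussian increments with variances p̄ E (t)/N E and p̄ I (t)/N I, once scaled by β −1, have approximately identity covariance, i.e. they look like isotropic 2D Brownian increments. The variance values below are hypothetical:

```python
import math
import random

random.seed(0)

# Hypothetical values of p_bar_E(t)/N_E and p_bar_I(t)/N_I at a fixed time
var_E, var_I = 0.25, 0.04
beta_diag = (math.sqrt(var_E), math.sqrt(var_I))  # diagonal entries of beta

n = 200_000
# Anisotropic increments of (phi_E, phi_I), whitened by beta^{-1}
samples = [(random.gauss(0.0, beta_diag[0]) / beta_diag[0],
            random.gauss(0.0, beta_diag[1]) / beta_diag[1])
           for _ in range(n)]

var_zE = sum(x * x for x, _ in samples) / n
var_zI = sum(y * y for _, y in samples) / n
cov = sum(x * y for x, y in samples) / n
# var_zE and var_zI are close to 1 and cov close to 0:
# the whitened increments are isotropic
```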

Using the definition \( l( s,\mathbf {x}|t,\mathbf {a}) =\left ( \widehat {\mathcal {A}}_{a\left ( t\right ) }|_{s}-\mathbf {x}\right ) \cdot \mathbf {u}\; \zeta \; , \) we can calculate the density of crossing at time t and position a on \(\mathcal {A}\) via

$$\begin{array}{l} p( t,\mathbf{a}) \\ =\lim\limits_{s\rightarrow t^{-}}\frac{f( t,\mathbf{a}) }{t-s}\mathbb{E}\left( l( s,\mathbf{x}|t,\mathbf{a}) |\text{ first crossing at }t,\mathbf{a}\right) \\ =\lim\limits_{s\rightarrow t^{-}}\frac{f( t,\mathbf{a}) }{t-s}\mathbb{E}\left( l( s,\mathbf{x}|t,\mathbf{a}) |\text{ some crossing at }t, \mathbf{a}\right) \\ -\lim\limits_{s\rightarrow t^{-}}\frac{f( t,\mathbf{a}) }{t-s}\mathbb{E}\left( l( s,\mathbf{x}|t,\mathbf{a}) |\text{ crossing at }t, \mathbf{a}\right.\\ \quad\quad\left.\text{ and a first crossing at }( r,\mathbf{b}) \text{ where } r<t\right) \text{.} \end{array} $$
(59)

In the above expression the term f(t, a) refers to the unconditioned density of the process (ϕ E , ϕ I , t) at time t and position a (presumed to lie on \(\mathcal {A}\)). The first term in the sum is straightforward; the second term is a little trickier. Note that the conditional first passage density at time r to point b, given that the process also crosses the point a at time t, can be written as q(r, b)f(r, b | t, a) = q(r, b)f(r, b, t, a) / f(t, a), where q(r, b) is defined via p(r, b) = q(r, b)f(r, b). Following Durbin’s method, we obtain the analytical formula (32) by taking the first two terms of Eq. (59) in the main text.

1.4 Appendix D: Probability density of stochastic process

In this appendix, we derive the distributions f(t, a) in Eq. (33) and f(r, b | t, a) in Eq. (34) in the main text. Recall that \(\bar {f}_{E}(t)\) and \(\bar {f}_{I}(t)\) are the smooth CDFs for the transformed voltage variables, defined in Eq. (18). These define the stochastic processes ϕ E (t) and ϕ I (t) in Eq. (20).

The distribution f(t, a) refers to the probability of observing the unconstrained process (ϕ E , ϕ I ) at a(t) at time t. In other words, f(t, a) is the probability that

$$\mathbf{\phi}( t) = \left[ \begin{array}{c} \phi_{E}( t) \\ \phi_{I}( t) \end{array} \right] =\mathbf{a}( t) =\left[ \begin{array}{c} a_{E}( t) \\ a_{I}( t) \end{array} \right] $$

at time t, without assuming anything about how ϕ crosses \(\mathcal {A}.\) This probability is given by:

$$f( t,\mathbf{a}) =P\left( \phi_{E}( t) =a_{E}( t) \right) P\left( \phi_{I}( t) =a_{I}( t) \right) \text{,} $$

where each component of ϕ evolves independently. Now of course the probability that ϕ Q (t) = a Q (t) at the time t is the same as the probability that \(\bar {\phi }_{Q}\left ( \tau \right ) =a_{Q}( t) \) at time τ = f Q (t). This in turn is the same as \(\sqrt {N_{Q}}\) times the probability \(P\left ( B_{\tau }=\sqrt {N_{Q}}a_{Q}\right ) \), the probability that a standard Brownian bridge (connecting (0, 0) to (1, 0)) reaches the value \(\sqrt {N_{Q}}a_{Q}( t) \) at time τ = f Q (t). According to Durbin and Williams (1992), this probability is given by

$$P\left( B_{\tau }=\sqrt{N_{Q}}a_{Q}\right) =\frac{1}{\sqrt{2\pi \tau \left( 1-\tau \right) }}\exp \left( -\frac{1}{2}\frac{N_{Q}a_{Q}^{2}}{\tau \left( 1-\tau \right) }\right) $$

Therefore, f(t, a) is precisely Eq. (33) in the main text.
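Since f(t, a) is a product of two such factors, each factor \(\sqrt {N_{Q}}\,P( B_{\tau }=\sqrt {N_{Q}}a_{Q})\) should be a proper probability density in a Q. A minimal numerical check of the displayed formula, with hypothetical values of N Q and τ:

```python
import math

def bridge_density(a_Q, N_Q, tau):
    """sqrt(N_Q) * P(B_tau = sqrt(N_Q) * a_Q) for a standard Brownian bridge,
    using the Durbin and Williams (1992) formula quoted above."""
    var = tau * (1.0 - tau)
    return (math.sqrt(N_Q) / math.sqrt(2.0 * math.pi * var)
            * math.exp(-0.5 * N_Q * a_Q**2 / var))

N_Q, tau = 50, 0.3   # hypothetical population size and rescaled time in (0, 1)
da = 1e-4
total = sum(bridge_density(k * da, N_Q, tau) * da
            for k in range(-20_000, 20_001))
# total is approximately 1: the sqrt(N_Q)-scaled formula is a density in a_Q
```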

In a similar vein, the distribution f(r, b | t, a) is the conditional distribution of the process ϕ at (r, b), given that the point (t, a) is reached. Using logic similar to the above, this is given by

$$\begin{array}{lll} && f (r,\mathbf{b}|t,\mathbf{a}) = P\left(\phi _{E}(r) =b_{E}(r) \text{ }|\text{ }\phi_{E}(t) =a_{E}(t) \right) \\&& \times P\left(\phi _{I}(r) =b_{I}(r) \text{ }|\text{ }\phi _{I}( t) =a_{I}(t)\right) \\&&= P\left(\bar{\phi}_{E}( \bar{f}_{E}(r)) =b_{E}(r) \text{ }|\text{ }\bar{\phi}_{E}(\bar{f}_{E}(t))=a_{E}(t) \right)\\&&\times P\left(\bar{\phi}_{I}(\bar{f}_{I}(r)) = b_{I}(r) \text{ }|\text{ }\bar{\phi}_{I}(\bar{f}_{I}( t)) = a_{I}( t) \right) \\&&=\sqrt{N_{E}}P\left(B_{f_{E}(r) }=\sqrt{N_{E}}b_{E}\text{ }|\text{ }B_{f_{E}(t) }=\sqrt{N_{E}}a_{E}\right)\\&& \times \sqrt{N_{I}} P\left(B_{f_{I}(r)} = \sqrt{N_{I}}b_{I}\text{ }|\text{ }B_{f_{I}(t)}=\sqrt{N_{I}}a_{I}\right) . \end{array} $$

Note that B τ conditioned to hit \(\sqrt {N_{Q}}a_{Q}\) at time f Q (t) is just a Brownian bridge connecting (0, 0) to \(( f_{Q}( t) ,\sqrt {N_{Q}}a_{Q}) \). One can think of this new Brownian bridge as a rescaled version of a standard Brownian bridge:

$$\begin{array}{rll} P( B_{r}=x\text{ }|\text{ }B_{t}=a\text{ }|\text{ }B_{1}=0) =P( B_{r}=x\text{ }|\text{ }B_{t}=a) \\ && =P\left( B_{r}-a\frac{r}{t}=x-a\frac{r}{t}\text{ }|\text{ }B_{t}-a\frac{t}{t}=0\right) \\ &&=P\left( B_{r/t}=x-a\frac{r}{t}\text{ }|\text{ }B_{1}=0\right) \\ && =\frac{1}{\sqrt{2\pi \frac{r}{t}\left( 1-\frac{r}{t}\right) }}\exp \left( - \frac{1}{2}\frac{\left[ x-a\frac{r}{t}\right] {}^{2}}{\frac{r}{t}\left( 1- \frac{r}{t}\right) }\right) \text{.} \end{array}$$

Then

$$\begin{array}{@{}rcl@{}} && P\left(B_{f_{Q}(r)}=\sqrt{N_{Q}}b_{Q}\text{ }|\text{ }B_{f_{Q}(t)}=\sqrt{N_{Q}}a_{Q}\right) \\ &=&\frac{f_{Q}(t)}{\sqrt{2\pi f_{Q}( r) \left(f_{Q}(t) -f_{Q}(r) \right) }}\\ && \exp \left( -\frac{1}{2}\frac{N_{Q}\left[ b_{Q}f_{Q}( t) -a_{Q}f_{Q}( r) \right] {}^{2}}{f_{Q}( r) ( f_{Q}(t)-f_{Q}( r)) }\right) \end{array} $$

and formula (34) for f(r, b | t, a) in the main text follows. In the computation of MFE magnitude, we must simply account for not selecting the k neurons that initiate the MFE, so we replace N E by N E − k.
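The conditional factor can be sanity-checked the same way: after the \(\sqrt {N_{Q}}\) scaling that appears in f(r, b | t, a), the displayed expression should integrate to one over b Q. A sketch with hypothetical values of f Q (r) and f Q (t) in (0, 1):

```python
import math

def cond_bridge_density(b_Q, a_Q, N_Q, tau_r, tau_t):
    """sqrt(N_Q) * P(B_{f_Q(r)} = sqrt(N_Q) b_Q | B_{f_Q(t)} = sqrt(N_Q) a_Q),
    with tau_r = f_Q(r) < tau_t = f_Q(t), using the closed form above."""
    pref = tau_t / math.sqrt(2.0 * math.pi * tau_r * (tau_t - tau_r))
    expo = (-0.5 * N_Q * (b_Q * tau_t - a_Q * tau_r) ** 2
            / (tau_r * (tau_t - tau_r)))
    return math.sqrt(N_Q) * pref * math.exp(expo)

N_Q, a_Q = 50, 0.05
tau_r, tau_t = 0.2, 0.6   # hypothetical values of f_Q(r) and f_Q(t)
db = 1e-4
total = sum(cond_bridge_density(k * db, a_Q, N_Q, tau_r, tau_t) * db
            for k in range(-20_000, 20_001))
# total is approximately 1; the density peaks at b_Q = a_Q * tau_r / tau_t
```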

Cite this article

Zhang, J., Newhall, K., Zhou, D. et al. Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks. J Comput Neurosci 36, 279–295 (2014). https://doi.org/10.1007/s10827-013-0472-6
