Journal of Computational Neuroscience, Volume 36, Issue 2, pp 279–295

Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks

  • Jiwei Zhang
  • Katherine Newhall
  • Douglas Zhou
  • Aaditya Rangan

Abstract

Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous “multiple firing events” (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population–based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.

Keywords

Spiking neurons · Synchrony · Homogeneity · Multiple firing events · First passage time · Integrate-and-fire neuronal networks

1 Introduction

The homogeneous and synchronous dynamics of biological and model neuronal networks have received much attention over the years (Amari 1974; Amit and Brunel 1997; Buzsaki and Draguhn 2004; Cai et al. 2006; Cai et al. 2004; DeVille and Peskin 2008; Eggert and Hemmen 2001; Fusi and Mattia 1999; Gerstner 1995; 2000; Knight 1972; Newhall et al. 2010; Nykamp and Tranchina 2000; Omurtag et al. 2000; Singer 1999; Treves 1993; Wilson and Cowan 1972; 1973), but it is perhaps what lies in between that gives rise to their rich dynamics. In this paper, we consider a regime of apparent and sudden barrages of firing that are temporally localized and are separated by time spans of homogeneous firing (cf. Fig. 1). These spurts of firing activity involving different groups of neurons (ranging from a few neurons to a substantial fraction of the population) are called multiple firing events (MFEs) (Rangan and Young 2013a), and cannot be described as either synchronous or homogeneous firing events. These events are generally initiated by one excitatory spiking neuron that in turn causes some subset of other neurons to fire, reflecting strong correlations in firing activity. The complicated structures of MFE dynamics have not only been observed in experimental studies of a variety of animals, including the leech ganglia, the rat hippocampus, the olfactory system of moths, the auditory cortex of rats, and the visual cortex of the monkey and the cat (Churchland et al. 2010; DeWeese and Zador 2006; Lampl et al. 1999; Lei et al. 2009; Mazzoni et al. 2007; Pertermann 2009; Riffell et al. 2009a, b; Samonds et al. 2005; Yu and Ferster 2010; Yu et al. 2011), but have also been revealed in computational studies of the mammalian primary visual cortex (V1) (Cai et al. 2005; Rangan and Young 2012; 2013b), as well as in a number of other computational and analytical studies (Battaglia and Hansel 2011; Benayoun et al. 2010; Brunel and Hakim 1999; Brunel 2000; Cardanobile and Rotter 2010; Hansel and Sompolinsky 1996; Kriener et al. 2008; Renart et al. 2004; Sun et al. 2010; Zhou et al. 2008). The generation of MFEs in idealized spiking neuron networks reflects a strong competition between excitatory and inhibitory populations operating near threshold, providing a possible mechanism underlying similar phenomena observed in real neuronal systems (Anderson et al. 2000; Krukowski and Miller 2000; Murthy and Humphrey 1999; Sillito 1975; Sompolinsky and Shapley 1997; Worgotter and Koch 1991).
Fig. 1

(Color online) Raster plots for excitatory (red, top half) and inhibitory (blue, bottom half) neuron populations demonstrating three regimes in which the I&F model can operate: (a) a homogeneous regime with little to no correlation between spike times, SEE = SEI = SIE = SII = 0.003; (b) an MFE regime with bouts of homogeneous firing interrupted by larger synchronous events, SEE = SEI = SIE = SII = 0.009; and (c) a more synchronous regime with the majority of the network firing in unison, SEE = SII = 0.009, SEI = SIE = 0.0072. For all three, the network sizes are NE = NI = 300, and the Poisson driving parameters are ηE = 550 Hz, fE = 0.07, ηI = 530 Hz, fI = 0.07

The inclusion of MFE dynamics into large-scale computational models has only been possible by carefully resolving each spike (as by Rangan and Cai (2007), for example). Population-based methods such as firing rate models and master equations or Fokker-Planck equations rely heavily upon the assumption of the network remaining homogeneous (Brunel and Hakim 1999; Cai et al. 2006; Cai et al. 2004; Rangan and Young 2013a). This assumption is characterized by weak correlations between the individual neurons’ evolution, or nearly independent spike times generated across the network (i.e. roughly Poissonian firing statistics). The extension of the master equation to include time-correlated MFEs has yet to be fully addressed. The difficulty lies in how to self-consistently incorporate the MFEs into a master equation or Fokker-Planck equation description. Recently, a proposal to circumvent this difficulty computationally has been given in Refs. (Rangan and Young 2013a; Zhang et al. In preparation): stop the evolution of the master equation when an MFE (manifested as a synchronous event of a subset of neurons in the network) occurs, then reshape the population distributions after counting the number of firing neurons participating in the synchronous event, and return to evolving the master equation until the next occurrence of an MFE. While the above procedure appears to be straightforward, there are two questions that need to be answered: (1) what is the stopping criterion indicating that an MFE occurs? and (2) how many neurons participate in an MFE? The first question, concerning the stopping criterion, depends on the probability of more than two excitatory neurons firing; this question has been addressed by Zhang et al. (In preparation).
In this paper, we answer the second question for a specific current-based integrate-and-fire (I&F) neuronal network model by tackling the conceptual issue of how to mathematically characterize MFEs and developing analytical approaches to obtaining the number of neurons firing in an MFE.

In the pulse-coupled current-based integrate-and-fire (I&F) neuronal network model, MFEs are cascade-induced synchronous events occurring at single moments in time; one excitatory neuron fires, increasing the voltages of the other neurons, causing more neurons to fire, and continuing in this cascading fashion until no more neurons fire, or all neurons in the network fire. The independent stochastic processes driving each neuron between synchronous events cause the neuronal voltages to diverge; thus each MFE may involve not only a different subset of the population but also an entirely different number of neurons. Even if all the neurons fire together once, they are not guaranteed to repeat this total synchronous event (see Ref. (Newhall et al. 2010) for a detailed discussion). We are interested in the specific model parameter regime in which the network displays dynamics of substantially sized MFEs separated by time intervals of effectively homogeneous firing. This regime is strongly influenced by the competition between the excitatory and inhibitory populations.

In order to mimic the I&F dynamics using a population-based model, we seek to characterize the MFE by its magnitude, defined as the number of neurons firing together with the neuron(s) initiating the synchronous event, in terms of the information available in the population-based description. Specifically, at the time when the stopping criterion is met (Zhang et al. In preparation), we know the voltage distributions for the excitatory and inhibitory populations, as well as the synaptic coupling strengths and population sizes. For an all-to-all coupled network of excitatory neurons, the requirement on the individual voltage arrangements for an MFE of a given size to occur was discussed in Ref. (Newhall et al. 2010), but its probability distribution in terms of the voltage distributions could only be computed practically for small MFE sizes. Here, to obtain the size of an MFE, we circumvent the “balls-in-bins” combinatorics problem in Ref. (Newhall et al. 2010) by further developing the graphical method presented in Ref. (Rangan and Young 2013a), which describes MFE magnitudes for interacting excitatory and inhibitory populations. We show that the distribution of the MFE size is reducible to the distribution of first passage times of a two-dimensional stochastic process to a moving boundary. While it is possible to write down an explicit partial differential equation for the first passage time distribution of an arbitrary white-noise driven stochastic process to an arbitrary boundary, this equation can be solved exactly in only a few simple cases. In this paper, we approximate the MFE magnitude distribution by extending Durbin’s method for 1D passage times with moving boundaries (Durbin and Williams 1992) to the 2D case.
The resulting analytical formula for the distribution of MFE magnitudes not only can be easily integrated into population-based methods, but also furnishes a conceptual advantage by illuminating the mechanism underlying MFEs and their dependence on the voltage distributions in different parameter regimes of synaptic strength and population sizes.

The remainder of the paper is organized as follows. In Section 2, we review the Integrate-and-Fire network with inhibitory and excitatory neurons. In Section 3, we develop methods to compute the magnitudes of the MFEs. First, we review the condition for the cascade to continue, and discuss the graphical interpretation of MFEs (see Ref. (Rangan and Young 2013a)), which relates the MFE magnitude to the intersection of a cumulative distribution function (CDF) and a line. Next, we show how to approximate this intersection by replacing the empirical CDF with solutions to two stochastic differential equations, given the original voltage distributions of the excitatory and inhibitory neurons under an appropriate change of variables. Finally, extending Durbin’s method, we derive a formula for the density of the first passage time, which in turn provides the magnitude density of MFEs. We finally discuss the validity of our approximations in Section 4 and draw conclusions in Section 5. Many of the mathematical details are described in the Appendixes.

2 Integrate-and-fire network

We consider a model network of all-to-all coupled, current-based I&F neurons consisting of NE excitatory (E) and NI inhibitory (I) neurons. The voltage difference across the ith neuron’s membrane of type Q ∈ {E, I} obeys the equation
$$ \frac{dV^{Q}_{i}}{dt}=-g_{L}\left( V^{Q}_i-V_{L}\right) + I_{i}^{QE} -I_{i}^{QI} $$
(1a)
for i = 1,…, NQ, whenever \(V_{i}^{Q} < V_{T}\) for firing threshold VT and where VL is the leakage voltage. When the voltage \(V_{i}^{Q}\) crosses VT, the neuron is said to generate an action potential; a spike time \(t_{ik}^{Q}\) is recorded and \(V_{i}^{Q}\) is reset to the reset voltage VR, and held there for a time τref, referred to as the “refractory period”. (In all figures we use the non-dimensional values VT = 1 and VL = VR = 0; see Cai et al. (2005).) The spike times also generate the input currents appearing as the last two terms in Eq. (1a). These excitatory and inhibitory currents are given by
$$ I_{i}^{QE} = \sum\limits_{l}f^{Q}\delta \left( t-s_{i l}^{Q}\right)+ \sum\limits_{j \ne i} \sum\limits_{k}S^{QE} \delta \left( t-t_{jk}^{E}\right) $$
(1b)
and
$$ I_{i}^{QI} = \sum\limits_{j \ne i}\sum\limits_{k}S^{QI}\delta ( t-t_{jk}^{I}) $$
(1c)
respectively. The first term in the right hand side of Eq. (1b) represents an external driving train of spikes, each with strength fQ, at the times, \(s_{i l}^{Q}\), generated independently for each neuron from a Poisson point process with rate ηQ. The second term in the right-hand side of Eq. (1b) (the only term in Eq. (1c)) represents the sum over all spikes generated by the excitatory (inhibitory) population of neurons. The current impulse is a delta function; the voltage instantaneously jumps up by an amount fQ at each external-spike time, SQE at each excitatory-spike time, and decreases by an amount SQI at each inhibitory-spike time.
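As a concrete illustration of Eqs. (1a)–(1c), the network dynamics can be sketched with a simple fixed-timestep scheme. This is a minimal sketch, not the exact event-driven integration used in the paper: the leak rate gL, the initial voltages, and the one-timestep delivery of recurrent kicks are illustrative assumptions (the paper delivers kicks instantaneously and resolves same-instant cascades as in Appendix A); the remaining parameters loosely follow Fig. 1a, in the non-dimensional units VT = 1, VL = VR = 0.

```python
import numpy as np

rng = np.random.default_rng(1)

NE, NI = 300, 300
VT, VL, VR = 1.0, 0.0, 0.0
gL = 0.05                      # assumed leak rate (1/ms)
fE, fI = 0.07, 0.07            # external kick strengths f^Q
etaE, etaI = 0.55, 0.53        # external Poisson rates per ms (550 Hz, 530 Hz)
S_EE = S_EI = S_IE = S_II = 0.003
dt = 0.1                       # ms

vE = rng.uniform(0.0, 0.9, NE)
vI = rng.uniform(0.0, 0.9, NI)
spikesE = spikesI = 0          # spikes from the previous step, delivered as kicks now

for step in range(2000):
    # leak toward V_L plus independent external Poisson kicks (first term of Eq. 1b)
    vE += dt * (-gL * (vE - VL)) + fE * rng.poisson(etaE * dt, NE)
    vI += dt * (-gL * (vI - VL)) + fI * rng.poisson(etaI * dt, NI)
    # recurrent delta-function kicks from last step's spikes (Eqs. 1b and 1c)
    vE += S_EE * spikesE - S_EI * spikesI
    vI += S_IE * spikesE - S_II * spikesI
    # threshold crossings: record spikes and reset (refractory period omitted)
    firedE, firedI = vE >= VT, vI >= VT
    spikesE, spikesI = int(firedE.sum()), int(firedI.sum())
    vE[firedE] = VR
    vI[firedI] = VR
```

With these weak couplings the raster would resemble the homogeneous regime of Fig. 1a; increasing the four coupling strengths pushes the sketch toward the MFE regime.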

Equations (1) can be numerically integrated exactly using an event-driven algorithm to determine each of the NE and NI neuronal voltages at any instant in time (Brette et al. 2007), together with a procedure to resolve an MFE like the one presented in Appendix A. However, for very large populations, the dynamics (in a homogeneous regime of firing) can be well approximated by the solution to a master equation or a Fokker-Planck equation (Brunel and Hakim 1999; Cai et al. 2004, 2006; Rangan and Young 2013a) for the voltage distributions, ρE(v, t) and ρI(v, t), of the excitatory and inhibitory populations, respectively. As mentioned previously in the Introduction (and shown in (Rangan and Young 2013a; Zhang et al. In preparation)), the master equation or Fokker-Planck equation can be extended to qualitatively capture the features of MFE dynamics. In the MFE regime, the population has two overall modes, which we call the “MFE” mode and the “inter-MFE” mode. In the inter-MFE mode, the neurons are weakly correlated or completely independent, and the voltage distribution of neurons can be well described by population equations. When the criterion for an MFE to occur is satisfied, the inter-MFE mode terminates, and the MFE mode begins. Since the voltages in the corresponding I&F model will instantaneously jump up or down by a synaptic kick, the MFE mode is conceptualized as occurring within a single instant of time. The non-zero refractory period ensures that a neuron fires only once during an MFE. From the available voltage distributions, ρE(v, t) and ρI(v, t), it is important to have an efficient method for determining the MFE magnitude and thereby effectively capture the features of the MFE regime.

The focus of this paper is developing different approaches to obtain the size of an MFE during the MFE mode. Accordingly, in what follows, we do not present the details of the master equation, but assume we have access to ρE(v) and ρI(v), the distributions of neuronal voltages at the time some excitatory neurons are about to fire and trigger an MFE.

3 Determining MFE magnitude

As mentioned in the Introduction, many biologically realistic regimes include bursts of firing activity similar in nature to MFEs. Due to the delta function impulses, the network modeled by Eq. (1) can exhibit MFEs in the form of cascade-induced synchrony (Newhall et al. 2010), in which the external driving causes one excitatory neuron to spike, instantaneously increasing the voltages of other neurons, causing more excitatory (and possibly inhibitory) neurons to fire, increasing (and possibly decreasing) the voltages of other neurons, cascading through the network, and resulting in many neurons spiking at the exact same instant in time. The number of neurons participating in one such MFE is determined solely by the arrangement of all the voltages at the time one excitatory neuron fires, as well as by the four coupling strengths, SEE, SIE, SEI and SII. We therefore seek the connection between the voltage distributions of the excitatory and inhibitory populations at the time when the MFE is initiated, and the distribution of MFE magnitudes (the number of spiking excitatory and inhibitory neurons in one such event). In this section, we achieve this by calculating the distribution of MFE magnitudes in three steps: In Section 3.1 we present the condition for the cascading firing event to continue in terms of the set of excitatory \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and inhibitory \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) neuronal voltages at the time one excitatory neuron is about to fire, and compute the MFE magnitude graphically from the intersection of a line and a function of the empirical voltage CDFs. In Section 3.2 we approximate the empirical CDF by a stochastic process depending on the voltage density distributions (not the set of voltages themselves) and calculate the MFE magnitude as a first passage time problem.
Finally, in Section 3.3 we approximate the solution to the first passage time problem and obtain the distribution of MFE magnitudes in terms of the voltage distributions, numbers of neurons, and the coupling strengths.

3.1 Geometrical method

We will begin by describing the connection between the cascade mechanism responsible for an MFE and a graphical representation of its magnitude. The connection is easiest to understand if we consider only a population of excitatory neurons with a set of voltages sorted in descending order, \(\left \{v^{(j)}\right \}_{j=1}^{N_{E}}\) with \(v^{(j)} \ge v^{(i)}\) for j < i, at the time one neuron fires (\(v^{(1)} \ge V_T\)). The MFE will continue with a second neuron firing if \(v^{(2)} \in \left[V_T - S^{EE}, V_T\right)\), as all neuronal voltages are increased by SEE when the neuron with voltage v(1) fires. A third neuron will fire if \(v^{(3)} \in \left[V_T - 2S^{EE}, V_T\right)\), as the two previously firing neurons cause the voltages of the remaining neurons to increase by 2SEE. Exactly mE neurons will fire if the condition
$$ v^{(j)} \in \left[ V_T- (j-1)S^{EE},V_{T}\right) $$
(2)
is satisfied for \(j=2\dots m_{E}\) but not for j = mE + 1. If we define the empirical CDF to be
$$ F_{E}( v ) =\int_{-\infty }^{v} \frac{1}{N_{E}}\sum_{j=1}^{N_{E}}\delta ( z-v_{j}) dz , $$
(3)
then satisfying condition (2) for j = 1 to mE is equivalent to satisfying the condition
$$ V_T-v \le S^{EE}N_{E}(1-F_{E}(v)) $$
(4)
for v < VT such that NE(1 − FE(v)) ≤ mE. The magnitude, mE, can be determined by the value V∗ for which condition (4) is no longer true. This is precisely the point V∗ where the CDF FE(v) intersects the line
$$ l(v) = 1 + \frac{1}{N_{E} S^{EE}} (v - V_T) . $$
(5)
The MFE magnitude is then given by
$$m_{E}=N_{E} \left[1 - F_{E}( V^{*} ) \right]. $$
Figure 2 demonstrates this graphical interpretation. As the voltage configuration before the MFE changes, the empirical CDF FE(v) will also change, as will the intersection point V∗ and hence the MFE magnitude mE. Because the underlying distribution for the excitatory voltages can be multi-modal (cf. Fig. 2c), it is possible for V∗ to be discontinuous as a function of SEE.
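For a purely excitatory population, condition (2) translates directly into a scan over the sorted voltages. The following sketch (the helper name `mfe_magnitude_E` is ours, with VT = 1 as in the figures) counts the neurons recruited into one MFE.

```python
import numpy as np

def mfe_magnitude_E(v, S_EE, VT=1.0):
    """Count the neurons firing in one MFE of a purely excitatory population:
    sort the voltages in descending order and extend the cascade while
    v^(j) >= V_T - (j - 1) * S_EE, i.e., while condition (2) holds."""
    vs = np.sort(np.asarray(v))[::-1]
    assert vs[0] >= VT, "one neuron must be at threshold to initiate the MFE"
    m = 1
    while m < len(vs) and vs[m] >= VT - m * S_EE:
        m += 1
    return m
```

This scan is equivalent to locating the intersection point V∗ of the empirical CDF FE(v) with the line l(v) in Eq. (5) and setting mE = NE[1 − FE(V∗)].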
Fig. 2

Geometrical method for determining the MFE magnitudes in a population of excitatory neurons. (a) The histogram of NE = 10 excitatory neuron voltages when one is about to fire. (b) The intersection of the empirical CDF FE(v) (black line over cyan shaded region) and the line \(l(v)=1 + \frac {1}{N_{E} S^{EE}} (v - V_T)\) (blue dashed line) is the point at which the cascading MFE terminates: above this point there are j = 7 neurons within the interval \(\left [ V_T-jS^{EE},V_{T}\right ]\) with SEE = 0.04, and these neurons will fire in the MFE. Below this point, condition (2) is violated. The red ticks on the x-axis indicate the voltages of the neurons firing in the MFE. (c and d) Same as (a and b) but for a bimodal distribution of voltages, NE = 128, and SEE = 0.005

We extend the above geometrical method to account for the addition of an inhibitory population of neurons with voltages \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) and empirical CDF
$$F_{I}(w) = \int_{-\infty}^{w} \frac{1}{N_{I}} \sum\limits_{j=1}^{N_{I}} \delta (z-w_j) dz. $$
Inhibitory neurons impede the continuation of MFEs as each generated spike reduces the excitatory voltages by SEI and the other inhibitory voltages by SII. Therefore, the analogous condition to Eq. (4) for the cascade to continue when there are two populations is
$$\begin{array}{l} V_T-v \le S^{EE}N_{E}\left(1-F_{E}(v)\right) - S^{EI}N_{I}(1-F_{I}(w)), \\ V_T-w \le S^{IE}N_{E}\left(1-F_{E}(v)\right) - S^{II}N_{I}(1-F_{I}(w)). \end{array} $$
(6)
The magnitude of the MFE is found from the points v = V∗ and w = W∗ such that for v < V∗ and w < W∗ the conditions in Eq. (6) no longer hold. Next, we describe how to recast the failure of the two conditions in Eq. (6) as the intersection of a function of v alone with the line in Eq. (5), thereby deriving the MFE magnitude in terms of a single intersection point, as was just done for the purely excitatory population. We present a simple overview here; the details are explained in Appendix B.
We first rescale the inhibitory voltages to
$$ \hat{w}_{j}=V_T-\frac{S^{EE}}{S^{IE}}\left( V_T-w_{j}\right), $$
(7)
so that the firing of one excitatory neuron will cause an inhibitory neuron to fire if its rescaled voltage \(\hat {w}_{j}\) is in the interval \(\left[V_T - S^{EE}, V_T\right)\). This allows the difference between the two conditions in Eq. (6) to be written as the single condition appearing in Eq. (44) in Appendix B. We also define the empirical inhibitory voltage CDF,
$$ \hat{F}_{I}(w) =\lim\limits_{V\rightarrow w^{-}}\int_{-\infty}^{V}\frac{1}{N_{I}}\sum\limits_{j=1}^{N_{I}}\delta \left( z-\hat{w}_{j}\right) dz\text{,} $$
(8)
associated with the transformed voltages in Eq. (7). Next, we have two cases: If the inhibitory effect on the inhibitory neurons is larger than on the excitatory neurons, i.e., \(\delta = S^{II}S^{EE}/S^{IE} - S^{EI} > 0\), then we need to shift \(\hat {w}_{j}\) further by defining
$$ \bar{v}_{j}=v_{j}\text{, \ \ \ \ }\bar{w}_{j}=\hat{w}_{j} - \delta N_{I}\left(1- \hat{F}_{I} \left( \hat{w}_{j}\right)\right) \text{.} $$
(9)
On the other hand, if the inhibitory effect on the excitatory neurons is larger than on the inhibitory neurons, i.e., \(\delta = S^{II}S^{EE}/S^{IE} - S^{EI} < 0\), then we shift the vj by
$$ \bar{v}_{j}=v_{j} +\delta N_{I} \left(1-\hat{F}_{I}\left( \bar{v}_{j}\right)\right) \text{, \ \ \ \ } \bar{w}_{j}=\hat{w}_{j}. $$
(10)
By constructing new CDFs for the transformed variables,
$$\begin{array}{l} \bar{F}_{E}( v ) =\int_{-\infty }^{v} \frac{1}{N_{E}} \sum\limits_{j=1}^{N_{E}} \delta \left( z - \bar{v}_{j}\right) dz, \\ \bar{F}_{I}( v) =\int_{-\infty }^{v}\frac{1}{N_{I}}\sum\limits_{j=1}^{N_{I}} \delta \left( z-\bar{w}_{j}\right) dz\text{,} \end{array} $$
(11)
we have that
$$ \frac{V_T-\bar{v}}{N_ES^{EE}} \le 1-\bar{F}_{E}(\bar{v}) - \alpha \left(1-\bar{F}_{I}\left(\bar{v}\right)\right) $$
(12)
is equivalent to the two conditions in Eq. (6). The value \(\alpha = \min \left (\frac {S^{II}}{S^{IE}},\frac {S^{EI}}{S^{EE}}\right )\frac {N_{I}}{N_{E}} \) is obtained by considering that δ ≥ 0 implies \(\frac {S^{II}}{S^{IE}}>\frac {S^{EI}}{S^{EE}}\), and δ < 0 implies \(\frac {S^{II}}{S^{IE}}<\frac {S^{EI}}{S^{EE}}\). Condition (12) is equivalent to the conditions (50) and (56) derived in Appendix B for the two different cases of δ.
Interpreting condition (12) failing for the first time as an intersection point, we determine the magnitude of the MFE as
$$ m_{Q}=N_{Q}\left[ 1- \bar{F}_{Q}\left(V^{*} \right) \right], $$
(13)
where V∗ is the intersection point of the new function
$$ G( v ) = \bar{F}_{E}( v ) + \alpha \left[ 1-\bar{F}_{I}( v ) \right], $$
(14)
and the line l(v) in Eq. (5). Notice that each initial set of specific voltages \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) yields exactly one MFE magnitude. We can obtain the distribution of MFE magnitudes by repeated sampling of the sets of voltages \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) from some known densities ρE(v) and ρI(w), respectively, and computing the MFE magnitude using the above algorithm.
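The repeated-sampling procedure just described can be sketched as follows. For clarity this sketch evaluates each sampled configuration by direct cascade counting, which yields the same magnitude as the geometric intersection; the uniform densities standing in for ρE and ρI, and all parameter values, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def cascade_magnitudes(v, w, S_EE, S_EI, S_IE, S_II, VT=1.0):
    """Count (m_E, m_I) for one MFE by direct cascade evaluation (equivalent
    to the geometric intersection): repeatedly fire every neuron pushed above
    threshold and apply the four instantaneous kicks until no new neuron fires."""
    v, w = v.astype(float), w.astype(float)
    firedE = np.zeros(len(v), dtype=bool)
    firedI = np.zeros(len(w), dtype=bool)
    while True:
        newE = (v >= VT) & ~firedE
        newI = (w >= VT) & ~firedI
        nE, nI = int(newE.sum()), int(newI.sum())
        if nE == 0 and nI == 0:
            return int(firedE.sum()), int(firedI.sum())
        firedE |= newE
        firedI |= newI
        v += nE * S_EE - nI * S_EI      # excitation raises, inhibition lowers
        w += nE * S_IE - nI * S_II

# Repeated sampling: draw voltages from assumed densities (uniform here, for
# illustration), pin one excitatory neuron at threshold, and tabulate m_E.
NE, NI = 128, 128
S = 0.009
sizes = []
for _ in range(500):
    v = rng.uniform(0.0, 1.0, NE)
    w = rng.uniform(0.0, 1.0, NI)
    v[0] = 1.0                          # the initiating excitatory neuron
    mE, _ = cascade_magnitudes(v, w, S, S, S, S)
    sizes.append(mE)
```

The histogram of `sizes` approximates the distribution of MFE magnitudes for the chosen densities and coupling strengths.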

3.2 First passage time formulation

Having derived a method above for obtaining the magnitude of an MFE in terms of the empirical CDFs of the excitatory and inhibitory populations, we are now ready to reformulate the computation of the MFE magnitude as a first passage time problem. We take advantage of Donsker’s theorem (Donsker 1952) to approximate the empirical CDFs in terms of rescaled Brownian bridges. In this framework, in which “time” is the voltage difference from the threshold voltage VT, the intersection point of interest, and thus the MFE magnitude, is given by the first passage time of a stochastic process to a line.

Here, we first derive the theoretical CDFs for the transformed voltages \(\bar {v}_{j}\) and \(\bar {w}_{j}\) defined in the previous section in terms of “time” starting from the original probability density functions (PDFs) for the voltages of the excitatory and inhibitory populations. (Recall from the end of Section 2 we assume that we have access to these distributions at the time an excitatory neuron fires.) Then, from the theoretical PDFs we can write two stochastic differential equations (SDEs) that approximate the possible empirical CDFs. We obtain the MFE magnitude by simulating the SDEs and determining the first passage time. In the next section, we complete the connection between the population voltage distributions and the distribution of MFE magnitudes by analytically approximating the first passage time density.

First, in the framework of time defined as t = VT − v, we derive the theoretical PDFs for the transformed voltages defined in either Eq. (9) or Eq. (10), starting from the original theoretical PDFs for the excitatory and inhibitory voltages, ρE(v) and ρI(w), respectively. Switching to t and using the transformation in Eq. (7), we obtain the transformed PDFs pE(s) = ρE(VT − s) and \( p_{I}\left (\hat {t}\right ) = \frac {S^{IE}}{S^{EE}} \rho _{I} \left ( V_{T} - \frac {S^{IE}}{S^{EE}} \hat {t} \right ) \). The equivalent formulas to Eqs. (9) and (10) for transforming the variables s and \(\hat {t}\) are
$$\bar{s} = s,\qquad \bar{t}=\hat{t} + \delta N_{I} \hat{f}_{I}\left(\hat{t}\right), \quad \text{ if } \delta \ge 0,$$
(15a)
$$\bar{s} =s - \delta N_{I} \hat{f}_{I}\left( \bar{s}\right), \qquad \bar{t} =\hat{t}, \quad\text{ if } \delta < 0, $$
(15b)
where \(\delta = S^{II}S^{EE}/S^{IE} - S^{EI}\) as before and where we have defined \(\hat {f}_{I}(t) = \int _{0}^{t} p_{I}(\tau ) d\tau \).
We now consider the densities for these new variables \(\bar {s}\) and \(\bar {t}\) for two cases. First, if δ ≥ 0, then we can approximate the distributions of \(\bar {t}\) and \(\bar {s}\) as
$$ \bar{p}_{I} \left( \bar{t}\right) = \frac{p_{I} \left( g^{-1}\left( \bar{t}\right) \right) }{g^{\prime }\left( g^{-1}\left( \bar{t}\right) \right) } \; \textrm{ and } \; \bar{p}_{E}(\bar{s}) = p_{E}(\bar{s}) $$
(16)
where we have defined \(g( t) =t + \delta N_{I} \hat {f}_{I} ( t)\), and g′(t) is the derivative with respect to t. Similarly, for the case when δ < 0,
$$ \bar{p}_{E}(\bar{s}) = p_{E}( g(\bar{s})) g'(\bar{s})\; \textrm{ and } \; \bar{p}_{I}(\bar{t}) = p_{I}(\bar{t}). $$
(17)
Finally, we define the theoretical CDFs of these transformed densities by
$$ \bar{f}_{E}(t) = \int_{0}^{t} \bar{p}_{E}(\tau) d\tau \; \textrm{ and } \; \bar{f}_{I}(t) = \int_{0}^{t} \bar{p}_{I}(\tau) d\tau . $$
(18)
One example of transforming the PDFs is shown in Fig. 3. The original PDFs ρE(v) and ρI(w) are shown in Fig. 3a while the transformed PDFs in Eq. (16) are shown in Fig. 3b for one choice of coupling strengths.
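The change of variables in Eqs. (15a) and (16) can be carried out numerically on a grid. In the sketch below, ρI is an assumed Gaussian bump, the coupling strengths are those of Fig. 3 (for which δ ≥ 0), and δ is computed as \(S^{II}S^{EE}/S^{IE} - S^{EI}\); since \(\bar{p}_I(g(t))\,g'(t) = p_I(t)\), the transformation preserves probability mass, which the test below checks.

```python
import numpy as np

# Coupling strengths from Fig. 3; the density rho_I and the grid are illustrative.
S_EE, S_IE, S_EI, S_II = 0.009, 0.0072, 0.0057, 0.0072
N_I = 128
VT = 1.0
delta = S_II * S_EE / S_IE - S_EI          # here delta > 0, the Eq. (16) case

t_hat = np.linspace(0.0, 1.0, 2001)        # grid in the transformed "time" t_hat

def rho_I(w):
    # assumed inhibitory voltage density: a Gaussian bump centered at w = 0.6
    return np.exp(-0.5 * ((w - 0.6) / 0.15) ** 2) / (0.15 * np.sqrt(2.0 * np.pi))

# p_I(t_hat) = (S_IE/S_EE) rho_I(V_T - (S_IE/S_EE) t_hat), the rescaled density
p_I = (S_IE / S_EE) * rho_I(VT - (S_IE / S_EE) * t_hat)
f_I = np.cumsum(p_I) * (t_hat[1] - t_hat[0])   # hat f_I, the CDF of p_I

g = t_hat + delta * N_I * f_I              # g(t) = t + delta * N_I * hat f_I(t)
gp = np.gradient(g, t_hat)                 # g'(t) > 0, so g is invertible
p_bar_I = p_I / gp                         # Eq. (16), evaluated at t_bar = g(t_hat)
```

Plotting `p_bar_I` against `g` reproduces the qualitative stretching of the inhibitory density seen in Fig. 3b.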
Fig. 3

(a) The original densities ρE(v) and ρI(v). (b) The transformed densities \(\bar {p}_{E}(t)\) and \(\bar {p}_{I}(t)\) defined in Eq. (16) for the case when SEE = 0.009, SIE = 0.0072, SEI = 0.0057, SII = 0.0072 and NE = NI = 128. (c) A sample trajectory of the SDE (black line) for the densities shown in (b) intersecting the surface 𝒜 in the 3D space (x, y, t)

What we have so far is a way to transform from the PDFs of the excitatory and inhibitory population voltages to the theoretical CDFs of the transformed voltages. Note that the MFE is determined by the intersection of a combination of the empirical CDFs and the line in Eq. (5). We now use Donsker’s Theorem (Donsker 1952) to approximate the possible empirical distributions, \(\tilde {F}_{Q}(t)\), selected at random from the theoretical distributions in Eq. (18), as
$$ \tilde{F}_{Q}(t) \approx \bar{f}_{Q}(t) + \frac{1}{\sqrt{N_{Q}}} B\left(\bar{f}_{Q}(t)\right) $$
(19)
for Q ∈ {E, I}, where B( · ) is a standard Brownian bridge on the unit interval starting and ending at zero. The approximation (19) will be valid when NE, NI ≫ 1. A single stochastic trajectory, \(\phi _{Q}(t) = \frac {1}{\sqrt {N_{Q}}} B\left (\bar {f}_{Q}(t)\right )\), solves the stochastic differential equation
$$ d\phi_{Q}(t)=-\frac{\phi_{Q}( t)\bar{p}_{Q}( t)}{1-\bar{f}_{Q}( t) } dt +\sqrt{\frac{\bar{p}_{Q}( t) }{N_{Q}}}dW_{Q}(t) $$
(20)
with ϕQ(0) = 0, and where dWQ(t) is standard white noise in time.
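A single realization of Eq. (20) can be generated with an Euler–Maruyama step. In the sketch below we take \(\bar{p}_Q \equiv 1\) on [0, 1] (so \(\bar{f}_Q(t) = t\)), in which case ϕQ is exactly a Brownian bridge scaled by \(1/\sqrt{N_Q}\); the step size, horizon, and NQ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

N_Q = 128
n = 1000
T = 0.999                  # stop just short of t = 1, where 1 - f_bar(t) vanishes
dt = T / n

t, phi = 0.0, 0.0
path = [phi]
for _ in range(n):
    p_bar, f_bar = 1.0, t                      # uniform-density assumption
    drift = -phi * p_bar / (1.0 - f_bar)       # pulls the bridge back toward 0
    phi += drift * dt + np.sqrt(p_bar / N_Q * dt) * rng.standard_normal()
    t += dt
    path.append(phi)
```

The drift term forces the trajectory back to zero as t approaches 1, reproducing the bridge property B(0) = B(1) = 0 required by the Donsker approximation (19).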
We must also know how many neurons initiate the MFE, as the population-based description does not include any neurons with voltages over the threshold. If precisely k excitatory neurons fire to initiate the MFE, then the MFE magnitudes are derived from the first intersection point of
$$\bar{G}( t) =\gamma ( \bar{f}_{E}(t) + \phi_{E}(t)) - \alpha ( \bar{f}_{I}(t) + \phi_{I}(t) ) + \frac{k}{N_{E}} $$
(21)
with the line
$$ \bar{l}(t) = \frac{1}{N_{E} S^{EE}} t ,$$
(22)
where γ = 1 − k / NE and \(\alpha =\min \left ( \frac {S^{II}}{S^{IE}},\frac {S^{EI}}{S^{EE}}\right ) \frac {N_{I}}{N_{E}}\), as before. Note that \(\bar {G}(t)\) is different from the direct transformation of G(v) (Eq. 14) to either set of variables defined in Eq. (15) in that \(\bar {G}(t)\) also takes into account the k excitatory neurons that initiate the MFE.
Finally, we determine the magnitudes of the MFE by finding the first point in time when \(\bar {G}(t)\) in Eq. (21) crosses the line l̄(t) in Eq. (22), or equivalently in three-dimensional space, the first time, t∗, that the joint stochastic process (ϕE(t), ϕI(t), t) exits the region bounded by the surface 𝒜 given by the algebraic constraint that \(\bar {G}(t) = \bar {l}(t)\) for points (x, y, t):
$$ - \gamma ( x +\bar{f}_{E}(t)) +\alpha ( y + \bar{f}_{I}(t)) + \frac{1}{N_{E} S^{EE}} t =\frac{k}{N_{E}} . $$
(23)
Figure 3c shows an example trajectory crossing this surface. Given this intersection point, t∗, the magnitudes of the MFE are
$$\begin{array}{l} m_{E} = k+\left[ N_{E}-k\right] \left(\bar{f}_{E}(t^{*}) + \phi_{E}(t^{*})\right),\\ m_{I} =N_{I} \left(\bar{f}_{I}(t^{*}) + \phi_{I}(t^{*})\right) . \end{array} $$
(24)

Using the above formulation, we can numerically determine the distribution of MFE magnitudes as follows: First, take the theoretical PDFs ρE(v) and ρI(w) and transform them to \(\bar {p}_{E}(t)\) and \(\bar {p}_{I}(t)\) using either Eq. (16) or Eq. (17) and then calculate the CDFs \(\bar {f}_{E}(t)\) and \(\bar {f}_{I}(t)\) defined in Eq. (18). Next, using these transformed PDFs and CDFs, simulate stochastic trajectories using Eq. (20), and determine the intersection point of (ϕE(t), ϕI(t), t) with the surface \(\mathcal {A}\) in Eq. (23). Last, compute the MFE magnitudes in Eq. (24). The distribution of MFE magnitudes is obtained by repeatedly simulating the stochastic trajectories and determining the intersection points. Next, in Section 3.3, we devote ourselves to deriving an analytical formula for this MFE magnitude distribution.
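The Monte Carlo procedure just outlined can be sketched end to end, again under the simplifying assumption of uniform transformed densities (\(\bar{f}_E(t) = \bar{f}_I(t) = t\), \(\bar{p}_Q = 1\)); the coupling strengths follow Fig. 3, and k = 1 initiating neuron is assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

NE = NI = 128
S_EE, S_IE, S_EI, S_II = 0.009, 0.0072, 0.0057, 0.0072   # Fig. 3 values
alpha = min(S_II / S_IE, S_EI / S_EE) * NI / NE
k = 1                       # number of initiating excitatory neurons (assumed)
gamma = 1.0 - k / NE

def sample_mfe():
    """One draw of (m_E, m_I): integrate Eq. (20) for both populations under the
    uniform-density assumption until G_bar(t) first meets l_bar(t) = t/(NE*S_EE)."""
    dt = 1e-3
    t, phiE, phiI = 0.0, 0.0, 0.0
    while t < 1.0 - dt:
        G = gamma * (t + phiE) - alpha * (t + phiI) + k / NE   # Eq. (21)
        if G <= t / (NE * S_EE):                               # crossed l_bar, Eq. (22)
            break
        phiE += -phiE / (1.0 - t) * dt + np.sqrt(dt / NE) * rng.standard_normal()
        phiI += -phiI / (1.0 - t) * dt + np.sqrt(dt / NI) * rng.standard_normal()
        t += dt
    mE = k + (NE - k) * (t + phiE)      # Eq. (24)
    mI = NI * (t + phiI)
    return mE, mI

sizes = [sample_mfe()[0] for _ in range(200)]
```

Repeating `sample_mfe` builds up the distribution of MFE magnitudes; Section 3.3 replaces this sampling loop with an analytical approximation of the same first passage time density.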

3.3 Analytical formula for the distribution of MFE magnitudes

In Section 3.2, we reduced the problem of obtaining the MFE magnitude to one of finding the first exit time (or first passage time) of the joint stochastic process (ϕE(t), ϕI(t), t) out of the region bounded by the surface \(\mathcal {A}\). This can also be thought of as a 2D stochastic process (ϕE(t), ϕI(t)) hitting a moving boundary. Durbin’s method of approximating first passage time distributions of 1D stochastic trajectories to moving boundaries (Durbin 1985; Durbin and Williams 1992) depends on the distance between the surface and the starting point of the trajectory and the distribution of the stochastic trajectory as a function of time. We employ a similar technique by first transforming the 2D stochastic process to an isotropic 2D stochastic process with the same diffusion in any direction. This allows us to decompose the process into the directions perpendicular and parallel to the surface \(\mathcal {A}\). The problem reduces to a 1D passage time of the perpendicular component to the boundary. Here, we briefly describe how to obtain the two term approximation of this first passage time density; details appear in Appendix C.

We begin by conditioning on some additional information to determine the first passage time density pT(t) of the process (ϕE(t), ϕI(t), t) to the surface \(\mathcal {A}\) in Eq. (23). First we include the location a = (aE, aI) on the boundary that the process hits, and write pT(t) in terms of the joint distribution p(t, a) for first hitting the point a at time t as
$$ p_{T}(t) = \int_{\mathcal{A}|_{t}} p(t,\textbf{a}) d\mathbf{a}, $$
(25)
where the integration is over \(\mathcal {A}|_{t}\), i.e., all points on the surface at time t. Then, we condition on the process being at the point x at some intermediate time s. In order to have a first passage time t and hit a, the trajectory must first get to x at time s without hitting the boundary \(\mathcal {A}\) and then proceed to have a first passage time t to the point a. In terms of distributions, we may write
$$ p(t,\textbf{a}) = \int_{\Omega(s)} p(t,\mathbf a | s,\mathbf x)g(s,\mathbf x) d\mathbf x, $$
(26)
where p(t, a | s, x) is the first passage time density to a at time t given the trajectory starts at the point x at time s, g(s, x) is the density of the process at x at time s given that it did not cross \(\mathcal {A}\) previously, and the integration is over Ω(s), the set of all points at time s in the (ϕE, ϕI) plane enclosed by the boundary \(\partial \Omega = \mathcal {A}\).
To derive an expression for p(t, a | s, x) we consider two independent processes: one perpendicular to the boundary \(\mathcal {A}\), which controls when the boundary is hit, and one parallel to the boundary, which controls where the boundary is hit. This decomposition is possible if we take s sufficiently close to t (i.e., t − s ≪ 1). Over the small time interval (s, t) we approximate the joint process (ϕE, ϕI) by a two-component Brownian motion \(\left (\hat {\phi }_{E},\hat {\phi }_{I}\right )\) with constant drift and diffusion coefficients, obtained by solving Eq. (20) with coefficients frozen at time s, and consider hitting the boundary \(\hat {\mathcal A}_{a(t)}\), the plane tangent to the surface \(\mathcal {A}\) at the point a at time t. To decompose \(\left (\hat {\phi }_{E},\hat {\phi }_{I}\right )\) into components perpendicular and parallel to \(\hat {\mathcal A}_{a(t)}\), we must first transform it into isotropic Brownian motion (same diffusion in all directions) with the transformation matrix
$$\beta =\left(\begin{array}{cc} \sqrt{\frac{\bar{p}_{E}(s)}{N_{E}}} & 0 \\ 0 & \sqrt{\frac{\bar{p}_{I}(s)}{N_{I}}} \end{array}\right), $$
and then decompose it into the directions perpendicular to \(\hat {\mathcal A}_{a(t)}\) given by vector
$$ n_{\perp} = \frac{1}{\eta}[-\gamma,\alpha] $$
and parallel to \(\hat {\mathcal A}_{a(t)}\) given by vector
$$ n_{\parallel} = \frac{1}{\eta}[\alpha,\gamma], $$
where \(\eta = \sqrt{\gamma^{2}+\alpha^{2}}\) normalizes these vectors, and
$$ \left[ -\gamma \text{, }\alpha \text{, }\alpha \bar{p}_{I}( t) -\gamma \bar{p}_{E}( t) +\frac{1}{N_{E}S^{EE}}\right] $$
(27)
is the three-component normal of \(\hat {\mathcal A}_{a(t)}\). Now, we approximate p(t, a | s, x) as
$$\begin{array}{rll} p( t,\mathbf{a}|s,\mathbf{x}) & \approx & f( t,\mathbf{a}|s,\mathbf{x}) \\ & \times & \frac{\widehat{\mathcal{A}}_{a( t) }|_{s}-\mathbf{x}}{t-s} \cdot n_{\perp }\frac{\left\vert \beta^{-1}n_{\parallel }\right\vert }{\left\vert \beta n_{\perp }\right\vert }\frac{\left\vert \beta \right\vert }{\left\vert n_{\perp }\right\vert \left\vert n_{\parallel }\right\vert }, \end{array} $$
(28)
following Durbin: the density f(t, a | s, x) of the process starting at x at time s being at the boundary point a at time t is adjusted by the probability for the perpendicular component of the isotropic Brownian motion to hit the boundary and for the parallel component to reach the point a. In Eq. (28), \(\widehat {\mathcal {A}}_{a( t) }|_{s}\) denotes the points on the tangent boundary \(\hat {\mathcal {A}}_{a(t)}\) at time s. Using approximation (28) in the limit as s → t⁻, Eq. (26) remains exact and can be expressed as
$$\begin{array}{rll} p(t,\mathbf{a}) &=&\lim\limits_{s\to t^-} \frac{f( t,\mathbf{a})}{t-s} \\ &&\times \int_{\partial\Omega(s)}\left(\widehat{\mathcal{A}}_{a(t)}|_{s}-\mathbf{x} \right) \cdot \mathbf u \zeta(t) g(s,\mathbf x) d\mathbf x, \end{array} $$
(29)
where we have defined \(\mathbf u = [-\gamma , \alpha ]/\sqrt {\gamma ^2+\alpha ^{2}}\), and \(\zeta (t) = \sqrt {1+ 1/\nu ^{2}_{n}(t)}\) so that
$$\mathbf u \zeta(t) = n_{\perp }\frac{\left\vert \beta^{-1}n_{\parallel }\right\vert }{\left\vert \beta n_{\perp }\right\vert }\frac{\left\vert \beta \right\vert }{\left\vert n_{\perp }\right\vert \left\vert n_{\parallel }\right\vert } $$
and the quantity
$$ \nu_{n}( t) =\frac{\sqrt{\gamma^{2}+\alpha^{2}}}{ \alpha \bar{p} _{I}( t) -\gamma \bar{p}_{E}( t) +1/\left(N_{E}S^{EE}\right)} $$
(30)
can be thought of as the speed at which the boundary propagates in the normal direction.
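Equation (30) is easy to evaluate numerically. The parameter values below are illustrative only (loosely patterned on the k = 2, N_E = N_I = 300, S^EE = 0.009 case of Fig. 4a), not taken from the paper's computations:

```python
import numpy as np

def boundary_normal_speed(alpha, gamma, pE, pI, NE, SEE):
    """nu_n(t) from Eq. (30): the speed at which the boundary A propagates
    in its normal direction, where pE, pI are the transformed densities
    p̄_E(t), p̄_I(t) evaluated at time t."""
    return (np.sqrt(gamma**2 + alpha**2)
            / (alpha * pI - gamma * pE + 1.0 / (NE * SEE)))

# Illustrative (hypothetical) values: alpha = 1, gamma = 1 - k/N_E with
# k = 2, and placeholder densities p̄_E(t) = 0.8, p̄_I(t) = 0.9.
nu = boundary_normal_speed(alpha=1.0, gamma=1.0 - 2.0 / 300.0,
                           pE=0.8, pI=0.9, NE=300, SEE=0.009)
# zeta(t) = sqrt(1 + 1/nu_n(t)^2), as used in Eq. (29).
zeta = np.sqrt(1.0 + 1.0 / nu**2)
```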
So far, we have described how to obtain p(t, a) in Eq. (29) as an adjustment to the density f(t, a) of the process at (t, a). By writing Eq. (29) as
$$\begin{array}{rll} p(t,\mathbf a) &=& \lim\limits_{s\to t^-} \frac{f(t,\mathbf{a})}{t-s}\, \mathbb{E}\left[\left(\widehat{\mathcal{A}}_{a(t) }|_{s}-\mathbf{x} \right) \cdot \mathbf u\, \zeta(t) \,\middle|\, \textrm{cross } (t,\mathbf a)\right] \\ &&-\, \lim\limits_{s\to t^-} \frac{f( t,\mathbf{a}) }{t-s}\, \mathbb{E}\left[\left(\widehat{\mathcal{A}}_{a(t) }|_{s}-\mathbf{x} \right) \cdot \mathbf u\, \zeta(t) \,\middle|\, \textrm{cross } (t,\mathbf a),\ \textrm{first crossing } (r,\mathbf b) \textrm{ where } r<t\right] \end{array} $$
(31)
and defining q(t, a) = p(t, a) / f(t, a) we arrive at an integral equation for q(t, a). We approximate its solution with two terms and obtain
$$ p(t,\mathbf a) \approx \left(q_{0}(t,\mathbf a) - q_{1}(t,\mathbf a)\right)f(t,\mathbf{a}), $$
(32)
for the density of the first passage time to the point a(t) = (aE(t), aI(t)) on the surface \(\mathcal {A}\), with
$$ q_{0}( t,\mathbf{a}) = \sqrt{1+\nu^{2}_{n}(t)}- \frac{\zeta(t)}{\sqrt{\gamma^{2}+\alpha^{2}}} \left[ \alpha \frac{a_{I}(t)\bar{p}_{I}( t) }{\bar{f}_{I}( t) } -\gamma \frac{a_{E}(t)\bar{p}_{E}( t) }{\bar{f}_{E}( t) } \right] $$
and with
$$\begin{array}{rll} q_{1}( t,\mathbf{a}) &=& \int_{0}^{t}\int_{{\partial \Omega }( r) }q_{0}( r,\mathbf{b}) f( r,\mathbf{b}|t,\mathbf{a}) \\ &&\times \left( - \sqrt{1+\nu^{2}_{n}(t)} + \left[ \alpha \frac{\left( a_{I}(t)-b_{I}( r) \right) \bar{p}_{I}( t) }{ \bar{f}_{I}( t) - \bar{f}_{I}( r) } - \gamma \frac{\left( a_{E}(t)-b_{E}( r) \right) \bar{p}_{E}( t) }{ \bar{f}_{E}( t) - \bar{f}_{E}( r) }\right] \frac{\zeta(t)}{\sqrt{\gamma^{2}+\alpha^{2}}} \right) d\mathbf{b}\,dr, \end{array} $$
where νn(t) is given in Eq. (30), \(\alpha = \frac {N_{I}}{N_{E}}\min \left \{\frac {S^{II}}{S^{IE}},\frac {S^{EI}}{S^{EE}}\right \} \), and γ = 1 − k / NE as before. The point b(r) = (bE(r), bI(r)) is another point on the surface \(\partial \Omega =\mathcal {A}\) at time r ≤ t, i.e., both a(t) and b(r) satisfy the condition to lie on the surface given in Eq. (23). The density of the process at the point a at time t is
$$ f( t,\mathbf{a}) = \frac{1}{2\pi\sqrt{\sigma_{E}(t) \sigma_{I}(t) }} \exp \left( - \frac{a_{E}^{2}( t) }{2\sigma_{E}(t) }- \frac{a_{I}^{2}( t) }{2\sigma_{I}(t) }\right) $$
(33)
where \(\sigma _{E}(t) = \bar {f}_{E}(t) \left( 1-\bar {f}_{E}(t) \right) / (N_E-k) \) and \(\sigma _{I}(t) = \bar {f}_{I}( t) \left( 1-\bar {f}_{I}( t) \right) /N_{I}\), and the density of the process at the point b at time r, given that it is at the point a at time t and did not cross the surface \(\mathcal {A}\) before time r, is
$$\begin{array}{rll} f( r,\mathbf{b} | t,\mathbf{a}) &=& \frac{\bar{f}_{E}(t)\bar{f}_{I}(t)}{2\pi\sqrt{ \sigma_{E}(r,t) \sigma_{I}(r,t)}} \notag\\ &&\times \exp \left( - \frac{\left[b_{E}\bar{f}_{E}( t) -a_{E} \bar{f}_{E}(r)\right]^{2} }{2 \sigma_{E}(r,t) }\notag\right.\\ &&\hspace*{28pt}\left.- \frac{\left[b_{I}\bar{f}_{I}(t) -a_{I}\bar{f}_{I}(r)\right]^{2}}{2\sigma_{I}(r,t) }\right) \end{array} $$
(34)
where \(\sigma _{E}(r,t) = \bar {f}_{E}(r) \left( \bar {f}_{E}(t)-\bar {f}_{E}(r) \right)/(N_E-k)\) and \(\sigma _{I}(r,t) = \bar {f}_{I}( r) \left( \bar {f}_{I}(t)-\bar {f}_{I}( r) \right)/N_{I}\). These two distributions are derived in Appendix D.
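Equations (33) and (34) translate directly into code. The numerical values used below are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

def f_marginal(aE, aI, fE, fI, NE, NI, k):
    """Eq. (33): Gaussian density of the process at a = (aE, aI) at time t,
    where fE = f̄_E(t) and fI = f̄_I(t) are the transformed CDFs at t."""
    sE = fE * (1.0 - fE) / (NE - k)
    sI = fI * (1.0 - fI) / NI
    return np.exp(-aE**2 / (2 * sE) - aI**2 / (2 * sI)) \
        / (2 * np.pi * np.sqrt(sE * sI))

def f_conditional(bE, bI, fEr, fIr, aE, aI, fEt, fIt, NE, NI, k):
    """Eq. (34): density at b = (bE, bI) at the earlier time r, given the
    process is at a = (aE, aI) at time t; fEr = f̄_E(r), fEt = f̄_E(t), etc."""
    sE = fEr * (fEt - fEr) / (NE - k)
    sI = fIr * (fIt - fIr) / NI
    return (fEt * fIt / (2 * np.pi * np.sqrt(sE * sI))
            * np.exp(-(bE * fEt - aE * fEr)**2 / (2 * sE)
                     - (bI * fIt - aI * fIr)**2 / (2 * sI)))

# Illustrative evaluation: both densities peak where the Gaussian
# exponents vanish and decay away from the peak.
f0 = f_marginal(0.0, 0.0, 0.5, 0.5, NE=300, NI=300, k=2)
f1 = f_marginal(0.05, 0.05, 0.5, 0.5, NE=300, NI=300, k=2)
c0 = f_conditional(0.0, 0.0, 0.2, 0.2, 0.0, 0.0, 0.5, 0.5, NE=300, NI=300, k=2)
c1 = f_conditional(0.05, 0.05, 0.2, 0.2, 0.0, 0.0, 0.5, 0.5, NE=300, NI=300, k=2)
```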
Recall that these formulas all involve the densities of the transformed variables \(\bar {s}\) and \(\bar {t}\) defined in Section 3.2. From the original voltage densities ρE(v) and ρI(v) at the time the MFE is initiated, the transformed PDFs \(\bar {\rho }_{Q}(t)\) must be calculated using Eq. (16) if δ ≥ 0 (recall \(\delta = S^{II}S^{EE} - S^{IE}S^{EI}\)) or Eq. (17) if δ < 0; the transformed CDFs \(\bar {f}_{Q}(t)\) then follow from Eq. (18). The final step is to select t∗ from the distribution in Eq. (25), with p(t, a) approximated by Eq. (32), and compute the MFE magnitudes using
$$\begin{array}{l} m_{E} = k + (N_E-k)\left(a_{E}^{*}+\bar{f}_{E}(t^{*})\right)\\ m_{I} =N_{I}\bar{f}_{I}(t^{*}), \end{array} $$
(35)
where \(a_{E}^{*}\) is drawn from the density p(t∗, a) after fixing t∗.
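Equation (35) then converts a sampled pair (t∗, \(a_{E}^{*}\)) into event sizes. In the sketch below the uniform CDFs are placeholders for the transformed CDFs \(\bar{f}_{E}\) and \(\bar{f}_{I}\), which in practice come from Eq. (18):

```python
def mfe_magnitudes(t_star, aE_star, fE, fI, NE, NI, k):
    """Eq. (35): excitatory and inhibitory MFE magnitudes from a sampled
    exit time t* and boundary coordinate a_E*; fE and fI are callables
    standing in for the transformed CDFs f̄_E and f̄_I."""
    mE = k + (NE - k) * (aE_star + fE(t_star))
    mI = NI * fI(t_star)
    return mE, mI

# Placeholder CDFs (uniform on [0, 1]) with k = 2 initiating neurons:
# m_E = 2 + 298 * 0.3 = 91.4 and m_I = 300 * 0.3 = 90 neurons.
mE, mI = mfe_magnitudes(0.3, 0.0, lambda t: t, lambda t: t,
                        NE=300, NI=300, k=2)
```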

The MFE magnitude density is computed by selecting many values of t∗ from the distribution in Eq. (25) and then determining the magnitude of the MFE from Eq. (35). As we discuss in the next section, the density computed in this manner is in excellent agreement with the density computed using an appropriate method for the original I&F dynamics, such as the one presented in Appendix A or the one by Rangan and Young (2013a).

4 Validity of approximations

In Section 3 we discussed three methods for determining the density of MFE magnitudes in terms of the original voltage densities of the excitatory and inhibitory neurons. We now discuss the error introduced by each method, and present the numerically obtained distributions in order to examine how the error propagates through the entire procedure. In the end, we find excellent agreement between the distribution obtained using the I&F dynamics (as described by Rangan and Young (2013a)) and the single formula (25) obtained in Section 3.3.

First, we point out that resolving an MFE with the I&F dynamics is indistinguishable from resolving it with the geometric method of Section 3.1 using the true empirical CDFs, when both use the same set of voltages. Their respective distributions will vary from the true MFE distribution only due to statistical error. This can be seen in Fig. 4, where the MFE magnitude distribution is shown in terms of the intersection point in the transformed variable t. The red solid line corresponding to the intersection points obtained from the I&F dynamics is nearly indistinguishable from the black dot-dash line corresponding to the geometrically obtained intersection points in all four parameter cases.
Fig. 4

Distribution of the intersection point t for the three methods in Section 3, starting from the same voltage distributions ρE(v) and ρI(v) shown in Fig. 3a and initiated by k = 2 neurons (the excitatory and inhibitory MFE magnitudes are given in Eq. (35) in terms of t∗, a value selected from the shown distribution). (a) \(S^{EE} = S^{II} = S^{IE} = S^{EI} = 0.009\) and NE = NI = 300; (b) \(S^{EE} = S^{II} = S^{IE} = S^{EI} = 0.008\) and NE = NI = 2000; (c) \(S^{EE} = S^{IE} = 0.009\), \(S^{II} = S^{EI} = 0.0072\), NE = NI = 300; (d) \(S^{EE} = 0.009\), \(S^{EI} = 0.0072\), \(S^{IE} = S^{II} = 0.0081\), NE = NI = 128

The first approximation occurs in Section 3.2, when the empirical CDFs are replaced by approximations constructed by adding white-noise-driven stochastic processes to the theoretical CDFs. By Donsker’s theorem (Donsker 1952), the fluctuations of an empirical CDF about the theoretical CDF decay in proportion to the inverse square root of the number of points forming the empirical distribution, i.e., the number of neurons NE or NI in this case. The upper panels in Fig. 4 display results with NE = NI = 300 and 2000, respectively.
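This \(1/\sqrt{N}\) scaling is easy to observe numerically. The sketch below measures the mean sup-distance between the empirical and true CDFs of N uniform samples, evaluating the distance at the sample points (a standard proxy for the Kolmogorov–Smirnov statistic); quadrupling N should roughly halve the error:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_cdf_error(N, trials=200):
    """Mean sup-distance between the empirical CDF of N uniform samples
    and the true CDF F(x) = x, averaged over independent trials."""
    errs = []
    for _ in range(trials):
        x = np.sort(rng.uniform(size=N))
        F_emp = np.arange(1, N + 1) / N      # empirical CDF at the samples
        errs.append(np.max(np.abs(F_emp - x)))
    return np.mean(errs)

# Donsker scaling: error ~ 1/sqrt(N), so N -> 4N gives ratio ~ 2.
e300, e1200 = mean_cdf_error(300), mean_cdf_error(1200)
```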

The most pronounced impact of the different fluctuations in the empirical CDFs can be seen for small MFE magnitudes (small values of t), when comparing the I&F dynamics (red solid line) and the intersection of the SDE-generated empirical CDFs (blue dashed line) in Fig. 4. These early exit times (the peaks close to t = 0 in Fig. 4) are governed by the fluctuations of the empirical CDFs, and thus we expect a discrepancy here, in contrast to the later exit times (the peaks near t = 0.3 in Fig. 4), which are governed by the mean of the process (i.e., the theoretical CDFs themselves). As we increase the number k of neurons initiating the MFE, the accuracy of the distribution created from the approximate empirical CDFs improves, since the boundary starts further away from the initial conditions, making early crossings less likely. In effect, we remove the part of the distribution that cannot be accurately resolved by the approximate empirical CDFs.

The other approximation we introduce is the analytical formula itself, which approximates the first passage time distribution of the Brownian bridge process to the moving boundary derived in Section 3.3. The distribution in Eq. (25) is constructed from the two-term approximation (32), which in turn is derived in a manner similar to that of Durbin (Durbin and Williams 1992); we therefore expect similar convergence. The error of this approximation can be seen in Fig. 4 by comparing the simulated SDE exit time distribution (blue dashed line) to the analytical formula (25) (green dashed line).

Finally, we investigate the validity of the analytical formula in approximating the true distribution of MFE magnitudes. For the parameters used in Fig. 4c and d, we compute the MFE magnitudes according to Eq. (35) in order to construct the MFE magnitude distributions. These distributions, for both the number of excitatory and the number of inhibitory neurons participating in the MFE, agree well with the corresponding distributions obtained by resolving the original I&F dynamics (according to the method discussed in Rangan and Young (2013a)), as shown in Fig. 5. Thus the analytical formula (25) is a rather good approximation, capturing the bimodal nature of the MFE magnitude distributions.
Fig. 5

Distribution of MFE magnitudes computed by resolving the I&F dynamics as in Ref. (Rangan and Young 2013a), and by selecting random times from the distribution in Eq. (25) to use in Eq. (35) for the same two cases shown in Fig. 4c and d: Left corresponds to Fig. 4c and right corresponds to Fig. 4d. The panels show the distributions of MFE magnitudes for the excitatory population (upper graph) and for the inhibitory population (lower graph), respectively

5 Conclusions

Based on the cascade-induced synchrony in pulse-coupled I&F neuronal network models, we have explored how to obtain the distribution of the number of neurons firing together as part of a multiple firing event (MFE) from the voltage distributions of the excitatory and inhibitory populations. For population-based modeling (e.g., master equations or Fokker-Planck equations), this distribution provides a way to simulate dynamics in biologically relevant regimes which do not display homogeneous firing statistics. The method proposed in Ref. (Zhang et al. In preparation) involves stopping the evolution of the master equation, and then selecting the number of neurons to participate in an MFE. The analytical formula presented in this paper for the MFE magnitude distribution could be used in this step to improve computational efficiency.

The analytical formula for the MFE magnitude distribution accurately captures the bimodal nature of MFE sizes, revealing the strong competition between the excitatory and inhibitory neurons. Fluctuations in the distribution of neuronal voltages near threshold voltage can cause a very small sized MFE, or, if enough excitatory neurons fire initially, then a large sized MFE can ensue, involving a large fraction of both populations. It is these larger MFEs that characterize partial synchrony, and we are able to accurately capture these with the analytical formula. Further analysis of the analytical formula could provide more insight into the mechanism responsible for synchrony and provide a way to characterize partial synchrony as a function of coupling strengths as well as network size and voltage distributions.

The method we present in this paper is devised to capture the instantaneous MFEs produced by a current-based I&F model with infinitely fast synaptic time-scales. These techniques can also be used to approximate the types of MFEs which manifest in spiking network models with nonzero synaptic rise and decay times. In this latter case the MFEs will not be instantaneous, but will still occur relatively quickly, typically lasting only 2–3 ms when the synaptic decay time-scales are 2–4 ms (see Rangan and Young (2012) for some examples of these dynamic structures). How accurately these rapid transients are captured by Eq. (32) depends on the ratio r = τE / τI between the excitatory and inhibitory synaptic time-scales. When we derive Eq. (32) we assume that, while τE and τI both go to zero, τI < τE, thus giving the inhibitory synapses the potential to stifle excitation. When dealing with non-instantaneous synapses, if r ∼ 1 we have a similar situation: the inhibitory firing events also have an opportunity to interfere with the excitatory cascade, and we expect Eq. (32) to be qualitatively accurate. However, when r ≪ 1 we would expect Eq. (32) to underestimate the magnitude of the MFEs; synaptic excitation may transpire too quickly for inhibition to play a role, and the MFE magnitudes would be comparable to the all-excitatory case.

One of the limitations of our methodology is that we strongly rely on the assumption that our network is all-to-all coupled. Our approach does not directly generalize to more complicated network topologies; when the connectivity graph is nonuniform the simple picture painted in Fig. 2 breaks down.

Nevertheless, our work does illuminate how realistic synchronous burst sizes can be created by the competition between excitatory and inhibitory populations as opposed to complex network topology. We look forward to investigating how well this method describes other more realistic models and experimental data in future work.

Footnotes

  1.

    Donsker’s theorem states that the fluctuations of an empirical CDF about its theoretical CDF converge to Gaussian random variables with zero mean and certain variance. The sequence of independent Gaussian random variables can be formulated in terms of a standard Brownian bridge, a continuous-time stochastic process on the unit interval, conditioned to begin and end at zero.
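A standard Brownian bridge, as referenced in this footnote, can be constructed from a discretized Brownian path W(t) on [0, 1] via B(t) = W(t) − t·W(1); a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def brownian_bridge(n_steps=1000):
    """Standard Brownian bridge on [0, 1]: B(t) = W(t) - t * W(1), which
    by construction begins and ends at zero."""
    dt = 1.0 / n_steps
    # Discretized Brownian path W with W(0) = 0.
    W = np.concatenate([[0.0],
                        np.cumsum(rng.normal(scale=np.sqrt(dt), size=n_steps))])
    t = np.linspace(0.0, 1.0, n_steps + 1)
    return t, W - t * W[-1]

t, B = brownian_bridge()
```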


Acknowledgments

The authors would like to thank David Cai for useful discussions. J. Z. is partially supported by NSF grant DMS-1009575, K. N. is supported by the Courant Institute. D. Z. is supported by Shanghai Pujiang Program (Grant No. 10PJ1406300), NSFC (Grant No. 11101275 and No. 91230202), as well as New York University Abu Dhabi Research Grant G1301. A. R. is supported by NSF Grant DMS-0914827.

References

  1. Amari, S. (1974). A method of statistical neurodynamics. Kybernetik, 14, 201–215.
  2. Amit, D., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.
  3. Anderson, J., Carandini, M., Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. Journal of Neurophysiology, 84, 909–926.
  4. Battaglia, D., & Hansel, D. (2011). Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex. PLoS Computational Biology, 7(10), e1002176.
  5. Benayoun, M., Cowan, J.V., Drongelen, W., Wallace, E. (2010). Avalanches in a stochastic model of spiking neurons. PLoS Computational Biology, 6(7), e1002176.
  6. Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, J., Bower, J., Diesmann, M., Morrison, A., Goodman, P., Harris Jr., F., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. Journal of Computational Neuroscience, 23(3), 349–398.
  7. Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.
  8. Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11, 1621–1671.
  9. Bruzsaki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926–1929.
  10. Cai, D., Rangan, A., McLaughlin, D. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. Proceedings of the National Academy of Science, 102(16), 5868–5873.
  11. Cai, D., Tao, L., Rangan, A. (2006). Kinetic theory for neuronal network dynamics. Communications in Mathematical Sciences, 4, 97–127.
  12. Cai, D., Tao, L., Shelley, M., McLaughlin, D. (2004). An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proceedings of the National Academy of Science, 101(20), 7757–7762.
  13. Cardanobile, S., & Rotter, S. (2010). Multiplicatively interacting point processes and applications to neural modeling. Journal of Computational Neuroscience, 28, 267–284.
  14. Churchland, M.M., et al. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience, 13(3), 369–378.
  15. DeVille, R., & Peskin, C. (2008). Synchrony and asynchrony in a fully stochastic neural network. Bulletin of Mathematical Biology, 70(6), 1608–1633.
  16. DeWeese, M., & Zador, A. (2006). Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. Journal of Neuroscience, 26(47), 12206–12218.
  17. Donsker, M. (1952). Justification and extension of Doob's heuristic approach to the Kolmogorov-Smirnov theorems. Annals of Mathematical Statistics, 23(2), 277–281.
  18. Durbin, J. (1985). The first passage density of a continuous Gaussian process to a general boundary. Journal of Applied Probability, 22, 99–122.
  19. Durbin, J., & Williams, D. (1992). The first passage density of the Brownian motion process to a curved boundary. Journal of Applied Probability, 29, 291–304.
  20. Eggert, J., & Hemmen, J. (2001). Modeling neuronal assemblies: theory and implementation. Neural Computation, 13, 1923–1974.
  21. Fusi, S., & Mattia, M. (1999). Collective behavior of networks with linear integrate and fire neurons. Neural Computation, 11, 633–652.
  22. Gerstner, W. (1995). Time structure of the activity in neural network models. Physical Review E, 51, 738–758.
  23. Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states and locking. Neural Computation, 12, 43–89.
  24. Hansel, D., & Sompolinsky, H. (1996). Chaos and synchrony in a model of a hypercolumn in visual cortex. Journal of Computational Neuroscience, 3, 7–34.
  25. Knight, B. (1972). Dynamics of encoding in a population of neurons. Journal of General Physiology, 59, 734–766.
  26. Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., Rotter, S. (2008). Correlations and population dynamics in cortical networks. Neural Computation, 20, 2185–2226.
  27. Krukowski, A., & Miller, K. (2000). Thalamocortical NMDA conductances and intracortical inhibition can explain cortical temporal tuning. Nature Neuroscience, 4, 424–430.
  28. Lampl, I., Reichova, I., Ferster, D. (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.
  29. Lei, H., Riffell, J., Gage, S., Hildebrand, J. (2009). Contrast enhancement of stimulus intermittency in a primary olfactory network and its behavioral significance. Journal of Biology, 8, 21.
  30. Mazzoni, A., Broccard, F., Garcia-Perez, E., Bonifazi, P., Ruaro, M., Torre, V. (2007). On the dynamics of the spontaneous activity in neuronal networks. PLoS One, 2(5), e439.
  31. Murthy, A., & Humphrey, A. (1999). Inhibitory contributions to spatiotemporal receptive-field structure and direction selectivity in simple cells of cat area 17. Journal of Neurophysiology, 81, 1212–1224.
  32. Newhall, K., Kovačič, G., Kramer, P., Cai, D. (2010). Cascade-induced synchrony in stochastically driven neuronal networks. Physical Review E, 82, 041903.
  33. Nykamp, D., & Tranchina, D. (2000). A population density approach that facilitates large scale modeling of neural networks: analysis and application to orientation tuning. Journal of Computational Neuroscience, 8, 19–50.
  34. Omurtage, A., Knight, B., Sirovich, L. (2000). On the simulation of a large population of neurons. Journal of Computational Neuroscience, 8, 51–63.
  35. Petermann, T., Thiagarajan, T., Lebedev, M., Nicolelis, M., Chailvo, D., Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Science, 106, 15921–15926.
  36. Rangan, A., & Cai, D. (2007). Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 22(1), 81–100.
  37. Rangan, A., & Young, L. (2012). A network model of V1 with collaborative activity. PNAS, submitted.
  38. Rangan, A., & Young, L. (2013a). Dynamics of spiking neurons: between homogeneity and synchrony. Journal of Computational Neuroscience, 34(3), 433–460. doi:10.1007/s10827-012-0429-1.
  39. Rangan, A., & Young, L. (2013b). Emergent dynamics in a model of visual cortex. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0445-9.
  40. Renart, A., Brunel, N., Wang, X. (2004). Mean field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In Computational Neuroscience: A comprehensive approach.
  41. Riffell, J., Lei, H., Hildebrand, J. (2009). Neural correlates of behavior in the moth Manduca sexta in response to complex odors. Proceedings of the National Academy of Science, 106, 19219–19226.
  42. Riffell, J., Lei, H., Christensen, T., Hildebrand, J. (2009). Characterization and coding of behaviorally significant odor mixtures. Current Biology, 19, 335–340.
  43. Samonds, J., Zhou, Z., Bernard, M., Bonds, A. (2005). Synchronous activity in cat visual cortex encodes collinear and cocircular contours. Journal of Neurophysiology, 95, 2602–2616.
  44. Sillito, A. (1975). The contribution of inhibitory mechanisms to the receptive field properties of neurons in the striate cortex of the cat. Journal of Physiology, 250, 305–329.
  45. Singer, W. (1999). Neuronal synchrony: a versatile code for the definition of relations? Neuron, 24, 49–65.
  46. Sompolinsky, H., & Shapley, R. (1997). New perspectives on the mechanisms for orientation selectivity. Current Opinion in Neurobiology, 7, 514–522.
  47. Sun, Y., Zhou, D., Rangan, A., Cai, D. (2010). Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics. Journal of Computational Neuroscience, 28, 247–266.
  48. Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. Network, 4, 259–284.
  49. Wilson, H., & Cowan, J. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12, 1–24.
  50. Wilson, H., & Cowan, J. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13, 55–80.
  51. Worgotter, F., & Koch, C. (1991). A detailed model of the primary visual pathway in the cat: comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity. Journal of Neuroscience, 11, 1959–1979.
  52. Yu, Y., & Ferster, D. (2010). Membrane potential synchrony in primary visual cortex during sensory stimulation. Neuron, 68, 1187–1201.
  53. Yu, S., Yang, H., Nakahara, H., Santos, G., Nikolic, D., Plenz, D. (2011). Higher-order interactions characterized in cortical activity. Journal of Neuroscience, 31, 17514–17526.
  54. Zhang, J., Rangan, A., Cai, D., et al. (In preparation). A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony.
  55. Zhou, D., Sun, Y., Rangan, A., Cai, D. (2008). Network induced chaos in integrate-and-fire neuronal ensembles. Physical Review E, 80(3), 031918.

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Jiwei Zhang (1)
  • Katherine Newhall (1)
  • Douglas Zhou (2)
  • Aaditya Rangan (1)

  1. Courant Institute of Mathematical Sciences, New York University, New York, USA
  2. Department of Mathematics, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
