# Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks

## Abstract

Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous “multiple firing events” (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.

### Keywords

Spiking neurons · Synchrony · Homogeneity · Multiple firing events · First passage time · Integrate and fire neuronal networks

## 1 Introduction

The inclusion of MFE dynamics into large-scale computational models has only been possible by carefully resolving each spike (as by Rangan and Cai (2007), for example). Population-based methods such as firing rate models and master equations or Fokker-Planck equations rely heavily upon the assumption that the network remains homogeneous (Brunel and Hakim 1999; Cai et al. 2006; Cai et al. 2004; Rangan and Young 2013a). This assumption is characterized by weak correlations between the individual neurons’ evolution, or nearly independent spike times generated across the network (i.e. roughly Poissonian firing statistics). The extension of the master equation to include time-correlated MFEs has yet to be fully addressed. The difficulty lies in how to self-consistently incorporate the MFEs into a master equation or Fokker-Planck equation description. Recently, a proposal to circumvent this difficulty computationally has been given in Refs. (Rangan and Young 2013a; Zhang et al. In preparation): stop the evolution of the master equation when an MFE (manifested as a synchronous event of a subset of neurons in the network) occurs, then reshape the population distributions after counting the number of firing neurons participating in the synchronous event, and return to evolving the master equation until the next occurrence of an MFE. While the above procedure appears to be straightforward, there are two questions that need to be answered: (1) what is the stopping criterion indicating that an MFE occurs? and (2) how many neurons participate in an MFE? The first question, concerning the stopping criterion, depends on the probability of more than two excitatory neurons firing; this question has been addressed by Zhang et al. (In preparation).
In this paper, we answer the second question for a specific current-based integrate-and-fire (I&F) neuronal network model by tackling the conceptual issue of how to mathematically characterize MFEs and developing analytical approaches to obtaining the number of neurons firing in an MFE.

In the pulse-coupled current-based integrate-and-fire (I&F) neuronal network model, MFEs are cascade-induced synchronous events occurring at single moments in time; one excitatory neuron fires, increasing the voltages of the other neurons, causing more neurons to fire, and continuing in this cascading fashion until no more neurons fire, or all neurons in the network fire. The independent stochastic processes driving each neuron between synchronous events cause the neuronal voltages to diverge, thus each MFE may include not only a different subset of the population, but may also include entirely different numbers of neurons. Even if all the neurons fire together once, they are not guaranteed to repeat this total synchronous event (see Ref. (Newhall et al. 2010) for a detailed discussion). We are interested in the specific model parameter regime in which the network displays dynamics of substantially sized MFEs separated by time intervals of effectively homogeneous firings. This regime is strongly influenced by the competition between the excitatory and inhibitory populations.
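The cascade mechanism described above can be made concrete with a short simulation. The sketch below is illustrative only (the function name and parameter values are not from the model specification): it counts how many neurons of an all-to-all excitatory network fire in a single cascade, assuming each spike instantaneously raises every other voltage by a fixed coupling strength.

```python
import numpy as np

def mfe_size(voltages, s_ee, v_t=1.0):
    """Number of neurons firing in one cascade-induced MFE.

    `voltages` are the membrane voltages at the moment the first neuron
    reaches threshold `v_t`; every spike kicks all remaining neurons up
    by `s_ee` (all-to-all excitatory coupling, no inhibition).
    """
    v = np.sort(np.asarray(voltages))[::-1]   # v[0] >= v[1] >= ...
    fired = 1                                 # the initiating neuron
    # the neuron at rank `fired` has received `fired` kicks so far
    while fired < len(v) and v[fired] + fired * s_ee >= v_t:
        fired += 1
    return fired

# One illustrative ensemble: 199 sub-threshold neurons plus one at threshold.
rng = np.random.default_rng(0)
sizes = [mfe_size(np.append(rng.uniform(0.0, 1.0, 199), 1.0), s_ee=0.004)
         for _ in range(1000)]
```

Sweeping `s_ee` through roughly the reciprocal of the population size moves this toy model between the two extremes discussed above: cascades that die almost immediately, and cascades that recruit nearly the whole network.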

In order to mimic the I&F dynamics using a population-based model, we seek to characterize the MFE by its magnitude, defined as the number of neurons firing together with the neuron(s) initiating the synchronous event, in terms of the information available in the population-based description. Specifically, at the time when the stopping criterion is met (Zhang et al. In preparation), we know the voltage distributions for the excitatory and inhibitory populations, as well as the synaptic coupling strengths and population sizes. For an all-to-all coupled network of excitatory neurons, the requirement on the individual voltage arrangements for an MFE of a given size to occur was discussed in Ref. (Newhall et al. 2010), but its probability distribution in terms of the voltage distributions could only be computed practically for small MFE sizes. To obtain the size of an MFE, we circumvent here the “balls-in-bins” combinatorics problem in Ref. (Newhall et al. 2010) by further developing the graphical method presented in Ref. (Rangan and Young 2013a), which is used to describe MFE magnitudes for interacting excitatory and inhibitory populations. We show that the distribution of the MFE size is reducible to the distribution of first passage times of a two-dimensional stochastic process to a moving boundary. While it is possible to write down an explicit partial differential equation for the first passage time distribution of an arbitrary white-noise driven stochastic process to an arbitrary boundary, this equation can be solved exactly in only a few simple cases. In this paper, we approximate the MFE magnitude distribution by extending Durbin’s method for 1D passage times with moving boundaries (Durbin and Williams 1992) to the 2D case.
The resulting analytical formula for the distribution of MFE magnitudes can not only be easily integrated into population-based methods, but also furnishes a conceptual advantage by illuminating the mechanism underlying MFEs and their dependence on the voltage distributions in different parameter regimes of synaptic strength and population sizes.

The remainder of the paper is organized as follows. In Section 2, we review the integrate-and-fire network with inhibitory and excitatory neurons. In Section 3, we develop methods to compute the magnitudes of the MFEs. First, we review the condition for the cascade to continue, and discuss the graphical interpretation of MFEs (see Ref. (Rangan and Young 2013a)), which relates the MFE magnitude to the intersection of a cumulative distribution function (CDF) and a line. Next, we show how to approximate this intersection by replacing the empirical CDF with solutions to two stochastic differential equations, given the original voltage distributions of the excitatory and inhibitory neurons under an appropriate change of variables. Finally, extending Durbin’s method, we derive a formula for the density of the first passage time, which in turn provides the magnitude density of MFEs. We discuss the validity of our approximations in Section 4 and draw conclusions in Section 5. Many of the mathematical details are described in the Appendices.

## 2 Integrate-and-fire network

The network consists of \(N_{E}\) excitatory (E) and \(N_{I}\) inhibitory (I) neurons. The voltage difference across the *i*th neuron’s membrane of type \(Q \in \{E, I\}\) obeys the equation

\[
\frac{d}{dt} V_{i}^{Q} = -\frac{V_{i}^{Q} - V_{L}}{\tau} + I_{i}^{QE}(t) - I_{i}^{QI}(t), \tag{1a}
\]

for \(i = 1,\dots,N_{Q}\), whenever \(V_{i}^{Q} < V_{T}\), for firing threshold \(V_{T}\), leakage time constant \(\tau\), and where \(V_{L}\) is the leakage voltage. When the voltage \(V_{i}^{Q}\) crosses \(V_{T}\), the neuron is said to generate an action potential; a spike time \(t_{ik}^{Q}\) is recorded and \(V_{i}^{Q}\) is reset to the reset voltage \(V_{R}\), and held there for a time \(\tau_{\text{ref}}\), referred to as the “refractory period”. (In all figures we use the non-dimensional values \(V_{T} = 1\) and \(V_{L} = V_{R} = 0\); see Cai et al. (2005).) The spike times also generate the input currents within the last two terms in Eq. (1a). These excitatory and inhibitory currents are given by

\[
I_{i}^{QE}(t) = f^{Q} \sum_{l} \delta\left(t - s_{il}^{Q}\right) + S^{QE} \sum_{j,k} \delta\left(t - t_{jk}^{E}\right), \tag{1b}
\]

\[
I_{i}^{QI}(t) = S^{QI} \sum_{j,k} \delta\left(t - t_{jk}^{I}\right). \tag{1c}
\]

The external input arrives as impulses of strength \(f^{Q}\) at the times \(s_{il}^{Q}\), generated independently for each neuron from a Poisson point process with rate \(\eta^{Q}\). The second term on the right-hand side of Eq. (1b) (the only term in Eq. (1c)) represents the sum over all spikes generated by the excitatory (inhibitory) population of neurons. The current impulse is a delta function; the voltage instantaneously jumps up by an amount \(f^{Q}\) at each external-spike time and by \(S^{QE}\) at each excitatory-spike time, and decreases by an amount \(S^{QI}\) at each inhibitory-spike time.

Equations (1) can be numerically integrated exactly using an event-driven algorithm to determine each of the *N*_{E} and *N*_{I} neuronal voltages at any instant in time (Brette et al. 2007), together with a procedure to resolve an MFE like the one presented in Appendix A. However, for very large populations, the dynamics (in a homogeneous regime of firing) can be well approximated by the solution to a master equation or a Fokker-Planck equation (Brunel and Hakim 1999; Cai et al. 2004, 2006; Rangan and Young 2013a) for the voltage distributions, *ρ*_{E}(*v*, *t*) and *ρ*_{I}(*v*, *t*), of the excitatory and inhibitory populations, respectively. As mentioned previously in the *Introduction* (and shown in (Rangan and Young 2013a; Zhang et al. In preparation)), the master equation or Fokker-Planck equation can be extended to qualitatively capture the features of MFE dynamics. In the MFE regime, the population has two overall modes, which we call the “MFE” mode and the “inter-MFE” mode. In the “inter-MFE” mode, the neurons are weakly correlated or completely independent, and the voltage distribution of neurons can be well described by population equations. When the criterion for an MFE to occur is satisfied, the inter-MFE mode terminates, and the MFE mode begins. Since the voltages in the corresponding I&F model will instantaneously jump up or down by a synaptic kick, the MFE mode is conceptualized as occurring within a single instant of time. The non-zero refractory period ensures that a neuron only fires once during an MFE. From the available voltage distributions, *ρ*_{E}(*v*, *t*) and *ρ*_{I}(*v*, *t*), it is important to have an efficient method for determining the MFE magnitude and thereby effectively capture the features of the MFE regime.

The focus of this paper is developing different approaches to obtain the size of an MFE during the MFE mode. Accordingly, in what follows, we do not present the details of the master equation, but assume we have access to *ρ*_{E}(*v*) and *ρ*_{I}(*v*), the distributions of neuronal voltages at the time some excitatory neurons are about to fire and trigger an MFE.

## 3 Determining MFE magnitude

As mentioned in the *Introduction*, many biologically realistic regimes include bursts of firing activity similar in nature to MFEs. Due to the delta function impulses, the network modeled by Eq. (1) can exhibit MFEs in the form of cascade-induced synchrony (Newhall et al. 2010), in which the external driving causes one excitatory neuron to spike, instantaneously increasing the voltages of other neurons, causing more excitatory (and possibly inhibitory) neurons to fire, increasing (and possibly decreasing) the voltages of other neurons, cascading through the network, and resulting in many neurons spiking at the exact same instant in time. The number of neurons participating in one such MFE is determined solely by the arrangement of all the voltages at the time one excitatory neuron fires, as well as by the four coupling strengths, *S*^{EE}, *S*^{IE}, *S*^{EI} and *S*^{II}. We therefore seek the connection between the voltage distributions of the excitatory and inhibitory populations at the time when the MFE is initiated, and the distribution of MFE magnitudes (the number of spiking excitatory and inhibitory neurons in one such event). In this section, we calculate the distribution of MFE magnitudes in three ways. In Section 3.1, we present the condition for the cascading firing event to continue in terms of the set of excitatory \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and inhibitory \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) neuronal voltages at the time one excitatory neuron is about to fire, and compute the MFE magnitude graphically from the intersection of a line and a function of the empirical voltage CDFs. In Section 3.2, we approximate the empirical CDF by a stochastic process depending on the voltage density distributions (not the set of voltages themselves) and calculate the MFE magnitude as a first passage time problem.
Finally, in Section 3.3, we approximate the solution to the first passage time problem and obtain the distribution of MFE magnitudes in terms of the voltage distributions, numbers of neurons, and the coupling strengths.

### 3.1 Geometrical method

Consider first the excitatory population alone, and label its voltages in decreasing order, \(v^{(j)} \geq v^{(i)}\) for \(j < i\), at the time one neuron fires (\(v^{(1)} \geq V_{T}\)). The MFE will continue with a second neuron firing if \(v^{(2)} \in \left(V_{T} - S^{EE}, V_{T}\right)\), as all neuronal voltages are increased by \(S^{EE}\) when the neuron with voltage \(v^{(1)}\) fires. A third neuron will fire if \(v^{(3)} \in \left(V_{T} - 2S^{EE}, V_{T}\right)\), as the two previously firing neurons cause the voltages of the remaining neurons to increase by \(2S^{EE}\). Exactly \(m_{E}\) neurons will fire if the condition

\[
v^{(j)} > V_{T} - (j-1)\,S^{EE}
\]

holds for \(j = 1\) to \(m_{E}\) and fails for \(j = m_{E} + 1\). If we define the empirical CDF to be

\[
F_{E}(v) = \frac{1}{N_{E}} \sum_{j=1}^{N_{E}} \mathbb{1}_{\left\{v_{j} \leq v\right\}},
\]

then satisfying this condition from \(j = 1\) to \(m_{E}\) is equivalent to satisfying the condition

\[
N_{E}\left(1 - F_{E}(v)\right) > \frac{V_{T} - v}{S^{EE}} \tag{4}
\]

for all \(v < V_{T}\) such that \(N_{E}\left(1 - F_{E}(v)\right) \leq m_{E}\). The magnitude, \(m_{E}\), can be determined by the value \(V^{*}\) for which condition (4) is no longer true. This is precisely the point \(V^{*}\) where the CDF \(F_{E}(v)\) intersects the line

\[
l(v) = 1 - \frac{V_{T} - v}{N_{E}\, S^{EE}}. \tag{5}
\]

If the set of voltages changes, the empirical CDF \(F_{E}(v)\) will also change, as will the intersection point, \(V^{*}\), and hence the MFE magnitude, \(m_{E}\). Because the underlying distribution for the excitatory voltages can be multi-modal (cf. Fig. 2c), it is possible for \(V^{*}\) to be discontinuous as a function of \(S^{EE}\).

When inhibitory neurons participate in the cascade, each inhibitory spike decreases the excitatory voltages by \(S^{EI}\) and the other inhibitory voltages by \(S^{II}\). The analogous condition to Eq. (4) for the cascade to continue when there are two populations therefore consists of the pair of conditions in Eq. (6). The cascade terminates at the values \(v = V^{*}\) and \(w = W^{*}\) such that for \(v < V^{*}\) and \(w < W^{*}\) the conditions in Eq. (6) no longer hold. What we do next is describe how to recast the failure of the two conditions in Eq. (6) as the intersection of a function of only \(v\) with the line in Eq. (5), thereby deriving the MFE magnitude in terms of a single intersection point, as was just done for only the excitatory population. We present a simple overview here; the details are explained in Appendix B.

We first rescale the inhibitory voltages so that an inhibitory neuron fires during the cascade exactly when its rescaled voltage \(\hat{w}_{j}\) lies in the interval \(\left(V_{T} - S^{EE}, V_{T}\right)\). This allows the difference between the two conditions in Eq. (6) to be written as the single condition appearing in Eq. (44) in Appendix B. We also define the empirical inhibitory voltage CDF, \(F_{I}(w)\), analogously to \(F_{E}(v)\). If \(\delta = S^{II} S^{EE}/S^{IE} - S^{EI} > 0\), then we need to shift \(\hat {w}_{j}\) further by defining the variables in Eq. (9); if \(\delta = S^{II} S^{EE}/S^{IE} - S^{EI} < 0\), then we instead shift the \(v_{j}\) as in Eq. (10). Note that \(\delta \geq 0\) implies \(\frac {S^{II}}{S^{IE}}>\frac {S^{EI}}{S^{EE}}\), and \(\delta < 0\) implies \(\frac {S^{II}}{S^{IE}}<\frac {S^{EI}}{S^{EE}}\). Condition (12) is equivalent to the conditions (50) and (56) derived in Appendix B for the two different cases of \(\delta\).

The MFE magnitude is now determined by the point \(V^{*}\) at which the new function \(G(v)\), defined in Eq. (14), intersects the line \(l(v)\) in Eq. (5). Notice that each initial set of specific voltages \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) yields exactly one MFE magnitude. We can obtain the distribution of MFE magnitudes by repeated sampling of the sets of voltages \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) from some known densities \(\rho_{E}(v)\) and \(\rho_{I}(w)\), respectively, and computing the MFE magnitude using the above algorithm.

### 3.2 First passage time formulation

Having derived a method above for obtaining the magnitude of an MFE in terms of the empirical CDFs of the excitatory and inhibitory populations, we are now ready to reformulate the problem of obtaining the magnitude of an MFE as a first passage time problem. We take advantage of Donsker’s theorem (Donsker 1952) to approximate the empirical CDFs in terms of rescaled Brownian Bridges.^{1} In this framework, in which “time” is considered the voltage difference from the threshold voltage *V*_{T}, the intersection point of interest, and thus the MFE magnitude, is the first passage time of a stochastic process to a line.

Here, we first derive the theoretical CDFs for the transformed voltages \(\bar {v}_{j}\) and \(\bar {w}_{j}\) defined in the previous section in terms of “time” starting from the original probability density functions (PDFs) for the voltages of the excitatory and inhibitory populations. (Recall from the end of Section 2 we assume that we have access to these distributions at the time an excitatory neuron fires.) Then, from the theoretical PDFs we can write two stochastic differential equations (SDEs) that approximate the possible empirical CDFs. We obtain the MFE magnitude by simulating the SDEs and determining the first passage time. In the next section, we complete the connection between the population voltage distributions and the distribution of MFE magnitudes by analytically approximating the first passage time density.

Using the change of variables \(t = V_{T} - v\), we derive the theoretical PDFs for the transformed voltages defined in either Eq. (9) or Eq. (10), starting from the original theoretical PDFs for the excitatory and inhibitory voltages, \(\rho_{E}(v)\) and \(\rho_{I}(w)\), respectively. Switching to \(t\) and using the transformation in Eq. (7), we obtain the transformed PDFs \(p_{E}(s) = \rho_{E}(V_{T} - s)\) and \( p_{I}\left (\hat {t}\right ) = \frac {S^{IE}}{S^{EE}} \rho _{I} \left ( V_{T} - \frac {S^{IE}}{S^{EE}} \hat {t} \right ) \). The equivalent formulas to Eqs. (9) and (10) for transforming the variables \(s\) and \(\hat {t}\) are given in Eq. (15), with \(\delta = S^{II} S^{EE}/S^{IE} - S^{EI}\) as before, and where we have defined \(\hat {f}_{I}(t) = \int _{0}^{t} p_{I}(\tau ) d\tau \).

If \(\delta \geq 0\), then we can approximate the distributions of \(\bar {t}\) and \(\bar {s}\) as in Eq. (16), where \(g'(t)\) denotes the derivative with respect to \(t\). Similarly, Eq. (17) covers the case when \(\delta < 0\). The original PDFs \(\rho_{E}(v)\) and \(\rho_{I}(w)\) are shown in Fig. 3a, while the transformed PDFs in Eq. (16) are shown in Fig. 3b for one choice of coupling strengths.

By Donsker’s theorem, the empirical CDFs of the transformed voltages are approximated, for \(Q \in \{E, I\}\), by Eq. (19), where \(B(\cdot)\) is a standard Brownian bridge on the unit interval starting and ending at zero. The approximation (19) will be valid when \(N_{E}, N_{I} \gg 1\). A single stochastic trajectory, \(\phi _{Q}(t) = \frac {1}{\sqrt {N_{Q}}} B\left (\bar {f}_{Q}(t)\right )\), solves the stochastic differential equation (20) with initial condition \(\phi_{Q}(0) = 0\), and where \(dW_{Q}(t)\) is standard white noise in time.

If \(k\) excitatory neurons fire to initiate the MFE, then the MFE magnitudes are derived from the first intersection point of the stochastic process \(\bar {G}(t)\) with the line \(\bar {l}(t)\) in Eq. (22), where \(\gamma = 1 - k/N_{E}\) and \(\alpha =\min \left ( \frac {S^{II}}{S^{IE}},\frac {S^{EI}}{S^{EE}}\right ) \frac {N_{I}}{N_{E}}\), as before. Note that \(\bar {G}(t)\) is different from the direct transformation of \(G(v)\) (Eq. 14) to either set of variables defined in Eq. (15) in that \(\bar {G}(t)\) also takes into account the \(k\) excitatory neurons that initiate the MFE.

This intersection with the line \(\bar {l}(t)\) in Eq. (22) can be viewed, equivalently, in three-dimensional space: it is the first time, \(t^{*}\), that the joint stochastic process \(\left(\phi_{E}(t), \phi_{I}(t), t\right)\) exits the region bounded by the surface \(\mathcal{A}\), given by the algebraic constraint that \(\bar {G}(t) = \bar {l}(t)\) for points \((x, y, t)\); this constraint defines the surface in Eq. (23). At time \(t^{*}\), the magnitudes of the MFE are given by Eq. (24).

Using the above formulation, we can numerically determine the distribution of MFE magnitudes as follows: First, take the theoretical PDFs *ρ*_{E}(*v*) and *ρ*_{I}(*w*) and transform them to \(\bar {p}_{E}(t)\) and \(\bar {p}_{I}(t)\) using either Eq. (16) or Eq. (17) and then calculate the CDFs \(\bar {f}_{E}(t)\) and \(\bar {f}_{I}(t)\) defined in Eq. (18). Next, using these transformed PDFs and CDFs, simulate stochastic trajectories using Eq. (20), and determine the intersection point of (*ϕ*_{E}(*t*), *ϕ*_{I}(*t*), *t*) with the surface \(\mathcal {A}\) in Eq. (23). Last, compute the MFE magnitudes in Eq. (24). The distribution of MFE magnitudes is obtained by repeatedly simulating the stochastic trajectories and determining the intersection points. Next, in Section 3.3, we devote ourselves to deriving an analytical formula for this MFE magnitude distribution.
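For a single excitatory population, the procedure above collapses to one dimension, which is enough to illustrate its mechanics. The code below is a simplified, hypothetical analogue of this scheme (not the paper's two-population formulation, and all names are illustrative): it builds the Donsker approximation of the empirical CDF from a Brownian bridge and stops at the first "time" at which the neurons within reach of threshold no longer outnumber the kicks delivered.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_mfe_size(n, s_ee, k, cdf, t_grid):
    """One sample of the MFE size for a single excitatory population.

    `cdf` is the theoretical CDF f(t) of the distance-to-threshold
    variable t = V_T - v, with cdf(t_grid[0]) == 0.  Donsker's theorem
    replaces the empirical CDF by f(t) + B(f(t))/sqrt(n), with B a
    standard Brownian bridge.  The cascade started by k neurons stops
    at the first t where n*F_emp(t) + k (neurons within reach) drops
    below t/s_ee (kicks needed to cover the interval [0, t]).
    """
    u = cdf(t_grid)
    # Brownian bridge evaluated at the points u: B(u) = W(u) - u * W(1)
    W = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(u, prepend=0.0))))
    W1 = W[-1] + rng.normal(0.0, np.sqrt(max(1.0 - u[-1], 0.0)))
    f_emp = u + (W - u * W1) / np.sqrt(n)
    deficit = n * f_emp + k - t_grid / s_ee
    idx = np.nonzero(deficit < 0)[0]
    stop = idx[0] if idx.size else len(t_grid) - 1
    return min(n, int(round(max(0.0, n * f_emp[stop])))) + k

uniform_cdf = lambda t: np.clip(t, 0.0, 1.0)   # voltages uniform on (0, 1)
grid = np.linspace(0.0, 1.0, 2001)
sub = [sample_mfe_size(200, 0.003, 1, uniform_cdf, grid) for _ in range(200)]
sup = [sample_mfe_size(200, 0.008, 1, uniform_cdf, grid) for _ in range(200)]
```

With `s_ee` below roughly 1/n the deficit drifts downward and cascades die quickly; above 1/n many runs recruit a large fraction of the population, giving the kind of bimodal magnitude distribution discussed later in the paper.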

### 3.3 Analytical formula for the distribution of MFE magnitudes

In Section 3.2, we reduced the problem of obtaining the MFE magnitude to one of finding the first exit time (or first passage time) of the joint stochastic process (*ϕ*_{E}(*t*), *ϕ*_{I}(*t*), *t*) out of the region bounded by the surface \(\mathcal {A}\). This can also be thought of as a 2D stochastic process (*ϕ*_{E}(*t*), *ϕ*_{I}(*t*)) hitting a moving boundary. Durbin’s method of approximating first passage time distributions of 1D stochastic trajectories to moving boundaries (Durbin 1985; Durbin and Williams 1992) depends on the distance between the surface and the starting point of the trajectory and the distribution of the stochastic trajectory as a function of time. We employ a similar technique by first transforming the 2D stochastic process to an isotropic 2D stochastic process with the same diffusion in any direction. This allows us to decompose the process into the directions perpendicular and parallel to the surface \(\mathcal {A}\). The problem reduces to a 1D passage time of the perpendicular component to the boundary. Here, we briefly describe how to obtain the two term approximation of this first passage time density; details appear in Appendix C.
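In one dimension, the leading term of Durbin's approximation for standard Brownian motion hitting a boundary \(c(t) > 0\) is \(p(t) \approx \left(c(t)/t - c'(t)\right)\,\phi\!\left(c(t)/\sqrt{t}\right)/\sqrt{t}\); for a linear boundary this is exact (the Bachelier–Lévy density), which gives a convenient numerical check. The sketch below is illustrative only; the paper's two-dimensional, two-term formula is not reproduced here.

```python
import numpy as np

def durbin_first_term(t, c, c_prime):
    """Leading-order Durbin approximation to the first-passage density
    of standard Brownian motion (started at 0) to a boundary c(t) > 0."""
    t = np.asarray(t, dtype=float)
    gauss = np.exp(-c(t) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return (c(t) / t - c_prime(t)) * gauss

# Linear boundary c(t) = a + b*t: the approximation is exact here, and
# the total crossing probability has the closed form exp(-2*a*b) for b > 0.
a, b = 1.0, 0.5
t = np.linspace(1e-4, 100.0, 200_001)
p = durbin_first_term(t, lambda t: a + b * t, lambda t: b + 0.0 * t)
total = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))  # trapezoid rule
```

For curved boundaries the one-term formula is only an approximation, and Durbin's correction terms (the analogue of the two-term expansion used in this paper) systematically reduce the error.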

We seek the density of first passage times \(p_{T}(t)\) of the process \(\left(\phi_{E}(t), \phi_{I}(t), t\right)\) to the surface \(\mathcal{A}\) in Eq. (23). First we include the location \(a = (a_{E}, a_{I})\) on the boundary that the process hits, and write \(p_{T}(t)\) in terms of the joint distribution \(p(t, a)\) for first hitting the point \(a\) at time \(t\) as

\[
p_{T}(t) = \int_{\partial \Omega(t)} p(t, a)\, da, \tag{25}
\]

integrating over all boundary points at time \(t\). Then, we condition on the process being at the point \(x\) at some intermediate time \(s\). In order to have a first passage time \(t\) and hit \(a\), the trajectory must first get to \(x\) at time \(s\) without hitting the boundary \(\mathcal{A}\), and then proceed to have a first passage time \(t\) to the point \(a\). In terms of distributions, we may write

\[
p(t, a) = \int_{\Omega(s)} p(t, a \,|\, s, x)\, g(s, x)\, dx, \tag{26}
\]

where \(p(t, a | s, x)\) is the first passage time density to \(a\) at time \(t\) given the trajectory starts at the point \(x\) at time \(s\), \(g(s, x)\) is the density of the process at \(x\) at time \(s\) given that it did not cross \(\mathcal{A}\) previously, and the integration is over \(\Omega(s)\). The region \(\Omega(s)\) represents all points at time \(s\) in the \((x, y)\) plane enclosed by the boundary \(\partial \Omega = \mathcal{A}\).

To approximate \(p(t, a | s, x)\), we consider two independent processes: one moves perpendicular to the boundary \(\mathcal{A}\), which controls the time the boundary is hit, and one moves parallel to the boundary, which controls at which point the boundary is hit. This is possible if we consider \(s\) sufficiently close to \(t\) (i.e., \(t - s \ll 1\)). Over the small time interval \((s, t)\) we approximate the joint process \((\phi_{E}, \phi_{I})\) by a two-component Brownian motion, \(\left (\hat {\phi }_{E},\hat {\phi }_{I}\right )\) (constant drift and diffusion coefficients), that solves Eq. (20) with frozen coefficients at time \(s\), and consider hitting the boundary \(\hat {\mathcal A}_{a(t)}\), the plane tangent to the surface \(\mathcal{A}\) at the point \(a\) at time \(t\). To decompose \(\left (\hat {\phi }_{E},\hat {\phi }_{I}\right )\) into components parallel and perpendicular to \(\hat {\mathcal A}_{a(t)}\), we must first transform it into isotropic Brownian motion (same diffusion in all directions) with the transformation matrix given in Eq. (27). This allows us to approximate \(p(t, a | s, x)\) as in Eq. (28), multiplying the density, \(f(t, a | s, x)\), of the process starting at \(x\) at time \(s\) to be at the boundary point \(a\) at time \(t\) by the probability for the parallel component of the isotropic Brownian motion to hit the boundary and the probability for the perpendicular component to reach the point \(a\). In (28), \(\widehat {\mathcal {A}}_{a( t) }|_{s}\) are the points on the tangent boundary \(\hat {\mathcal {A}}_{a(t)}\) at the time \(s\). Using approximation (28) in the limit as \(s \to t\), Eq. (26) remains exact and can be expressed as Eq. (29).

We interpret \(p(t, a)\) in Eq. (29) as an adjustment to the density \(f(t, a)\) for the process at \((t, a)\). By writing Eq. (29) in terms of \(q(t, a) = p(t, a)/f(t, a)\), we arrive at an integral equation for \(q(t, a)\). We approximate its solution with two terms and obtain the two-term approximation in Eq. (32), valid for points \(a(t) = (a_{E}(t), a_{I}(t))\) on the surface \(\mathcal{A}\), where \(\nu_{n}(t)\) is given in Eq. (30), \(\alpha = \frac {N_{I}}{N_{E}}\min \left \{\frac {S^{II}}{S^{IE}},\frac {S^{EI}}{S^{EE}}\right \} \) and \(\gamma = 1 - k/N_{E}\) as before. The point \(b(r) = (b_{E}(r), b_{I}(r))\) is another point on the surface \(\partial \Omega =\mathcal {A}\) at time \(r \leq t\); i.e., both \(a(t)\) and \(b(r)\) satisfy the condition to lie on the surface given in Eq. (23). The density of the process at the point \(a\) at time \(t\) is given in Eq. (33), and the density at \(a\) at time \(t\), given that it was previously at the point \(b\) at time \(r\) and did not cross the surface \(\mathcal{A}\) before time \(r\), is given in Eq. (34).

To summarize: given the voltage PDFs \(\rho_{E}(v)\) and \(\rho_{I}(v)\) at the time the MFE is initiated, the transformed PDFs \(\bar {\rho }_{Q}(t)\) must first be calculated using Eq. (16) if \(\delta \geq 0\) (recall \(\delta = S^{II} S^{EE}/S^{IE} - S^{EI}\)) or Eq. (17) if \(\delta < 0\), and then the transformed CDFs \(\bar {f}_{Q}(t)\) calculated from Eq. (18). The final step is to select \(t^{*}\) from Eq. (25), with \(p(t, a)\) approximated by Eq. (32), and to compute the MFE magnitudes using Eq. (35), with the hitting location \(a\) sampled from \(p(t^{*}, a)\) after fixing \(t^{*}\).

The MFE magnitude density is computed by selecting many values of *t*∗ from the distribution in Eq. (25) and then determining the magnitude of the MFE from Eq. (35). As we discuss in the next section, the density computed in this manner is in excellent agreement with the density computed using an appropriate method for the original I&F dynamics, such as the one presented in Appendix A or the one by Rangan and Young (2013a).

## 4 Validity of approximations

In Section 3 we discussed three methods for determining the density of MFE magnitudes in terms of the original voltage densities of the excitatory and inhibitory neurons. We now discuss the error introduced in each method, and present the numerically obtained distributions in order to examine the propagation of the error through the entire procedure. In the end, we find excellent agreement between the distribution obtained by using the I&F dynamics (as described by Rangan and Young (2013a)) and the single formula (25) obtained in Section 3.3.

Figure 4 compares the resulting distributions of intersection points as a function of *t*. The red solid line, corresponding to the intersection points obtained from the I&F dynamics, is nearly indistinguishable from the black dot-dash line, corresponding to the geometrically obtained intersection points, in all four parameter cases.

The first approximation occurs in Section 3.2 when the empirical CDFs are replaced by approximations constructed by adding white noise driven stochastic processes to the theoretical CDFs. The result of Donsker’s Theorem (Donsker 1952) is that the convergence is inversely proportional to the square-root of the number of points forming the empirical distribution; i.e. the number of neurons *N*_{E} or *N*_{I} in this case. The upper panels in Fig. 4 display results with *N*_{E} = *N*_{I} = 300 and 2000, respectively.
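This 1/√N scaling is easy to verify directly. The snippet below (illustrative, using uniform samples) measures the mean Kolmogorov–Smirnov distance between empirical and true CDFs at two sample sizes; by Donsker's theorem, the ratio of the two should be close to the square root of the size ratio.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_ks_distance(n, trials=200):
    """Mean sup-norm distance between the empirical CDF of n uniform(0,1)
    samples and the true CDF F(x) = x, averaged over independent trials."""
    out = []
    for _ in range(trials):
        x = np.sort(rng.uniform(size=n))
        hi = np.arange(1, n + 1) / n   # empirical CDF just after each sample
        lo = np.arange(0, n) / n       # ... and just before it
        out.append(max(np.max(np.abs(hi - x)), np.max(np.abs(lo - x))))
    return float(np.mean(out))

d_small, d_large = mean_ks_distance(300), mean_ks_distance(2700)
ratio = d_small / d_large   # expect about sqrt(2700/300) = 3
```

The same scaling is why the early-exit peak in Fig. 4, which is driven by these fluctuations, shrinks as the population grows.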

The most pronounced impact of the different fluctuations in the empirical CDFs can be seen for small MFE magnitudes (small values of *t*), when comparing the I&F dynamics (red solid line) and the intersection of the SDE-generated empirical CDFs (blue dashed line) in Fig. 4. These early exit times (the peaks close to *t* = 0 in Fig. 4) are governed by the fluctuations of the empirical CDFs, and thus we expect a discrepancy here, in contrast to the later exit times (the peaks near *t* = 0.3 in Fig. 4), which are governed by the mean of the process (i.e. the theoretical CDFs themselves). As we increase the number, *k*, of neurons initiating the MFE, the accuracy of the distribution created from the approximate empirical CDFs improves, since the boundary starts further away from the initial conditions, making early crossings less likely. In effect, we remove the part of the distribution that cannot be accurately resolved by the approximate empirical CDFs.

The other approximation we introduce is the analytical formula itself, which approximates the first passage time distribution of the stochastic Brownian Bridge process to the moving boundary derived in Section 3.3. The distribution in Eq. (25) is constructed from the two-term approximation (32), which in turn is derived in a manner similar to that of Durbin (Durbin and Williams 1992), and thus we expect similar convergence. The error of this approximation can be seen in Fig. 4 by comparing the simulated SDE exit time distribution (blue dashed line) to the analytical formula (25) (green dashed line).

## 5 Conclusions

Based on the cascade-induced synchrony in pulse-coupled I&F neuronal network models, we have explored how to obtain the distribution of the number of neurons firing together as part of a multiple firing event (MFE) from the voltage distributions of the excitatory and inhibitory populations. For population-based modeling (e.g., master equations or Fokker-Planck equations), this distribution provides a way to simulate dynamics in biologically relevant regimes which do not display homogeneous firing statistics. The method proposed in Ref. (Zhang et al. In preparation) involves stopping the evolution of the master equation, and then selecting the number of neurons to participate in an MFE. The analytical formula presented in this paper for the MFE magnitude distribution could be used in this step to improve computational efficiency.

The analytical formula for the MFE magnitude distribution accurately captures the bimodal nature of MFE sizes, revealing the strong competition between the excitatory and inhibitory neurons. Fluctuations in the distribution of neuronal voltages near threshold voltage can cause a very small sized MFE, or, if enough excitatory neurons fire initially, then a large sized MFE can ensue, involving a large fraction of both populations. It is these larger MFEs that characterize partial synchrony, and we are able to accurately capture these with the analytical formula. Further analysis of the analytical formula could provide more insight into the mechanism responsible for synchrony and provide a way to characterize partial synchrony as a function of coupling strengths as well as network size and voltage distributions.

The method we present in this paper is devised to capture the instantaneous MFEs produced by a current-based I&F model with infinitely fast synaptic time-scales. These techniques can also be used to approximate the types of MFEs which manifest in spiking network models with nonzero synaptic rise and decay times. In this latter case the MFEs will not be instantaneous, but will still occur relatively quickly, typically lasting only 2–3 ms when the synaptic decay time-scales are 2–4 ms (see Ref. (Rangan and Young 2012) for some examples of these dynamic structures). How accurately these rapid transients will be captured by Eq. (32) depends on the ratio *r* = *τ*_{E}/*τ*_{I} between the excitatory and inhibitory synaptic time-scales. When we derive Eq. (32) we assume that, while *τ*_{E} and *τ*_{I} both go to zero, *τ*_{I} < *τ*_{E}, thus giving the inhibitory synapses the potential to stifle excitation. When dealing with non-instantaneous synapses, if *r* ∼ 1 we have a similar situation: the inhibitory firing events also have an opportunity to interfere with the excitatory cascade, and we expect Eq. (32) to be qualitatively accurate. However, when *r* ≪ 1 we would expect Eq. (32) to underestimate the magnitude of the MFEs; synaptic excitation may transpire too quickly for inhibition to play a role, and the MFE magnitudes would be comparable to the all-excitatory case.

One of the limitations of our methodology is that we strongly rely on the assumption that our network is all-to-all coupled. Our approach does not directly generalize to more complicated network topologies; when the connectivity graph is nonuniform the simple picture painted in Fig. 2 breaks down.

Nevertheless, our work does illuminate how realistic synchronous burst sizes can be created by the competition between excitatory and inhibitory populations as opposed to complex network topology. We look forward to investigating how well this method describes other more realistic models and experimental data in future work.

## Footnotes

- 1.
Donsker’s theorem states that the fluctuations of an empirical CDF about its theoretical CDF converge to Gaussian random variables with zero mean and a variance determined by the theoretical CDF. These fluctuations can be formulated in terms of a standard Brownian bridge, a continuous-time Gaussian process on the unit interval conditioned to begin and end at zero.
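The footnote’s statement admits a quick numerical check (a minimal sketch with sample sizes and evaluation point of our choosing, not code from the paper): for uniform samples, the empirical process √n (F_n(t) − t) evaluated at a fixed point t should be approximately Gaussian with mean zero and the Brownian-bridge variance t(1 − t).

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t = 500, 4000, 0.3

# Value of the empirical process sqrt(n) * (F_n(t) - F(t)) at a fixed
# point t, for samples from the uniform distribution (so F(t) = t).
vals = np.array([
    np.sqrt(n) * (np.mean(rng.uniform(0.0, 1.0, n) <= t) - t)
    for _ in range(trials)
])

# Donsker's theorem predicts mean 0 and variance t * (1 - t) = 0.21 here.
print(vals.mean(), vals.var())
```

Repeating this for a grid of t values traces out the full covariance structure min(s, t) − st of the Brownian bridge.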

## Notes

### Acknowledgments

The authors would like to thank David Cai for useful discussions. J. Z. is partially supported by NSF grant DMS-1009575, K. N. is supported by the Courant Institute. D. Z. is supported by Shanghai Pujiang Program (Grant No. 10PJ1406300), NSFC (Grant No. 11101275 and No. 91230202), as well as New York University Abu Dhabi Research Grant G1301. A. R. is supported by NSF Grant DMS-0914827.

### References

- Amari, S. (1974). A method of statistical neurodynamics. *Kybernetik*, *14*, 201–215.
- Amit, D., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. *Cerebral Cortex*, *7*, 237–252.
- Anderson, J., Carandini, M., & Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. *Journal of Neurophysiology*, *84*, 909–926.
- Battaglia, D., & Hansel, D. (2011). Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex. *PLoS Computational Biology*, *7*(10), e1002176.
- Benayoun, M., Cowan, J. D., van Drongelen, W., & Wallace, E. (2010). Avalanches in a stochastic model of spiking neurons. *PLoS Computational Biology*, *6*(7), e1000846.
- Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J., Diesmann, M., Morrison, A., Goodman, P., Harris Jr., F., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. *Journal of Computational Neuroscience*, *23*(3), 349–398.
- Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. *Journal of Computational Neuroscience*, *8*, 183–208.
- Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. *Neural Computation*, *11*, 1621–1671.
- Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. *Science*, *304*, 1926–1929.
- Cai, D., Rangan, A., & McLaughlin, D. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. *Proceedings of the National Academy of Sciences*, *102*(16), 5868–5873.
- Cai, D., Tao, L., & Rangan, A. (2006). Kinetic theory for neuronal network dynamics. *Communications in Mathematical Sciences*, *4*, 97–127.
- Cai, D., Tao, L., Shelley, M., & McLaughlin, D. (2004). An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. *Proceedings of the National Academy of Sciences*, *101*(20), 7757–7762.
- Cardanobile, S., & Rotter, S. (2010). Multiplicatively interacting point processes and applications to neural modeling. *Journal of Computational Neuroscience*, *28*, 267–284.
- Churchland, M. M., et al. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. *Nature Neuroscience*, *13*(3), 369–378.
- DeVille, R., & Peskin, C. (2008). Synchrony and asynchrony in a fully stochastic neural network. *Bulletin of Mathematical Biology*, *70*(6), 1608–1633.
- DeWeese, M., & Zador, A. (2006). Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. *Journal of Neuroscience*, *26*(47), 12206–12218.
- Donsker, M. (1952). Justification and extension of Doob's heuristic approach to the Kolmogorov–Smirnov theorems. *Annals of Mathematical Statistics*, *23*(2), 277–281.
- Durbin, J. (1985). The first passage density of a continuous Gaussian process to a general boundary. *Journal of Applied Probability*, *22*, 99–122.
- Durbin, J., & Williams, D. (1992). The first passage density of the Brownian process to a curved boundary. *Journal of Applied Probability*, *29*, 291–304.
- Eggert, J., & van Hemmen, J. (2001). Modeling neuronal assemblies: theory and implementation. *Neural Computation*, *13*, 1923–1974.
- Fusi, S., & Mattia, M. (1999). Collective behavior of networks with linear integrate and fire neurons. *Neural Computation*, *11*, 633–652.
- Gerstner, W. (1995). Time structure of the activity in neural network models. *Physical Review E*, *51*, 738–758.
- Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states and locking. *Neural Computation*, *12*, 43–89.
- Hansel, D., & Sompolinsky, H. (1996). Chaos and synchrony in a model of a hypercolumn in visual cortex. *Journal of Computational Neuroscience*, *3*, 7–34.
- Knight, B. (1972). Dynamics of encoding in a population of neurons. *Journal of General Physiology*, *59*, 734–766.
- Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., & Rotter, S. (2008). Correlations and population dynamics in cortical networks. *Neural Computation*, *20*, 2185–2226.
- Krukowski, A., & Miller, K. (2000). Thalamocortical NMDA conductances and intracortical inhibition can explain cortical temporal tuning. *Nature Neuroscience*, *4*, 424–430.
- Lampl, I., Reichova, I., & Ferster, D. (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. *Neuron*, *22*, 361–374.
- Lei, H., Riffell, J., Gage, S., & Hildebrand, J. (2009). Contrast enhancement of stimulus intermittency in a primary olfactory network and its behavioral significance. *Journal of Biology*, *8*, 21.
- Mazzoni, A., Broccard, F., Garcia-Perez, E., Bonifazi, P., Ruaro, M., & Torre, V. (2007). On the dynamics of the spontaneous activity in neuronal networks. *PLoS One*, *2*(5), e439.
- Murthy, A., & Humphrey, A. (1999). Inhibitory contributions to spatiotemporal receptive-field structure and direction selectivity in simple cells of cat area 17. *Journal of Neurophysiology*, *81*, 1212–1224.
- Newhall, K., Kovačič, G., Kramer, P., & Cai, D. (2010). Cascade-induced synchrony in stochastically driven neuronal networks. *Physical Review E*, *82*, 041903.
- Nykamp, D., & Tranchina, D. (2000). A population density approach that facilitates large scale modeling of neural networks: analysis and application to orientation tuning. *Journal of Computational Neuroscience*, *8*, 19–50.
- Omurtag, A., Knight, B., & Sirovich, L. (2000). On the simulation of a large population of neurons. *Journal of Computational Neuroscience*, *8*, 51–63.
- Petermann, T., Thiagarajan, T., Lebedev, M., Nicolelis, M., Chialvo, D., & Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. *Proceedings of the National Academy of Sciences*, *106*, 15921–15926.
- Rangan, A., & Cai, D. (2007). Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. *Journal of Computational Neuroscience*, *22*(1), 81–100.
- Rangan, A., & Young, L. (2012). A network model of V1 with collaborative activity. *PNAS*, submitted.
- Rangan, A., & Young, L. (2013a). Dynamics of spiking neurons: between homogeneity and synchrony. *Journal of Computational Neuroscience*, *34*(3), 433–460. doi:10.1007/s10827-012-0429-1.
- Rangan, A., & Young, L. (2013b). Emergent dynamics in a model of visual cortex. *Journal of Computational Neuroscience*. doi:10.1007/s10827-013-0445-9.
- Renart, A., Brunel, N., & Wang, X. (2004). Mean field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In *Computational Neuroscience: A Comprehensive Approach*.
- Riffell, J., Lei, H., & Hildebrand, J. (2009). Neural correlates of behavior in the moth Manduca sexta in response to complex odors. *Proceedings of the National Academy of Sciences*, *106*, 19219–19226.
- Riffell, J., Lei, H., Christensen, T., & Hildebrand, J. (2009). Characterization and coding of behaviorally significant odor mixtures. *Current Biology*, *19*, 335–340.
- Samonds, J., Zhou, Z., Bernard, M., & Bonds, A. (2005). Synchronous activity in cat visual cortex encodes collinear and cocircular contours. *Journal of Neurophysiology*, *95*, 2602–2616.
- Sillito, A. (1975). The contribution of inhibitory mechanisms to the receptive field properties of neurons in the striate cortex of the cat. *Journal of Physiology*, *250*, 305–329.
- Singer, W. (1999). Neuronal synchrony: a versatile code for the definition of relations? *Neuron*, *24*, 49–65.
- Sompolinsky, H., & Shapley, R. (1997). New perspectives on the mechanisms for orientation selectivity. *Current Opinion in Neurobiology*, *7*, 514–522.
- Sun, Y., Zhou, D., Rangan, A., & Cai, D. (2010). Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics. *Journal of Computational Neuroscience*, *28*, 247–266.
- Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. *Network*, *4*, 259–284.
- Wilson, H., & Cowan, J. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. *Biophysical Journal*, *12*, 1–24.
- Wilson, H., & Cowan, J. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. *Kybernetik*, *13*, 55–80.
- Wörgötter, F., & Koch, C. (1991). A detailed model of the primary visual pathway in the cat: comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity. *Journal of Neuroscience*, *11*, 1959–1979.
- Yu, Y., & Ferster, D. (2010). Membrane potential synchrony in primary visual cortex during sensory stimulation. *Neuron*, *68*, 1187–1201.
- Yu, S., Yang, H., Nakahara, H., Santos, G., Nikolic, D., & Plenz, D. (2011). Higher-order interactions characterized in cortical activity. *Journal of Neuroscience*, *31*, 17514–17526.
- Zhang, J., Rangan, A., Cai, D., et al. (In preparation). A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony.
- Zhou, D., Sun, Y., Rangan, A., & Cai, D. (2008). Network induced chaos in integrate-and-fire neuronal ensembles. *Physical Review E*, *80*(3), 031918.