# Statistical properties of superimposed stationary spike trains

## Abstract

The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for several second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability and ISI correlations, and demonstrate the match with the *in vivo* data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.

## Keywords

Point process · Population activity · Spike train variability · Serial interval correlations · Spike train simulation · Network simulation

## 1 Introduction

A neuron embedded in a cortical network receives incoming spike trains from thousands of presynaptic neurons (Binzegger et al. 2004) (for a recent review on cortical connectivity see Boucsein et al. 2011). In order to model the summed input a neuron receives from its presynaptic partners it is therefore required to study superpositions of spike trains (see Fig. 1(b)). Refractoriness in a single spike train can be described in the framework of renewal processes (Cox 1962). In contrast to the superposition of Poisson processes, however, the superposition of renewal processes with refractoriness is not a Poisson process (Lindner 2006; Câteau and Reyes 2006) nor is it a renewal process (Cox and Smith 1954), complicating the analysis. The gamma process is an often employed renewal process that can model single spike trains with effective refractoriness (Kuffler et al. 1957). Recently Ostojic (2011) demonstrated that spike trains of spiking neurons driven by fluctuating input generally resemble gamma processes. Superpositions of gamma processes, however, are hard to analyze and simulation results only provide limited insights. Thus it is desirable to find a description of the spiking of cortical neurons at an intermediate level of detail between the Poisson process, which neglects all properties of the inter-spike-interval (ISI) distribution except for the mean, and the gamma process, which allows a good fit of the neuronal ISI histograms. Here, we use the Poisson process with dead time (PPD) as such an intermediate model (Johnson 1996). The PPD is a simple extension of the Poisson process, which produces spikes with equal probability at any time, except for a fixed duration of silence after each event. This time span is called the dead-time. The PPD has been used successfully before as a model of the discharges of auditory nerve fibers (Johnson and Swami 1983). 
Note, however, that in the current work the dead-time is used to model the effective refractoriness of a cortical neuron, which is on the order of tens of milliseconds. It is the simplicity of the PPD, in contrast to the gamma process and other renewal processes, that enables us to obtain the analytical results on the statistics of superimposed processes which are presented here.

Stochastic point processes can be described by a hazard function, which defines the stochastic intensity, typically conditioned on time and the spike history. Therefore, the hazard function is often also called the conditional intensity of the process. In the special case of a renewal process the hazard function only depends on the time that has passed since the last spike, which is also called the age. Given a hazard function that depends only on the age, an ensemble of processes will tend towards an equilibrium distribution of ages, which is called the stochastic equilibrium of the process. However, renewal processes can be generalized to inhomogeneous renewal processes by introducing a time dependence in the hazard function. For instance, in the case of the PPD, the dead-time can be fixed, while the rate parameter can be made time-dependent to model non-stationary input to a neuron. For such time-dependent input, ensembles of PPDs display stochastic transients, like overshoots of the firing rate in response to rapid changes of the input, which are caused by the dead-time. Due to the changing hazard function this point process operates far from its stochastic equilibrium, but can still be understood and analyzed quantitatively (Deger et al. 2010) because of its relative simplicity. Stochastic transients caused by refractoriness have been found to contribute to the precision of the neuronal response to fluctuating input (Berry and Meister 1998). Effective refractoriness is, by means of a spike history term in the conditional intensity function, commonly incorporated into nonstationary point process models (Kass and Ventura 2001; Meyer and van Vreeswijk 2002) and into generalized linear point process models of neuronal stimulus encoding (Paninski 2004; Pillow et al. 2008). Also in multivariate point process models, the spike history was found to be important for the statistical prediction of spike times (Truccolo et al. 2010), see Truccolo (2010) for an overview.
In the absence of a stimulus, however, the spontaneous neuronal activity can often be well described by stationary point processes. For spontaneous activity, the concept of encoding is not applicable since it is unclear which quantities are encoded in the neuronal activity. But also beyond applications in neuronal coding, point process theory is instrumental to characterize neuronal spiking, in particular when it comes to comparing real brains with network models. Recurrent networks must be self-consistent: Superimposed spike trains constitute the input to a neuron, the response (output) of which must be compatible with the properties of its input (Câteau and Reyes 2006).

Here we investigate the statistics of superposition spike trains with stationary rates, both analytically for PPDs and numerically for superimposed spike trains from *in vivo* recordings. In Section 3.1 we demonstrate how the parameters of the PPD can be chosen, by the method of moments (Tuckwell 1988), to accurately reproduce first- and second-order statistics of the spike trains of single neurons recorded *in vivo*. We investigate second-order statistical properties, in particular the Fano factor, the coefficient of variation of the ISIs, and the serial correlations between subsequent ISIs. These quantities are called second-order statistics since they involve first and second moments of the respective probability distributions. In Section 3.2 we introduce the auto-correlation function of the PPD, and Section 3.3 presents an analytical expression for the Fano factor depending on the counting window. In Section 3.4 we study the pooled spike trains from populations of neurons with effective refractoriness. We compare superpositions of the recorded spike trains and find that the corresponding superimposed model spike trains match their statistics remarkably well, much in contrast to the Poisson process. In models of recurrent networks, mean field theory can be applied to theoretically estimate the spike rate in the network (Brunel 2000), but relies on the assumption that individual neurons spike like Poisson processes. In Section 3.5 we show that the firing rate of integrate-and-fire model neurons is in fact sensitive to refractoriness in the single spike train and explain the observed deviation compared to Poisson input.

To date, the superpositions of point processes other than Poisson had to be generated by superimposing numerous realizations of the single point process. If each simulated neuron is to receive independently generated superposition spike trains (corresponding to the Poisson spike trains used, for example, in Brunel (2000)) this generation procedure would slow down the simulation to an unbearable extent. Here we present novel algorithms which efficiently generate superpositions of arbitrary numbers of PPDs (Algorithm 1) and of gamma processes (Algorithm 2) in discretized time. The two generators require on the order of 10 to 100 times the number of computations that a Poisson process generator does. This factor is independent of the number of superimposed processes, which makes it feasible to use superpositions of PPD or gamma processes as population models in contemporary and future simulation studies.
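The central idea behind such a generator can be illustrated as follows: in each time bin it suffices to track how many component processes are currently in their dead-time, so the work per time step is independent of *n*. The sketch below is a minimal illustration of this principle only, not the paper's Algorithm 1; all function and variable names are ours, and it assumes *d* ≥ Δ*t*.

```python
import numpy as np

def superposed_ppd_train(n, lam, d, dt, steps, rng=None):
    """Sketch: discrete-time superposition of n PPDs (rate lam, dead-time d).

    Per time step the cost is O(1) in n: spikes in a bin are drawn from a
    binomial over the currently non-refractory components, and a circular
    buffer releases components whose dead-time has expired.
    Assumes d >= dt. Returns the spike count per time bin.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = lam * dt                                        # spike prob. of a free component per bin
    buf = np.zeros(int(round(d / dt)) + 1, dtype=int)   # circular buffer of refractory components
    n_ref = 0                                           # number of components in dead-time
    counts = np.empty(steps, dtype=int)
    for t in range(steps):
        i = t % len(buf)
        n_ref -= buf[i]                 # components whose dead-time just expired become free
        k = rng.binomial(n - n_ref, p)  # spikes among the non-refractory components
        buf[i] = k                      # remember them until their dead-time is over
        n_ref += k
        counts[t] = k
    return counts
```

Drawing a single binomial per bin, instead of iterating over all *n* components, is what makes the cost per step independent of the population size.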

## 2 Materials and methods

### 2.1 *In vivo* neuron recordings

We analyze spike trains recorded intracellularly from rat neocortex neurons *in vivo*, as published in Nawrot et al. (2007). We estimate the time-dependent spike rate with a Gaussian filter kernel with parameter *σ* = 2.5 s. From the original dataset consisting of the spike trains of eight neurons, we select the three spike trains which show the lowest rate variability and at least 500 spikes. These three spike trains are labeled neurons 1, 2 and 3 in the following. Table 1 lists the parameters characterizing the spike trains. Neurons 1 and 2 were recorded from female rats, neuron 3 from a male rat.

Statistical measures of the spike trains for neurons 1–3, and the parameters of the matched renewal processes

| | Unit | Neuron 1 | Neuron 2 | Neuron 3 |
|---|---|---|---|---|
| Total number of spikes | 1 | 3,959 | 989 | 531 |
| Kernel-estimated rate | s\(^{-1}\) | 12.29 ± 0.64 | 10.86 ± 0.59 | 9.45 ± 0.48 |
| Total duration | s | 321.7 | 90.2 | 55.8 |
| Inter-spike-interval: \(\hat{\mu}\pm\hat{\sigma}\) | ms | 81.3 ± 24.5 | 91.3 ± 44.5 | 105.4 ± 36.3 |
| Matched PPD: \(\lambda\) | s\(^{-1}\) | 40.83 | 22.48 | 27.56 |
| Matched PPD: \(d\) | ms | 56.79 | 46.84 | 69.09 |
| Matched PPD: \(\bar{d}=d/\mu\) | 1 | 0.70 | 0.51 | 0.66 |
| Matched gamma process: shape \(p\) | 1 | 11.01 | 4.21 | 8.43 |
| Matched gamma process: rate | s\(^{-1}\) | 135.49 | 46.14 | 80.04 |

To check whether the serial interval correlations affect the statistical quantities we compute from the spike trains throughout the manuscript, we also analyze shuffled versions of the original spike trains, in which serial interval correlations are removed (Nawrot et al. 2007). To shuffle a spike train, we compute the inter-spike-intervals (ISI), randomly permute them, and take the cumulative sum of the permuted ISIs as the shuffled spike train.
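This shuffling procedure amounts to a few lines of NumPy; the function name below is ours:

```python
import numpy as np

def shuffle_spike_train(spikes, rng=None):
    """Shuffle a spike train by randomly permuting its inter-spike
    intervals (ISIs) and taking their cumulative sum. This removes
    serial interval correlations but preserves the ISI distribution."""
    rng = np.random.default_rng() if rng is None else rng
    spikes = np.asarray(spikes, dtype=float)
    isi = np.diff(spikes)          # inter-spike intervals
    rng.shuffle(isi)               # random permutation of the ISIs
    return spikes[0] + np.concatenate(([0.0], np.cumsum(isi)))
```

By construction, the first and last spike times and the multiset of ISIs are unchanged; only their order is randomized.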

### 2.2 Superposition spike train and surrogate data generation

To construct superpositions of *n* component processes from the recorded *in vivo* data, we split the neuron spike trains into *n* fragments of equal duration. In each of the fragments, the time of the beginning of the fragment is subtracted from each spike time. Then the fragments are superimposed, as depicted in Fig. 1(b). Thereby we consider the fragments of the spike train of the neuron as independent realizations of the same point process in equilibrium. We match the parameters of three different point processes to the recorded spike trains: a PPD as described in Section 3.1, a gamma process as in Appendix A and a Poisson process, which is defined by the rate of spikes only. In Figs. 2, 3 and 4 the error bars denote the standard deviation from the mean across multiple realizations of the matched processes. Each of these realizations has the same duration as the original, unfragmented recording. Since the recorded spike trains are of finite duration, all statistical quantities we compute for the spike trains are estimates. We quantify their variance due to the finite duration of the recording from the statistics across many realizations of the matched processes of the same length.
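The fragment-and-superimpose construction can be sketched as follows (function name ours):

```python
import numpy as np

def superimpose_fragments(spikes, n, duration):
    """Split one spike train of length `duration` into n fragments of
    equal duration, align each fragment to time zero, and merge them
    into a single superposition spike train (cf. Fig. 1(b))."""
    frag_len = duration / n
    spikes = np.asarray(spikes, dtype=float)
    frag_idx = np.minimum((spikes // frag_len).astype(int), n - 1)
    aligned = spikes - frag_idx * frag_len   # subtract each fragment's start time
    return np.sort(aligned)
```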

## 3 Results

### 3.1 Inter-spike interval statistics

In this section we match the Poisson process with dead-time (PPD) to recorded neural spiking activity by means of the inter-spike interval (ISI) statistics (Tuckwell 1988). The PPD is a renewal process, which means each ISI is drawn independently from the same distribution. The three recorded spike trains show serial interval correlations, as can be seen in Fig. 4(d)–(f) for *n* = 1, which has also been reported in Nawrot et al. (2007). Since renewal processes by definition do not have serial interval correlations, they can only be an approximate model of the spike trains’ statistics. We will evaluate this model a posteriori using surrogate methods. In Figs. 2, 3 and 4 results computed from the shuffled spike trains are shown as circles, whereas results computed from the original spike trains are shown as crosses. As can be seen in Fig. 4(d)–(f) for *n* = 1, ISI shuffling efficiently removed the serial correlations in the single spike trains.

The ISI density of the PPD is

\[ f(x) = \theta(x-d)\,\lambda\,e^{-\lambda(x-d)}, \tag{1} \]

where *θ*(*x*) = {1 for *x* ≥ 0, 0 else} denotes the Heaviside function, *λ* ≥ 0 is the rate parameter and *d* ≥ 0 is the dead-time, during which no spikes can occur. The first two central moments of the ISI density are

\[ \mu = d + \frac{1}{\lambda}, \tag{2} \]

\[ \sigma^{2} = \frac{1}{\lambda^{2}}, \tag{3} \]

where *μ* denotes the mean and *σ*^{2} denotes the variance of the ISI. The equilibrium rate of the process is \(\nu=1/\mu=\lambda/(1+\lambda d)\), which for *d* = 0 reduces to *λ*. The coefficient of variation (CV) of the ISI is \({\rm{CV}}=\sigma/\mu=1-d/\mu\). Since 0 ≤ *d* ≤ *μ* it follows that CV ≤ 1, which means that the PPD is generally more regular than the Poisson process. A PPD can be associated to the stationary spiking of a neuron given empirical estimates of mean and standard deviation of the neural ISI, \(\hat{\mu}\) and \(\hat{\sigma}\). By matching the central moments of the ISI (Eqs. (2) and (3)), we obtain the parameters of the PPD as \(\lambda=1/\hat{\sigma}\) and \(d=\hat{\mu}-\hat{\sigma}\).
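This method-of-moments fit is a one-liner per parameter; the snippet below (function name ours) reproduces the PPD parameters of neuron 1 from Table 1:

```python
def match_ppd(mu_hat, sigma_hat):
    """Method-of-moments fit of a PPD: the ISI of a PPD is a shifted
    exponential with mean mu = d + 1/lambda and standard deviation
    sigma = 1/lambda (Eqs. (2) and (3)), so both parameters follow
    directly from the empirical ISI moments."""
    lam = 1.0 / sigma_hat        # rate parameter (1/s)
    d = mu_hat - sigma_hat       # dead-time (s); requires mu_hat >= sigma_hat
    return lam, d

# Neuron 1 from Table 1: mu = 81.3 ms, sigma = 24.5 ms
lam, d = match_ppd(0.0813, 0.0245)
```

Note that the fit requires \(\hat{\mu}\geq\hat{\sigma}\), i.e. an empirical CV of at most 1, which holds for all three recorded neurons.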

### 3.2 Auto-correlation function

The auto-correlation function *γ*(*t*) of a stationary point process describes the rate of spikes at time *t*, given that a spike occurred at *t* = 0. For *t* > 0 it can be written as \(\gamma(t)=\sum_{k=1}^{\infty}f_{k}(t)\), where *f* _{ k }(*t*) is the density of the *k*-th order interval (Holden 1976). In case of a renewal process the density of the *k*-th order interval can be written as \(f_{k}=f^{\ast k}\), where *f* is the first-order ISI density, which here is given by Eq. (1), and \(\ast k\) denotes the *k*-fold convolution. For the PPD this yields

\[ \gamma(t) = \sum_{k=1}^{\infty} \theta(t-kd)\, \frac{\lambda^{k}\,(t-kd)^{k-1}}{(k-1)!}\, e^{-\lambda(t-kd)} . \]

Due to the factor *θ*(*t* − *kd*) each of the terms is restricted to *t* > *kd*, which explains the distinct domains of the function. For *t* ∈ (0, *d*) the auto-correlation is zero, followed by a jump at *t* = *d* to the value *λ*, from which it decays exponentially for *t* ∈ [*d*, 2*d*). In the following intervals [*kd*, (*k* + 1)*d*), *k* > 1, higher order terms of the type \(t^{k-1}e^{-\lambda t}\) are added sequentially. For large *t*, the effect of the spike at *t* = 0 becomes negligible, therefore \(\lim_{t\to\infty}\gamma(t)=\nu\). The auto-correlation function of a recorded neuron and the one of the associated PPD are shown in Fig. 2(d)–(f). Discrepancies of this model to the actual shape of the neuron's auto-correlation are obvious. As in Fig. 2(a)–(c), the PPD captures the fact that neurons are refractory, but neglects the details of how the neuron recovers from refractoriness. The auto-correlation functions of the matched gamma processes are very similar to the ones of the recorded neuronal spike trains. Shuffling of the neuronal spike trains resulted in minor improvements by removing serial correlations. The matched Poisson processes are not shown in Fig. 2(d)–(f) since their auto-correlation functions are constant and equal to *ν* for *t* > 0, not showing any refractoriness.

### 3.3 Count variance and Fano factor

Some statistical measures of spike trains are based on the spike count in a certain time window. For instance the Fano factor, a frequently invoked measure to quantify irregularity of neuronal activity, is defined as the variance over the mean of the spike count. Since electrophysiological recordings are necessarily of limited duration, a counting window of limited length must be chosen. However, the choice of the counting window can influence the value of the Fano factor. Typically, the dependence of the Fano factor on the counting window cannot easily be computed for an arbitrary point process. Recently Farkhooi et al. (2011) presented a general formula for the Fano factor of stationary point processes as a function of the counting window, and its limit for large windows (Eq. (21)). Here we use a different method: we consider the spike count as a shot noise with a rectangular filter kernel and compute the variance of the count based on Campbell's theorem (Campbell 1909; Tetzlaff et al. 2008). By this approach we obtain the same result as Farkhooi et al. (2011) for the Fano factor of the spike count. However, our approach can also be used with other kernels and will allow us to compute the variance of the membrane potential of neurons driven by PPDs below. In Nawrot et al. (2008) the Fano factor of gamma processes has been computed by the same method, which in that case required numerical integration. The simplicity of the PPD enables us to compute this dependence analytically here.

We consider the spike count *X* _{ l } in a counting window of length *l* for the PPD. Counting events is equivalent to evaluating a shot noise (Papoulis 1991) with a rectangular filter kernel \(h(t)=1_{t\in[0,l]}\), where \(1_{Z}\) = {1 if *Z*, 0 else}, driven by the spike train *S*(*t*), such that \(X_{l}=(S\star h)(t)\). Campbell's theorem then yields the count variance \({\rm{Var}}[X_{l}]\) in terms of the auto-correlation function of the PPD; the resulting expression (Eq. (14)) consists of a sum of terms *ξ* _{ k }, in which *γ*(*a*, *b*) denotes the incomplete gamma function. If *l* < *d*, at most one spike can fall into the counting window, so the count is Bernoulli-distributed and the count variance simplifies to \({\rm{Var}}[X_{l}]=\nu l(1-\nu l)\); the Fano factor \({\rm{FF}}_{l}=1-\nu l\) then decreases linearly in *l*. For *l* → 0, Eq. (13) yields FF_{0} = 1. Expression (14) is particularly interesting for small counting windows *l* < *d*, since in this case the sum over *ξ* _{ k } on the right of Eq. (14) is empty. As *l* increases, more terms *ξ* _{ k } are added to Eq. (14). For *l* → ∞ we use the identity (21) and the property of vanishing serial interval correlations, valid for any renewal process, to obtain \({\rm{FF}}_{\infty}={\rm{CV}}^{2}=(1-d/\mu)^{2}\).

The dependence of the Fano factor on the length of the counting window for neural spike trains compared to matched PPDs and gamma processes is shown in Fig. 3. Apart from slight deviations around the kink of the curve at *l* ≈ 1.3*d*, neurons 1 and 3 and their associated gamma processes follow Eq. (14) exactly. For neuron 2 the deviations are larger, but Eq. (14) still gives a reasonable estimate. We attribute the increased deviations in neuron 2 to the fact that the inter-spike intervals of neuron 2 have stronger serial correlations than those of neurons 1 and 3, as shown in Fig. 4(e) for *n* = 1, which is incompatible with a renewal process model.
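The large-window behavior is easy to verify numerically: for a renewal process the Fano factor approaches CV² for long counting windows, which for the PPD equals (1 − *d*/*μ*)². A minimal sketch (function names ours), using that the PPD's ISI is a dead-time plus an exponential:

```python
import numpy as np

def ppd_spike_train(lam, d, n_isi, rng):
    """Draw a PPD spike train: each ISI is the dead-time d plus an
    exponential interval with rate lam (cf. Eq. (1))."""
    isi = d + rng.exponential(1.0 / lam, size=n_isi)
    return np.cumsum(isi)

def fano_factor(spikes, window):
    """Estimate FF = Var[X_l] / E[X_l] from counts in disjoint windows."""
    edges = np.arange(0.0, spikes[-1], window)
    counts, _ = np.histogram(spikes, bins=edges)
    return counts.var() / counts.mean()

rng = np.random.default_rng(42)
spikes = ppd_spike_train(lam=100.0, d=0.01, n_isi=200_000, rng=rng)
ff = fano_factor(spikes, window=1.0)
# for d/mu = 0.5 the large-window limit is (1 - d/mu)^2 = 0.25
```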

### 3.4 Superpositions

Superpositions of PPDs are a model of the summed synaptic input of neurons in neuronal networks. From the assumption that each presynaptic neuron spikes according to a PPD, the statistics of the summed input follow. Consider the superposition \(\sum_{i=1}^{n}S_{i}(t)\) of *n* independent and identically distributed renewal processes *S* _{ i }(*t*). The variance of the superposition’s count in the window *l* is by Eq. (10) \({\rm{Var}}[\sum_{i=1}^{n}(S_{i}\star h)]=n{\rm{Var}}[X_{l}]\), and the mean count is *n*E[*X* _{ l }]. It follows that the Fano factor of an independent superposition does not depend on *n*, so it is identical to the Fano factor of the component processes. This, however, does not hold for all statistics of the superimposed spike train.
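The invariance of the Fano factor under superposition can be checked directly in simulation; the sketch below (function names ours) compares a single PPD with a 20-fold superposition of the same components:

```python
import numpy as np

rng = np.random.default_rng(1)

def ppd_train(lam, d, n_isi, rng):
    # PPD spike times: ISIs are dead-time plus exponential (cf. Eq. (1))
    return np.cumsum(d + rng.exponential(1.0 / lam, size=n_isi))

def fano(spikes, t_max, window):
    counts, _ = np.histogram(spikes, bins=np.arange(0.0, t_max, window))
    return counts.var() / counts.mean()

t_max = 1000.0
single = ppd_train(100.0, 0.01, 60_000, rng)                 # one PPD, rate 50/s
merged = np.sort(np.concatenate(
    [ppd_train(100.0, 0.01, 60_000, rng) for _ in range(20)]))  # n = 20 superposition
ff1 = fano(single, t_max, 1.0)
ff20 = fano(merged, t_max, 1.0)   # nearly identical to ff1, despite 20x the rate
```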

In this section we derive the ISI density of the superposition of *n* PPDs, from which we obtain the coefficient of variation of the ISI, enabling us to determine the serial correlations in the superposition as well. The ISI density *f* of a component process is given by Eq. (1). According to Cox (1962) the ISI density of an *n*-fold superposition of independent and identically distributed renewal processes in equilibrium can be expressed in terms of *f* and its survival function; evaluating this expression for the PPD yields the ISI density of the superposition (Eq. (20)) and from it the coefficient of variation CV_{ n }. The CV_{ n } of the superposition only depends on the relative refractoriness *d*/*μ* and the number of component processes. As is easily seen, in the limit of large numbers of component processes CV_{ n } approaches 1, the value of the Poisson process. The superposition of *n* renewal processes is, in general, not a renewal process, because serial correlations of subsequent ISIs occur. We can use the previous results to quantify the magnitude of serial correlations in the superposition spike train: the sum over serial correlations of all orders is accessible through the relation (Cox and Lewis 1966)

\[ {\rm{FF}}_{\infty} = {\rm{CV}}^{2}\Big(1 + 2\sum_{k=1}^{\infty}\rho_{k}\Big), \tag{21} \]

where *ρ* _{ k } is the correlation coefficient of *k*-th neighbor ISIs. Relation (21) is valid for any stationary point process. With Eqs. (20) and (15) this yields the total serial correlation \(\sum_{k=1}^{\infty}\rho_{k}\) in the superposition of PPDs (Eq. (22)).

Figure 4(d)–(f) shows the total serial correlation of the superimposed spike trains as a function of the number of component processes *n*. For *n* = 1 the figures show the total serial correlation of the single neuronal spike train. For neuron 1 this is small and negative but non-zero, neuron 2 has larger negative serial correlations, and neuron 3 has small positive serial correlations (Nawrot et al. 2007). The matched renewal processes have independent subsequent ISIs, and hence for these the total serial correlation vanishes by construction at *n* = 1. Nonetheless, for *n* > 1 the data for neuron 1 and the matched gamma process agree with the analytical result for the PPD (22) within two standard errors or better. For neurons 2 and 3 the larger serial correlations of the recorded spike trains introduce systematic deviations, but the data still follow Eq. (22) approximately. In order to investigate to what extent serial interval correlations of the single spike trains cause these deviations we shuffled the intervals of the original spike trains, removing serial interval correlations altogether (Nawrot et al. 2007). For the shuffled spike trains the serial correlations of the superpositions are closer to the analytical result for the PPD for all three neurons. These results show that the superposition of PPDs is an appropriate model for serial correlations of superpositions of spike trains with small serial correlations. In contrast, for the Poisson process model these serial correlations do not exist, since a superposition thereof is again a Poisson process.

In the limit of large *n*, which is the case of superpositions of many component processes, we obtain from Eq. (22) the approximation Eq. (23). Cortical neurons typically receive on the order of *n* ≈ 5,000 synaptic inputs (Binzegger et al. 2004; Boucsein et al. 2011). The serial correlation magnitude of the superimposed spike trains can be well approximated by Eq. (23) in these cases.
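The convergence of the superposition towards Poisson-like interval statistics can be checked in a few lines (function names ours): the ISI coefficient of variation of a single PPD with *d*/*μ* = 0.5 is 0.5, while that of a 50-fold superposition is close to the Poisson value of 1.

```python
import numpy as np

rng = np.random.default_rng(7)

def ppd_train(lam, d, n_isi, rng):
    # PPD spike times: ISIs are dead-time plus exponential (cf. Eq. (1))
    return np.cumsum(d + rng.exponential(1.0 / lam, size=n_isi))

def isi_cv(spikes):
    """Coefficient of variation of the ISIs of a (merged) spike train."""
    isi = np.diff(np.sort(spikes))
    return isi.std() / isi.mean()

cv1 = isi_cv(ppd_train(100.0, 0.01, 100_000, rng))   # single PPD: CV = 1 - d/mu = 0.5
merged = np.concatenate([ppd_train(100.0, 0.01, 20_000, rng) for _ in range(50)])
cv50 = isi_cv(merged)                                 # 50-fold superposition: CV near 1
```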

### 3.5 Effects on integrate-and-fire neurons

We simulate leaky integrate-and-fire (LIF) model neurons with membrane time constant *τ* and resistance *R*. Whenever the membrane potential reaches the threshold *U* _{ θ }, the neuron elicits a spike and the membrane potential is reset to *U* _{ r } = 0. After producing a spike the neuron cannot receive input for the duration of the absolute refractory period *τ* _{ r }. The input current *I*(*t*) is brought about by excitatory and inhibitory point events. Each input spike elicits a *δ*-shaped postsynaptic current, which leads to a jump of the membrane potential that relaxes back exponentially. The jump amplitude of excitatory input spikes is *w*, of inhibitory input spikes it is − *gw*:

\[ R\,I(t) = \tau w \Big( \sum_{i}\delta(t-t_{i}) - g \sum_{j}\delta(t-t_{j}) \Big), \]

where *i* and *j* index the excitatory and inhibitory input spikes that the neuron receives, respectively. The input is scaled by *τ* to let the membrane potential jump by *w* or − *gw*, respectively, upon each input spike.
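A discrete-time sketch of this neuron model (Euler updates with *δ*-shaped synapses; function and argument names are ours, default parameter values taken from Table 2):

```python
import numpy as np

def simulate_lif(exc_spk, inh_spk, w=0.1, g=4.5, tau=0.015, u_th=15.0,
                 u_r=0.0, tau_r=0.001, dt=5e-5, t_max=1.0):
    """Minimal LIF sketch with delta synapses: each excitatory
    (inhibitory) input spike makes U jump by +w (-g*w); between input
    spikes U decays exponentially with time constant tau. During the
    absolute refractory period the neuron ignores all input.
    exc_spk / inh_spk: arrays of input spike counts per time step.
    Returns the output spike times."""
    steps = int(t_max / dt)
    decay = np.exp(-dt / tau)
    u, ref, out = u_r, 0, []
    for t in range(steps):
        if ref > 0:                 # absolute refractory period: no input integrated
            ref -= 1
            continue
        u = u * decay + w * exc_spk[t] - g * w * inh_spk[t]
        if u >= u_th:               # threshold crossing: spike and reset
            out.append(t * dt)
            u = u_r
            ref = int(round(tau_r / dt))
    return np.array(out)
```

With the superposition counts per bin as `exc_spk` and `inh_spk`, this reproduces the simulation setup described in the text in spirit, though not the exact implementation used for the figures.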

The response of the membrane potential *U*(*t*) upon the impulse input *RI*(*t*) = *δ*(*t*) is called the impulse response *h*(*t*), which in this case evaluates to \(h(t)=\frac{1}{\tau}e^{-t/\tau}\theta(t)\).

The variance of the membrane potential driven by a superposition of *n* PPDs with mean ISI *μ*, dead-time *d* and synaptic amplitude *w* follows from a similar calculation as in the case before (see Appendix D for details), given in Eq. (29), which reduces to the Poisson result for *d* = 0. So relative to a membrane potential *U*′ driven by a superposition of *n* Poisson processes with the same mean ISI *μ*, the dead-time in the input processes reduces the variance by the factor \({\rm{Var}}[U]/{\rm{Var}}[U^{\prime}]\) given by Eq. (29). Here we consider purely excitatory input with weight *w*, corresponding to *g* = 0 in Eq. (25). A mixture of inputs with different synaptic weights is considered below in Eq. (34).

To study the limiting behavior of Eq. (29), we first let the component rate 1/*μ* grow, while keeping the relative dead-time \(\bar{d}=d/\mu\) constant, with \(\bar{d}\in[0,1]\), to obtain Eq. (30). The opposite limit of small component rates yields Eq. (31), in which the dead-time has no effect because the ISIs become long compared to the membrane time constant *τ*. Another interesting limit of Eq. (29) is the completely regular process with *d* = *μ*, for which we obtain the relative variance Eq. (32).

The input rates *ν* _{e} and *ν* _{i} were chosen to bring the neuron into the fluctuation-driven regime (van Vreeswijk and Sompolinsky 1996), see also Table 2. Given these parameters, in case of Poisson process inputs and in the absence of a spiking threshold, the free membrane potential has the equilibrium moments \({\rm{E}}[U]=\tau w(\nu_{\rm{e}}-g\nu_{\rm{i}})=10.0\,{\rm{mV}}\) and \({\rm{Var}}[U]=\frac{\tau}{2}w^{2}(\nu_{\rm{e}}+g^{2}\nu_{\rm{i}})=12.5\,{\rm{mV}}^{2}\).
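As a quick numerical check, these moments follow directly from the parameter values in Table 2 (variable names ours):

```python
tau = 0.015        # membrane time constant (s)
w = 0.1            # synaptic weight (mV)
g = 4.5            # relative inhibitory weight
nu_e = 35_757.6    # excitatory input rate (1/s)
nu_i = 6_464.6     # inhibitory input rate (1/s)

mean_U = tau * w * (nu_e - g * nu_i)             # E[U]   -> 10.0 mV
var_U = 0.5 * tau * w**2 * (nu_e + g**2 * nu_i)  # Var[U] -> 12.5 mV^2
```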

Parameters of the simulation of leaky integrate-and-fire (LIF) model neurons and their input

| | Symbol | Unit | Value |
|---|---|---|---|
| Excitatory input rate | \(\nu_{\rm{e}}\) | s\(^{-1}\) | 35,757.6 |
| Inhibitory input rate | \(\nu_{\rm{i}}\) | s\(^{-1}\) | 6,464.6 |
| Synaptic weight | \(w\) | mV | 0.1 |
| Relative inhibitory weight | \(g\) | 1 | 4.5 |
| Membrane time constant | \(\tau\) | ms | 15 |
| Threshold potential | \(U_{\theta}\) | mV | 15 |
| Reset / equilibrium potential | \(U_{r}\) | mV | 0 |
| Simulation time step | \(\Delta t\) | ms | 0.05 |
| Refractory period of neurons | \(\tau_{r}\) | ms | 1 |
| Number of neurons each (populations 1, 2) | | 1 | 100 |
| Simulation duration | | s | 1,000 |
| Duration of recording of \(U\) | | s | 100 |

To provide the excitatory and inhibitory input at rates *ν* _{e} and *ν* _{i} as superpositions of PPD components, each with rate 1/*μ* and dead-time *d*, we choose superpositions of \(n_{\rm{e}}=\nu_{\rm{e}}\mu\) and \(n_{\rm{i}}=\nu_{\rm{i}}\mu\) component processes, respectively, for given *μ* and *d*. In the absence of a spiking mechanism (Fig. 5(a), population 1) of the receiving neuron, refractoriness in the input spike trains decreases the variance of the membrane potential. Driving input composed of independent excitatory and inhibitory superpositions of PPDs results in the membrane potential variance given by Eq. (34).

To determine how the refractoriness in the input spike trains affects the spiking of neurons, we simulated LIF neurons that emit an action potential if the voltage reaches a threshold *U* _{ θ } as defined above (Fig. 5(a), population 2). Figure 5(c) shows the dependence of the firing rate of the LIF neurons on the relative dead-time *d*/*μ* of the input processes, keeping the total input rate and the rate 1/*μ* of a single process constant. Data are shown for six different values of the component process rate 1/*μ*, indicated by the colors of the curves. The case of *d* = 0 here reflects the commonly used Poisson process. With increasing dead-time the firing rate of the LIF neurons first rapidly decreases. This corresponds to a decrease in variance of the free membrane potential as can be seen in Fig. 5(d). The initial decrease in firing rate of the LIF neurons is followed by a slight increase that saturates (for all but the yellow and cyan curves, see below) as the component spike trains become completely clock-like as *d* → *μ*.

Figure 5(b) shows the stationary distribution of the membrane potential at a fixed component rate 1/*μ* for five values of *d*. We observe that *d* changes the shape of the stationary distribution of the membrane potentials, most visibly around the peak of the distribution and at the threshold. The distribution of membrane potentials determines the rate and response properties of the neuron (Helias et al. 2010b), ultimately giving rise to the rate dependence shown in Fig. 5(c). From the inset in Fig. 5(b) it can be seen that close to the spiking threshold, the distribution of membrane potentials does not go to zero linearly, as a diffusion approximation would predict (Gerstner and Kistler 2002). This is also the case for Poisson input with *d* = 0 (blue curve) and can be explained by the time-discretization of the simulation (Helias et al. 2010a) and the small but non-vanishing synaptic weight (Helias et al. 2010b). Apart from this phenomenon, all curves with *d* > 0 show a decreased probability density close to threshold. The firing rate of the neuron depends strongly on the values of the distribution in this range, as we recently illustrated in a focused review article (Helias et al. 2011). The change of the shape of the probability density close to threshold explains the significant decrease in the firing rates in Fig. 5(c). However, the yellow and cyan curves in Fig. 5(c), where 1/*μ* = 14 s^{ − 1} and 17 s^{ − 1}, deviate from the other three since they do not saturate after the initial decrease, but continue to rise. A similar trend can also be seen in the green (8 s^{ − 1}), red (11 s^{ − 1}) and orange (20 s^{ − 1}) curves, which ultimately saturate, but rise a little at first. This effect is related to the auto-covariance function of the input process, cf. Fig. 2(d)–(f), and will be discussed in detail based on Fig. 6 below.

The relative variances shown in Fig. 5(d) can further be related to the asymptotics of Eq. (29) derived above. All four curves show the same maximum of the variance for *d* = 0, which also corresponds to the limit of small rates of the input processes (Eq. (31)). For small *d* the curves then follow the limiting case of infinite component rate Eq. (30) (dotted line), but the slopes soon decrease in magnitude to saturate at their respective limiting value Eq. (32) (dashed lines). Note that the PPDs matched to the recorded neurons above have *d*/*μ* ≈ 0.6 (see Table 1), which is well described by the limiting case Eq. (32). The parameters of the neuron model and the input processes are shown in Table 2.

In both Fig. 5(c) and (d) we also included the results we obtained by using a gamma process superposition generator instead of the PPD one. Here we used gamma processes with integer shape parameter *p* which ranged from 1 to 10. When matched to the moments of a PPD, this corresponds to a relative dead-time of \(\bar{d}=1-p^{-1/2}\), irrespective of the scale parameter of the gamma process. The results for the ten different gamma process superposition inputs are displayed as circles in the figures, showing a very similar trend both concerning the membrane potential variance and the firing rate of the stimulated neurons. Because the gamma process has a different auto-correlation function than the PPD, the analytical result for the reduction of the membrane potential variance, Eq. (29), is not valid for the superposition of gamma processes. Still the variance reduction follows a similar law in this case. The error bars in Fig. 5(c) display the standard deviation of the firing rate estimate across simulated neurons in population 2.
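The correspondence between the gamma shape parameter and the PPD's relative dead-time follows from matching the ISI coefficients of variation: a gamma process with shape *p* has CV = *p*^{ − 1/2}, and for the PPD, CV = 1 − *d*/*μ*. A one-line check (function name ours):

```python
def relative_dead_time(p):
    """Relative dead-time of the PPD whose ISI CV matches a gamma
    process with shape p: gamma CV = p**-0.5 and PPD CV = 1 - d/mu,
    hence d/mu = 1 - p**-0.5, independent of the gamma scale parameter."""
    return 1.0 - p ** -0.5

# shape p = 1 is the Poisson process (d = 0); larger p means more refractory
dbars = [relative_dead_time(p) for p in range(1, 11)]
```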

To better understand the non-monotonic effect of increasing dead-time of the component input processes on the firing rate of LIF neurons, displayed in Fig. 5(c), we investigated the power spectral densities (PSD) of the input, the membrane potential and the neuronal output spike trains, for five values of \(\bar{d}=d/\mu\) and the previously chosen input component rates, in Fig. 6. The PSD of the PPD, of independent superpositions of PPDs and of the membrane potential driven by them is known analytically, as described in Appendix D. Figure 6(a) shows the PSD of the superpositions of excitatory and inhibitory PPDs which are used as input to the simulated LIF neurons. For \(\bar{d}=0\) the PSD is flat, as it should be for a Poisson process. As \(\bar{d}\) increases, peaks emerge in the power spectrum at frequencies which are roughly multiples of 1/*d*. These correspond to the oscillations in the auto-correlation function, shown in Fig. 2(d)–(f), because the auto-covariance Eq. (38) is the Fourier transform of the PSD according to the Wiener–Khintchine theorem. The colors of the six curves correspond to different component process rates 1/*μ* as in Fig. 5(c) and (d). Note that the maximum value of the PSD is identical for all rates at a fixed \(\bar{d}\). Figure 6(b) displays the PSD of the membrane potential of neurons in population 1 (which do not spike). The membrane acts as a low-pass filter with a gain decreasing as ∼ 1/*f* ^{2} beyond the cutoff frequency 2*π*/*τ*. Accordingly, the peaks in the input PSD (Fig. 6(a)) are diminished more and more for larger frequencies.

The output spike trains of the LIF neurons in population 2, however, show a different characteristic, as can be seen from their PSD shown in Fig. 6(c). For the Poisson input case \(\bar{d}=0\) the PSD of the spike trains is low for small frequencies and gradually approaches the stationary firing rate, which is a sign of the effective refractoriness of these neurons. As the oscillatory components in the input signal increase for rising \(\bar{d}\), the peaks in the input PSD (Fig. 6(a)) become visible in the neuronal spike trains (Fig. 6(c)), indicating that the oscillatory input modulates the outgoing firing rate. However, although the input amplitude at peak frequency is invariant, the output amplitude at peak frequency depends on the peak frequency, showing maximum transmission at about 15 Hz in the red curve in subplot \(\bar{d}=0.6\) and in the yellow curve in subplot \(\bar{d}=0.8\). This effect might be at least partly explained by linear response theory of the LIF neuron (Ledoux and Brunel 2011), which has shown that in the regime of sufficiently low fluctuations of the membrane, resonances of the transmission gain appear near the firing rate of the neuron.

The resonance of the LIF neuron, however, coincides with an increase of the mean firing rate (Fig. 5(c)) when the peak of the input PSD comes close to the resonance frequency. An increase in mean rate cannot in general be a linear effect of oscillatory input, since the mean input is unchanged by the oscillatory components. Still, a qualitative explanation can be given here by considering the PPD with a time-dependent (sine-modulated) hazard function and dead-time *d* _{n} as a simple model for the LIF neuron. In Deger et al. (2010) this model system has been analyzed for general periodic inputs, revealing multiplicative couplings of input frequency components in the output rate. In particular, Fig. 3(c) of Deger et al. (2010) shows that the mean rate (*β* _{0}) of a PPD with sine-modulated hazard has local maxima at frequencies of about 0.41/*d* _{n} and 0.88/*d* _{n}. For a more quantitative argument, in the following we need to relate the statistics of the modulated PPD to the spiking of the LIF neurons.

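The sine-modulated PPD invoked above can be simulated directly, for instance by thinning a Poisson process of candidate events and enforcing the dead-time explicitly. The sketch below is our own illustrative construction (the values of `h0`, `eps`, `f_mod` and `d_n` are arbitrary, not fitted), not the procedure of Deger et al. (2010):

```python
import numpy as np

rng = np.random.default_rng(2)

def ppd_inhomogeneous(hazard, h_max, d, t_max, rng):
    """Spike times of a dead-time process with time-dependent hazard,
    generated by thinning: candidate events arrive at rate h_max and
    are accepted with probability hazard(t)/h_max, unless the process
    is still within the dead-time d after the last accepted spike."""
    t, last, spikes = 0.0, -np.inf, []
    while True:
        t += rng.exponential(1.0 / h_max)
        if t > t_max:
            return np.array(spikes)
        if t - last < d:                   # still refractory
            continue
        if rng.uniform() < hazard(t) / h_max:
            spikes.append(t)
            last = t

# sine-modulated hazard (illustrative values)
h0, eps, f_mod, d_n = 50.0, 0.8, 10.0, 0.015
hazard = lambda t: h0 * (1.0 + eps * np.sin(2.0 * np.pi * f_mod * t))
spikes = ppd_inhomogeneous(hazard, h0 * (1.0 + eps), d_n, 500.0, rng)
rate = spikes.size / 500.0
```

The envelope `h_max` must dominate the hazard everywhere for the thinning step to be valid; the dead-time is imposed on top by rejecting all candidates within `d` of the last accepted spike.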
Results of the analysis of the inter-spike intervals of the LIF neurons for several \(\bar{d}\) (colors) as a function of the input component rate 1/*μ* are shown in Fig. 6(d). As the input component rate 1/*μ* increases, the mean neuronal ISI *μ* _{n} grows. This general trend is due to the continuous decrease in the relative variance of the membrane potential (Eq. (29)) with increasing 1/*μ*. In the range between 8 and 20 Hz, shown in the inset, there are local minima of *μ* _{n} for larger \(\bar{d}\), which correspond to the rise of *ν* = 1/*μ* _{n} with growing \(\bar{d}\) in Fig. 5(c). The second subplot shows the coefficient of variation CV_{n} of the neuronal ISIs. For the larger values of \(\bar{d}\), 0.6 and 0.8, it changes non-monotonically. In the region 1/*μ* < 10 s^{ − 1}, the CV_{n} decreases, presumably because the variance of the input is continuously reduced. The mean integration time of the LIF neuron here is *μ* _{n} ≈ 0.1 s, so on average it integrates less than one spike of each component input process. For 1/*μ* > 10 s^{ − 1}, however, the integration period *μ* _{n}, which also grows slowly, covers an increasing number of spikes of each input PPD on average, which seems to gradually increase the CV_{n}.

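The quantities *μ* _{n} and CV_{n} analyzed above are elementary ISI statistics. For reference, a minimal helper with sanity checks on a perfectly regular train and on a Poisson train (both example trains are synthetic, not the simulated LIF data):

```python
import numpy as np

def isi_stats(spike_times):
    """Mean and coefficient of variation of the inter-spike intervals."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    mu_n = isi.mean()
    cv_n = isi.std(ddof=0) / mu_n
    return mu_n, cv_n

# a perfectly regular train: mean ISI 0.1 s, CV = 0
mu_reg, cv_reg = isi_stats(np.arange(10) * 0.1)

# a Poisson train: CV should be close to 1
rng = np.random.default_rng(3)
poisson = np.cumsum(rng.exponential(0.01, size=20000))
mu_p, cv_p = isi_stats(poisson)
```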
Matching a PPD to the LIF neurons’ spike trains via Eq. (6) yields the parameters *λ* _{n} and *d* _{n} shown in the bottom two subplots of Fig. 6(d). The matched value of *d* _{n} hence depends on \(\bar{d}\) and 1/*μ*. Local maxima of the output spike rate are expected around frequencies of 0.41/*d* _{n} and 0.88/*d* _{n}. Given the range of values of *d* _{n} matched to the LIF neurons, the resonances of the mean rate are located in the frequency ranges between 8.1 and 17.4 Hz and between 17.3 and 37.4 Hz, respectively. The peaks of the input PSD (Fig. 6(a)) lie well within these frequency ranges. Hence the existence of resonances of the mean firing rate in Fig. 5(c) can be explained by the resonance properties of the non-stationary matched PPD. The same argument extends to the resonance observed in the first harmonic (Fig. 6(c)) and higher harmonics, which also show local maxima of the transmission gain at certain frequencies in the PPD model (Deger et al. 2010). Further studies, which investigate the effects of component dead-time in the input spike trains on the dynamics of LIF neurons in more detail, are necessary to explain this phenomenon quantitatively.

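The moment matching behind this procedure can be sketched as follows. Since the ISI of a PPD is a fixed dead-time *d* _{n} plus an exponential interval with rate *λ* _{n}, its mean is *d* _{n} + 1/*λ* _{n} and its standard deviation is 1/*λ* _{n}; equating these with the empirical ISI mean *μ* _{n} and standard deviation *σ* _{n} gives *λ* _{n} = 1/*σ* _{n} and *d* _{n} = *μ* _{n} − *σ* _{n}. We assume here that this is the matching intended by Eq. (6); it also makes explicit why CV ≤ 1 is required:

```python
import numpy as np

def match_ppd(spike_times):
    """Match a Poisson process with dead-time (PPD) to a stationary
    spike train by equating mean and variance of the ISI distribution.
    For the PPD the ISI is a fixed dead-time d_n plus an Exp(lambda_n)
    interval, so  mean = d_n + 1/lambda_n  and  std = 1/lambda_n."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    mu_n, sigma_n = isi.mean(), isi.std(ddof=0)
    if sigma_n > mu_n:
        raise ValueError("CV > 1: a stationary PPD cannot match this train")
    lam_n = 1.0 / sigma_n
    d_n = mu_n - sigma_n
    return lam_n, d_n

# recover the parameters of a simulated PPD (lambda = 100/s, d = 5 ms)
rng = np.random.default_rng(4)
isis = 0.005 + rng.exponential(0.01, size=50000)
lam_n, d_n = match_ppd(np.cumsum(isis))
```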
## 4 Discussion

We have demonstrated how a PPD can be associated with a stationary neuronal spike train by matching the mean and variance of the ISI (Tuckwell 1988). The PPD is the simplest possible extension of the Poisson process that captures effective refractoriness. Due to the simplicity of the PPD, we uncovered the functional dependence of the Fano factor (FF) on the length of the counting window. Our analytical result for the PPD is in good agreement both with the gamma process and with the neuronal data, which suggests that effective refractoriness is the key to understanding this functional dependence. In contrast to the Poisson process, which has FF = 1, the FF of the PPD, and of independent superpositions of PPDs, is generally smaller than unity. As a model for a population of independently spiking neurons, the independent superposition of PPDs is therefore more accurate in terms of count variability.

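The window dependence of the FF is easy to illustrate numerically (the analytic expression, Eq. (14), is not reproduced here, and the parameter values below are our own). For a PPD with rate *ν*, a counting window *T* shorter than the dead-time holds at most one spike, so the count is Bernoulli with FF = 1 − *νT*; for long windows the FF of a renewal process approaches CV² = (1 − *d*/*μ*)²:

```python
import numpy as np

rng = np.random.default_rng(5)

# one long PPD realization: dead-time d = 6 ms, mean ISI mu = 10 ms
d, lam, t_max = 0.006, 250.0, 2000.0       # mean ISI = d + 1/lam = 0.01 s
isis = d + rng.exponential(1.0 / lam, size=int(1.2 * t_max / (d + 1.0 / lam)))
spikes = np.cumsum(isis)
spikes = spikes[spikes < t_max]

def fano_factor(spikes, window, t_max):
    """Variance/mean of spike counts in disjoint windows of given length."""
    counts, _ = np.histogram(spikes, bins=np.arange(0.0, t_max, window))
    return counts.var(ddof=0) / counts.mean()

windows = [0.005, 0.02, 0.1, 0.5]
ff = [fano_factor(spikes, w, t_max) for w in windows]
```

Here the short-window value is FF ≈ 1 − 100 Hz · 5 ms = 0.5, and the long-window value approaches CV² = 0.4² = 0.16, always below the Poisson value of 1.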
Considering the ISI density of a superposition of PPDs, we find that it converges rapidly to the exponential distribution. Correspondingly, the coefficient of variation (CV) of the ISI converges to 1 for large numbers of superimposed processes. This, however, does not mean that the process becomes a Poisson process: the superposition of PPDs still differs from the Poisson process with respect to its FF, its auto-correlation function and its serial interval correlations. For large counting windows, the deviations of the FF can be explained by the serial interval correlations through Eq. (21). But already for small counting windows, the FF of PPDs differs from that of the Poisson process, see Eq. (14). Moreover, the analytical dependence of the CV on the number of superimposed processes agrees with the neuronal spike data and the gamma realizations, which hints again at effective refractoriness being the key to understanding the second-order statistics of the process.

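The convergence of the pooled CV towards 1 can be reproduced numerically (a sketch with illustrative parameters of our own choosing; the component trains have CV = 0.4):

```python
import numpy as np

rng = np.random.default_rng(6)

def ppd_train(lam, d, t_max, rng):
    """One PPD realization, generated in a vectorized way."""
    n_max = int(1.5 * t_max / (d + 1.0 / lam)) + 10
    spikes = np.cumsum(d + rng.exponential(1.0 / lam, size=n_max))
    return spikes[spikes < t_max]

d, lam, t_max = 0.06, 25.0, 1000.0   # per-train mean ISI 0.1 s, CV = 0.4

cv = {}
for n in (1, 4, 20, 100):
    pooled = np.sort(np.concatenate([ppd_train(lam, d, t_max, rng)
                                     for _ in range(n)]))
    isi = np.diff(pooled)
    cv[n] = isi.std(ddof=0) / isi.mean()
```

The pooled CV grows with the number of components and is close to 1 already for a hundred superimposed trains, even though, as argued above, the pooled process is still not Poisson.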
Finally, the total serial interval correlation between subsequent ISIs in superpositions of neural spike trains is accurately predicted by our analytic result. Serial correlations in neuronal spike trains have been reported frequently (see Farkhooi et al. 2009 for an overview). As has been shown by Muller et al. (2007) and Schwalger et al. (2010), they can result from spike-frequency adaptation. In superimposed spike trains, however, the total serial correlation is due to another effect, which can be illustrated by the superposition of two spike trains: given the spike train of one neuron, a superimposed spike of another neuron divides an ISI of the first neuron into two parts that add up to a fixed length. Because one of the parts is generally longer than the other, the two resulting intervals are negatively correlated. A similar argument holds for a superposition of *n* spike trains. The detailed mechanisms by which serial correlations and effective refractoriness in the input spike trains affect the membrane potential dynamics of LIF neurons remain to be investigated. Our simulation results show that the variance and the shape of the equilibrium distribution of membrane potentials, as well as the stationary firing rate of integrate-and-fire neurons with balanced excitatory and inhibitory input, are significantly affected.

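The two-train argument can be checked directly: the lag-one serial correlation of the pooled ISIs is clearly negative, while a single renewal train shows none (a sketch with our own illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

def ppd_train(lam, d, t_max, rng):
    """One PPD realization, generated in a vectorized way."""
    n_max = int(1.5 * t_max / (d + 1.0 / lam)) + 10
    spikes = np.cumsum(d + rng.exponential(1.0 / lam, size=n_max))
    return spikes[spikes < t_max]

def serial_correlation(isi, lag=1):
    """Pearson correlation between ISIs that are `lag` intervals apart."""
    return np.corrcoef(isi[:-lag], isi[lag:])[0, 1]

d, lam, t_max = 0.06, 25.0, 5000.0   # per-train mean ISI 0.1 s, CV = 0.4

# superposition of two refractory trains: negative lag-1 correlation
pooled = np.sort(np.concatenate([ppd_train(lam, d, t_max, rng)
                                 for _ in range(2)]))
rho_super = serial_correlation(np.diff(pooled))

# a single renewal train has independent ISIs, for comparison
rho_single = serial_correlation(np.diff(ppd_train(lam, d, t_max, rng)))
```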
We have applied the PPD as a model for the spike trains of three somatosensory cortical neurons with a coefficient of variation CV < 1. This means that the modeled spike trains are more regular than Poisson processes. Only neurons with this property can be modeled by a stationary PPD. In contrast, neurons in the prefrontal cortex of monkeys typically show CV > 1 (Shinomoto et al. 2003), which cannot be achieved with the stationary PPD according to Eq. (5). It might be possible, though, to capture such increased irregularity by a PPD with a time-dependent rate parameter, see for example Turcott et al. (1994) and Deger et al. (2010). Regularly spiking neurons with CV < 1, for which the presented results apply, are the majority in motor and premotor regions (Shinomoto et al. 2003) and somatosensory regions (Nawrot et al. 2007) of the cortex.

Another vividly debated topic is the ability of spiking neurons to transmit correlations in the input spike trains to output spikes (De la Rocha et al. 2007; Rosenbaum and Josic 2011; Renart et al. 2010). It has been shown that correlation transmission depends on the auto-correlation functions of the input spike trains (Tetzlaff et al. 2008). Generally, the auto-correlation function of a superposition of independent spike trains is the sum of the auto-correlation functions of the single processes. The latter are, as we demonstrated, closely linked to the effective refractoriness of the neurons.

In models of recurrent networks, mean-field theory can be applied to estimate theoretically the spike rate of leaky integrate-and-fire (LIF) neurons in a recurrent neuronal network (Brunel 2000). There, the spike rate of each neuron is assumed to be the same and is obtained as the self-consistent solution of the input-to-output rate mapping of a single neuron. In the analytical derivation of the firing rate of a neuron given its input rates, several assumptions are made (Brunel 2000), one of them being that the spike trains of the neurons in the network have Poisson statistics. In fact, it has been shown that the choice of a particular point process as input to a neuron has an impact on the dynamics of the membrane potential (Câteau and Reyes 2006). In Section 3.5 we have shown that the firing rate of LIF neurons is sensitive to the refractoriness of the single spike trains, and we explain the observed deviation from the Poisson-input case. Theoretical estimates of the self-consistent mean-field firing rate of recurrent neuronal networks could thus be improved by taking the refractoriness of the single neurons into account.

Refractoriness in the input processes of LIF neurons can also be interpreted as a “colored-noise” problem. As can be seen in Fig. 6(a), the PSD of the input to the neurons is not flat (“white”) as it is for driving Poisson processes. For small dead-time in the input (\(\bar{d}=0.2\)) the PSD is reduced at small frequencies and gradually increases towards 1/*d*, similar to the PSD of high-pass filtered white noise (also called “green” noise). The complementary case of LIF neurons driven by low-pass filtered white noise (“red” noise) has been treated previously (Brunel and Sergi 1998; Lindner 2004; Moreno-Bote and Parga 2010). An extended Fokker–Planck equation to treat the case of “green” noise effectively as “white minus red” noise has been suggested by Câteau and Reyes (2006). For larger input dead-time (\(\bar{d}\geq0.4\)), however, the oscillatory character of the input signal becomes more influential, and a description based on “green” noise alone does not suffice, since the PSD of “green” noise lacks the pronounced peaks of the input PSD of PPD superpositions (Fig. 6(a)).

We found that the spike trains of LIF neurons driven by superpositions of PPDs show resonances to certain frequency components of the input (Fig. 6(c)). When the input power at the resonance frequency becomes large (for large \(\bar{d}=d/\mu\)), the mean firing rate of the neurons also increases (Fig. 5(c)). This effect cannot be explained by linear response theory of the LIF neuron. Qualitatively, we explained the change of the mean firing rate by regarding the LIF neuron itself effectively as a PPD with a time-dependent hazard function, which transmits signals non-linearly (Deger et al. 2010). However, this effect might be visible here only because the PSD of the input concentrates high power in a narrow frequency band. If the dead-times of the input component processes are heterogeneous, the input PSD is less concentrated and might not provoke this non-linear transmission effect. For neurons in cortical networks it is more reasonable to assume heterogeneous rather than homogeneous input processes, suggesting that the change of the mean firing rate for large \(\bar{d}\) seen here is a hallmark of a rather extreme scenario.

In summary, we have shown that the PPD is an adequate model for the stationary spike trains of single cortical neurons *in vivo*, and a very good model for the pooled spike trains of homogeneous neuronal populations. This is in contrast to the established Poisson process (without dead-time), which does not account for the correct auto-correlation, count variability, ISI variability, and serial interval correlations. We showed that these properties indeed affect the dynamics of the membrane potential of LIF neurons. For simulations in discrete time, homogeneous superpositions of PPDs and of gamma processes can be generated efficiently by the methods we present in Algorithms 1 and 2. The PPD and gamma superposition generators have been implemented in the Neural Simulation Tool (NEST, Gewaltig and Diesmann 2007), which was used to obtain the simulation results presented in this work.
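
Algorithms 1 and 2 themselves are given earlier in the paper (and implemented in NEST). As a rough illustration of why such generators can be efficient, the following sketch of our own draws the pooled spike count of *n* PPDs in discrete time while storing only the dead-time countdowns of currently refractory components, never the individual trains. It assumes *λh* ≪ 1 (at most one spike per component per step) and is not the paper's Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(8)

def superposition_ppd_counts(n, lam, d, h, steps, rng):
    """Discrete-time spike counts of a superposition of n PPDs.
    In each time step of length h, every non-refractory component fires
    with probability lam*h; a component that fires becomes refractory
    for round(d/h) steps. Only the refractory countdowns are stored."""
    d_steps = int(round(d / h))
    p = lam * h                              # assumes lam*h << 1
    countdown = np.zeros(0, dtype=int)       # remaining dead-times
    counts = np.empty(steps, dtype=int)
    for t in range(steps):
        countdown -= 1
        countdown = countdown[countdown > 0]
        free = n - countdown.size            # non-refractory components
        k = rng.binomial(free, p)            # pooled spikes in this step
        counts[t] = k
        if k > 0 and d_steps > 0:
            countdown = np.concatenate([countdown, np.full(k, d_steps)])
    return counts

h, steps = 0.001, 200000
counts = superposition_ppd_counts(n=100, lam=25.0, d=0.06, h=h,
                                  steps=steps, rng=rng)
rate = counts.sum() / (steps * h)            # expect about n/(d + 1/lam) = 1000/s

# count variability in 100 ms windows stays below the Poisson value of 1
windows = counts.reshape(-1, 100).sum(axis=1)
ff = windows.var(ddof=0) / windows.mean()
```

The per-step cost depends on the number of refractory components (here about *n·d*/*μ* ≈ 60), not on the length of the individual spike trains, which is the kind of bookkeeping that makes population-level generators attractive.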

## Notes

### Acknowledgements

We thank Martin Nawrot and Stefano Cardanobile for helpful comments, and two anonymous reviewers for suggesting substantial improvements. Partially funded by BMBF grant 01GQ0420 to BCCN Freiburg, and DFG grant to SFB 780, subproject C4.

### Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

## References

- Berry, M. J., & Meister, M. (1998). Refractoriness and neural precision. *Journal of Neuroscience, 18*(6), 2200–2211.
- Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. *Journal of Neuroscience, 24*(39), 8441–8453.
- Boucsein, C., Nawrot, M. P., Schnepel, P., & Aertsen, A. (2011). Beyond the cortical column: Abundance and physiology of horizontal connections imply a strong role for inputs from the surround. *Frontiers in Neuroscience, 5*(32), 1–13.
- Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. *Journal of Computational Neuroscience, 8*(3), 183–208.
- Brunel, N., & Sergi, S. (1998). Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. *Journal of Theoretical Biology, 195*(1), 87–95.
- Campbell, N. (1909). The study of discontinuous phenomena. *Proceedings of the Cambridge Philosophical Society, 15*, 117–136.
- Câteau, H., & Reyes, A. (2006). Relation between single neuron and population spiking statistics and effects on network activity. *Physical Review Letters, 96*(5), 058101.
- Cox, D. R. (1962). *Renewal theory*. London: Methuen.
- Cox, D. R., & Lewis, P. A. W. (1966). *The statistical analysis of series of events. Methuen's monographs on applied probability and statistics*. London: Methuen.
- Cox, D. R., & Smith, W. L. (1954). On the superposition of renewal processes. *Biometrika, 41*(1–2), 91–99.
- Deger, M., Helias, M., Cardanobile, S., Atay, F. M., & Rotter, S. (2010). Nonequilibrium dynamics of stochastic point processes with refractoriness. *Physical Review E, 82*(2), 021129.
- De la Rocha, J., Doiron, B., Shea-Brown, E., Josić, K., & Reyes, A. (2007). Correlation between neural spike trains increases with firing rate. *Nature, 448*(16), 802–807.
- Farkhooi, F., Muller, E., & Nawrot, M. P. (2011). Adaptation reduces variability of the neuronal population code. *Physical Review E, 83*(5), 050905.
- Farkhooi, F., Strube-Bloss, M. F., & Nawrot, M. P. (2009). Serial correlation in neural spike trains: Experimental evidence, stochastic modeling, and single neuron variability. *Physical Review E, 79*(2), 021905.
- Gerstein, G. L., & Kiang, N. Y. S. (1960). An approach to the quantitative analysis of electrophysiological data from single neurons. *Biophysical Journal, 1*(1), 15–28.
- Gerstner, W., & Kistler, W. (2002). *Spiking neuron models: Single neurons, populations, plasticity*. Cambridge: Cambridge University Press.
- Gewaltig, M. O., & Diesmann, M. (2007). NEST (NEural Simulation Tool). *Scholarpedia, 2*, 1430.
- Helias, M., Deger, M., Diesmann, M., & Rotter, S. (2010a). Equilibrium and response properties of the integrate-and-fire neuron in discrete time. *Frontiers in Computational Neuroscience, 3*(29), 1–17.
- Helias, M., Deger, M., Rotter, S., & Diesmann, M. (2010b). Instantaneous non-linear processing by pulse-coupled threshold units. *PLoS Computational Biology, 6*(9), e1000929.
- Helias, M., Deger, M., Rotter, S., & Diesmann, M. (2011). Finite post synaptic potentials cause a fast neuronal response. *Frontiers in Neuroscience, 5*(19), 1–16.
- Heyman, D. P., & Sobel, M. J. (1982). *Stochastic models in operations research* (Vol. I). New York: McGraw-Hill.
- Holden, A. V. (1976). Models of the stochastic activity of neurones. In *Lecture notes in biomathematics*. Berlin: Springer.
- Johnson, D. H. (1996). Point process models of single-neuron discharges. *Journal of Computational Neuroscience, 3*(4), 275–299.
- Johnson, D. H., & Swami, A. (1983). The transmission of signals by auditory-nerve fiber discharge patterns. *Journal of the Acoustical Society of America, 74*(2), 493–501.
- Kass, R., & Ventura, V. (2001). A spike-train probability model. *Neural Computation, 13*(8), 1713–1720.
- Kuffler, S. W., Fitzhugh, R., & Barlow, H. B. (1957). Maintained activity in the cat's retina in light and darkness. *Journal of General Physiology, 40*(5), 683–702.
- Ledoux, E., & Brunel, N. (2011). Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. *Frontiers in Computational Neuroscience, 5*(25), 1–17.
- Lindner, B. (2004). Interspike interval statistics of neurons driven by colored noise. *Physical Review E, 69*, 022901.
- Lindner, B. (2006). Superposition of many independent spike trains is generally not a Poisson process. *Physical Review E, 73*(2), 022901.
- Maimon, G., & Assad, J. A. (2009). Beyond Poisson: Increased spike-time regularity across primate parietal cortex. *Neuron, 62*(3), 426–440.
- Meyer, C., & van Vreeswijk, C. (2002). Temporal correlations in stochastic networks of spiking neurons. *Neural Computation, 14*(2), 369–404.
- Moreno-Bote, R., & Parga, N. (2010). Response of integrate-and-fire neurons to noisy inputs filtered by synapses with arbitrary timescales: Firing rate and correlations. *Neural Computation, 22*(6), 1528–1572.
- Muller, E., Buesing, L., Schemmel, J., & Meier, K. (2007). Spike-frequency adapting neural assemblies: Beyond mean adaptation and renewal theories. *Neural Computation, 19*(11), 2958–3010.
- Nawrot, M. P., Boucsein, C., Rodriguez Molina, V., Aertsen, A., Grün, S., et al. (2007). Serial interval statistics of spontaneous activity in cortical neurons *in vivo* and *in vitro*. *Neurocomputing, 70*(10–12), 1717–1722.
- Nawrot, M. P., Boucsein, C., Rodriguez Molina, V., Riehle, A., Aertsen, A., et al. (2008). Measurement of variability dynamics in cortical spike trains. *Journal of Neuroscience Methods, 169*(2), 374–390.
- Ostojic, S. (2011). Interspike interval distributions of spiking neurons driven by fluctuating inputs. *Journal of Neurophysiology, 106*(1), 361–373.
- Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding models. *Network: Computation in Neural Systems, 15*(4), 243–262.
- Papoulis, A. (1991). *Probability, random variables, and stochastic processes* (3rd ed.). New York: McGraw-Hill.
- Picinbono, B. (2009). Output dead-time in point processes. *Communications in Statistics - Simulation and Computation, 38*(10), 2198–2213.
- Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., et al. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. *Nature, 454*(7207), 995–999.
- Renart, A., De La Rocha, J., Bartho, P., Hollender, L., Parga, N., et al. (2010). The asynchronous state in cortical circuits. *Science, 327*(5965), 587–590.
- Rosenbaum, R., & Josić, K. (2011). Mechanisms that modulate the transfer of spiking correlations. *Neural Computation, 23*(5), 1261–1305.
- Schwalger, T., Fisch, K., Benda, J., & Lindner, B. (2010). How noisy adaptation of neurons shapes interspike interval histograms and correlations. *PLoS Computational Biology, 6*(12), e1001026.
- Shinomoto, S., Shima, K., & Tanji, J. (2003). Differences in spiking patterns among cortical neurons. *Neural Computation, 15*(12), 2823–2842.
- Tetzlaff, T., Rotter, S., Stark, E., Abeles, M., Aertsen, A., et al. (2008). Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. *Neural Computation, 20*(9), 2133–2184.
- Truccolo, W. (2010). Stochastic models for multivariate neural point processes: Collective dynamics and neural decoding. In S. Rotter, & S. Grün (Eds.), *Analysis of parallel spike trains*. Berlin: Springer.
- Truccolo, W., Hochberg, L. R., & Donoghue, J. P. (2010). Collective dynamics in human and monkey sensorimotor cortex: Predicting single neuron spikes. *Nature Neuroscience, 13*(1), 105–113.
- Tuckwell, H. C. (1988). *Introduction to theoretical neurobiology* (Vol. 2). Cambridge: Cambridge University Press.
- Turcott, R. G., Lowen, S. B., Li, E., Johnson, D. H., Tsuchitani, C., et al. (1994). A nonstationary Poisson point process describes the sequence of action potentials over long time scales in lateral-superior-olive auditory neurons. *Biological Cybernetics, 70*(3), 209–217.
- van Vreeswijk, C. (2010). Stochastic models of spike trains. In S. Rotter, & S. Grün (Eds.), *Analysis of parallel spike trains*. Berlin: Springer.
- van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. *Science, 274*(5293), 1724–1726.