Journal of Computational Neuroscience, Volume 27, Issue 2, pp 177–200

Correlations in spiking neuronal networks with distance dependent connections

  • Birgit Kriener
  • Moritz Helias
  • Ad Aertsen
  • Stefan Rotter
Open Access


Abstract

Can the topology of a recurrent spiking network be inferred from observed activity dynamics? Which statistical parameters of network connectivity can be extracted from firing rates, correlations and related measurable quantities? To approach these questions, we analyze distance dependent correlations of the activity in small-world networks of neurons with current-based synapses derived from a simple ring topology. We find that in particular the distribution of correlation coefficients of subthreshold activity can distinguish random networks from networks with distance dependent connectivity. Such distributions can be estimated by sampling from random pairs. We also demonstrate the crucial role of the weight distribution, most notably the compliance with Dale's principle, for the activity dynamics in recurrent networks of different types.


Keywords: Spiking neural networks · Small-world networks · Pairwise correlations · Distribution of correlation coefficients

1 Introduction

The collective dynamics of balanced random networks has been studied extensively, assuming different neuron models as the constituent dynamical units (van Vreeswijk and Sompolinsky 1996, 1998; Brunel and Hakim 1999; Brunel 2000; Mattia and Del Giudice 2002; Timme et al. 2002; Mattia and Del Giudice 2004; Kumar et al. 2008b; Jahnke et al. 2008; Kriener et al. 2008).

Some of these models have in common that they assume random network topologies with a sparse connectivity ϵ ≈ 0.1 for a local, but large neuronal network, embedded into an “external” population that supplies unspecific white noise drive to the local network. These systems are considered minimal models for cortical networks of about 1 mm³ volume, because they can display activity states similar to those observed in vivo, such as asynchronous irregular spiking. Yet, as recently reported (Song et al. 2005; Yoshimura et al. 2005; Yoshimura and Callaway 2005), local cortical networks are characterized by a circuitry that is specific and hence non-random even on a small spatial scale. Since it is still impossible to experimentally uncover the whole coupling structure of a neuronal network, it is necessary to infer some of its features from its activity dynamics. Timme (2007), for example, studied networks of N coupled phase oscillators in a stationary phase-locked state. In these networks it is possible to reconstruct details of the network coupling matrix (i.e. topology and weights) by slightly perturbing the stationary state under different driving conditions and analyzing the network response. Here, we focus on both network structure and activity dynamics in spiking neuronal networks on a statistical level. We consider several abstract model networks that range from strict distance dependent connectivity to random topologies, and examine their activity dynamics by means of numerical simulation and quantitative analysis. We focus on integrate-and-fire neurons arranged on regular rings, random networks, and so-called small-world networks (Watts and Strogatz 1998). Small-world structures seem to be optimal brain architectures for fast and efficient inter-areal information transmission with potentially low metabolic consumption and wiring costs due to a low characteristic path length ℓ (Chklovskii et al. 2002), while at the same time they may provide redundancy and error tolerance through highly recurrent computation (high clustering coefficient \(\mathcal{C}\); for the general definitions of ℓ and \(\mathcal{C}\), cf. e.g. Watts and Strogatz (1998), Albert and Barabási (2002)). Also on an intra-areal level, cortical networks may have pronounced small-world features, as was shown in simulations by Sporns and Zwi (2004), who assumed a local Gaussian connection probability and a uniform long-range connection probability for local cortical networks, assumptions that are in line with experimental observations (Hellwig 2000; Stepanyants et al. 2007). Network topology is just one aspect of neuronal network coupling, though. Here, we also demonstrate the crucial role of the weight distribution, especially with regard to the notion that all inhibitory neurons project only hyperpolarizing synapses onto their postsynaptic targets, while excitatory neurons project only depolarizing ones. This assumption is sometimes referred to as Dale's principle (Li and Dayan 1999; Dayan and Abbott 2001; Hoppensteadt and Izhikevich 1997). Strikingly, this already has strong implications for the dynamical states of random networks (Kriener et al. 2008). Yet, the main focus of the present study is on the distance dependence and overall distribution of correlation coefficients. In particular, the joint statistics of subthreshold activity, i.e. correlations and coherences between the incoming currents that neurons integrate, has been shown to contain otherwise elusive information about network parameters, e.g. the mean connectivity in random networks (Tetzlaff et al. 2007).

The paper is structured as follows: In Section 2 we give a short description of the neuron model and the simulation parameters used throughout the paper. In Section 3 we introduce the notion of small-world networks, and in Section 4 we discuss features of the activity dynamics in dependence on the topology. In ring and small-world networks, groups of neighboring neurons tend to spike highly synchronously, while the population dynamics in random networks is asynchronous-irregular. To understand the source of these differences in the population dynamics, we analyze the correlations of the inputs of neurons in dependence on the network topology. Section 5 is devoted to the theoretical framework we apply to calculate the input correlations in dependence on the pairwise distance in sparse ring (Section 5.1) and small-world networks (Section 5.2). In Section 6 we finally derive the full distribution of correlation coefficients for ring and random networks. Random networks have rather narrow distributions centered around the mean correlation coefficient, while sparse ring and small-world networks have heavy-tailed distributions. This is due to the high probability of sharing a common input neuron if two neurons are topological neighbors, and the very low probability if they are far apart, yielding distributions with a few high correlation coefficients and many small ones. This offers a way to potentially distinguish random topologies from topologies with small-world features by their subthreshold activity dynamics on a statistical level.

2 Neuronal dynamics and synaptic input

The neurons in the network of size N are modeled as leaky integrate-and-fire point neurons with current-based synapses. The membrane potential dynamics Vk(t), k ∈ {1,...,N} of the neurons is given by
$$\tau_{\sf m}\dot{V_k}(t) = - V_k(t) + RI_k(t) $$
with membrane resistance R and membrane time constant \(\tau_{\sf m}\). Whenever Vk(t) reaches the threshold θ, a spike is emitted, Vk(t) is reset to \(V_{\sf res}\), and the neuron stays refractory for a period \(\tau_{\sf ref}\). Synaptic inputs
$$RI_{{\sf loc},k}=\tau_{\sf m}\sum\limits_{i=1}^{N} J_{ki}\sum\limits_l \delta(t-t_{il}-{\mathit{\Delta}}) $$
from the local network are modeled as δ-currents. Whenever a presynaptic neuron i fires an action potential at time til, it evokes an exponential postsynaptic potential (PSP) of amplitude
$$J_{ki}= \begin{cases} J & {\rm if\,\,the\,\,synapse}\,\,i\to{}k\,\,{\rm is\,\,excitatory,}\\ -gJ & {\rm if\,\,the\,\,synapse}\,\,i\to{}k\,\,{\rm is\,\,inhibitory,}\\ 0 & {\rm if\,\,there\,\,is\,\,no\,\,synapse}\,\,i\to{}k \end{cases}$$
after a transmission delay Δ that is the same for all synapses. Note that multiple connections between two neurons and self-connections are excluded in this framework. In addition to the local input, each neuron receives an external Poisson current \(I_{{\sf ext},k}\) mimicking inputs from other cortical areas or subcortical regions. The total input is thus given by
$$I_k(t)=I_{\text{loc},k}(t)+I_{{\sf ext},k}(t)\,. $$
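These dynamics are straightforward to integrate numerically. The following is a minimal forward-Euler sketch of a single neuron with delta-current synapses, for illustration only (the simulations in this paper use NEST, cf. Section 2.1); the function `simulate_lif` and its `psp_input` argument are our own names, and the parameter values anticipate Section 2.1.

```python
# Minimal forward-Euler sketch of the leaky integrate-and-fire dynamics
# tau_m dV/dt = -V + R I(t) with delta-current synapses: each arriving
# spike deflects V instantaneously by its summed PSP amplitude.
tau_m = 20.0     # membrane time constant (ms)
theta = 20.0     # firing threshold (mV)
V_res = 0.0      # reset potential (mV)
tau_ref = 2.0    # absolute refractory period (ms)
h = 0.1          # integration step (ms)

def simulate_lif(psp_input, T):
    """psp_input(t) -> summed PSP amplitude (mV) arriving in the step at time t.
    Returns the list of spike times (ms) over the interval [0, T)."""
    V, refr, spikes = 0.0, 0.0, []
    for step in range(int(T / h)):
        t = step * h
        if refr > 0.0:            # clamp during refractoriness after a spike
            refr -= h
            continue
        V += -(h / tau_m) * V     # leak towards the resting level 0 mV
        V += psp_input(t)         # instantaneous jumps from delta currents
        if V >= theta:            # threshold crossing: emit spike and reset
            spikes.append(t)
            V, refr = V_res, tau_ref
    return spikes
```

For instance, a constant suprathreshold barrage of +0.5 mV per time step produces regular firing, with interspike intervals given by the integration time to threshold plus the refractory period \(\tau_{\sf ref}\).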

2.1 Parameters

The neuron parameters are set to \(\tau_{\sf m}=20\) ms, R = 80 MΩ, J = 0.1 mV, and Δ = 2 ms. The firing threshold θ is 20 mV and the reset potential \(V_{\sf res}=0\) mV. After a spike event, the neurons stay refractory for \(\tau_{\sf ref}=2\) ms. If not stated otherwise, all simulations are performed for networks of size N = 12,500, with \(N_{\sf E}=10,\!000\) and \(N_{\sf I}=2,\!500\). We set the fraction of excitatory neurons in the network to \(\beta=N_{\sf E}/N=0.8\). The connectivity is set to ϵ = 0.1, such that each neuron receives exactly κ = ϵN inputs. For g = 4, inhibition hence balances excitation in the local network, while for g > 4 the local network is dominated by a net inhibition. Here, we choose g = 6. External inputs are modeled as \(K_{\sf ext}=\epsilon N_{\sf E}\) independent Poissonian sources, each with rate \(\nu_{\sf ext}\). All network simulations were performed using the NEST simulation tool (Gewaltig and Diesmann 2007) with a temporal resolution of h = 0.1 ms. For details of the simulation technique see Morrison et al. (2005).
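The balance condition can be made explicit in a few lines: the mean local drive per full presynaptic volley is proportional to β − g(1 − β), which vanishes for g = β/(1 − β) = 4 and is negative for the g = 6 used here. A small sketch with the parameters above (the function name is ours):

```python
# Mean summed PSP amplitude of one full local presynaptic volley (mV):
# kappa * J * (beta - g * (1 - beta)). It is zero for g = beta/(1-beta) = 4
# ("balanced") and negative for g = 6 ("inhibition-dominated").
N, epsilon, beta, J = 12500, 0.1, 0.8, 0.1
kappa = int(epsilon * N)     # number of local inputs per neuron

def mean_local_drive(g):
    return kappa * J * (beta - g * (1.0 - beta))

assert kappa == 1250
assert abs(mean_local_drive(4.0)) < 1e-9   # balanced at g = 4
assert mean_local_drive(6.0) < 0.0         # net inhibition at g = 6
```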

3 Structural properties of small-world networks

Many real world networks, including cortical networks (Watts and Strogatz 1998; Strogatz 2001; Sporns 2003; Sporns and Zwi 2004), possess so-called small-world features. In the framework originally studied by Watts and Strogatz (1998), small-world networks are constructed from a ring graph of size N, where all nodes are connected to their κ ≪ N nearest neighbors (“boxcar footprint”), by random rewiring of connections with probability \(p_{\sf r}\) (cf. Fig. 1(a), (b)). Watts and Strogatz (1998) characterized the small-world regime by two graph-theoretical measures, a high clustering coefficient \(\mathcal{C}\) and a low characteristic path length ℓ (cf. Fig. 1(c)). The clustering coefficient \(\mathcal{C}\) measures the transitivity of the connectivity, i.e. how likely it is that, given there is a connection between nodes i and j, and between nodes j and k, there is also a connection between nodes i and k. The characteristic path length ℓ, on the other hand, quantifies how many steps on average suffice to get from any node in the network to any other node. In the following we will analyze small-world networks of spiking neurons. Networks can be represented by the adjacency matrix A with Aki = 1 if node i is connected to node k, and Aki = 0 otherwise. We neglect self-connections, i.e. Akk = 0 for all k ∈ {1,...,N}. In the original paper by Watts and Strogatz (1998) undirected networks were studied. Connections between neurons, i.e. synapses, are, however, generically directed. We define the clustering coefficient \(\mathcal{C}\) for directed networks1 here as
$$\mathcal{C}_i=\frac{\sum_{j=1}^N\,\sum_{k=1}^N\,A_{ki}A_{\!ji}\big(A_{\!jk}+A_{kj}\big)} {2\left(\sum_{k=1}^N\,A_{ki}\right)\left(\left[\sum_{k=1}^N\,A_{ki}\right] -1\right)}\, $$
$$\mathcal{C}=\frac{1}{N}\sum\limits_{i=1}^N\,\mathcal{C}_i\,. $$
In this definition, \(\mathcal{C}\) measures the likelihood of having a connection between two neurons, given they have a common input neuron, and it is hence directly related to the amount of shared input \(\sum_{i=1}^N W_{ki}W_{li}\) between neighboring neurons l and k, where Wki are the weighted connections from neuron i to neuron k (cf. Section 2), i.e.
$$ W_{ki}=\begin{cases} J\,A_{ki} & {\rm if}\,\,\,i\,\,{\rm excitatory}\\ -gJ\,A_{ki} & {\rm if}\,\,\,i\,\,{\rm inhibitory}\\ 0 & {\rm if\,\,there\,\,is\,\,no\,\,synapse}\end{cases} \,. $$
The characteristic path length ℓ of the network graph is given by
$$ \ell=\frac{1}{N}\sum\limits_{i=1}^N \ell_{i} \,, \,\,\,\,{\rm with}\,\,\,\,\ell_{i}=\frac{1}{N-1}\sum\limits_{j\neq\,i} \ell_{ij}\,, $$
where ℓij is the shortest path between neurons i and j, i.e. \(\ell_{ij}=\min\{n\in\mathbb{N}\,:\,(A^n)_{ij}>0\}\) (Albert and Barabási 2002). The clustering coefficient is a local property of a graph, while the characteristic path length is a global quantity. This explains the relative stability of the clustering coefficient during gradual rewiring of connections: the local properties are hardly affected, whereas the introduction of random shortcuts decreases the average shortest path length dramatically (cf. Fig. 1(c)).
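Both graph measures are straightforward to evaluate numerically. The sketch below builds the directed ring adjacency matrix with a boxcar footprint, rewires presynaptic sources with probability \(p_{\sf r}\) (preserving the in-degree κ of every neuron, as in Section 2.1), and evaluates a directed clustering coefficient. As an assumption of this sketch, we normalize the numerator of \(\mathcal{C}_i\) by \(2k(k-1)\), which makes the ring reproduce the undirected value 3(κ − 2)/(4(κ − 1)) and the fully rewired graph yield ≈ κ/N, in line with the values quoted in Section 4; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_adjacency(N, kappa):
    """Directed ring: A[k, i] = 1 iff neuron i projects to neuron k, i.e. iff
    the circular distance between i and k is at most kappa/2 (boxcar)."""
    idx = np.arange(N)
    D = np.abs(idx[:, None] - idx[None, :])
    D = np.minimum(D, N - D)
    return ((D >= 1) & (D <= kappa // 2)).astype(float)

def rewire(A, p_r):
    """Replace each connection's source by a random new one with prob. p_r;
    the in-degree of every neuron is preserved."""
    A = A.copy()
    N = A.shape[0]
    for k in range(N):
        for i in np.flatnonzero(A[k]):
            if rng.random() < p_r:
                free = np.flatnonzero((A[k] == 0) & (np.arange(N) != k))
                j = int(rng.choice(free))
                A[k, i] = 0.0
                A[k, j] = 1.0
    return A

def clustering(A):
    """Mean directed clustering coefficient; numerator normalized by
    2 k (k - 1), cf. the lead-in above."""
    B = A + A.T
    C = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        a = A[:, i]                     # indicator of the targets of neuron i
        s = a.sum()
        C[i] = a @ B @ a / (2.0 * s * (s - 1.0))
    return C.mean()

A0 = ring_adjacency(200, 20)
assert abs(clustering(A0) - 3 * (20 - 2) / (4 * (20 - 1))) < 1e-9
assert abs(clustering(rewire(A0, 1.0)) - 20 / 200) < 0.05
```

The two assertions mirror the limits discussed in the text: the pure ring (\(p_{\sf r}=0\)) attains the lattice value, while full rewiring (\(p_{\sf r}=1\)) drops the clustering to the connectivity ϵ = κ/N.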
Fig. 1

A sketch of (a) the ring and (b) a small-world network with the neuron distribution we use throughout the paper for the Dale-conform networks (gray triangle: excitatory neuron, black square: inhibitory neuron, ratio excitation/inhibition \(=N_{\sf E}/N_{\sf I}=4\)). The footprint κ of the ring in this particular example is 4, i.e. each neuron is connected to its κ = 4 nearest neighbors, irrespective of their identity. To derive the small-world network we rewire connections randomly with probability \(p_{\sf r}\). Note that in the actually studied networks all connections are directed. (c) The small-world regime is characterized by a high relative clustering coefficient \(\mathcal{C}(p_{\sf r})/\mathcal{C}(0)\) and a low characteristic path length \(\ell(p_{\sf r})/\ell(0)\) (here N = 2,000, κ = 200, averaged over 10 network realizations)

4 Activity dynamics in spiking small-world networks

In a ring graph, directly neighboring neurons receive essentially the same input, as can be seen from the high clustering coefficient \(\mathcal{C}(0)=3(\kappa-2)/(4(\kappa-1))\approx 0.75\),2 which is the same as in undirected ring networks (Albert and Barabási 2002). This leads to high input correlations and synchronous spiking of groups of neighboring neurons (Fig. 2(a)). As more and more connections are rewired, the local synchrony is attenuated and we observe a transition to a rather asynchronous global activity (Fig. 2(b), (c)). The clustering coefficient of the corresponding random graph equals \(\mathcal{C}(1)=\kappa/N=\epsilon\) (here ϵ = 0.1), because the probability to be connected is always ϵ for any two neurons, independent of their adjacency (Albert and Barabási 2002). This corresponds to the strength of the input correlations observed in these networks (Kriener et al. 2008). However, the population activity still shows pronounced fluctuations at a frequency around \(\sim 1/(4{\mathit{\Delta}})\) (with the transmission delay Δ = 2 ms, cf. Section 2), even when the network is random (\(p_{\sf r}=1\), Fig. 2(c)). These fluctuations decrease dramatically if we violate Dale's principle, i.e. the constraint that any neuron either only depolarizes or only hyperpolarizes its postsynaptic targets, but never both. We refer to the latter as the hybrid scenario, in which neurons project both excitatory and inhibitory synapses (Kriener et al. 2008). Ren et al. (2007) suggest that about 30% of pyramidal cell pairs in layer 2/3 of mouse visual cortex have effective, reliable, short-latency inhibitory couplings via axo-axonic glutamate-receptor-mediated excitation of the nerve endings of inhibitory interneurons, thus bypassing dendrites, soma, and axonal trunk of the involved interneuron. These can be interpreted as hybrid-like couplings in real neural tissue.
Fig. 2

Activity dynamics for (a) a ring network, (b) a small-world network (\(p_{\sf r}=0.1\)) and (c) a random network, all of which comply with Dale's principle. (d) shows the activity in a ring network with hybrid neurons. In the Dale-conform ring network (a) we observe synchronous spiking of large groups of neighboring neurons. This is due to the high amount of shared input: neurons next to each other have essentially the same presynaptic input neurons. This local synchrony is slightly attenuated in small-world networks (b). In random networks the activity is close to asynchronous-irregular (AI), apart from network fluctuations due to the finite size of the network (c). Networks made of hybrid neurons exhibit perfect AI activity, even if the underlying connectivity is a ring graph (d). The simulation parameters were N = 12,500, κ = 1,250, g = 6, J = 0.1 mV, with \(N_{\sf I}=2,\!500\) equidistantly distributed inhibitory neurons and \(K_{\sf ext}=1\!,000\) independent Poisson inputs per neuron, each with rate \(\nu_{\sf ext}=15\) Hz (cf. Section 2)

The average rate in all four networks is hardly affected by the underlying topology or weight distribution of the networks (cf. Table 1), while the variances of the population activity are very different. This is reflected in the respective Fano factors \({\sf FF}[n(t;h)]={\sf Var}[n(t;h)]/{\sf E}[n(t;h)]\) of population spike counts \(n(t;h)=\sum_{i=1}^Nn_i(t;h)\) per time bin h = 0.1 ms, where \(n_i(t;h)=\int_{t}^{t+h}S_i(s)\,ds= \int_{t}^{t+h}\sum_l\delta(s-s_{il})\,ds\) is the number of spikes emitted by neuron i at time points \(s_{il}\) within the interval [t,t + h) (cf. Appendix A). If the population spike count n(t;h) is a compound process of independent stationary Poisson random variables ni(t;h) with parameter \(\nu_{\sf o}h\), we have
$$\begin{array}{rll} {\sf FF}[n(t;h)] &=& \frac{{\sf Var}[n(t;h)]}{{\sf E}[n(t;h)]} \\ &=& \frac{\sum_{i,j=1}^N{\sf Cov}[n_i(t;h),n_j(t;h)]}{\sum_{i=1}^N{\sf E}[n_i(t;h)]} \\ &=&\frac{N\nu_{\sf o}h}{N\nu_{\sf o}h} = 1\,, \end{array} $$
because the covariances \({\sf Cov}[n_i(t;h),n_j(t;h)]={\sf E}[n_i(t;h)n_j(t;h)]-{\sf E}[n_i(t;h)]{\sf E}[n_j(t;h)]\) are zero for all i ≠ j and the variance of the sum equals the sum of the variances. A Fano factor larger than one hence indicates positive correlations between the spiking activities of the individual neurons (cf. Appendix A) (Papoulis 1991; Nawrot et al. 2008; Kriener et al. 2008). We see (cf. Table 1) that it is indeed largest for the Dale-conform ring network, still manifestly larger than one for the Dale-conform random network, and about one for the hybrid networks in both the ring and the random case. The quantitative differences of the Fano factors in all four cases can be explained by the different amounts of pairwise spike train correlations (cf. Appendix A, Section 6). This demonstrates how a violation of Dale’s principle stabilizes, and actually enables, asynchronous irregular activity, even in networks whose adjacency, i.e. the mere unweighted connectivity, suggests highly correlated activity, as is the case for Dale-conform ring (Fig. 2(a)) and small-world networks (Fig. 2(b)).
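The effect of such common-input correlations on the population Fano factor can be illustrated with surrogate data: independent Poisson counts give FF ≈ 1 as derived above, while a shared Poisson component added to every train (a crude stand-in for shared presynaptic input, not the network model itself) inflates FF far beyond one. A sketch with our own parameter choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def fano_factor(pop_count):
    """FF[n(t;h)] = Var[n(t;h)] / E[n(t;h)] of a binned population spike count."""
    return pop_count.var() / pop_count.mean()

# N independent stationary Poisson trains: the population count per bin is
# again Poisson, so the Fano factor is close to 1.
N, rate, h, T = 100, 13.0, 1e-4, 2.0   # neurons, rate (Hz), bin (s), duration (s)
bins = int(T / h)
counts = rng.poisson(rate * h, size=(N, bins))
ff_indep = fano_factor(counts.sum(axis=0))

# A shared Poisson component added to every train introduces positive pairwise
# covariances, which inflate the population variance and hence the Fano factor.
shared = rng.poisson(0.2 * rate * h, size=bins)
ff_corr = fano_factor((counts + shared).sum(axis=0))
```

With these numbers, `ff_indep` stays near one while `ff_corr` is larger by more than an order of magnitude, mirroring the contrast between the hybrid and the Dale-conform networks in Table 1.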
Table 1

Mean population rates νo and Fano factors FF\([n(t;h)]={\sf Var}[n(t;h)]/{\sf E}[n(t;h)]\) of population spike count n(t;h) per time bin h (10 s of population activity, N = 12,500, bin size h = 0.1 ms) for the random Dale and hybrid networks and the corresponding ring networks

Network type      Mean rate νo    Fano factor FF

Random, Dale      12.9 Hz
Random, Hybrid    12.8 Hz
Ring, Dale        13.5 Hz
Ring, Hybrid      13.1 Hz

If all spike trains contributing to the population spike count were uncorrelated Poissonian, the FF would equal 1. A FF larger than 1 indicates correlated activity (cf. Appendix A)

To understand the origin of the different correlation strengths in the various network types, and hence the different spiking dynamics and population activities in dependence on both the weight distribution and the rewiring probability, we will extend our analysis introduced in Kriener et al. (2008) to ring and small-world networks in the following sections.

5 Distance dependent correlations in a shot-noise framework

We assume that all incoming spike trains Si(t) = ∑ lδ(t − til) are realizations of point processes corresponding to stationary correlated Poisson processes, such that
$${\sf E}[S_i(t)S_j(t+\tau)]:=\psi_{ij}(\tau)=c_{ij}\sqrt{\nu_i\nu_j}\,\delta(\tau)\,, $$
with spike train correlations cij ∈ [ − 1,1], and mean rates νi, νj (cf. however Fig. 4(d)). The spike trains can either stem from the pool of local neurons i ∈ {1,...,N} or from external neurons \(i\,\in\{N+1,\ldots,N+NK_{\sf ext}\}\), where we assume that each neuron receives external inputs from \(K_{\sf ext}\) neurons, which are different for all N local neurons. We describe the total synaptic input Ik(t) of a model neuron k as a sum of linearly filtered presynaptic spike trains (i.e. the spike trains are convolved with filter-kernels fki(t)), also called shot noise (Papoulis 1991; Kriener et al. 2008):
$$\begin{array}{rll} I_k(t)&=& I_{{\sf loc},k}(t)+ I_{{\sf ext},k}(t) =\sum\limits_{i=1}^N\,(S_i\ast f_{ki})(t)\\&&+\sum\limits_{i=N+1}^{N+NK_{\sf ext}}\,(S_i\ast f_{ki})(t)\,. \end{array}$$
Ik(t) could represent e.g. the weighted input current, the synaptic input current (fki(t) = unit postsynaptic current, PSC), or the free membrane potential (fki(t) = unit postsynaptic potential, PSP). All synapses are identical in their kinetics and differ only in strength Wki, hence we can write
$$ f_{ki}(t)=W_{ki}\,f(t)\,. $$
With si(t): = (Si ∗ f)(t), Eq. (11) is then rewritten as
$$I_k(t)=\sum\limits_{i=1}^N\,W_{ki}s_i(t)+\sum\limits_{i=N+1}^{N+NK_{\sf ext}}\,W_{ki}s_i(t) \,. $$
The covariance function of the inputs Ik, Il is given by
$$\begin{array}{rll}{\sf Cov}[I_k(t)I_l(t+\tau)]&=&\sum\limits_{i=1}^{N+NK_{\sf ext}}\sum\limits_{j=1}^{N+NK_{\sf ext}}\\ &&\times\, W_{ki}W_{l\!j}{\sf Cov}[s_i(t)s_j(t+\tau)]\,. \end{array} $$
This sum can be split into
$$\begin{array}{lll}&&{\kern-6pt}{\sf Cov}[I_k(t)I_l(t+\tau)]\\ &&=\sum\limits_{i=1}^{N+NK_{\sf ext}}W_{ki}W_{li}{\sf Cov}[s_i(t)s_i(t+\tau)]\,\,\,\,\,\,\,{\sf (i)}\\ &&{\kern6pt}\,+\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{lj}{\sf Cov}[s_i(t)s_j(t+\tau)]\,\,\,\,\,\,\,{\sf (ii)} \,. \end{array} $$
The first sum Eq. (15) (i) contains contributions of the auto-covariance functions \({\sf Cov}[s_i(t)s_i(t+\tau)]\) of the filtered input spike trains, i.e. of the spike trains that stem from common input neurons i ∈ {1,...,N} (WkiWli ≠ 0, including \(W_{ki}^2\)). The second sum Eq. (15) (ii) contains all contributions of the cross-covariance functions \({\sf Cov}[s_i(t)s_j(t+\tau)]\) of filtered spike trains that stem from presynaptic neurons i ≠ j, i,j ∈ {1,...,N}, where we have already taken into account that the external spike sources are uncorrelated, and hence \({\sf Cov}[s_i(t)s_j(t+\tau)]=0\) for all \(i,j\in\{N+1,...,N+NK_{\sf ext}\}\). It is apparent that the high degree of shared input, as present in ring and small-world topologies, should show up in the spatial structure of input correlations between neurons. The closer two neurons k,l are located on the ring, the more common presynaptic neurons i they share. This will lead to a dominance of the first sum, unless the overall strength of the spike train covariances, accounted for in the second sum, is too high and the second sum dominates the structural contribution, because it grows quadratically with the number of neurons. If the input covariances due to the structural overlap of presynaptic pools are, however, dominant, a fraction of this input correlation should also be present at the output side of the neurons, i.e. the spike train covariances cij should be a function of the interneuronal distance as well. This is indeed the case, as we will see in the following.
We will hence assume that all incoming spike train correlations cij are in general dependent on the pairwise distance Dij = |i − j| of neurons i,j (neuron labeling across the ring in clockwise manner), and the rewiring probability \(p_{\sf r}\). With Campbell’s theorem for shot noise (Papoulis 1991; Kriener et al. 2008) we can write
$$ \begin{array}{rll} {\sf E}_t[s_i(t)]&=&\nu_i\,{\int\limits_{-\infty}^{\infty}}\,f(t)\,dt \qquad\forall{}i\in\{1,\ldots,N\},\\ {\sf Cov}[s_i(t)s_j(t+\tau)]&=&{\sf E}_t[s_i(t)s_j(t+\tau)]-{\sf E}_t[s_i(t)]{\sf E}_t[s_j(t+\tau)]\\ &=:& \begin{cases} a^{s,{\sf loc}}_i(\tau)=(\psi_{ii}*\phi)(\tau) & {\sf if}\,\,\,i=j,\,i\,\in\,\{1,\ldots,N\}\\ a^{s,{\sf ext}}_i(\tau) =(\psi_{ii}*\phi)(\tau) & {\sf if}\,\,\, i=j,\,i\,\in\,\{N+1,\ldots,N+NK_{\sf ext}\}\\ c^{s}_{ij}(\tau, D_{ij},p_{\sf r})=(\psi_{ij}*\phi)(\tau) & {\sf if}\,\,i\neq j\,\,{\sf and}\,\,i,j\,\in\,\{1,\ldots,N\}\\ 0 & {\sf otherwise} \end{cases} \end{array} $$
$$ {\sf E}_t[\,.\,]= \lim\limits_{T\to \infty}\frac{1}{2T}\int\limits_{-T}^{T} (\,.\,) \,dt\,. $$
Here, \(\phi(\tau)=\int_{-\infty}^{\infty}\,f(t)f(t+\tau)\,dt\) represents the auto-correlation of the filter kernel f(t).
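Campbell's theorem can be checked by direct Monte Carlo simulation. For a single Poisson train of rate ν filtered with an exponential kernel \(f(t)={\sf e}^{-t/\tau}\) for t ≥ 0, it predicts \({\sf E}[s]=\nu\int f\,dt=\nu\tau\) and \({\sf Var}[s]=\nu\,\phi(0)=\nu\int f^2\,dt=\nu\tau/2\). A sketch (the discretization and parameter values are our own choices, unrelated to the network simulations):

```python
import numpy as np

rng = np.random.default_rng(7)

# Exponential kernel f(t) = exp(-t/tau), t >= 0, sampled at bin midpoints.
nu, tau = 50.0, 0.01          # input rate (Hz), kernel time constant (s)
dt, T = 1e-3, 50.0            # time step and simulated duration (s)
n = int(T / dt)
t_kern = (np.arange(int(10 * tau / dt)) + 0.5) * dt
f = np.exp(-t_kern / tau)

# Binned Poisson spike train as a density, filtered by f: shot noise s = S * f.
train = rng.poisson(nu * dt, size=n) / dt
s = np.convolve(train, f * dt, mode="full")[:n]
s_ss = s[len(f):]             # discard the initial transient

mean_pred = nu * tau          # Campbell: E[s]  = nu * integral of f
var_pred = nu * tau / 2.0     # Campbell: Var[s] = nu * phi(0) = nu * tau / 2
```

The time averages of `s_ss` and its variance agree with `mean_pred` = 0.5 and `var_pred` = 0.25 up to sampling and discretization error.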
We now want to derive the zero time-lag input covariances, i.e. the auto-covariance \(a_{\sf in}\) and cross-covariance \(c_{\sf in}\) of Ik,Il, defined as
$${\sf Cov}[I_k(t)I_l(t)]=: \begin{cases} a_{{\sf in},k}(p_{\sf r}) & {\sf for}\,\, k=l\\ c_{{\sf in},kl}(D_{kl},p_{\sf r} ) & {\sf for}\,\, k\ne{}l \end{cases} $$
in dependence of the auto- and cross-covariances \(a^s_i(0)\), \(c^s_{ij}(0,D_{ij},p_{\sf r})\) of the individual filtered input spike trains to obtain the input correlation coefficient
$$C_{\sf in}(D_{kl},p_{\sf r})=\frac{c_{{\sf in},{kl}}(D_{kl},p_{\sf r})}{\sqrt{a_{{\sf in},k}(p_{\sf r})a_{{\sf in},l}(p_{\sf r})}}\,. $$
With the definitions Eq. (16) the input auto-covariance function at zero time lag \(a_{{\sf in},k}(p_{\sf r})\), i.e. the variance of the input Ik, explicitly equals
$$ \begin{array}{rll} a_{{\sf in},k}(p_{\sf r})&=&\sum\limits_{i=1}^{N}W_{ki}^2\,a^{s,{\sf loc}}_i(0)\\ &&+\sum\limits_{i=N+1}^{N+NK_{\sf ext}}W_{ki}^2\,a_i^{s,{\sf ext}}(0)\\ &&+\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{kj}\,c^s_{ij}(0,D_{ij},p_{\sf r})\,, \end{array} $$
while the cross-covariance of the input currents \(c_{\sf in}(D_{ij},p_{\sf r})\) is given by
$$ \begin{array}{rll} c_{{\sf in},kl}(D_{kl},p_{\sf r})&=&\sum\limits_{i=1}^{N}W_{ki}W_{li}\,a_i^{s,{\sf loc}}(0)\\ &&+\,\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{lj}\,c_{ij}^s(0,D_{ij},p_{\sf r})\,. \end{array} $$
To assess the zero-lag shot noise covariances \(a^s_i(0)\) and \(c^s_{ij}(0,D_{ij},p_{\sf r})\) we derive with Eqs. (10), (16)
$$\begin{array}{lll} &&{\kern-6pt}{\sf Cov}[s_i(t),s_j(t)]\\ &&=\begin{cases} a_{i}^{s}(0)=\nu_{i}\,\phi(0) & {\sf if}\,\,\,i=j\\ c^s_{ij}(0,D_{ij},p_{\sf r})=c_{ij}(D_{ij},p_{\sf r})\sqrt{\nu_i\nu_j}\,\phi(0) & {\sf if}\,\,\,i\neq j \end{cases}\,. \end{array} $$
We assume \(\nu_i=\nu_{\sf o}\) for all i ∈ {1,...,N}, with \(\nu_{\sf o}\) denoting the average stationary rate of the network neurons, and \(\nu_i=\nu_{\sf ext}\) for all \({i\in\{N+1,...,N+NK_{\sf ext}\}}\), with \(\nu_{\sf ext}\) denoting the rate of the external neurons. Hence, \(a_i^{s,{\sf loc}}(0)=\nu_{\sf o}\,\phi(0)\), and \(a_i^{s,{\sf ext}}(0)=\nu_{\sf ext}\,\phi(0)\) are the same for all neurons i ∈ {1,...,N}, and \(i\in\{N+1,...,N+NK_{\sf ext}\}\), respectively. For the cross-covariance function we analogously have \( c^s_{ij}(0,D_{ij},p_{\sf r})=\nu_{\sf o}\,c_{ij}(D_{ij},p_{\sf r})\,\phi(0)\). We define Hk as the contribution of the shot noise variances \(a^s\) to the variance \(a_{\sf in}\) of the inputs (cf. Fig. 3(a))
$$H_k:=a^{s,{\sf loc}}(0)\sum\limits_{i=1}^{N}W^2_{ki} + a^{s,{\sf ext}}(0)\sum\limits_{i=N+1}^{N+NK_{\sf ext}}W^2_{ki}\,, $$
Gkl as the contribution of the shot noise variances \(a^s\) to the cross-covariance \(c_{\sf in}\) of the inputs (cf. Fig. 3(c))
$$G_{kl}(D_{kl},p_{\sf r}):=a^{s,{\sf loc}}(0)\sum\limits_{i=1}^{N}W_{ki}W_{li}\,, $$
Lk as the contribution of the shot noise cross-covariances \(c^s\) to the auto-covariance \(a_{\sf in}\) of the inputs (cf. Fig. 3(b))
$$L_k(p_{\sf r}):=\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{kj}c^s_{ij}(0,D_{ij},p_{\sf r})\,, $$
and Mkl as the contribution of the shot noise cross-covariances \(c^s\) to the cross-covariance \(c_{\sf in}\) of the inputs (cf. Fig. 3(d))
$$M_{kl}(D_{kl},p_{\sf r}):=\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{lj}c^s_{ij}(0,D_{ij},p_{\sf r})\,. $$
Finally, if we assume input structure homogeneity, i.e. that the expected values of these individual contributions do not depend on k and l, but only on the relative distance Dkl and the rewiring probability \(p_{\sf r}\), we can rewrite Eq. (18) as
$$\begin{array}{rll} C_{\sf in}(D_{kl},p_{\sf r}) &=&\frac{c_{\sf in}(D_{kl},p_{\sf r})}{a_{\sf in}(p_{\sf r})}\\ &=&\frac{G(D_{kl},p_{\sf r})+M(D_{kl},p_{\sf r})}{H+L(p_{\sf r})}\,. \end{array} $$
The next two sections are devoted to the calculation of these expressions for ring and small-world networks.
Fig. 3

Sketch of the different contributions to the input correlation coefficient \(C_{\sf in}(D_{kl},0)\), cf. Eq. (26) for the ring graph. The variance \(a_{\sf in}(0)\), Eq. (17) of the input to a neuron k is given by the sum of the variances H (panel (a)), Eq. (22) and the sum of the covariances L(0) (panel (b)), Eq. (24) of the incoming filtered spike trains si from neurons i ≠ j with WkiWkj ≠ 0. The cross-covariance \(c_{\sf in}(D_{kl},0)\), Eq. (17) is given by the sum of the variances of the commonly seen spike trains si with WkiWli ≠ 0, G(Dkl,0) (panel (c)), Eq. (23) and the sum of the covariances of the spike trains si from non-common input neurons i ≠ j with WkiWlj ≠ 0, M(Dkl,0), Eq. (25). We always assume that the only source of spike train correlations cij(Dij,0) stems from presynaptic neurons sharing a common presynaptic neuron m (green)

5.1 Ring graphs

First we consider the case of Dale-conform ring networks, i.e. \(p_{\sf r}=0\). Of the κ presynaptic neurons within the local input pool of neuron k, βκ are excitatory and depolarize the postsynaptic neuron by Wki = J with each spike, while (1 − β)κ are inhibitory and hyperpolarize the target neuron k by Wki = − gJ per spike (cf. Section 2). Moreover, each neuron receives \(K_{\sf ext}\) excitatory inputs from the external neuron pool with Wki = J. Hence, for all neurons k ∈ {1,...,N} we obtain for the input variance Hk = H, Eq. (22), Fig. 3(a)
$$ \begin{array}{rll} H&=&a^{s,{\sf loc}}(0)\sum\limits_{i=1}^{N}W^2_{ki} + a^{s,{\sf ext}}(0)\sum\limits_{i=N+1}^{N+NK_{\sf ext}}W^2_{ki} \\ &\stackrel{\forall\,k}{=}&\Big(\kappa\,J^2\,(\beta+g^2(1-\beta))\,\nu_{\sf o} + K_{\sf ext}\,J^2\,\nu_{\sf ext}\Big)\,\phi(0)\,. \end{array} $$
Because of the boxcar footprint, the contribution of the auto-covariances as(0) of the individual filtered spike trains to the input cross-covariance Gkl(Dkl,0), Eq. (23), is basically the same as H, only scaled by the respective overlap of the two presynaptic neuron pools of neurons k and l. This overlap only depends on the distance Dkl between k and l, cf. Fig. 3(c). Hence, for all k,l ∈ {1,...,N}
$$ \begin{array}{rll}G(\!D_{kl},\!0)&\!=\!& a^{s,{\sf loc}}(0)\,\sum\limits_{i=1}^{N}W_{ki}W_{li}\\ &\!=\!&{\mathit{\Theta}}\!\left[\kappa\!-\!D_{kl}\right]\!\left(\kappa\!-\!D_{kl}\right)\!J^2(\beta\!+\!g^2(1\!-\!\beta))\,\nu_{\sf o}\,\phi(0)\,,\\ \end{array} $$
with minor modulations because of the exclusion of self-couplings and the relative position of the inhibitory neurons with respect to the boxcar footprint; for large κ, however, these corrections are negligible. Θ[x] is the Heaviside step function that equals 1 if x ≥ 0, and 0 if x < 0. If all incoming spike trains from local neurons are uncorrelated and Poissonian, and the external drive is a direct current, the complete input covariance stems from the structural (i.e. common input) correlation coefficient \(C_{\sf struc}(D_{kl},p_{\sf r}):=\frac{G(D_{kl},p_{\sf r})}{H}\) alone, which can then be written as
$$\begin{array}{rll} C_{\sf struc}(D_{kl},0)&=&C_{\sf in}(D_{kl},0)|_{c^s=0}=\frac{c_{\sf in}(D_{kl},0)}{a_{\sf in}(0)}\Big|_{c^s=0}\\ &=&\frac{G(D_{kl},0)}{H}=\left(1-\frac{D_{kl}}{\kappa}\right)\Theta\left[\kappa-D_{kl}\right]\,.\\ \end{array} $$
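This structural correlation coefficient can be verified by directly counting shared presynaptic neurons: for boxcar footprints of width κ, the overlap of the two input pools shrinks linearly with the distance \(D_{kl}\). A small sketch (function names are ours):

```python
# Presynaptic pool of a neuron on the ring: the kappa/2 nearest neighbors
# on each side (boxcar footprint, self-connection excluded).
N, kappa = 1000, 100

def footprint(k):
    return {(k + d) % N for d in range(-kappa // 2, kappa // 2 + 1) if d != 0}

def c_struc(D):
    """Fraction of shared presynaptic neurons for two neurons at distance D."""
    return len(footprint(0) & footprint(D)) / kappa

# The counted overlap matches (1 - D/kappa) up to edge effects of order 1/kappa.
for D in (1, 10, 50, 99):
    assert abs(c_struc(D) - (1 - D / kappa)) < 1.5 / kappa
assert c_struc(kappa) < 1.5 / kappa   # essentially no common input beyond kappa
```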
The spike train correlations cij(Dij,0) show, however, a pronounced distance-dependent decay and reach non-negligible amplitudes up to cij(1,0) ≈ 0.04 (cf. Fig. 4(c)). In the following we will use two approximations of the distance dependence of cij(Dij,0): a linear relation and an exponential relation. We start by assuming a linear decay on the interval (0,κ] (cf. Fig. 4(c), black). This choice is motivated by two assumptions. First, we assume that the main source of spike correlations stems from the structural input correlations \(C_{\sf struc}(D_{ij},0)\), Eq. (29), of the input neurons i,j alone, i.e. the strength of the correlations between two input spike trains Si and Sj depends on the overlap of their presynaptic input pools, determined by their interneuronal distance Dij. Analogous to the reasoning that led to the common input correlations G(Dkl,0)/H, Eq. (29), before, the output spike train correlation between neurons i and j will hence be zero if Dij ≥ κ. Moreover, the neurons i and j will only contribute to the input currents of k and l if they are within a distance κ/2, that is Dki < κ/2 and Dlj < κ/2. Hence, for the correlations of two input spike trains Si, Sj, i ≠ j, to contribute to the input covariance of neurons k and l, the latter must be within a range κ + 2·κ/2 = 2κ of each other. Additionally, we assume that the common input correlations \(C_{\sf struc}(D_{ij},0)\) are transmitted linearly with the same transmission gain \(\gamma:=c_{ij}(1,0)/C_{\sf struc}(1,0)\) to the output side of i and j. We hence make the following ansatz for the distance dependent correlations between the filtered input spike trains from neuron i to k and from neuron j to l (we always indicate the dependence on k and l by |kl):
$$ \begin{array}{rll} \frac{c^{s,{\sf lin}}_{ij}(0,D_{ij},0)}{\nu_{\sf o}\phi(0)}\,\Bigg|_{kl} &=& c_{ij}^{\sf lin}(D_{ij},0)|_{kl} = \gamma\,C_{\sf struc}(D_{ij},0)|_{kl}\\ &=&\gamma\,\left(1-\frac{D_{ij}}{\kappa}\right)\Theta\left[\kappa/2-|i-k|\right]\\ &&\times\,{\mathit{\Theta}}\left[\kappa/2-|j-l|\right]\Theta\left[\kappa-|i-j|\right]\\ \end{array} $$
For the third sum in Eq. (19) this yields for all k ∈ {1,...,N} (cf. Appendix B for details of the derivation)
$$ \begin{array}{rll} \label{eqn:Ll} L^{\sf lin}_k(0)&=& \sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{kj}c^{s,{\sf lin}}_{ij}(0,D_{ij},0)\\ &\stackrel{\forall\,k}{=}&\frac{\gamma J^2}{3}(2\kappa-1)(\kappa-1)(\beta-g(1-\beta))^2\,\nu_{\sf o}\phi(0)\,.\\ \end{array} $$
For the second term in Eq. (20) we have
$$ M^{\sf lin}(D_{kl},0)=\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{lj}c^{s,{\sf lin}}_{ij}(0,D_{ij},0)\,, $$
which again only depends on the distance, so we dropped the subscripts of M. It is derived in Appendix B and explicitly given by Eq. (67). After calculating \(L^{\sf lin}(0)\) and \(M^{\sf lin}(D_{kl},0)\) with the ansatz Eq. (30), we can plot \(C_{\sf in}(D_{kl},0)\) as a function of distance and obtain a curve as shown in Fig. 4(a). The linear fit clearly overestimates the spike train correlations as a function of distance (Fig. 4(c)): the correlation transmission decreases non-linearly with interneuronal distance, i.e. with decreasing strength of the input correlation \(C_{\sf in}\) (De la Rocha et al. 2007; Shea-Brown et al. 2008). This leads to an overestimation of the total input correlations for distances Dkl ≥ κ (Fig. 4(a)). If we instead fit the distance dependence of the spike train correlations of neurons i and j by a decaying exponential function with a cut-off at Dij = κ,
$$ c_{ij}^{\sf exp}(D_{ij},0)=\gamma\,{\sf e}^{-\eta\,D_{ij}}\Theta[\kappa-D_{ij}]\,, $$
and fit the corresponding parameters γ and η to the values estimated from the simulations, the sums in Eqs. (31) and (32) can still be reduced to simple terms (cf. Eqs. (69), (70)) and the correspondence with the observed input correlations becomes very good over the whole range of distances (Fig. 4(b)). We conclude that the strong common input correlations \(C_{\sf struc}\) of neighboring neurons due to the structural properties of Dale-conform ring networks predominantly cause the spatio-temporally correlated spiking of neuron groups of size ∼κ.
Fig. 4

Input current (a, b, e) and spike train (c, f) correlation coefficients as a function of the pairwise interneuronal distance D for a ring network of size N = 12,500, κ = 1,250, g = 6, J = 0.1 mV with βN equidistantly distributed inhibitory neurons and \(K_{\sf ext}=1,\!000\) external Poisson inputs per neuron of strength \(\nu_{\sf ext}=15\) Hz each. (a) depicts the input correlation coefficients Eq. (18) derived under the assumption that the spike train correlation coefficients cij(D,0) decay linearly as \(c^{\sf lin}(D,0)=0.0406\,(1 - D/\kappa)\Theta[\kappa-D]\), cf. Eq. (30); in (b) they are fitted by a decaying exponential function \(c^{\sf exp}(D,0)=0.0406\,{\sf e}^{-0.0025\, D}\) (red). The gray curves show the input correlations estimated from simulations. (c) shows the spike train correlation coefficients estimated from simulations (gray), and both the linear (black) and exponential fit (red) used to obtain the theoretical predictions for the input correlation coefficients in (a) and (b). (d) shows the measured spike train cross-correlation functions ψij(τ,D,0) for four different distances D = {1, 325, 625, 1250}. (e) shows the average input correlation coefficients (averaged over 50 neuron pairs per distance) and (f) the average spike train correlation coefficients measured in a hybrid ring network (for the full distribution cf. Fig. 7(c)). Note that the average input correlations in (e) are even smaller than the spike train correlations in (c). For each network realization, we simulated the dynamics during 30 s. We then always averaged over 50 pairs for the input current correlations and 1,000 pairs for the spike train correlations with selected distances D ∈ {1, 10, 20,...,100, 200,..., 6,000}

However, we saw (cf. Fig. 2(d)) that the spiking activity in ring networks becomes highly asynchronous-irregular if we relax Dale’s principle and consider hybrid neurons instead. Since the number of excitatory, inhibitory and external synapses is the same for all neurons, we obtain the same expressions for H, \(L^{\sf lin}(0)\) and \(M^{\sf lin}(D_{kl},0)\) for hybrid neurons as well, but the common input correlations become (the expectation value \({\sf E}_W\) is taken with respect to network realizations)
$$ \begin{array}{lll} &&\kern-6pt\frac{{\sf E}_W[G^{\sf hyb}(D_{kl},0)]}{\nu_{\sf o}\phi(0)} ={\mathit{\Theta}}[\kappa-D_{kl}]\\ &&\times\,\left(\kappa-D_{kl}\right) J^2 (\beta^2+g^2(1-\beta)^2-2g\beta(1-\beta))\,. \end{array} $$
The ratio of \(G^{\sf hyb}\) and G hence corresponds to the one reported for random networks (Kriener et al. 2008) and equals 0.02 for the parameters used here. This is in line with the average input correlations in the hybrid ring network (Fig. 4(e)). They are hence only about half the correlation of the spike trains in the Dale-conform ring network (Fig. 4(c)). If we assume the correlation transmission for the highest possible input correlation to be the same as in the Dale case (γ ≈ 0.04), we estimate spike train correlations of the order of \(c_{ij}\sim 10^{-4}\). The average values measured in simulations indeed yield correlations of that order (Fig. 4(f)) and are, hence, of the same magnitude as in hybrid random networks (Kriener et al. 2008). As we will show in Section 6, the distribution of input correlation coefficients \(C^{\sf hyb}_{\sf in}(D_{kl},0)\) is centered close to zero with a high peak at zero and both negative and positive contributions. In the Dale-conform ring network, however, we only observe positive correlation coefficients with values up to nearly one (it can only reach \(C_{\sf in}(1,0)=C_{\sf struc}(1,0)= 1\), if we apply identical external input to all neurons).

This transfers to the spike generation process and hence explains the dramatically different global spiking behavior, as well as the different Fano factors of the population spike counts (cf. Table 1, Appendix A) in both network types due to the decorrelation of common inputs in hybrid networks.

5.2 Small-world networks

As we stated before, the clustering coefficient \(\mathcal{C}\) (cf. Eq. (5)) is directly related to the amount of shared input \(\propto \sum_{i=1}^N W_{ki}W_{li}\) between two neurons l and k, and hence to the strength of distance dependent correlations. When it gets close to the random graph value, as is the case for \(p_{\sf r} >0.3\), the input correlations also become similar to those of the corresponding balanced random network (cf. Fig. 5). If we randomize the network gradually by rewiring a fraction \(p_{\sf r}\) of the connections, the input variance H is not affected. The input covariances due to common input \(G(D_{kl},p_{\sf r})\), however, depend not only on the distance, but also on \(p_{\sf r}\). The boxcar footprints of the ring network get diluted during rewiring, so a distance Dkl < κ no longer implies that all neurons within the overlap (κ − Dkl) of the two boxcars project to both or any of the neurons k,l (cf. Appendix B, Fig. 8(b)). At the same time, the probability of receiving inputs from neurons outside the boxcar increases during rewiring. These contributions are independent of the pairwise distance. Still, as long as \(p_{\sf r}<1\), the probability for two neurons k,l with Dkl < κ to receive input from common neurons within the overlap of the (diluted) boxcars is always higher than the probability to get synapses from common neurons in the rest of the network: those input synapses that were not chosen for rewiring adhere to the boxcar footprint, and at the same time the boxcar regains a fraction of its synapses during the random rewiring. So, if Dkl < κ there are three different sources of common input to two neurons k and l that we have to account for: neurons within the overlap of the input boxcars that kept or re-established their synapses to k and l (possible in region ‘a’ in Fig. 8(b)); neurons that had no synapse to either k or l, but project to both after rewiring (possible in region ‘c’ in Fig. 8(b)); and neurons in the boxcar footprint of one neuron, say k, that additionally project to the other neuron l through a randomly rewired synapse, although they lie outside l’s boxcar footprint (possible in region ‘b’ in Fig. 8(b)). This implies that after rewiring neurons can be correlated due to common input in the regions ‘b’ and ‘c’, even if they are further apart than κ. These correlations, due to the random rewiring alone, are then independent of the distance between k and l. The probabilities for all these contributions to the total common input covariance \(G_{kl}(D_{kl},p_{\sf r})\) are derived in detail in Appendix B. Ignoring the minor corrections due to the exclusion of self-couplings, we obtain for all k,l ∈ {1,...,N}
$$\begin{array}{lll} &&{\kern-6pt}G(D_{kl},p_{\sf r})={\sf E}_W\left[\sum\limits_{i=1}^{N}W_{ki}W_{li}\,a_i^{s,{\sf loc}}(p_{\sf r})\right]\\ &&\Rightarrow\frac{G(D_{kl},p_{\sf r})}{J^2(\beta+g^2(1-\beta))\,\nu_{\sf o}\phi(0)}\\ &&= \begin{cases} p_1^2\,\left(\kappa\!-\!D_{kl}\right) +p_2^2\,(N\!-\!\kappa\!-\!D_{kl})\\[0.4cm] {\kern12pt}+ 2\,p_1\,p_2\,D_{kl} & {\sf if}\,\,D_{kl}<\kappa \\[0.4cm] p_2^2\,(N-2\,\kappa) + 2\,p_1\,p_2\,\kappa & {\sf otherwise} \end{cases} \end{array} $$
with (cf. Appendix B)
$$p_1(p_{\sf r})=(1-p_{\sf r})+ \frac{p_{\sf r}^2\,\kappa}{N-(1-p_{\sf r})\kappa} $$
$$ p_2(p_{\sf r})=\frac{p_{\sf r}\,\kappa}{N-(1-p_{\sf r})\kappa}\,. $$
Since we always assume that the spike train correlations \(c_{ij}(D_{ij},p_{\sf r})\) are caused solely by common input correlations transmitted to the output, i.e. that they are some function of \(C_{\sf struc}(D_{ij},p_{\sf r})\), we also have to take this into account in the ansatz for the functional form of \(c_{ij}^s(0,D_{ij},p_{\sf r})\). Again, these spike train correlations lead to contributions to the cross-covariances of inputs Ik,Il if Dkl < 2κ. With the linear distance dependence assumption we obtain (cf. Appendix B)
$$ \begin{array}{rll} \frac{c^{s,{\sf lin}}_{ij}(0,D_{ij},p_{\sf r})}{\nu_{\sf o}\phi(0)}\Bigg|_{kl}&=& c_{ij}^{\sf lin}(D_{ij},p_{\sf r})|_{kl}=\gamma(p_{\sf r})\, C_{\sf struc}(0,D_{ij},p_{\sf r})|_{kl}\\ &=& \begin{cases} \frac{\gamma(p_{\sf r})}{\kappa}\, {\mathit{\Theta}}\left[\kappa/2-|i-k|\right] {\mathit{\Theta}}\left[\kappa/2-|j-l|\right]{\mathit{\Theta}}\left[\kappa-|i-j|\right]\\ {\kern12pt}\times\,\left( p_1^2\,\left(\kappa-D_{ij}\right) + p_2^2\,\left(N-\kappa-D_{ij}\right) + 2\,p_1\,p_2\,D_{ij} \right) & {\sf if}\,\,D_{kl}<2\kappa \\[0.3cm] \frac{\gamma(p_{\sf r})}{\kappa}\,\big( p_2^2\,(N-2\kappa) + 2\,p_1\,p_2\,\kappa \big) & {\sf otherwise} \end{cases} \end{array} $$
and for all k,l ∈ {1,...,N}
$$ \begin{array}{lll} &&{\kern-6pt}L^{\sf lin}(p_{\sf r}) =\,{\sf E}_W\left[\sum\limits_{i=1}^{N}\sum\limits_{j\ne{}i}^{N}W_{ki}W_{kj}c_{ij,{\sf lin}}^s(0,D_{ij},p_{\sf r})\right]\\ &&\kern6pt =\, \gamma(p_{\sf r})\,J^2(\beta-g(1-\beta))^2\,\frac{(\kappa-1)}{3}\,\nu_{\sf o}\phi(0)\, \times\ldots \\ &&\kern6pt\phantom{=\,}\times\, \Big(p_1^2(2\kappa\!-\!1)\! +\! p_2\big(2p_1(\kappa\!+\!1)\! +\! p_2(3N\!-\!4\kappa\!-\!1)\big) \Big)\\ \end{array} $$
where we assumed that the rates, and the auto- and cross-covariances of the spike trains, are the same for all neurons and neuron pairs, respectively. \(M^{\sf lin}(D_{kl},p_{\sf r})\) can be evaluated as before. The same procedure as in the case \(p_{\sf r}=0\) hence gives the respective distance dependent input correlations (cf. Appendix B for details) for \(p_{\sf r}\neq 0\). The correspondence with the observed curves is good (Fig. 6). If, on the other hand, we apply cut-off exponential fits
$$ c_{ij}^{\sf exp}(D_{ij},p_{\sf r})=\gamma(p_{\sf r})\,{\sf e}^{-\eta(p_{\sf r})\,D_{ij}}\Theta[\kappa-D_{ij}]\,, $$
of the distance dependent part of the spike train covariance functions, the shot noise covariance becomes
$$ \begin{array}{rll} \frac{c^{s,{\sf exp}}_{ij}(0,D_{ij},p_{\sf r})}{\nu_{\sf o}\phi(0)}\Bigg|_{kl}&=&c_{ij}^{\sf exp}(D_{ij},p_{\sf r})|_{kl} \\ &=& \begin{cases} \gamma(p_{\sf r})\,{\mathit{\Theta}}\left[\kappa/2-|i-k|\right] {\mathit{\Theta}}\left[\kappa/2-|j-l|\right]{\mathit{\Theta}}\left[\kappa-|i-j|\right]\\ \times\,\left( p_1^2\,{\sf e}^{-\eta(p_{\sf r})\,D_{ij}} + p_2^2\,\frac{N-\kappa-D_{ij}}{\kappa} + 2\,p_1\,p_2\,\frac{D_{ij}}{\kappa} \right) & {\sf if}\,\,D_{kl}<2\kappa\\[0.3cm] \gamma(p_{\sf r})\,\left( p_2^2\,\frac{N-2\kappa}{\kappa} + 2\,p_1\,p_2 \right) & {\sf otherwise} \end{cases} \end{array} $$
With this ansatz the correspondence of the predicted and measured input correlations \(C_{\sf in}(D_{kl},p_{\sf r})\) is nearly perfect, as it was the case for the ring graphs (Fig. 6). For the random network the spike correlations cij(Dij,1) are independent of distance and are proportional to the network connectivity ϵ (cf. also Kriener et al. (2008)), i.e. cij(Dij,1) = ϵγ(1). This is indeed the case with the linear ansatz, as one can easily check with p1(1) = p2(1) = κ/N, cf. Eqs. (36), (37).
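The limits quoted for p1 and p2 can be verified directly from Eqs. (36), (37). A minimal sketch, using the paper's N and κ:

```python
# Sanity check of the rewiring probabilities p1, p2: for p_r = 0 the intact
# boxcar is recovered (p1 = 1, p2 = 0), and for p_r = 1 both reduce to the
# homogeneous value kappa/N claimed for the random-network limit.
N, kappa = 12500, 1250

def p1(pr):
    return (1 - pr) + pr**2 * kappa / (N - (1 - pr) * kappa)

def p2(pr):
    return pr * kappa / (N - (1 - pr) * kappa)

assert p1(0) == 1.0 and p2(0) == 0.0        # intact ring
assert abs(p1(1) - kappa / N) < 1e-12       # fully rewired ...
assert abs(p2(1) - kappa / N) < 1e-12       # ... random network limit
```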
Fig. 5

Clustering coefficient \(\mathcal{C}(p_{\sf r})/\mathcal{C}(0)\) (black) versus the normalized input correlation coefficient \(C_{\sf in}^{\sf dale}(1,p_{\sf r})/C_{\sf in}^{\sf dale}(1,0)\) (gray) estimated from simulations and evaluated at its maximum at distance D = 1 in a semi-log plot. \(C_{\sf in}^{\sf dale}(1,p_{\sf r})/C_{\sf in}^{\sf dale}(1,0)\) decays slightly faster with \(p_{\sf r}\) than the clustering coefficient, but the overall shape is very similar. This shows how the topological properties translate to the joint second order statistics of neuronal inputs

With the spike train correlations fitted by an exponential, the correlation length \(1/\eta(p_{\sf r})\) actually diverges for \(p_{\sf r}\to 1\), and Eq. (41) gives the wrong limit for Dkl < 2κ. As one can see in Fig. 6(b), the distance dependence of the spike correlations approaches a linear relation as the networks leave the small-world regime (\(p_{\sf r} > 0.3\)), and the linear model, Eq. (30), becomes more adequate.
Fig. 6

Structural (a), spike train (b), and input (c, d) correlation coefficients as a function of the rewiring probability \(p_{\sf r}\) and the pairwise interneuronal distance D for a ring network of size N = 12,500, κ = 1,250, g = 6, J = 0.1 mV with βN equidistantly distributed inhibitory neurons and \(K_{\sf ext}=1\!,000\) external Poisson inputs per neuron of strength \(\nu_{\sf ext}=15\) Hz each. (a) The structural correlation coefficients \(C_{\sf struc}(D,p_{\sf r})=\frac{G(D,p_{\sf r})}{H}\). For \(p_{\sf r}=0\) they are close to one for D = 1 and tend to zero for D = 1,250. These would be the expected input correlation coefficients, if the spike train correlations were zero and the external input was DC. (b) shows the spike train correlations \(c_{ij}(D,p_{\sf r})\) as estimated from simulations (gray) and the exponential fits \(c_{ij}^{\sf exp}\sim c_{ij}(1,p_{\sf r}){\sf e}^{-\eta(p_{\sf r}) D}\) (red) we used to calculate \(C_{\sf in}(D,p_{\sf r})\) as shown in panel (d). (c) shows the input correlation coefficients \(C_{\sf in}(D,p_{\sf r})\) (gray) estimated from simulations and the theoretical prediction (red) using linear fits of the respective \(c_{ij}^{\sf lin}(D,p_{\sf r})\). (d) shows the same as (c), but with \(c_{ij}^{\sf exp}(D,p_{\sf r})\) fitted as decaying exponentials as shown in panel (c). For each network realization, we simulated the dynamics for 30 s. We then always averaged over 50 pairs for the input current correlations and 1,000 pairs for the spike train correlations with selected distances D ∈ {1, 10, 20,...,100, 200,..., 6000} and rewiring probabilities \(p_{\sf r}\in\{0,0.05,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8\}\), always shown from top to bottom

6 Distribution of correlation coefficients in ring and random networks

After derivation of the distance dependent correlation coefficients of the inputs in different neuronal network types, we can now ask for the distribution of correlation coefficients. In the following, we restrict the quantitative analysis to correlations of weighted input currents, but the qualitative results also hold for different linear synaptic filter kernels fki(t), cf. Section 5. Note that the mean structural input correlation coefficient \(\bar{c}_{\sf struc}^{\sf dale/hyb}\) is independent of ring or random topology, both in the Dale and the hybrid case, while the distributions differ dramatically (Fig. 7).
Fig. 7

Estimated (gray) and predicted (red) input correlation coefficient probability density function (pdf) for a Dale-conform (a) ring and (b) random network, and for a hybrid (c) ring and (d) random network. (N = 12,500, κ = ϵN = 1,250, g = 6, estimation window size 0.005). The estimated data (gray) stem from 10 s of simulated network activity and are compared with the structural input correlation probability mass functions \(P(C_{\sf struc})\) in (b), (c), and (d) as derived in Eq. (46), Eq. (73), and Appendix D (red, binned with the same window as the simulated data), and compared to the full theory including spike correlations in (a, red). In (b), (c), and (d) the spike correlations are very small and the distributions are hence close to those predicted from the structural correlation \(C_{\sf struc}\) (red). For the ring network, however, the real distribution differs substantially, due to the pronounced distance dependent spike train correlations. To obtain the full distribution in (a) and (c), we recorded from 6,250 subsequent neurons. Both have their maxima close to zero; we clipped the peaks to emphasize the less trivial parts of the distributions (the maximum of the pdf in (a) is 142 in theory and 136 in the estimated distribution; the maximum of the pdf in (b) is 160 in theory and 156 in the estimated distribution). For the random networks (b, d), we computed all pairwise input correlations of a random sample of 50 neurons. The oscillations of the analytically derived pdf in (b, red) are due to the specific discrete nature of the problem, cf. Eq. (46)

Ring networks
$$ \bar{C}_{\sf struc, ring}^{\sf dale}= \sum\limits_{D=1}^{\kappa-1}\,P(D)\,\left(1-\frac{D}{\kappa}\right) =\frac{\kappa-1}{N}\approx\epsilon $$
with the distribution of pairwise distances \(P(D)=\frac{2}{N}{\mathit{\Theta}}\left[\frac{N}{2}-D\right]\),3 and
$$ \begin{array}{rll} \bar{C}_{\sf struc, ring}^{\sf hyb}&=&\sum\limits_{D=1}^{\kappa-1}\,P(D)\,\left(1\!-\!\frac{D}{\kappa}\right)\frac{(\beta\!-\!g(1\!-\!\beta))^2}{\beta\!+\!g^2(1\!-\!\beta)}\\ &=&\frac{\kappa\!-\!1}{N}\frac{(\beta\!-\!g(1\!-\!\beta))^2}{\beta\!+\!g^2(1\!-\!\beta)}\\&\approx&\epsilon\frac{(\beta\!-\!g(1\!-\!\beta))^2}{\beta\!+\!g^2(1\!-\!\beta)}. \end{array} $$
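These two sums can be evaluated directly. A small numerical sketch with the paper's N, κ, and g; the excitatory fraction β = 0.8 is an assumption chosen so that the hybrid reduction factor reproduces the value 0.02 quoted in Section 5.1:

```python
# Direct evaluation of the mean structural correlations: the Dale ring
# gives (kappa - 1)/N ~ epsilon, and the hybrid ring reduces this by the
# factor (beta - g(1-beta))^2 / (beta + g^2 (1-beta)).
N, kappa, g, beta = 12500, 1250, 6.0, 0.8  # beta = 0.8 is an assumption
P_D = 2.0 / N  # distribution of pairwise distances on the ring, D = 1..N/2

mean_dale = sum(P_D * (1 - D / kappa) for D in range(1, kappa))
assert abs(mean_dale - (kappa - 1) / N) < 1e-9  # = 0.09992 ~ epsilon = 0.1

factor = (beta - g * (1 - beta))**2 / (beta + g**2 * (1 - beta))
assert abs(factor - 0.02) < 1e-9  # the ratio G_hyb/G quoted in the text

mean_hyb = mean_dale * factor
assert abs(mean_hyb - (kappa - 1) / N * factor) < 1e-12
```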
Random networks (Kriener et al. 2008)
$$ \bar{C}_{\sf struc, rand}^{\sf dale}=\frac{\epsilon^2(\beta+g^2\,(1-\beta))}{\epsilon(\beta+g^2\,(1-\beta))}=\epsilon\,, $$
$$ \bar{C}_{\sf struc, rand}^{\sf hyb}=\frac{\epsilon^2(\beta-g\,(1-\beta))^2}{\epsilon(\beta+g^2\,(1-\beta))}=\epsilon\,\frac{(\beta-g\,(1-\beta))^2}{\beta+g^2\,(1-\beta)}\,, $$
where (1 − β) and β are the fractions of inhibitory and excitatory inputs per neuron (each the same for all neurons).
The distribution of structural correlation coefficients in the random Dale network is given by
$$ \begin{array}{lll}&&{\kern-6pt}P(C_{\sf struc}^{\sf dale}=c)\\ & & = \sum\limits_{Q_{\sf E},Q_{\sf I}}\delta_{c,C_{\sf struc}^{\sf dale}(Q_{\sf E},Q_{\sf I})}P(Q_{\sf E}|N_{\sf E},K_{\sf E})\, P(Q_{\sf I}|N_{\sf I},K_{\sf I})\\ && = \sum\limits_{Q_{\sf I}=0}^{K_{\sf I}}P(c\cdot \zeta-g^{2}Q_{\sf I}|N_{\sf E},K_{\sf E})P(Q_{\sf I}|N_{\sf I},K_{\sf I})\,, \end{array} $$
where \(\zeta=K_{\sf E} +g^2K_{\sf I}\), \(Q_{\sf I}\) is the number of common inhibitory inputs and \(Q_{\sf E}\) that of excitatory ones,
$$ C_{\sf struc}^{\sf dale} = \frac{Q_{\sf E} + g^2Q_{\sf I}}{K_{\sf E} + g^2K_{\sf I}}\,, $$
$$ P(Q_{\sf E/I}|N_{\sf E/I},K_{\sf E/I})=\frac{\begin{pmatrix}K_{\sf E/I} \\ Q_{\sf E/I}\end{pmatrix}\begin{pmatrix}N_{\sf E/I}-K_{\sf E/I} \\ K_{\sf E/I}-Q_{\sf E/I}\end{pmatrix}}{\begin{pmatrix}N_{\sf E/I} \\ K_{\sf E/I}\end{pmatrix}}\,. $$
Note that \(P(Q_{\sf E/I}|N_{\sf E/I},K_{\sf E/I})=0\) for non-integer \(Q_{\sf E/I}\). The correlation coefficient distribution for the random hybrid network is derived in Appendix D.
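The hypergeometric form of \(P(Q_{\sf E/I}|N_{\sf E/I},K_{\sf E/I})\) can be checked numerically: its mean is \(K^2/N\), which inserted into the definition of \(C_{\sf struc}^{\sf dale}\) recovers \(\bar{C}_{\sf struc,rand}^{\sf dale}=\epsilon\). A sketch with toy population sizes (not the paper's parameters):

```python
# Hypergeometric distribution of common inputs in a random network: two
# neurons each draw K of N_pop presynaptic partners; Q counts the shared
# ones. The mean E[Q] = K^2/N_pop then yields the mean structural
# correlation epsilon for the Dale-conform random network.
from math import comb

def hyp_pmf(Q, N_pop, K):
    # P(Q | N_pop, K): probability of Q common inputs out of K draws each
    if Q < 0 or Q > K or K - Q > N_pop - K:
        return 0.0
    return comb(K, Q) * comb(N_pop - K, K - Q) / comb(N_pop, K)

def hyp_mean(N_pop, K):
    return sum(Q * hyp_pmf(Q, N_pop, K) for Q in range(K + 1))

N_E, K_E, N_I, K_I, g = 800, 80, 200, 20, 6.0  # toy sizes, epsilon = 0.1
eps = K_E / N_E

# the pmf sums to one, and the mean equals K^2/N_pop = eps * K
assert abs(sum(hyp_pmf(Q, N_E, K_E) for Q in range(K_E + 1)) - 1.0) < 1e-9
assert abs(hyp_mean(N_E, K_E) - K_E**2 / N_E) < 1e-9

# mean structural correlation: (E[Q_E] + g^2 E[Q_I]) / (K_E + g^2 K_I) = eps
mean_c = (hyp_mean(N_E, K_E) + g**2 * hyp_mean(N_I, K_I)) / (K_E + g**2 * K_I)
assert abs(mean_c - eps) < 1e-9
```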

For a ring graph the structural correlation coefficient distribution \(P(C_{\sf struc}(D,0))\) has probability mass \(\frac{N-2\kappa}{N}\) at the origin, \(\frac{2}{N}\) per value in the discrete open interval (0,1), and \(\frac{1}{N}\) at 1, if we include the variance for distance D = 0. However, due to the non-negligible spike train correlations cij(D,0), the actually measured input correlations \(C_{\sf in}(D,0)^{\sf ring}_{\sf dale}\) have a considerably different distribution with less mass at 0, due to the positive input correlations up to a distance ∼2κ. They are very well described by the full theory with an exponential ansatz for the spike train correlations, as described in Section 5.1, Eq. (41). These two limiting cases emphasize that the distribution of input (subthreshold) correlations may give valuable information about whether there is a high degree of locally shared input (heavy-tailed probability distribution \(P(C_{\sf struc})\)), or whether it is rather what is to be expected from random connectivity in a Dale-conform network.
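The quoted probability masses can be reproduced by enumerating the correlation values of one reference neuron against all N neurons. A sketch with toy values for N and κ; the mass at the origin matches \((N-2\kappa)/N\) up to an O(1/N) boundary term from the single antipodal distance:

```python
# Probability masses of the ring's structural correlation distribution:
# pairing one neuron with all N neurons (including itself, the D = 0
# variance term), C = 1 occurs once (mass 1/N), each value in (0, 1)
# occurs twice (mass 2/N), and the rest of the mass sits at C = 0.
from collections import Counter

N, kappa = 1000, 100
distances = (min(d, N - d) for d in range(N))   # ring distances, D = 0 included
C = [max(0.0, 1.0 - D / kappa) for D in distances]

cnt = Counter(C)
assert abs(cnt[0.0] / N - (N - 2 * kappa) / N) < 1.5 / N  # mass at the origin
assert cnt[1.0] == 1                                      # only the D = 0 term
assert cnt[1.0 - 1 / kappa] == 2                          # each D in (0, kappa): 2/N
```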

7 Discussion

We analyzed the activity dynamics in sparse neuronal networks with ring, small-world and random topologies. In networks with a high clustering coefficient \(\mathcal{C}\) such as ring and small-world networks, neighboring neurons tend to fire highly synchronously. With increasing randomness, governed by the rewiring probability \(p_{\sf r}\), activity becomes more asynchronous, but even in random networks we observe a high Fano factor FF of the population spike counts, indicating its residual synchrony.4 As shown by Kriener et al. (2008) these fluctuations become strongly attenuated for hybrid neurons which have both excitatory and inhibitory synaptic projections.

Here, we demonstrated that the introduction of hybrid neurons leads to highly asynchronous (FF ≈ 1) population activity even in networks with ring topology. Recent experimental data suggest that there are abundant fast and reliable couplings between pyramidal cells which are effectively inhibitory (Ren et al. 2007) and which might be interpreted as hybrid-like couplings. However, the hybrid concept contradicts the general paradigm that pyramidal cells depolarize all their postsynaptic targets while inhibitory interneurons hyperpolarize them, a paradigm known as Dale’s principle (Li and Dayan 1999; Dayan and Abbott 2001; Hoppensteadt and Izhikevich 1997). As we showed here, a severe violation of Dale’s principle renders the specifics of network topology meaningless, and might even impede potentially important functional processes, such as pattern formation or line attractors in ring networks (see e.g. Ben-Yishai et al. 1995; Ermentrout and Cowan 1979), or the propagation of synchronous activity in recurrent cortical networks (Kumar et al. 2008a).

We demonstrated that the difference in the amplitude of population activity fluctuations between Dale-conform and hybrid networks can be understood from the differences in the input correlation structure of the two network types. We extended the ansatz presented in Kriener et al. (2008) to networks with ring and small-world topology and derived the input correlations as a function of the pairwise distance of neurons and the rewiring probability. Because of the strong overlap of the input pools of neighboring neurons in ring and small-world networks, the assumption that the spike trains of different neurons are uncorrelated, which is justified in sparse balanced random networks, is no longer valid. We fitted the distance dependent instantaneous spike train correlations and took them adequately into account. This led to a highly accurate prediction of input correlations.

A fully self-consistent treatment of correlations is, however, beyond the scope of the analysis presented here. As we saw in Section 5.1, in Dale-conform ring graphs neurons cover essentially the whole spectrum of positive input correlation strengths, from almost one (depending on the level of variance of the uncorrelated external input) down to zero, as a function of pairwise distance D. If we look at the ratio between input and output correlation strength, we see that it is not constant: stronger correlations have a higher gain. The exact mechanism of this non-linear correlation transmission needs further analysis. Recent analyses of correlation transfer in integrate-and-fire neurons by De la Rocha et al. (2007) and Shea-Brown et al. (2008) showed that the spike train correlations can be written as a linear function of the input correlations, provided these are small (\(C_{\sf in}\in[0,0.3]\)). For larger \(C_{\sf in}\), however, De la Rocha et al. (2007) and Shea-Brown et al. (2008) report supralinear correlation transmission. Such correlation transmission properties were also observed, and analytically derived for arbitrary input correlation strength, in an alternative approach that makes use of correlated Gaussian processes (Tchumatchenko et al. 2008). These results are all in line with the non-linear dependence of spike train correlations on the strength of input correlations that we observed and fitted by an exponential decay with interneuronal distance.

We saw that correlations are weakened as they are transferred to the output side of the neurons, but, as is to be expected, they are much higher for neighboring neurons in ring networks than in the homogeneously correlated random networks that receive more or less uncorrelated external inputs. The assumption that the spike train covariance functions are delta-shaped is certainly an over-simplification, especially in the Dale-conform ring networks (cf. the examples of spike train cross-correlation functions ψij(τ,D,0), Fig. 4(d)). The temporal width of the covariance functions leads to an increase in the estimated spike train correlations if the spike count bin-size h is increased. In Dale-conform ring graphs we found cij(1,0) ≤ 0.041 for time bins h = 0.1 ms (cf. Fig. 4(c), (d)). For h = 10 ms, a time window of the order of the membrane time constant \(\tau_{\sf m}\), we observed cij(1,0) ≤ 0.25 (not shown). This covers the spectrum of correlations reported in experimental studies, which range from 0.01 to approximately 0.3 (Zohary et al. 1994; Vaadia et al. 1995; Shadlen and Newsome 1998; Bair et al. 2001). For hybrid networks, however, the pairwise correlations have a narrow distribution around zero, irrespective of the topology. This explains the highly asynchronous dynamics in hybrid neuronal networks.

Finally, we suggest that the distribution of pairwise correlation coefficients of randomly chosen intracellularly recorded neurons may provide a means to distinguish different neuronal network topologies. Real neurons, however, have conductance-based synapses, and their filtering is strongly dependent on the membrane depolarization (Destexhe et al. 2003; Kuhn et al. 2004). Moreover, spikes are temporally extended events, usually with different synaptic time scales, and transmission delays are distributed and likely dependent on the distance between neurons. These effects, amongst others, might distort the results presented here. Still, though intracellular recordings are technically more involved than extracellular recordings, they are essentially analog signals, and hence much shorter recording periods are necessary to obtain sufficiently good statistics, as compared to the estimation of pairwise spike train correlations from low-rate spiking neurons (Lee et al. 2006). So, bell-shaped distributions of membrane potential correlations may hint towards an underlying random network structure, while heavy-tailed distributions should be observed for networks with locally confined neighborhoods. Naturally, the distribution will depend on the relation between the sampled region and the footprint of the neuron type one is interested in. This is true both for the model and for real neuronal tissue. Some idea about the potential input footprint, e.g. from reconstructions of the axonal and dendritic arbors (Hellwig 2000; Stepanyants et al. 2007), can help to estimate the spatial distance that must be covered. It is also a matter of the spatial scale that one is interested in: if one is mostly interested in very small, local networks (< 200 μm), where the connection probability might be considered approximately homogeneous (Hellwig 2000; Stepanyants et al. 2007), the correlation coefficient distribution will be akin to that of a random topology.
If one, however, samples over several millimeters, the distribution may tend more towards a heavy-tailed shape, due to the increase in the relative number of weakly correlated neuron pairs. At this scale, radial inhomogeneities, for example due to axonal patches (Lund et al. 2003) in two dimensions, or different connection probabilities within and between cortical layers (Binzegger et al. 2004) in three dimensions, must be taken into account as well, as they will distort the over-simplified connectivity assumption made here. In conclusion, we think that a further extension of the line of research presented here might provide a way to access structural features of neuronal networks by the analysis of their input statistics. This could eventually prove helpful in separating correlations that arise due to the specifics of the network structure from those that arise due to correlated input from other areas, e.g. sensory inputs, and provide insight into the relation between structure and function.


  1. The value of the clustering coefficient does not, in expectation, depend on the exact choice of triplet connectivity we ask for.

  2. The reverse is not true: a network can have a large amount of shared input without having a high clustering coefficient. An example is a star graph, in which a central neuron projects to N other neurons that are themselves unconnected.

  3. The density of neurons at a distance D generally behaves like P(D) ∼ D^{dim − 1} with dimensionality dim.

  4. This is due to the finite size of the network. For increasing network size N → ∞ the asynchronous-irregular state becomes stable.

  5. We refer to the distance in neuron indices here, which are arbitrarily defined to run from 1 to N in a clockwise manner. Hence the boxcar neighborhood of a neuron i includes {i − κ/2,...,i + κ/2} (modulo network size). Note that for \(p_{\sf r}>0\) this does not generally correspond to the topological neighborhood defined by adjacency anymore.



We thank Benjamin Staude, Marc Timme, and two anonymous reviewers for their valuable comments on an earlier version of the manuscript. We gratefully acknowledge funding by the German Federal Ministry of Education and Research (BMBF grants 01GQ0420 and 01GQ0430) and the European Union (EU Grant 15879, FACETS). All network simulations were carried out with the NEST simulation tool (Gewaltig and Diesmann 2007).

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.


  1. Albert, R., & Barabási, A.-L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 47–97.
  2. Bair, W., Zohary, E., & Newsome, W. (2001). Correlated firing in macaque visual area MT: Time scales and relationship to behavior. Journal of Neuroscience, 21(5), 1676–1697.
  3. Ben-Yishai, R., Bar-Or, R., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 3844.
  4. Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. Journal of Neuroscience, 24(39), 8441–8453.
  5. Bronstein, I. N., & Semendjajew, K. A. (1987). Taschenbuch der Mathematik (23rd ed.). Thun und Frankfurt/Main: Verlag Harri Deutsch.
  6. Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3), 183–208.
  7. Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7), 1621–1671.
  8. Chklovskii, D. B., Schikorski, T., & Stevens, C. F. (2002). Wiring optimization in cortical circuits. Neuron, 34, 341–347.
  9. Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge: MIT Press.
  10. De la Rocha, J., Doiron, B., Shea-Brown, E., Josić, K., & Reyes, A. (2007). Correlation between neural spike trains increases with firing rate. Nature, 448(16), 802–807.
  11. Destexhe, A., Rudolph, M., & Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4, 739–751.
  12. Ermentrout, G. B., & Cowan, J. D. (1979). A mathematical theory of visual hallucination patterns. Biological Cybernetics, 34, 137–150.
  13. Gewaltig, M.-O., & Diesmann, M. (2007). NEST (NEural Simulation Tool). Scholarpedia, 2(4), 1430.
  14. Hellwig, B. (2000). A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biological Cybernetics, 82(2), 111–121.
  15. Hoppensteadt, F. C., & Izhikevich, E. M. (1997). Weakly connected neural networks. New York: Springer.
  16. Jahnke, S., Memmesheimer, R., & Timme, M. (2008). Stable irregular dynamics in complex neural networks. Physical Review Letters, 100, 048102.
  17. Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., & Rotter, S. (2008). Correlations and population dynamics in cortical networks. Neural Computation, 20, 2185–2226.
  18. Kuhn, A., Aertsen, A., & Rotter, S. (2004). Neuronal integration of synaptic input in the fluctuation-driven regime. Journal of Neuroscience, 24(10), 2345–2356.
  19. Kumar, A., Rotter, S., & Aertsen, A. (2008a). Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. Journal of Neuroscience, 28(20), 5268–5280.
  20. Kumar, A., Schrader, S., Aertsen, A., & Rotter, S. (2008b). The high-conductance state of cortical networks. Neural Computation, 20(1), 1–43.
  21. Lee, A., Manns, I., Sakmann, B., & Brecht, M. (2006). Whole-cell recordings in freely moving rats. Neuron, 51, 399–407.
  22. Li, Z., & Dayan, P. (1999). Computational differences between asymmetrical and symmetrical networks. Network: Computation in Neural Systems, 10, 59–77.
  23. Lund, J. S., Angelucci, A., & Bressloff, P. C. (2003). Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cerebral Cortex, 12, 15–24.
  24. Mattia, M., & Del Giudice, P. (2002). Population dynamics of interacting spiking neurons. Physical Review E, 66, 051917.
  25. Mattia, M., & Del Giudice, P. (2004). Finite-size dynamics of inhibitory and excitatory interacting spiking neurons. Physical Review E, 70, 052903.
  26. Morrison, A., Mehring, C., Geisel, T., Aertsen, A., & Diesmann, M. (2005). Advancing the boundaries of high connectivity network simulation with distributed computing. Neural Computation, 17(8), 1776–1801.
  27. Nawrot, M. P., Boucsein, C., Rodriguez Molina, V., Riehle, A., Aertsen, A., & Rotter, S. (2008). Measurement of variability dynamics in cortical spike trains. Journal of Neuroscience Methods, 169, 374–390.
  28. Papoulis, A. (1991). Probability, random variables, and stochastic processes (3rd ed.). Boston: McGraw-Hill.
  29. Ren, M., Yoshimura, Y., Takada, N., Horibe, S., & Komatsu, Y. (2007). Specialized inhibitory synaptic actions between nearby neocortical pyramidal neurons. Science, 316, 758–761.
  30. Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18(10), 3870–3896.
  31. Shea-Brown, E., Josić, K., de la Rocha, J., & Doiron, B. (2008). Correlation and synchrony transfer in integrate-and-fire neurons: Basic properties and consequences for coding. Physical Review Letters, 100, 108102.
  32. Song, S., Sjöström, P. J., Reigl, M., Nelson, S., & Chklovskii, D. B. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3), 0507–0519.
  33. Sporns, O. (2003). Network analysis, complexity and brain function. Complexity, 8(1), 56–60.
  34. Sporns, O., & Zwi, D. Z. (2004). The small world of the cerebral cortex. Neuroinformatics, 2, 145–162.
  35. Stepanyants, A., Hirsch, J., Martinez, L. M., Kisvárday, Z. F., Ferecskó, A. S., & Chklovskii, D. B. (2007). Local potential connectivity in cat primary visual cortex. Cerebral Cortex, 18(1), 13–28.
  36. Strogatz, S. H. (2001). Exploring complex networks. Nature, 410, 268–276.
  37. Tchumatchenko, T., Malyshev, A., Geisel, T., Volgushev, M., & Wolf, F. (2008). Correlations and synchrony in threshold neuron models.
  38. Tetzlaff, T., Rotter, S., Stark, E., Abeles, M., Aertsen, A., & Diesmann, M. (2007). Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. Neural Computation, 20, 2133–2184.
  39. Timme, M. (2007). Revealing network connectivity from response dynamics. Physical Review Letters, 98, 224101.
  40. Timme, M., Wolf, F., & Geisel, T. (2002). Coexistence of regular and irregular dynamics in complex networks of pulse-coupled oscillators. Physical Review Letters, 89(25), 258701.
  41. Vaadia, E., Haalman, I., Abeles, M., Bergman, H., Prut, Y., Slovin, H., & Aertsen, A. (1995). Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. Nature, 373(6514), 515–518.
  42. van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.
  43. van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.
  44. Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393, 440–442.
  45. Yoshimura, Y., & Callaway, E. (2005). Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity. Nature Neuroscience, 8(11), 1552–1559.
  46. Yoshimura, Y., Dantzker, J., & Callaway, E. (2005). Excitatory cortical neurons form fine-scale functional networks. Nature, 433(24), 868–873.
  47. Zohary, E., Shadlen, M. N., & Newsome, W. T. (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. Nature, 370, 140–143.

Copyright information

© The Author(s) 2009

Authors and Affiliations

  • Birgit Kriener (1, 2, 4, 5)
  • Moritz Helias (1, 2)
  • Ad Aertsen (1, 2)
  • Stefan Rotter (1, 3)

  1. Bernstein Center for Computational Neuroscience, Albert-Ludwig University, Freiburg, Germany
  2. Neurobiology and Biophysics, Faculty of Biology, Albert-Ludwig University, Freiburg, Germany
  3. Computational Neuroscience, Faculty of Biology, Albert-Ludwig University, Freiburg, Germany
  4. Network Dynamics Group, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
  5. Bernstein Center for Computational Neuroscience, Göttingen, Germany
