# Correlations in spiking neuronal networks with distance dependent connections


## Abstract

Can the topology of a recurrent spiking network be inferred from observed activity dynamics? Which statistical parameters of network connectivity can be extracted from firing rates, correlations and related measurable quantities? To approach these questions, we analyze distance dependent correlations of the activity in small-world networks of neurons with current-based synapses, derived from a simple ring topology. We find that, in particular, the distribution of correlation coefficients of subthreshold activity can tell random networks apart from networks with distance dependent connectivity. Such distributions can be estimated by sampling from random pairs. We also demonstrate the crucial role of the weight distribution, most notably the compliance with Dale's principle, for the activity dynamics in recurrent networks of different types.

## Keywords

Spiking neural networks · Small-world networks · Pairwise correlations · Distribution of correlation coefficients

## 1 Introduction

The collective dynamics of balanced random networks has been studied extensively, assuming different neuron models as the constituting dynamical units (van Vreeswijk and Sompolinsky 1996, 1998; Brunel and Hakim 1999; Brunel 2000; Mattia and Del Giudice 2002; Timme et al. 2002; Mattia and Del Giudice 2004; Kumar et al. 2008b; Jahnke et al. 2008; Kriener et al. 2008).

Some of these models have in common that they assume random network topologies with a sparse connectivity *ϵ* ≈ 0.1 for a local, but large neuronal network, embedded into an “external” population that supplies an unspecific white noise drive to the local network. These systems are considered minimal models for cortical networks of about 1 mm^{3} volume, because they can display activity states similar to those observed *in vivo*, such as asynchronous irregular spiking. Yet, as recently reported (Song et al. 2005; Yoshimura et al. 2005; Yoshimura and Callaway 2005), local cortical networks are characterized by a circuitry that is specific and hence non-random even on a small spatial scale. Since it is still impossible to experimentally uncover the whole coupling structure of a neuronal network, it is necessary to infer some of its features from its activity dynamics. Timme (2007), for example, studied networks of *N* coupled phase oscillators in a stationary phase-locked state. In these networks it is possible to reconstruct details of the network coupling matrix (i.e. topology and weights) by slightly perturbing the stationary state under different driving conditions and analyzing the network response. Here, we focus on both network structure and activity dynamics in spiking neuronal networks on a statistical level. We consider several abstract model networks that range from strict distance dependent connectivity to random topologies, and examine their activity dynamics by means of numerical simulation and quantitative analysis. We focus on integrate-and-fire neurons arranged on regular rings, random networks, and so-called small-world networks (Watts and Strogatz 1998). Small-world structures seem to be optimal brain architectures for fast and efficient inter-areal information transmission with potentially low metabolic consumption and wiring costs due to a low characteristic path length ℓ (Chklovskii et al. 2002), while at the same time they may provide redundancy and error tolerance through highly recurrent computation (high clustering coefficient \(\mathcal{C}\); for the general definitions of ℓ and \(\mathcal{C}\), cf. e.g. Watts and Strogatz (1998), Albert and Barabasi (2002)). Also on an intra-areal level cortical networks may have pronounced small-world features, as was shown in simulations by Sporns and Zwi (2004), who assumed a local Gaussian connection probability and a uniform long-range connection probability for local cortical networks, assumptions that are in line with experimental observations (Hellwig 2000; Stepanyants et al. 2007). Network topology is just one aspect of neuronal network coupling, though. Here, we also demonstrate the crucial role of the weight distribution, especially with regard to the notion that all inhibitory neurons project only hyperpolarizing synapses onto their postsynaptic targets, while excitatory neurons project only depolarizing synapses. This assumption is sometimes referred to as *Dale*′*s* *principle* (Li and Dayan 1999; Dayan and Abbott 2001; Hoppensteadt and Izhikevich 1997). Strikingly, this has strong implications for the dynamical states of random networks already (Kriener et al. 2008). Yet, the main focus of the present study is on the distance dependence and overall distribution of correlation coefficients. Especially the joint statistics of subthreshold activity, i.e. correlations and coherences between the incoming currents that neurons integrate, has been shown to contain valuable information about network parameters, e.g. the mean connectivity in random networks (Tetzlaff et al. 2007).

The paper is structured as follows: In Section 2 we give a short description of the details of the neuron model and the simulation parameters used throughout the paper. In Section 3 we introduce the notion of small-world networks, and in Section 4 we discuss features of the activity dynamics in dependence of the topology. In ring and small-world networks groups of neighboring neurons tend to spike highly synchronously, while the population dynamics in random networks is asynchronous-irregular. To understand the source of these differences in the population dynamics, we analyze the correlations of the inputs of neurons in dependence of the network topology. Section 5 is devoted to the theoretical framework we apply to calculate the input correlations in dependence of the pairwise distance in sparse ring (Section 5.1) and small-world networks (Section 5.2). In Section 6 we finally derive the full distribution of correlation coefficients for ring and random networks. Random networks have rather narrow distributions centered around the mean correlation coefficient, while sparse ring and small-world networks have distributions with heavy tails. This is due to the high probability of sharing a common input partner if the neurons are topological neighbors, and the very low probability if they are far apart, yielding distributions with a few high correlation coefficients and many small ones. This offers a way to potentially distinguish random topologies from topologies with small-world features by their subthreshold activity dynamics on a statistical level.

## 2 Neuronal dynamics and synaptic input

Networks of *N* neurons are modeled as leaky integrate-and-fire point neurons with current-based synapses. The membrane potential dynamics *V*_{k}(*t*), *k* ∈ {1,...,*N*} of the neurons is given by \(\tau_{\sf m}\dot{V}_k(t)=-V_k(t)+R\,I_k(t)\), with membrane resistance *R* and membrane time constant \(\tau_{\sf m}\). Whenever *V*_{k}(*t*) reaches the threshold *θ*, a spike is emitted, *V*_{k}(*t*) is reset to \(V_{\sf res}\), and the neuron stays refractory for a period \(\tau_{\sf ref}\). Synaptic inputs are modeled as *δ*-currents. Whenever a presynaptic neuron *i* fires an action potential at time *t*_{il}, it evokes an exponential postsynaptic potential (PSP) of amplitude *J* after a transmission delay *Δ* that is the same for all synapses. Note that multiple connections between two neurons and self-connections are excluded in this framework. In addition to the local input, each neuron receives an external Poisson current \(I_{{\sf ext},k}\) mimicking inputs from other cortical areas or subcortical regions. The total input is thus given by \(I_k(t)=\sum_{i=1}^{N} W_{ki}\,S_i(t-\Delta)+I_{{\sf ext},k}(t)\), where *W*_{ki} is the synaptic weight from neuron *i* to neuron *k*, and \(S_i(t)=\sum_l \delta(t-t_{il})\) is the spike train of neuron *i*.

### 2.1 Parameters

The neuron parameters are set to \(\tau_{\sf m}=20\) ms, *R* = 80 M*Ω*, *J* = 0.1 mV, and *Δ* = 2 ms. The firing threshold *θ* is 20 mV and the reset potential \(V_{\sf res}=0\) mV. After a spike event, the neurons stay refractory for \(\tau_{\sf ref}=2\) ms. If not stated otherwise, all simulations are performed for networks of size *N* = 12,500, with \(N_{\sf E}=10,\!000\) excitatory and \(N_{\sf I}=2,\!500\) inhibitory neurons. The fraction of excitatory neurons in the network is thus \(\beta=N_{\sf E}/N=0.8\). The connectivity is set to *ϵ* = 0.1, such that each neuron receives exactly *κ* = *ϵ* *N* inputs. Inhibitory synapses are stronger than excitatory ones by a factor *g*, i.e. their weight is − *gJ* (cf. Section 5.1). For *g* = 4 inhibition hence balances excitation in the local network, while for *g* > 4 the local network is dominated by a net inhibition. Here, we choose *g* = 6. External inputs are modeled as \(K_{\sf ext}=\epsilon N_{\sf E}\) independent Poissonian sources with frequency \(\nu_{\sf ext}\). All network simulations were performed using the NEST simulation tool (Gewaltig and Diesmann 2007) with a temporal resolution of *h* = 0.1 ms. For details of the simulation technique see Morrison et al. (2005).
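The balance condition for *g* can be checked with a few lines. This is an illustrative sketch using the parameter values above, not the NEST simulation code used in the paper; `net_input_per_spike` is a hypothetical helper name.

```python
# Illustrative check of the balance condition described above (Section 2.1).
beta = 0.8            # fraction of excitatory neurons
epsilon = 0.1         # connectivity
N = 12_500
kappa = int(epsilon * N)   # inputs per neuron
J = 0.1               # PSP amplitude (mV)

def net_input_per_spike(g):
    """Summed PSP amplitude (mV) over all kappa local inputs, assuming each
    presynaptic neuron contributes one spike; g scales inhibitory weights."""
    exc = beta * kappa * J             # beta*kappa excitatory synapses of +J
    inh = (1 - beta) * kappa * g * J   # (1-beta)*kappa inhibitory of -g*J
    return exc - inh

print(f"g=4: {net_input_per_spike(4):+.1f} mV")   # ≈ 0: balanced
print(f"g=6: {net_input_per_spike(6):+.1f} mV")   # < 0: inhibition dominated
```

For *g* = *β*/(1 − *β*) = 4 the summed excitatory and inhibitory drives cancel exactly; the paper's choice *g* = 6 yields the inhibition-dominated regime.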

## 3 Structural properties of small-world networks

Small-world networks are constructed from a regular ring network of size *N*, where all nodes are connected to their *κ* ≪ *N* nearest neighbors (“boxcar footprint”), by random rewiring of connections with probability \(p_{\sf r}\) (cf. Fig. 1(a), (b)). Watts and Strogatz (1998) characterized the small-world regime by two graph-theoretical measures, a high clustering coefficient \(\mathcal{C}\) and a low characteristic path length ℓ (cf. Fig. 1(c)). The clustering coefficient \(\mathcal{C}\) measures the transitivity of the connectivity, i.e. how likely it is that, given there is a connection between nodes *i* and *j*, and between nodes *j* and *k*, there is also a connection between nodes *i* and *k*. The characteristic path length ℓ on the other hand quantifies how many steps on average suffice to get from some node in the network to any other node. In the following we will analyze small-world networks of spiking neurons. Networks can be represented by the adjacency matrix *A* with *A*_{ki} = 1 if node *i* is connected to *k*, and *A*_{ki} = 0 otherwise. We neglect self-connections, i.e. *A*_{kk} = 0 for all *k* ∈ {1,...,*N*}. In the original paper by Watts and Strogatz (1998) undirected networks were studied. Connections between neurons, i.e. synapses, are however generically directed. We define the clustering coefficient \(\mathcal{C}\) for directed networks^{1} as the average over the local clustering coefficients, i.e. the fraction of realized connections among the neighbors *l* of each node *k*, where *W*_{ki} are the weighted connections from neuron *i* to neuron *k* (cf. Section 2). The characteristic path length is given by \(\ell=\frac{1}{N(N-1)}\sum_{i\neq j}\ell_{ij}\), where ℓ_{ij} is the shortest path between neurons *i* and *j*, i.e. \(\ell_{ij}={\sf min}_{n\in \mathbb{Z}}\{(A^n)_{ij}>0\}\) (Albert and Barabasi 2002). The clustering coefficient is a local property of a graph, while the characteristic path length is a global quantity. This explains the relative stability of the clustering coefficient during gradual rewiring of connections: the local properties are hardly affected, whereas the introduction of random shortcuts decreases the average shortest path length dramatically (cf. Fig. 1(c)).

## 4 Activity dynamics in spiking small-world networks

For \(p_{\sf r}=0\), i.e. the regular ring network, the clustering coefficient is high (\(\mathcal{C}(0)\approx 3/4\)),^{2} which is the same as in undirected ring networks (Albert and Barabasi 2002). This leads to high input correlations and synchronous spiking of groups of neighboring neurons (Fig. 2(a)). As more and more connections are rewired, the local synchrony is attenuated and we observe a transition to a rather asynchronous global activity (Fig. 2(b), (c)). The clustering coefficient of the corresponding random graph equals \(\mathcal{C}(1)=\kappa/N=\epsilon\) (here *ϵ* = 0.1), because the probability to be connected is always *ϵ* for any two neurons, independent of the adjacency of the neurons (Albert and Barabasi 2002). This corresponds to the strength of the input correlations observed in these networks (Kriener et al. 2008). However, the population activity still shows pronounced fluctuations at a frequency of ∼1/(4*Δ*) (with the transmission delay *Δ* = 2 ms, cf. Section 2), even when the network is random (\(p_{\sf r}=1\), Fig. 2(c)). These fluctuations decrease dramatically if we violate *Dale*′*s* *principle*, i.e. the constraint that any neuron can either only depolarize or hyperpolarize all its postsynaptic targets, but not both at the same time. We refer to the latter as the *hybrid* scenario, in which neurons project both excitatory and inhibitory synapses (Kriener et al. 2008). Ren et al. (2007) suggest that about 30% of pyramidal cell pairs in layer 2/3 mouse visual cortex have effectively strongly reliable, short-latency inhibitory couplings via axo-axonic glutamate receptor mediated excitation of the nerve endings of inhibitory interneurons, thus bypassing dendrites, soma, and axonal trunk of the involved interneuron. These can be interpreted as hybrid-like couplings in real neural tissue.

To quantify the degree of synchrony, we consider the population spike count \(n(t;h)=\sum_{i=1}^{N}n_i(t;h)\) in time bins of width *h* = 0.1 ms, where \(n_i(t;h)=\int_{t}^{t+h}S_i(s)\,ds= \int_{t}^{t+h}\sum_l\delta(s-s_{il})\,ds\) is the number of spikes emitted by neuron *i* at time points *s*_{l} within the interval [*t*, *t* + *h*) (cf. Appendix A). If the population spike count *n*(*t*;*h*) is a compound process of independent stationary Poisson random variables *n*_{i}(*t*;*h*) with parameter \(\nu_{\sf o}h\), the Fano factor \({\sf FF}[n(t;h)]={\sf Var}[n(t;h)]/{\sf E}[n(t;h)]\) equals one, because \({\sf Cov}[n_i(t;h),n_j(t;h)]=0\) for *i* ≠ *j* and the variance of the sum equals the sum of the variances. If it is larger than one, this indicates positive correlations between the spiking activities of the individual neurons (cf. Appendix A) (Papoulis 1991; Nawrot et al. 2008; Kriener et al. 2008). We see (cf. Table 1) that it is indeed largest for the Dale-conform ring network, still manifestly larger than one for the Dale-conform random network, and about one for the hybrid networks in both the ring and the random case. The quantitative differences of the Fano factors in all four cases can be explained by the different amount of pairwise spike train correlations (cf. Appendix A, Section 6). This demonstrates how a violation of Dale’s principle stabilizes and actually enables asynchronous irregular activity, even in networks whose adjacency, i.e. the mere unweighted connectivity, suggests highly correlated activity, as is the case for Dale-conform ring (Fig. 2(a)) and small-world networks (Fig. 2(b)).

**Table 1** Mean population rates *ν*_{o} and Fano factors \({\sf FF}[n(t;h)]={\sf Var}[n(t;h)]/{\sf E}[n(t;h)]\) of the population spike count *n*(*t*;*h*) per time bin *h* (10 s of population activity, *N* = 12,500, bin size *h* = 0.1 ms) for the random Dale and hybrid networks and the corresponding ring networks

| Network type | Mean rate | Fano factor FF |
|---|---|---|
| Random, Dale | 12.9 Hz | 9.27 |
| Random, Hybrid | 12.8 Hz | 1.25 |
| Ring, Dale | 13.5 Hz | 26.4 |
| Ring, Hybrid | 13.1 Hz | 1.13 |
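The logic behind the Fano factor diagnostic can be sketched numerically: independent Poisson spiking yields FF ≈ 1, while a shared rate fluctuation (a stand-in for the correlated network activity above; the modulation model is an assumption for illustration, not the paper's network) inflates the population-count variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the population-count Fano factor used in Table 1.
N, rate, h, T = 1000, 13.0, 1e-4, 10.0    # neurons, Hz, bin (s), duration (s)
bins = int(T / h)

def fano(counts):
    return counts.var() / counts.mean()

# (1) independent Poisson neurons: counts per bin are Poisson(N*rate*h)
n_indep = rng.poisson(N * rate * h, size=bins)

# (2) a common multiplicative rate fluctuation shared by all neurons
mod = np.clip(1 + 0.5 * rng.standard_normal(bins), 0, None)
n_corr = rng.poisson(N * rate * h * mod)

print(f"FF independent: {fano(n_indep):.2f}")   # close to 1
print(f"FF correlated:  {fano(n_corr):.2f}")    # clearly above 1
```

Only the shared component raises FF above one, which is why the population count separates the Dale-conform from the hybrid networks in Table 1.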

To understand the origin of the different correlation strengths in the various network types, and hence the different spiking dynamics and population activities in dependence on both the weight distribution and the rewiring probability, we will extend our analysis introduced in Kriener et al. (2008) to ring and small-world networks in the following sections.

## 5 Distance dependent correlations in a shot-noise framework

We assume that the spike trains \(S_i(t)=\sum_l \delta(t-t_{il})\) are realizations of point processes corresponding to stationary correlated Poisson processes, such that \({\sf Cov}[S_i(t),S_j(t+\tau)]=c_{ij}\sqrt{\nu_i\nu_j}\,\delta(\tau)\), with correlation coefficients *c*_{ij} ∈ [ − 1,1] and mean rates *ν*_{i}, *ν*_{j} (cf. however Fig. 4(d)). The spike trains can either stem from the pool of local neurons *i* ∈ {1,...,*N*} or from external neurons \(i\,\in\{N+1,\ldots,N+NK_{\sf ext}\}\), where we assume that each neuron receives external inputs from \(K_{\sf ext}\) neurons, which are different for all *N* local neurons. We describe the total synaptic input *I*_{k}(*t*) of a model neuron *k* as a sum of linearly filtered presynaptic spike trains (i.e. the spike trains are convolved with filter-kernels *f*_{ki}(*t*)), also called shot noise (Papoulis 1991; Kriener et al. 2008): \(I_k(t)=\sum_i (f_{ki}\ast S_i)(t)\) (Eq. (11)), where the sum runs over all local and external presynaptic neurons.

*I*_{k}(*t*) could represent e.g. the weighted input current, the synaptic input current (*f*_{ki}(*t*) = unit postsynaptic current, PSC), or the free membrane potential (*f*_{ki}(*t*) = unit postsynaptic potential, PSP). All synapses are identical in their kinetics and differ only in strength *W*_{ki}, hence we can write \(f_{ki}(t)=W_{ki}\,f(t)\). With the filtered spike trains *s*_{i}(*t*): = (*S*_{i} ∗ *f*)(*t*), Eq. (11) is then rewritten as \(I_k(t)=\sum_i W_{ki}\,s_i(t)\). The cross-covariance of two inputs *I*_{k}, *I*_{l} is given by \({\sf Cov}[I_k(t),I_l(t+\tau)]=\sum_{i} W_{ki}W_{li}\,{\sf Cov}[s_i(t)s_i(t+\tau)]+\sum_{i\neq j} W_{ki}W_{lj}\,{\sf Cov}[s_i(t)s_j(t+\tau)]\) (Eq. (15), terms (i) and (ii)). The first sum (i) contains all contributions of the auto-covariance functions of filtered spike trains that stem from common presynaptic neurons *i* ∈ {1,...,*N*} (*W*_{ki}*W*_{li} ≠ 0, including \(W_{ki}^2\)). The second sum (ii) contains all contributions of the cross-covariance functions \({\sf Cov}[s_i(t)s_j(t+\tau)]\) of filtered spike trains that stem from different presynaptic neurons *i* ≠ *j*, *i*, *j* ∈ {1,...,*N*}, where we have already taken into account that the external spike sources are uncorrelated, and hence \({\sf Cov}[s_i(t)s_j(t+\tau)]=0\) for all \(i,j\in\{N+1,...,N+NK_{\sf ext}\}\). It is apparent that the high degree of shared input, as present in ring and small-world topologies, should show up in the spatial structure of input correlations between neurons. The closer two neurons *k*, *l* are located on the ring, the more common presynaptic neurons *i* they share. This will lead to a dominance of the first sum, unless the general strength of spike train covariances, accounted for in the second sum, is too high and the second sum dominates the structural amplifications, because it contributes quadratically in neuron number. If the input covariances due to the structural overlap of presynaptic pools are however dominant, a fraction of this input correlation should also be present at the output side of the neurons, i.e. the spike train covariances *c*_{ij} should be a function of the interneuronal distance as well. This is indeed the case, as we will see in the following.
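The shot-noise construction can be sketched numerically. The snippet below builds \(I_k(t)=\sum_i W_{ki}(S_i\ast f)(t)\) from independent Poisson spike trains with an exponential kernel and checks the mean against Campbell's theorem; pool sizes and rates are illustrative, only the weights *J* and − *gJ* follow Section 2.1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch: shot-noise input built from Poisson spike trains filtered with an
# exponential kernel f(t) = exp(-t/tau_m). Illustrative parameters.
h, T, tau_m = 1e-4, 20.0, 20e-3          # bin (s), duration (s), tau_m (s)
steps = int(T / h)
f = np.exp(-np.arange(0, 10 * tau_m, h) / tau_m)   # unit PSP kernel samples

n_exc, n_inh, nu = 80, 20, 10.0          # presynaptic pool sizes, rate (Hz)
J, g = 0.1, 6.0                          # weights J and -g*J (Section 2.1)

# summed weighted spike counts per bin (all trains independent Poisson)
drive = (J * rng.poisson(n_exc * nu * h, steps)
         - g * J * rng.poisson(n_inh * nu * h, steps))
I = np.convolve(drive, f)[:steps]        # the filtered (shot-noise) input

# Campbell's theorem: E[I] = (sum_i W_i) * nu * integral of f
mean_pred = (n_exc * J - n_inh * g * J) * nu * tau_m
print(I.mean(), mean_pred)
```

The empirical mean matches the Campbell prediction; the covariance decomposition of Eq. (15) applies to exactly such signals.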

The spike train correlation coefficients *c*_{ij} are in general dependent on the pairwise distance *D*_{ij} = |*i* − *j*| of neurons *i*, *j* (neurons labeled clockwise around the ring), and on the rewiring probability \(p_{\sf r}\). With Campbell’s theorem for shot noise (Papoulis 1991; Kriener et al. 2008) we can write the auto-covariance of a filtered spike train as \(a_i^{s}(\tau)=\nu_i\,\phi(\tau)\) and the cross-covariance of two filtered spike trains as \(c_{ij}^{s}(\tau,D_{ij},p_{\sf r})=c_{ij}(D_{ij},p_{\sf r})\sqrt{\nu_i\nu_j}\,\phi(\tau)\), where \(\phi(\tau)=\int f(t)f(t+\tau)\,dt\) is the auto-correlation of the filter kernel *f*(*t*). The correlation coefficient of the inputs *I*_{k}, *I*_{l} is defined as the input cross-covariance \(c_{\sf in}\) normalized by the input variance \(a_{\sf in}\). The auto-covariance of *I*_{k} explicitly equals \(a_{\sf in}=\sum_i W_{ki}^2\,a_i^s(0)+\sum_{i\neq j}W_{ki}W_{kj}\,c_{ij}^s(0)\). We assume stationary rates \(\nu_i=\nu_{\sf o}\) for all *i* ∈ {1,...,*N*}, with \(\nu_{\sf o}\) denoting the average stationary rate of the network neurons, and \(\nu_i=\nu_{\sf ext}\) for all \({i\in\{N+1,...,N+NK_{\sf ext}\}}\), with \(\nu_{\sf ext}\) denoting the rate of the external neurons. Hence, \(a_i^{s,{\sf loc}}(0)=\nu_{\sf o}\,\phi(0)\) and \(a_i^{s,{\sf ext}}(0)=\nu_{\sf ext}\,\phi(0)\) are the same for all neurons *i* ∈ {1,...,*N*} and \(i\in\{N+1,...,N+NK_{\sf ext}\}\), respectively. For the cross-covariance function we analogously have \( c^s_{ij}(0,D_{ij},p_{\sf r})=\nu_{\sf o}\,c_{ij}(D_{ij},p_{\sf r})\,\phi(0)\). We define *H*_{k} as the contribution of the shot noise variances *a*^{s} to the variance \(a_{\sf in}\) of the inputs (cf. Fig. 3(a)), *G*_{kl} as the contribution of the shot noise variances *a*^{s} to the cross-covariance \(c_{\sf in}\) of the inputs (cf. Fig. 3(c)), *L*_{k} as the contribution of the shot noise cross-covariances *c*^{s} to the auto-covariance \(a_{\sf in}\) of the inputs (cf. Fig. 3(b)), and *M*_{kl} as the contribution of the shot noise cross-covariances *c*^{s} to the cross-covariance \(c_{\sf in}\) of the inputs (cf. Fig. 3(d)). Since none of these quantities depends on the absolute positions of *k* and *l*, but only on the relative distance *D*_{kl} and the rewiring probability \(p_{\sf r}\), we can rewrite Eq. (18) as \(C_{\sf in}(D_{kl},p_{\sf r})=\frac{G(D_{kl},p_{\sf r})+M(D_{kl},p_{\sf r})}{H+L}\).

### 5.1 Ring graphs

In a ring graph, a fraction *βκ* of the presynaptic neurons *i* within the local input pool of neuron *k* is excitatory and depolarizes the postsynaptic neuron by *W*_{ki} = *J* with each spike, while (1 − *β*)*κ* presynaptic neurons are inhibitory and hyperpolarize the target neuron *k* by *W*_{ki} = − *gJ* per spike (cf. Section 2). Moreover, each neuron receives \(K_{\sf ext}\) excitatory inputs from the external neuron pool with *W*_{ki} = *J*. Hence, for all neurons *k* ∈ {1,...,*N*} we obtain for the input variance *H*_{k} = *H* (Eq. (22), Fig. 3(a)) \(H=\left(\beta+g^2(1-\beta)\right)\kappa J^2\,\nu_{\sf o}\,\phi(0)+K_{\sf ext}J^2\,\nu_{\sf ext}\,\phi(0)\). The contribution of the variances *a*^{s}(0) of the individual filtered spike trains to the input cross-covariance *G*_{kl}(*D*_{kl},0), Eq. (23), is basically the same as *H*, only scaled by the respective overlap of the two presynaptic neuron pools of neurons *k* and *l*. This overlap only depends on the distance *D*_{kl} between *k* and *l*, cf. Fig. 3(c). Hence, for all *k*, *l* ∈ {1,...,*N*} we have \(G(D_{kl},0)=\left(1-\frac{D_{kl}}{\kappa}\right)\Theta[\kappa-D_{kl}]\left(\beta+g^2(1-\beta)\right)\kappa J^2\,\nu_{\sf o}\,\phi(0)\), up to minor corrections due to the exclusion of self-connections; for large *κ* these corrections are negligible. *Θ*[*x*] is the Heaviside step function that equals 1 if *x* ≥ 0, and 0 if *x* < 0. If all incoming spike trains from local neurons are uncorrelated and Poissonian, and the external drive is a direct current, the complete input covariance stems from the structural (i.e. common input) correlation coefficient \(C_{\sf struc}(D_{kl},p_{\sf r}):=\frac{G(D_{kl},p_{\sf r})}{H}\) alone, which can then be written as \(C_{\sf struc}(D_{kl},0)=\left(1-\frac{D_{kl}}{\kappa}\right)\Theta[\kappa-D_{kl}]\) (Eq. (29)).

The measured spike train correlations *c*_{ij}(*D*_{ij},0) show however a pronounced distance dependent decay and reach non-negligible amplitudes up to *c*_{ij}(1,0) ≈ 0.04 (cf. Fig. 4(c)). In the following we will use two approximations of the distance dependence of *c*_{ij}(*D*_{ij},0), a linear relation and an exponential relation. We start by assuming a linear decay on the interval (0,*κ*] (cf. Fig. 4(c), black). This choice is motivated by two assumptions. First, we assume that the main source of spike correlations stems from the structural input correlations \(C_{\sf struc}(D_{ij},0)\), Eq. (29), of the input neurons *i*, *j* alone, i.e. the strength of the correlations between two input spike trains *S*_{i} and *S*_{j} depends on the overlap of their presynaptic input pools, determined by their interneuronal distance *D*_{ij}. Analogous to the reasoning that led to the common input correlations *G*(*D*_{kl},0)/*H*, Eq. (29), before, the output spike train correlation between neurons *i* and *j* will hence be zero if *D*_{ij} ≥ *κ*. Moreover, the neurons *i* and *j* will only contribute to the input currents of *k* and *l* if they are within a distance *κ*/2, that is *D*_{ki} < *κ*/2 and *D*_{lj} < *κ*/2. Hence, for the correlations of two input spike trains *S*_{i}, *S*_{j}, *i* ≠ *j* to contribute to the input covariance of neurons *k* and *l*, these must be within a range *κ* + 2*κ*/2 = 2*κ*. Second, we assume that the common input correlations \(C_{\sf struc}(D_{ij},0)\) are transmitted linearly with the same transmission gain \(\gamma:=c_{ij}(1,0)/C_{\sf struc}(1,0)\) to the output side of *i* and *j*. We hence make the following ansatz for the distance dependent correlations between the filtered input spike trains from neuron *i* to *k* and from neuron *j* to *l* (we always indicate the dependence on *k* and *l* by |_{kl}): \(c_{ij}(D_{ij},0)\big|_{kl}=\gamma\,C_{\sf struc}(D_{ij},0)\) (Eq. (30)). With this ansatz the contribution \(L^{\sf lin}(0)\) can be computed for all *k* ∈ {1,...,*N*} (cf. Appendix B for details of the derivation). The same holds for *M*; it is derived in Appendix B and explicitly given by Eq. (67). After calculation of \(L^{\sf lin}(0)\) and \(M^{\sf lin}(D_{kl},0)\) with the ansatz Eq. (30), we can plot \(C_{\sf in}(D_{kl},0)\) as a function of distance and get a curve as shown in Fig. 4(a). It is obvious (Fig. 4(c)) that the linear fit overestimates the spike train correlations as a function of distance; the correlation transmission decreases non-linearly with interneuronal distance, i.e. with the strength of the input correlation \(C_{\sf in}\) (De la Rocha et al. 2007; Shea-Brown et al. 2008). This leads to an overestimation of the total input correlations for distances *D*_{kl} ≥ *κ* (Fig. 4(a)). If we instead fit the distance dependence of the spike train correlations of neurons *i* and *j* by a decaying exponential function with a cut-off at *D*_{ij} = *κ*, \(c_{ij}(D_{ij},0)\big|_{kl}=\gamma\,e^{-\eta D_{ij}}\,\Theta[\kappa-D_{ij}]\), and fit *γ* and *η* to the values estimated from the simulations, the sums in Eqs. (31) and (32) can still be reduced to simple terms (cf. Eqs. (69), (70)) and the correspondence with the observed input correlations becomes very good over the whole range of distances (Fig. 4(b)). We conclude that the strong common input correlations \(C_{\sf struc}\) of neighboring neurons due to the structural properties of Dale-conform ring networks predominantly cause the spatio-temporally correlated spiking of neuron groups of size ∼*κ*.
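The boxcar-overlap argument behind \(C_{\sf struc}(D,0)=(1-D/\kappa)\,\Theta[\kappa-D]\) can be checked by direct counting. A sketch with a small illustrative network (indices taken modulo *N*; `presyn_pool` is a hypothetical helper):

```python
import numpy as np

# Sketch: shared-input overlap in a ring with boxcar footprint, compared with
# the structural correlation C_struc(D,0) = (1 - D/kappa) for D < kappa.
N, kappa = 1000, 100   # illustrative, not the paper's N = 12,500

def presyn_pool(k):
    """The kappa nearest neighbours of k (kappa/2 per side), excluding k."""
    offs = np.r_[np.arange(-kappa // 2, 0), np.arange(1, kappa // 2 + 1)]
    return set(((k + offs) % N).tolist())

for D in (1, 25, 50, 100, 150):
    shared = len(presyn_pool(0) & presyn_pool(D))
    # counted overlap fraction vs. the formula (small deviations reflect the
    # excluded self-connections mentioned in the text)
    print(D, shared / kappa, max(0.0, 1 - D / kappa))
```

The counted overlap fraction tracks (1 − *D*/*κ*) up to corrections of order 1/*κ* from the excluded self-connections, and vanishes for *D* > *κ*.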

Now consider *hybrid* neurons instead. Since the number of excitatory, inhibitory and external synapses is the same for all neurons, we get the same expressions for *H*, \(L^{\sf lin}(0)\) and \(M^{\sf lin}(D_{kl},0)\) for hybrid neurons as well, but the common input correlations become (the expectation value \({\sf E}_W\) is with regard to network realizations) \(G^{\sf hyb}(D_{kl},0)=\left(1-\frac{D_{kl}}{\kappa}\right)\Theta[\kappa-D_{kl}]\,\kappa\,{\sf E}_W[W_{ki}W_{li}]\,\nu_{\sf o}\,\phi(0)\), with \({\sf E}_W[W_{ki}W_{li}]=\left(\beta-g(1-\beta)\right)^2 J^2\), since the signs of the two synapses are drawn independently. The maximal amplitude of \(G^{\sf hyb}/H\) hence corresponds to the one reported for random networks (Kriener et al. 2008) and equals 0.02 for the parameters used here. This is in line with the *average* input correlations in the hybrid ring network (Fig. 4(e)). They are hence only about half the correlation of the spike trains in the Dale-conform ring network (Fig. 4(c)). If we assume the correlation transmission for the highest possible input correlation to be the same as in the Dale case (*γ* ≈ 0.04), we estimate spike train correlations of the order of \(c_{ij}\sim 10^{-4}\). The measured average values from simulations indeed give correlations of that range (Fig. 4(f)), and are hence of the same order as in hybrid random networks (Kriener et al. 2008). As we will show in Section 6, the distribution of input correlation coefficients \(C^{\sf hyb}_{\sf in}(D_{kl},0)\) is centered close to zero, with a high peak at zero and both negative and positive contributions. In the Dale-conform ring network, however, we only observe positive correlation coefficients with values up to nearly one (\(C_{\sf in}(1,0)\) can only reach \(C_{\sf struc}(1,0)= 1\) if we apply identical external input to all neurons).
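Why hybrid synapses decorrelate shared input can be seen from the weight statistics alone. For a presynaptic neuron *i* projecting to both *k* and *l*, the common-input covariance carries a factor \({\sf E}[W_{ki}W_{li}]\): under Dale's principle both weights share the sign of neuron *i*, while in the hybrid scenario each synapse draws its sign independently. An illustrative Monte Carlo sketch with the parameters of Section 2.1 (the sampling model is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: E[W_ki * W_li] for a shared presynaptic partner, Dale vs. hybrid.
beta, J, g, M = 0.8, 0.1, 6.0, 1_000_000

# Dale: neuron i is excitatory (prob beta) -> both weights +J, else both -g*J
w_i = np.where(rng.random(M) < beta, J, -g * J)
dale = np.mean(w_i * w_i)

# Hybrid: the two synapses draw their sign independently of each other
w_k = np.where(rng.random(M) < beta, J, -g * J)
w_l = np.where(rng.random(M) < beta, J, -g * J)
hyb = np.mean(w_k * w_l)

# analytically: Dale   beta*J^2 + (1-beta)*g^2*J^2 = 0.08
#               hybrid (beta*J - (1-beta)*g*J)^2   = 0.0016
print(f"E[W_ki W_li], Dale:   {dale:.4f}")
print(f"E[W_ki W_li], hybrid: {hyb:.4f}")
```

The shared-input weight product shrinks by a factor of 50 for these parameters, which is the decorrelation mechanism invoked above.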

This transfers to the spike generation process and hence explains the dramatically different global spiking behavior, as well as the different Fano factors of the population spike counts (cf. Table 1, Appendix A) in both network types due to the decorrelation of common inputs in hybrid networks.

### 5.2 Small-world networks

The clustering coefficient is directly related to the amount of common input shared by two neurons *l* and *k*, and hence to the strength of distance dependent correlations. When it gets close to the random graph value, as is the case for \(p_{\sf r} >0.3\), the input correlations also become similar to those of the corresponding balanced random network (cf. Fig. 5). If we randomize the network gradually by rewiring a fraction \(p_{\sf r}\) of the connections, the input variance *H* is not affected. However, the input covariances due to common input \(G(D_{kl},p_{\sf r})\) do not only depend on the distance, but also on \(p_{\sf r}\). The boxcar footprints of the ring network get diluted during rewiring, so a distance *D*_{kl} < *κ* does not imply anymore that all neurons within the overlap (*κ* − *D*_{kl}) of the two boxcars project to both or any of the neurons *k*, *l* (cf. Appendix B, Fig. 8(b)). At the same time the probability to receive inputs from neurons outside the boxcar increases during rewiring. These contributions are independent of the pairwise distance. Still, the probability for two neurons *k*, *l* with *D*_{kl} < *κ* to receive input from common neurons within the overlap of the (diluted) boxcars is always higher than the probability to get synapses from common neurons in the rest of the network, as long as \(p_{\sf r}<1\): those input synapses that were not chosen for rewiring adhere to the boxcar footprint, and at the same time the boxcar regains a fraction of its synapses during the random rewiring. So, if *D*_{kl} < *κ* there are three different sources of common input to two neurons *k* and *l* that we have to account for: neurons within the overlap of the input boxcars that still have or re-established their synapses to *k* and *l* (possible in region ‘a’ in Fig. 8(b)); neurons that did not have any synapse to either *k* or *l*, but project to both *k* and *l* after rewiring (possible in region ‘c’ in Fig. 8(b)); and neurons that are in the boxcar footprint of one neuron *k* and got randomly rewired to another neuron outside of *l*’s boxcar footprint (possible in region ‘b’ in Fig. 8(b)). This implies that after rewiring neurons can be correlated due to common input in the regions ‘b’ and ‘c’, even if they are further apart than *κ*. These correlations due to the random rewiring alone are then independent of the distance between *k* and *l*. The probabilities for all these contributions to the total common input covariance \(G_{kl}(D_{kl},p_{\sf r})\) are derived in detail in Appendix B; ignoring the minor corrections due to the exclusion of self-couplings, we obtain a closed expression for \(G(D_{kl},p_{\sf r})\) valid for all *k*, *l* ∈ {1,...,*N*}. There are also contributions of the spike train cross-covariances to the input covariances of *I*_{k}, *I*_{l} if *D*_{kl} < 2*κ*. With the linear distance dependence assumption we obtain \(M^{\sf lin}(D_{kl},p_{\sf r})\) for all *k*, *l* ∈ {1,...,*N*} (cf. Appendix B). For \(p_{\sf r}=1\) the spike train correlations *c*_{ij}(*D*_{ij},1) are independent of distance and proportional to the network connectivity *ϵ* (cf. also Kriener et al. (2008)), i.e. *c*_{ij}(*D*_{ij},1) = *ϵγ*(1). This is indeed the case with the linear ansatz, as one can easily check with *p*_{1}(1) = *p*_{2}(1) = *κ*/*N*, cf. Eqs. (36), (37). The resulting input correlations show a distance dependence for *D*_{kl} < 2*κ*. As one can see in Fig. 6(b), the distance dependence of the spike correlations approaches a linear relation as the networks leave the small-world regime (\(p_{\sf r} > 0.3\)), and the linear model, Eq. (30), becomes more adequate.
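The dilution of the boxcar overlap under rewiring can be illustrated by counting shared presynaptic partners in a toy network. The construction below (every incoming synapse independently rewired to a uniformly random source with probability \(p_{\sf r}\); duplicates and self-connections ignored for simplicity) is an illustrative simplification of the rewiring scheme, with small illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: shared presynaptic partners of two neurons at distance D, for
# increasing rewiring probability p_r.
N, kappa = 2000, 200

def pool(k, p_r):
    offs = np.r_[np.arange(-kappa // 2, 0), np.arange(1, kappa // 2 + 1)]
    pre = (k + offs) % N
    rewire = rng.random(kappa) < p_r          # synapses chosen for rewiring
    pre[rewire] = rng.integers(N, size=rewire.sum())
    return set(pre.tolist())

for p_r in (0.0, 0.3, 1.0):
    near = np.mean([len(pool(0, p_r) & pool(50, p_r)) for _ in range(20)])
    far = np.mean([len(pool(0, p_r) & pool(1000, p_r)) for _ in range(20)])
    print(f"p_r={p_r}: shared near {near:.1f}, shared far {far:.1f}")
```

For \(p_{\sf r}=0\) the shared input is large for near pairs and zero for far pairs; for \(p_{\sf r}=1\) both approach the distance-independent random-network value ∼*κ*²/*N*, mirroring the behavior of \(G(D_{kl},p_{\sf r})\) described above.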

## 6 Distribution of correlation coefficients in ring and random networks

The structural correlation coefficients \(C_{\sf struc}\) are independent of the choice of the filter kernels *f*_{ki}(*t*), cf. Section 5. Note that the mean structural input correlation coefficient \(\bar{c}_{\sf struc}^{\sf dale/hyb}\) is independent of ring or random topology, both in the Dale and the hybrid case, while the distributions differ dramatically (Fig. 7). The distribution of \(C_{\sf struc}\) can be given in closed form for **ring networks**^{3} and for **random networks** (Kriener et al. 2008); in both cases (1 − *β*) and *β* are the fractions of inhibitory and excitatory inputs per neuron (each the same for all neurons).

For a ring graph the structural correlation coefficient distribution \(P(C_{\sf struc})(D,0)\) has the probability mass \(\frac{N-2\kappa}{N}\) at the origin, \(\frac{2}{N}\) in the discrete open interval (0,1), and \(\frac{1}{N}\) at 1, if we include the variance for distance *D* = 0. However, due to the non-negligible spike train correlations *c* _{ ij }(*D*,0), the actually measured input correlations \(C_{\sf in}(D,0)^{\sf ring}_{\sf dale}\) have a considerably different distribution that has less mass at 0 due to the positive input correlations up to a distance ∼2*κ*. They are very well described by the full theory with an exponential ansatz for the spike train correlations as described in Section 5.1, Eq. (41). These two limiting cases emphasize that the distribution of input (subthreshold) correlations may give valuable information about whether there is a high degree of locally shared input (heavy tail probability distribution \(P(C_{\sf struc})\)) or if it is rather what is to be expected from random connectivity in a Dale-conform network.
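The contrast between the two distributions can be demonstrated by sampling shared-input fractions from random pairs, as the abstract suggests. In this sketch, \(C_{\sf struc}\) is approximated by the shared-input fraction of a pair; the network is small and illustrative, not the paper's *N* = 12,500, and `sample_cstruc` is a hypothetical helper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch: distribution of structural (common-input) correlation coefficients,
# sampled from random pairs, ring vs. random network.
N, kappa, pairs = 2000, 200, 2000
offs = np.r_[np.arange(-kappa // 2, 0), np.arange(1, kappa // 2 + 1)]

def sample_cstruc(ring):
    vals = []
    for _ in range(pairs):
        k, l = rng.choice(N, size=2, replace=False)
        if ring:   # boxcar footprints around k and l
            pk = set(((k + offs) % N).tolist())
            pl = set(((l + offs) % N).tolist())
        else:      # kappa random presynaptic partners each
            pk = set(rng.choice(N, size=kappa, replace=False).tolist())
            pl = set(rng.choice(N, size=kappa, replace=False).tolist())
        vals.append(len(pk & pl) / kappa)
    return np.array(vals)

ring_c, rand_c = sample_cstruc(True), sample_cstruc(False)
# Nearly identical means, very different shapes: the ring samples pile up at
# zero with a heavy tail of large values; the random samples cluster narrowly
# around kappa/N = 0.1.
print(ring_c.mean(), rand_c.mean())
print((ring_c == 0).mean(), (ring_c > 0.5).mean(), rand_c.std())
```

This is the statistical fingerprint exploited in this section: equal means, but a heavy-tailed distribution for distance dependent connectivity versus a narrow one for random connectivity.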

## 7 Discussion

We analyzed the activity dynamics in sparse neuronal networks with ring, small-world and random topologies. In networks with a high clustering coefficient \(\mathcal{C}\) such as ring and small-world networks, neighboring neurons tend to fire highly synchronously. With increasing randomness, governed by the rewiring probability \(p_{\sf r}\), activity becomes more asynchronous, but even in random networks we observe a high Fano factor FF of the population spike counts, indicating its residual synchrony.^{4} As shown by Kriener et al. (2008) these fluctuations become strongly attenuated for *hybrid* neurons which have both excitatory and inhibitory synaptic projections.

Here, we demonstrated that the introduction of hybrid neurons leads to highly asynchronous (FF ≈ 1) population activity even in networks with ring topology. Recent experimental data suggest that there are abundant fast and reliable couplings between pyramidal cells which are effectively inhibitory (Ren et al. 2007) and which might be interpreted as hybrid-like couplings. However, the hybrid concept contradicts the general paradigm that pyramidal cells depolarize all their postsynaptic targets while inhibitory interneurons hyperpolarize them, a paradigm known as *Dale*′*s* *principle* (Li and Dayan 1999; Dayan and Abbott 2001; Hoppensteadt and Izhikevich 1997). As we showed here, a severe violation of Dale’s principle renders the specifics of network topology meaningless, and might even impede functionally important processes, such as pattern formation or line attractors in ring networks (see e.g. Ben-Yishai et al. 1995; Ermentrout and Cowan 1979), or the propagation of synchronous activity in recurrent cortical networks (Kumar et al. 2008a).

We demonstrated that the difference in the amplitude of population activity fluctuations between Dale-conform and hybrid networks can be understood from the differences in the input correlation structure of the two network types. We extended the ansatz presented in Kriener et al. (2008) to networks with ring and small-world topology and derived the input correlations as a function of the pairwise distance of neurons and the rewiring probability. Because of the strong overlap of the input pools of neighboring neurons in ring and small-world networks, the assumption that the spike trains of different neurons are uncorrelated, an assumption justified in sparse balanced random networks, is no longer valid. We fitted the distance dependent instantaneous spike train correlations and took them adequately into account. This led to a highly accurate prediction of input correlations.

A fully self-consistent treatment of correlations is, however, beyond the scope of the analysis presented here. As we saw in Section 5.1, in Dale-conform ring graphs neurons cover essentially the whole spectrum of positive input correlation strengths, from almost one (depending on the level of variance of the uncorrelated external input) down to zero, as a function of pairwise distance *D*. The ratio between input and output correlation strength is not constant: stronger input correlations are transmitted with a higher gain. The exact mechanism of this non-linear correlation transmission needs further analysis. Recent analyses of correlation transfer in integrate-and-fire neurons by De la Rocha et al. (2007) and Shea-Brown et al. (2008) showed that the spike train correlations can be written as a linear function of the input correlations, provided these are small (\(C_{\sf in}\in[0,0.3]\)). For larger \(C_{\sf in}\), however, De la Rocha et al. (2007) and Shea-Brown et al. (2008) report supralinear correlation transmission. Such transmission properties were also observed, and analytically derived for arbitrary input correlation strength, in an alternative approach based on correlated Gauss processes (Tchumatchenko et al. 2008). These results are all in line with the non-linear dependence of spike train correlations on the strength of input correlations that we observed and fitted by an exponential decay with interneuronal distance.
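The supralinear gain can be reproduced in a minimal threshold-unit sketch in the spirit of the Gauss-process approach, not the paper's integrate-and-fire model: two units emit a "spike" when their correlated Gaussian inputs exceed a threshold, and the output correlation is compared to the input correlation. Threshold and sample count are arbitrary assumptions.

```python
import random

# Sketch of non-linear correlation transmission in threshold units
# (illustrative stand-in for the models discussed in the text). Two units
# fire when their Gaussian inputs cross a threshold; we compare the gain
# (output/input correlation) for weak and strong input correlation.
random.seed(4)
N_SAMPLES, THETA = 200000, 1.5  # illustrative parameters

def output_corr(rho_in):
    """Pearson correlation of the two binary threshold crossings for
    inputs with correlation coefficient rho_in."""
    a = rho_in ** 0.5
    b = (1.0 - rho_in) ** 0.5
    n1x = nx1 = n11 = 0
    for _ in range(N_SAMPLES):
        c = random.gauss(0.0, 1.0)          # shared input component
        x = a * c + b * random.gauss(0.0, 1.0)
        y = a * c + b * random.gauss(0.0, 1.0)
        sx, sy = x > THETA, y > THETA
        n1x += sx
        nx1 += sy
        n11 += sx and sy
    px, py, p11 = n1x / N_SAMPLES, nx1 / N_SAMPLES, n11 / N_SAMPLES
    return (p11 - px * py) / (px * (1 - px) * py * (1 - py)) ** 0.5

weak, strong = output_corr(0.2), output_corr(0.8)
print(weak / 0.2, strong / 0.8)  # gain is larger for stronger correlations
```

Output correlations stay below input correlations (attenuation), yet the gain grows with the input correlation, i.e. the transfer is supralinear, in line with the studies cited above.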

We saw that correlations are weakened as they are transferred to the output side of the neurons, but, as is to be expected, they are much higher for neighboring neurons in ring networks than in the homogeneously correlated random networks that receive more or less uncorrelated external inputs. The assumption that the spike train covariance functions are delta-shaped is certainly an over-simplification, especially in the Dale-conform ring networks (cf. the examples of spike train cross-correlation functions *ψ* _{ ij }(*τ*,*D*,0), Fig. 4(d)). The temporal width of the covariance functions causes the estimated spike train correlations to increase with the spike count bin size *h*. In Dale-conform ring graphs we found *c* _{ ij }(1,0) ≤ 0.041 for time bins *h* = 0.1 ms (cf. Fig. 4(c), (d)). For *h* = 10 ms, a time window of the order of the membrane time constant \(\tau_{\sf m}\), we observed *c* _{ ij }(1,0) ≤ 0.25 (not shown). This covers the spectrum of correlations reported in experimental studies, which range from 0.01 to approximately 0.3 (Zohary et al. 1994; Vaadia et al. 1995; Shadlen and Newsome 1998; Bair et al. 2001). For hybrid networks, in contrast, the pairwise correlations have a narrow distribution around zero, irrespective of the topology. This explains the highly asynchronous dynamics in hybrid neuronal networks.
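The bin-size dependence of the estimate can be demonstrated with a toy model: two spike trains share jittered copies of common events, so the covariance function has a finite temporal width. All rates and the jitter width below are arbitrary assumptions, not values fitted to the simulations.

```python
import random

# Sketch: the estimated spike-count correlation grows with the bin size h
# when the spike-train covariance function has finite temporal width.
# Toy model with illustrative parameters: two trains share common events,
# with a few ms of relative jitter between the copies.
random.seed(2)
T_MS = 200000.0
RATE = 0.005   # common events per ms
JITTER = 3.0   # ms standard deviation of the relative jitter

events = [random.uniform(0, T_MS) for _ in range(int(RATE * T_MS))]
train_a = events
train_b = [t + random.gauss(0.0, JITTER) for t in events]

def count_corr(h):
    """Pearson correlation of spike counts in bins of width h (ms)."""
    nbins = int(T_MS / h)
    ca, cb = [0] * nbins, [0] * nbins
    for t in train_a:
        if 0 <= t < T_MS:
            ca[int(t / h)] += 1
    for t in train_b:
        if 0 <= t < T_MS:
            cb[int(t / h)] += 1
    ma, mb = sum(ca) / nbins, sum(cb) / nbins
    cov = sum((a - ma) * (b - mb) for a, b in zip(ca, cb)) / nbins
    va = sum((a - ma) ** 2 for a in ca) / nbins
    vb = sum((b - mb) ** 2 for b in cb) / nbins
    return cov / (va * vb) ** 0.5

c1, c10 = count_corr(1.0), count_corr(10.0)
print(c1, c10)  # the estimate grows with the bin width
```

Narrow bins miss jittered coincidences; bins wider than the jitter capture them, so the same spike trains yield a much larger correlation coefficient, mirroring the *h* = 0.1 ms versus *h* = 10 ms comparison above.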

Finally, we suggest that the distribution of pairwise correlation coefficients of randomly chosen intracellularly recorded neurons may provide a means to distinguish different neuronal network topologies. Real neurons, however, have conductance-based synapses, and their filtering depends strongly on the membrane depolarization (Destexhe et al. 2003; Kuhn et al. 2004). Moreover, spikes are temporally extended events, usually with different synaptic time scales, and transmission delays are distributed and likely dependent on the distance between neurons. These effects, amongst others, might distort the results presented here. Still, though intracellular recordings are technically more involved than extracellular recordings, they yield essentially analog signals, so much shorter recording periods suffice for sufficiently good statistics, compared to the estimation of pairwise spike train correlations from low-rate spiking neurons (Lee et al. 2006). Bell-shaped distributions of membrane potential correlations may thus hint at an underlying random network structure, while heavy-tailed distributions should be observed for networks with locally confined neighborhoods. Naturally, the distribution will depend on the relation between the sampled region and the footprint of the neuron type one is interested in. This holds for the model as well as for real neuronal tissue. Some idea about the potential input footprint, e.g. from reconstructions of the axonal and dendritic arbors (Hellwig 2000; Stepanyants et al. 2007), can help to estimate the spatial distance that must be covered. It is also a matter of the spatial scale one is interested in: if one is mostly interested in very small, local networks < 200 *μ*m, where the connection probability might be considered approximately homogeneous (Hellwig 2000; Stepanyants et al. 2007), the correlation coefficient distribution will be akin to that of a random topology. If one, however, samples several millimeters, the distribution may tend towards a heavy-tailed shape, due to the increase in the relative number of weakly correlated neuron pairs. At this scale radial inhomogeneities, such as axonal patches (Lund et al. 2003) in two dimensions, or different connection probabilities within and between cortical layers (Binzegger et al. 2004) in three dimensions, must be taken into account as well, as they will distort the over-simplified connectivity assumption made here. In conclusion, we think that a further extension of the line of research presented here might provide a way to access structural features of neuronal networks through the analysis of their input statistics. This could eventually help separate correlations that arise from the specifics of the network structure from those that arise from correlated input from other areas, e.g. sensory inputs, and provide insight into the relation between structure and function.
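The proposed sampling strategy can be mimicked in a structural toy calculation. The sketch below is an assumption-laden proxy, not a membrane-potential simulation: it draws random neuron pairs, uses the normalized input-pool overlap in place of the measured correlation coefficient, and compares the skewness of the sampled distributions for a ring and a random graph with matched in-degree (all parameters illustrative).

```python
import random

# Sketch: the shape of a sampled correlation-coefficient distribution as a
# topology fingerprint. Proxy: normalized input-pool overlap of random
# pairs. A ring yields a heavy-tailed distribution (most pairs near zero,
# a few large values); a random graph a narrow one around epsilon = kappa/N.
random.seed(3)
N, kappa = 500, 50

ring = [{(i + d) % N for d in range(-kappa // 2, kappa // 2 + 1) if d != 0}
        for i in range(N)]
rand = [set(random.sample(range(N), kappa)) for _ in range(N)]

def sample_overlaps(pools, n_pairs=2000):
    """Normalized input overlaps of randomly drawn neuron pairs."""
    out = []
    for _ in range(n_pairs):
        i, j = random.sample(range(N), 2)
        out.append(len(pools[i] & pools[j]) / kappa)
    return out

def skewness(xs):
    """Sample skewness as a simple summary of tail-heaviness."""
    m = sum(xs) / len(xs)
    s2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m3 = sum((x - m) ** 3 for x in xs) / len(xs)
    return m3 / s2 ** 1.5

sk_ring = skewness(sample_overlaps(ring))
sk_rand = skewness(sample_overlaps(rand))
print(sk_ring, sk_rand)  # ring distribution is far more right-skewed
```

A strongly right-skewed sample thus points towards locally confined connectivity, whereas a near-symmetric one is consistent with a random topology, in the sense discussed above.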

## Footnotes

- 1. The value of the clustering coefficient does, in expectation, not depend on the exact choice of triplet connectivity we ask for.
- 2. The reverse is not true: we can have a large amount of shared input in a network without a high clustering coefficient. An example is a star graph, in which a central neuron projects to *N* other neurons that are in turn not connected.
- 3. The density of neurons at distance *D* generally behaves like *P*(*D*) ∼ *D*^{dim − 1} with dimensionality *dim*.
- 4. This is due to the finite size of the network. For increasing network size *N* → ∞ the asynchronous-irregular state becomes stable.
- 5. We refer to the distance in neuron indices here, which are arbitrarily defined to run from 1 to *N* in a clockwise manner. Hence the boxcar neighborhood of a neuron *i* includes {*i* − *κ*/2,...,*i* + *κ*/2} (modulo network size). Note that for \(p_{\sf r}>0\) this does not generally correspond to the topological neighborhood defined by adjacency anymore.

## Notes

### Acknowledgements

We thank Benjamin Staude, Marc Timme, and two anonymous reviewers for their valuable comments on an earlier version of the manuscript. We gratefully acknowledge funding by the German Federal Ministry of Education and Research (BMBF grants 01GQ0420 and 01GQ0430) and the European Union (EU Grant 15879, FACETS). All network simulations were carried out with the NEST simulation tool (http://www.nest-initiative.org).

### **Open Access**

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

## References

- Albert, R., & Barabasi, A. (2002). Statistical mechanics of complex networks. *Reviews of Modern Physics, 74*, 47–97.
- Bair, W., Zohary, E., & Newsome, W. (2001). Correlated firing in Macaque visual area MT: Time scales and relationship to behavior. *Journal of Neuroscience, 21*(5), 1676–1697.
- Ben-Yishai, R., Bar-Or, R., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. *Proceedings of the National Academy of Sciences of the United States of America, 92*, 3844.
- Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. *Journal of Neuroscience, 24*(39), 8441–8453.
- Bronstein, I. N., & Semendjajew, K. A. (1987). *Taschenbuch der Mathematik* (23rd ed.). Thun und Frankfurt/Main: Verlag Harri Deutsch.
- Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. *Journal of Computational Neuroscience, 8*(3), 183–208.
- Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. *Neural Computation, 11*(7), 1621–1671.
- Chklovskii, D. B., Schikorski, T., & Stevens, C. F. (2002). Wiring optimization in cortical circuits. *Neuron, 34*, 341–347.
- Dayan, P., & Abbott, L. F. (2001). *Theoretical neuroscience*. Cambridge: MIT.
- De la Rocha, J., Doiron, B., Shea-Brown, E., Josic, K., & Reyes, A. (2007). Correlation between neural spike trains increases with firing rate. *Nature, 448*(16), 802–807.
- Destexhe, A., Rudolph, M., & Pare, D. (2003). The high-conductance state of neocortical neurons *in vivo*. *Nature Reviews. Neuroscience, 4*, 739–751.
- Ermentrout, G. B., & Cowan, J. D. (1979). A mathematical theory of visual hallucination patterns. *Biological Cybernetics, 34*, 137–150.
- Gewaltig, M.-O., & Diesmann, M. (2007). NEST (Neural simulation tool). *Scholarpedia, 2*(4), 1430.
- Hellwig, B. (2000). A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. *Biological Cybernetics, 82*(2), 111–121.
- Hoppensteadt, F. C., & Izhikevich, E. M. (1997). *Weakly connected neural networks*. New York: Springer.
- Jahnke, S., Memmesheimer, R., & Timme, M. (2008). Stable irregular dynamics in complex neural networks. *Physical Review Letters, 100*, 048102.
- Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., & Rotter, S. (2008). Correlations and population dynamics in cortical networks. *Neural Computation, 20*, 2185–2226.
- Kuhn, A., Aertsen, A., & Rotter, S. (2004). Neuronal integration of synaptic input in the fluctuation-driven regime. *Journal of Neuroscience, 24*(10), 2345–2356.
- Kumar, A., Rotter, S., & Aertsen, A. (2008a). Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model. *Journal of Neuroscience, 28*(20), 5268–5280.
- Kumar, A., Schrader, S., Aertsen, A., & Rotter, S. (2008b). The high-conductance state of cortical networks. *Neural Computation, 20*(1), 1–43.
- Lee, A., Manns, I., Sakmann, B., & Brecht, M. (2006). Whole-cell recordings in freely moving rats. *Neuron, 51*, 399–407.
- Li, Z., & Dayan, P. (1999). Computational differences between asymmetrical and symmetrical networks. *Network: Computing Neural Systems, 10*, 59–77.
- Lund, J. S., Angelucci, A., & Bressloff, P. C. (2003). Anatomical substrates for functional columns in macaque monkey primary visual cortex. *Cerebral Cortex, 12*, 15–24.
- Mattia, M., & Del Guidice, P. (2002). Population dynamics of interacting spiking neurons. *Physical Review E, 66*, 051917.
- Mattia, M., & Del Guidice, P. (2004). Finite-size dynamics of inhibitory and excitatory interacting spiking neurons. *Physical Review E, 70*, 052903.
- Morrison, A., Mehring, C., Geisel, T., Aertsen, A., & Diesmann, M. (2005). Advancing the boundaries of high connectivity network simulation with distributed computing. *Neural Computation, 17*(8), 1776–1801.
- Nawrot, M. P., Boucsein, C., Rodriguez Molina, V., Riehle, A., Aertsen, A., & Rotter, S. (2008). Measurement of variability dynamics in cortical spike trains. *Journal of Neuroscience Methods, 169*, 374–390.
- Papoulis, A. (1991). *Probability, random variables, and stochastic processes* (3rd ed.). Boston: McGraw-Hill.
- Ren, M., Yoshimura, Y., Takada, N., Horibe, S., & Komatsu, Y. (2007). Specialized inhibitory synaptic actions between nearby neocortical pyramidal neurons. *Science, 316*, 758–761.
- Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. *Journal of Neuroscience, 18*(10), 3870–3896.
- Shea-Brown, E., Josic, K., de la Rocha, J., & Doiron, B. (2008). Correlation and synchrony transfer in integrate-and-fire neurons: Basic properties and consequences for coding. *Physical Review Letters, 100*, 108102.
- Song, S., Sjostrom, P. J., Reigl, M., Nelson, S., & Chklovskii, D. (2005). Highly nonrandom features of synaptic connectivity in local cortical circuits. *Public Library of Science, Biology, 3*(3), 0507–0519.
- Sporns, O. (2003). Network analysis, complexity and brain function. *Complexity, 8*(1), 56–60.
- Sporns, O., & Zwi, D. Z. (2004). The small world of the cerebral cortex. *Neuroinformatics, 2*, 145–162.
- Stepanyants, A., Hirsch, J., Martinez, L. M., Kisvarday, Z. F., Ferecsko, A. S., & Chklovskii, D. B. (2007). Local potential connectivity in cat primary visual cortex. *Cerebral Cortex, 18*(1), 13–28.
- Strogatz, S. H. (2001). Exploring complex networks. *Nature, 410*, 268–276.
- Tchumatchenko, T., Malyshev, A., Geisel, T., Volgushev, M., & Wolf, F. (2008). Correlations and synchrony in threshold neuron models. http://arxiv.org/pdf/0810.2901.
- Tetzlaff, T., Rotter, S., Stark, E., Abeles, M., Aertsen, A., & Diesmann, M. (2007). Dependence of neuronal correlations on filter characteristics and marginal spike-train statistics. *Neural Computation, 20*, 2133–2184.
- Timme, M. (2007). Revealing network connectivity from response dynamics. *Physical Review Letters, 98*, 224101.
- Timme, M., Wolf, F., & Geisel, T. (2002). Coexistence of regular and irregular dynamics in complex networks of pulse-coupled oscillators. *Physical Review Letters, 89*(25), 258701.
- Vaadia, E., Haalman, I., Abeles, M., Bergman, H., Prut, Y., Slovin, H., & Aertsen, A. (1995). Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. *Nature, 373*(6514), 515–518.
- van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. *Science, 274*, 1724–1726.
- van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. *Neural Computation, 10*, 1321–1371.
- Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of small-world networks. *Nature, 393*, 440–442.
- Yoshimura, Y., & Callaway, E. (2005). Fine-scale specificity of cortical networks depends on inhibitory cell type and connectivity. *Nature Neuroscience, 8*(11), 1552–1559.
- Yoshimura, Y., Dantzker, J., & Callaway, E. (2005). Excitatory cortical neurons form fine-scale functional networks. *Nature, 433*(24), 868–873.
- Zohary, E., Shadlen, M. N., & Newsome, W. T. (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. *Nature, 370*, 140–143.