Transitions between asynchronous and synchronous states: a theory of correlations in small neural circuits

The study of correlations in neural circuits of different size, from the small size of cortical microcolumns to the large-scale organization of distributed networks studied with functional imaging, is a topic of central importance to systems neuroscience. However, a theory that explains how the parameters of mesoscopic networks composed of a few tens of neurons affect the underlying correlation structure is still missing. Here we consider a theory that can be applied to networks of arbitrary size with multiple populations of homogeneous fully-connected neurons, and we focus its analysis on the case of two populations of small size. We combine the analysis of local bifurcations of the dynamics of these networks with the analytical calculation of their cross-correlations. We study the correlation structure in different regimes, showing that a variation of the external stimuli causes the network to switch from asynchronous states, characterized by weak correlation and low variability, to synchronous states, characterized by strong correlations and wide temporal fluctuations. We show that asynchronous states are generated by strong stimuli, while synchronous states occur through critical slowing down when the stimulus moves the network close to a local bifurcation. In particular, strongly positive correlations occur at the saddle-node and Andronov-Hopf bifurcations of the network, while strongly negative correlations occur when the network undergoes a spontaneous symmetry-breaking at the branching-point bifurcations. These results show how the correlation structure of firing-rate network models is strongly modulated by the external stimuli, even when the anatomical connections are kept fixed. These results also suggest an effective mechanism through which biological networks may dynamically modulate the encoding and integration of sensory information.
Electronic supplementary material The online version of this article (10.1007/s10827-017-0667-3) contains supplementary material, which is available to authorized users.

To conclude the description of the multi-population model, we specify the covariance structure of the white noise, where Σ^B_{αβ} is defined as in Eq. (4) of the main text.

S2 Jacobian Matrix
From Eq. (S1) we get that the full Jacobian matrix of the network is J = [J_{αβ}]_{∀(α,β)}, where: In Eq. (S2), μ_α are the stationary membrane potentials obtained from Eq. (1) of the main text when the network has constant input I_α and the noise is absent (σ^B_i = 0 ∀i). In other words, μ_α are the solutions of the following system of non-linear algebraic equations: Eq. (S3) must be solved numerically, or by the asymptotic perturbative expansion that we introduced in the Supplementary Materials of [1].
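In practice, the fixed-point system (S3) is handed to a root finder. The following is a minimal numerical sketch for P = 2, with a generic sigmoidal activation and hypothetical values for the couplings, inputs and activation parameters (none of them taken from the main text; the stationarity equation below is only schematic for Eq. (S3)):

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical parameters for P = 2 (one excitatory, one inhibitory population)
J = np.array([[6.0, -4.0],
              [5.0, -3.0]])        # effective couplings J_ab (assumed values)
I_ext = np.array([1.5, 0.5])       # constant external inputs I_a
nu_max, slope, VT = 1.0, 2.0, 1.0  # activation parameters (assumed)

def A(V):
    # generic sigmoidal activation saturating at nu_max
    return nu_max / (1.0 + np.exp(-slope * (V - VT)))

def stationarity(mu):
    # schematic form of Eq. (S3): mu_a - sum_b J_ab A(mu_b) - I_a = 0
    return mu - J @ A(mu) - I_ext

mu = fsolve(stationarity, x0=np.zeros(2))
print(np.allclose(stationarity(mu), 0.0, atol=1e-8))
```

The perturbative expansion of [1] can serve as an initial guess `x0` when the root finder struggles near bifurcation points.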

S3 Eigenvalues and Eigenvectors
For every α = 0, . . . , P − 1, it is trivial to prove that the Jacobian matrix (S2) has N_α − 1 eigenvalues of the form: with corresponding eigenvectors: The term 1 − N_α is the (n_{α−1} + i)th entry of the vector v^P_{α;i}, where: with n_{−1} := 0. In Eq. (S5), the (n_{α−1} + j)th entries of v^P_{α;i} (with j = 0, . . . , N_α − 1 and j ≠ i) are equal to 1, while all the remaining entries are equal to 0. Now we prove that the remaining P eigenvalues of J, which we call λ^R_α, are those of the P × P matrix J^R = [J^R_{αβ}]_{∀(α,β)} (the "reduced" Jacobian matrix of the network), where: and the corresponding eigenvectors v^R_α are: The eigenvalue equation for J can be rewritten as follows: Therefore, if we define: for β = 0, . . . , P − 1, and if we replace the terms J_{kp} with the corresponding values given by Eq. (S2), the system (S9) can be rewritten as follows: Eq. (S11) implies that (v^R_α)_{k_γ} does not depend on the index k_γ for fixed γ, therefore from Eq. (S10) we get: Thus Eq. (S11) can be rewritten as follows: This is a homogeneous system in the unknowns (v^R_α)_β, where J^R is given by Eq. (S7). Therefore the system has a non-trivial solution if and only if det(J^R − λ^R_α Id_P) = 0, which defines the characteristic polynomial of the reduced Jacobian matrix. The eigenvalues λ^R_α are the roots of this polynomial, with corresponding eigenvectors v^R_α. Note that the rank of the matrix J^R − λ^R_α Id_P determines the number of free components f of v^R_α, through the relation f = P − rank(J^R − λ^R_α Id_P) (rank-nullity theorem), with 0 < rank(J^R − λ^R_α Id_P) < P.
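The spectral structure derived above — N_α − 1 "transverse" eigenvalues per population plus the P eigenvalues of the reduced Jacobian J^R — can be checked numerically. The sketch below builds a block Jacobian of the form J_{αβ} = c_{αβ} 1_{N_α×N_β} + δ_{αβ} d_α Id, where c_{αβ} and d_α are hypothetical constants standing in for the entries of Eq. (S2), and compares its full spectrum with the prediction:

```python
import numpy as np

# Hypothetical sizes and block parameters for P = 2
sizes = [4, 3]                      # N_0, N_1
d = [-1.5, -0.7]                    # diagonal (leak-like) term per population
c = np.array([[0.2, -0.4],
              [0.3, -0.1]])         # uniform within-block coupling c_ab

# Full Jacobian: block (a,b) is c_ab * ones, plus d_a * I on the diagonal blocks
J = np.block([[c[a, b] * np.ones((sizes[a], sizes[b]))
               + (d[a] * np.eye(sizes[a]) if a == b else 0)
               for b in range(2)] for a in range(2)])

# Reduced Jacobian: J^R_ab = d_a * delta_ab + c_ab * N_b  (cf. Eq. (S7))
JR = np.diag(d) + c * np.array(sizes)

full = np.sort_complex(np.linalg.eigvals(J))
# Predicted spectrum: d_a with multiplicity N_a - 1, plus the eigenvalues of J^R
pred = np.sort_complex(np.concatenate(
    [np.repeat(d[a], sizes[a] - 1) for a in range(2)]
    + [np.linalg.eigvals(JR)]))
print(np.allclose(full, pred))      # → True
```

The zero-sum ("transverse") eigenvectors annihilate the rank-one blocks, which is why the d_α survive unchanged, while the population-uniform directions evolve under J^R.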

Example for P = 2
For simplicity, in the case of one excitatory and one inhibitory population discussed in the main text, we denote the eigenvalues λ^P_{0,1} by λ_{E,I}, and the corresponding eigenvectors v^P_{0,1} by v_{E,I}. Then, according to Eq. (S4), we get: with multiplicity N_E − 1 and N_I − 1 respectively, while from Eq. (S5) we obtain: The remaining eigenvalues of J are those of the following reduced Jacobian matrix: namely: where: (S16) We observe that rank(J^R − λ^R_α Id_P) = 1, therefore the eigenvectors of J^R corresponding to the eigenvalues λ^R_{0,1} are: where: Moreover, x, y are two parameters that represent the free components of v^R_{0,1} (one for each eigenvector, according to the relation f = P − rank(J^R − λ^R_α Id_P) = 1). Therefore the corresponding eigenvectors of J are: where the first N_E entries of v^R_{0,1} in Eq. (S19) are equal to 1 and thus K_{0,1} are the remaining N_I entries.

S4 Fundamental Matrix
When J is diagonalizable, its powers can be written as J^n = P D^n P^{−1}, where: while P is an N × N matrix whose columns are the eigenvectors of J calculated in Sec. (S3). If we introduce the matrices: such that J^n = [Ψ_{αβ;n}]_{∀(α,β)}, and we replace Eq. (S20) and the matrix P (expressed in terms of the eigenvectors of J) into the formula J^n P = P D^n, after some algebra we get the following set of equations: for γ = 0, . . . , P − 1. Now if we define: the first equation of the system (S21) becomes: In matrix form the system reads A x_{α;n} = b_{α;n}, where: In Eq. (S23), P^R is a P × P matrix whose columns are the eigenvectors of J^R, namely: Therefore by inverting the system A x_{α;n} = b_{α;n} we obtain: where: We observe that the matrix can be written as an outer product of vectors, namely: therefore it has rank one. This result will prove very important in demonstrating the formation of critical slowing down near the saddle-node bifurcations (see SubSec. (S6.2.1)). Moreover, from the second equation of (S21) and from the definition (S22), we get: Therefore we can rewrite Eq. (S20) as follows: where: For the sake of clarity, we implemented this method in Online Resource 2 for an arbitrary number of populations. Furthermore, below we show the explicit calculation of the fundamental matrix in the case P = 2.
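Since the fundamental matrix is built from the eigendecomposition of J, it can be cross-checked numerically against a direct matrix exponential. The sketch below uses the same hypothetical block Jacobian as before (assumed sizes and couplings, not taken from the main text):

```python
import numpy as np
from scipy.linalg import expm

# Two homogeneous populations (hypothetical sizes and couplings)
sizes, d = [3, 2], [-1.0, -0.8]
c = np.array([[0.3, -0.6],
              [0.4, -0.2]])
J = np.block([[c[a, b] * np.ones((sizes[a], sizes[b]))
               + (d[a] * np.eye(sizes[a]) if a == b else 0)
               for b in range(2)] for a in range(2)])

# Fundamental matrix at time t via diagonalization: Phi(t) = P e^{Dt} P^{-1}
t = 0.7
lam, P = np.linalg.eig(J)
Phi = (P * np.exp(lam * t)) @ np.linalg.inv(P)

# Cross-check against a direct matrix exponential
print(np.allclose(Phi, expm(J * t)))  # → True
```

Here `P * np.exp(lam * t)` scales each eigenvector column by the corresponding e^{λt}, which is exactly the P e^{Dt} factor of the diagonalized form.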
Example for P = 2
In the case of two neural populations, from Eq. (S17) we get P^R = [1, 1; K_0, K_1] and therefore (P^R)^{−T} = (1/(K_1 − K_0)) [K_1, −K_0; −1, 1]. Thus, according to Eqs. (S24) and (S27), the blocks of the fundamental matrix are given by the following formulas:

S5 Constraints for the Covariance Matrix of the Noise
In order to be a genuine covariance matrix, Σ^B must be positive-semidefinite. Being symmetric, Σ^B is positive-semidefinite if and only if its eigenvalues are non-negative. Since Σ^B has the same block structure as the Jacobian matrix, we can easily calculate its eigenvalues by adapting the results of Sec. (S3). For example, in the case P = 2 the matrix Σ^B has to satisfy the following set of inequalities in order to be positive-semidefinite: The importance of this constraint is highlighted by the following relation: Indeed, from this equality we see that if Σ^B were not positive-semidefinite, the linear combination would have negative variance for some coefficients x_i.
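The positive-semidefiniteness check is straightforward to automate. Below is a minimal sketch for a block-structured noise covariance with hypothetical variances and within/between-population covariances (the values are for illustration only, not those of the main text); the second choice of cross-population covariance violates the constraint:

```python
import numpy as np

def is_valid_noise_covariance(Sigma_B, tol=1e-12):
    # Sigma_B is a genuine covariance matrix iff it is symmetric
    # positive-semidefinite, i.e. all its eigenvalues are non-negative
    if not np.allclose(Sigma_B, Sigma_B.T):
        return False
    return bool(np.all(np.linalg.eigvalsh(Sigma_B) >= -tol))

def block_cov(sizes, s2, C):
    # hypothetical 2-population block covariance: variance s2_a on the
    # diagonal, covariance C[a][b] within/between populations elsewhere
    S = np.block([[C[a][b] * np.ones((sizes[a], sizes[b]))
                   for b in range(2)] for a in range(2)])
    np.fill_diagonal(S, np.repeat(s2, sizes))
    return S

S_ok = block_cov([3, 2], [1.0, 1.0], [[0.2, 0.1], [0.1, 0.3]])
S_bad = block_cov([3, 2], [1.0, 1.0], [[0.2, 0.9], [0.9, 0.3]])
print(is_valid_noise_covariance(S_ok), is_valid_noise_covariance(S_bad))  # → True False
```

The second matrix fails because the cross-population covariance 0.9 pushes one eigenvalue of the reduced 2 × 2 problem below zero, exactly the situation the inequalities above rule out.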

S6 Correlation Structure
Now we have all the ingredients for studying the phenomena that affect the correlation structure of the network. In particular, in this section we prove that, depending on the parameters of the network, the neural activity spans from strong anti-correlation (i.e. Corr(V_i(t), V_j(t)) → 1/(1 − N_γ) for some inhibitory population γ) to strongly positive correlation (Corr(V_i(t), V_j(t)) → 1), passing through arbitrarily weak correlation (Corr(V_i(t), V_j(t)) → 0). We consider the weak-correlation regime in SubSec. (S6.1), while we study the strong-correlation regime in SubSec. (S6.2) in relation to the local bifurcations of the network. In both regimes, we analyze the cross-correlation for an arbitrary number of populations, extending the results of the case P = 2 that we developed in the main text (see Eqs. (12) and (13)) without the need of explicit formulas. Indeed, we show that it is possible to evaluate the qualitative behavior of the covariance matrix of the membrane potentials Σ^V even without the explicit expressions of the coefficients C^{(γ)}.
However, having an infinite size is not the only way a network can experience low levels of correlation. As explained in [11], for strongly depolarizing or strongly hyperpolarizing stimuli the activation function saturates to ν_max or 0 respectively (see Eq. (2) of the main text), therefore A′(V) → 0. Given a network with an arbitrary number of populations of arbitrary size, if the stimulus is strong enough for all the populations, the Jacobian matrix of the network (see Eq. (S2)) becomes diagonal. In other words, the neurons become effectively disconnected. For this reason the correlation structure of the membrane potentials is determined only by the Brownian motions, namely Corr(V_i(t), V_j(t)) → C^B_{αβ} for every pair of neurons i, j in the populations α, β, respectively. Therefore, given arbitrary (i.e. non-specifically tuned) network parameters, the necessary requirement to generate weak correlation between neurons is the absence of noise correlations, namely C^B_{αβ} = 0 ∀α, β. This mechanism is different from that described in [10], since it occurs for |I_α| → ∞ rather than N_α → ∞, and it allows the network to create decorrelated activity even if its size is small.
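This decorrelation-by-saturation mechanism can be illustrated with the stationary covariance of a linearized network, dV = J V dt + σ dW, which solves the Lyapunov equation J Σ + Σ Jᵀ + σ² Id = 0. In the sketch below (hypothetical sizes and couplings, uncorrelated noise), a coupled Jacobian produces nonzero correlations, while the saturated regime A′(μ) → 0, modeled by dropping the coupling blocks, leaves the identity correlation matrix:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_corr(J, sigma2=1.0):
    # stationary covariance of dV = J V dt + sigma dW solves
    # J S + S J^T + sigma^2 Id = 0 (continuous Lyapunov equation)
    S = solve_continuous_lyapunov(J, -sigma2 * np.eye(J.shape[0]))
    sd = np.sqrt(np.diag(S))
    return S / np.outer(sd, sd)

sizes, d = [3, 2], [-1.0, -0.8]           # hypothetical sizes and leaks
c = np.array([[0.3, -0.6], [0.4, -0.2]])  # hypothetical couplings
J_coupled = np.block([[c[a, b] * np.ones((sizes[a], sizes[b]))
                       + (d[a] * np.eye(sizes[a]) if a == b else 0)
                       for b in range(2)] for a in range(2)])
# Saturated regime: A'(mu) -> 0, so the coupling blocks vanish
J_saturated = np.diag(np.repeat(d, sizes))

C_coupled = stationary_corr(J_coupled)
C_saturated = stationary_corr(J_saturated)
print(abs(C_coupled[0, 1]) > 0.01, np.allclose(C_saturated, np.eye(5)))  # → True True
```

With C^B_{αβ} = 0, every residual correlation in the saturated regime disappears, matching the claim that strong stimuli effectively disconnect the neurons.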

S6.2 Synchronous Regime
Interestingly, the network can also exhibit high values of correlation, either positive or negative. This phenomenon occurs at the (local) bifurcation points of the network, and therefore for special combinations of the network's parameters. As we explained in the main text, we consider only the codimension-one bifurcations of the network, namely the saddle-node, Andronov-Hopf and branching-point bifurcations. Since we want to prove that critical slowing down is due to the synaptic connections and that it occurs regardless of the presence of correlation between the Brownian motions, it suffices to study the correlation structure near the bifurcation points in the case C^B_{αβ} = 0 ∀α, β. In this case, Eq. (6) of the main text reduces to: The relation between the bifurcations and the correlation structure of the network is described in the next subsections by means of Eq. (S29). The analysis can be extended to the case of correlated noise by using Eq. (6) of the main text, but this does not add anything new to the results (compare with the case P = 2 considered in the main text).

S6.2.1 Saddle-Node Bifurcations
The multi-population neural network undergoes saddle-node bifurcations when one of the eigenvalues of the reduced Jacobian matrix tends to zero [1]. Therefore, if λ^R_γ → 0⁻ for a given γ, and all the other eigenvalues have negative real part, then for t ≫ 1/|λ^R_γ| Eq. (S27) gives: According to Eq. (S29), this implies: where p(k) represents the neural population the kth neuron belongs to. Now, since the variance grows as 1/|λ^R_γ|, the amplitude of the fluctuations diverges for λ^R_γ → 0⁻; therefore, for p(i) = p(j), according to Pearson's formula Corr(V_i(t), V_j(t)) ≈ 1. On the other side, if p(i) ≠ p(j), by replacing Eq. (S30) within the condition Cov²(V_i(t), V_j(t)) = Var(V_i(t)) Var(V_j(t)), after some algebra we get: This equality is satisfied if and only if: namely if and only if the matrix C^{(γ)} has rank 1, which is indeed the case (see Sec. (S4)). This proves the emergence of strong correlations also between the neurons of different populations when the network is close to a saddle-node bifurcation.
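The divergence of the fluctuations and the convergence of the pairwise correlation to one can be reproduced in a minimal linear sketch (not the paper's full model): a single homogeneous population whose reduced eigenvalue λ^R = −ε is tuned toward zero, with unit-variance uncorrelated noise. The stationary correlation between any two neurons then approaches 1 as ε → 0⁻:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def pairwise_corr_near_saddle_node(eps, N=4):
    # One homogeneous population: J = c*ones + d*I with d = -1 and
    # c = (1 - eps)/N, so the reduced (uniform-mode) eigenvalue is
    # lambda_R = d + c*N = -eps -> 0^-, while the N-1 transverse
    # eigenvalues stay at -1
    c, d = (1.0 - eps) / N, -1.0
    J = c * np.ones((N, N)) + d * np.eye(N)
    S = solve_continuous_lyapunov(J, -np.eye(N))  # J S + S J^T = -Id
    return S[0, 1] / S[0, 0]

print(round(pairwise_corr_near_saddle_node(0.5), 3),
      round(pairwise_corr_near_saddle_node(0.001), 3))  # → 0.2 0.996
```

In this symmetric case the correlation is (1 − ε)/(1 + (N − 1)ε): the slow uniform mode accumulates variance like 1/(2ε) and dominates every pair, which is the critical-slowing-down mechanism described above.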

S6.2.2 Andronov-Hopf Bifurcations
The multi-population neural network undergoes an Andronov-Hopf bifurcation whenever the reduced Jacobian matrix has two complex-conjugate purely imaginary eigenvalues [1]. If the real part of a pair of complex-conjugate eigenvalues λ^R_{γ,δ} tends to zero (i.e. Re(λ^R_γ) → 0⁻), and if all the other eigenvalues have negative real part, then for t ≫ 1/|Re(λ^R_γ)| Eq. (S27) gives: where λ^R_δ = λ̄^R_γ and the overline represents the complex-conjugate operator. According to Eq. (S29), after some algebra we get: thus, similarly to the case of the saddle-node bifurcations, for t ≫ 1/|Re(λ^R_γ)| the amplitude of the fluctuations diverges. On the other side, if p(i) ≠ p(j), from Eq. (S31) it is possible to prove that the condition Cov²(V_i(t), V_j(t)) = Var(V_i(t)) Var(V_j(t)) is satisfied if and only if Im(C^{(γ)}_{p(i),β}) = Im(C^{(γ)}_{p(j),β}) = 0 ∀β. However, these imaginary parts cannot be equal to zero at an Andronov-Hopf bifurcation, the latter being defined by a pair of purely imaginary eigenvalues. Therefore, contrary to the saddle-node bifurcations, the cross-correlation between neurons in different populations does not tend to one at an Andronov-Hopf bifurcation. This is also confirmed by the top-right panel of Fig. (4) in the main text, in the special case P = 2.
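The contrast with the saddle-node case can again be illustrated with a linear Lyapunov sketch (hypothetical couplings, not the paper's model): two populations of two neurons whose reduced Jacobian is tuned to eigenvalues −ε ± i, just below a Hopf bifurcation. Within-population pairs, which share the slow oscillatory mode, become strongly correlated, while cross-population pairs do not approach correlation one:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two populations of 2 neurons each; c_ab is chosen (hypothetical values)
# so that the reduced Jacobian has eigenvalues -eps +/- i
eps, sizes, d = 0.01, [2, 2], [-1.0, -1.0]
JR_target = np.array([[-eps, -1.0],
                      [1.0, -eps]])
c = (JR_target - np.diag(d)) / np.array(sizes)  # c_ab = (JR_ab - d_a delta_ab)/N_b
J = np.block([[c[a, b] * np.ones((sizes[a], sizes[b]))
               + (d[a] * np.eye(sizes[a]) if a == b else 0)
               for b in range(2)] for a in range(2)])

S = solve_continuous_lyapunov(J, -np.eye(4))    # J S + S J^T = -Id
corr = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
# within-population pairs are strongly correlated near the bifurcation,
# but cross-population pairs are not driven to correlation one
print(corr[0, 1] > 0.9, abs(corr[0, 2]) < 0.5)  # → True True
```

The rotational (imaginary) part of the slow mode keeps the two populations a quarter-cycle apart on average, which is the linear analogue of the nonzero imaginary parts Im(C^{(γ)}) in the argument above.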

S6.2.3 Branching-Point Bifurcations
Branching-point bifurcations occur whenever λ^P_γ → 0⁻ for a given γ [1]. According to Eq. (S4), only sufficiently strong self-inhibitory synaptic weights J_γγ give rise to this kind of bifurcation. As we explained in [1], branching points do not occur in the mean-field approximation of the network, therefore the study of correlations near these bifurcations is one of the most interesting results of our article.
If λ^P_γ → 0⁻ and all the other eigenvalues have negative real part, for t ≫ 1/|λ^P_γ| Eq. (S27) gives: where O_{N_γ,N_β} is the N_γ × N_β null matrix. According to Eq. (S29), if p(i) = p(j) = γ we get a covariance dominated by the term (1 − e^{−2|λ^P_γ|t}) / (2|λ^P_γ|) (σ^B)², thus for t ≫ 1/|λ^P_γ| the amplitude of the fluctuations diverges for λ^P_γ → 0⁻, while Corr(V_i(t), V_j(t)) ≈ 1/(1 − N_γ). Interestingly, according to [11] this is the lower bound of the correlation between fully-connected neurons in a homogeneous population of size N_γ. Since 1/(1 − N_γ) < 0 for N_γ ≥ 2, the neurons in population γ are maximally anti-correlated at the branching-point bifurcations. Moreover, according to this formula, correlation tends to −1 only for N_γ = 2 (which is confirmed by the top-right panel of Fig. (4) in the main text, in the special case P = 2).
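The anti-correlation bound 1/(1 − N_γ) also emerges in a minimal linear sketch (again not the paper's full model): a single inhibitory population whose transverse eigenvalue λ^P = −ε is tuned toward zero while the uniform mode stays well inside the stable region. The stationary pairwise correlation then approaches 1/(1 − N):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def pairwise_corr_near_branching_point(eps, N=4, c=-0.5):
    # One homogeneous population with inhibitory coupling c < 0:
    # the N-1 transverse eigenvalues are lambda_P = -eps -> 0^-,
    # while the uniform mode -eps + c*N stays strongly stable
    J = c * np.ones((N, N)) - eps * np.eye(N)
    S = solve_continuous_lyapunov(J, -np.eye(N))  # J S + S J^T = -Id
    return S[0, 1] / S[0, 0]

# correlation approaches the lower bound 1/(1 - N) = -1/3 for N = 4
print(round(pairwise_corr_near_branching_point(0.001), 3))  # → -0.333
```

Here the variance accumulates in the zero-sum (transverse) directions, which by construction have pairwise correlation −1/(N − 1): this is the maximal anti-correlation attainable in a homogeneous population, consistent with the bound of [11].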
To conclude, if p(i) = p(j) ≠ γ, the matrix Φ_{p(i),p(i)}(t) is not dominated by the term e^{−|λ^P_γ|t}, therefore we observe neither the divergence of the variance nor strong correlations in population p(i). This also prevents the formation of strong inter-population correlations between γ and p(i). In the case P = 2 this phenomenon occurs for the excitatory neurons, since γ = I and p(i) = E, and indeed this result is confirmed by the top panels of Fig. (4) in the main text.