Introduction

The brain can be represented as a network of connected elements at different spatial scales, from individual neurons to macroscopic, functionally specialized structures1,2,3,4,5,6,7,8,9. Interestingly, neuroimaging data, like those obtained with functional Magnetic Resonance Imaging (fMRI) techniques, naturally lend themselves to a network representation, thus attracting the interest of both graph theorists and network scientists towards the study of the topological properties of brain connectivity structures10. Indeed, correlations between fMRI signals arising from responses to stimuli or from spontaneous fluctuations in the brain resting state can be interpreted as a measure of functional connectivity between remote brain regions and represented as edges in a graph. Moreover, alterations in the strength and structure of functional connectivity networks have been observed in groups of patients suffering from several brain diseases, including Alzheimer's disease, autism and schizophrenia, thus providing potential markers of neuropsychiatric illness1,11,12,13,14,15.

Of particular interest is the study of the modular structure of these networks, i.e. the presence of clusters of nodes that are more tightly connected among themselves than with nodes in other network substructures1,2,16,17,18. A modular structure has been observed for different types of brain networks (functional and structural) and in different species, including humans, other primates and rodents11,17,18,19. Functional connectivity networks derived from fMRI experiments in human subjects exhibit a hierarchical structure of modules-within-modules3,4. It has been suggested that hierarchical modularity may confer important evolutionary and adaptive advantages on the human brain by providing intermediate modules that can respond to evolutionary or environmental pressure without jeopardizing the function of the entire system20. A similar hierarchical organization has been observed in other species, e.g. non-human primates. Whether a hierarchical modular structure is also present in simpler networks, like those derived from lower species (such as the nematode C. elegans), is a subject under investigation21,22. Here, we investigate the modular structure of the mouse brain and its hierarchical organization using a graph-theoretical approach. To the best of our knowledge, this is the first time such an approach has been applied to the mouse brain.

Percolation analysis, a tool derived from statistical physics, provides a powerful means to investigate the hierarchical organization of networks23,24. This approach is based on the assessment of the fragmentation of a network as weaker edges are gradually removed from the graph. A striking demonstration of this hierarchical organization is the presence of multiple percolation thresholds18, whereby the disaggregation of modules occurs abruptly at critical values of the control parameter, pc. By contrast, application of this analysis to Erdős–Rényi random graphs23,24,25 shows a single threshold value, separating two phases characterized by different topological features. Below the threshold (i.e. for p < pc) several tree-like components are observed, whose size is of the order of ln N, with N the total number of nodes. Above the threshold (i.e. for p > pc), instead, a single giant component appears, whose structure admits cycles and from which tree-like structures (whose size is again of the order of ln N) are excluded23,25.
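As an aside for readers unfamiliar with this behaviour, the following short sketch (not part of the original analysis; it assumes the networkx library) illustrates the single threshold of a G(N, p) Erdős–Rényi graph, whose giant component appears abruptly once p exceeds 1/N:

```python
import networkx as nx

# Illustrative only: track the size of the largest connected component of a
# G(N, p) random graph as p crosses the percolation threshold p_c = 1/N.
N = 1000
for p in [0.0005, 0.001, 0.002, 0.005]:
    G = nx.erdos_renyi_graph(N, p, seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"p = {p:.4f}  largest component size = {len(giant)}")
```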

Here we have analyzed functional connectivity networks constructed from a large resting state fMRI dataset from mice. In particular, a set of anatomical regions of interest is identified (see SI for the complete list) and the corresponding activity (whence the name “functional”) is recorded, in order to obtain the related set of time series; afterwards, the (temporal) correlation for each pair of areas is calculated through the Pearson coefficient (see SI for the analytical details). The mouse brain is thus mapped into a correlation matrix: such a representation provides information on brain activity irrespective of the physical connections that may be present between the considered areas. Specifically, in order to assess the presence of multiple percolation thresholds (i.e. the structural signature of a hierarchical modular structure in the mouse brain), we have applied standard percolation analysis and variations thereof. Importantly, we have applied novel approaches to avoid some of the pitfalls that may affect more conventional analyses of functional connectivity networks. Indeed, it will be shown that traditional percolation detects a modular structure even in random networks, thus making it necessary to introduce a null model in order to correctly assess the statistical significance of the percolation analysis. Here we introduce a novel null model, independent of the choice of a particular threshold and resting exclusively on the information encoded in the correlation matrix. Moreover, we propose the use of an algorithm to calculate the correlation matrix closest to a given symmetric matrix, thus ensuring that the proposed null model has the defining features of a proper correlation matrix.

We have complemented our percolation analysis by computing the Minimal Spanning Forest (MSF). Although the MSF is not, by itself, a community detection technique, it represents a faster alternative for the identification of modules, defined by the strength of the functional relations between nodes. The modules identified by the MSF can be, in turn, linked to obtain the Minimal Spanning Tree (MST), which provides the “backbone” of the mouse brain functional connectivity.

Although the literature provides many examples of algorithms for detecting communities in networks (such as those based on modularity maximization26 or on the removal of edges characterized by high betweenness27), we have adopted a different approach, since existing procedures, besides being tailored to binary (and not fully connected) networks, suffer from a number of limitations, as discussed in ref. 5. Our case study, on the other hand, lies at the opposite extreme, since we deal with correlation matrices, which are weighted and fully connected. For this reason, we have adopted the approach presented in this paper, based on the combination of percolation and spanning forest techniques. We explicitly point out that an algorithm for community detection on correlation matrices has recently been proposed28: the comparison with the latter has confirmed the consistency of our results, which appear to reveal the finer details of the modules recognized by that algorithm.

These methodological developments make it possible to assess the presence of a hierarchically-organized modular structure in the mouse brain, both at the level of population and of individual subjects.

Results

Average correlation matrix

We first focus on the average correlation matrix, obtained by back-transforming the sample mean (i.e. over all individuals) of the Fisher-transformed pair-specific correlation coefficients. More specifically, we proceeded by steps: we first calculated, for each mouse in our sample, the Pearson coefficient Cij between the time series Xi and Xj of each pair of ROIs i and j. This quantity represents the simplest choice to detect pairwise similarities between signals: it ranges between −1 and 1, with −1 describing perfectly anti-correlated signals and 1 describing perfectly correlated signals. A value of 0 would indicate that no (linear) correlation has been detected.

However, the expected variance of the Pearson coefficient decreases as the magnitude of the correlation coefficient increases. Thus, a data transformation ensuring that the variance of Cij is disassociated from its mean is desirable29. For this reason, we Fisher-transformed each coefficient, obtaining the matrix whose generic entry reads zij = arctanh(Cij), and averaged these entries over all subjects in order to obtain a unique matrix; as a last step, we back-transformed the latter entry-wise via the hyperbolic tangent, recovering an average correlation matrix. Figure 1 shows this average correlation matrix, whose rows and columns have been reordered according to the adopted dissimilarity measure (see Fig. 1).
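As a minimal sketch of this averaging step (the function and variable names, such as average_correlation and corr_stack, are ours and purely illustrative), assuming the per-subject Pearson matrices are stacked in a NumPy array:

```python
import numpy as np

def average_correlation(corr_stack):
    """Fisher-transform a stack of per-subject correlation matrices
    (shape: n_subjects x n_roi x n_roi), average in z-space and back-transform."""
    z = np.arctanh(np.clip(corr_stack, -0.999999, 0.999999))  # z_ij = arctanh(C_ij)
    z_mean = z.mean(axis=0)                                   # average over subjects
    avg = np.tanh(z_mean)                                     # back-transform
    np.fill_diagonal(avg, 1.0)                                # restore unit diagonal
    return avg

# For a single subject, the Pearson matrix can be obtained from the ROI time
# series (n_roi x n_timepoints) as C = np.corrcoef(roi_timeseries).
```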

Figure 1

Dendrogram and correlation matrix for the average brain, induced by the adopted dissimilarity measure computed from the correlation matrix averaged over the single-subject entries constituting our sample.

The algorithm we have adopted proceeds by computing, at each step, the minimum dissimilarity between pairs of areas and clustering them together. In other words, clusters are grouped according to the minimum intercluster dissimilarity, a linkage rule also known as “single-linkage” clustering30. The same algorithm can be used to generate the corresponding dendrogram.
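A possible SciPy sketch of this single-linkage step is given below; the dissimilarity d_ij = 1 − C_ij used here is an illustrative stand-in for the measure actually adopted in the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def single_linkage_order(avg_corr):
    """Single-linkage clustering of the ROIs of an average correlation matrix;
    returns the linkage matrix and the leaf order used to reorder rows/columns."""
    d = 1.0 - avg_corr                            # illustrative dissimilarity
    np.fill_diagonal(d, 0.0)
    Z = linkage(squareform(d, checks=False), method='single')  # minimum inter-cluster dissimilarity
    return Z, leaves_list(Z)                      # Z also feeds the dendrogram
```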

While negative correlations are pronounced in subject-wise matrices (see SI file), they tend to be averaged out in the population-wise matrix, whose entries are all positive. This is due to two main reasons: 1) negative correlations always represent a minority within the set of individual Pearson coefficients; 2) such values span a range which varies markedly across single subjects. As a consequence of the larger inter-subject variability of negative correlations with respect to the positive ones, the nested structure of the average matrix is far less pronounced than for the single individuals; nonetheless, nested red square-shaped patterns along the diagonal are still clearly visible.

The distribution of edge values of the average matrix (calculated as stated above) is shown in Fig. 2, alongside the normal distribution whose mean and variance have been estimated via maximum likelihood. The deviation of the distribution of experimentally determined correlations from the normal distribution is larger for this matrix than for the individual ones (see the SI file for the comparison with individual matrices).

Figure 2

Empirical cumulative distribution function (CDF) of the correlations observed in the average brain (blue trend) and CDF of a Gaussian distribution whose mean and standard deviation have been estimated via maximum likelihood (red trend), i.e. as the sample mean and sample standard deviation of the observed correlation values.

Percolation analysis

The results of the classical and modified percolation analyses are shown in Fig. 3.

Figure 3

Comparison between the usual percolation analysis (upper panel) and our modified percolation analysis (bottom panel) run on the average brain (red trend), on a randomized version of it, retaining the same empirical distribution of correlations (brown trend) and on the ensemble-averaged matrix (green trend).

While the usual percolation analysis detects a hierarchical modular structure even on the null model, thus making it difficult to assess the statistical significance of the observed patterns, our modified percolation analysis enables discrimination between the real and the random cases.

Our method identifies multiple steps as the threshold increases, corresponding to the stable partitions of the network1,18 highlighted in Fig. 4. The plateaus indicate the presence of connections whose removal does not affect the number of connected components, i.e. links that are not critical in determining the structure of functional correlations.

Figure 4

Each group of areas detected by our percolation analysis at a given correlation value is composed of several sub-modules, whose presence is revealed by raising the threshold value.

A clear example is provided by the blue area detected for rth = 0.51, comprising the anterio-dorsal hippocampus, the dentate gyrus and the posterior dentate gyrus - i.e. areas 3, 4, 19, 20, 35, 36. Upon raising the threshold to rth = 0.52, two subgroups appear, composed respectively of the right and left parts - i.e. 3, 19, 35 and 4, 20, 36 - of the aforementioned areas (highlighted in blue and purple). Further raising the threshold to rth = 0.6, the two subgroups reveal a core structure defined by the pairs 19, 35 and 20, 36. This finding confirms the hierarchical character of the mouse brain modular structure. See also the map of the neuroanatomical ROI in the SI.

Figure 4 shows that each connected group of areas detected at a given correlation value is composed of many nested modules, whose hierarchical organization emerges from the application of higher thresholds. Two main groups of areas can be clearly identified (colored in blue and green in Fig. 5). The first group (colored in green) includes the cingulate cortex, the motor cortex, the medial prefrontal cortex and the primary somatosensory cortex1,11,13. The second group (colored in blue) comprises areas 3, 4, 19, 20, 35 and 36 (i.e. the anterio-dorsal hippocampus, the dentate gyrus and the posterior dentate gyrus), all parts of the hippocampal formation.

Figure 5

In order to assess the statistical significance of the results of our modified percolation analysis, a test is needed.

The left panel shows the test statistic we have chosen: the slope of the percolation plot of both the average brain (red trend) and of a randomized version of it, retaining the same empirical distribution of correlations (brown trend). The right panel shows the ensemble distribution of our test statistic; the red point represents the (statistically significant) observed value of the latter.

Upon raising the threshold to rth = 0.52, sub-areas appear: for example, the hippocampus splits into its right and left parts - i.e. 3, 19, 35 and 4, 20, 36 (highlighted in blue and purple); further raising the threshold to rth = 0.6, the two latter subgroups reveal a core structure defined by the pairs 19, 35 and 20, 36. An analogous result is found for the sensory system, confirming the hierarchical character of the mouse brain modular structure.

Interestingly, the percolation curve for the null model shows a remarkably different trend. Indeed, drawing a matrix (whose distribution of correlations coincides with the observed one for the average mouse) from our ensemble and repeating our percolation analysis leads to a single, sharper transition with essentially no plateaus. This indicates that raising the threshold leads to the sequential disconnection of individual nodes, one after the other. This supports the idea that the hierarchical structure observed for the brain connectivity network is genuine, since the stepwise behavior does not emerge in a null model with a similar distribution of correlations (the same conclusion holds for the individual-wise matrices as well).

While this is reassuring, a statistical test is needed to quantify the significance of the experimental trend with respect to the null hypothesis. Our choice of test stems from the observation that the experimental trend is less steep than the one obtained from the null model. For this reason, the test statistic we have computed is the steepness of the experimental trend, measured between the two points corresponding to the correlation values at which we detect 2 and 53 communities, respectively (we have deliberately excluded the trivial partitions represented by the whole brain and by the single areas/nodes). As we expect each randomized version of our network to be characterized by a different value of the steepness of the percolation curve, we generated many randomized configurations of the original matrix and calculated the ensemble distribution of the steepness values. As shown in Fig. 5, the observed value of our test statistic (i.e. the actual steepness) is not compatible with our null model, lying well outside the 95% confidence interval.
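A schematic implementation of this test is sketched below, assuming the percolation curve (thresholds and number of detected components) has already been computed as in the Methods; the names and the exact definition of the steepness are illustrative and may differ in detail from those used in the paper:

```python
import numpy as np

def slope_statistic(thresholds, n_components, k_low=2, k_high=53):
    """Steepness of the percolation curve between the thresholds at which
    k_low and k_high connected components are first detected."""
    r_low = thresholds[np.argmax(n_components >= k_low)]
    r_high = thresholds[np.argmax(n_components >= k_high)]
    return (k_high - k_low) / (r_high - r_low)

def empirical_pvalue(observed, null_slopes):
    """One-sided p-value: the experimental curve is expected to be LESS steep
    than the randomized ones, so we count null slopes at or below the observed."""
    null_slopes = np.asarray(null_slopes)
    return (np.sum(null_slopes <= observed) + 1) / (null_slopes.size + 1)
```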

On the other hand, as is evident from Fig. 3, classical percolation, in which only the size of the largest component is monitored, detects multiple thresholds both in the experimental network and in the network generated according to our null model. Although this finding provides significant evidence of the structural differences between the observed average brain and, for example, an Erdős–Rényi-like graph, it also implies that, in this “classical” version of percolation, the presence of multiple thresholds is not, by itself, proof of a hierarchical modular structure. Indeed, the presence of steps in the null model as well suggests that the dynamics of the giant component is (at least partially) encoded in the distribution of correlations; this is no longer true when the remaining components are also considered, implying that one of the genuine signatures of the brain's self-organization lies in their dynamics.

Minimal Spanning Forest

The MSF algorithm is defined by two simple steps: a) the observed correlations are sorted in decreasing order; b) starting from the largest observed correlation, a link is drawn between the corresponding brain areas. This is done sequentially, with the limitation that any new connection must link at least one previously isolated area.

At each step of the MSF algorithm, either a previously isolated area is assigned to an existing group or two previously isolated areas are linked together. In this way, “communities” remain naturally defined by the strength of their internal correlations, while redundant connections are discarded. In particular, two different communities are eventually connected by edges whose correlation value is smaller than that of all the links within either community. Although the MSF is not, by itself, a community detection technique, it provides a means to hierarchically order modules based on the strength of their internal edges. Such modules are tree-shaped and provide information on the structural importance of each area (e.g. its betweenness centrality).
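The MSF construction described above can be sketched in a few lines of NumPy (minimal_spanning_forest is an illustrative name; ties between equal correlations are broken arbitrarily, as the text leaves them unspecified):

```python
import numpy as np

def minimal_spanning_forest(corr):
    """Scan the correlations in decreasing order and accept an edge only if at
    least one of its endpoints is still isolated, as described in the text."""
    n = corr.shape[0]
    i_idx, j_idx = np.triu_indices(n, k=1)
    order = np.argsort(corr[i_idx, j_idx])[::-1]   # strongest correlations first
    in_forest = np.zeros(n, dtype=bool)
    edges = []
    for k in order:
        i, j = i_idx[k], j_idx[k]
        if not (in_forest[i] and in_forest[j]):    # at least one isolated endpoint
            edges.append((i, j, corr[i, j]))
            in_forest[i] = in_forest[j] = True
    return edges                                   # a forest: no cycles can arise
```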

The MSF of the average brain is shown in Fig. 6. Our analysis reveals the presence of both inter- and intra-hemispheric modules. The module with the strongest internal connectivity is the medial prefrontal cortex, consistent with the finding that this bilateral structure persists in the percolation analysis at high values of the threshold. The second and third modules in the MSF ranking are the right and left hippocampal formations. Interestingly, larger, inter-hemispheric modules, like the one comprising the frontal and orbitofrontal cortices, the caudate putamen and the amygdala, are characterized by more numerous, but weaker, links. Altogether, the MSF structure reflects the hierarchical organization of connectivity modules revealed by our percolation analysis.

Figure 6

Result of the MSF algorithm mapped into the average mouse brain areas.

The algorithm works by first sorting the observed correlations in decreasing order and then linking pairs of areas sequentially, with the only limitation that each new link must connect at least one previously disconnected area. Colors correspond to the average correlation value of the links defining each tree composing the forest. See also the map of the neuroanatomical ROI in the SI.

Remarkably, the insular cortex and the secondary somatosensory cortices are found within the same tree, thus showing that the reciprocal structural connectivity among these areas results in a consistent pattern of functional connectivity, which has recently been described also using voxelwise community detection approaches11. Similarly, the thalamus is found to be strongly linked to the bed nucleus of the stria terminalis, consistent with the reciprocal neuroanatomical links connecting these regions31. Interestingly, our MSF reveals a strong functional connection between the visual cortex and the retrosplenial cortex (i.e. between areas 43, 44, 53 and 54), the latter an area recognized as fundamental in tasks like orientation, head movement and the processing of visual cues32. As a last example, the MSF suggests a role for the temporal association cortex (i.e. 49, 50) in the coordination of sensory stimuli33, receiving inputs from the auditory and rhinal cortices (i.e. 7, 8 and 41, 42).

Once the MSF has been built, we can use the remaining correlations in the list to build the Minimal Spanning Tree (MST), shown in Fig. 7. As for the forest, only one limitation applies: any newly added link must connect a pair of previously disconnected trees (which thereafter become part of the same tree). Naturally, the links between trees are weaker than the links within trees, and the MSF can be recovered by removing the weakest links.
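A minimal union-find sketch of this completion step is shown below, assuming forest_edges is the list of (i, j, correlation) triples produced by the MSF construction sketched earlier:

```python
import numpy as np

def complete_to_spanning_tree(corr, forest_edges):
    """Kruskal-like completion of the MSF: remaining correlations are scanned in
    decreasing order and an edge is kept only if it joins two distinct trees."""
    n = corr.shape[0]
    parent = list(range(n))

    def find(x):                                   # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j, _ in forest_edges:                   # the MSF trees seed the components
        parent[find(i)] = find(j)

    i_idx, j_idx = np.triu_indices(n, k=1)
    order = np.argsort(corr[i_idx, j_idx])[::-1]
    tree_edges = list(forest_edges)
    for k in order:
        i, j = i_idx[k], j_idx[k]
        ri, rj = find(i), find(j)
        if ri != rj:                               # joins two previously disconnected trees
            parent[ri] = rj
            tree_edges.append((i, j, corr[i, j]))
    return tree_edges                              # n - 1 edges spanning all areas
```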

Figure 7

MST of our average mouse brain.

The MST has been built by connecting the trees of the MSF, with the only limitation that any newly-added link must connect a pair of previously-disconnected trees: a consequence of the MST algorithm is that the correlations within the trees are, on average, higher than the correlations between the trees. The MST also allows us to distinguish between connector and provincial areas. See also the map of the neuroanatomical ROI in the SI.

The information provided by the MSF can thus be complemented by the information provided by the MST, which gives a clear picture of the mouse brain connectivity skeleton. In particular, the structural role of each area becomes evident and a classification into connector versus provincial areas becomes possible. Among the most prominent examples of the former are the posterio-ventral hippocampus (i.e. 38), whose physical centrality is recovered as a functional centrality; the parietal association cortex (i.e. 33), which connects all the sensory areas (i.e. the rhinal, auditory and visual ones); and the orbitofrontal cortex (i.e. 31), whose physical connections are mirrored by a high degree of functional (inter)connectivity (e.g. it connects the thalamus and the frontal association cortex).
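As an illustration of how this classification could be made quantitative, the sketch below ranks areas by their betweenness centrality on the MST (via networkx); this is a plausible criterion consistent with the discussion above, not the exact procedure used in the paper:

```python
import networkx as nx

def rank_by_centrality(tree_edges, n_areas):
    """Betweenness centrality on the MST: high values suggest 'connector' areas,
    while leaf-like nodes with low values behave as 'provincial' areas."""
    T = nx.Graph()
    T.add_nodes_from(range(n_areas))
    T.add_weighted_edges_from(tree_edges)          # (i, j, correlation) triples
    bc = nx.betweenness_centrality(T)              # topological betweenness on the tree
    return sorted(bc.items(), key=lambda kv: kv[1], reverse=True)
```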

Discussion

In this paper we have presented the results of a network theory-based analysis of a large mouse fMRI dataset, aimed at assessing the hierarchical modular structure of resting state functional connectivity networks in this species. In order to overcome the limitations of currently available techniques, we propose a modified percolation analysis that retains the information on all the connected components of a given network. Our variation of the percolation analysis takes into account negative correlations and does not require the application of a threshold to binarize the connectivity networks.

Our technique, straightforwardly applicable to experimental correlation matrices, reveals a hierarchically organized modular structure that does not appear in a null model defined by constraining the distribution of the observed correlations. Notably, conventional percolation analysis shows the presence of multiple percolation thresholds also in the null model, thus suggesting that results based on the giant connected component alone may be misleading.

Our percolation analysis represents a generalization of the classical one. Indeed, while each step of the classical percolation analysis can always be mapped into a step of our method, the reverse is not true, since the detachment of a newly disconnected module from a secondary component would be completely missed by the classical analysis.

We have also computed the Minimal Spanning Forest (MSF) and the Minimal Spanning Tree (MST) for our population-wise mouse brain. The MSF represents a faster alternative to the usual community detection techniques, since it identifies modules on the basis of the strength of their internal correlations. The MSF reveals both intra- and inter-hemispheric modules, with small, tightly coupled modules alongside larger subnetworks characterized by weaker internal links. The MST, on the other hand, enables the classification of connector and provincial areas.

Our results indicate that the tools provided by network theory yield additional, non-trivial information on the topology of functional connectivity networks in the mouse brain. This work can be straightforwardly extended to the study of the human brain.

Both methods proposed here show a hierarchy of modules in the organization of functional connectivity networks in the mouse brain. While this is, strictly speaking, topological modularity, i.e. organization resulting from the topological distribution of edges, it is remarkable that it reflects functional and anatomical features. By way of example, the bilateral homotopic correspondence of modules detected by percolation analysis and MST is not a given, since no symmetry constraint is imposed in these analyses. Interestingly, descending the hierarchical ladder we found that frontal modules, like the medial prefrontal cortex, the orbitofrontal cortex and the frontal associative cortices, are always part of a symmetrical module comprising both left and right counterparts. On the other hand, structures like the more posterior cortices and the hippocampus appear to have weaker interhemispheric connectivity and are split into separate left and right modules at lower levels of the modular hierarchy. Hence, the hierarchical modularity reveals a distribution in the relative strength of inter- and intra-hemispheric connectivity across the brain.

A hierarchically modular architecture, as we have demonstrated in the mouse brain, is thought to be advantageous from a functional and neurocomputational point of view. Indeed, hierarchically modular architectures exhibit more stable and diverse computational dynamics34. Moreover, hierarchical networks have been shown to be stable under large-scale reconnection of substructures35 and efficient in terms of wiring costs36. While similar findings have been reported for the human brain, our results demonstrate that a hierarchical organization is also found in the functional connectivity architecture of the mouse brain. Hence, a hierarchical organization appears to be a feature of the mammalian brain that is conserved across species.

Finally, our results may have important implications for the study of models of brain disease. Indeed, the mouse genome can be manipulated using modern transgenic technology to generate models of human brain disease, a potentially precious tool to understand the neurobiological and genetic basis of human brain disorders. By way of example, alterations in the modular organization of brain functional connectivity have been observed in schizophrenia patients37 but the etiology of this complex disorder is still largely unknown. The demonstration that a hierarchically modular organization is present in the mouse paves the way to translational investigations in transgenic models that may help unveil the biological basis of aberrant functional connectivity.

Methods

All experiments were conducted in accordance with the Italian law (DL 116, 1992 Ministero della Sanità, Roma) and the recommendations in the “Guide for the Care and Use of Laboratory Animals” of the National Institutes of Health. Animal research protocols were also reviewed and approved by the animal care committee of the Istituto Italiano di Tecnologia (permit 07-2012). All surgical procedures were performed under anesthesia.

Data acquisition and data pre-processing

The dataset used for this analysis has been reported in recent work11,38, where experimental details are extensively described. In short, MRI experiments were performed on male 20–24 week old C57BL/6J (B6) mice (n = 41, Charles River, Como, Italy). Mice were anaesthetised with isoflurane (5% induction), intubated and artificially ventilated under 2% isoflurane maintenance anesthesia. All experiments were performed with a 7.0 T MRI scanner (Bruker Biospin, Milan) using an echo planar imaging (EPI) sequence with the following parameters: TR/TE 1200/15 ms, flip angle 30 degrees, matrix 100 × 100, field of view 2 × 2 cm2, 24 coronal slices, slice thickness 0.50 mm, 300 volumes and a total rsfMRI acquisition time of 6 minutes.

The mouse brain was parcellated into 54 macro-regions (27 per hemisphere), described in the SI. Resting state fMRI signals from individual image voxels were averaged across each region of interest (ROI) to generate 54 time series of approximately 300 s duration. The 54 time series were pairwise correlated by calculating the Pearson coefficient and organized into a 54 × 54 symmetric matrix describing the resting-state connectivity network of each mouse.

Image preprocessing was carried out using tools from FMRIB Software Library (FSL, v5.0.639,40) and AFNI (v2011_12_21_101441). RsfMRI time series were despiked (AFNI/3dDespike), corrected for motion (AFNI/3dvolreg) and spatially normalised to an in-house C57Bl/6J mouse brain template42 (FSL/FLIRT, 12 degrees of freedom). The normalised data had a spatial resolution of 0.2 × 0.2 × 0.5 mm3 (99 × 99 × 24 matrix). Head motion traces and mean ventricular signal (averaged fMRI time course within a manually-drawn ventricle mask) were regressed out of each of the timeseries (AFNI/3dDeconvolve). To assess the effect of global signal removal, separate rsfMRI time series with the whole-brain average time course regressed out were also generated. All rsfMRI time series were spatially smoothed (AFNI/3dmerge, Gaussian kernel of full width at half maximum of 0.5 mm) and band-pass filtered to a frequency window of 0.01–0.08 Hz (AFNI/3dBandpass)42.

In order to create an average adjacency matrix describing brain functional connectivity at the population level, subject-wise matrices were first Fisher-transformed, averaged across subjects and then back-transformed (see the SI file for the analytical details).

Percolation analysis

The percolation analysis proposed by Makse et al.18 includes the following steps: a) a threshold parameter p, ranging between 0 and 1 (and thus interpretable as a probability), is chosen; b) the links corresponding to the correlations below the threshold are removed and the size of the giant component C (i.e. the largest connected component) is computed; c) the parameter p is varied and C is evaluated for different thresholds.

This procedure ignores the complex evolution of the structure of the whole network, which is not captured by the giant component only. This becomes a relevant issue when the classical percolation is applied to small networks, i.e. to networks for which no giant component is clearly distinguishable: in this case the signal provided by this kind of analysis may be rather noisy, thus misrepresenting the modular structure of the brain at the global level.

In order to overcome this drawback, we propose a variation of the percolation analysis along the following lines: a) all experimentally determined correlation coefficients are listed in increasing order; b) starting from the lowest value, each entry in the list is chosen as a threshold; c) all the links corresponding to the correlations below the threshold are removed; d) the number of connected components characterizing the remaining part of the network is computed.
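A compact sketch of steps a)–d), assuming the correlation matrix is a NumPy array and using networkx to count connected components, could read as follows:

```python
import numpy as np
import networkx as nx

def modified_percolation(corr):
    """For every observed correlation used as a threshold, remove all weaker
    links and record the number of connected components of what remains."""
    n = corr.shape[0]
    i_idx, j_idx = np.triu_indices(n, k=1)
    thresholds = np.sort(corr[i_idx, j_idx])       # step a): increasing order
    n_components = []
    for t in thresholds:                           # steps b)-d)
        A = np.where(corr >= t, 1, 0)              # keep links at or above the threshold
        np.fill_diagonal(A, 0)
        n_components.append(nx.number_connected_components(nx.from_numpy_array(A)))
    return thresholds, np.array(n_components)
```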

Besides providing a much more precise picture of the dynamics of the brain at the global level, our variation of the percolation analysis is also more robust, since our signal results from the fragmentation of many different components at the same time and is thus less prone to the statistical noise which, instead, accompanies the fragmentation of the giant component only.

Moreover, while each step of the classical percolation analysis is always mappable into a step of our method, the reverse is not true: the detection of a newly disconnected module from a secondary component would be missed by the classical percolation analysis (which focuses on the giant component only).

A statistical benchmark for mice brains

In order to determine to what extent the stepwise structure highlighted by the percolation analysis is significant, we need to compare the results with a proper statistical benchmark. In other words, in order to understand whether the “stepwise behavior” is a mere consequence of lower-order constraints or a genuine sign of self-organization, we need to define a proper null model.

As a first step, we have calculated the empirical probability distributions of the entries of the correlation matrix characterizing each subject in our sample and fitted them with normal distributions, whose means and standard deviations were estimated via maximum likelihood. We have also repeated this analysis for the average mouse, i.e. the brain functional connectivity at the population level. In this specific case, the mean and the standard deviation are precisely the sample mean and the sample standard deviation of the set of observed correlation values. In all cases, the distributions of the elements of the correlation matrices appear to be well behaved and nearly Gaussian (see Figs 2 and 3 in the SI file).

Secondly, we have generated an ensemble of “null brains” by drawing correlations from the corresponding normal distributions. In particular, we have compared the observed percolation trend (red trace in Fig. 3) with the result of the percolation analysis run on both the ensemble-averaged matrix (green trace in Fig. 3) and on a specific realization drawn from the ensemble (brown trace in Fig. 3). The null model for the average brain is thus defined by the normal distribution shown in Fig. 2, with the aforementioned parameters.

However, this procedure does not guarantee that true correlation matrices, which must be positive semi-definite, are obtained: in fact, although the synthetic matrices can be chosen to be symmetric and with unit elements on the main diagonal, they may still have negative eigenvalues. This problem can be solved by implementing the procedure illustrated in ref. 43, where a fast algorithm for computing the nearest correlation matrix to a given symmetric matrix is described. The last step of our method consists in the implementation of this procedure.
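These last two steps can be sketched as follows: a symmetric matrix with normally distributed off-diagonal entries is drawn and then projected onto the nearest correlation matrix by a simplified alternating-projections scheme in the spirit of ref. 43; the code below is a minimal illustration and omits the refinements of the original algorithm:

```python
import numpy as np

def nearest_correlation(A, max_iter=100, tol=1e-8):
    """Alternate projections onto the positive semi-definite cone and onto the
    set of unit-diagonal symmetric matrices (simplified Higham-style scheme)."""
    Y, dS = A.copy(), np.zeros_like(A)
    for _ in range(max_iter):
        R = Y - dS                                 # Dykstra's correction
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = (V * np.clip(w, 0, None)) @ V.T        # projection onto PSD matrices
        dS = X - R
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1.0)               # projection onto unit diagonal
        if np.linalg.norm(Y_new - Y) < tol * max(np.linalg.norm(Y), 1.0):
            return Y_new
        Y = Y_new
    return Y

def null_brain(mu, sigma, n, seed=0):
    """Draw a symmetric 'null brain' with N(mu, sigma) off-diagonal entries and
    enforce the defining properties of a correlation matrix."""
    rng = np.random.default_rng(seed)
    M = rng.normal(mu, sigma, size=(n, n))
    M = (M + M.T) / 2                              # symmetrize
    np.fill_diagonal(M, 1.0)                       # unit diagonal
    return nearest_correlation(M)
```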

Additional Information

How to cite this article: Bardella, G. et al. Hierarchical organization of functional connectivity in the mouse brain: a complex network approach. Sci. Rep. 6, 32060; doi: 10.1038/srep32060 (2016).