1 Introduction: Stability and Resilience – “Robustness”

Complex systems are found everywhere – from scheduling to traffic, food to climate, economics to ecology, the brain, and the universe. Complex systems typically have many elements, many modes of interconnectedness of those elements, and often exhibit sensitivity to initial conditions. Complex systems by their nature are generally unpredictable and can be highly unstable.

However, most highly connected complex systems are actually quite stable and resistant to disruption from minor changes in parameters [1]. This concept originated with Bernard and Cannon (homeostasis) and is now a central hypothesis of “robustness” in theoretical biology [2,3,4,5]. A more contemporary review by Demongeot and Demetrius [6] captures the relationship between the concepts of robustness, entropy, and complexity:

The hypothesis that a positive correlation exists between the complexity of a biological system, as described by its connectance, and its stability, as measured by its ability to recover from disturbance, derives from the investigations of the physiologists, Bernard and Cannon, and the ecologist Elton. Studies based on the ergodic theory of dynamical systems and the theory of large deviations have furnished an analytic support for this hypothesis. Complexity in this context is described by the mathematical object evolutionary entropy; stability is characterized by the rate at which the system returns to its stable conditions (steady state or periodic attractor) after a random perturbation, i.e., its robustness. This article reviews the analytical basis of the entropy-robustness theorem and invokes studies of genetic regulatory networks to provide empirical support for the correlation between complexity and stability. Earlier investigations based on numerical studies of random matrix models and the notion of local stability have led to the claim that complex ecosystems tend to be more dynamically fragile. This article elucidates the basis for this claim, which is largely inconsistent with the empirical observations of Bernard, Cannon and Elton. Our analysis thus resolves a long-standing controversy regarding the relation between complex biological systems and their capacity to recover from perturbations. The entropy-robustness principle is a mathematical proposition with implications for understanding the basis for the large variances in stability observed in biological systems having evolved under different environmental conditions.

Stability is characterized by measures of whether perturbing initial conditions or default parameters results in permanent deviation from baseline behavior vs a tendency to return toward initial conditions (e.g., a basin of attraction) [7,8,9]. Biological robustness is the ability of a system to recover after being disturbed from its natural healthy equilibrium into deleterious regions of its parameter space [7, 10,11,12,13]. Mathematical analyses in nonlinear dynamical systems theory provide a theoretical foundation for robustness theory [14,15,16]. Since the advent of control theory, engineers have emphasized negative feedback as a restoring mechanism [17,18,19,20]. Much more empirical work at many biological systems levels is needed to marry theory and practice [13, 21,22,23,24,25,26].

On the practical side, by way of example, the medical and neuroscience literature has many examples where thresholds, numbers of neurons, synapses, locations, branching, conductivities, capacitances, impedances, time-varying dynamics, and so forth are inaccurate, unknown, or differ across publications. Often, sensitivity analysis is critical to understanding the underlying dynamics, and the system must compensate for sensitivity to maintain stability and robustness [27]. In this regard, the complexity of neural circuitry systems may compensate for large error bars in parameter accuracy [28,29,30,31].

For practical and theoretical interest, we sought to examine stability and robustness vs complexity in neural circuitry parameter spaces using Monte Carlo simulation.

2 Methods

2.1 Node Parameters at Several Systems Levels of Granularity

Using the universal neural circuitry simulation (UNCuS) software [32,33,34], we specified a subset of ten neuron parameters for sample spaces composed of different numbers of populations, groups, and cells (neurons). To globally sample the combined parameter space, we used Monte Carlo methods (Table 1) [27]. Out of a space of ~5 × 10^11 parameter configurations, we sampled 5 × 10^3 combinations. UNCuS and the Monte Carlo sampling program were written by us in C++ (Microsoft, Redmond, WA, USA) and Java (Oracle Corp., Redwood City, CA, USA). UNCuS was designed specifically to perform computationally efficient neural circuitry simulations for large as well as small circuits. The program integrates essential behavior at the underlying systems level of ion channel conductance and dendritic tree emission current and scales well to projects of the current size [32, 35].

Table 1 Neural circuit topologies varying the total number of nodes at several systems levels of granularity in the model

The number of populations ranged from two to nine, and each population always had just two groups of neurons, one excitatory and one inhibitory. Groups within a population were never connected to each other but were always connected to all other groups in the other populations, with the same density. What was varied in the configurations was the number of (1) cells per group and (2) neuron parameters of the cells. Figure 1 shows screenshots from UNCuS of example configurations.

Fig. 1

Four sample project configurations from universal neural circuitry simulation software UNCuS. Neuron populations (circles) contain numbered groups and connections between groups identified as excitatory (blue) or inhibitory (green). Each neuron in each group connected to 40% of the neurons it targeted. The total number of synapses can be thought of abstractly as the total nodes in the complex system

We created 50 separate project configurations, each with identical numbers of cells per group within the configuration. Initially we looked at the extremes of the parameter space: 7 configurations with 2 populations and 7 configurations with 9 populations, totaling 14 configurations. Then we looked at the intermediate numbers of populations and groups using 6 configurations each with 3, 4, 5, 6, 7, and 8 populations, totaling 36 configurations.

To set the varying number of cells in each population, the program randomly selected a number of cells from the set {5, 10, …, 145, 150}.

The connection density from source to target group was fixed at 40%; for example, from a source group of any size to a target group of, say, 100 neurons, each source neuron connected to 40 of the target group’s neurons. Groups never projected back to themselves or to the other group in their own population, but always to target groups in other populations (Fig. 1); hence, in calculating the total number of connections, we subtract one from the total number of populations in each case.

At each target cell, the axon synapsed on two of ten electrotonic compartments in the middle of the dendrite, specifically compartments four and five. See Limitations below for the reasoning behind the choice of fixing connection strength at 40% and the dendritic compartments at two in the middle of the proximal-distal range.
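To make the resulting combinatorics concrete, the following minimal sketch (ours, not the UNCuS source; the function and variable names are illustrative) computes the element counts implied by the rules above: two groups per population, 40% connection density, projections only to groups in other populations, and two synapses (compartments four and five) per projection.

```python
def element_counts(n_pops: int, cells_per_group: int,
                   density: float = 0.40, synapses_per_projection: int = 2):
    """Total cells, axonal projections, and synapses for one configuration."""
    n_groups = 2 * n_pops                       # one excitatory, one inhibitory group per population
    total_cells = n_groups * cells_per_group
    target_groups = 2 * (n_pops - 1)            # every group outside the source population
    fan_out = round(density * cells_per_group)  # each source cell contacts 40% of each target group
    projections = total_cells * target_groups * fan_out
    synapses = synapses_per_projection * projections  # compartments 4 and 5
    return total_cells, projections, synapses
```

At the extremes used in this study, the sketch reproduces the synapse totals quoted in the Discussion: element_counts(2, 5) gives 160 synapses, and element_counts(9, 150) gives 5,184,000 (~5.18 × 10^6).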

2.2 Neuron Cell Parameters

Just as the number of cells in each group was randomly selected for each project case, so too the sampling program randomly selected a set of neuron cell parameters. In UNCuS, there are 12 parameters that characterize a neuron type. We excluded refractory time and spike width, leaving 10 cell parameters to select from. The parameters, their default baseline values, and the value ranges subjected to random sampling are shown in Table 2. All cells in all groups in each case used the same random selection of cell parameters. For the equations governing cell behavior, see Arle et al. [32, 33].

Table 2 Neuron parameters sampled via Monte Carlo simulation
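To make the sampling procedure concrete, here is an illustrative sketch of one Monte Carlo draw; this is not the actual sampling program, and the parameter name and range shown are placeholders standing in for the ten Table 2 entries.

```python
import random

# Placeholder standing in for the ten Table 2 parameters (name and range are hypothetical).
PARAM_RANGES = {
    "firing_threshold_mV": (-60.0, -40.0),
    # ... the remaining nine parameters and their Table 2 ranges ...
}

def sample_configuration(rng: random.Random):
    """One Monte Carlo sample: a cell count plus a full set of cell parameters."""
    cells_per_group = rng.choice(range(5, 151, 5))  # the set {5, 10, ..., 150}
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
    return cells_per_group, params  # all cells in all groups of the case share these parameters
```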

In previous projects, we usually drove a circuit with specific stimuli on specific cell groups such that their initial firing rate corresponded to what was reported in the literature. Here we instead raised background noise to 1.7 nA, a level we found sufficient to activate all circuits.

A plot of a typical spiking neuron’s membrane voltage over time is shown in Fig. 2.

Fig. 2

Typical membrane potential over time of a neuron in the simulations. The transient period of the first 100 ms is discarded in our calculations. To more clearly show the spiking activity, we plot the middle 600 ms of activity between 200 ms and 800 ms

2.2.1 Dynamic Adjustment of Input Amplitude

The program dynamically condensed the input amplitude to each cell from all incoming excitatory or inhibitory post-synaptic potentials (EPSPs, IPSPs) to a normalized range between −5 and +5 nA. The reasoning here is that actual neurons have a range beyond which further excitatory or inhibitory input has no greater effect on the cell. Moreover, in modeling and simulations, there is often a range of a parameter for which one has validated the behavior of the simulated entity, while, beyond that validated range, behavior is unknown and may be unpredictable. Essentially, the method was to calculate the maximum possible input current for a given project case based on UNCuS’ dendritic emission current look-up table, the known dendritic compartments (#4 and #5), and the known number of inputs (40%, e.g., 40 per 100 cells), and then, at each time step, multiply each cell’s input by the factor required to condense the scale to −5 to +5 nA.
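A hedged sketch of this normalization, assuming the per-case maximum input has already been derived from the emission-current look-up table (the function and argument names are ours, not the UNCuS API):

```python
import numpy as np

def condense_inputs(psp_sums_nA: np.ndarray, max_abs_input_nA: float,
                    limit_nA: float = 5.0) -> np.ndarray:
    """Scale each cell's summed EPSP/IPSP current into the validated [-5, +5] nA range.

    max_abs_input_nA is the largest possible summed input for the project case,
    precomputed from the dendritic emission-current table, the two target
    compartments (4 and 5), and the known 40% fan-in.
    """
    scale = limit_nA / max_abs_input_nA  # one fixed factor per project case
    return psp_sums_nA * scale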

2.3 Simulation Duration, Time Step, and Calculation of Firing Rates

All simulations were run for 1000 ms in 0.25 ms time steps, which we have found to give stable cell behavior, often after an initial transient period. We defined a “momentary firing rate” by moving a window of 100 ms, time step by time step, through the 1000 ms, taking the total number of spikes in each interval, and dividing that number by the interval length, 100 ms, to give spikes/second. This method discards the first 100 ms, which, as said above, often involved a transient period transitioning to steady state behavior.

The program recorded the spikes for each neuron, took the momentary firing rate, and averaged over all cells in a group to give a group firing rate, which underlies the data shown in Results.
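As a sketch, assuming spikes are stored as a boolean cell-by-time-step array (our representation, not necessarily that of UNCuS), the group momentary firing rate can be computed as follows:

```python
import numpy as np

DT_MS, WINDOW_MS = 0.25, 100.0  # simulation time step and sliding window length

def group_momentary_rate(spikes: np.ndarray) -> np.ndarray:
    """spikes: boolean array of shape (n_cells, n_steps) over the 1000 ms run.

    Returns the group-averaged firing rate (spikes/s) at every window position;
    the first window ends at 100 ms, so the initial transient is discarded.
    """
    win = int(WINDOW_MS / DT_MS)               # 400 time steps per 100 ms window
    cum = np.cumsum(spikes, axis=1, dtype=float)
    per_window = cum[:, win:] - cum[:, :-win]  # spike count in each 100 ms window
    rates = per_window / (WINDOW_MS / 1000.0)  # counts per window -> spikes/s
    return rates.mean(axis=0)                  # average over all cells in the group
```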

2.4 Definition of “Robustness” via Coefficient of Variance (CV)

We used the coefficient of variance (aka “variation”) (CV) as a metric for “robustness,” where the standard deviation was divided by the firing rate of the baseline neuron parameters instead of by the mean of the Monte Carlo sampled data:

$$ CV=\frac{1}{F{R}_{base}}\sqrt{\sum \limits_{i=1}^n\frac{{\left(F{R}_i-F{R}_{base}\right)}^2}{n-1}} $$
(1)

where n = 50, the number of population-group configurations (each with a random set of 10 cell parameters), FRi is the firing rate of a given configuration, and FRbase is the firing rate with the baseline neuron parameters. Thus, the baseline parameters, shown in Table 2, established a firing rate against which the deviation of all Monte Carlo sampled configurations was checked.
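Equation 1 translates directly into code; a minimal sketch:

```python
import numpy as np

def robustness_cv(fr_samples: np.ndarray, fr_base: float) -> float:
    """Eq. 1: spread of sampled firing rates about the baseline rate,
    normalized by the baseline rate rather than by the sample mean."""
    n = fr_samples.size  # n = 50 configurations in this study
    return np.sqrt(np.sum((fr_samples - fr_base) ** 2) / (n - 1)) / fr_base
```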

2.5 Definition of “Robustness” via an Adapted Lyapunov Exponent

To give a second perspective on how the level of complexity affects the deviation of the neural system from its baseline performance, we adapted a standard formula for the Lyapunov exponent in the same way as the definition of CV. Thus, the baseline neuron cell firing rate was used as the initial condition against which deviations over time were compared in calculating the Lyapunov exponent (LE) [35, 36]:

$$ \lambda =\lim_{s\to \infty }\frac{1}{s}\sum \limits_{k=0}^{s-1}\ln \left|\frac{F{R}_k}{F{R}_{base,k}}\right| $$
(2)

where k indexes time steps, s is the total number of time steps, FRk is the momentary firing rate of the sampled parameter point at time step k, and FRbase,k is the baseline firing rate at time step k. When the sampled configuration firing rate is equal to that of the baseline, there is no divergence and LE = 0.
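Over a finite run, Eq. 2 reduces to the mean log-ratio of momentary firing rates across time steps; a minimal sketch (the handling of zero firing rates is our assumption, not specified in the text):

```python
import numpy as np

def adapted_lyapunov(fr: np.ndarray, fr_base: np.ndarray) -> float:
    """Eq. 2 over a finite run: (1/s) * sum over k of ln|FR_k / FR_base,k|.

    Time steps where either rate is zero are excluded to keep the
    logarithm finite, an implementation choice of this sketch.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.abs(fr / fr_base)
    valid = np.isfinite(ratio) & (ratio > 0)
    return float(np.mean(np.log(ratio[valid])))
```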

2.6 Cumulative Firing Rate vs Momentary Firing Rate

We calculated the cumulative firing rate (CFR) as a global reflection of the circuitry in time, as distinct from the momentary firing rate. The CFR at time t is defined as the total number of spikes up to t divided by t, giving the spike rate in spikes/second.
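A minimal sketch of the CFR, assuming the circuit’s total spike count per time step is available:

```python
import numpy as np

DT_S = 0.00025  # 0.25 ms time step

def cumulative_firing_rate(spikes_per_step: np.ndarray) -> np.ndarray:
    """CFR(t): total number of spikes up to time t divided by elapsed time t."""
    t = np.arange(1, spikes_per_step.size + 1) * DT_S  # elapsed time in seconds
    return np.cumsum(spikes_per_step) / t              # spikes/s at each step
```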

2.7 Limitations

There are many other ways to vary neural circuit parameters than those we chose to explore, notably the many different topologies of circuits connected so as to perform a specific behavior. Of those, we did not, for instance, explore circuits that must be stable to perform their function, such as clocks and rhythm generators, or that evolved to become emergently stable, such as the various networks revealed by functional connectivity analyses [37]. The space of connection strengths could also be explored more fully. Our past experience building small- and large-scale connectome models [34, 38, 39] and preliminary results in this study showed that, at the connection strength extremes, very low connection density and/or few, distal dendritic compartment connections gave no activity, while very high density and/or many, proximal dendritic compartment connections gave hyperactivity. On the other hand, one can build circuits with, e.g., high connection density in some parts of the circuit if they are regulated by other parts of the circuit; an effectively infinite number of such specific circuits are possible.

Other neural circuitry simulation software exists, and the variety is evolving rapidly, driven by rapidly increasing interest in building connectome models (see Carlson et al. elsewhere in this book [24]).

3 Results

In general, in non-quiescent circuits, the momentary firing rate, which characterizes the system’s short-term behavior in the time domain, was never constant but dynamic. For the robustness measures, the behavior of excitatory and inhibitory groups was similar, so only results for excitatory groups are shown below.

3.1 Sample Time Course of Firing Rate of Two Population-Group Configurations

Figure 3 allows us to visualize the variance of relatively simple vs relatively complex neural circuits over time before we apply metrics to get a more precise understanding. The figure plots the time course of firing rates in the two extreme population-group configurations; on the left, 2 populations with 5 cells per group are shown, and on the right, 9 populations with 150 cells per group. In each configuration, each curve is a unique set of ten neuron parameter values. Note that the two outlier parameter configurations easily seen in the simpler model are also present in the complex model but are to some extent “tamed” by the higher connectedness and number of circuit elements. One effect is that in more complex systems, errant random inputs can cancel each other.

Fig. 3

Momentary firing rate over 1 second in a simple vs complex neural circuit. Left: 2 interconnected populations with just 5 cells in each group (there are always 2 groups per population in all configurations). Right: 9 interconnected populations with 150 cells/group. In each configuration, the firing rates of 50 different sets of neural parameters are plotted over 1 second (after a 100 ms transient period), with the baseline parameter case shown in black. Firing rates in the more complex system are higher than in the simple one due to recurrent input from a higher number of cells. The outermost curve in the left case implies that a particular set of ten parameter values is more sensitive than the others, i.e., one of the ten parameters may be more sensitive than the rest. CV in the simple system is 0.754 and in the more complex system 0.236

3.1.1 Plots of Firing Rate of All Sample Points vs Baseline Parameters

The three plots in Fig. 4 show all data and examine robustness across increasing complexity measured by the numbers of several different circuit elements.

Fig. 4

Top left: firing rate vs number of neural cell populations. Top right: firing rate vs total number of neurons in the system. Bottom: firing rate vs total system projections. Each point is a 1-second run with a unique set of 10 neural cell parameter values (colored dots) vs baseline values (black dots), totaling 5100 sample points. In general, as complexity increases, variance decreases, i.e., the system becomes more robust

3.1.2 Robustness vs Number of Elements as Measured by Coefficient of Variance (CV)

The next plots (Fig. 5) show the CV (Eq. 1) of Monte Carlo sample firing rates vs complexity parameters compared to the baseline configuration (red line). Note that we take the baseline firing rate as the “expected value,” and therefore CV < 1 means the standard deviation of the sampled firing rates is smaller than the firing rate of the baseline (Eq. 1). Each point shows the CV over 50 population configurations, each with a randomly selected set of neuron parameters, and each graph shows 100 such points. Since the total number of synapses and the total number of projections are linearly related (in our study, synapses are two times the number of projections, since each projection synapsed on dendritic compartments 4 and 5), their distributions of CV are similar, and thus data for the number of synapses are omitted.

Fig. 5

Top left: coefficient of variance vs number of populations. Each of the 100 points aggregates 50 runs, each with a unique set of 10 neuron parameter values. CV decreased with increasing number of populations and converged to a constant 0.197 at 7+ populations. Top right: coefficient of variance vs number of total cells. When varying the number of cells in the system, CV is larger than when varying the number of populations; here CV converges to 0.281. Bottom: coefficient of variance vs number of total projections. CV converges to 0.252 at ~10^5 projections, similar to when varying the number of cells

3.1.3 Robustness vs Number of Elements as Measured by Lyapunov Exponent (LE)

Here we used a different measure of robustness, the Lyapunov exponent (LE, Eq. 2), based on how the firing rates of the sampled population-group-parameter configurations diverged over time from those of the baseline neuron parameter case. Yet the results for different levels of complexity were quite similar to the previous results using CV as the metric. Figure 6 shows LE against numbers of populations, cells, and projections.

Fig. 6

Lyapunov exponent (LE) analysis of firing rates over time vs system complexity. Each point is a run with a unique set of 10 neuron parameter values in the 50 combinations of population and group numbers, totaling 5000 sample points. The LE measures how the firing rates of sample configurations deviated from the baseline case over time: LE > 0, sample firing rates above baseline; LE < 0, sample firing rates below baseline; LE = 0, no deviation from baseline. Top left: LE of firing rate vs populations; divergence decreases with increasing numbers of populations. Top right: LE of firing rate vs number of total cells; firing rates generally decrease over time as steady states are reached in the circuits, and those of sample points fall below those of baseline cases. Bottom: LE of firing rate vs number of total projections; firing rates of sampled configurations were generally higher than those of baseline cases, and divergence decreased with increasing total projections. Overall, divergence decreases with increasing system complexity

3.1.4 Robustness vs Number of Elements as Measured by Cumulative Firing Rate (CFR)

The cumulative firing rate (CFR) can be compared to momentary firing rate as a global reflection of the circuitry behavior over time instead of at a given point in time.

Figure 7 shows CFR for total populations, total cells, total projections, total projections’ divergence from baseline (LE), and total populations’ divergence from baseline (CV). Results were similar to simulations shown above.

Fig. 7

Cumulative firing rate (CFR) at each time step is the total number of spikes from 100 ms onward divided by the elapsed time in the simulation. Plots show CFR for total populations (top left), total cells (top right), total projections (middle left), LE of total projections’ CFR divergence from that of the baseline parameters’ model (middle right), and CV of total populations’ CFR divergence from that of baseline (bottom left, with legend bottom right). In all cases, the CFR of sampled configurations decreases with increasing number of system elements

4 Discussion

4.1 Key Results

The data suggest that neural circuits become increasingly robust as their complexity increases, and they do so at levels of complexity that are small compared to the numbers of neurons in various organisms. Synapse totals in the study ranged from 160 to 5.18 × 10^6; recall that a 3-year-old human has an estimated 10^15 synapses in their entire brain.

Thus, the inherent nature of a moderately complex and sufficiently interconnected neural circuit is that local parameter changes are unlikely to cause significant changes in circuit behavior. Given sufficient connectivity, robustness increases as the number of elements increases and may be sustained once it passes a transition window. In our simulations, where the elements were neurons and other parameters were equal, this transition window to maximal robustness was in the range of ~100–1000 cells.

The number of individual synapses appeared to have the least effect on achieving robustness compared to the number of cells or axonal projections.

4.2 Robustness and Degeneracy in Biological Systems

Our pragmatic interest is in applying the results of the study and related work to faithfully modeling neurological disorders, such as Parkinson’s disease, epilepsy, neuropathic pain, and others, and how to treat them with neuromodulation.

Biological systems have evolved adaptive homeostasis to increase their robustness; conversely, loss of adaptive homeostasis can result in disease and disorder and is a hallmark of aging [1, 2, 40,41,42,43,44,45]. More broadly, the term degeneracy in complex systems theory has been applied to biological systems to describe a decline in their ability to maintain functional integrity [46], and the application of degeneracy analysis to biology and neural circuits in particular is likely to play a major role in neurological disorders and aging and their treatment [47].

4.3 Robustness and Degeneracy in Functional Connectivity Brain Networks

In the past two decades, functional connectivity (FC) – the correlated activity between brain regions as identified by imaging and monitoring the brain’s electrical activity – has become a major paradigm for understanding healthy and diseased states. The “resting state” and time course of FC networks can constitute signatures of healthy vs diseased conditions. Degeneracy of the healthy FC is identified with the diseased state. In many cases, modelers of neurological disorders may find that loss of system complexity underlies diseased states. For instance, it is unknown why, in Parkinson’s disease, up to 80% of the dopaminergic neurons of the substantia nigra are lost before movement disorders manifest. Degeneracy via loss of complexity and connectedness in the basal ganglia may be part of the mechanism.

4.4 Inadvertent Modeling Error Due to Scaling

Robustness helps illuminate possible errors in modeling connectomes when scaling the number of elements. When building a complete model of the human spinal cord connectome [34], we found that Rexed lamina II had ~1300 times the number of neurons that lamina I had. Thus, in scaling down the total number of neurons to make a more computationally efficient model, we inadvertently reduced the robustness of the model, which showed up as high firing rates in lamina I, and we had to scale it back up to correct for that. Such an inadvertent effect could equally invalidate many neural circuit models, notably of seizure.

5 Conclusion

The complexity of biological systems, notably neural circuitry, as defined by the number of elements (neurons) and their connectivity (synaptic topology), is a critical factor in their robustness – the degree to which, when perturbed from healthy states toward diseased or disordered states, they inherently tend to return to the healthy state. We simulated neural circuitry complexity and connectivity in a large parameter space via Monte Carlo sampling. Our results bear out the theory that more complex, connected systems can be inherently more stable than simpler systems. These results have implications for modeling neurological disorders, such as Parkinson’s disease, chronic pain, seizure, and age-related cognitive decline, and for how to treat these conditions with neuromodulation.