Introduction

Over the past decades, numerous observations of chaos have been reported in the analysis of time series from a variety of neural systems, ranging from the simplest to the most complex1,2. It is generally accepted that the inherent instability of chaos in nonlinear system dynamics facilitates the extraordinary ability of neural systems to respond quickly to changes in their external inputs3, to make transitions from one pattern of behaviour to another when the environment is altered4, and to create a rich variety of patterns that endow neuronal circuits with remarkable computational capabilities5. These features are all suggestive of an underlying role of chaos in neural systems (for reviews, see5,6,7); however, these ideas have not yet been put to a thorough test.

Chaotic dynamics in neural networks can emerge in a variety of ways, either through intrinsic mechanisms within individual neurons8,9,10,11,12 or through interactions between neurons3,13,14,15,16,17,18,19,20,21. The first type is accompanied by microscopic chaotic dynamics at the level of individual oscillators, and has been observed in networks of Hindmarsh-Rose neurons8 and of biophysical conductance-based neurons9,10,11,12. The second type is synchronous chaos, which has been demonstrated in networks of both biophysical and non-biophysical neurons3,13,15,17,22,23,24, where neurons display synchronous chaotic firing-rate fluctuations. In the latter case, the chaotic behaviour results from the network connectivity, since isolated neurons do not display chaotic dynamics or burst firing. More recently, it has been shown that asynchronous chaos, in which neurons exhibit asynchronous chaotic firing-rate fluctuations, emerges generically in balanced networks with multiple time scales in their synaptic dynamics20.

Different modelling approaches have been used to uncover the conditions for observing these types of chaotic behaviour (in particular, synchronous and asynchronous chaos) in neural networks, such as the synaptic strength25,26,27, heterogeneity in the number of synapses and their strengths28,29, and, more recently, the balance of excitation and inhibition21. Sompolinsky et al.25 showed that, when the synaptic strength is increased, neural networks undergo a transition from an inactive state to a highly heterogeneous chaotic state. Other studies demonstrated that chaotic behaviour emerges in the presence of weak and strong heterogeneities, for example in coupled heterogeneous populations of neural oscillators with different synaptic strengths28,29,30. Recently, Kadmon et al.21 highlighted the importance of the balance between excitation and inhibition for the transition to chaos in random neural networks. All these approaches identify essential mechanisms for generating chaos in neural networks. However, they give little insight into whether chaotic neural oscillators make a significant contribution to relevant network behaviour, such as synchronization. In other words, whether the dynamical richness of neural networks is sensitive to the dynamics of the isolated neurons has not yet been systematically studied.

To address this question, in the present paper we study the synchronization transition in heterogeneous networks of interacting neurons. We make use of an oscillatory neuron model (the Huber & Braun model plus Ih, referred to here as HB + Ih) that exhibits either chaotic or non-chaotic behaviour depending on parameter values. Compared to other conductance-based models that display a variety of firing patterns and chaos, the HB + Ih model consists of fewer variables and parameters while retaining a biophysical meaning of its parameters and equations. Moreover, chaos is found in biologically plausible parameter regions, as we showed in a previous study12. Taking advantage of the mapping of chaotic regions that we previously performed, we simulated small-world31 neural networks consisting of a heterogeneous population of HB + Ih neurons connected by electrical synapses, sampling their parameters from either chaotic or non-chaotic regions of the parameter space.

Our first finding is that isolated chaotic neurons do not always make a visible difference in the process of network synchronization. The heterogeneity of firing rates and the type of firing patterns make a greater contribution to the steepness of the synchronization transition curve. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the neurons. However, Functional Connectivity Dynamics (FCD) analysis shows that chaotic nodes can promote multi-stable behaviour, in which the network dynamically switches between a number of different semi-synchronized, metastable states. Finally, our results suggest that chaotic dynamics of the isolated neurons is not always a predictor of macroscopic chaos, but that macroscopic chaos can be a predictor of meta- and multi-stability.

Materials and Methods

Single neuron dynamics

We use a parabolic bursting model inspired by the static firing patterns of cold thermoreceptors, in which a slow sub-threshold oscillation is driven by a combination of a persistent sodium current (\(I_{sd}\)), a calcium-activated potassium current (\(I_{sr}\)) and a hyperpolarization-activated current (\(I_h\)). Depending on the parameters, it exhibits a variety of firing patterns including irregular firing, regular tonic firing, bursting and chaotic firing12,32. Being based on the Huber & Braun (HB) thermoreceptor model11, it will be referred to here as the HB + Ih model.

The membrane action potential of a HB + Ih neuron follows the dynamics:

$$C_{m}\frac{dV}{dt}=-I_{sd}-I_{sr}-I_{h}-I_{d}-I_{r}-I_{l}+I_{syn},$$
(1)

where V is the membrane potential and \(C_m\) the membrane capacitance; \(I_d\), \(I_r\), \(I_{sd}\), \(I_{sr}\) are the depolarizing (NaV), repolarizing (Kdr), slow depolarizing (NaP/CaT) and slow repolarizing (KCa) currents, respectively. \(I_h\) is the hyperpolarization-activated current, \(I_l\) the leak current, and \(I_{syn}\) the synaptic current. All currents (except \(I_{syn}\)) are defined as:

$$I_{i}=\rho(T)\,g_{i}\,a_{i}\,(V-E_{i}),\qquad i=d,r,sd,h,l;$$
(2)
$$I_{sr}=\rho(T)\,g_{sr}\frac{a_{sr}^{2}}{a_{sr}^{2}+0.4^{2}}(V-E_{sr}),$$
(3)

where \(a_i\) is an activation term that represents the open probability of the channels (\(a_l \equiv 1\)), with the exception of \(a_{sr}\), which represents the intracellular calcium concentration. The parameter \(g_i\) is the maximal conductance density, \(E_i\) is the reversal potential and the function \(\rho(T)\) is a temperature-dependent scale factor for the current.

The activation terms \(a_r\), \(a_{sd}\) and \(a_h\) follow the differential equations:

$$\frac{da_{i}}{dt}=\varphi(T)\frac{a_{i}^{\infty}(V)-a_{i}}{\tau_{i}},\qquad i=r,sd,h,$$
(4)

where

$$a_{i}^{\infty}(V)=\frac{1}{1+\exp(-s_{i}(V-V_{i}^{0}))}.$$
(5)

\(V_{i}^{0}\) is the half-activation voltage and \(s_i\) is the voltage dependency (slope) of the sigmoid function. On the other hand, \(a_{sr}\) follows

$$\frac{da_{sr}}{dt}=\varphi(T)\frac{-\eta I_{sd}-\kappa a_{sr}}{\tau_{sr}},$$
(6)

where η is a factor relating the mixed Na/Ca current \(I_{sd}\) to the increase in intracellular calcium; the minus sign ensures that inward (negative) currents produce an increase in \(a_{sr}\). κ is the rate of calcium removal by buffering and/or active extrusion. Finally,

$$a_{d}=a_{d}^{\infty}=\frac{1}{1+\exp(-s_{d}(V-V_{d}^{0}))}.$$
(7)

The function ϕ(T) is a temperature factor for channel kinetics. The temperature-dependent functions for conductance ρ(T) in Eqs (2) and (3), and for kinetics ϕ(T) in Eqs (4) and (6) are given, respectively, by:

$$\rho(T)=1.3^{\frac{T-25}{10}},\qquad \varphi(T)=3^{\frac{T-25}{10}}.$$
(8)

In the simulations, we vary the maximal conductance densities \(g_{sd}\), \(g_{sr}\) and \(g_h\). Unless stated otherwise, the parameters used are given in Table 1.

Table 1 Parameters of the HB + Ih model.
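For concreteness, the following Python sketch implements the right-hand side of Eqs. (1)–(8) for a single neuron. All numerical constants below (conductances, reversal potentials, half-activation voltages, slopes, time constants, η and κ) are illustrative placeholders rather than the values of Table 1.

```python
import numpy as np

def hbih_rhs(state, T=36.0, g_sd=0.25, g_sr=0.32, g_h=0.4,
             g_d=2.5, g_r=2.8, g_l=0.06, I_syn=0.0):
    """Right-hand side of the HB + Ih model, Eqs. (1)-(8).
    All numerical constants are illustrative placeholders, not the values
    of Table 1.  C_m is taken as 1."""
    V, a_r, a_sd, a_sr, a_h = state
    rho = 1.3 ** ((T - 25.0) / 10.0)          # Eq. (8), conductance scaling
    phi = 3.0 ** ((T - 25.0) / 10.0)          # Eq. (8), kinetics scaling

    # steady-state activations, Eqs. (5) and (7) (placeholder s_i, V0_i)
    a_d_inf  = 1.0 / (1.0 + np.exp(-0.25 * (V + 25.0)))
    a_r_inf  = 1.0 / (1.0 + np.exp(-0.25 * (V + 25.0)))
    a_sd_inf = 1.0 / (1.0 + np.exp(-0.09 * (V + 40.0)))
    a_h_inf  = 1.0 / (1.0 + np.exp( 0.14 * (V + 85.0)))  # negative slope: opens on hyperpolarization

    # ionic currents, Eqs. (2) and (3) (placeholder reversal potentials)
    I_d  = rho * g_d  * a_d_inf * (V - 50.0)
    I_r  = rho * g_r  * a_r     * (V + 90.0)
    I_sd = rho * g_sd * a_sd    * (V - 50.0)
    I_sr = rho * g_sr * a_sr**2 / (a_sr**2 + 0.4**2) * (V + 90.0)
    I_h  = rho * g_h  * a_h     * (V + 30.0)
    I_l  = rho * g_l            * (V + 60.0)   # a_l = 1

    # Eqs. (1), (4) and (6) (placeholder time constants, eta and kappa)
    dV    = -I_sd - I_sr - I_h - I_d - I_r - I_l + I_syn
    da_r  = phi * (a_r_inf  - a_r ) / 2.0
    da_sd = phi * (a_sd_inf - a_sd) / 10.0
    da_sr = phi * (-0.012 * I_sd - 0.17 * a_sr) / 35.0
    da_h  = phi * (a_h_inf  - a_h ) / 125.0
    return np.array([dV, da_r, da_sd, da_sr, da_h])
```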

Synaptic interactions

The synaptic input current into neuron k is given by:

$$I_{syn,k}(t)=\sum_{l=1,\,l\ne k}^{N}C_{kl}\,I_{kl}(t)$$
(9)

In this article, the current \(I_{kl}\) between neurons k and l is modelled as a gap junction (electrical synapse):

$$I_{kl}(t)=g_{kl}(V_{k}-V_{l})$$
(10)

where \(g_{kl}\) is the conductance (coupling strength) of the synapse between cells k and l. In this work we set a uniform value \(g_{kl}=g\) for all connections within the neural networks, and simulations were performed with different values of g in order to observe the transition to full network synchrony.
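As a sketch, the total gap-junction current into every neuron can be computed in vectorized form from the connectivity matrix. The function below writes the coupling in the usual diffusive form, pulling each neuron towards the potential of its neighbours; the overall sign convention must match the way \(I_{syn}\) enters Eq. (1).

```python
import numpy as np

def gap_junction_current(V, C, g):
    """Total electrical-coupling current into each neuron, Eqs. (9)-(10).
    V : (N,) membrane potentials; C : (N, N) symmetric 0/1 adjacency matrix;
    g : scalar coupling conductance (same for all connections).
    Returns g * sum_l C_kl * (V_l - V_k) for every neuron k."""
    degree = C.sum(axis=1)              # number of neighbours of each neuron
    return g * (C @ V - degree * V)
```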

Structural connectivity matrix

We define the connectivity matrix by \(C_{kl}=1\) if neuron k is connected to neuron l and \(C_{kl}=0\) otherwise. We employed a Newman-Watts small-world topology31, implemented as two basic steps of the standard algorithm33,34,35: (1) Create a ring lattice with N nodes of mean degree 2K, in which each node is connected to its K nearest neighbours on either side. (2) For each node in the graph, add an extra edge, with probability p, to a randomly selected node; the added edge cannot be a duplicate or a self-loop. Finally, as we are simulating electrical synapses, the matrices were made symmetric. The results presented in the main text correspond to networks with N = 250, K = 5, p = 0.1; results obtained with other networks are shown in the Supplementary Material. The random seed for adding extra edges was controlled in order to use the same set of connectivity matrices under each condition. Ten or twenty different seeds were used, and the results reported are averages over the different network realizations.
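A minimal sketch of this two-step construction (ring lattice plus one random extra edge per node), written directly at the adjacency-matrix level, is given below; the function and variable names are ours and are not taken from the published code.

```python
import numpy as np

def newman_watts_matrix(N=250, K=5, p=0.1, seed=0):
    """Symmetric 0/1 connectivity matrix: (1) ring lattice of mean degree 2K,
    (2) for each node, with probability p, one extra edge to a randomly
    selected node (no self-loops or duplicates)."""
    rng = np.random.default_rng(seed)
    C = np.zeros((N, N), dtype=int)
    idx = np.arange(N)
    for k in range(1, K + 1):                         # step 1: ring lattice
        C[idx, (idx + k) % N] = 1
        C[(idx + k) % N, idx] = 1
    for i in range(N):                                # step 2: random shortcuts
        if rng.random() < p:
            candidates = np.where((C[i] == 0) & (idx != i))[0]
            if candidates.size > 0:
                j = rng.choice(candidates)
                C[i, j] = C[j, i] = 1
    return C
```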

Quantifying chaos

The standard method for establishing whether a system is chaotic is the computation of Lyapunov exponents; in particular, a maximal Lyapunov exponent (MLE) greater than zero is widely used as an indicator of chaos36,37,38,39. We calculated MLEs from trajectories in the full variable space, following a standard numerical method based on Sprott36 (see also Jones et al.37).
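A minimal sketch of such a two-trajectory (Benettin-type) estimate is shown below; `rhs` stands for any function returning the time derivative of the full state (single neuron or network), and the step size, perturbation size and renormalization interval are illustrative choices rather than the exact settings we used.

```python
import numpy as np

def max_lyapunov(rhs, x0, dt=0.025, n_steps=400_000, renorm_every=100, d0=1e-8):
    """Estimate the maximal Lyapunov exponent by following a reference and a
    slightly perturbed trajectory, renormalizing their separation to d0 every
    `renorm_every` Euler steps and accumulating log(d / d0)."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    v = rng.standard_normal(x.shape)
    y = x + d0 * v / np.linalg.norm(v)       # perturbed copy at distance d0
    log_sum, elapsed = 0.0, 0.0
    for step in range(1, n_steps + 1):
        x = x + dt * rhs(x)
        y = y + dt * rhs(y)
        if step % renorm_every == 0:
            d = np.linalg.norm(y - x)
            log_sum += np.log(d / d0)
            y = x + (y - x) * (d0 / d)       # rescale separation back to d0
            elapsed += renorm_every * dt
    return log_sum / elapsed                 # MLE in units of 1/time
```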

Measurement of network dynamics

The voltage trajectory of each neuron was low-pass filtered (50 Hz) and a continuous wavelet transform40,41,42 was applied to determine, in one step, the predominant frequency and the instantaneous phase at that frequency. We used the complex Morlet wavelet as the mother wavelet function to calculate the instantaneous phase.
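A possible implementation of this step, using SciPy for the low-pass filter and PyWavelets for the complex Morlet transform, is sketched below; the wavelet parameters ('cmor1.5-1.0') and the frequency grid are illustrative assumptions, not necessarily the exact settings used.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def instantaneous_phase(v, fs, fmin=1.0, fmax=20.0, n_freqs=40):
    """Instantaneous phase of a voltage trace at its predominant frequency.
    Low-pass filters at 50 Hz, computes a complex Morlet CWT on a frequency
    grid, selects the frequency with maximal mean power and returns the
    phase of the coefficients at that frequency."""
    b, a = butter(4, 50.0 / (fs / 2.0), btype="low")
    v_filt = filtfilt(b, a, v)
    freqs = np.linspace(fmin, fmax, n_freqs)
    scales = pywt.central_frequency("cmor1.5-1.0") * fs / freqs
    coeffs, _ = pywt.cwt(v_filt, scales, "cmor1.5-1.0", sampling_period=1.0 / fs)
    dominant = np.argmax(np.mean(np.abs(coeffs) ** 2, axis=1))
    return np.angle(coeffs[dominant])
```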

We describe the global dynamical behaviour of the neural networks using the mean and the standard deviation of the order parameter amplitude over the time course, which indicate, respectively, the global synchrony and the global metastability of the system43,44. The order parameter45,46, R, describes the global level of phase synchrony in a system of N oscillators and is given by:

$$R={\left\langle \left|{\left\langle e^{i\phi_{k}(t)}\right\rangle}_{N}\right|\right\rangle}_{t}$$
(11)

where \(\phi_k(t)\) is the phase of oscillator k at time t, \({\langle \cdot \rangle}_{N}\) denotes the average over all oscillators in the network, |·| is the absolute value and \({\langle \cdot \rangle}_{t}\) is the time average; \(\varphi_c(t)=|{\langle e^{i\phi_k(t)}\rangle}_{N}|\) is the instantaneous order parameter amplitude. R = 0 corresponds to the maximally asynchronous (disordered) state, whereas R = 1 represents the state where all oscillators are completely synchronized (phase synchrony). The global metastability χ of the neural networks is given by:

$$\chi =\frac{1}{\Delta t}\sum_{t\le \Delta t}{\left(\varphi_{c}(t)-R\right)}^{2}$$
(12)

Metastability is zero if the system is either completely synchronized or completely desynchronized during the full simulation; a high value appears only when periods of coherence alternate with periods of incoherence43. Δt in Eq. (12) is the time window used to quantify the global metastability.
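Given the matrix of instantaneous phases, Eqs. (11) and (12) reduce to a few lines of Python; the sketch below treats Δt as the number of samples in the analysis window.

```python
import numpy as np

def synchrony_and_metastability(phases):
    """Global synchrony R (Eq. 11) and metastability chi (Eq. 12).
    phases : (N, T) array of instantaneous phases phi_k(t)."""
    phi_c = np.abs(np.mean(np.exp(1j * phases), axis=0))   # |<e^{i phi_k}>_N| at each t
    R = phi_c.mean()                                       # time average
    chi = np.mean((phi_c - R) ** 2)                        # variance of phi_c over the window
    return R, chi
```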

Functional Connectivity Dynamics

A series of functional connectivity (FC) matrices was calculated using phase synchrony (order parameter) in a pair-wise fashion, over a series of M overlapping time windows T1, T2, T3, …, TM. The FC matrix at window m is defined by:

$$FC_{k,l}(T_{m})={\left\langle \left|\frac{1}{2}\left(e^{i\phi_{k}(t)}+e^{i\phi_{l}(t)}\right)\right|\right\rangle}_{t}$$
(13)

where t runs over all times inside window m. We chose 2 s as the width of the time windows, with an overlap of 90% between consecutive windows; in this way, 20 to 25 oscillation cycles are included and the measured synchronization patterns reflect this time scale. The Functional Connectivity Dynamics (FCD) matrix47,48 then consists of a pair-wise comparison of all the FCs, revealing how similar or different the synchronization patterns found at different times are. We performed this comparison by taking the values in the lower triangle of each FC, discarding the diagonal and the values adjacent to it, and calculating the correlation between the resulting vectors. Thus the FCD matrix is defined by

$$FCD_{i,j}=\frac{\mathrm{cov}(FC_{t=i},FC_{t=j})}{\sigma(FC_{t=i})\,\sigma(FC_{t=j})}$$
(14)

where cov(X, Y) is the covariance between vectors X and Y, and σ(X) is the standard deviation of X. Note that the (i, j) indices in Eq. (14) refer to FC matrices obtained at different times, while the (k, l) indices in Eq. (13) refer to network nodes. Finally, a histogram of the FCD values and their variance offer a rough measure of multi-stability (see Results section).
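A compact sketch of Eqs. (13) and (14) is given below; window and step lengths are expressed in samples, and the exclusion of the diagonal and its immediate neighbours follows the description above.

```python
import numpy as np

def fc_matrix(phases):
    """Pair-wise phase synchrony within one time window, Eq. (13).
    phases : (N, T_w) instantaneous phases of the N nodes in the window."""
    z = np.exp(1j * phases)                                    # (N, T_w)
    return np.mean(np.abs(z[:, None, :] + z[None, :, :]) / 2.0, axis=-1)

def fcd_matrix(phases, win, step):
    """FCD matrix, Eq. (14): Pearson correlation between the vectorized lower
    triangles of the FC matrices computed in overlapping windows.
    win, step : window width and sliding step, in samples (e.g. a 2-s window
    with 90% overlap)."""
    N, T = phases.shape
    tri = np.tril_indices(N, k=-2)   # lower triangle, excluding diagonal and adjacent values
    starts = range(0, T - win + 1, step)
    fcs = np.array([fc_matrix(phases[:, s:s + win])[tri] for s in starts])
    return np.corrcoef(fcs)          # (M, M) matrix of correlations between FCs
```

The variance of the off-diagonal FCD values, used in the Results as a rough index of multi-stability, can then be obtained as `np.var(fcd[np.triu_indices_from(fcd, k=2)])`.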

Numerical integration

Equations (1)–(8) were solved by the Euler method with a fixed step size dt = 0.025. Most simulations were also repeated with an adaptive integration algorithm (the odeint routine of the Scipy package) without noticeable differences in the results. Data analysis and plotting were performed with Python and the libraries Numpy, Scipy, and Matplotlib. Example code used in this work is available at https://github.com/patoorio/HBIh-synchrony.
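The pieces sketched above can be tied together in a fixed-step Euler loop like the following; `hbih_rhs` and `gap_junction_current` refer to the earlier sketches, and the whole function is illustrative glue code rather than the published implementation (available at the repository linked above).

```python
import numpy as np

def simulate_network(C, g, x0, dt=0.025, n_steps=400_000):
    """Euler integration of the coupled network.
    C : (N, N) connectivity matrix; g : gap-junction conductance;
    x0 : (N, 5) initial state, with the membrane potential in column 0.
    Returns the (n_steps, N) array of membrane potentials."""
    x = np.array(x0, dtype=float)
    V_trace = np.empty((n_steps, x.shape[0]))
    for step in range(n_steps):
        I_syn = gap_junction_current(x[:, 0], C, g)    # coupling acts through V only
        dx = np.array([hbih_rhs(x[k], I_syn=I_syn[k]) for k in range(x.shape[0])])
        x = x + dt * dx
        V_trace[step] = x[:, 0]
    return V_trace
```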

Results

Synchronization transitions with parameters drawn from fixed-size regions

Our main goal is to study how the dynamics of isolated neural oscillators propagates to the network level, in terms of relevant behaviours. To do this, we use a model neural oscillator that can display either chaotic or non-chaotic behaviour depending on the parameters (Figs 1 and 2A; see also12). We simulated networks of 250 neurons connected by electrical coupling (gap junctions) in a small-world topology. When drawing the \(g_{sd}\) and \(g_h\) parameters, we selected different regions of the parameter space that made the neurons behave as either chaotic or non-chaotic oscillators, while maintaining a similar average firing rate (Fig. 2A). Then, the inter-cellular conductance g was varied from 0 to 1 in order to reveal the transition from asynchrony to complete synchronization. The whole procedure was repeated with 10 different random seeds for the generation of the networks, to check that the results are not particular to a specific connectivity.

Figure 1

Examples of non-chaotic (A) and chaotic (B) oscillations of HB + Ih neurons at different combinations of conductance parameters (shown at the top together with the corresponding Maximum Lyapunov Exponent, MLE). Below each voltage time course, inter-spike interval (ISI) plots are shown. Right panels show three-dimensional phase-space projections of the variables \(a_{sd}\), \(a_{sr}\) and \(a_h\).

Figure 2

Synchronization transitions in neural networks with parameters drawn from fixed-size regions of the parameter space. (A) Maximal Lyapunov exponent (MLE), firing rate (FR) and firing pattern (FP) obtained for each combination of parameter values. Red rectangles denote regions with chaotic oscillations, characterized by MLE > 0, while regions in green contain non-chaotic oscillators (MLE ≤ 0). The colour bar of FP indicates: 0, no oscillations; 1, sub-threshold oscillations (no spikes); 2, oscillations and spikes with skipping; 3, regular tonic spiking; 4, burst firing (the shade represents the number of spikes per burst); 5, tonic firing with a rate between 20 and 50 spikes/second; 6, firing rate higher than 50 spikes/second. (B) and (C) Transition dynamics in heterogeneous networks of 250 neurons, as the coupling value g is increased. Order parameter (R), metastability and MLE are shown. The results are averages of 10 realizations of the networks and the error bars indicate the standard deviation. Subplots B and C correspond to parameters drawn from range 1 and range 2 of A, respectively. Throughout this article, ‘Non-chaotic networks’ and ‘Chaotic networks’ in the legends denote networks built with non-chaotic and chaotic oscillators, respectively.

Figure 2B,C show the transition curves for networks built with parameters drawn from regions 1 and 2, showing the order parameter (global synchrony), metastability (time variability of the global synchrony) and the network MLE as a measure of global chaos. In the first pair of parameter regions, networks of chaotic oscillators show a shallower transition to synchrony, with a higher metastability and a higher network MLE. This suggests that the chaotic nature of the oscillators indeed impacts the network behaviour. In networks of non-chaotic oscillators, when g is between \(10^{-4}\) and \(10^{-2}\), the networks become chaotic while transitioning from asynchrony (low R) to synchrony (high R); however, when phase synchronization is reached, chaos is lost. Thus, only asynchronous chaos is observed. In the case of chaotic neurons, the networks exhibit chaotic behaviour over the whole range of synaptic coupling (g from 0 to 1); here, both asynchronous and synchronous chaos are observed. When the same analysis was applied to the second pair of parameter regions, we observed a strange behaviour of the non-chaotic oscillators, with a non-smooth transition associated with a higher metastability and network MLE. An inspection of the firing patterns (Fig. 2A, right) reveals that the regions labelled ‘2’ contain transitions between different firing patterns: tonic regular to bursting in the non-chaotic region, and skipping to bursting in the chaotic region. This suggested that the ‘kink’ observed in the curves of non-chaotic oscillators was due to a transition between firing patterns occurring in the network. Indeed, Fig. 3A shows that, as synaptic coupling is increased in networks of non-chaotic neurons, the bursting firing pattern disappears, while in networks of chaotic neurons it becomes more frequent. Moreover, in non-chaotic neurons there is a rebound of bursting firing patterns at g = 0.0433. This finding suggests that the dramatic changes in firing patterns induce the non-smooth transition shown in Fig. 2C.

Figure 3

(A) Mean fraction of bursting events (MB) in networks of chaotic and non-chaotic oscillators from region 2. The fraction of bursting events for a given neuron k is defined as \(b_k = N_b/T_e\), where \(N_b\) and \(T_e\) represent the number of bursts (two or more spikes) and the total number of events for each neuron, respectively. The mean fraction of bursting events of the whole neural network is \(\frac{1}{N}\sum_{k=1}^{N}b_{k}\). (B) Histogram of the (isolated) firing rates in each pair of parameter regions shown in Fig. 2A.

A closer examination of the firing rates in each parameter range revealed another consequence of chaos. A histogram of the (isolated) firing rates in each parameter region (Fig. 3B) shows that, when drawing from fixed-size regions of the parameter space, chaotic oscillators display a more heterogeneous distribution of firing rates than non-chaotic ones. Thus, the steeper transition to synchrony and the lower metastability observed with non-chaotic oscillators could be due to a more homogeneous network, rather than to the absence of chaotic oscillations per se. Although this is already an effect of chaos, it is of dubious biological relevance, because neurons do not control their levels of channel expression within a fixed range, as we did here. If anything, neurons control for function, and a simple approximation to this is to consider that they try to maintain a certain average firing rate with whatever combination of ion channel densities can attain it. Thus, we developed a parameter sampling procedure that replicated, for both chaotic and non-chaotic populations, a similar firing rate distribution rather than just a similar mean. We also shifted to another parameter subspace (\(g_{sd}/g_{sr}\)) to take advantage of the complete absence of chaos when \(g_h = 0\)12. Nevertheless, the simulations that follow were also performed in the \(g_{sd}/g_h\) parameter subspace with very similar results (see Supplementary Material Fig. S1).

Synchronization transitions using same distribution of firing rates

Figure 4A shows a region of the \(g_{sr}/g_{sd}\) parameter subspace, plotting the maximal Lyapunov exponent and the firing rate obtained with each parameter pair. The desired regions with lower firing rate (from 3.0 to 4.5 spikes/s) are highlighted in the firing rate plots (Fig. 4A, middle and right), corresponding to chaotic (middle) and non-chaotic (right) behaviour. The 3.0–4.5 Hz interval was divided into bins of 0.1 Hz, and in each bin the same number of \(g_{sr}/g_{sd}\) combinations was randomly picked from each region (chaotic or non-chaotic). In addition, we picked parameter pairs from the model without the \(I_h\) current (NoIh oscillators), which does not display chaotic behaviour under any parameter combination (see12 and Fig. 4B); the corresponding firing-rate region is highlighted in Fig. 4B (bottom). The histograms of firing rates for each selected set of parameters (Fig. 4C) show that they share the same distribution of firing rates, which is supported by pair-wise tests for equality of distributions using the non-parametric Kolmogorov-Smirnov test49. The same procedure was used to select parameter sets with higher firing rates (from 7.0 to 9.5 Hz, not shown).
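A sketch of this binned matching procedure, together with the Kolmogorov-Smirnov check (scipy.stats.ks_2samp), is shown below; it assumes that the firing rates of the two candidate populations (e.g. chaotic and non-chaotic parameter sets) have already been computed, and the function name is ours.

```python
import numpy as np
from scipy.stats import ks_2samp

def match_firing_rates(fr_a, fr_b, lo=3.0, hi=4.5, bin_width=0.1, seed=0):
    """Select, within every 0.1-Hz bin of [lo, hi), the same number of samples
    from two populations, so that the selections share the same firing-rate
    distribution.  Returns the selected indices of each population."""
    rng = np.random.default_rng(seed)
    sel_a, sel_b = [], []
    for left in np.arange(lo, hi, bin_width):
        in_a = np.where((fr_a >= left) & (fr_a < left + bin_width))[0]
        in_b = np.where((fr_b >= left) & (fr_b < left + bin_width))[0]
        n = min(in_a.size, in_b.size)                 # same count in both populations
        sel_a.extend(rng.choice(in_a, n, replace=False))
        sel_b.extend(rng.choice(in_b, n, replace=False))
    sel_a, sel_b = np.array(sel_a), np.array(sel_b)
    print(ks_2samp(fr_a[sel_a], fr_b[sel_b]))         # equality-of-distributions check
    return sel_a, sel_b
```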

Figure 4

Selection of parameter sets with the same distribution of firing rates. (A) Maximum Lyapunov Exponent (MLE) and Firing Rate (FR) obtained in the selected \(g_{sr}/g_{sd}\) parameter region. The FR plot is shown twice, highlighting in light blue either chaotic (MLE > 0, left) or non-chaotic (MLE ≤ 0, right) oscillations with a firing rate between 3.0 and 4.5 spikes/s. (B) MLE and FR in the same \(g_{sr}/g_{sd}\) region as in (A), for the model without \(I_h\). Highlighted in light blue, FR between 3.0 and 4.5 spikes/s. (C) Histogram of firing rates in each selected set of parameters and p-value from a pair-wise comparison of distributions with the non-parametric Kolmogorov-Smirnov test.

Networks of 250 neurons were built by randomly picking \(g_{sr}/g_{sd}\) pairs from the populations described above. Figure 5 plots the synchrony transition curves for networks using parameters drawn from the lower (Fig. 5A) and higher (Fig. 5B) firing rate ranges, showing the order parameter, metastability and the network MLE. In the lower firing rate regions, all types of networks show a similar slope in their transition to synchrony; networks of chaotic oscillators, however, show a higher metastability and a higher network MLE. In the higher firing rate regions, networks of chaotic and non-chaotic oscillators show not only a similar transition to synchrony, but also the same degree of metastability and network MLE. These results suggest that the MLE at the network level (macroscopic chaos) can be a predictor of metastability, whereas the chaotic nature of the isolated oscillators does not always translate into network chaos in a direct or predictable fashion. The blue curves of Fig. 5 show that networks of NoIh oscillators transition to synchrony at lower values of g, with a similar degree of metastability and lower values of network MLE compared to the other networks. It is worth mentioning, however, that the NoIh systems have 250 fewer dimensions (1 per node) and thus the magnitude of their Lyapunov exponents may not be directly comparable. Simulations presented in the Supplementary Material (see Fig. S3 for details) show that the transition dynamics is robust to network size.

Figure 5

Transition curves for networks with \(g_{sr}/g_{sd}\) parameters drawn as described in Fig. 4. The synchronization transition is characterized by the order parameter, metastability and the network MLE. (A) and (B) denote parameters drawn from the lower and higher firing rate ranges, respectively. ‘NoIh networks’ refers to networks built with NoIh oscillators (\(g_h = 0\)).

Multi-stable behavior in neural networks

Finally, we measured the ability of our network models to display multi-stable behaviour by characterizing their functional connectivity dynamics (FCD). This analysis is extensively applied to fMRI and M/EEG recordings47,48,50 and is explained in Fig. 6 and Methods. Briefly, the time series is divided into overlapping time windows and, for each window, a matrix of pair-wise synchrony between the nodes is calculated. Then, the synchrony matrices are compared against each other in the FCD matrix, whose axes represent time.

Figure 6

Functional Connectivity Dynamics Analysis. (A) Time course of 50 chaotic nodes (only 5 traces are shown), showing the time windows for synchrony analysis. (B) Functional Connectivity (FC) matrices obtained in 5 sample time windows. (C) All the FCs are compared against each other by Pearson correlation and this constitutes the FCD matrix. The dotted lines and the colour dots at right and top represent the FCs shown in (B).

The FCD matrices in Fig. 7A show distinctive patterns for the unsynchronized and synchronized situations. In the first case (synaptic conductance g = 0), all values outside the diagonal are 0 or close to 0, meaning that the pair-wise synchronization patterns (FC matrices) continuously evolve in time and never repeat during the simulation. On the other hand, when the synaptic conductance is maximal, all the values in the matrix are equal to 1, meaning that the same synchronization pattern is maintained throughout the simulation. At intermediate values of g, however, some FCD matrices show a mixture of values between 0 and 1, with noticeable ‘patches’ that evidence the transient maintenance of some synchronization patterns; we call this a multi-stable regime. The histograms of FCD values (shown in Fig. 7A below each FCD) are also useful for detecting the three situations described. As a rough measure of multi-stability, we took the variance of the FCD values (outside the diagonal) and plotted it against the synaptic conductance, averaging several simulations with different seeds for the random connectivity matrix (Fig. 7B). In the 3.0 to 4.5 Hz firing rate range (of the isolated oscillators), it is clear that chaotic nodes produce networks with higher multi-stability than both non-chaotic and NoIh nodes; moreover, the range of g in which multi-stable behaviour is observed is wider. In the 7.0 to 9.5 Hz firing rate range, the variance of the FCD is not higher for chaotic nodes, but the g range for multi-stability is still wider. This shows that FCDs with signatures of multi-stable behaviour are more easily obtained when the networks are composed of chaotic nodes. FCD analyses were also performed in the \(g_{sd}/g_h\) parameter subspace, with very similar results (Fig. S2).

Figure 7

FCD in networks of chaotic and non-chaotic oscillators. (A) FCD matrices obtained at different values of synaptic conductance g in networks of either chaotic, non-chaotic or NoIh nodes. Below each matrix, a histogram of its values is shown; the diagonal and the neighbouring values were not included. (B) Variance of the FCD values (the same values plotted in the histograms) plotted against g. Average (±SEM) of 20 simulations with different random seeds for the small-world connectivity and parameters.

Discussion

In this work, we investigated how complex node dynamics can affect the synchronization behaviour of a heterogeneous neural network. While several works have focused on network connectivity, few studies have explored the impact of node dynamics on the network. An interesting study by Reyes et al.51 showed that very small networks (2–3 neurons), when composed of irregular or chaotic nodes, can cover a wider frequency range than when the nodes are regular. Moreover, in medium-size networks with a much simpler node dynamics, Hansen et al.47 showed that bi-stable nodes enhance the dynamical repertoire of the network when looking at the FCD.

To focus on the dynamics of the nodes, we systematically controlled the chaotic nature of the oscillators while trying to keep other variables, such as network heterogeneity, constant. At first glance, our results are not as straightforward to interpret as those of the previously mentioned works. Chaotic node oscillations do not always make a visible difference in terms of mean network synchronization and, most notably, chaos at the network level (macroscopic chaos) was always obtained regardless of the nature of the nodes. Other factors, such as network heterogeneity and the transitions between different firing patterns, seem to be more decisive for the steepness of the synchronization transition curves. However, we found that chaotic nodes can promote multi-stable behaviour, where the network dynamically switches between a number of different semi-synchronized, metastable states. Our results suggest that macroscopic chaos can be a predictor of metastability, as the greatest values of this measure (as well as of multi-stability) coincide with the intermediate g values where the maximum MLEs were found. This must be taken with caution, however, as the MLE is not necessarily a quantitative measure of chaos52.

The chaotic nature of the isolated oscillators did not always translate into network chaos in a direct or predictable fashion. More specifically, our networks always showed chaotic behaviour at some g values regardless of the dynamics of the isolated nodes. This is not surprising, as chaotic behaviour arises in networks of very simple units and seems to depend more strongly on other factors such as synaptic weights and network topology25,26,27,53. Moreover, the high dimensionality of the systems alone seems to be enough to ensure that chaos will emerge under some conditions, for example through the quasi-periodic route to chaos in high-dimensional systems54. On the other hand, assessing chaos in a large network or in a high-dimensional system can be a difficult task. It is generally accepted that a unique intrinsic and observable signature of systems exhibiting deterministic chaos is a fluctuating power spectrum with an exponential frequency dependency55; thus, some studies use the broadness of the power spectrum to characterize chaos in networks13,56. Here we used the most popular and direct method, the maximal Lyapunov exponent (MLE), to quantify chaos at the network level in the same way as we did for single cells12,36,37. As usual, we define the state of the network as chaotic if the MLE is greater than zero.

Network heterogeneity has been shown to promote synchrony in neural networks57,58, as well as in other fields of physics59,60,61. As the assumption of non-identical units in the network is the most realistic setting for a biophysically-inspired system, and in order to focus on the effect of node dynamics, we intended to keep a constant degree of heterogeneity across the different simulated networks. In a first approach, maintaining a constant distribution of parameters yielded oscillatory nodes that were functionally more heterogeneous in the case of chaotic nodes than in the non-chaotic case. We then shifted to an approach consisting of obtaining sets (or ‘populations’) of chaotic, non-chaotic and NoIh nodes that shared the same distribution (or heterogeneity) of firing rates. Still, this approach may be improved, because the firing rate by itself may not be the most relevant measure to take into account for the promotion of synchrony in this system. The role of heterogeneity in the promotion of FCD, and the search for a more functionally relevant measure of it, can be the subject of future work.

Macroscopic chaos can arise from the network’s global properties, the propensity of isolated neurons to oscillate, the nature of synaptic dynamics, or a mixture of them, as shown in earlier works25,26,27,28,29. In this paper, the focus is different. First, finding both asynchronous and synchronous chaos in the same network, only by changing the synaptic strength, is new. Second, the route from asynchronous to synchronous chaos differs slightly between networks of chaotic and non-chaotic oscillators, a difference that has not been reported in previous studies. Specifically, networks switch directly from asynchronous to synchronous chaos when composed of chaotic neurons, while networks of non-chaotic neurons usually go through four network states: asynchronous activity, asynchronous chaos, asynchronous activity again and, lastly, synchronous chaos.

While discussing chaos in neural systems we have used completely deterministic dynamics. Random variables were used to define the network connectivity and node parameters, but the time evolution of the networks and nodes was calculated in the absence of noise. However, neural systems are subject to a number of noise sources, the most important being the stochastic opening and closing of ion channels and synaptic variability62. How the synchronization transitions and meta/multi-stable behaviour emerge in a noisy system remains to be studied, and it will be interesting to assess how much of the dynamics introduced by chaos can prevail in the presence of noise.

In summary, we have shown that chaotic neural oscillators can make a significant contribution to relevant network behaviours, such as state transitions and multi-stability. Our results open a new avenue in the study of the dynamical mechanisms and computational significance of chaos in neuronal networks.