Dynamical properties of a small heterogeneous chain network of neurons in discrete time.

We propose a novel nonlinear bidirectionally coupled heterogeneous chain network whose dynamics evolve in discrete time. The backbone of the model is a pair of popular map-based neuron models, the Chialvo and the Rulkov maps. The model is assumed to approximate the intricate dynamical properties of neurons in the widely complex nervous system. The model is first analyzed via various nonlinear techniques: fixed point analysis, phase portraits, the Jacobian matrix, and bifurcation diagrams. We observe the coexistence of chaotic and period-4 attractors. Various codimension-1 and -2 bifurcation patterns, for example saddle-node, period-doubling, Neimark-Sacker, double Neimark-Sacker, flip-Neimark-Sacker, fold-Neimark-Sacker, and 1:1 and 1:2 resonance, are also explored. Furthermore, the study employs two synchronization measures to quantify how the oscillators in the network behave in tandem with each other over time.


Introduction
The fundamental units of the nervous system are special cells called neurons, and neurodynamics is the study of the dynamical properties of these cells. It is an evolving field of research: neurons can adjust their signaling properties, allowing for adaptation to spatial and temporal patterns of neural signals [34]. This remarkable adaptability enables neuron networks to autonomously organize, calibrate, and retain information based on experience. Despite the extensive research focused on unraveling the intricate structure of these networks, delving into simple networks remains imperative. It offers foundational insights into comprehending brain function, modeling complex neural systems, deciphering neurological disorders, fostering technological innovation, and facilitating educational advancements in neuroscience. Some of the simple networks studied include the ring network [56,79,31,38], star network [8,88,61,32], ring-star network [48,47,19,49], lattice network [82,57,62], and multiplex network [35,85]. In ring structures, each neuron within the network is connected solely to its two nearest neighbors. The hippocampus, cerebellum, and neocortex are regions of the brain known for their complex processing capabilities and involvement in various cognitive functions. The presence of ring architectures has been identified in these brain regions, as well as in domains beyond neuroscience, such as chemistry and electrical engineering [30]. The star configuration comprises a single central node, often referred to as the hub, to which all other nodes within the network are connected. Notably, in this configuration, nodes are linked exclusively to the central hub and are not interconnected with one another.
There is an enormous need to study complex systems in terms of minimal mathematical models. The nervous system, being one of the most important complex systems studied, requires mathematical modelers to treat it in terms of a reduced model of coupled oscillators forming a small network that can serve as a unit of a macroscopic ensemble of neurons and of the nervous system as a whole. Most studies focus on the dynamical analysis of anything from a single neuron to an ensemble of at least a hundred units. Note that for a bigger ensemble of neurons, analytical study of the system's dynamics becomes quite intractable. Thus studying a "small network" is indispensable: first, because it acts as a bridge between a single oscillator and a network of at least a hundred coupled oscillators, and second, because the mathematical analysis of the smallest possible oscillator network is still tractable. Small networks are reduced models consisting of three to ten coupled oscillators giving rise to collective dynamical properties. They can be thought of as network units that repeat themselves to form a more complex topological structure. Our focus is on a heterogeneous system of oscillators representing the nervous system.
Heterogeneous neuron networks have recently been modeled and studied in various forms. One important work is by Shen et al. [72], where the authors coupled a FitzHugh-Nagumo neuron with a Hindmarsh-Rose neuron and studied their dynamical properties. Njitacke et al. [53] coupled a two-dimensional Hindmarsh-Rose neuron to a three-dimensional version through a multistable memristive connection. The same authors also did a similar study with Hindmarsh-Rose neurons but with a gap junction [54] to induce heterogeneity in the neuron system. A variation of energy influx is also able to incorporate heterogeneity in the neuron network over time, as shown by Yang et al. [89]. Bradley et al. [5] studied a weakly coupled network of Wang-Buzsáki and Hodgkin-Huxley neurons. Xie et al. [86] showed that even continuous energy accumulation in neurons can induce heterogeneity via shape deformation under external stimulation. Thus heterogeneity in neuron networks is an invigorating phenomenon and deserves the attention of mathematical modelers and neuroscientists. We recently studied a heterogeneous ring-star network of Chialvo neurons where the heterogeneities were realized by introducing additive noise to the central-peripheral and the peripheral-peripheral nodes, with at least a hundred nodes in the network; see Ghosh et al. 
[19]. However, because of the complexity and the large number of nodes in that system, little room was left for analytical calculations. To address this issue, this paper studies the smallest possible chain network of oscillators in which heterogeneity is incorporated. In this case, heterogeneity enters not only through the coupling strengths but also through the type of oscillators. This system is of course more complicated than a single oscillator model, but it is nonetheless analytically tractable. Thus, we perform algebraic calculations supported by numerics to make the study more robust. The goal is to delve into the intricate dynamical properties of a small network modeling the dynamics of the nervous system. By investigating the behavior of even the simplest network configurations, researchers can gain insights into phenomena such as synchronization, pattern formation, and information transfer, which are essential for understanding more complex neural networks and neuron functions as a whole. Furthermore, researchers could potentially predict and control the behavior of neurons, with implications for neural engineering and the development of neural prosthetics. Overall, simulating small networks of coupled neurons offers a powerful approach to advancing our understanding of neural function and dysfunction in both engineering and biological research contexts. Heterogeneous neuron networks also hold considerable potential for application in a wide array of fields, such as robust learning (see Perez-Nieves et al. [58]), reliable neuronal systems (see Lengler et al. [36]), and image encryption procedures (see Yunliang et al. [90]).
The motivation for taking a small heterogeneous chain network made of three oscillators whose dynamics are governed by neuron maps is rooted in the fact that there are three types of neurons based on their functionalities: motor neurons, sensory neurons, and interneurons [16]. Motor neurons act as transmission media for synaptic impulses from the central nervous system to the organs and tissues, whereas sensory neurons act as transmission media from the organs and tissues to the central nervous system. The interneurons, however, act as a bridge between the sensory and the motor neurons. Both motor and sensory neurons have similar functionalities and thus could correspond to the two end oscillators in our model, represented by the Chialvo map. An interneuron, in turn, could be modeled by the Rulkov map, which acts as a bridge between the two Chialvo neurons, representing a unit of the much more complex nervous system. The interlinks between the three types of neurons further correspond to the simplest type of couplings shown in our model. A chain network made of three neurons can be regarded as a special case of a star network. Moreover, studying the dynamics of small networks of interconnected neurons can provide insights into the functioning of larger brain circuits. A topologically similar tri-oscillator chain network model, but in continuous time, has been studied by Njitacke et al. [52]. In a very recent study led by Cao et al. [7], the authors considered a Chialvo neuron coupled with a Rulkov neuron through a memristance and studied the corresponding dynamical properties. Although the base models of Cao et al. 
are similar to those of this work, the major difference lies in the topology of the network. First, we do not consider any memristance in this system; second, our system has two Chialvo neurons at the ends with a Rulkov neuron at the center, making it a good candidate unit for a bigger ensemble in a star-network model. The motivation behind this model was to imitate the real-world functionalities of the three types of neurons mentioned above.
Once we have our model, it is imperative to dive right into unfolding its dynamical properties. The model is complex but to some extent analytically tractable. The first step is showcasing a collection of typical phase portraits of the system, which exhibits chaotic attractors as expected. We also perform fixed point analysis, establish the Jacobian matrix of the model at the fixed point, explore the properties of the eigenvalues, study the concept of noninvertibility, and finally traverse the field of bifurcation analysis using sophisticated tools like MatContM [41].
As mentioned before, the advantage of a small network is that it not only serves its purpose of being studied as a dynamical unit but also provides a model with which we can traverse a batch of spatiotemporal patterns arising from the collective behavior of the oscillators that make up the model. Concerning this, we study the synchronization behavior of the neurons involved in this network model and try to unfold what kind of collective behavior it portrays in general. Synchronization is one of the sophisticated phenomena that drives neural communication; it refers to how two neurons arrange themselves to form a functionally specialized ensemble. To study synchronization, we employ two measures: the cross-correlation coefficient [80,70] and the Kuramoto order parameter [33,77,3]. The motivation behind taking two measures is to corroborate the respective results with each other. The first is computed from the displacement between two oscillators in the network, and the second from the phase of each oscillator in the network. The cross-correlation coefficient has been widely used to quantify synchronization in various network topologies [80,70,74,73,67,81,55,20]. Similarly, the Kuramoto order parameter has had a prolific application in unfolding the same [60,1,68]. Furthermore, we require a separate measure to quantify the abundant complexity of a neuron network. This can be realized in terms of information production, i.e., the entropy of the system. Shannon introduced the concept of entropy in the context of information theory [71], and it has since been successfully employed in the field of neuroscience [11,78,22,91,83,28]. The entropy measure that we follow in this paper is by Richman et al. 
[63], called the sample entropy. The authors devised sample entropy to quantify the complexity of physiological time series data, and it thus serves as a perfect candidate for quantifying the complexity of neuron-based dynamical systems in this paper. It is by far the most popular entropy measure besides the approximate entropy put forward by Pincus [59].
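Since sample entropy drives the complexity analysis later in the paper, a compact reference implementation may help. This is a standard sketch of the Richman-Moorman algorithm; the template length m = 2 and tolerance r = 0.2·std are common defaults and an assumption here, since the paper's exact choices are not stated in this excerpt.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series.

    Counts pairs of templates of length m (B) and m+1 (A) that match
    within Chebyshev distance r, excluding self-matches, and returns
    -ln(A/B). Lower values mean a more regular (predictable) series.
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    N = len(x)

    def count_matches(k):
        # all overlapping templates of length k (same number for k=m, m+1)
        templ = np.array([x[i:i + k] for i in range(N - m)])
        count = 0
        for i in range(len(templ)):
            # Chebyshev distance from template i to every later template
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A smooth periodic signal is highly predictable (low SampEn);
# white noise is not (markedly higher SampEn).
rng = np.random.default_rng(1)
periodic = np.sin(np.linspace(0, 40 * np.pi, 2000))
noise = rng.standard_normal(2000)
print(sample_entropy(periodic), sample_entropy(noise))
```

The Chebyshev (max-norm) template distance and the exclusion of self-matches are what distinguish sample entropy from the earlier approximate entropy.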
The goals of this paper are cataloged herewith:
1. Introduce a novel heterogeneous small network of neurons coupled bidirectionally through linear coupling strengths, where each neuron is an oscillator whose dynamical properties are governed by popular discrete-time neuron maps.
2. Report the dynamical properties of this network unit through phase portraits, fixed point analysis, and the noninvertibility criterion.
3. Analytically and numerically study various bifurcation patterns (codimension-1 and -2) of the network, especially using MatContM.
4. Investigate the synchronization behavior of the small network using two quantitative measures, the cross-correlation coefficient and the Kuramoto order parameter.
5. Statistically explore how complex the small network is in terms of information processing within the network via a time series analysis measure called the sample entropy.
6. Develop an overall grasp of a heterogeneous network that can eventually act as the unit of a large ensemble of neurons connected in a complicated topology.
We organize the paper as follows: In §2, we review the Chialvo and the Rulkov neuron maps, which act as the building blocks of our heterogeneous tri-oscillator chain network. In §3 we put forward the novel heterogeneous network model constituting a nonlinear system of six coupled equations. We comment on the topology of the network and give an overview of what a typical phase portrait of this system looks like. In §4, we analyze the fixed points of this system, build the Jacobian of the system, and explore its eigenvalues at the fixed points, giving a bird's-eye view of the dynamical properties of this network. In §5 we study where this discrete-time system based on neuron maps is noninvertible. Next, we delve into the innate bifurcation properties of the network in §6 and §7. In §8 we set up the synchronization measures for our network in terms of the cross-correlation coefficient and the Kuramoto order parameter, and finally in §9 we perform a time series analysis of our model in terms of sample entropy to quantify the model's complexity. Concluding remarks and future directions are provided in §10. Note that all numerical simulations have been performed in Python 3.9.7 with extensive usage of numpy, pandas, and matplotlib, except for the codimension-1 and -2 bifurcation patterns, where MATLAB and MatContM have been employed.

Two-dimensional neuron maps
In this section, we review the dynamical structure of the maps that act as the building blocks of the tri-oscillator chain. Each of these oscillators is a two-dimensional iterated map used for modeling the dynamics of a neuron, governed by either the Chialvo or the Rulkov map. For a detailed in-depth review of these topics please refer to Ibarz et al. [27]. The Chialvo map [9] is given by

x(n+1) = x(n)^2 exp(y(n) − x(n)) + k0, (2.1)
y(n+1) = a y(n) − b x(n) + c, (2.2)

where the dynamical variables x and y represent the activation and the recovery variables of the action potential respectively. The Chialvo map has four control parameters a, b, c, and k0: the first two represent the time constant of recovery and the activation dependence of recovery, and are kept less than 1; the latter two represent the offset and the time-dependent additive perturbation respectively. This map showcases excitable dynamics in which y produces not slow but fast recovery. As mentioned in [27], the model exhibits an ensemble of important dynamics, for example subthreshold oscillations, bistability, and chaotic orbits, among many others. Thus it serves as a good candidate for map-based neuron models and has attracted a wide array of recent works from the research community, see [47,87,17,64,19]. The Rulkov map, specifically the chaotic family of the map [65], is the version considered in this paper. This map is given by the following pair of equations

u(n+1) = α / (1 + u(n)^2) + v(n), (2.3)
v(n+1) = v(n) − µ (u(n) − γ), (2.4)

where u is again the activation variable, whereas v represents the slow variable of the model because 0 < µ << 1. The parameter α induces nonlinearity in the model, whereas γ is a DC modulator. There are two other versions of the map corresponding to non-chaotic behavior: the non-chaotic family [66] and the supercritical family [76]. For a detailed review of these please refer to Ibarz et al. 
[27]. The chaotic family does not involve a piecewise definition of the dynamics in u and v, whereas the non-chaotic family does. From (2.3)-(2.4) it can be seen that the map is unimodal. There exists a pair of fixed points, one stable and the other unstable, which disappear through a saddle-node bifurcation on variation of the parameters. Also, the spikes exhibit chaotic orbits, which can be made evident from the phase-plane plot of (2.3)-(2.4); this is left as an exercise for the reader. Like the Chialvo map, the Rulkov model has attracted quite a lot of attention from the research community recently, see [84,39,37,4,2].
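As a quick illustration, both maps can be iterated side by side. This is a minimal sketch; for the uncoupled Chialvo node it uses the classic chaotic-regime parameters (a = 0.89, b = 0.6, c = 0.28, k0 = 0.026) rather than the network values fixed later in the paper, which is an assumption made purely so that the isolated map stays on its well-known bounded attractor.

```python
import numpy as np

# Chialvo map (2.1)-(2.2) and chaotic Rulkov map (2.3)-(2.4)
def chialvo(x, y, a=0.89, b=0.6, c=0.28, k0=0.026):
    return x**2 * np.exp(y - x) + k0, a * y - b * x + c

def rulkov(u, v, alpha=5.0, mu=0.0001, gamma=-0.5):
    return alpha / (1.0 + u**2) + v, v - mu * (u - gamma)

n_iter = 5000
xs, ys = np.empty(n_iter), np.empty(n_iter)
us, vs = np.empty(n_iter), np.empty(n_iter)
x, y, u, v = 0.2, 0.25, 0.2, 0.25
for n in range(n_iter):
    x, y = chialvo(x, y)
    u, v = rulkov(u, v)
    xs[n], ys[n] = x, y
    us[n], vs[n] = u, v

# The slow Rulkov variable v drifts on an O(mu) time scale,
# while u spikes chaotically; the Chialvo orbit stays bounded.
print(xs[-5:], us[-5:])
```

Plotting (xs, ys) and (us, vs) reproduces the kind of phase-plane pictures the reader is invited to draw for (2.3)-(2.4).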
In the next section, we put forward a novel map-based small network model built from the above chaotic maps. This generates a linearly coupled six-dimensional dynamical system in discrete time, which can be regarded as a unit of a larger ensemble of neurons modeling the nervous system.

Network Model
We have considered a small heterogeneous network of a tri-oscillator chain where the dynamics of the end nodes are represented by the Chialvo map and that of the central node by the Rulkov map. Thus, the system is a Chialvo-Rulkov-Chialvo chain (see Fig. 1), giving rise to a nonlinear system of six coupled equations, (3.1)-(3.6). Let the dynamical variable set for (3.1)-(3.6) be given by X = (x1, y1, x2, y2, x3, y3). The indices 1 and 3 represent the oscillators following the Chialvo map and the index 2 represents the oscillator following the Rulkov map. The linear coupling strengths between the oscillators are represented by σij. Here σij ≠ σji, that is, the bidirectional coupling between two oscillators is asymmetric in general. Note that the system is asymmetric because it changes its form under the transformation X → −X. The dynamics of the maps are realized on the oscillators; the coupling links, however, are static. Before delving into the deeper dynamics of the network, it is worth looking into the topological properties in simple mathematical terms. One such tool is the adjacency matrix. The adjacency matrix of the small network given in Fig. 1 is

A = [[0, σ12, 0], [σ21, 0, σ23], [0, σ32, 0]],

with the spectrum of the graph given as

Λ = sort({−√(σ12 σ21 + σ23 σ32), 0, +√(σ12 σ21 + σ23 σ32)}),

where the function sort sorts an array in ascending order; Λ is thus the set of eigenvalues of the adjacency matrix arranged in ascending order. The first and the third oscillators (following the dynamics set by the Chialvo map) have degrees σ12 + σ21 and σ23 + σ32 respectively, whereas the degree of the second (middle) oscillator is σ12 + σ21 + σ23 + σ32. The spectrum of the network is invariant whenever the combination σ12 σ21 + σ23 σ32 is kept constant. Typical phase portraits of the system (3.1)-(3.6) showing chaotic attractors under the varying coupling strength σ12 are illustrated in Fig. 2. 
The local parameters of the oscillators are fixed as a = 0.6, b = 0.6, c = 0.89, k0 = −1, α = 5, µ = 0.0001, and γ = −0.5. The other coupling strengths are set as σ21 = 0.2, σ23 = 0.085, and σ32 = −0.08. The simulation is run for 80000 iterates, of which the last 60000 are shown to make sure no transients creep in. All the dynamical variables are randomly initialized from the uniform distribution on [0.2, 0.3]. We maintain these initial conditions for all our numerical simulations throughout the paper unless specified otherwise.
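To make the simulation protocol concrete, here is a minimal Python sketch. The explicit coupling terms of (3.1)-(3.6) are not reproduced in this excerpt, so the sketch ASSUMES each node receives its neighbors' activation variables scaled by the corresponding σij; only the general protocol (local parameters, random initial conditions on [0.2, 0.3], discarded transients) follows the paper, and a shorter run is used here for speed.

```python
import numpy as np

rng = np.random.default_rng(0)

# paper's local parameters; coupling form below is a hypothetical sketch
a, b, c, k0 = 0.6, 0.6, 0.89, -1.0
alpha, mu, gamma = 5.0, 0.0001, -0.5
s12, s21, s23, s32 = 0.05, 0.2, 0.085, -0.08   # s12 is the swept parameter

def step(X):
    x1, y1, x2, y2, x3, y3 = X
    return np.array([
        x1**2 * np.exp(y1 - x1) + k0 + s12 * x2,          # Chialvo node 1
        a * y1 - b * x1 + c,
        alpha / (1 + x2**2) + y2 + s21 * x1 + s23 * x3,   # Rulkov node 2
        y2 - mu * (x2 - gamma),
        x3**2 * np.exp(y3 - x3) + k0 + s32 * x2,          # Chialvo node 3
        a * y3 - b * x3 + c,
    ])

n_total, n_drop = 20000, 5000
X = rng.uniform(0.2, 0.3, size=6)      # random initial conditions
traj = np.empty((n_total, 6))
with np.errstate(over="ignore", invalid="ignore"):
    for n in range(n_total):
        X = step(X)
        traj[n] = X
traj = traj[n_drop:]                   # discard transients
print(np.isfinite(traj).all(), traj.shape)
```

Plotting traj[:, 0] against traj[:, 1] (and likewise for the other nodes) gives phase portraits in the style of Fig. 2, under the stated coupling assumption.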

Fixed point analysis of the network
Analyzing the fixed points of a system (continuous or discrete time) is the first step toward unfolding its complex dynamics. The fixed point of a map-based model f(x) is the point x* satisfying f(x*) = x*. Let the fixed point of the system (3.1)-(3.6) be given by X* = (x1*, y1*, x2*, y2*, x3*, y3*). To compute X*, the fixed-point conditions obtained from (3.1)-(3.6) need to be solved simultaneously. A step toward this is to eliminate the yi* terms so that we end up with two transcendental equations involving x1* and x3*. For x2*, we can directly read off its value from (4.5), which we substitute in (4.6), leaving x3* to be solved for. Thus, we substitute the above two equations in (4.7) to get the transcendental equation (4.10) in x3*; solving it numerically also gives us the corresponding value of y3*. Next, we substitute (4.8) in (4.2), leaving x1* to be solved for. Again, similar substitutions in (4.3) give us (4.12), whose numerical solution gives the corresponding value of y1*. Finally, we substitute the values of x1*, x2*, and x3* in (4.4). We compute x1* and x3* numerically using any standard computational solver, for example the fsolve() function from the scipy.optimize module. Once this is achieved, we substitute the values of the xi* into the expressions for the yi* to eventually evaluate X*. Around X*, the dynamics of the perturbation vector δX = X − X*, with X = (x1, y1, x2, y2, x3, y3), is governed by δX(n+1) = J δX(n), where J is the Jacobian of the system evaluated at X*. The linear stability analysis of the fixed point depends on the absolute values of the eigenvalues of J. The eigenvalues λi, i = 1, . . 
., 6 can be evaluated from J at the fixed point X* by solving the sixth-order polynomial equation P6(λ) = 0 of the characteristic polynomial (4.16). Again, solving this can be achieved with any standard computational software. We observe a clear pattern in the expressions for the coefficients a0-a6 of (4.16); they are written in terms of auxiliary quantities Ji and Li, where the terms Di are in turn defined in terms of the xj*. To check the stability of X*, we monitor the moduli of the eigenvalues: the fixed point is a k-saddle, where k is the number of eigenvalues whose absolute value is > 1.
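The elimination strategy above can be illustrated on a single uncoupled Chialvo node, where everything is explicit: the recovery equation y* = a y* − b x* + c gives y* = (c − b x*)/(1 − a), which reduces the fixed-point problem to one transcendental equation in x* that fsolve() handles directly. The coupled system (3.1)-(3.6) is treated the same way, just with more substitutions.

```python
import numpy as np
from scipy.optimize import fsolve

a, b, c, k0 = 0.6, 0.6, 0.89, -1.0

def ystar(x):
    # recovery variable eliminated: y* = (c - b x*) / (1 - a)
    return (c - b * x) / (1.0 - a)

def F(x):
    # remaining fixed-point condition x* = x*^2 exp(y* - x*) + k0
    return x**2 * np.exp(ystar(x) - x) + k0 - x

x_star = fsolve(F, x0=-0.3)[0]
y_star = ystar(x_star)

# Verify: one application of the map leaves (x*, y*) unchanged.
x_next = x_star**2 * np.exp(y_star - x_star) + k0
y_next = a * y_star - b * x_star + c
print(np.allclose([x_next, y_next], [x_star, y_star]))  # True
```

The initial guess x0 = −0.3 brackets the (negative) root for these parameter values; other parameter sets may need a different starting point.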
For example, in Table 1 we list the set of six eigenvalues λi on the parameter grid (σ12, σ21), keeping fixed σ23 = σ32 = 2 and a = 0.6, b = 0.6, c = 0.89, k0 = −1, α = 5, µ = 0.01, and γ = −0.5. The table lists fixed points of k-saddle type for k = 2, . . ., 5 and an unstable fixed point. Note that Fig. 4 is a two-parameter bifurcation diagram where we color code the grid according to the type of fixed point generated by the system at that specific parameter combination. From Table 1, we can understand the dimensions of the stable and unstable manifolds around the saddle fixed point. Even though this fixed point exists in a six-dimensional space, it is instructive to consider simpler one-dimensional manifolds and spiral motions to grasp the overall dynamics near it. For instance, when (σ12, σ21) = (0.3, 0.9), the fixed point has a one-dimensional unstable manifold spiraling outward. This spiral motion occurs within a two-dimensional plane inside the six-dimensional space. Additionally, there is a one-dimensional stable manifold perpendicular to this plane, as shown in Fig. 3 (a). Similarly, in Fig. 3 (b), there is a one-dimensional stable manifold spirally contracting towards the fixed point, with a perpendicular one-dimensional unstable manifold.
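The k-saddle classification can be sketched numerically on the single Chialvo node from the previous example (the network case is identical in spirit, with the 6×6 Jacobian J): count the eigenvalues with modulus greater than 1; k = 0 means stable, 0 < k < dim a k-saddle, k = dim a repeller.

```python
import numpy as np

a, b, c, k0 = 0.6, 0.6, 0.89, -1.0

def F(x):
    # fixed-point condition with y* eliminated, as before
    return x**2 * np.exp((c - b*x)/(1 - a) - x) + k0 - x

# bisection for the fixed point (F changes sign on [-0.3, -0.2])
lo, hi = -0.3, -0.2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid
xs = 0.5 * (lo + hi)
ys = (c - b*xs) / (1 - a)

# analytic Jacobian of the Chialvo map at (x*, y*)
e = np.exp(ys - xs)
J = np.array([[(2*xs - xs**2) * e, xs**2 * e],
              [-b, a]])
lam = np.linalg.eigvals(J)
k = int(np.sum(np.abs(lam) > 1.0))
print(k)   # one eigenvalue outside the unit circle: a 1-saddle here
```

For the 6-D network one would assemble J from the partial derivatives of (3.1)-(3.6) and apply exactly the same eigvals/modulus count.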
Note that keeping track of the eigenvalues of the characteristic equation P6(λ) gives us analytical intuition about the different bifurcation patterns that might arise in the dynamics of (3.1)-(3.6); see the corresponding figure, where the points are colored according to the index of the eigenvalues. Some important observations are noted here. We see that λ1 is always real, whereas λi for i = 2, . . ., 5 can be either real or complex-valued. Finally, λ6 is always real and ≈ 1. As soon as the absolute value of one of the eigenvalues crosses 1, the system gives rise to either a saddle-node (fold) bifurcation or a period-doubling (flip) bifurcation. When a complex eigenvalue has modulus 1, the system gives rise to a Neimark-Sacker bifurcation. More on these is detailed in §7.

Noninvertibility criterion
Establishing the noninvertibility criterion of a map provides useful intricate details about its core dynamics. Noninvertibility in map-based systems has been extensively studied by Mira et al. [43,42]. This property tells us about the stretching and folding behaviors of the concerned map. Depending on whether the map is one-, two-, or three-dimensional, there exists a critical line, curve, or surface that separates the phase plane into domains consisting of distinct numbers of preimages. The noninvertibility of the Chialvo neuron model with and without the application of memristive electromagnetic flux has been touched upon by Muni et al. [47].
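As a tractable warm-up, the determinant condition can be worked out in closed form for a single uncoupled Chialvo node: det J = e^(y−x) [a(2x − x^2) + b x^2] = e^(y−x) · x · (2a + (b − a)x), so the critical set is independent of y and consists of the lines x = 0 and, for a ≠ b, x = 2a/(a − b). The parameter values below (with a ≠ b) are chosen only so that both lines exist; this is an illustration, not the paper's 6-D computation.

```python
import numpy as np

a, b = 0.6, 0.5   # a != b so both critical lines are present

def detJ(x, y):
    # Jacobian determinant of one Chialvo node; k0 and c drop out
    e = np.exp(y - x)
    return (2*x - x**2) * e * a + b * x**2 * e

# det J vanishes on x = 0 and on x = 2a/(a - b), for every y
xc = 2*a / (a - b)
for y in (-1.0, 0.0, 2.0):
    print(abs(detJ(0.0, y)) < 1e-12, abs(detJ(xc, y)) < 1e-9)
```

For the network, the same idea applies to the 6×6 Jacobian: the critical set C−1 is the zero set of its determinant D.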
Note that in that work the system was either two- or three-dimensional. In this paper, the system is six-dimensional, and hence the analysis becomes exorbitantly complex. Thus, we restrict ourselves to the critical curve C−1 where the preimages merge, which is where the determinant D of the Jacobian matrix vanishes, i.e., D = 0.

Bifurcation structure of dynamical variables and coexistence

In this section, we report the bifurcation structure of the action potentials with respect to a varying coupling strength, giving us an intuition on the emergence of chaotic and periodic attractors. This is achieved via both forward continuation and backward continuation. The system is simulated for 40000 iterations, of which the last 4500 iterates are plotted for each specific value of the concerned parameter, see Fig. 7. The continuation is done within the parameter range σ12 ∈ [0.09, 0.1], with the forward continuation points marked in red and the backward continuation points in black in the same plot environment. We have set the rest of the coupling strengths as σ21 = 0.1, σ23 = 0.05, and σ32 = 0.06. The local parameters for the oscillators are set as a = 0.6, b = 0.6, c = 0.89, k0 = −1, α = 5, µ = 0.0001, and γ = −0.5. Plotting the points in the same environment allows us to report coexistence of the periodic and the chaotic attractors. At σ12 = 0.092, both forward and backward continuation indicate the existence of a chaotic attractor, whereas at σ12 = 0.094 they indicate the existence of a period-4 attractor. However, at σ12 = 0.096, the forward continuation indicates a chaotic attractor while the backward continuation indicates a period-4 solution. The forward and backward continuation points not overlapping generates a hysteresis loop, indicating the coexistence of the chaotic and periodic solutions. These observations are also supported by the phase portraits in Fig. 8, drawn at the σ12 values marked by the blue broken lines in Fig. 7, where the last 60000 of the 80000 iterates are plotted to ensure the transients are discarded.
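The forward/backward continuation protocol can be sketched generically: sweep the bifurcation parameter across its range and, at each value, keep iterating from the final state reached at the previous value, so the simulation "follows" an attractor branch. The sketch below runs on a single Chialvo node with k0 as the swept parameter, purely to illustrate the procedure; the paper sweeps σ12 in the full 6-D system.

```python
import numpy as np

a, b, c = 0.89, 0.6, 0.28   # classic Chialvo regime, for illustration

def chialvo(x, y, k0):
    return x**2 * np.exp(y - x) + k0, a*y - b*x + c

def continuation(k0_values, n_iter=2000, n_keep=50):
    branch = []
    x, y = 0.1, 0.1
    for k0 in k0_values:           # state carries over between values
        pts = []
        for n in range(n_iter):
            x, y = chialvo(x, y, k0)
            if n >= n_iter - n_keep:
                pts.append(x)      # keep only post-transient iterates
        branch.append(pts)
    return np.array(branch)

k0s = np.linspace(0.01, 0.04, 61)
fwd = continuation(k0s)            # forward sweep
bwd = continuation(k0s[::-1])      # backward sweep
# Where fwd and bwd disagree at the same parameter value, the branches
# belong to coexisting attractors: a hysteresis loop.
print(fwd.shape, bwd.shape)
```

Plotting fwd (red) and bwd (black) against the parameter in the same axes reproduces the style of Fig. 7.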

Codimension-1 and -2 bifurcation patterns
To investigate what type of complex dynamics our network is capable of, it must be studied with sophisticated dynamical tools like codimension-1 and -2 bifurcation analysis. These give the reader an overall picture of the influence of the relevant parameters on the behavior of the network. We have considered the coupling strengths σij as the main bifurcation parameters and have kept the local parameters of the oscillators fixed at a = 0.6, b = 0.6, c = 0.89, k0 = −1, α = 5, µ = 0.0001, and γ = −0.5. At first, we put forward three theorems corresponding to the codimension-1 bifurcations that arise in our map when one of the eigenvalues of J at the fixed point X* has modulus equal to 1. These are analytically easy to handle and are also supported by numerical results in the later half of this section. Given that the algebraic calculations for codimension-2 bifurcations are difficult and time-consuming, we only provide numerical results for those, employing MatContM to report the numerical bifurcation patterns. Numerical bifurcation analysis using MatContM was also extensively employed to study the three-dimensional memristive Chialvo neuron map by Muni et al. [47]. The summary of the codimension-1 and codimension-2 bifurcation types observed is presented in Table 2. We first establish the codimension-1 bifurcation patterns (LP, PD, and NS) through Theorems 7.1, 7.2, and 7.3.

Proof. A saddle-node bifurcation occurs when the Jacobian matrix J has an eigenvalue 1 at X*. Using the characteristic equation (4.16), we set P6(1) = 0. This gives us a0 + a1 + a2 + a3 + a4 + a5 + a6 = 0, which after some simple algebra reduces back to (7.1).
Proof. A period-doubling bifurcation occurs when the Jacobian matrix J has an eigenvalue −1 at X*. Using the characteristic equation (4.16), we set P6(−1) = 0. This gives us a0 − a1 + a2 − a3 + a4 − a5 + a6 = 0, which after some simple algebra reduces back to (7.2).
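The logic behind these two proofs, namely that an eigenvalue at +1 (respectively −1) is equivalent to the plain (respectively alternating) sum of the characteristic-polynomial coefficients vanishing, can be checked numerically on matrices with the eigenvalue planted by construction. The block-diagonal matrices below are arbitrary test cases, not the network Jacobian.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))

def char_poly_at(A, lam):
    # evaluate the characteristic polynomial of A at lam
    return np.polyval(np.poly(A), lam)

# embed eigenvalues +1 and -1 explicitly via block-diagonal matrices
A_fold = np.zeros((6, 6)); A_fold[0, 0] = 1.0;  A_fold[1:, 1:] = B
A_flip = np.zeros((6, 6)); A_flip[0, 0] = -1.0; A_flip[1:, 1:] = B

# P6(1) = sum of coefficients; P6(-1) = alternating sum of coefficients
print(abs(char_poly_at(A_fold, 1.0)) < 1e-8,
      abs(char_poly_at(A_flip, -1.0)) < 1e-8)
```

The same check applied to the network Jacobian J at candidate parameter values locates the fold and flip conditions (7.1) and (7.2) numerically.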

Synchronisation measures
After analyzing our model with dynamical tools, it is time to look into the collective behavior that arises from the dynamics of each of the oscillators involved in the network. This is usually achieved by studying the synchronization behavior of the whole network. To do so, we employ two commonly used quantitative metrics from the literature: the cross-correlation coefficient and the Kuramoto order parameter.

Cross-correlation coefficient
Our model has three oscillators, with the second oscillator connected to both the first and the third oscillator by a pair of links. This calls for formulating the cross-correlation coefficient between the first and the second oscillators and between the second and the third oscillators before taking the average, which indicates the global synchronization pattern of the network as a whole. Let us first define the cross-correlation coefficient between the first and the second oscillator as

Γ12 = ⟨x̃1(n) x̃2(n)⟩ / √(⟨x̃1²(n)⟩ ⟨x̃2²(n)⟩). (8.1)

Similarly, between the second and the third oscillator, it is given by

Γ23 = ⟨x̃2(n) x̃3(n)⟩ / √(⟨x̃2²(n)⟩ ⟨x̃3²(n)⟩). (8.2)

Thus, the averaged cross-correlation coefficient is given by Γ = (Γ12 + Γ23)/2. The averages in (8.1) and (8.2) are calculated after the transient dynamics is discarded. The symbol x̃i(n) refers to the deviation from the mean ⟨xi(n)⟩, i.e., x̃i(n) = xi(n) − ⟨xi(n)⟩, where i = 1, 2, 3, and ⟨·⟩ denotes the average of the action potential over time. In our simulations for the synchronization measures we take 80000 iterates and discard the first 40000 to ensure no transients creep in. When Γ = 1, the network has reached complete in-phase synchrony, whereas Γ = −1 represents complete anti-phase synchrony. Any value Γ ∈ (−1, 1) denotes partial synchronization to asynchronization.
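The averaged cross-correlation coefficient of (8.1)-(8.2) is straightforward to compute from the stored action potentials; a minimal sketch with two sanity checks:

```python
import numpy as np

def gamma(xi, xj):
    # Gamma_ij = <x~_i x~_j> / sqrt(<x~_i^2><x~_j^2>),
    # with x~ the deviation from the time mean
    xi = xi - xi.mean()
    xj = xj - xj.mean()
    return np.mean(xi * xj) / np.sqrt(np.mean(xi**2) * np.mean(xj**2))

def network_gamma(x1, x2, x3, n_drop=0):
    # average of the two link-wise coefficients, transients discarded
    x1, x2, x3 = x1[n_drop:], x2[n_drop:], x3[n_drop:]
    return 0.5 * (gamma(x1, x2) + gamma(x2, x3))

# sanity checks: identical series give complete in-phase synchrony
# (Gamma near 1), a sign-flipped middle series gives anti-phase (near -1)
t = np.sin(np.linspace(0, 20 * np.pi, 4000))
print(network_gamma(t, t, t))    # ~ 1
print(network_gamma(t, -t, t))   # ~ -1
```

Feeding in the last 40000 of 80000 iterates of x1, x2, x3 reproduces the protocol described above.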
In Fig. 11, we visualize a collection of two-dimensional color-coded plots, where the grid pixels are colored according to the value of Γ and the parameter space is given by (σ12, σ21) with σ23 and σ32 fixed (first row) or by (σ23, σ32) with σ12 and σ21 fixed (second row). In both rows, the varying coupling strengths lie in [−0.12, 0.12]. The local parameter values are set to a = 0.6, b = 0.6, c = 0.89, k0 = −1, α = 5, µ = 0.0001, and γ = −0.5. Panel (a) has σ23 = σ32 = 0.12. We observe that −0.533 ≤ Γ ≤ 0.004; the cross-correlation coefficient has a maximum value of ≈ 0. In the range −0.1 ≤ σ12 ≤ −0.068, 0.08885 ≤ σ21 ≤ 0.12, the values of Γ are the lowest, ≈ −0.531, illustrated by the black pixels. Furthermore, we see a reddish patch in the top right corner where Γ ≈ −0.2, and a purple patch near the upper boundary where Γ ≈ −0.44. Otherwise, the whole parameter region is yellowish with Γ ≈ 0. This tells us that overall the system mostly remains asynchronous, because the model is a chain with no links between the first and the third oscillator, allowing freer oscillations compared to a ring network. Panel (b) has σ23 = −σ32 = −0.12. We observe −0.01 ≤ Γ ≤ 0.25. The system as a whole remains asynchronous, tending towards more synchronous behavior as σ21 decreases. Panel (c) has σ23 = −σ32 = 0.12. At first, we report a handful of white pixels appearing near the upper left boundary, corresponding to diverging dynamics of the system. Other than that, the system has −0.739 ≤ Γ ≤ 0.029, meaning the system as a whole exhibits asynchronous behavior. In the domain −0.12 ≤ σ12 ≤ −0.1 and 0.05 ≤ σ21 ≤ 0.11, there appears a pool of pixels in the purple to black range of color representing −0.7 ≤ Γ ≤ −0.4. This indicates that the system is approaching anti-phase synchronization. In the rest of the domain Γ ≈ 0. 
Panel (d) has σ23 = σ32 = −0.12, with the system mostly oscillating in an asynchronous manner (−0.07 ≤ Γ ≤ 0.121). An interesting phenomenon is observed in the second row, where we fix σ12, σ21 and vary σ23, σ32. The qualitative behavior of the second row remains similar to that of the first row, with some subtle differences. Panel (e) with σ12 = σ21 = 0.12 is rotationally symmetric to panel (a), panel (f) with σ12 = −σ21 = −0.12 is rotationally symmetric to panel (c), panel (g) with σ12 = −σ21 = 0.12 is rotationally symmetric to panel (b), and panel (h) with σ12 = σ21 = −0.12 is rotationally symmetric to panel (d). We mostly notice asynchrony and partial synchrony over the whole parameter domain.
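Color maps like those in Fig. 11 are produced by sweeping a pair of coupling strengths over a grid and recording Γ for each pixel. The sketch below does this for a reduced two-neuron chain of Chialvo maps with a commonly used textbook parameter set; the full three-oscillator Chialvo-Rulkov system (3.1)-(3.6) is not reproduced in this section, so both the two-neuron reduction and the additive diffusive coupling on the fast variable are assumptions made for illustration:

```python
import math
import numpy as np

def chialvo(x, y, a=0.89, b=0.6, c=0.28, k=0.03):
    # Classic two-variable Chialvo map; this parameter set is a common
    # literature choice, not the set used in the paper.
    return x * x * math.exp(y - x) + k, a * y - b * x + c

def gamma_pair(s12, s21, n_iter=4000, n_trans=2000):
    """Cross-correlation coefficient for two bidirectionally coupled neurons."""
    clip = lambda v: max(-5.0, min(5.0, v))   # numerical safeguard only
    x1, y1, x2, y2 = 0.1, 0.2, 0.3, 0.1
    xs1, xs2 = [], []
    for n in range(n_iter):
        # diffusive coupling terms from the *previous* states (assumed form)
        d12, d21 = s12 * (x2 - x1), s21 * (x1 - x2)
        nx1, ny1 = chialvo(x1, y1)
        nx2, ny2 = chialvo(x2, y2)
        x1, y1 = clip(nx1 + d12), clip(ny1)
        x2, y2 = clip(nx2 + d21), clip(ny2)
        if n >= n_trans:                       # discard transients
            xs1.append(x1); xs2.append(x2)
    u, v = np.asarray(xs1), np.asarray(xs2)
    ut, vt = u - u.mean(), v - v.mean()
    denom = math.sqrt(np.mean(ut * ut) * np.mean(vt * vt))
    return float(np.mean(ut * vt) / denom) if denom > 1e-12 else 0.0

# one Γ value per pixel over the coupling grid, as in the figure panels
sigmas = np.linspace(-0.12, 0.12, 7)
grid = np.array([[gamma_pair(s12, s21) for s12 in sigmas] for s21 in sigmas])
```

The resulting `grid` can then be rendered as a color-coded map (e.g. with matplotlib's `imshow`); finer grids and longer transients are needed to reproduce the resolution of Fig. 11.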

Kuramoto order parameter
Another quantitative measure that has proliferated in the synchronization literature is the Kuramoto order parameter, represented by the index I, first introduced to study phase-coherence behavior in Kuramoto oscillators. In order to define I, we first need the instantaneous phase Θ_m(n) of an oscillator m at time step n. This is utilized to define the complex-valued quantity e^{iΘ_m(n)}, where i = √−1. At time step n, the index I(n) for our model (3.1)-(3.6) is given by

I(n) = (1/3) |Σ_{m=1}^{3} e^{iΘ_m(n)}|,

where the expression inside the absolute-value symbol represents the mean of all phases of the three oscillators on the unit circle at iteration n. Finally, the index averaged over time is given by

I = (1/N) Σ_n I(n).

If I ≈ 0, the system stabilizes in an asynchronous regime, whereas I > 0 indicates partial synchrony and I = 1 indicates complete synchrony in the system. Like Fig. 11, we again visualize a collection of two-dimensional color-coded plots, where the grid pixels are colored according to the value of I and the space is spanned by either (σ12, σ21) with σ23 and σ32 kept fixed (first row) or (σ23, σ32) with σ12 and σ21 kept fixed (second row); see Fig. 12. At first glance, we see that there exists some kind of linear correspondence between the two synchronization measures Γ and I. Panel (a) in Fig. 12 has 0.633 ≤ I ≤ 0.7827, indicating that the whole system remains in partial synchrony, as was also reported in Fig.
11-(a). We also notice that in the range −0.1 ≤ σ12 ≤ −0.07, 0.08985 ≤ σ21 ≤ 0.12, there exists a yellowish patch, like the black patch in Fig. 11-(a).

Time series analysis via sample entropy

One important physical aspect of these network dynamical systems is their overall complexity. Thus the question arises: can we quantify the system complexity in terms of an entropy measure? Given a time series of length N, an embedding dimension p, and a tolerance ϵ, let B^p(ϵ) denote the fraction of pairs of length-p template vectors whose distance is smaller than ϵ. Similarly, we can define B^{p+1}(ϵ). Thus the sample entropy measure of the given time series is defined as

SE = −ln( B^{p+1}(ϵ) / B^p(ϵ) ). (9.2)

We apply this concept to the time series data of each of the action potentials before taking the average. A high SE value is indicative of higher unpredictability in the complex system, corresponding to a higher complexity. A similar approach was employed in computing the sample entropy of an ensemble of memristive Chialvo neurons arranged in a ring-star topology by Ghosh et al. [19]. Other relevant works considering sample entropy are [23,51,44,24].
We utilize an open-source Python package, nolds [69], to compute the sample entropy of our time series via the nolds.sampen() function. This function is built following the algorithm of Richman and Moorman. Note that the default values of p and ϵ in (9.2) are 2 and 0.2σ, where σ is the standard deviation of the time series. This time we take 80000 iterates (as seen in the time series plots), of which we discard the first 25000 to get rid of the transients before computing the sample entropies. This makes N = 55000.
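As a cross-check of what nolds computes, formula (9.2) can also be implemented directly in a few lines. The sketch below uses the Chebyshev distance and self-match exclusion, following the Richman-Moorman convention; it is a minimal illustration, not the paper's code or the nolds implementation:

```python
import numpy as np

def sample_entropy(x, p=2, eps=None):
    """Direct implementation of SE = -ln( B^{p+1}(eps) / B^p(eps) )."""
    x = np.asarray(x, dtype=float)
    if eps is None:
        eps = 0.2 * np.std(x)            # the default tolerance used above
    n_templates = len(x) - p             # same template count for both lengths
    def matches(m):
        t = np.array([x[i:i + m] for i in range(n_templates)])
        total = 0
        for i in range(n_templates):
            d = np.max(np.abs(t - t[i]), axis=1)   # Chebyshev distance
            total += int(np.sum(d < eps)) - 1      # exclude the self-match
        return total
    # equivalent to -ln(B^{p+1}/B^p); assumes at least one match of each length
    return float(np.log(matches(p) / matches(p + 1)))

# perfectly periodic signal: fully predictable, SE = 0
print(sample_entropy(np.tile([0.1, 0.5, 0.9, 0.3], 100)))
# chaotic logistic-map signal (r = 4): SE is clearly positive
z = [0.4]
for _ in range(399):
    z.append(4 * z[-1] * (1 - z[-1]))
print(sample_entropy(np.array(z)))
```

The periodic series reproduces the behavior reported for Fig. 14 (SE = 0), while the chaotic one mirrors the higher entropies of Fig. 13.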
A collection of time series plots with their corresponding sample entropies is given in Figs. 13 and 14. The local parameters are set as a = 0.6, b = 0.6, c = 0.89, k0 = −1, α = 5, µ = 0.0001, and γ = −0.5. The coupling strengths are set as σ21 = 0.1, σ23 = 0.05, and σ32 = 0.06. In Fig. 13, we have σ12 = 0.092. We have previously seen that for this parameter combination there exists a chaotic attractor; see Fig. 8-(a). The time series behavior of all three action potentials corroborates this, with points exhibiting a dense distribution over the time frame. In the range n ≈ 20000 → 32000, the orbit oscillates periodically, whereas everywhere else it shows irregular bursting corresponding to chaos. We have SE_x1 ≈ 1.08819, SE_x2 ≈ 0.91167, and SE_x3 ≈ 1.06156. In Fig. 14, we have σ12 = 0.096. Previous numerics have shown that for this parameter combination there exists a periodic attractor of period 4. We see that after approximately n = 20000, the orbit oscillates periodically, with all the sample entropies equal to 0. These results show that chaotic behavior indicates a higher complexity in the system, corresponding to a higher SE.
An important note to the reader: the time series in this paper are represented by discrete points instead of line plots, because the model is discrete in time. This gives the illusion that the time series plots look like bifurcation diagrams. Plots manifesting behavior similar to the dynamics reaching a period-4 attractor correspond to regular oscillatory behavior in the time series (had it been plotted with lines instead of dots).

Conclusions and future directions
In this paper, we have investigated a heterogeneous chain network to model the dynamics of neuron ensembles in the nervous system. This model can be realized as a unit that could repeat itself to generate more complicated neuron aggregates replicating the real-world functionalities of the nervous system. The model is built on two popular neuron maps, the Chialvo map (peripheral nodes) and the Rulkov map (central node), with bidirectional linear couplings between neurons. The motivation was to build a heterogeneous model that mimics the functionalities of three kinds of neurons present in the nervous system and the synaptic connections for information transfer among them. Heterogeneity is incorporated in two ways: first, the central node oscillates following the dynamics of the Rulkov map, whereas the end nodes oscillate following the Chialvo map; second, the coupling between two nodes i and j is bidirectional with σ_ij ≠ σ_ji. One future direction in the study of the dynamics of this small network is to incorporate noise modulation via additive or multiplicative noises and characterize a battery of spatiotemporal patterns.
After putting forward our model, a nonlinear system of six coupled equations, we looked deep into its dynamical properties. The first step is finding the fixed point and performing a stability analysis of the same. To do so, we first build the 6 × 6 Jacobian matrix and look into its eigenvalues at the fixed point. We also set up the noninvertibility criterion of the model, i.e., where the determinant of the Jacobian matrix goes to 0. The next step toward unfolding the dynamics of the model was to look into various bifurcation patterns. At first, we plot the last 500 points of the action potentials from every simulation while varying a primary bifurcation parameter. We notice that the dynamics fluctuate between a chaotic attractor and a period-4 attractor. We also observe a coexistence of the two, supported by the existence of hysteresis loops. The coexistence is also verified from the phase portraits, where the same parameter set generates these two different attractors on two separate simulation runs. Then we perform a codimension-1 and -2 pattern analysis using MatContM as a tool and discover the existence of saddle-node, period-doubling, and Neimark-Sacker bifurcation patterns. These three are codimension-1 bifurcations and are also supported by analytical proofs. Furthermore, MatContM allows us to observe rich codimension-2 patterns like double Neimark-Sacker, flip-Neimark-Sacker, 1 : 1 resonance, fold-flip, fold-Neimark-Sacker, and 1 : 2 resonance. These show that our heterogeneous neuron model is a repository of a wide array of engrossing dynamical properties. Thus this model can in the future be utilized in designing a ring network (infinite chains), a star network (repetition of the chain with one central node and an infinite number of peripheral nodes), and a combination of the two, to further study the rich spatiotemporal behaviors that might arise due to the induced complexity. Another interesting candidate is a
multiplex network made up of our model as the building block.
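The fixed-point stability workflow summarized above (build the Jacobian, inspect its eigenvalues at the fixed point, and check the noninvertibility criterion det J = 0) can be sketched numerically. The 2-d linear map F below is a hypothetical stand-in, since the six-equation system (3.1)-(3.6) is not reproduced in this section; a central-difference Jacobian avoids symbolic differentiation:

```python
import numpy as np

def numerical_jacobian(F, z, h=1e-6):
    """Central-difference Jacobian of a map F: R^d -> R^d at the point z."""
    d = len(z)
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (F(z + e) - F(z - e)) / (2.0 * h)
    return J

# toy 2-d linear map standing in for the paper's 6-d system (an assumption)
def F(z):
    x, y = z
    return np.array([0.5 * x + 0.1 * y, -0.2 * x + 0.3 * y])

z_star = np.array([0.0, 0.0])                     # fixed point: F(z*) = z*
J = numerical_jacobian(F, z_star)
eigenvalues = np.linalg.eigvals(J)
stable = bool(np.max(np.abs(eigenvalues)) < 1)    # all |λ| < 1 => stable
noninvertible = bool(np.isclose(np.linalg.det(J), 0.0))  # det J = 0 criterion
print(stable, noninvertible)                      # -> True False
```

For the full model, the same routine applied at its fixed point recovers the eigenvalue-based classification used in the stability analysis.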
We have also taken a step forward to study the synchronization behavior of this small network model via the cross-correlation coefficient and the Kuramoto order parameter. Both measures indicate that the model mostly remains in an excitatory state exhibiting asynchrony and partial synchrony (in-phase and anti-phase). These are illustrated using two-dimensional color-coded plots in this paper. These color-coded plots correspond to two-dimensional bifurcation diagrams revealing parameter regions where the system turns from partial synchrony to complete asynchrony and vice versa, indicating the global behavior of the system. Increasing the number of nodes to determine whether there are solitary nodes, chimera patterns, cluster states, and wave structures as spatiotemporal patterns using these measures is an interesting avenue to investigate. Another future aspect is to build a metric to look into whether there exists a "weak chimera" in the tri-oscillator model. Another important step in studying a dynamical system is to look into its time series and perform a complexity analysis using an entropy metric. In this paper, we see time series with both chaotic and regular behavior. To quantify this, we utilized the concept of sample entropy. We see that for an irregular and chaotic time series the sample entropy value is high, whereas when the time series is regular, the sample entropy value is close to zero. Using this metric on a noise-modulated tri-oscillator model, and also on a model with an infinite number of nodes in the thermodynamic limit, is an important aspect to look into. An analytical relationship also needs to be set up to check how all these measures relate to each other. Note that throughout the paper, we have kept the local parameters of each of the three oscillators fixed and varied the coupling strengths between the oscillators as primary bifurcation parameters.
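The Kuramoto order parameter used above can be sketched as follows. The phase extraction Θ_m(n) = atan2(y_m(n), x_m(n)) is one common convention and is an assumption here, since the paper's exact phase definition is not reproduced in this section:

```python
import numpy as np

def kuramoto_index(xs, ys):
    """Time-averaged Kuramoto order parameter I.

    xs, ys: arrays of shape (num_oscillators, num_steps) holding the fast
    and slow variables; the phase convention below is an assumption.
    """
    theta = np.arctan2(ys, xs)             # instantaneous phases
    z = np.exp(1j * theta)                 # phases on the unit circle
    i_n = np.abs(np.mean(z, axis=0))       # I(n), one value per time step
    return float(np.mean(i_n))             # average over time

n = np.linspace(0, 20 * np.pi, 5000)
# three identical oscillators -> complete synchrony, I ≈ 1
xs = np.vstack([np.cos(n)] * 3)
ys = np.vstack([np.sin(n)] * 3)
print(kuramoto_index(xs, ys))
# phases spread evenly by 2π/3 -> fully incoherent, I ≈ 0
xs2 = np.vstack([np.cos(n + k * 2 * np.pi / 3) for k in range(3)])
ys2 = np.vstack([np.sin(n + k * 2 * np.pi / 3) for k in range(3)])
print(kuramoto_index(xs2, ys2))
```

Applied pixel-by-pixel over the coupling grids, this yields color maps in the style of Fig. 12.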
In the future, we want to look at the dynamical properties of this model using coupling strengths that change with every iteration, making the network temporally heterogeneous. One question that also arises is what kind of conservative properties these neuron maps have, for example, an analogue of the conservation of Hamiltonian energy in continuous-time systems. Can we come up with an equivalent quantity that is conserved in discrete-time neuron maps? Furthermore, as discrete-time systems are more computationally efficient, we suppose it would be an interesting problem to look into heterogeneous models of discretized versions of continuous-time neuron models via a small-network topology. Small networks are reduced-order models, which are undoubtedly the best candidates to study before we consider networks in the thermodynamic limit.
As with any other model, ours is not perfect, but we can work on bringing it closer to a real-world scenario. One step toward that is to fit our model to medically available EEG data from reliable sources. This would by itself be a captivating field to pursue. One challenge the authors have faced is to come up with a Lyapunov exponent study of the network itself, where the nodes are coupled. One approach could be motivated by Caligiuri et al. [6]. This remains an open question for a static network like our model. Another way to bring the model closer to reality is to perturb every node or the coupling strengths with external forces.
Our approach in this paper has been an amalgamation of analysis and numerics, which we believe will aid mathematical modelers, engineers, quantitative biologists, and neuroscientists alike. This model lays a step toward understanding the intricate dynamics of more topologically complicated ensembles of neurons involved in signal processing in the nervous system.

Figure 3: Schematic representation of the dynamics at the saddle fixed point (denoted by a black square). The red curves denote the unstable manifolds and the blue curves denote the stable manifolds. In (a), we observe saddle-focus dynamics, and in (b), stable-focus dynamics.

Figure 9: Codimension-1 bifurcation diagram of the map (3.1)-(3.6) with σ12 as the bifurcation parameter. The solid black curve corresponds to fixed points of the map. The labels for the codimension-1 bifurcations are explained in Table 2.

Figure 10: Codimension-2 bifurcation diagram in the (σ12, σ21)-plane. The green, blue, and magenta curves are the loci of the LP, PD, and NS bifurcations, respectively. The labels for the codimension-2 bifurcations are explained in Table 2.

The values of I in this region are the highest, ≈ [0.74, 0.78], indicating behavior approaching synchrony. Similar behavior is also noticed in the top right corner, depicting another pool of yellow pixels. A region corresponding to the purple patch in Fig. 11-(a) shows a value of I ≈ 0.7 (in this case, the pixels are yellow). The rest of the figure has I ∈ [0.633, 0.7], illustrating partial synchronization. Panel (b) has σ23 = −σ32 = −0.12, with 0.703 ≤ I ≤ 0.82, again depicting partial synchronization. Near the bottom right boundary, the system tends to have I ≈ 0.82, showing a high tendency toward synchronization. Panel (c) has σ23 = −σ32 = 0.12, with 0.686 ≤ I ≤ 0.9. Like Fig. 11-(c), we have a pool of pixels near the top left corner, where the system shows a high tendency toward synchronization, with I even reaching approximately 0.9. The rest of the domain shows partial synchronization in the network. Panel (d) has σ23 = σ32 = −0.12, with 0.681 ≤ I ≤ 0.7808. White pixels denote diverging behavior in the dynamics of the network. Again, the qualitative behavior of the second row remains similar to that of the first row, with some subtle differences. Panel (e) with σ12 = σ21 = 0.12 is rotationally symmetric to panel (a), panel (f) with σ12 = −σ21 = −0.12 is rotationally symmetric to panel (c), panel (g) with σ12 = −σ21 = 0.12 is rotationally symmetric to panel (b), and panel (h) with σ12 = σ21 = −0.12 is rotationally symmetric to panel (d).

Table 1: Eigenvalues associated with the stability analysis of the fixed points at the parameter point (σ12, σ21