1 Introduction

The study of the dynamical properties of neurons quantifies the complexity of the nervous system in terms of mathematical tools like ordinary, partial, and delay differential equations, maps, cellular automata, etc. The nervous system controls all major activities in the human body, enabling communication between different parts primarily through the transmission of electrochemical signals [1] among the neurons. This information transfer demands subtle nonlinear dynamical techniques [2,3,4], applied to both single neurons and networks of neurons, for investigating various phenomena, for example through bifurcation analysis [5]. Some popular neurodynamical models studied extensively by the research community are the Hindmarsh–Rose model [6], the Hodgkin–Huxley model [7], the Morris–Lecar model [8], and the FitzHugh–Nagumo model [9, 10].

The majority of this literature focuses on continuous-time systems, leaving discrete systems, which also play a significant role [12], relatively understudied [11]. Notable examples include neuron models by Chialvo [13], Rulkov [14, 15], and Nekorkin [16]. Discrete neuron maps have also been generated from their continuous counterparts using Euler discretization, for example, the discrete Izhikevich model [17] and the Hindmarsh–Rose model [18]. In discrete models, the dynamics evolve over a finite number of time steps, governed by specific rules. Discrete models are computationally efficient, making them particularly useful for large-scale simulations and analyses. The Chialvo and the Rulkov models are the most prominently studied discrete models, and in our current investigation, we specifically focus on a heterogeneous network built on these two.

There is an enormous need to study complex systems in terms of minimal mathematical models. The nervous system, being one of the most studied complex systems, requires mathematical modelers to demystify it in terms of a reduced model of coupled neurons forming a small network that can be treated as a unit of a macroscopic ensemble of neurons. Most studies focus on the dynamical analysis of anything from a single neuron to an ensemble of at least a hundred nodes. Some of the simple networks modeling the vast array of spatial and temporal patterns of neural signal processing [19] include the ring network [20,21,22,23], star network [24,25,26,27], ring–star network [17, 28,29,30], lattice network [31,32,33], and multiplex network [34, 35]. Note that for a bigger ensemble of neurons, the analytical study of the dynamics of the system becomes quite intractable. Thus, studying a “small network” is indispensable: first, because it acts as a bridge between a single oscillator and a network of at least a hundred coupled oscillators, and second, because the mathematical analysis of the smallest possible oscillator network is still tractable. Small networks are reduced models consisting of three to ten coupled oscillators giving rise to collective dynamical properties. They can be thought of as network units that repeat themselves to form a more complex topological structure. Our focus is on a heterogeneous system of oscillators representing the nervous system.

Heterogeneous neuron networks have been recently modeled and studied in various forms, for example, Shen et al. [36] (coupling of a FitzHugh–Nagumo neuron with a Hindmarsh–Rose neuron), Njitacke et al. [37, 38] (coupling of a two-dimensional Hindmarsh–Rose neuron with a three-dimensional version through a multistable memristive connection, and Hindmarsh–Rose neurons with a gap junction, respectively), Yang et al. [39] (a variation of energy influx), Bradley et al. [40] (a weakly coupled network of Wang–Buzsáki and Hodgkin–Huxley neurons), Xie et al. [41] (continuous energy accumulation in neurons inducing heterogeneity via shape deformation under external stimulation), and so on. Thus heterogeneity in neuron networks is an invigorating phenomenon and requires the attention of mathematical modelers and neuroscientists. We recently studied a heterogeneous ring–star network of Chialvo neurons, with at least a hundred nodes, where the heterogeneities were realized with the introduction of additive noise to the central–peripheral and the peripheral–peripheral nodes, see Ghosh et al. [30]. However, because of the complexity and the large number of nodes in that system, little room was left to perform analytical calculations. To address this issue, this paper studies the smallest possible chain network of oscillators in which heterogeneity is incorporated. In this case, the heterogeneity enters not only through the coupling strengths but also through the type of oscillators. This system is of course more complicated than a single-oscillator model, but is nonetheless analytically tractable. Thus, we perform algebraic calculations supported by numerics to make the study even more robust. The goal is to delve into the intricate dynamical properties of a small network modeling the dynamics of the nervous system.
By investigating the behavior of even the simplest network configurations, researchers can gain insights into phenomena such as synchronization, pattern formation, and information transfer, which are essential for understanding more complex neural networks and neuron functions as a whole.

The motivation behind taking a small heterogeneous chain network made of three oscillators whose dynamics are governed by neuron maps is rooted in the fact that there are three types of neurons based on their functionalities: motor neurons, sensory neurons, and interneurons [42]. Motor neurons act as transmission media for synaptic impulses from the central nervous system to the organs and tissues, whereas sensory neurons act as transmission media from the organs and tissues to the central nervous system. The interneurons, however, act as a bridge between the sensory and the motor neurons. Motor and sensory neurons have similar functionalities and thus correspond to the two end oscillators in our model, represented by the Chialvo map. An interneuron, in turn, can be modeled by the Rulkov map, which acts as a bridge between the two Chialvo neurons; together they represent a unit of the much more complex nervous system. The interlinks between the three types of neurons further correspond to the simplest type of couplings shown in our model.

A chain network made of three neurons can be regarded as a special case of a star network. Moreover, studying the dynamics of small networks of interconnected neurons can provide insights into the functioning of larger brain circuits. A topologically similar tri-oscillator chain network model, but in continuous time, has been studied by Njitacke et al. [43]. In a very recent study led by Cao et al. [44], the authors have considered a Chialvo neuron coupled with a Rulkov neuron through a memristor and have studied the corresponding dynamical properties. Although the base models of Cao et al. are similar to those in this work, the major difference lies in the topology of the network. First, we do not consider any memristive coupling in this system, and second, our system has two Chialvo neurons on the edges with a Rulkov neuron at the center, making it a good candidate for a unit of a bigger star-network ensemble. The motivation behind this model is to imitate the real-world functionalities of the three types of neurons mentioned above.

As mentioned before, the advantage of a small network is that it not only serves its purpose of being studied as a dynamical unit but also provides a model with which we can traverse a batch of spatiotemporal patterns arising from the collective behavior of the oscillators that make it up. Concerning this, we study the synchronization behavior of the neurons involved in this network model and try to unfold what kind of collective behavior it portrays in general. Synchronization is one of the sophisticated phenomena that drive neural communication. It refers to how two neurons arrange themselves to form a functionally specialized ensemble. To study synchronization, we employ two measures: the cross-correlation coefficient [45, 46] and the Kuramoto order parameter [47,48,49]. The motivation behind taking two measures is to corroborate the respective results with each other. The first is computed from the displacement between two oscillators in the network, and the second from the phases of the oscillators. The concept of the cross-correlation coefficient has been widely used to quantify synchronization in various network topologies [45, 46, 50,51,52,53,54]. Similarly, the Kuramoto order parameter has had a prolific application in unfolding the same [55,56,57]. Furthermore, we require a separate measure to quantify the abundant complexity of a neuron network. This can be realized in terms of information production, i.e., the entropy of the system. Shannon introduced the concept of entropy in the context of information theory [58], which has since been successfully employed in the field of neuroscience [59,60,61,62,63,64]. The entropy measure that we follow in this paper is the sample entropy of Richman et al. [65].
The authors devised the sample entropy to quantify the complexity of physiological time-series data, so it serves as a perfect candidate for quantifying the complexity of neuron-based dynamical systems in this paper. It is by far the most popular entropy measure besides the approximate entropy put forward by Pincus [66].
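To make the measure concrete, the following is a minimal Python sketch of sample entropy, assuming the conventional choices of embedding dimension \(m=2\) and tolerance \(r=0.2\times\) the standard deviation of the series (these defaults are our assumption, not prescribed here); matches are counted with the Chebyshev distance, and self-matches are excluded.

```python
import numpy as np

def sample_entropy(series, m=2, r=None):
    """Sample entropy of a 1-D series (Richman & Moorman style).

    Counts pairs of length-m and length-(m+1) template vectors that
    match within tolerance r (Chebyshev distance), excluding
    self-matches, and returns -log(A/B).
    """
    x = np.asarray(series, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)        # conventional tolerance choice
    n = len(x)

    def count(mm):
        # the same number (n - m) of templates is used for both lengths
        t = np.array([x[i:i + mm] for i in range(n - m)])
        c = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    B = count(m)        # matches of length m
    A = count(m + 1)    # matches of length m + 1
    return -np.log(A / B)
```

Applied to the x-time series of a node, lower values indicate a more regular signal and higher values a more complex one.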

In terms of applications, heterogeneous small network models can be perceived as units of a bigger ensemble of neurons connected together, processing signals via electrical synapses in the nervous system. Studying their dynamical properties gives researchers an overall idea of the bigger picture, that is, the complexity of the neurons. Heterogeneity makes the model closer to the real-world nervous system, and researchers could potentially predict and control the behavior of neurons, with implications for neural engineering and the development of neural prosthetics, by utilizing these models to understand how brain functionalities are lost and how they can be reinstated. As our model is discrete in time, it is computationally efficient and can be utilized to analyze empirical data (MRI and EEG). Studying the complexity of EEG data from patients could indicate an onset of Alzheimer’s disease [67] and could be easily verified by these heterogeneous models. Furthermore, simulating small networks of coupled neurons offers a powerful approach to advancing our understanding of neural function and dysfunction in both engineering and biological research contexts. Heterogeneous neuron networks hold quite a potential to be applied in a wide array of fields like robust learning (Perez-Nieves et al. [68]), reliable neuronal systems (Lengler et al. [69]), and image encryption procedures (Yunliang et al. [70]). Discrete-time neuron network models are computationally more efficient and provide a working solution for encoding and decoding purposes in brain–computer interfaces [71].

The bifurcation study of the model will help reveal the transition from one stable steady state to another in the nervous system, with arbitrary periods of chaotic firings in the neuron signals, under the performance of some activities. This chaotic behavior is an indication of neural disorders, and identifying it is the first step toward treating them, making this a pivotal field of research aiding medicine [72]. These bifurcation patterns are a good resource for understanding the body’s circadian rhythms [73]. Furthermore, scrutinizing how the neurons synchronize over time tells us about the accuracy of information processing among the neurons. Neurons that are synchronized form functional ensembles, oscillating collectively with the same amplitude. In these ensembles, communication takes place both locally and globally, driving all the sensory, motor, and cognitive tasks [74] of the nervous system. When two neurons are synchronized, the neural activities between them are precise [75]. Partial synchronization, however, may indicate the onset of pathophysiological motor conditions like Parkinson’s disease [74, 76] and epilepsy [77]. Thus, these models are essential in analyzing empirical data and aiding the biomedical sciences.

We organize the paper as follows: In Sec. 2, we review the Chialvo and the Rulkov neuron maps which act as the building blocks of our heterogeneous tri-oscillator chain network. In Sec. 3, we put forward the novel heterogeneous network model constituting a nonlinear system of six coupled equations. We comment on the topology of the network and give an overview of what a typical phase portrait of this system looks like. In Sec. 4, we analyze the fixed points of this system, build the Jacobian of the system, and explore its eigenvalues at the fixed points, giving a bird’s-eye view of the dynamical properties of this network. Next, we delve into the innate bifurcation properties of the network in Sec. 5 and Sec. 6. In Sec. 7, we set up the synchronization measures for our network in terms of the cross-correlation coefficient and the Kuramoto order parameter, and finally, in Sec. 8 we perform a time series analysis of our model in terms of sample entropy and statistically explore how complex the small network is. Concluding remarks and future directions are provided in Sec. 9. Note that all the numerical simulations have been performed in Python 3.9.7 with extensive usage of NumPy, Pandas, and Matplotlib, except for the codimension-1 and -2 bifurcation patterns, which have been computed with MATLAB and MatContM [78].

2 Two-dimensional neuron maps

In this section, we review the dynamical structure of the maps that act as the building blocks of the tri-oscillator chain. Each of these oscillators is a two-dimensional iterated map used for modeling the dynamics of a neuron governed by either the Chialvo or the Rulkov map. For a detailed in-depth review of these topics, please refer to Ibarz et al. [79]. The Chialvo map [13] is given by

$$\begin{aligned} x(n+1)&= x(n)^2e^{(y(n) - x(n))}+k_0, \end{aligned}$$
(1)
$$\begin{aligned} y(n+1)&= ay(n) - bx(n) + c, \end{aligned}$$
(2)

where the dynamical variables x and y represent the activation and the recovery variables, respectively, for the action potential. The Chialvo map has four control parameters a, b, c, and \(k_0\), where the first two represent the time constant of recovery and the activation dependence of recovery, respectively. These are kept at less than 1. The latter two represent the offset and the time-dependent additive perturbation, respectively. This map showcases excitable dynamics in which the recovery variable y is fast rather than slow. As mentioned in [79], the model exhibits an ensemble of important dynamics, for example, subthreshold oscillations, bistability, and chaotic orbits, among many others. Thus it serves as a good candidate for map-based neuron models. This model has attracted a wide array of recent works from the research community, see [29, 30, 80,81,82,83].
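As a quick illustration, a minimal Python sketch iterating (1)–(2) with the parameter values used later in Fig. 2 (\(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\)); the orbit length and initial condition are arbitrary choices for demonstration:

```python
import numpy as np

def chialvo(x, y, a=0.6, b=0.6, c=0.89, k0=-1.0):
    """One iterate of the Chialvo map (1)-(2)."""
    x_next = x**2 * np.exp(y - x) + k0
    y_next = a * y - b * x + c
    return x_next, y_next

# iterate a short orbit from an arbitrary initial condition
x, y = 0.25, 0.25
orbit = []
for _ in range(1000):
    x, y = chialvo(x, y)
    orbit.append((x, y))
```

Plotting the orbit in the (x, y) plane reproduces the excitable spiking behavior described above.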

The Rulkov map, specifically the chaotic family of the map [14], is the main focus of this paper. This map is given by the following pair of equations

$$\begin{aligned} u(n+1)&= \frac{\alpha }{1+u(n)^2}+v(n), \end{aligned}$$
(3)
$$\begin{aligned} v(n+1)&= v(n) - \mu (u(n) - \gamma ), \end{aligned}$$
(4)

where u is again the activation variable, whereas v represents the slow variable of the model because \(0<\mu \ll 1\). The parameter \(\alpha\) induces nonlinearity in the model, whereas \(\gamma\) is a DC modulator. There are two other versions of the map corresponding to non-chaotic behavior: the non-chaotic family [15] and the supercritical family [84]. For a detailed review of these, please refer to Ibarz et al. [79]. The chaotic family is not defined piecewise in u and v, whereas the non-chaotic family is. From (3)–(4) it can be seen that the map is unimodal. There exists a pair of fixed points, one stable and the other unstable, which disappear through a saddle-node bifurcation as the parameters are varied. Also, the spikes exhibit chaotic orbits, which can be made evident from the phase-plane plot of (3)–(4). This is left as an exercise for the reader. Like the Chialvo map, the Rulkov model has attracted quite a lot of attention from the research community recently, see [85,86,87,88,89].
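The phase-plane exercise above can be set up with a few lines of Python iterating (3)–(4), using the parameter values employed later in the paper (\(\alpha=5\), \(\mu=0.0001\), \(\gamma=-0.5\)); the initial condition is an arbitrary choice:

```python
import numpy as np

def rulkov(u, v, alpha=5.0, mu=0.0001, gamma=-0.5):
    """One iterate of the chaotic Rulkov map (3)-(4)."""
    u_next = alpha / (1.0 + u**2) + v
    v_next = v - mu * (u - gamma)
    return u_next, v_next

u, v = 0.25, 0.25
us, vs = [], []
for _ in range(50000):
    u, v = rulkov(u, v)
    us.append(u)
    vs.append(v)
```

A scatter plot of (us, vs) then reveals the chaotic spiking orbits riding on the slow drift of v.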

In the next section, we put forward a novel map-based small network model made up of the above chaotic maps. This generates a linearly coupled six-dimensional dynamical system in discrete time which can be reflected as a unit of a larger ensemble of neurons modeling the nervous system.

3 Network Model

We have considered a small heterogeneous network of a tri-oscillator chain where the dynamics of the end nodes are represented by the Chialvo map and that of the central node by the Rulkov map. Thus, the system is a Chialvo–Rulkov–Chialvo chain (See Fig. 1), giving rise to a nonlinear system of six coupled equations given by

$$\begin{aligned} x_1(n+1)&= x_1(n)^2e^{(y_1(n) - x_1(n))}+k_0+\sigma _{12}(x_2(n) - x_1(n)), \end{aligned}$$
(5)
$$\begin{aligned} y_1(n+1)&= ay_1(n) - bx_1(n) + c, \end{aligned}$$
(6)
$$\begin{aligned} x_2(n+1)&= \frac{\alpha }{1+x_2(n)^2} + y_2(n) + \sigma _{21}(x_1(n) - x_2(n)) + \sigma _{23}(x_3(n) - x_2(n)), \end{aligned}$$
(7)
$$\begin{aligned} y_2(n+1)&= y_2(n) - \mu (x_2(n) - \gamma ), \end{aligned}$$
(8)
$$\begin{aligned} x_3(n+1)&= x_3(n)^2e^{(y_3(n) - x_3(n))}+k_0+\sigma _{32}(x_2(n) - x_3(n)), \end{aligned}$$
(9)
$$\begin{aligned} y_3(n+1)&= ay_3(n) - bx_3(n) + c. \end{aligned}$$
(10)
Fig. 1
figure 1

A heterogeneous network of a tri-oscillator chain composed of end nodes (Chialvo neuron map) and a central node (Rulkov neuron map). Bidirectional coupling strengths between nodes 1 and 2 are denoted by \(\sigma _{12}, \sigma _{21}\). Similarly, bidirectional coupling strengths between nodes 2 and 3 are denoted by \(\sigma _{23}, \sigma _{32}\)

Let the dynamical variable set for (5)–(10) be given by

$$\begin{aligned} X(n) = \left\{ x_1(n), y_1(n), x_2(n), y_2(n), x_3(n), y_3(n)\right\} . \end{aligned}$$
(11)

The indices 1 and 3 represent the oscillators following the Chialvo map, and the index 2 represents the oscillator following the Rulkov map. The linear coupling strengths between the oscillators are represented by \(\sigma _{ij}\). Here \(\sigma _{ij} \ne \sigma _{ji}\); that is, the coupling between two oscillators is bidirectional but, in general, asymmetric in strength. Note that the system is asymmetric because it changes its form under the transformation \(X \rightarrow -X\). The dynamics of the map are realized on the oscillators; however, the coupling links are static. Before delving into the deeper dynamics of the network, it is instructive to look into the topological properties in simple mathematical terms. One such tool is the adjacency matrix. The adjacency matrix of the small network given in Fig. 1 is

$$\begin{aligned} {\mathcal {A}} = \begin{bmatrix} 0 &{} \sigma _{12} &{} 0\\ \sigma _{21} &{} 0 &{} \sigma _{23}\\ 0 &{} \sigma _{32} &{} 0 \end{bmatrix}, \end{aligned}$$
(12)

with the spectrum of the graph given as

$$\Lambda = {\texttt {srt}}(0, \sqrt{\sigma _{12}\sigma _{21} + \sigma _{23}\sigma _{32}}, -\sqrt{\sigma _{12}\sigma _{21} + \sigma _{23}\sigma _{32}}),$$

where the function srt sorts an array in ascending order; thus \(\Lambda\) is the set of eigenvalues of the adjacency matrix arranged in ascending order. The first and the third oscillators (following the dynamics set by the Chialvo map) have degrees \(\sigma _{12}+\sigma _{21}\) and \(\sigma _{23}+\sigma _{32}\), respectively, whereas the degree of the second (middle) oscillator is \(\sigma _{12}+\sigma _{21}+\sigma _{23}+\sigma _{32}\). The network is topologically invariant when \(\sigma _{12}=\sigma _{21}=\sigma _{32}=\sigma _{23}=\sigma\).
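These spectral and degree formulas are easy to verify numerically; a small sketch with illustrative coupling strengths (arbitrary positive values chosen so that \(\sigma_{12}\sigma_{21}+\sigma_{23}\sigma_{32}>0\) and the square root is real):

```python
import numpy as np

# adjacency matrix (12) for illustrative coupling strengths
s12, s21, s23, s32 = 0.3, 0.9, 2.0, 2.0
A = np.array([[0.0, s12, 0.0],
              [s21, 0.0, s23],
              [0.0, s32, 0.0]])

# eigenvalues sorted ascending; they should be 0 and
# +/- sqrt(s12*s21 + s23*s32), as in the spectrum formula
Lam = np.sort(np.linalg.eigvals(A).real)
r = np.sqrt(s12 * s21 + s23 * s32)
```

For these values the spectrum is \(\{-r, 0, r\}\) with \(r=\sqrt{0.27+4}\approx 2.07\).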

Typical phase portraits of the system (5)–(10), showing chaotic attractors for all three nodes, are illustrated in Fig. 2. The local parameters of the oscillators are fixed as \(a=0.6\), \(b = 0.6\), \(c = 0.89\), \(k_0=-1\), \(\alpha = 5\), \(\mu = 0.0001\), and \(\gamma = -0.5\). The coupling strengths are set as \(\sigma _{12}=0.0002\), \(\sigma _{21}=0.0009\), \(\sigma _{23}=-0.00085\), and \(\sigma _{32}=0.0008\). The simulation is run for 80000 iterates, of which only the last 20000 are shown to ensure no transients creep in. We illustrate the phase portraits of the Chialvo nodes (the first and third nodes in our system) in Fig. 2a. We see that they exhibit a fingered-shaped chaotic attractor, as was reported in Muni et al. [29]. The Rulkov node (the second node in our system) is illustrated in Fig. 2b, exhibiting a folded-towel-shaped chaotic attractor, filling up the phase space densely. Such chaotic attractors have been observed in coupled logistic maps by Kaneko [90]. Note that the nodes in the phase portraits are colored following the color code of the nodes in Fig. 1, i.e., the first and the third nodes are colored black and blue, respectively, whereas the second node is colored red. All the dynamical variables are randomly initialized from the uniform distribution on [0.2, 0.3]. We maintain these initial conditions for all our numerical simulations throughout the paper unless specified otherwise.
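For reproducibility, a minimal Python sketch of the simulation behind Fig. 2 (the random seed is our arbitrary choice; the text only specifies that initial conditions are sampled from [0.2, 0.3]):

```python
import numpy as np

# parameters of Fig. 2
a, b, c, k0 = 0.6, 0.6, 0.89, -1.0
alpha, mu, gamma = 5.0, 0.0001, -0.5
s12, s21, s23, s32 = 0.0002, 0.0009, -0.00085, 0.0008

def step(X):
    """One iterate of the coupled system (5)-(10)."""
    x1, y1, x2, y2, x3, y3 = X
    return np.array([
        x1**2 * np.exp(y1 - x1) + k0 + s12 * (x2 - x1),
        a * y1 - b * x1 + c,
        alpha / (1 + x2**2) + y2 + s21 * (x1 - x2) + s23 * (x3 - x2),
        y2 - mu * (x2 - gamma),
        x3**2 * np.exp(y3 - x3) + k0 + s32 * (x2 - x3),
        a * y3 - b * x3 + c,
    ])

rng = np.random.default_rng(0)           # arbitrary seed
X = rng.uniform(0.2, 0.3, size=6)        # initial conditions as in the text
traj = np.empty((80000, 6))
for n in range(80000):
    X = step(X)
    traj[n] = X
post = traj[-20000:]                     # discard transients, keep last 20000
```

Scatter plots of the (x, y) pairs of `post` for each node reproduce the attractors of Fig. 2.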

Fig. 2
figure 2

Phase portraits of (5)–(10) (last 20000 points of the simulation are plotted) with \(\sigma _{12}=0.0002\), \(\sigma _{21}=0.0009\), \(\sigma _{23}=-0.00085\), and \(\sigma _{32}=0.0008\). Local parameters are \(a=0.6\), \(b = 0.6\), \(c = 0.89\), \(k_0=-1\), \(\alpha = 5\), \(\mu = 0.0001\), and \(\gamma = -0.5\). The black dots represent the phase portrait of the first node, whereas the blue dots represent that of the third (panel (a)). The red dots (panel (b)) represent the phase portrait of the second node. The nodes are colored according to the color code used in Fig. 1

4 Fixed point analysis of the network

Analyzing the fixed points of a system (continuous or discrete time) is the first step toward unfolding its complex dynamics. The fixed point of a map-based model \(f({\textbf{x}})\) is the point \({\textbf{x}}^*\in {\mathbb {R}}^N\), such that \(f({\textbf{x}}^*)={\textbf{x}}^*\). Let the fixed point of the system (5)–(10) be given by

$$\begin{aligned} X^* = \left( x_1^*, y_1^*, x_2^*, y_2^*, x_3^*, y_3^* \right) . \end{aligned}$$
(13)

To compute \(X^*\), the following list of equations needs to be solved:

$$\begin{aligned} x_1^*&= {x_1^*}^2e^{(y_1^* - x_1^*)}+k_0+\sigma _{12}(x_2^* - x_1^*), \end{aligned}$$
(14)
$$\begin{aligned} y_1^*&= ay_1^* - bx_1^* + c, \end{aligned}$$
(15)
$$\begin{aligned} x_2^*&= \frac{\alpha }{1+{x_2^*}^2} + y_2^* + \sigma _{21}(x_1^* - x_2^*) + \sigma _{23}(x_3^* - x_2^*), \end{aligned}$$
(16)
$$\begin{aligned} y_2^*&= y_2^* - \mu (x_2^* - \gamma ), \end{aligned}$$
(17)
$$\begin{aligned} x_3^*&= {x_3^*}^2e^{(y_3^* - x_3^*)}+k_0+\sigma _{32}(x_2^* - x_3^*), \end{aligned}$$
(18)
$$\begin{aligned} y_3^*&= ay_3^* - bx_3^*+ c. \end{aligned}$$
(19)

A step toward this is to eliminate the \(y_i^*\) terms so that we end up with two transcendental equations in \(x_1^*\) and \(x_3^*\). For \(x_2^*\), we can directly see that

$$\begin{aligned} x_2^* = \gamma , \end{aligned}$$
(20)

from (17), which we substitute in (18) to have

$$\begin{aligned} y_3^* = x_3^* + \ln {\left\{ (\sigma _{32}+1)x_3^*-k_0-\gamma \sigma _{32}\right\} } - 2\ln {(x_3^*)}, \end{aligned}$$
(21)

requiring us to solve for \(x_3^*\). Thus, we substitute the above two equations in (19) to get the following transcendental equation for \(x_3^*\),

$$\begin{aligned} (1-a+b)x_3^* +(1-a)\ln {\left\{ (\sigma _{32}+1)x_3^*-k_0-\gamma \sigma _{32}\right\} } + 2(a-1)\ln {(x_3^*)} - c = 0. \end{aligned}$$
(22)

Solving (22) numerically gives us \(x_3^*\), and substituting it into (21) yields the corresponding value of \(y_3^*\). Next, we substitute (20) in (14) to have

$$\begin{aligned} y_1^* = x_1^* + \ln {\left\{ (\sigma _{12}+1)x_1^*-k_0-\gamma \sigma _{12}\right\} } - 2\ln {(x_1^*)}, \end{aligned}$$
(23)

requiring us to solve for \(x_1^*\). Again, similar substitutions in (15) give us

$$\begin{aligned} (1-a+b)x_1^* +(1-a)\ln {\left\{ (\sigma _{12}+1)x_1^*-k_0-\gamma \sigma _{12}\right\} } + 2(a-1)\ln {(x_1^*)} - c = 0. \end{aligned}$$
(24)

Solving (24) numerically gives us \(x_1^*\), and substituting it into (23) yields the corresponding value of \(y_1^*\). Finally, we substitute the values of \(x_1^*\), \(x_2^*\), and \(x_3^*\) in (16) to have

$$\begin{aligned} y_2^* = \gamma - \frac{\alpha }{1+\gamma ^2} - \sigma _{21}(x_1^* - \gamma ) - \sigma _{23}(x_3^* - \gamma ). \end{aligned}$$
(25)

We need to numerically compute \(x_1^*\) and \(x_3^*\) using any standard computational solver, for example, the fsolve() function from the scipy.optimize module. Once this is achieved, we substitute the values of the \(x_i^*\) into the expressions for the \(y_i^*\) to eventually evaluate \(X^*\).
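As a sketch of this procedure, one can equivalently root-find the full system (14)–(19) directly with scipy.optimize.fsolve, which sidesteps the logarithms in (22) and (24); the parameter values here are those of Fig. 2, and the initial guess is our own rough estimate, not a value from the text:

```python
import numpy as np
from scipy.optimize import fsolve

a, b, c, k0 = 0.6, 0.6, 0.89, -1.0
alpha, mu, gamma = 5.0, 0.0001, -0.5
s12, s21, s23, s32 = 0.0002, 0.0009, -0.00085, 0.0008

def G(X):
    """Fixed-point residual: one iterate of (5)-(10) minus X."""
    x1, y1, x2, y2, x3, y3 = X
    return [
        x1**2 * np.exp(y1 - x1) + k0 + s12 * (x2 - x1) - x1,
        a * y1 - b * x1 + c - y1,
        alpha / (1 + x2**2) + y2 + s21 * (x1 - x2) + s23 * (x3 - x2) - x2,
        -mu * (x2 - gamma),          # y2 cancels in (17), forcing x2* = gamma
        x3**2 * np.exp(y3 - x3) + k0 + s32 * (x2 - x3) - x3,
        a * y3 - b * x3 + c - y3,
    ]

# rough initial guess for (x1, y1, x2, y2, x3, y3)
Xstar = fsolve(G, x0=[-0.2, 2.5, -0.5, -4.5, -0.2, 2.5])
```

The returned `Xstar` satisfies \(x_2^*=\gamma\) as in (20), with the remaining components consistent with (21)–(25).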

Near \(X^*\), the dynamics of the perturbation vector \(\delta X = X - X^*\) are given by

$$\begin{aligned} \begin{bmatrix} \delta x_1(n+1) \\ \delta y_1(n+1) \\ \delta x_2(n+1) \\ \delta y_2(n+1) \\ \delta x_3(n+1) \\ \delta y_3(n+1) \end{bmatrix} = {\mathcal {J}}\begin{bmatrix} \delta x_1(n) \\ \delta y_1(n) \\ \delta x_2(n) \\ \delta y_2(n) \\ \delta x_3(n) \\ \delta y_3(n) \end{bmatrix}, \end{aligned}$$
(26)

where \({\mathcal {J}}\) is the Jacobian of the system,

$$\begin{aligned} {\mathcal {J}} = \begin{bmatrix} {\mathcal {J}}_{x_1x_1} &{} {\mathcal {J}}_{x_1y_1} &{} \sigma _{12}&{} 0&{} 0&{} 0 \\ -b &{}a &{}0 &{}0 &{}0 &{}0 \\ -\sigma _{21}&{} 0&{} {\mathcal {J}}_{x_2x_2} &{} 1&{} \sigma _{23}&{} 0 \\ 0&{} 0&{} -\mu &{} 1&{} 0&{} 0\\ 0&{} 0&{} \sigma _{32}&{} 0 &{} {\mathcal {J}}_{x_3x_3} &{} {\mathcal {J}}_{x_3y_3} \\ 0&{} 0&{} 0&{} 0&{} -b&{} a \end{bmatrix}, \end{aligned}$$
(27)

and

$$\begin{aligned} {\mathcal {J}}_{x_1x_1}&= x_1(2-x_1)e^{y_1-x_1}-\sigma _{12}, \end{aligned}$$
(28)
$$\begin{aligned} {\mathcal {J}}_{x_1y_1}&= x_1^2e^{y_1-x_1}, \end{aligned}$$
(29)
$$\begin{aligned} {\mathcal {J}}_{x_2x_2}&= -\frac{2\alpha x_2}{(1+x_2^2)^2}+\sigma _{21}-\sigma _{23}, \end{aligned}$$
(30)
$$\begin{aligned} {\mathcal {J}}_{x_3x_3}&= x_3(2-x_3)e^{y_3-x_3}-\sigma _{32}, \end{aligned}$$
(31)
$$\begin{aligned} {\mathcal {J}}_{x_3y_3}&= x_3^2e^{y_3-x_3}. \end{aligned}$$
(32)

The linear stability analysis of the fixed point depends on the absolute values of the eigenvalues of \({\mathcal {J}}\). The eigenvalues \(\lambda _i\), \(i=1, \ldots , 6\) can be evaluated from \({\mathcal {J}}\) at the fixed point \(X^*\) by solving a sixth-order polynomial equation \(P_6(\lambda )=0\), where

$$\begin{aligned} P_6(\lambda )=a_0\lambda ^6+a_1\lambda ^5+a_2\lambda ^4+a_3\lambda ^3+a_4\lambda ^2+a_5\lambda +a_6. \end{aligned}$$
(33)

Again, this can be achieved using any standard computational software. Note that in (33) we have

$$\begin{aligned} a_0&= 1, \end{aligned}$$
(34)
$$\begin{aligned} a_1&= J_3+(\sigma _{12}-D_1-a)J_4, \end{aligned}$$
(35)
$$\begin{aligned} a_2&= J_2+(\sigma _{12}-D_1-a)J_3 + (D_1a-\sigma _{12}a+D_2b)J_4+L_4, \end{aligned}$$
(36)
$$\begin{aligned} a_3&= J_1+(\sigma _{12}-D_1-a)J_2 + (D_1a-\sigma _{12}a+D_2b)J_3+L_3, \end{aligned}$$
(37)
$$\begin{aligned} a_4&= J_0+(\sigma _{12}-D_1-a)J_1 + (D_1a-\sigma _{12}a+D_2b)J_2+L_2, \end{aligned}$$
(38)
$$\begin{aligned} a_5&= (\sigma _{12}-D_1-a)J_0 + (D_1a-\sigma _{12}a+D_2b)J_1+L_1, \end{aligned}$$
(39)
$$\begin{aligned} a_6&=(D_1a-\sigma _{12}a+D_2b)J_0+L_0. \end{aligned}$$
(40)

We observe a clear pattern in the expressions for \(a_0\)–\(a_6\). Furthermore, the terms \(J_i\) are defined as

$$\begin{aligned} J_0&= (\sigma _{21}-\sigma _{23}+D_3+\mu )(D_4a-\sigma _{32}a+D_5b)-\sigma _{23}\sigma _{32}b, \end{aligned}$$
(41)
$$\begin{aligned} J_1&= (\sigma _{21}-\sigma _{23}+D_3+\mu )(\sigma _{23}+\sigma _{32}-\sigma _{21}-D_3-D_4-a-1)+ \nonumber \\&\sigma _{23}\sigma _{32}b, \end{aligned}$$
(42)
$$\begin{aligned} J_2&= (D_4 - \sigma _{32})a +D_5b +(\sigma _{23}-\sigma _{21}-D_3-1)(\sigma _{32}-a-D_4) +\nonumber \\&\sigma _{21}-\sigma _{23}+D_3+\mu , \end{aligned}$$
(43)
$$\begin{aligned} J_3&= \sigma _{23}+\sigma _{32}-\sigma _{21}-D_4-D_3-a-1, \end{aligned}$$
(44)
$$\begin{aligned} J_4&= 1, \end{aligned}$$
(45)

and the terms \(L_i\) are defined as

$$\begin{aligned} L_0&= a(D_4a-\sigma _{32}a+D_5b), \end{aligned}$$
(46)
$$\begin{aligned} L_1&= a(\sigma _{32}-a-D_4) - (a+1)(D_4a - \sigma _{32}a +D_5b), \end{aligned}$$
(47)
$$\begin{aligned} L_2&= a(D_4-\sigma _{32}+1) - (a+1)(\sigma _{32}-a-D_4) +D_5b, \end{aligned}$$
(48)
$$\begin{aligned} L_3&= \sigma _{32}-2a-D_4-1, \end{aligned}$$
(49)
$$\begin{aligned} L_4&= 1. \end{aligned}$$
(50)

Note that the terms \(D_i\) are defined in terms of \(x_j^*\) as,

$$\begin{aligned} D_1&= x_1^*(2-x_1^*)e^{y_1^*-x_1^*}, \end{aligned}$$
(51)
$$\begin{aligned} D_2&= {x_1^*}^2e^{y_1^*-x_1^*}, \end{aligned}$$
(52)
$$\begin{aligned} D_3&= -\frac{2\alpha x_2^*}{(1+{x_2^*}^2)^2}, \end{aligned}$$
(53)
$$\begin{aligned} D_4&= x_3^*(2-x_3^*)e^{y_3^*-x_3^*}, \end{aligned}$$
(54)
$$\begin{aligned} D_5&= {x_3^*}^2e^{y_3^*-x_3^*}. \end{aligned}$$
(55)
Fig. 3
figure 3

Schematic representation of the dynamics at the saddle fixed point (denoted by black square). The red curves denote the unstable manifolds, and the blue curves denote the stable manifolds. In (a), we observe saddle-focus dynamics, and in (b), we observe stable-focus dynamics

To check the stability of \(X^*\), we have to carefully monitor the eigenvalues \(\lambda _i\), \(i=1, \ldots , 6\). If all \(|\lambda _i |<1\), then \(X^*\) is locally stable; if all \(|\lambda _i|>1\), then \(X^*\) is locally unstable; otherwise, the fixed point is a \(k-\)saddle, where k is the number of eigenvalues whose absolute value is \(>1\).
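A numerical sketch of this classification: build \({\mathcal {J}}\) from (27)–(32) at a given point and count the eigenvalues outside the unit circle. The evaluation point below is our own illustrative estimate of an uncoupled fixed point (all \(\sigma_{ij}=0\)), not a value taken from Table 1:

```python
import numpy as np

def jacobian(X, a, b, alpha, mu, s12, s21, s23, s32):
    """Jacobian (27) of (5)-(10), with entries (28)-(32), evaluated at X."""
    x1, y1, x2, y2, x3, y3 = X
    Jx1x1 = x1 * (2 - x1) * np.exp(y1 - x1) - s12
    Jx1y1 = x1**2 * np.exp(y1 - x1)
    Jx2x2 = -2 * alpha * x2 / (1 + x2**2)**2 + s21 - s23
    Jx3x3 = x3 * (2 - x3) * np.exp(y3 - x3) - s32
    Jx3y3 = x3**2 * np.exp(y3 - x3)
    return np.array([
        [Jx1x1, Jx1y1,  s12,   0.0,  0.0,   0.0],
        [-b,    a,      0.0,   0.0,  0.0,   0.0],
        [-s21,  0.0,    Jx2x2, 1.0,  s23,   0.0],
        [0.0,   0.0,    -mu,   1.0,  0.0,   0.0],
        [0.0,   0.0,    s32,   0.0,  Jx3x3, Jx3y3],
        [0.0,   0.0,    0.0,   0.0,  -b,    a],
    ])

def k_value(J):
    """Number k of eigenvalues with modulus > 1 (a k-saddle if 0 < k < 6)."""
    return int(np.sum(np.abs(np.linalg.eigvals(J)) > 1.0))

# illustrative evaluation at an assumed fixed-point estimate
Xs = (-0.22, 2.55, -0.5, -4.5, -0.22, 2.55)
J = jacobian(Xs, a=0.6, b=0.6, alpha=5.0, mu=0.01,
             s12=0.0, s21=0.0, s23=0.0, s32=0.0)
k = k_value(J)
```

With zero couplings, \({\mathcal {J}}\) is block-diagonal, so its spectrum is the union of the spectra of the three 2×2 blocks.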

For example, in Table 1 we list the set of six eigenvalues \(\lambda _i\) on the parameter grid \((\sigma _{12}, \sigma _{21})\), keeping fixed \(\sigma _{23}=\sigma _{32}=2\) and \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.01\), and \(\gamma =-0.5\). The table lists fixed points of \(k-\)saddle type for \(k=2,\ldots , 5\) and an unstable fixed point. Note that Fig. 4 is a two-parameter bifurcation diagram where we color code the grid according to the type of fixed point generated by the system at that specific parameter combination. Figure 4a shows the color-coded plot on the \((\sigma _{12}, \sigma _{21})\) plane for \(\sigma _{23}=2, \sigma _{32}=2\) and \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.01\), and \(\gamma =-0.5\). Note that there exist distinct bifurcation boundaries where the type of the saddle fixed point changes from having \(k=2,3,4\) to having \(k=4,5,6\) or \(k=2,3\) or \(k=4,5\). Figure 4b shows a similar plot but on the parameter plane \((\sigma _{23}, \sigma _{32})\) with \(\sigma _{12}=-1, \sigma _{21}=1\), and the other local parameters kept the same except \(\mu\), which is set to 0.0001. We observe a region where the fixed point is a 4-saddle and has a boundary with two types of regions where the k values are either 3, 4 or 4, 5. The initial condition for all the dynamical variables is sampled randomly from the uniform distribution on (0.2, 0.3).

Fig. 4
figure 4

Two-dimensional color-coded stability region plots. Both panels show \(k-\)saddle type and unstable fixed points. Panel (a) shows the \((\sigma _{12}, \sigma _{21})\) plane with \(\sigma _{23}=\sigma _{32}=2\) and \(\mu =0.01\), and panel (b) shows the \((\sigma _{23}, \sigma _{32})\) plane with \(\sigma _{12}=-1\), \(\sigma _{21}=1\), and \(\mu =0.0001\). The other parameters are kept as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), and \(\gamma =-0.5\). For the saddle-type fixed points, k varies from 2 to 5. Interestingly, no stable fixed points were found in the reported region of \(\sigma _{ij}\) values

Table 1 Eigenvalues associated with the stability analysis of the fixed points at the parameter points \((\sigma _{12}, \sigma _{21})\), with the fixed coupling strengths \(\sigma _{23}=2\) and \(\sigma _{32}=2\). The other parameters of the system are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0 = -1\), \(\alpha = 5\), \(\mu =0.01\), and \(\gamma =-0.5\)

From Table 1, we can understand the dimensions of the stable and unstable manifolds around the saddle fixed point. Even though this fixed point exists in a six-dimensional space, it is instructive to consider the simpler one-dimensional manifolds and spiral motions to grasp the overall dynamics near it. For instance, when \((\sigma _{12}, \sigma _{21}) = (0.3, 0.9)\), trajectories near the fixed point spiral outward; this expanding spiral motion occurs within a two-dimensional plane embedded in the six-dimensional space. Additionally, there is a one-dimensional stable manifold perpendicular to this plane, as shown in Fig. 3a. Similarly, in Fig. 3b, trajectories spiral inward toward the fixed point within a two-dimensional plane, with a perpendicular one-dimensional unstable manifold.

Note that keeping track of the eigenvalues of the characteristic equation \(P_6(\lambda )\) gives us analytical intuition into the different bifurcation patterns that might arise in the dynamics of (5)–(10), see Fig. 5. Here we have plotted the behavior of \(\lambda _i\) as \(\sigma _{12}\) varies. The local parameters of the oscillators are \(a=0.6\), \(b = 0.6\), \(c = 0.89\), \(k_0=-1\), \(\alpha = 5\), \(\mu = 0.0001\), and \(\gamma = -0.5\). The other coupling strengths are \(\sigma _{21}=-2\), \(\sigma _{23}=1.5\), and \(\sigma _{32}=-1.5\). We have colored the points according to the index of the eigenvalues. Some important observations follow. We see that \(\lambda _1\) is always real, whereas \(\lambda _i\) for \(i=2, \ldots , 5\) can be either real or complex-valued. Finally, \(\lambda _6\) is always real and \(\approx 1\). When a real eigenvalue crosses \(+1\), the system undergoes a saddle-node (fold) bifurcation, and when one crosses \(-1\), a period-doubling (flip) bifurcation. When a complex-conjugate pair of eigenvalues attains modulus 1, the system undergoes a Neimark–Sacker bifurcation. These bifurcations are detailed in Sect. 6.
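Tracking where the largest eigenvalue modulus crosses 1, as in Fig. 5, can be sketched numerically. In the sketch below, the function returning the coefficients \(a_0, \ldots , a_6\) of \(P_6\) from (34)–(40) is replaced by a toy placeholder (a polynomial with one root sweeping through the unit circle), purely to illustrate the crossing-detection machinery:

```python
import numpy as np

def max_modulus(coeffs):
    """Largest root modulus of P6, with coeffs = [a0, ..., a6]
    ordered from the leading coefficient down, as np.roots expects."""
    return np.abs(np.roots(coeffs)).max()

def scan_unit_circle_crossings(coeff_fn, sigmas):
    """Bracket the parameter values at which max_i |lambda_i| crosses 1."""
    mods = [max_modulus(coeff_fn(s)) for s in sigmas]
    return [(s0, s1)
            for s0, s1, m0, m1 in zip(sigmas, sigmas[1:], mods, mods[1:])
            if (m0 - 1.0) * (m1 - 1.0) < 0]

# Toy coefficient family: one real root equal to s, five fixed roots
# inside the unit circle; in practice the a_i come from (34)-(40).
def toy_coeffs(s):
    return np.poly([s, 0.5, 0.3, -0.2, 0.1, -0.4])

print(scan_unit_circle_crossings(toy_coeffs, np.linspace(0.8, 1.2, 4)))
# one bracketing interval around s = 1
```

Whether the crossing eigenvalue is real near \(+1\), real near \(-1\), or one of a complex pair then distinguishes the fold, flip, and Neimark–Sacker cases.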

Fig. 5

Solutions of \(P_6(\lambda )\) (33) with varying \(\sigma _{12}\). The local parameters of the oscillators are \(a=0.6\), \(b = 0.6\), \(c = 0.89\), \(k_0=-1\), \(\alpha = 5\), \(\mu = 0.0001\), and \(\gamma = -0.5\). The other coupling strengths are \(\sigma _{21}=-2\), \(\sigma _{23}=1.5\), and \(\sigma _{32}=-1.5\). The six eigenvalues are colored according to the legends mentioned in the plot. The three subplots show the real parts, the imaginary parts, and the absolute values of the eigenvalues. \(\lambda _1\) is always real, whereas \(\lambda _i\) for \(i=2, \ldots , 5\) can be both real and complex-valued. Finally, \(\lambda _6\) is always real and \(\approx 1\)

5 Bifurcation structure of dynamical variables and coexistence

In this section, we report the bifurcation structure of the action potentials with respect to a varying coupling strength, giving us intuition about the emergence of chaotic and periodic attractors. This is achieved via both forward and backward parameter sweeps. The system is simulated for 40000 iterations, of which the last 500 iterates are plotted for each value of the parameter of interest, see Fig. 6. The sweep is performed over the parameter range \(\sigma _{12} \in [0.09, 0.1]\), with the forward points marked in red and the backward points in black in the same plot environment. We have set the rest of the coupling strengths as \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). The local parameters for the oscillators are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). Plotting the points in the same environment allows us to report the coexistence of periodic and chaotic attractors (with different initial conditions for the dynamical variables).
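The forward/backward sweep protocol can be sketched as follows; the one-dimensional logistic map stands in for the six-dimensional map (5)–(10), purely to illustrate the continuation-of-state bookkeeping:

```python
import numpy as np

def sweep(step, params, x0, n_iter=40000, n_keep=500):
    """Parameter sweep for a map x -> step(x, p): at each value of p,
    continue iterating from the final state at the previous value and
    record only the last n_keep iterates (the plotted points)."""
    x, branches = x0, []
    for p in params:
        for _ in range(n_iter - n_keep):   # discard transients
            x = step(x, p)
        tail = []
        for _ in range(n_keep):
            x = step(x, p)
            tail.append(x)
        branches.append(tail)
    return branches

# Logistic map as a 1D stand-in for the 6D map (5)-(10):
def logistic(x, r):
    return r * x * (1.0 - x)

rs = np.linspace(2.8, 3.2, 9)
forward = sweep(logistic, rs, 0.3, n_iter=2000, n_keep=8)
backward = sweep(logistic, rs[::-1], 0.3, n_iter=2000, n_keep=8)
```

Overlaying the `forward` and `backward` branches on the same axes is what exposes hysteresis loops when the two sweeps settle on different attractors at the same parameter value.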

Notice that for \(\sigma _{12} \in [0.092, 0.09244]\) approximately, the model exhibits chaotic behavior, after which, in the range \(\sigma _{12} \in [0.09244, 0.099475]\), the model exhibits windows of periodicity for both the forward and the backward sweeps (though not at exactly the same parameter values in the two cases). This means that at certain parameter values (for example, \(\sigma _{12}=0.096\)) there is a coexistence of chaotic and periodic attractors (for different initial conditions): the forward sweep indicates the existence of a chaotic attractor, and the backward sweep indicates the existence of a period-4 solution. These give rise to the hysteresis loops clearly observed in Fig. 6. When the attractor is periodic, it has period four, see Figs. 7c and 8a. Finally, for \(\sigma _{12} > 0.099475\), the dynamics fall back into the chaotic region. For typical phase portraits when the attractors are chaotic, see Figs. 7a, b and 8b, c. In the phase portraits, out of the total 80000 iterates, only the last 20000 points are plotted to ensure that the transients are discarded.

Coexistence occurs when the system reaches two different behaviors (in this case, a period-4 attractor and a chaotic attractor) depending on its initial conditions. As we have randomized our initial conditions (choosing values uniformly from [0.2, 0.3] for all six dynamical variables), we obtain two separate initial conditions, one for the period-4 attractor and one for the chaotic attractor. We have illustrated this in Fig. 8. In the first row (panel (a)), for the parameter value \(\sigma _{12}=0.096\), we have \(X(0) = (0.23543643\), 0.23928397, 0.27790324, 0.22462858, 0.2949352, 0.23620372), with the system reaching a period-4 attractor, whereas in the second row (panels (b), (c)), we have \(X(0) = (0.20190176\), 0.29863965, 0.23375426, 0.21215444, 0.2095847, 0.24849442) for the same parameter value, and the system reaches a chaotic attractor. This confirms the coexistence.

Fig. 6

Bifurcation structures of action potentials computed via a forward (red dots) and backward (black dots) transition, with the variation of the bifurcation parameter \(\sigma _{12}\). The other coupling strengths are \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). The local parameters for the oscillators are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). The coexistence of chaotic and period-4 attractors is observed as exhibited by the presence of hysteresis loops. Out of 40000 iterations at each \(\sigma _{12}\), only the last 500 are plotted. The blue dashed lines indicate the \(\sigma _{12}\) values where we notice chaotic attractors (\(\sigma _{12}=0.092\)), period-4 attractors (\(\sigma _{12}=0.094\)), and the coexistence of both (\(\sigma _{12}=0.096\))

Fig. 7

Phase portraits of (5)–(10) with changing \(\sigma _{12}\). Other coupling strengths are \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). The local parameters for the oscillators are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). Phase portraits are drawn according to the \(\sigma _{12}\) values represented by the blue broken lines in Fig. 6. The last 20000 iterates are plotted out of the 80000 iterates run. At \(\sigma _{12}=0.092\), we notice a chaotic attractor (first row) for all the nodes, whereas, at \(\sigma _{12}=0.094\), we see that the attractor is periodic (second row) with period-4

Fig. 8

Phase portraits of (5)–(10) with changing \(\sigma _{12}\). Other coupling strengths are \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). Phase portraits are drawn according to the \(\sigma _{12}\) values represented by the blue broken lines in Fig. 6. The local parameters for the oscillators are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). The last 20000 iterates are plotted out of the 80000 iterates run. This shows a coexistence of periodic and chaotic attractors at \(\sigma _{12}=0.096\). In the first row (panel (a)), we notice a period-4 attractor for all the nodes, whereas, in the second row (panel (b), (c)) at the same \(\sigma _{12}\) we see that the attractor is chaotic. Initial conditions in the first row for the period-4 attractor are \(X(0) = (0.23543643, 0.23928397, 0.27790324, 0.22462858, 0.2949352, 0.23620372)\), whereas the initial conditions chosen for the second row showing chaotic attractors for all the nodes are \(X(0) = (0.20190176, 0.29863965, 0.23375426, 0.21215444, 0.2095847, 0.24849442)\)

6 Codimension-1 and -2 bifurcation patterns

To investigate what type of complex dynamics our network is capable of, it must be studied with sophisticated dynamical tools such as codimension-1 and -2 bifurcation analysis. These give the reader an overall picture of the influence of the relevant parameters on the behavior of the network. We have considered the coupling strengths \(\sigma _{ij}\) as the main bifurcation parameters and have kept the local parameters of the oscillators fixed at \(a=0.6, b=0.6, c=0.89, k_{0}=-1, \alpha =5, \mu =0.0001\) and \(\gamma =-0.5\). At first, we put forward three theorems corresponding to codimension-1 bifurcations that arise in our map when one of the eigenvalues of \({\mathcal {J}}\) at the fixed point \(X^*\) has modulus equal to 1. These are analytically tractable and are also supported by numerical results in the latter half of this section. Given that the algebraic calculations for codimension-2 bifurcations are difficult and time-consuming, we only provide numerical results for those. We use MatContM to report the numerical bifurcation patterns. Numerical bifurcation analysis using MatContM was also extensively employed to study the three-dimensional memristive Chialvo neuron map by Muni et al. [29]. The summary of codimension-1 and -2 bifurcation types observed is presented in Table 2.

Table 2 Abbreviations of codimension-one and codimension-two bifurcations

We first establish the conditions for the occurrence of codimension-1 bifurcation patterns (LP, PD, and NS) through Theorems 1, 2, and 3.

Theorem 1

Suppose

$$\begin{aligned}&\left[ (1-a)(1+\sigma _{12}-D_1)+D_2b \right] (1+J_0+J_1+J_2+J_3) \nonumber \\&\quad \quad + (L_0+L_1+L_2+L_3+L_4) = 0. \end{aligned}$$
(56)

Then model (5)–(10) undergoes a saddle-node bifurcation.

Proof

Saddle-node bifurcation occurs when the Jacobian matrix \({\mathcal {J}}\) has an eigenvalue 1 at \(X^*\). Using the characteristic equation (33), we set \(P_6(1)=0\). Solving this gives us

$$a_0+a_1+a_2+a_3+a_4+a_5+a_6 = 0,$$

which after some simple algebra reduces back to (56). \(\hfill\square\)

Theorem 2

Suppose

$$\begin{aligned}&\left[ (1+a)(1-\sigma _{12}+D_1)+D_2b \right] (1+J_0-J_1+J_2-J_3)\nonumber \\&\quad \quad + (L_0-L_1+L_2-L_3+L_4) = 0. \end{aligned}$$
(57)

Then model (5)–(10) undergoes a period-doubling bifurcation.

Proof

Period-doubling bifurcation occurs when the Jacobian matrix \({\mathcal {J}}\) has an eigenvalue \(-1\) at \(X^*\). Using the characteristic equation (33), we set \(P_6(-1)=0\). Solving this gives us

$$a_0-a_1+a_2-a_3+a_4-a_5+a_6 = 0,$$

which after some simple algebra reduces back to (57). \(\hfill\square\)
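Numerically, the fold and flip conditions of Theorems 1 and 2 amount to evaluating \(P_6(\pm 1)\) as the plain and alternating sums of the coefficients \(a_0, \ldots , a_6\). A minimal check with illustrative coefficients (the actual \(a_i\) come from (34)–(40)):

```python
def fold_condition(a):
    """P6(+1) = a0 + a1 + ... + a6; a zero signals an eigenvalue at +1
    (saddle-node/fold, Theorem 1)."""
    return sum(a)

def flip_condition(a):
    """P6(-1) = a0 - a1 + a2 - ... + a6; a zero signals an eigenvalue
    at -1 (period-doubling/flip, Theorem 2)."""
    return sum(c if i % 2 == 0 else -c for i, c in enumerate(a))

# Illustrative coefficients of (lambda - 1)(lambda^5 + 2)
#   = lambda^6 - lambda^5 + 2*lambda - 2, which has an eigenvalue at +1:
a = [1, -1, 0, 0, 0, 2, -2]
print(fold_condition(a))  # -> 0
print(flip_condition(a))  # -> -2
```

In a parameter sweep, a sign change of either quantity between consecutive grid points brackets the corresponding bifurcation.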

Another interesting bifurcation is the Neimark–Sacker bifurcation where \({\mathcal {J}}\) generates a complex eigenvalue having modulus 1.

Theorem 3

Suppose

$$\begin{aligned}&{\mathcal {D}}^{3/2}+J_3{\mathcal {D}}^{5/4}+J_2{\mathcal {D}} +J_1{\mathcal {D}}^{3/4}+J_0\sqrt{{\mathcal {D}}} \nonumber \\&\quad \quad +(\sigma _{12}-D_1-a)\left( {\mathcal {D}}^{5/4}+J_3{\mathcal {D}}+J_2{\mathcal {D}}^{3/4}+J_1\sqrt{{\mathcal {D}}} + J_0\root 4 \of {{\mathcal {D}}}\right) \nonumber \\&\quad \quad +(D_1a-\sigma _{12}a+D_2b)\left( {\mathcal {D}} + J_3{\mathcal {D}}^{3/4}+J_2\sqrt{{\mathcal {D}}} + J_1\root 4 \of {{\mathcal {D}}} + J_0 \right) \nonumber \\&\quad \quad +\left( L_4{\mathcal {D}} + L_3{\mathcal {D}}^{3/4} + L_2\sqrt{{\mathcal {D}}}+L_1\root 4 \of {{\mathcal {D}}}+L_0 \right) = 0. \end{aligned}$$
(58)

Then model (5)–(10) undergoes a Neimark–Sacker bifurcation.

Proof

Neimark–Sacker bifurcation occurs when an eigenvalue of \({\mathcal {J}}\) is complex with modulus 1. This is possible for our model when \(\lambda ^4={\mathcal {D}}\). We remind the reader that \({\mathcal {D}}\) is the determinant of the Jacobian \({\mathcal {J}}\). Here we utilize the fact from matrix algebra that the product of the eigenvalues will equal the determinant of \({\mathcal {J}}\), i.e., \({\mathcal {D}}=\lambda ^6\). Thus Neimark–Sacker bifurcation occurs in our model (5)–(10) if \(P_6(\root 4 \of {{\mathcal {D}}})=0\). This means

$$a_0{\mathcal {D}}^{3/2}+a_1{\mathcal {D}}^{5/4}+a_2{\mathcal {D}}+a_3{\mathcal {D}}^{3/4}+a_4\sqrt{{\mathcal {D}}}+a_5\root 4 \of {{\mathcal {D}}}+a_6=0,$$

which after some algebraic manipulation reduces to (58) (leveraging (34)–(40)). \(\hfill\square\)

6.1 Numerical bifurcation analysis

Figure 9 shows a codimension-1 bifurcation diagram of the map (5)–(10) in the \((\sigma _{12},x_1)\)-plane. For large values of \(\sigma _{12}\), the system has a single fixed point curve. As \(\sigma _{12}\) decreases, a supercritical period-doubling bifurcation (PD) occurs, with normal form coefficient \(1.80\times 10^{2}\), at \((\sigma _{12}, x_1)=(-1.9032, -0.1102)\). A further decrease in \(\sigma _{12}\) results in a fold bifurcation (LP1) at \((\sigma _{12}, x_1)=(-2.0530, -0.0477)\), producing two fixed point curves. The upper branch then undergoes another fold bifurcation (LP2) at \((\sigma _{12}, x_1)=(-0.7598, 0.7134)\), generating a third branch of the fixed point curve. This implies that between LP1 and LP2, the map (5)–(10) has three fixed points. Additionally, a Neimark–Sacker (NS1) bifurcation was detected at \((\sigma _{12}, x_1)=(-2.0504, -0.0379)\) along the upper branch of the fixed point curve, close to LP1. Extending the numerical continuation along the curve of fixed points, we detected further Neimark–Sacker (NS2) and fold (LP3) bifurcations along the third branch of the fixed point curves, at \((\sigma _{12}, x_1)=(-0.9617, 1.3600)\) and \((\sigma _{12}, x_1)=(-1.1316, 2.7337)\), respectively.

Fig. 9

Codimension-1 bifurcation diagram of the map (5)–(10) with \(\sigma _{12}\) as the bifurcation parameter. Solid black curves correspond to fixed points of the map. The labels for the codimension-1 bifurcations are explained in Table 2

Figure 10 depicts the codimension-1 bifurcation diagram of the map (5)–(10) in the \((\sigma _{12},\sigma _{21})\)-plane. The figure is composed of the curves of codimension-1 bifurcations detected in Fig. 9. The blue, green, and magenta curves represent the loci of the period-doubling bifurcation (PD), fold bifurcation (LP), and Neimark–Sacker bifurcation (NS), respectively. The continuation of the period-doubling bifurcation produces two codimension-2 points: the flip-Neimark–Sacker (PDNS) at \((\sigma _{12},\sigma _{21})=(-1.7226, 1.4820)\) and the 1:2 resonance (R2) at \((\sigma _{12},\sigma _{21})=(-1.5307, 2.2929)\). Tracing the LP1 curve, we detected a 1:1 resonance (R1) at \((\sigma _{12},\sigma _{21})=(-2.0530, -0.00004)\). This is a codimension-2 point from which the curve of Neimark–Sacker (NS1) emerges. Extending the continuation NS1 curve, we detected two codimension-2 points. First, we have the double Neimark–Sacker (NSNS) bifurcation at \((\sigma _{12},\sigma _{21})=(-1.9981, 0.8582)\), and the second is the flip-Neimark–Sacker (PDNS) bifurcation at \((\sigma _{12},\sigma _{21})=(-1.7226, 1.4820)\).

Next, the LP2 bifurcation is selected for continuation, and we detected another 1:1 resonance (R1) at \((\sigma _{12},\sigma _{21})=(-0.7598, 0.0003)\). Similarly, the curve of Neimark–Sacker (NS2) emanates from this codimension-2 point. Extending the continuation along the LP2 curve results in a fold-Neimark–Sacker (LPNS) bifurcation at \((\sigma _{12},\sigma _{21})=(-0.7598, 4.5233)\) and a fold-flip (LPPD) bifurcation at \((\sigma _{12},\sigma _{21})=(-0.7598, 5.4222)\). Lastly, continuation of the LP3 bifurcation produces the LP3 curve, and along the curve, we found a 1:1 resonance (R1) at \((\sigma _{12},\sigma _{21})=(-1.1316, 0.00006)\).

Fig. 10

Codimension-2 bifurcation diagram in \((\sigma _{12},\sigma _{21})\)-plane. The green, blue, and magenta curves are the loci of the LP, PD, and NS bifurcations. The labels for the codimension-2 bifurcations are explained in Table 2

7 Synchronization measures

After analyzing our model in terms of dynamical tools, we now look into the collective behavior that arises from the dynamics of each oscillator in the network. This is usually achieved by studying the synchronization behavior of the whole network. To do so, we employ two quantitative metrics commonly used in the literature: the cross-correlation coefficient and the Kuramoto order parameter.

7.1 Cross-correlation coefficient

Our model has three oscillators, with the second oscillator connected to both the first and the third oscillators by pairs of links. This calls for formulating the cross-correlation coefficient between the first and the second oscillators and also between the second and the third oscillators before taking the average, which indicates the global synchronization pattern of the network as a whole. Let us first define the cross-correlation coefficient between the first and the second oscillators as

$$\begin{aligned} \Gamma _{12} = \frac{\langle {\tilde{x}}_1(n) {\tilde{x}}_2(n) \rangle }{\sqrt{\langle {\tilde{x}}_1(n)^2 \rangle \langle {\tilde{x}}_2(n)^2 \rangle }}. \end{aligned}$$
(59)

Similarly, between the second and the third oscillator, it is given by

$$\begin{aligned} \Gamma _{23} = \frac{\langle {\tilde{x}}_2(n) {\tilde{x}}_3(n) \rangle }{\sqrt{\langle {\tilde{x}}_2(n)^2 \rangle \langle {\tilde{x}}_3(n)^2 \rangle }}. \end{aligned}$$
(60)

Thus, the averaged cross-correlation coefficient is given by

$$\begin{aligned} \Gamma = \frac{1}{2}\left( \Gamma _{12} + \Gamma _{23}\right) . \end{aligned}$$
(61)

The averages in (59) and (60) are calculated after the transient dynamics are discarded. The symbol \({\tilde{x}}_i(n)\) refers to the deviation from the mean \(\langle x_i(n) \rangle\), i.e., \({\tilde{x}}_i(n) = x_i(n) - \langle x_i(n) \rangle\), where \(i=1,2,3\). The symbol \(\langle \cdot \rangle\) denotes the average of the action potential over time. In our simulations for calculating the synchronization measures, we take 80000 iterates and discard the first 40000 to ensure no transients creep in. When \(\Gamma = 1\), the network has reached complete in-phase synchrony, whereas \(\Gamma =-1\) represents the network reaching complete anti-phase synchrony. Any value \(\Gamma \in (-1, 1)\) denotes partial synchrony to asynchrony.
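Equations (59)–(61) translate directly into code; a minimal sketch, assuming the post-transient action-potential series are available as NumPy arrays:

```python
import numpy as np

def cross_corr(x, y):
    """Zero-lag cross-correlation coefficient of two action-potential
    series, as in (59)-(60): time-mean deviations, normalized."""
    xt, yt = x - x.mean(), y - y.mean()
    return (xt * yt).mean() / np.sqrt((xt ** 2).mean() * (yt ** 2).mean())

def network_gamma(x1, x2, x3):
    """Averaged coefficient (61) for the three-oscillator chain."""
    return 0.5 * (cross_corr(x1, x2) + cross_corr(x2, x3))

# Sanity check with synthetic series: identical signals give Gamma = 1
# (in-phase), a sign-flipped middle signal gives Gamma = -1 (anti-phase).
t = np.linspace(0.0, 10.0, 1000)
s = np.sin(t)
print(network_gamma(s, s, s), network_gamma(s, -s, s))
```

Only \(\Gamma _{12}\) and \(\Gamma _{23}\) enter the average because the chain has no direct link between the first and third oscillators.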

In Fig. 11, we visualize a collection of two-dimensional color-coded plots, where the grid pixels are colored according to the value of \(\Gamma\) and the space is represented by the parameter combination given by either \((\sigma _{12},\sigma _{21})\) keeping \(\sigma _{23}\) and \(\sigma _{32}\) fixed (first row) or \((\sigma _{23}, \sigma _{32})\) keeping \(\sigma _{12}\) and \(\sigma _{21}\) fixed (second row). In both rows, the varying coupling strengths lie in \([-0.12, 0.12]\). The local parameter values are set to \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0 = -1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). Panel (a) has \(\sigma _{23}=\sigma _{32}=0.12\). We observe that \(-0.533 \le \Gamma \le 0.004\); the cross-correlation coefficient has a maximum value of \(\approx 0\). In the range \(-0.1 \le \sigma _{12} \le -0.068\), \(0.08885 \le \sigma _{21} \le 0.12\), the values of \(\Gamma\) are the lowest, \(\approx -0.531\), illustrated by the black pixels. Furthermore, we see a reddish patch in the top right corner where \(\Gamma \approx -0.2\) and a purple patch near the upper boundary where \(\Gamma \approx -0.44\). Otherwise, the whole parameter region is yellowish with \(\Gamma \approx 0\). This tells us that overall the system mostly remains asynchronous, because the model is a chain with no links between the first and the third oscillators, allowing freer oscillations compared to a ring network. Panel (b) has \(\sigma _{23}=-\sigma _{32}=-0.12\). We observe \(-0.01 \le \Gamma \le 0.25\). The system as a whole remains asynchronous, tending toward more synchronous behavior as \(\sigma _{21}\) decreases. Panel (c) has \(\sigma _{23}=-\sigma _{32}=0.12\). At first, we report a handful of white pixels near the upper left boundary, corresponding to the diverging dynamics of the system. 
Other than that, the system has \(-0.739 \le \Gamma \le 0.029\), meaning the system as a whole exhibits asynchronous behavior. In the domain \(-0.12 \le \sigma _{12} \le -0.1\) and \(0.05 \le \sigma _{21} \le 0.11\), there appears a pool of pixels in the purple-to-black color range representing \(-0.7 \le \Gamma \le -0.4\). This indicates that the system is approaching anti-phase synchronization. In the rest of the domain, \(\Gamma \approx 0\). Panel (d) has \(\sigma _{23}=\sigma _{32}=-0.12\), with the system mostly oscillating asynchronously (\(-0.07 \le \Gamma \le 0.121\)). An interesting phenomenon is observed in the second row, where we fix \(\sigma _{12}, \sigma _{21}\) and vary \(\sigma _{23}, \sigma _{32}\). The qualitative behavior of the second row remains similar to the first row with some subtle differences. Panel (e) with \(\sigma _{12}=\sigma _{21}=0.12\) becomes rotationally symmetric to panel (a), panel (f) with \(\sigma _{12}=-\sigma _{21}=-0.12\) becomes rotationally symmetric to panel (c), panel (g) with \(\sigma _{12}=-\sigma _{21}=0.12\) becomes rotationally symmetric to panel (b), and panel (h) with \(\sigma _{12}=\sigma _{21}=-0.12\) becomes rotationally symmetric to panel (d).

Fig. 11

A collection of two-dimensional color-coded plots showing the cross-correlation coefficient \(\Gamma\). The first row shows the \((\sigma _{12}, \sigma _{21})\) plane, whereas the second row shows \((\sigma _{23}, \sigma _{32})\) plane. The local parameter values are set to be \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0 = -1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). The other two coupling strengths are set as (a) \(\sigma _{23}=\sigma _{32}=0.12\), (b) \(\sigma _{23}=-\sigma _{32}=-0.12\), (c) \(\sigma _{23}=-\sigma _{32}=0.12\), (d) \(\sigma _{23}=\sigma _{32}=-0.12\), (e) \(\sigma _{12}=\sigma _{21}=0.12\), (f) \(\sigma _{12}=-\sigma _{21}=-0.12\), (g) \(\sigma _{12}=-\sigma _{21}=0.12\), (h) \(\sigma _{12}=\sigma _{21}=-0.12\). We mostly notice asynchrony and partial synchrony in the whole parameter domain

7.2 Kuramoto order parameter

Another quantitative measure that has been proliferating in the synchronization literature is the Kuramoto order parameter, represented by the index I, first introduced to study the phase coherence behavior in Kuramoto oscillators.

In order to define I, we first need the instantaneous phase \(\Theta _m\) of an oscillator m at time step n, given by

$$\begin{aligned} \Theta _m(n) = \tan ^{-1}\left( \frac{y_m(n)}{x_m(n)}\right) . \end{aligned}$$
(62)

This is utilized to define the complex-valued index

$$\begin{aligned} I_m(n) = e^{i \Theta _m(n)}, \end{aligned}$$
(63)

where \(i = \sqrt{-1}\). Furthermore, at time step n, the index I(n) for our model (5)–(10) is given by

$$\begin{aligned} I(n) = \left| \frac{1}{3}\sum _{m=1}^3 I_m(n)\right| , \end{aligned}$$
(64)

where the notation inside the absolute value symbol represents the mean of all phases of the three oscillators inside the unit circle at iteration n. Finally, the index average over time is given by

$$\begin{aligned} I = \langle I(n) \rangle . \end{aligned}$$
(65)
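Equations (62)–(65) can be computed together in a few lines; a minimal sketch, with illustrative phase-locked series used only as a sanity check:

```python
import numpy as np

def kuramoto_index(x, y):
    """Time-averaged Kuramoto order parameter, following (62)-(65).
    x and y are arrays of shape (3, N): the two state variables of the
    three oscillators over N time steps. np.arctan2 is the
    quadrant-aware version of tan^{-1}(y/x)."""
    theta = np.arctan2(y, x)                        # phases, (62)
    I_n = np.abs(np.exp(1j * theta).mean(axis=0))   # per-step index, (64)
    return I_n.mean()                               # time average, (65)

# Sanity checks with synthetic phases: identical oscillators give I = 1,
# while three phases spread 120 degrees apart give I = 0.
n = np.arange(1000)
locked = np.tile(0.1 * n, (3, 1))
spread = 0.1 * n + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])[:, None]
print(kuramoto_index(np.cos(locked), np.sin(locked)),
      kuramoto_index(np.cos(spread), np.sin(spread)))
```

In the model itself, x and y are the action-potential and recovery variables of the three oscillators after transients are discarded.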

If \(I \approx 0\), the system stabilizes in an asynchronous regime, whereas \(I >0\) indicates partial synchrony and \(I = 1\) indicates complete synchrony in the system. As in Fig. 11, we visualize a collection of two-dimensional color-coded plots, where the grid pixels are colored according to the value of I and the space is represented by the parameter combination given by either \((\sigma _{12},\sigma _{21})\) keeping \(\sigma _{23}\) and \(\sigma _{32}\) fixed (first row) or \((\sigma _{23}, \sigma _{32})\) keeping \(\sigma _{12}\) and \(\sigma _{21}\) fixed (second row), see Fig. 12. At first glance, we see that there exists a correspondence between the two synchronization measures \(\Gamma\) and I. Panel (a) in Fig. 12 has \(0.633 \le I \le 0.7827\), indicating that the whole system remains in partial synchrony, as was also reported in Fig. 11a. We also notice that in the range \(-0.1 \le \sigma _{12} \le -0.07\), \(0.08985 \le \sigma _{21} \le 0.12\), there exists a yellowish patch, like the black patch in Fig. 11a. The values of I in this region are the highest, \(\approx [0.74, 0.78]\), indicating a behavior approaching synchrony. Similar behavior is also noticed in the top right corner, depicting another pool of yellow pixels. A region corresponding to the purple patch in Fig. 11a shows a value of \(I \approx 0.7\) (in this case, the pixels are yellow). The rest of the figure has \(I \in [0.633, 0.7]\), illustrating partial synchronization. Panel (b) has \(\sigma _{23}=-\sigma _{32}=-0.12\), with \(0.703 \le I \le 0.82\), again depicting partial synchronization. Near the bottom right boundary, the system tends to have \(I \approx 0.82\), showing a high tendency toward synchronization. Panel (c) has \(\sigma _{23}=-\sigma _{32}=0.12\) with \(0.686 \le I \le 0.9\). Like Fig. 11c, we have a pool of pixels near the top left corner, where the system shows a high tendency toward synchronization, with I even reaching approximately 0.9. The rest of the domain shows partial synchronization in the network. Panel (d) has \(\sigma _{23}=\sigma _{32}=-0.12\) with \(0.681 \le I \le 0.7808\). White pixels denote diverging behavior in the dynamics of the network. Again, the qualitative behavior of the second row remains similar to the first row with some subtle differences. Panel (e) with \(\sigma _{12}=\sigma _{21}=0.12\) becomes rotationally symmetric to panel (a), panel (f) with \(\sigma _{12}=-\sigma _{21}=-0.12\) becomes rotationally symmetric to panel (c), panel (g) with \(\sigma _{12}=-\sigma _{21}=0.12\) becomes rotationally symmetric to panel (b), and panel (h) with \(\sigma _{12}=\sigma _{21}=-0.12\) becomes rotationally symmetric to panel (d).

Fig. 12

A collection of two-dimensional color-coded plots showing the Kuramoto order parameter I. The first row shows the \((\sigma _{12}, \sigma _{21})\) plane, whereas the second row shows \((\sigma _{23}, \sigma _{32})\) plane. The local parameter values are set to be \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0 = -1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). The other two coupling strengths are set as (a) \(\sigma _{23}=\sigma _{32}=0.12\), (b) \(\sigma _{23}=-\sigma _{32}=-0.12\), (c) \(\sigma _{23}=-\sigma _{32}=0.12\), (d) \(\sigma _{23}=\sigma _{32}=-0.12\), (e) \(\sigma _{12}=\sigma _{21}=0.12\), (f) \(\sigma _{12}=-\sigma _{21}=-0.12\), (g) \(\sigma _{12}=-\sigma _{21}=0.12\), (h) \(\sigma _{12}=\sigma _{21}=-0.12\). We mostly notice asynchrony and partial synchrony in the whole parameter domain

8 Time series analysis via sample entropy

One important physical aspect of these network dynamical systems is their overall complexity. Thus the question arises: can we quantify the system complexity in terms of an entropy measure? To answer this, we perform a statistical analysis of the network dynamics using the concept of sample entropy. We generate the time series data of the action potentials of all three oscillators and evaluate the sample entropy of each, denoted by \({\textrm{SE}}_{x_i}\) for \(i=1, 2, 3\). We then take the average of the three sample entropies to obtain the sample entropy of the whole network. The simulation is run for 40000 iterates and the first 20000 iterates are discarded to ensure the transients have died down, after which we compute \({\textrm{SE}}_{x_i}\). Before we set up the formula for the sample entropy of the network, we introduce the definition of sample entropy for a time series \(\{x(n), n = 1, \ldots , {\mathcal {N}} \}\), following Richman et al. [65].

For a non-negative integer \(p \le {\mathcal {N}}\) let the vectors \(x_p(j)\) be defined as

$$\begin{aligned} x_p(j) = \left\{ x(j+k) \mid 0 \le k \le p-1 \right\} ,\; 1 \le j \le {\mathcal {N}}-p+1, \end{aligned}$$
(66)

where each of these \({\mathcal {N}}-p+1\) vectors consists of p data points, \(x(j)\) through \(x(j+p-1)\). From these, we can define the maximum-norm (Chebyshev) distance as

$$\begin{aligned} \Delta \left( x_p(j), x_p(n)\right) = \max _{0 \le k \le p-1} \left\{ |x(j+k) - x(n+k) |\right\} . \end{aligned}$$

For a positive real threshold value \(\epsilon\), \(B_j^p(\epsilon )\) is defined as the ratio of the number of vectors \(x_p(n)\) within \(\epsilon\) of \(x_p(j)\) (meaning \(\Delta \left( x_p(j), x_p(n)\right) \le \epsilon\)) to \({\mathcal {N}} - p - 1\). Note that here \(1 \le n \le {\mathcal {N}} - p\) with the constraint \(n\ne j\). The term \(B^p(\epsilon )\) is then defined as

$$\begin{aligned} B^p(\epsilon ) = \frac{1}{{\mathcal {N}}-p} \sum _{j=1}^{{\mathcal {N}} - p} B_j^p(\epsilon ). \end{aligned}$$

Similarly we can define \(B^{p+1}(\epsilon )\). Thus the sample entropy measure of the given time series is defined as

$$\begin{aligned} {\textrm{SE}} = \lim _{{\mathcal {N}} \rightarrow \infty }\left( -\ln \frac{B^{p+1}(\epsilon )}{B^p(\epsilon )} \right) . \end{aligned}$$
(67)

We apply this concept to the time series data of each of the action potentials before taking the average,

$$\begin{aligned} {\textrm{SE}} = \frac{1}{3} \left( {\textrm{SE}}_{x_1} + {\textrm{SE}}_{x_2} + {\textrm{SE}}_{x_3}\right) . \end{aligned}$$
(68)
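The definition in (66)–(68) can be implemented directly. The sketch below uses raw match counts (the constant normalizations in \(B^p\) largely cancel in the ratio, up to boundary terms) and assumes at least one length-\((p+1)\) match exists:

```python
import math

def sample_entropy(x, p=2, eps=None):
    """Sample entropy per (66)-(67), from raw counts of template matches
    under the maximum-norm distance; eps defaults to 0.2 times the
    (population) standard deviation of the series."""
    n = len(x)
    if eps is None:
        mean = sum(x) / n
        eps = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def matches(m):
        # ordered pairs (j, k), j != k, of length-m templates within eps
        templates = [x[j:j + m] for j in range(n - m + 1)]
        return sum(
            1
            for j, tj in enumerate(templates)
            for k, tk in enumerate(templates)
            if j != k and max(abs(a - b) for a, b in zip(tj, tk)) <= eps
        )

    return -math.log(matches(p + 1) / matches(p))

def network_sampen(x1, x2, x3, **kw):
    """Network-level sample entropy (68): average over the three nodes."""
    return (sample_entropy(x1, **kw) + sample_entropy(x2, **kw)
            + sample_entropy(x3, **kw)) / 3.0

# A strictly periodic orbit is almost perfectly predictable (SE near 0),
# whereas an irregular orbit yields a markedly larger SE.
periodic = [0.0, 1.0] * 100
print(sample_entropy(periodic, eps=0.1))  # close to 0
```

The double loop makes this quadratic in the series length, which is fine for a sketch; the nolds implementation used in the paper is the practical choice for \({\mathcal {N}}=55000\).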

A high \({\textrm{SE}}\) value is indicative of a higher unpredictability in the complex system, corresponding to a higher complexity. A similar approach was employed in computing the sample entropy of an ensemble of memristive Chialvo neurons arranged in a ring–star topology by Ghosh et al. [30]. Other relevant works considering sample entropy are [91,92,93,94].

We utilize the open-source Python package nolds [95] to compute the sample entropy of our time series via the nolds.sampen() function. This function follows the algorithm by Richman et al. [65]. Note that the default values of p and \(\epsilon\) in (67) are 2 and \(0.2\sigma\), where \(\sigma\) is the standard deviation of the time series. Here we take 80000 iterates (as seen in the time series plots), of which we discard the first 25000 to get rid of the transients before computing the sample entropies. This makes \({\mathcal {N}}=55000\).

A collection of time series plots with their corresponding sample entropies is given in Figs. 13 and 14. The local parameters are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu = 0.0001\), and \(\gamma =-0.5\). The coupling strengths are set as \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). In Fig. 13, we have \(\sigma _{12}=0.092\). We have previously seen that for this parameter combination there exists a chaotic attractor, see Fig. 7a, b. The time series behavior of all three action potentials corroborates this, with points exhibiting a dense distribution over the time frame. In the range \(n \approx 20000\)–32000, the orbit oscillates periodically, whereas everywhere else it shows irregular bursting corresponding to chaos. We have \({\textrm{SE}}_{x_1} \approx 1.08819\), \({\textrm{SE}}_{x_2} \approx 0.91167\), and \({\textrm{SE}}_{x_3} \approx 1.06156\). In Fig. 14, we have \(\sigma _{12}=0.096\). Previous numerics have shown that for this parameter combination there exists a period-4 attractor. We see that after approximately \(n = 20000\), the orbit oscillates periodically, with all the sample entropies equal to 0. These results show that chaotic behavior indicates a higher complexity in the system, corresponding to a higher \({\textrm{SE}}\).

An important note for the reader: the time series in this paper are represented by discrete points rather than line plots, because the model is discrete in time. As a result, the time series plots may superficially resemble bifurcation diagrams. In particular, plots that appear to show the dynamics reaching a period-4 attractor are simply displaying regular oscillatory behavior, which would be evident had the series been plotted with lines instead of dots.

Fig. 13

A collection of time series plots of the action potentials with their corresponding sample entropy values, for \(\sigma _{12}=0.092\), \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). The local parameters are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu = 0.0001\), and \(\gamma =-0.5\). We see high complexity in the behavior, corroborated by high sample entropy values. Qualitatively, the time series also exhibit irregular chaotic bursts

Fig. 14

A collection of time series plots of the action potentials with their corresponding sample entropy values, for \(\sigma _{12}=0.096\), \(\sigma _{21}=0.1\), \(\sigma _{23}=0.05\), and \(\sigma _{32}=0.06\). The local parameters are set as \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu = 0.0001\), and \(\gamma =-0.5\). We see low complexity in the behavior, corroborated by sample entropy values \(\approx 0\). Qualitatively, the time series also exhibit regular spikes

Next, we plot one-parameter bifurcation diagrams of the sample entropy of the network against the varying coupling strengths \(\sigma _{ij}\), see Fig. 15. Again the local parameters are set to \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\). We run 40000 iterates and remove the first 20000 to ensure no residual transients. In the first row, we vary \(\sigma _{12}\) and fix \(\sigma _{23}=0.05\), \(\sigma _{32}=0.06\). Panel (a) has \(\sigma _{21}=0.1\). As \(\sigma _{12}\) increases from \(-1\), the complexity of the system rises, reaching a maximum of \({\textrm{SE}} \approx 1.1298\) at \(\sigma _{12}\approx -0.056\), before dropping sharply to \({\textrm{SE}} \approx 0.021\) at \(\sigma _{12}\approx 0.093\). Following this, the complexity gradually decreases over \(0<\sigma _{12}<0.4\). Beyond this range, the dynamics of the network diverge. Panel (b) has \(\sigma _{21}=-0.1\). In this case, the network remains in a relatively high-complexity state (\({\textrm{SE}}>0.8\)) in the approximate domain \(-0.3<\sigma _{12}<0.367\), after which the complexity falls sharply until \(\sigma _{12} \approx 0.486\); beyond this range of \(\sigma _{12}\) the network diverges. The maximum of \({\textrm{SE}} \approx 1.11068\) is reached at \(\sigma _{12} \approx 1.112\). In the second row, \(\sigma _{21}\) is the primary bifurcation parameter, with \(\sigma _{23}=0.05\) and \(\sigma _{32}=0.06\) fixed. Panel (c) has \(\sigma _{12}=0.096\). Again, the complexity of the dynamics is moderately high, in the approximate range \(0.6<{\textrm{SE}}<1.132\), except for a few dips occurring when \(\sigma _{21}>0\); one of these dips, at \(\sigma _{21} \approx 0.1\), has \({\textrm{SE}} \approx 0\). The dynamics diverge beyond the approximate range \(-0.79<\sigma _{21}<0.62\). Panel (d) has \(\sigma _{12}=-0.096\). The complexity of the tri-oscillator model in this case exhibits similar behavior, with \(0.5<{\textrm{SE}}<1.153\) being moderately to substantially high. Note that there is a break over \(0.797<\sigma _{21}<0.814\), indicating diverging behavior, and the dynamics also diverge beyond the range \(-0.6<\sigma _{21}<0.83\). In the third row, we vary \(\sigma _{23}\) as the primary bifurcation parameter, with \(\sigma _{12}=0.096\) and \(\sigma _{21}=0.01\). Panel (e) has \(\sigma _{32}=0.06\). The complexity is moderately to substantially high (\(0.55<{\textrm{SE}}<1.136\)) when \(\sigma _{23}\le 0\); otherwise, the sample entropy fluctuates between high and low values, with the lowest \(\approx 0\). The dynamics diverge beyond the approximate range \(-0.8<\sigma _{23}<0.62\). Panel (f) has \(\sigma _{32}=-0.06\) and exhibits a similar complexity to panel (e). Lastly, the fourth row has \(\sigma _{32}\) as the primary bifurcation parameter, with \(\sigma _{12}=0.096\) and \(\sigma _{21}=0.01\). Panel (g) has \(\sigma _{23}=0.05\). For \(\sigma _{32}<0\), the system exhibits moderate to substantial complexity (\(0.54<{\textrm{SE}}<1.138\)); for \(\sigma _{32}>0\), the complexity fluctuates between low and high values, with the lowest \({\textrm{SE}} \approx 0\) at \(\sigma _{32}\approx 0.1\). Beyond \(-0.792<\sigma _{32}<0.616\), the system diverges. Panel (h) has \(\sigma _{23}=-0.05\) and shows qualitatively similar behavior to panel (g).
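The sweep procedure just described can be organized as a simple skeleton. In the sketch below, step_map is a placeholder for one iteration of the coupled network (the equations are not repeated here, and for brevity the state is a scalar rather than six-dimensional), and entropy is any scalar complexity measure, e.g. nolds.sampen; both names are illustrative, not the authors' code.

```python
import numpy as np

def se_sweep(step_map, entropy, params, n_total=40000, n_transient=20000, x0=0.5):
    """For each coupling value p: iterate the map n_total times from x0,
    discard the first n_transient iterates as transients, and evaluate the
    complexity measure on the remaining tail.  Diverging orbits (non-finite
    values) are recorded as NaN, matching the breaks in the diagrams."""
    out = []
    for p in params:
        x, series = x0, np.empty(n_total)
        for n in range(n_total):
            x = step_map(x, p)
            series[n] = x
        tail = series[n_transient:]
        out.append(entropy(tail) if np.all(np.isfinite(tail)) else np.nan)
    return np.array(out)
```

As a quick self-check with a stand-in system, sweeping the logistic map with the tail variance as the complexity measure distinguishes a fixed point (measure near 0) from a chaotic band, and a linearly expanding map is flagged as divergent.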

Fig. 15

A collection of one-parameter bifurcation plots of the sample entropy of the network with varying \(\sigma _{ij}\). The local parameters have been set to \(a=0.6\), \(b=0.6\), \(c=0.89\), \(k_0=-1\), \(\alpha =5\), \(\mu =0.0001\), and \(\gamma =-0.5\)

9 Conclusion

In this paper, we have investigated a heterogeneous chain network to model the dynamics of neuron ensembles in the nervous system. This model can be viewed as a unit that could repeat itself to generate more complicated neuron aggregates replicating the real-world functionalities of the nervous system. The model is built on two popular neuron maps, the Chialvo map (peripheral nodes) and the Rulkov map (central node), with bidirectional linear couplings between neurons. The motivation was to build a heterogeneous model that mimics the functionalities of three kinds of neurons present in the nervous system and the synaptic connections through which they exchange information. Heterogeneity is incorporated in two ways: first, the central node follows the dynamics of the Rulkov map whereas the end nodes follow the Chialvo map, and second, the coupling between two nodes i and j is bidirectional with \(\sigma _{ij} \ne \sigma _{ji}\). One future direction is to incorporate noise modulation via additive or multiplicative noise and characterize the resulting battery of spatiotemporal patterns.

After putting forward our model, a nonlinear system of six coupled equations, we look closely at its dynamical properties, for example locating the fixed point, performing stability analysis, and examining the eigenvalues of the Jacobian matrix at the fixed point. We also notice from the bifurcation diagrams that the dynamics fluctuate between a chaotic attractor and a period-4 attractor, including the coexistence of the two. All these phenomena are observed under different initial conditions, uniformly sampled from a range for each simulation run. Then we perform codimension-1 and -2 pattern analysis using MatContM as a tool and discover the existence of saddle-node, period-doubling, and Neimark–Sacker bifurcation patterns, supported by analytical proofs. Furthermore, MatContM allows us to observe rich codimension-2 patterns like double Neimark–Sacker, flip-Neimark–Sacker, 1 : 1 resonance, fold-flip, fold-Neimark–Sacker, and 1 : 2 resonance. These show that our heterogeneous neuron model is a repository of a wide array of engrossing dynamical properties. Thus this model can in future be utilized in designing a ring network (infinite chains), a star network (repetition of the chain with one central node and an infinite number of peripheral nodes), and a combination of the two, to further study the rich spatiotemporal behaviors that might arise from the induced complexity. Another interesting candidate is a multiplex network with our model as the building block.

We have also taken a step forward to study the synchronization behavior of this small network model via the cross-correlation coefficient and the Kuramoto order parameter. Both measures indicate that the model mostly remains in an excitatory state exhibiting asynchrony and partial synchrony (in-phase and anti-phase); these are illustrated using two-dimensional color-coded plots in this paper. Increasing the number of nodes to determine whether solitary nodes, chimera patterns, cluster states, and wave structures arise as spatiotemporal patterns under these measures is an interesting avenue to investigate. Another future aspect is to build a metric to determine whether a “weak chimera” exists in the tri-oscillator model. In this paper, we characterize time series with both chaotic and regular behavior via sample entropy. Applying this metric to a noise-modulated tri-oscillator model, and to a model with an infinite number of nodes in the thermodynamic limit, is an important aspect to look into. An analytical relationship also needs to be established among all these measures. Note that throughout the paper, we have kept the local parameters of each of the three oscillators fixed and varied the coupling strengths between the oscillators as primary bifurcation parameters.
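For reference, the Kuramoto order parameter used above has a compact numerical form. The sketch below assumes the phases \(\theta _j\) have already been extracted from the node dynamics; the extraction step (e.g., from spike timings of a discrete neuron map) is model-specific and omitted.

```python
import numpy as np

def kuramoto_order(theta):
    """Kuramoto order parameter R = |(1/N) * sum_j exp(i * theta_j)|.
    R -> 1 for full phase synchrony, R -> 0 for incoherent phases."""
    theta = np.asarray(theta, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * theta))))
```

For the tri-oscillator model \(N=3\), so even incoherent phases give \(R\) of order \(1/\sqrt{3}\); this small-\(N\) bias is one reason a dedicated metric for detecting a weak chimera would be valuable.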

In the future, we want to look at the dynamical properties of this model using coupling strengths that change with the iteration number, making the network temporally heterogeneous. Another question that arises is what kind of conservative properties these kinds of neuron maps possess, analogous, for example, to the conservation of Hamiltonian energy in continuous-time systems. Can we come up with an equivalent quantity that is conserved in discrete-time neuron maps? Furthermore, as discrete-time systems are more computationally efficient, it would be an interesting problem to look into heterogeneous models built from discretized versions of continuous-time neuron models via a small network topology. Small networks are reduced-order models and natural candidates to study before considering networks in the thermodynamic limit.

As with any model, ours is not perfect, but we can work on bringing it closer to real-world scenarios. One step in that direction is to fit our model to medically available EEG data from reliable sources, which would itself be a captivating field to pursue. Another challenge is to carry out a Lyapunov exponent study of the network itself, where the nodes are coupled; one approach could be motivated by Caligiuri et al. [96], and the 0–1 test [97] is also a suitable candidate. Another way to bring the model closer to reality is to perturb every node or the coupling strengths with external forces.

Our approach in this paper has been an amalgamation of analysis and numerics, which we believe will aid mathematical modelers, engineers, quantitative biologists, and neuroscientists alike. This model is a step toward understanding the intricate dynamics of more topologically complicated ensembles of neurons involved in signal processing in the nervous system.