Synchronization in STDP-driven memristive neural networks with time-varying topology

Synchronization is a widespread phenomenon in the brain. Despite numerous studies, the specific parameter configurations of the synaptic network structure and learning rules needed to achieve robust and enduring synchronization in neurons driven by spike-timing-dependent plasticity (STDP) and temporal networks subject to homeostatic structural plasticity (HSP) rules remain unclear. Here, we bridge this gap by determining the configurations required to achieve high and stable degrees of complete synchronization (CS) and phase synchronization (PS) in time-varying small-world and random neural networks driven by STDP and HSP. In particular, we found that decreasing P (which enhances the strengthening effect of STDP on the average synaptic weight) and increasing F (which speeds up the swapping rate of synapses between neurons) always lead to higher and more stable degrees of CS and PS in small-world and random networks, provided that the network parameters, such as the synaptic time delay τ_c, the average degree ⟨k⟩, and the rewiring probability β, have some appropriate values.
When τ_c, ⟨k⟩, and β are not fixed at these appropriate values, the degree and stability of CS and PS may increase or decrease when F increases, depending on the network topology. It is also found that the time delay τ_c can induce intermittent CS and PS whose occurrence is independent of F. Our results could have applications in designing neuromorphic circuits for optimal information processing and transmission via synchronization phenomena.


Introduction
Synchronization phenomena are processes wherein many dynamical systems adjust a given property (e.g., amplitude, phase, frequency, and even membrane potential in coupled neurons) of their motion due to suitable coupling configurations. In the brain, they can emerge from the collaboration between neurons or neural networks and significantly affect all neurons and network functioning. It is well established that synchronization of neural activity within and across brain regions promotes normal physiological functioning, such as the precise temporal coordination of processes underlying cognition, memory, and perception [1]. However, synchronization of neural activity is also well known to be responsible for some pathological behaviors such as epilepsy [2]. It has been shown that changes in the strength of the synaptic coupling and the connectivity of the neurons could lead to epileptic-like synchronization behaviors. Furthermore, changes in neural connectivity can lead to hyper-synchronized states related to epileptic seizures that occur intermittently with asynchronous states [3]. It has been demonstrated in [4] that, by manipulating synaptic coupling and creating a hysteresis loop, square current pulses can induce abnormal synchronization similar to epileptic seizures. Synchronization may present various forms (see [5,6] for a comprehensive review), and the behavior of each form of synchronization may depend on the nature of the interacting systems, the type of coupling, the distances between the interacting systems, the time delays between the components of the systems, and also the network topology.
In this paper, we focus on two common forms of synchronization for reasons given alongside their descriptions: (i) Complete synchronization (CS) is the simplest (and probably the most intuitive) form of synchronization. A system made up of, e.g., two coupled sub-systems, say x_1(t) and x_2(t), is said to be completely synchronized when there is a set of initial conditions so that the coupled systems eventually evolve identically in time (i.e., |x_1(t) − x_2(t)| → 0 as t → ∞) [6,7,8,9]. Because of the intuitiveness and simplicity of CS, it will be one of the main phenomena investigated in this paper. (ii) Phase synchronization (PS) was introduced by Rosenblum et al. [11,12] and experimentally confirmed in [13]. It involves subsystem properties called phases [14] and is characterized by 2π phase locking of two or more oscillators with uncorrelated amplitudes. It has been shown that phase synchronization between different brain regions supports both working memory and long-term memory and facilitates neural communication by promoting neural plasticity [16], making PS a good candidate for investigation in this paper.
In recent years, extensive research (see, e.g., the reviews in [17,18,19]) has been conducted on synchronization dynamics in non-adaptive neural systems with varying degrees of complexity. In particular, in the study [15], it was discovered that excitatory and inhibitory connections between brain areas are crucial for phase and anti-phase synchronization. It was found that the phase angles of neurons in the receiving area could be influenced by unidirectional non-adaptive synapses from the sender area. When the neurons in the sender area synchronize, the variability of phase angles in the receiver area can be reduced with certain conductance values. Additionally, the study observed both phase and anti-phase synchronization in the case of non-adaptive bidirectional interactions. It has also been demonstrated in [10] that the coupling strength and the probability of connections in a random network of adaptive exponential integrate-and-fire neurons can induce spike and bursting synchronization, with bursting synchronization being more robust than spike synchronization.
Furthermore, it has been shown that axonal time delays can play crucial roles in synchronization dynamics in neural networks [38,42,48,53,61]. For example, in [53], the authors used phase oscillator and conductance-based neuron models to study synchronization and coupling between two bidirectionally coupled neurons in the presence of transmission delays and STDP, which influence emergent pairwise activity-connectivity patterns. Their results showed that, depending on the range of transmission delays, the two-neuron motif could achieve in-phase/anti-phase synchronization and symmetric/asymmetric coupling. The co-evolutionary dynamics of the neuronal system and synaptic weights, governed by STDP, stabilize the motif in these states through transitions at specific transmission delays. They further showed that these transitions are sensitive to the phase response curve of the neurons but are robust to heterogeneity in transmission delays and STDP imbalance. Motivated by such rich time-delay-induced dynamical behavior in synchronization dynamics, in the current paper, we shall investigate the effects of axonal time delays on CS and PS in neural networks driven by two forms of adaptive rules.
It is essential to consider the effects of the inherently adaptive nature of neural networks on information processing via synchronization. Besides the colossal efforts to study synchronization in neuronal networks with synaptic plasticity (see, e.g., [48,63,65,69,72,73]), it is essential to be mindful of the need to explore more dynamic scenarios in order to fully comprehend the emergence of synchronous patterns in adaptive networks. Synaptic plasticity in neural networks refers to the ability to modify the strength of synaptic couplings over time and/or the architecture of the neural network topology through specific rules. Two significant mechanisms associated with adaptive rules in neural networks are spike-timing-dependent plasticity (STDP) and homeostatic structural plasticity (HSP). STDP-induced synaptic modification relies on the repeated pairing of pre- and postsynaptic membrane potentials. The degree and direction of the modification depend on the relative timing of neuron firing. Depending on the precise timing of pre- and postsynaptic spikes, the synaptic weights can exhibit either long-term depression (LTD) or long-term potentiation (LTP), which represent persistent weakening or strengthening of synapses, respectively. This concept has been extensively discussed in [35,54].
HSP-induced synaptic modification involves altering the connectivity between neurons by creating, pruning, or swapping synaptic connections. This results in changes to the network's architecture while maintaining its functional structure, which maximizes specific functions of interconnected groups of neurons and improves sensory processing efficiency [71]. Early evidence of structural plasticity was observed through histological studies of spine density following new sensory experiences or training [37]. Further research has shown that the micro-connectome, which describes the connectome at the level of individual synapses, undergoes rewiring [22,77,82]. While brain networks adhere to specific topologies, such as small-world and random networks [40,76], despite their time-varying dynamics, recent studies suggest that these networks can benefit from homeostasis by increasing the efficiency of information processing [28]. Motivated by these studies, the current paper focuses on time-varying small-world and random networks adhering to their respective topologies through HSP.
Previous studies [25,26,43,44] on synchronization in adaptive neural networks have focused on either time-invariant neural networks with STDP or time-varying neural networks without STDP. Research on time-invariant neural networks has shown that good synchronization improves via LTD of the averaged synaptic weight, while bad synchronization deteriorates via LTP [43]. This effect is due to inhibitory STDP [43], which contrasts with the findings on excitatory STDP [45], where good synchronization gets better and bad synchronization gets worse via LTP and LTD, respectively. The article [75] demonstrated that STDP enhances synchronization in inhibitory networks even when there is heterogeneity. Similarly, [62] revealed that noise can facilitate synchronization in spiking neural networks driven by STDP. It is shown that the average synaptic coupling of the network increases with an increase in the noise intensity, with an optimal noise level at which the strength of the average synaptic coupling reaches its maximum in a resonance-like fashion that maximizes synchronization. The research in [25] demonstrated the crucial combined effect of the uni-directional chemical synapses and STDP on synchronization in random neural networks. The study also reveals that synchronization increases as the connection probability of the network grows in the presence of STDP and no external input current.
However, introducing a non-zero external input current results in spiking resynchronization. The study in [66] explores the behavior of an adaptive array of phase oscillators and highlights that a specially designed adaptive law can amplify the coupling between pairs of oscillators with greater phase incoherence, leading to improved synchronization. This approach yields more realistic coupling dynamics in networks of oscillators with varying intrinsic frequencies. Additionally, adjusting the parameters of the adaptive law can accelerate synchronization. The paper also demonstrated the method's versatility by examining nearest-neighbor ring coupling in addition to global coupling.
The research in [31,36,64] has shown that in networks with a time-varying topology but without STDP, a faster rewiring of the topology always leads to a higher degree of synchronization. However, in our current work, we challenge this notion and demonstrate that more rapid switching of synapses can actually also decrease the degree of synchronization in certain situations. The issue of synchronization phenomena in networks undergoing two adaptive processes has not received sufficient research attention. In one study, published in [29], the authors examined this problem by analyzing Kuramoto oscillator networks that undergo two adaptation processes: one that modifies coupling strengths and another that changes the network structure by pruning existing synaptic contacts and adding new ones. By comparing networks with only STDP to those with both STDP and structural plasticity, the authors assessed the effects of structural plasticity and found that it enhances the synchronized state of a network.
The current study aims to narrow the gap in the research on synchronization in time-varying neural networks driven by STDP and HSP rules in small-world and random networks. Specifically, we focus on determining: (i) the joint effect of the adjusting potentiation rate of the STDP rule and the characteristic rewiring frequency of the HSP rule on the degree of CS and PS, (ii) the joint effect of the synaptic time delay, the rewiring frequency of the HSP rule, and the adjusting potentiation rate of the STDP rule on the degree of CS and PS, (iii) the joint effect of the average degree of the network, the rewiring frequency of the HSP rule, and the adjusting potentiation rate of the STDP rule on the degree of CS and PS, and (iv) the joint effect of the rewiring probability of the Watts-Strogatz small-world network, the rewiring frequency of the HSP rule, and the adjusting potentiation rate of the STDP rule on the degree of CS and PS. The study employs extensive numerical simulations to investigate these issues.
Based on our numerical results, the stability of the degrees of CS and PS is influenced by the parameters governing STDP and HSP, as well as by the network topology parameters. For instance, decreasing the STDP potentiation rate parameter (P) and increasing the HSP characteristic frequency parameter (F) leads to more stable and higher levels of CS and PS in small-world and random networks, provided that the average degree (⟨k⟩), rewiring probability (β), and synaptic time delay (τ_c) are at appropriate values. Furthermore, we found that PS can be achieved more reliably and at a higher degree than CS in both small-world and random networks. Additionally, the random network generates more stable and higher levels of CS and PS than the small-world network. Our findings on the variations in the degree of CS and PS are summarized in Table 1.
The paper is structured as follows: Sec. 2 describes the mathematical model, the STDP learning rule, and the HSP rewiring rules, which facilitate the adherence of the time-varying small-world and random networks to their respective architectures. Sec. 3 outlines the computational methods utilized, while Sec. 4 presents and analyzes the numerical findings. Sec. 5 presents our conclusions.

Neural Network Model
The presence of intracellular and extracellular ions leads to the development of an electromagnetic field in biological neurons, which affects their membrane potential and, consequently, their firing modes. To incorporate these effects in a memristive neuron model, Lv et al. [52] proposed improved neuron models that include a variable for magnetic flux. The influence of this electromagnetic field is well established [51]. Thus, in the current work, we study the joint effects of HSP and STDP on synchronization in a memristive neural network. The FitzHugh-Nagumo (FHN) model [33,57], initially proposed to describe the spiking activity of neurons, now serves as a fundamental model for excitable systems. Its applications have expanded beyond neuroscience and biological processes [30] to include optoelectronics [68], chemical oscillators [70], and nonlinear electronic circuits [39]. Although the FHN model lacks the same level of biophysical relevance as the Hodgkin-Huxley (HH) neuron model [41], it nevertheless captures some essential aspects of the HH model's behavior. Moreover, the computational cost is reduced due to the lower dimensionality of the 2D FHN model compared to the 4D HH model, which is particularly advantageous when analyzing large networks. Our study considers the memristive FHN model, incorporating the memristive aspect via an additional equation as per [34]:

ε dv_i/dt = v_i(v_i − a)(1 − v_i) − w_i − k_1 ρ(φ_i) v_i + I_i^syn(t),
dw_i/dt = v_i − d w_i,                                                  (1)
dφ_i/dt = k_2 v_i − k_3 φ_i + φ_ext,

where the variables v_i, w_i, and φ_i correspond to the voltage, slow current variable, and magnetic flux, respectively, and ρ(φ_i) = λ + 3βφ_i² is the memductance of the flux-controlled memristor, with λ and β fixed at 0.1 and 0.02, respectively [81]. To maintain electrophysiological relevance, the parameter a is typically set within the (0, 1) range, with a = 0.5 chosen for our purposes [80]. The values of ε and d are fixed at 0.025 and 1, respectively, representing a specific set of values at which the non-memristive FHN model (i.e., Eq. (1) with k_1 = 0) exhibits spiking dynamics. The memristor parameters k_1 = 0.5, k_2 = 0.9, k_3 = 1.0, and φ_ext = 2.4 are also fixed. With these parameter values, the model in Eq. (1) can only produce regular spiking [34], the regime in which we are interested.
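For illustration, a single memristive FHN neuron can be integrated with the same fourth-order Runge-Kutta scheme used later in the paper. The right-hand side below is a minimal sketch assuming the standard magnetic-flux FHN form with memductance ρ(φ) = λ + 3βφ² (the exact form should be read off Eq. (1)); the parameter values follow the text.

```python
import numpy as np

# Parameters from the text; the equation form is an assumption based on the
# standard magnetic-flux FHN model, not taken verbatim from Eq. (1).
a, eps, d = 0.5, 0.025, 1.0
lam, beta_m = 0.1, 0.02                      # memductance parameters (lambda, beta)
k1, k2, k3, phi_ext = 0.5, 0.9, 1.0, 2.4

def rho(phi):
    """Memductance of the flux-controlled memristor: rho(phi) = lambda + 3*beta*phi^2."""
    return lam + 3.0 * beta_m * phi**2

def fhn_rhs(state, I_syn=0.0):
    v, w, phi = state
    dv = (v * (v - a) * (1.0 - v) - w - k1 * rho(phi) * v + I_syn) / eps
    dw = v - d * w
    dphi = k2 * v - k3 * phi + phi_ext
    return np.array([dv, dw, dphi])

def rk4_step(state, dt=0.01, I_syn=0.0):
    """One step of the standard fourth-order Runge-Kutta scheme."""
    s1 = fhn_rhs(state, I_syn)
    s2 = fhn_rhs(state + 0.5 * dt * s1, I_syn)
    s3 = fhn_rhs(state + 0.5 * dt * s2, I_syn)
    s4 = fhn_rhs(state + dt * s3, I_syn)
    return state + (dt / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)

state = np.array([0.1, 0.0, 0.0])
for _ in range(10000):                        # 100 time units at dt = 0.01
    state = rk4_step(state)
```

The cubic nonlinearity in the voltage equation keeps the trajectory bounded, so the explicit scheme remains stable at the time step dt = 0.01 used in the paper.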

Synapses and STDP Rule
The term I_i^syn(t) in Eq. (1) represents the uni-directional excitatory chemical synapses between neurons and is subject to the STDP learning rule between coupled neurons. The synaptic current I_i^syn(t) of the ith neuron at time t is defined in Eq. (2):

I_i^syn(t) = Σ_{j=1}^{N} ℓ_ij(t) g_ij(t) s_j(t) [v_syn − v_i(t)],        (2)

where the synaptic connectivity matrix L (= {ℓ_ij(t)}) has ℓ_ij(t) = 1 if neuron j is connected to neuron i, and ℓ_ij(t) = 0 if they are disconnected. We model the synaptic connections as either a time-varying small-world network or a time-varying random network. Starting with a regular ring network with ⟨k⟩ nearest neighbors, we use the Watts-Strogatz algorithm [78] to generate small-world and random networks with parameters β and ⟨k⟩, where β represents the rewiring probability and ranges from 0 to 1, and ⟨k⟩ is the average degree connectivity (i.e., the average number of synaptic inputs per neuron), which is calculated as ⟨k⟩ = (1/N) Σ_{i=1}^{N} k_i, where k_i is the in-degree of the ith neuron (i.e., the number of synaptic inputs to neuron i) and is given by k_i = Σ_{j=1}^{N} ℓ_ij(t). In the algorithm, β ∈ [0, 1] plays a crucial role in determining the type of network generated. If β falls strictly between 0 and 1, a small-world network is created, while a completely random network is generated when β is 1. This work does not consider regular networks (obtained when β is 0). The average degree connectivity ⟨k⟩ and the rewiring probability β serve as control parameters for the network topology.
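The Watts-Strogatz construction and the in-degree bookkeeping above can be sketched as follows. This is a minimal NumPy version kept undirected for simplicity (the paper's synapses are uni-directional); the sizes N = 100, ⟨k⟩ = 10, β = 0.25 are illustrative choices.

```python
import numpy as np

def watts_strogatz(N, k, beta, rng):
    """Build an undirected Watts-Strogatz adjacency matrix.
    Start from a ring where each node links to its k nearest neighbours
    (k/2 on each side), then rewire each edge with probability beta."""
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for s in range(1, k // 2 + 1):
            A[i, (i + s) % N] = A[(i + s) % N, i] = 1
    for i in range(N):
        for s in range(1, k // 2 + 1):
            j = (i + s) % N
            if A[i, j] and rng.random() < beta:
                # choose a new target avoiding self-loops and duplicate edges
                choices = np.where((A[i] == 0) & (np.arange(N) != i))[0]
                if choices.size:
                    new = rng.choice(choices)
                    A[i, j] = A[j, i] = 0
                    A[i, new] = A[new, i] = 1
    return A

rng = np.random.default_rng(1)
L = watts_strogatz(100, 10, 0.25, rng)
k_in = L.sum(axis=1)          # in-degree k_i of each neuron
print(k_in.mean())            # average degree <k>, preserved at 10 by rewiring
```

Because rewiring removes one edge and adds one, the total number of synapses, and hence ⟨k⟩, is conserved, while the individual in-degrees k_i become heterogeneous.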
The time-dependent behavior of the open synaptic ion channels in the jth neuron is denoted by s_j(t) in Eq. (2). The rate of change of s_j(t) is determined by

ds_j/dt = (1 − s_j) / {1 + exp[−(v_j(t − τ_c) − v_shp)]} − s_j.        (3)

Chemical synapses involve the release and diffusion of neurotransmitters across the synaptic cleft, which takes a finite amount of time. Including time delays allows for a more accurate representation of the temporal dynamics and signal transmission between neurons. Thus, we incorporate a time delay parameter, τ_c, which will be utilized to control the chemical synapses. With a time delay τ_c, the action potential of the pre-synaptic neuron j fired at the earlier time t − τ_c is represented by v_j(t − τ_c) [53,85]. The membrane potential threshold, denoted by v_shp = 0.05, determines the level above which the pre-synaptic neuron j has an impact on the post-synaptic neuron i. Additionally, the reversal potential, set at v_syn = 2.0, ensures that all synapses are excitatory. In Eq. (2), the strength of the synaptic connection between the jth pre-synaptic neuron and the ith post-synaptic neuron is denoted by g_ij(t). As time t increases, the synaptic strength of each synapse is updated using a nearest-spike pair-based STDP rule [55]. There are two commonly used forms of STDP; see, e.g., [47,63,83] and [74,79,84] for each of the forms. In our study, the update of the synaptic coupling strength g_ij(t) is determined by the synaptic modification function M, applied based on the current value of g_ij(t) [74,79,84]:

g_ij(t) → g_ij(t) + g_ij(t) M(Δt),
M(Δt) = P exp(−Δt/τ_p) if Δt > 0;  M(Δt) = −D exp(Δt/τ_d) if Δt < 0;  M(Δt) = 0 if Δt = 0,        (4)

where Δt = t_i − t_j, with t_i and t_j representing the spiking times of the post-synaptic neuron i and the pre-synaptic neuron j, respectively. We determine the spike occurrence times from the instant t when a membrane potential variable crosses the threshold value of v_th = 0.5. It is worth noting that only the excitatory-to-excitatory synapses are modified by this learning rule [49,74], making it an ideal learning rule for our study, since the reversal potential v_syn = 2.0 ensures that all synapses in our network are excitatory. The extent of synaptic modification is regulated by two parameters, namely the potentiation and depression rates, represented by P and D, respectively. The temporal window for synaptic modification is determined by two additional parameters, τ_p and τ_d. Experimental results [24,32,74] suggest that Dτ_d > Pτ_p, which ensures the overall weakening of synapses. Furthermore, experimental studies show that the temporal window for synaptic weakening is roughly the same as that for synaptic strengthening [74,86]. Hence, to be consistent with experimental results, we chose the STDP parameters such that the STDP rule in Eq. (4) is typically depression-dominated, i.e., we set τ_p = τ_d = 2.0 and D/P = 1.05, and chose P as the control parameter of this STDP rule. In order to prevent unbounded growth, negative coupling strength, and elimination of synapses (i.e., g_ij = 0), we constrain the weights to lie between lower and upper bounds, [g_min, g_max] = [0.001, 0.5].
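A minimal sketch of this nearest-spike pair-based update follows; the exponential window is the standard form assumed for M (it should be read off Eq. (4)), and the parameter values follow the text.

```python
import numpy as np

# Soft-bounded, nearest-spike pair-based STDP sketch.
tau_p = tau_d = 2.0
P = 1.0e-3
D = 1.05 * P                      # depression-dominated: D/P = 1.05
g_min, g_max = 0.001, 0.5

def stdp_window(dt):
    """M(dt): potentiation for post-after-pre (dt > 0), depression otherwise."""
    if dt > 0:
        return P * np.exp(-dt / tau_p)
    if dt < 0:
        return -D * np.exp(dt / tau_d)
    return 0.0

def update_weight(g, t_post, t_pre):
    """Multiplicative update g -> g + g*M(dt), clipped to [g_min, g_max]."""
    g = g + g * stdp_window(t_post - t_pre)
    return float(np.clip(g, g_min, g_max))

g_pot = update_weight(0.35, t_post=10.2, t_pre=10.0)   # pre before post: potentiation
g_dep = update_weight(0.35, t_post=10.0, t_pre=10.2)   # post before pre: depression
```

The clipping enforces the bounds [g_min, g_max] = [0.001, 0.5] stated above, preventing unbounded growth, negative weights, and synapse elimination.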

Time-varying Networks and HSP Rule
To investigate the impact of the time-varying nature of the network architectures on the synchronization dynamics of the coupled neurons, we consider small-world and random structures [20,21,50,56] constructed using the Watts-Strogatz network algorithm [78]. The network's Laplacian matrix is a zero-row-sum matrix with an average degree connectivity of ⟨k⟩ and a rewiring probability β ∈ (0, 1]. To generate a time-varying small-world network (with β ∈ (0, 1)) that adheres to its small-worldness at all times, we implement the following process during the rewiring of synapses:

- During each integration time step dt, a synapse between two distant neurons is rewired to a nearest neighbor of one of the neurons with probability (1 − β)F dt. If the synapse is between two nearest neighbors, it is replaced by a synapse to a randomly chosen distant neuron with probability βF dt.

Here, ⟨k⟩ is the average degree of the original ring network used in the Watts-Strogatz algorithm. To generate a time-varying random network (also generated with the Watts-Strogatz algorithm, when β = 1) that adheres to its randomness at all times, we implement the following process during the rewiring of synapses:

- During each integration time step dt, if there is a synapse between neurons i and j, it will be rewired such that neuron i (j) connects to any other neuron except neuron j (i) with a probability of (1 − ⟨k⟩/(N − 1))F dt.

Note that the rewiring algorithms described above always maintain the small-worldness or randomness of the networks, even though the connectivity matrix changes over time; these are precisely the HSP rules we will use in this study. However, it is essential also to acknowledge that real neural networks may employ different rewiring processes to achieve such time-varying network structures, which may not necessarily align with the HSP rules described here. Nonetheless, for the purpose of our study, it is relevant that both small-world and random networks exhibit changing connections over time while preserving their respective small-worldness or randomness, similar to what is observed in real neural networks.
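The random-network rule can be sketched as follows (the small-world rule is analogous, with the β-dependent probabilities above). The starting connectivity matrix here is a hypothetical directed random graph; the sketch illustrates that the rule reshuffles synapses while conserving their number.

```python
import numpy as np

def hsp_rewire_random(L, F, dt, rng):
    """One HSP step for the random network: each existing synapse (i, j) is
    rewired to a uniformly chosen new partner with probability
    (1 - <k>/(N-1)) * F * dt, preserving the total number of synapses."""
    N = L.shape[0]
    k_mean = L.sum() / N                      # average in-degree <k>
    p = (1.0 - k_mean / (N - 1)) * F * dt     # per-synapse rewiring probability
    rows, cols = np.nonzero(L)                # snapshot of existing synapses
    for i, j in zip(rows, cols):
        if rng.random() < p:
            # candidate new partners: not neuron i itself, not already wired to i
            cand = np.where((L[i] == 0) & (np.arange(N) != i))[0]
            if cand.size:
                new = rng.choice(cand)
                L[i, j] = 0                   # prune the old synapse ...
                L[i, new] = 1                 # ... and create a new one
    return L

rng = np.random.default_rng(0)
L = (rng.random((100, 100)) < 0.1).astype(int)   # hypothetical directed graph
np.fill_diagonal(L, 0)
n_before = L.sum()
L = hsp_rewire_random(L, F=10.0, dt=0.01, rng=rng)
```

Calling this once per integration step dt makes the expected number of rewired synapses per unit time proportional to F, matching the role of the characteristic rewiring frequency.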
Here, we will use the characteristic rewiring frequency F as the control parameter for HSP. This parameter sets the rate at which synapses change over time, specifically during each integration time step dt. Notably, synapses in actual neural networks may change at varying rates, depending on factors such as the network's developmental stage or environmental stimuli. Therefore, this study investigates a broad range of rewiring frequencies, ranging from 0.0 to 1.0 × 10².

Computational Methods
As we need to quantify the degree of complete synchronization (CS) and phase synchronization (PS) of neural activity in the networks, we use the synchronization error E for CS and the Kuramoto order parameter R for PS [23,46], respectively given by

E = ⟨ (1/(N − 1)) Σ_{k=2}^{N} |v_k(t) − v_1(t)| ⟩_t,        (5)

R = ⟨ | (1/N) Σ_{k=1}^{N} exp[zΨ_k(t)] | ⟩_t,  with  Ψ_k(t) = 2πℓ + 2π (t − t_k^(ℓ)) / (t_k^(ℓ+1) − t_k^(ℓ)),  t_k^(ℓ) ≤ t < t_k^(ℓ+1),        (6)

where ⟨•⟩_t is the time average obtained over a large time interval [0, T]. In the argument of the exponential function, we have z = √−1, and the quantity Ψ_k(t) approximates the phase of the kth neuron and linearly increases over 2π from one spike to the next. We determine the spike time occurrences from the instant the membrane potential variable v_k crosses the threshold v_th = 0.5 from below. The norm of this complex exponential function is represented by |•|. The time at which the kth neuron fires its ℓth spike (ℓ = 0, 1, 2, ...) is represented by t_k^(ℓ). CS corresponds to the case when all neurons follow the same trajectory, yielding zero synchronization error, E = 0. The Kuramoto order parameter R ranges from 0 to 1, corresponding to the absence of PS and to complete PS (i.e., all neurons fire at precisely the same times), respectively. It is worth noting that the error E, which measures the degree of CS, uses all the actual values of the membrane variable v_k(t) (including subthreshold oscillations), while the Kuramoto order parameter uses only the spike times of v_k(t) to inform us about the synchronization of spiking times. Thus, the synchronization behavior of the neurons during CS can be very different from what happens during PS.
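These two measures can be sketched as follows; the exact normalization of E is an assumption (mean deviation of each trace from neuron 1), while R follows the standard Kuramoto construction with phases linearly interpolated between spikes.

```python
import numpy as np

def sync_error(V):
    """V: array (T, N) of membrane potentials. E = <mean_k |v_k - v_1|>_t."""
    return np.mean(np.abs(V[:, 1:] - V[:, :1]))

def phases_from_spikes(spike_times, t_grid):
    """Psi_k(t) = 2*pi*(l + (t - t_l)/(t_{l+1} - t_l)) between spikes l and l+1;
    zero outside the first/last spike for simplicity."""
    psi = np.zeros_like(t_grid)
    for l in range(len(spike_times) - 1):
        t0, t1 = spike_times[l], spike_times[l + 1]
        m = (t_grid >= t0) & (t_grid < t1)
        psi[m] = 2 * np.pi * (l + (t_grid[m] - t0) / (t1 - t0))
    return psi

def kuramoto_R(all_spikes, t_grid):
    """Time-averaged modulus of the mean complex phase factor."""
    Z = np.zeros_like(t_grid, dtype=complex)
    for st in all_spikes:
        Z += np.exp(1j * phases_from_spikes(st, t_grid))
    return np.mean(np.abs(Z)) / len(all_spikes)

t = np.linspace(0, 10, 1000)
# two neurons spiking at identical times -> perfect phase synchronization
spikes = [np.array([1.0, 3.0, 5.0, 7.0, 9.0])] * 2
R_sync = kuramoto_R(spikes, t)   # ~ 1.0 for identical spike trains
```

Feeding the same routine two anti-phase spike trains drives R well below 1, illustrating how R discriminates phase-locked from incoherent firing even when amplitudes are ignored.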
For N neurons, we numerically integrate Eqs. (1)-(3) with the STDP learning rule of Eq. (4) and the HSP rewiring models described above, using a standard fourth-order Runge-Kutta algorithm with a time step dt = 0.01 and a total integration time of T = 3.0 × 10³ units. The results shown in Section 4 below were averaged over 25 independent realizations for each set of parameter values and random initial conditions, to warrant reliable statistical accuracy with respect to the small-world and random network generations and the global stability of CS and PS. For each realization, we choose random initial points [v_k(0), w_k(0), φ_k(0)] for the kth (k = 1, ..., N) neuron with uniform probability distribution in the range of 45, 3.5]. It is worth pointing out that we have carefully excluded the transient behavior from the simulations for all the quantities calculated. After an initial transient time of T_0 = 2.4 × 10³ units, we start recording the values of the variables (v_k, w_k, φ_k) and the spiking times t_k^(ℓ) (ℓ ∈ ℕ counts the spiking times). Furthermore, the initial weights of all excitatory synapses are normally distributed in the interval [g_min, g_max] = [0.001, 0.5], with mean g_0 = 0.35 and standard deviation σ_0 = 0.01.
The flow of control in the simulations is presented in Table 2 and the Algorithm in the Appendix. The two outermost loops in the pseudo-code are over the parameters P and F, resulting in Fig. 1. Other parameters replace the parameter in the outermost loop (i.e., P) to obtain the results presented in the rest of the figures.
The global stability of CS and PS is analyzed using the basin stability measure B, defined as

B = ∫_Ω h(ω) g(ω) dω,

where Ω represents the set of all possible random perturbations ω, and h(ω) equals unity if the neural network converges to synchronized states after a perturbation ω and zero otherwise. The density of the perturbed states, represented by g(ω), satisfies the normalization condition ∫_Ω g(ω) dω = 1.
In our computation, we integrate the system for a sufficiently large number Q of realizations. Each realization is executed with random initial conditions drawn uniformly from a prescribed region of phase space. If q is the number of initial conditions that eventually arrive at the synchronous state, then the basin stability B for the synchronous state is estimated as q/Q. Thus, B is bounded in the unit interval [0, 1]: B = 0 indicates that the synchronized state is completely unstable, with the size of its basin of attraction tending to zero; B = 1 means that all sampled initial conditions are pulled to the synchronized state, implying a globally stable synchronized state; and 0 < B < 1 gives the probability (in the classical sense) of reaching the synchronous state for random initial conditions located in the prescribed region of phase space. We can also interpret 0 < B < 1 as the coexistence of synchronized and desynchronized states within a given region of phase space.
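The Monte-Carlo estimate B ≈ q/Q described above can be sketched generically as follows. The convergence test here is a toy stand-in (membership in the unit disc) replacing the full network integration; the function names are illustrative.

```python
import numpy as np

def basin_stability(converges, sample_ic, Q=500, rng=None):
    """Estimate B = q/Q: draw Q random initial conditions and count the
    fraction that converge (according to the supplied predicate)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    q = sum(converges(sample_ic(rng)) for _ in range(Q))
    return q / Q

# Toy example: declare a state "synchronized" if it falls in the unit disc,
# sampling uniformly from the square [-2, 2]^2 (a stand-in for the prescribed
# region of phase space).
sample = lambda rng: rng.uniform(-2.0, 2.0, size=2)
inside = lambda x: float(np.hypot(*x) <= 1.0)
B = basin_stability(inside, sample, Q=2000)
print(B)   # approx pi/16 ~ 0.196 for this toy basin
```

In the actual computation, `converges` would integrate the full network from the sampled initial condition and check E < 10⁻¹ (for B_E) or R > 0.9 (for B_R) after transients.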
As we indicated earlier, full synchronization is hardly attained in many real-world systems [67], including biological neurons, where we can have heterogeneous initial conditions and coupling strengths (which are controlled by STDP) and/or the presence of uncorrelated random perturbations. Even though the degree of synchronization could be very high (i.e., E ≤ δ with 0 < δ ≪ 1, or R ≥ δ_0 with 0 ≪ δ_0 < 1, as t → ∞), it is hardly full (i.e., it is hard to get exactly E = 0 and R = 1 as t → ∞). Thus, in our computations, we sample the phase space volume prescribed above and consider E < 10⁻¹ and R > 0.9 a satisfactory precision for CS and PS, respectively. In the rest of this paper, we use the notations B_E and B_R to distinguish between the basin stability measures of CS and PS, respectively.

Results
The purpose of our study is to examine the impact of the HSP, which is governed by the rewiring frequency parameter F, in conjunction with (i) the STDP, which is influenced by the adjusting rate parameter P, (ii) the time delay τ_c, (iii) the average degree connectivity ⟨k⟩, and (iv) the rewiring probability β, on the degree of CS and PS in small-world and random networks. Our findings on the alterations of the degree of CS and PS are summarized in Table 1.

Combined effects of F and P
From many previous research works, it is well established that the strength of the coupling between oscillators (neurons included) is crucial for their synchronization.Essentially, if the coupling strength is zero or below a non-zero threshold, the oscillators cannot synchronize or achieve a certain degree of synchronization.Thus, Table 1 Summary of the relevant combined effects of P , F , and network parameters on the degree of CS and PS.The inclined arrow ր or ց represents an increase or a decrease, respectively, in the parameter value in the interval indicated and the degree of synchronization.The vertical arrow ↑ or ↓ indicates, respectively, that the high or low degree of synchronization stays high or low as the parameters are varied.

Topology STDP parameter
Network parameters HSP parameter Degree of CS Degree of PS Small-world for a better understanding of synchronization as a function of the STDP parameter P , which controls the modification of the synaptic coupling strengths and F , it is necessary to first investigate how the average synaptic weight G given in Eq. ( 7) varies with P and F .
where ⟨·⟩_t denotes the average over time, g_ij(t) ∈ [0.001, 0.5], and g_ij(t = 0) ∼ N(0.35, 0.01). In Figs. 1(a) and (b), we present the variation of G as a function of P and F in Watts-Strogatz small-world (β = 0.25) and completely random (β = 1) networks, respectively. We observe that increasing P weakens the average synaptic weight in both small-world and random networks, and at the same time, for a given value of P, increasing F has no significant effect on the average synaptic weight. One major difference between the two topologies is that the weakening of synapses after STDP is significantly stronger in the random network, with the average synaptic weight reaching a value as low as G = 0.0718, compared to G = 0.102 in the small-world network.
The fact that the synapses strengthen with decreasing P, while depression of the synaptic weights remains dominant overall (as D/P increases and G never exceeds the mean of the initial synaptic weight distribution N(0.35, 0.01)), is in agreement with experimental studies [24,32]. Hence, we expect that decreasing P would favor synchronization.
The variation of G in Fig. 1 is robust, as extensive numerical simulations (not shown) indicate that G displays the same qualitative behavior with respect to P and F for other values of the synaptic time delay τ_c ∈ [0, 80], average degree connectivity ⟨k⟩ ∈ [2, 30], rewiring probability β ∈ (0, 1], and network size N ∈ [80, 120]. Thus, even though G is not a variable of main interest in our study, it is worth pointing out that the way the dynamics of G relate to the degree of CS and PS can be inferred from its inverse and monotonic variation with P. In the next subsections, we will investigate the combined effect of F and a network parameter (τ_c, ⟨k⟩, β) on synchronization at the smallest value of P (= 1.0 × 10^-6), i.e., the largest value of the average synaptic strength (G = 0.35) in the network.
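To make the definition of G concrete, the following sketch (not the authors' code) computes the time-averaged mean synaptic weight from weights initialized as N(0.35, 0.01) and clipped to [0.001, 0.5], as stated above. The STDP update itself is replaced here by a hypothetical depression-dominated random drift, since only its net depressive effect matters for G.

```python
# Sketch of the time-averaged mean synaptic weight G = <<g_ij>_ij>_t of
# Eq. (7). Assumption: the per-step weight change is modelled as a small
# random drift whose mean is negative (depression dominates), standing in
# for the actual STDP rule.
import numpy as np

rng = np.random.default_rng(0)
N, steps = 100, 200
g = np.clip(rng.normal(0.35, 0.01, size=(N, N)), 0.001, 0.5)  # g_ij(0)

G_samples = []
for _ in range(steps):
    # hypothetical net STDP drift: depression slightly outweighs potentiation
    g = np.clip(g + rng.normal(-1e-4, 5e-4, size=g.shape), 0.001, 0.5)
    G_samples.append(g.mean())        # mean over all synapses at this step

G = np.mean(G_samples)                # time average of the mean weight
print(round(G, 3))
```

With a depression-dominated drift, G settles below the initial mean 0.35, mirroring the observation that G never exceeds the mean of the initial weight distribution.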
In Figs. 2(a) and (b), we show, respectively, the time series of a few spiking neurons and the spatiotemporal pattern of all the spiking neurons in the small-world network of Fig. 1(a), when the STDP parameter is relatively large, i.e., P = 1.0 × 10^-3, leading to a weak average synaptic strength G ≈ 0.1. In these figures, it can be seen that the neurons exhibit a poor degree of CS (see the red curve in Fig. 3(b)) and a poor degree of PS at early times of the time series (see the red curve in Fig. 3(c)) due to the weak average synaptic strength (see the red curve in Fig. 3(a)). Figures 2(c) and (d) display the time series of a few spiking neurons and the spatiotemporal pattern of all spiking neurons in the random network of Fig. 1(b), when the STDP parameter is relatively small, i.e., P = 1.0 × 10^-6, leading to a stronger average synaptic strength G ≈ 0.35. In this case, the neurons exhibit a good degree of CS (see the blue curve in Fig. 3(b)) and a good degree of PS (see the blue curve in Fig. 3(c)) due to the stronger average synaptic strength (see the blue curve in Fig. 3(a)).
The red curve in Fig. 3(a) represents the time series of the average synaptic weight G of the small-world network (β = 0.25) when the STDP parameter is relatively large, P = 1.0 × 10^-3, just as in Figs. 2(a) and (b). In this case, we can see that G saturates at a relatively low value. Hence the poor degree of CS, as indicated in Figs. 2(a) and (b), and the relatively high synchronization error E represented by the red curve in Fig. 3(b). Furthermore, for this same value of P (= 1.0 × 10^-3), we also observe a poor degree of PS, measured by the relatively low Kuramoto order parameter R represented by the red curve in Fig. 3(c). However, towards the end of the time series in Fig. 3(c), the red curve increases from a relatively low value to values near 1, indicating a better degree of PS. This explains why, in Fig. 2(a), some neurons tend to synchronize their spiking times towards the end of the time series, leading to a higher degree of PS. Nevertheless, it is worth noting that CS is still very poor, as most neurons have synchronized only their spiking times and not the traces of their membrane potentials.
Furthermore, we observe that towards the end of the time series in Fig. 3(a), there is no growth in the average synaptic strength G. Hence, the synaptic strength is not responsible for this improvement in the degree of PS toward the end of the time series in Fig. 2(a) and of the red curve in Fig. 3(c). This behavior is explained by the fact that our oscillators (the FHN neurons in Eq. (1)) are identical. Thus, given a sufficiently long transient time, identical oscillators with weak coupling can still synchronize because of the similarity of their attractors in phase space. In this case, the oscillators adjust their phases to align in a specific relationship, while their amplitudes may differ (hence the poor degree of CS). This PS occurs due to the shared properties of the oscillators, such as identical parameter values, natural frequencies, and similar dynamical behaviors. When the coupling between the identical oscillators weakens, their interaction is not strong enough to force CS. However, some or most oscillators may occasionally achieve a state of PS where their phases become correlated, as at the end of the time series in Fig. 2(a) (where the last spiking times of some neurons coincide), leading to a higher degree of PS, as indicated by the higher values of R toward the end of the time series in Fig. 3(c).
On the other hand, the blue curve in Fig. 3(a) represents the time series of the average synaptic weight G of the random network (β = 1) when the STDP parameter is relatively small, P = 1.0 × 10^-6, just as in Figs. 2(c) and (d). In this case, we can see that G saturates at a relatively high value. Hence the high degree of CS, as indicated in Figs. 2(c) and (d), and the relatively low synchronization error E represented by the blue curve in Fig. 3(b). Furthermore, for this same value of P (= 1.0 × 10^-6), we also observe a high degree of PS, measured by the relatively high Kuramoto order parameter R represented by the blue curve in Fig. 3(c).
In the rest of the paper, we present the behaviors of CS and PS in the small-world and random networks as a function of each network parameter and the network rewiring frequency F, at the best STDP parameter value (i.e., P = 1.0 × 10^-6) for both types of synchronization. In Figs. 4(a) and (b), we depict the variations in the degree of CS and PS as a function of P and F in a small-world network (β = 0.25), respectively. It is evident from these two figures that decreasing the value of P (i.e., strengthening the average synaptic weights in the network after STDP, as shown in Fig. 1) enhances the degree of CS (i.e., E → 0) and the degree of PS (i.e., R → 1). At the same time, for any given value of P, increasing the value of F has no significant effect on the degree of CS and PS, except in the case of CS for very small values of P (≈ 10^-6), where increasing F occasionally enhances the degree of CS to almost full synchrony (E ≈ 0). This implies that when the average synaptic weight is strong, a more rapidly changing small-world network can achieve larger windows of CS. Comparing the degrees of CS and PS, we observe that a relatively weaker average synaptic weight (controlled by P) suffices to achieve a high degree of PS (shown in light yellow), whereas CS requires a much stronger average synaptic weight to attain a high degree.
In Figs. 4(c) and (d), we present the basin stability of CS and PS corresponding to Figs. 4(a) and (b), respectively. Figure 4(c) indicates that the highest degrees of CS (i.e., the dark blue regions in Fig. 4(a), with P ≈ 10^-6 and E < 10^-1) are not globally stable (i.e., B_E < 1) in the prescribed region of phase space. Instead, we have the co-existence of a desynchronized state and a synchronized state, the latter being more probable than the former since 0.7 < B_E < 1. Furthermore, it can be observed in Fig. 4(c) that when P ≈ 10^-6, increasing F leads to an increase in B_E, indicating that a small-world network with more rapidly switching synapses and a strong average synaptic weight after STDP will yield a globally stable CS. Figure 4(d) indicates that the highest degree of PS achieved in Fig. 4(b) (light yellow regions) is globally stable (i.e., B_R ≈ 1) for slightly lower values of P. It can also be seen that for 10^-6 < P < 10^-5, increasing F yields an increase in B_R from 0.6 to almost 1, indicating that, just as with CS, rapidly switching synapses increase the basin stability of PS. Moreover, comparing the basin stabilities of CS and PS, it is clear that PS is more stable than CS in the above-prescribed region of phase space. Qualitatively similar results (not shown) are obtained for the random network (β = 1). In the following sections, when we refer to the optimal value of P, we specifically mean P = 1.0 × 10^-6; the results in Fig. 4 indicate that this value of P yields the highest degrees of CS and PS.
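The basin stability measures B_E and B_R discussed above follow the standard Monte Carlo recipe: draw Q initial conditions from the prescribed phase-space volume, integrate each, and count the fraction that ends up below the CS tolerance or above the PS tolerance. A sketch of this recipe is given below; the `simulate` callback is a toy stand-in, not the FHN network, and the sampling interval is an illustrative assumption.

```python
# Sketch of the basin-stability estimate B_E = C_E/Q (CS) and B_R = C_R/Q
# (PS). The toy `simulate` below replaces the actual network integration:
# it declares initial conditions inside a ball of radius 2 "synchronized".
import numpy as np

def basin_stability(simulate, Q=100, tol_E=1e-1, tol_R=0.9, seed=1):
    """Monte Carlo basin stability for CS and PS over Q initial conditions."""
    rng = np.random.default_rng(seed)
    C_E = C_R = 0
    for _ in range(Q):
        x0 = rng.uniform(-2, 2, size=3)   # sampled initial condition
        E, R = simulate(x0)               # final-state measures of this run
        C_E += E < tol_E                  # count CS successes
        C_R += R > tol_R                  # count PS successes
    return C_E / Q, C_R / Q

def toy_simulate(x0):
    """Stand-in dynamics: synchronize iff the initial condition is close in."""
    sync = np.linalg.norm(x0) < 2.0
    return (0.01, 0.99) if sync else (0.5, 0.3)

B_E, B_R = basin_stability(toy_simulate, Q=200)
```

Because the toy's CS and PS outcomes coincide, B_E equals B_R here; in the actual network they generally differ, which is precisely what Figs. 4(c) and (d) compare.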
In summary, in the P − F parameter plane, decreasing P (which enhances the strengthening effect of STDP on the average synaptic weight) and increasing F (which speeds up the swapping rate of synapses between neurons) lead to a more stable and higher degree of CS and PS in both the small-world and random networks, provided that τ_c, β, and ⟨k⟩ are fixed at suitable values.
Combined effect of F and τ_c at the optimal P

In Figs. 5(a) and (b), we present the variations in the degree of CS and PS as a function of the synaptic time delay τ_c ∈ [0, 160] and F at the optimal value of P in a small-world network. The results indicate that the small-world network exhibits intermittent CS and PS, irrespective of the switching frequency of synapses F. Next, we provide a mathematical explanation for the intermittent CS and PS as τ_c increases. First, we recall that if a deterministic delay differential equation, generally written as ẋ = f(x(t), x(t − τ_c)), where τ_c is the time delay, possesses a solution x(t) with period τ, then x(t) also solves ẋ = f(x(t), x(t − τ_c − nτ)) for all positive integers n ∈ ℕ. It then suffices to check whether the distance between the horizontal bands of maximum degree of CS and PS in Figs. 5(a) and (b) matches the average (over the total number of neurons) inter-spike interval (ISI), i.e., the period of the neural activity, which is computed to be ISI ≈ 80. It is observed from Fig. 5(a) that the three deep blue horizontal bands, where the network exhibits the highest degree of CS, are equidistant, with the distance between consecutive bands given by τ ≈ 80 ≈ ISI. Hence, the synchronization pattern for CS repeats itself after waiting times nτ, n = 0, 1, 2, .... The same explanation applies to the case of PS in Fig. 5(b).
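The delay-resonance argument above reduces to two computable quantities: the network-average inter-spike interval, and the distance of τ_c from its nearest multiple of that interval. A sketch on synthetic spike trains follows; the 5% relative tolerance is an illustrative choice, not a value from the paper.

```python
# Sketch: estimate the network-average ISI from spike trains and test the
# resonance condition tau_c ≈ n * ISI behind the recurring bands of high
# CS/PS (here ISI ≈ 80, matching the value reported in the text).
import numpy as np

def mean_isi(spike_trains):
    """Average inter-spike interval over a list of spike-time arrays."""
    isis = [np.diff(st) for st in spike_trains if len(st) > 1]
    return np.mean([isi.mean() for isi in isis])

def near_resonance(tau_c, isi, rel_tol=0.05):
    """True if tau_c lies within rel_tol of a positive multiple of isi."""
    n = round(tau_c / isi)
    return n > 0 and abs(tau_c - n * isi) <= rel_tol * isi

# Synthetic periodic spiking with period 80, small offsets between neurons
trains = [np.arange(0.0, 2000.0, 80.0) + i * 0.5 for i in range(5)]
isi = mean_isi(trains)
print(near_resonance(80.0, isi), near_resonance(120.0, isi))  # → True False
```

A delay of 80 sits on a resonance band, while 120 falls between bands, consistent with the equidistant high-synchrony bands in Figs. 5(a) and (b).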
Figures 5(c) and (d) display the basin stability measures of CS and PS presented in Figs. 5(a) and (b), respectively. It can be observed from Fig. 5(c) that higher rewiring frequencies increase the basin stability of CS, especially at intermediate time delays, i.e., at τ_c ≈ 80. Furthermore, we can again see that the highest degree of CS is less stable than that of PS. In the case of the random network (β = 1), we obtained qualitatively similar results (not shown).
In summary, in the τ_c − F parameter plane, both small-world and random networks display intermittent CS and PS as τ_c increases, with the highest degrees of CS and PS occurring when the synaptic time delay τ_c is a multiple of the average inter-spike interval of the network.

Combined effect of F and k at the optimal P
In Figs. 6(a) and (b), we depict the variations in the degree of CS and PS, respectively, as a function of ⟨k⟩ and F in a small-world network (β = 0.25) at the optimal value of P indicated. The results suggest that higher values of the average degree connectivity (⟨k⟩ > 8 for CS and ⟨k⟩ > 5 for PS) yield a high degree of CS and PS, irrespective of the rewiring frequency F. Qualitatively similar results are obtained for the random network (β = 1) in Figs. 7(a) and (b). These behaviors can be explained by the fact that in a random network, neurons interact, on average, with as many distant as nearest neighbors, while in the small-world network (with β = 0.25), most neurons interact only with their nearest neighbors and relatively few distant neighbors. These fewer long-range interactions in the small-world network reduce the degree of synchronization. In summary, in the ⟨k⟩ − F parameter plane, lower values of F and higher values of ⟨k⟩ yield higher and more stable degrees of CS and PS in the small-world network, while higher values of F and higher values of ⟨k⟩ yield higher and more stable degrees of CS and PS in the random network.
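For reference, the Watts-Strogatz construction underlying the β-dependence discussed throughout can be sketched as follows. This is a minimal adjacency-matrix implementation for illustration, not the authors' code: each neuron starts wired to its ⟨k⟩ nearest neighbors on a ring, and each edge is rewired to a random target with probability β, so that β = 1 recovers the random network and β = 0.25 the small-world case used above.

```python
# Sketch of the Watts-Strogatz small-world model: ring lattice of degree k,
# then each edge rewired with probability beta (no self-loops, no duplicate
# edges, edge count preserved).
import numpy as np

def watts_strogatz(N, k, beta, seed=0):
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):                       # ring lattice: k nearest neighbours
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % N] = A[(i + j) % N, i] = True
    for i in range(N):                       # rewiring pass over ring edges
        for j in range(1, k // 2 + 1):
            if rng.random() < beta:
                old = (i + j) % N
                choices = [m for m in range(N) if m != i and not A[i, m]]
                if choices:                  # swap edge (i, old) -> (i, new)
                    new = rng.choice(choices)
                    A[i, old] = A[old, i] = False
                    A[i, new] = A[new, i] = True
    return A

A = watts_strogatz(100, 10, 0.25)
print(A.sum() // 2)   # → 500, i.e. N*k/2 edges are preserved
```

Because rewiring removes exactly one edge for each edge it adds, ⟨k⟩ is the same for every β; only the proportion of long-range shortcuts changes, which is what the β − F results probe.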
Combined effect of F and β at the optimal P

In Figs. 8(a) and (b), we show the variations in the degree of CS and PS, respectively, as a function of β ∈ [0.05, 1] and F at the optimal value of P indicated. It can be seen that the degrees of CS and PS are relatively low (i) for small-world networks built with a low rewiring probability (i.e., β < 0.1) and slowly switching synapses (i.e., F < 10^-3) and (ii) for almost all small-world networks with rapidly switching synapses (i.e., F > 1). For the random network (i.e., β = 1), the degrees of CS and PS stay relatively high irrespective of F. In Figs. 8(c) and (d), we present the basin stability measures of CS and PS corresponding to Figs. 8(a) and (b), respectively. It can be observed that CS is more stable for small-world networks with a higher number of random shortcuts (i.e., higher rewiring probability, β > 0.4) and intermediate rewiring frequencies (i.e., 10^-2 < F < 10^0). For the completely random network (i.e., β = 1), we have more stable CS over a wider range of the rewiring frequency (i.e., 10^-2 < F < 10^2). The degree of PS in Fig. 8(b) shows similar behavior. In summary, in the β − F parameter plane, higher values of β ∈ [0.05, 1] and intermediate values of F yield higher and more stable degrees of CS and PS (i.e., the random network yields better synchronization than small-world networks).

Conclusions
This paper investigated the properties of two important phenomena, complete synchronization (CS) and phase synchronization (PS), in adaptive small-world and random neural networks. These networks were driven by two adaptive rules: spike-timing-dependent plasticity (STDP) and homeostatic structural plasticity (HSP). Our study yielded valuable insights into the factors that significantly affect the degree and stability of CS and PS. We found that several parameters, including the potentiation rate parameter P for STDP, the rewiring frequency parameter F for HSP, and the network parameters, namely the synaptic time delay τ_c, the average degree connectivity ⟨k⟩, and the rewiring probability β, play a crucial role in shaping the dynamics and stability of CS and PS.
Our results consistently demonstrated that PS exhibits greater stability than CS. This observation is particularly significant because precise spike timing is known to be crucial for information processing in neural systems [60]. The greater stability of PS, as indicated by the basin stability measure, may explain why neurons rely on the precise timing of spikes to encode information rather than on the trace of the voltage (represented by the actual values of the voltage v), which is used to evaluate the degree of CS through the error E.
Furthermore, recent experiments have shown that the modulation of STDP can be influenced by signaling molecules such as acetylcholine [27]. Additionally, advances in neuroscience research have made it possible to manipulate synapse control in the brain using drugs that affect neurotransmitters [59] or optical fibers that selectively stimulate genetically engineered neurons [58]. Consequently, our findings hold practical implications for optimizing neural information processing through synchronization in experimental settings and for designing artificial neural circuits that enhance signal processing through synchronization.

Definition of notations used in the Algorithm (continued in Table 2):
t_i^n: nth spike time of the ith neuron
r_q: order parameter of the qth realization
e_q: synchronization error of the qth realization
g_q: mean synaptic weight of the qth realization
B_E^q: basin stability of CS for the qth realization
B_R^q: basin stability of PS for the qth realization
Tol_E: tolerance value of e_q for CS

Fig. 1 Variation of the average synaptic weight G as a function of P and F in (a) small-world (β = 0.25) and (b) random (β = 1) networks. In both topologies, decreasing P strengthens the average synaptic weight after STDP learning, while F has no significant effect on G, especially at larger P. Parameter values: ⟨k⟩ = 10, τc = 0.0, N = 100.

Fig. 2 Time series of some neurons' membrane potentials in (a) and the corresponding spatiotemporal pattern in (b) of a small-world network (β = 0.25) with P = 1.0 × 10^-3, exhibiting a poor degree of CS and PS. Time series of some neurons' membrane potentials in (c) and the corresponding spatiotemporal pattern in (d) of the random network (β = 1) with P = 1.0 × 10^-6, exhibiting a high degree of CS and PS. Parameter values: F = 100, ⟨k⟩ = 10, τc = 0.0, N = 100.

Fig. 3 Time series of the average synaptic weight G in (a) for small-world (β = 0.25, red curve) and random (β = 1.0, blue curve) networks at the indicated STDP parameter values P. Time series of the average error of the membrane potential traces in (b) for the same networks and P values. Time series of the Kuramoto order parameter R in (c) for the same networks and P values. Parameter values: F = 100, ⟨k⟩ = 10, τc = 0.0, N = 100.

Fig. 6 Variation in the degree of synchronization and the corresponding global stability w.r.t. ⟨k⟩ and F in a small-world network with the optimal STDP parameter value P. (a) and (c): degree of CS and the corresponding basin stability measure. (b) and (d): degree of PS and the corresponding basin stability measure. Parameter values: β = 0.25, τc = 3.0, N = 100.

In Figs. 7(c) and (d), we present the basin stability measures of CS and PS corresponding to Figs. 7(a) and (b), respectively. It is evident that CS and PS are more stable in the random network than in the small-world network depicted in Figs. 6(c) and (d).

Fig. 7 Variation in the degree of synchronization and the corresponding global stability w.r.t. ⟨k⟩ and F in the completely random network with the optimal STDP parameter value P. (a) and (c): degree of CS and the corresponding basin stability measure. (b) and (d): degree of PS and the corresponding basin stability measure. Parameter values: β = 1.0, τc = 3.0, N = 100.
Comparing Figs. 8(c) and (d), we see that PS is more stable than CS, both in terms of the size of the region where B_E and B_R attain their maximum values and in terms of the actual maximum values of B_E and B_R.

Table 2
Definition of notations used in the Algorithm (continued):
Tol_R: tolerance value of r_q for PS
C_E: number of initial conditions that finally arrive at Tol_E
C_R: number of initial conditions that finally arrive at Tol_R
E: average synchronization error over Q
R: average order parameter over Q
G: average of mean synaptic weights over Q
B_E: basin stability measure for CS
B_R: basin stability measure for PS

Algorithm (fragment):
ℓ_ij(t) ← ℓ̃_ij(t) ; // Update the adjacency matrix with ℓ̃_ij(t), obtained by randomly rewiring ℓ_ij(t) with frequency F according to the small-world (β ∈ (0, 1)) or random (β = 1) network rewiring model
e_q ← ⟨ √((v_i − v_1)² + (w_i − w_1)² + (φ_i − φ_1)²) ⟩_t ; // synchronization error of the qth realization
r_q ← ⟨ |⟨ e^{zΦ_j(t)} ⟩_j| ⟩_t, with Φ_j(t) built from the spike times t_j^n and z = √−1 ; // order parameter of the qth realization
C_E ← 0 ; C_R ← 0 ;
if e_q < Tol_E then C_E ← C_E + 1 ; // count the initial conditions that finally reach the tolerance for CS
if r_q > Tol_R then C_R ← C_R + 1 ; // count the initial conditions that finally reach the tolerance for PS
Add e_q to e ; Add g_q to g ; Add r_q to r ;
E ← e/Q ; // Compute the average of E over Q
G ← g/Q ; // Compute the average of G over Q
R ← r/Q ; // Compute the average of R over Q
B_E ← C_E/Q ; // Compute the basin stability measure for CS
B_R ← C_R/Q ; // Compute the basin stability measure for PS
F ← F + ∆F ; // Increment the HSP control parameter
P ← P + ∆P ; // Increment the STDP control parameter
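The adjacency-matrix update step in the algorithm above can be sketched as follows. This is one plausible reading of "rewiring with frequency F" (each existing synapse independently swaps its target with probability F·dt per time step, keeping each neuron's out-degree fixed); it is an assumption for illustration, not the authors' code.

```python
# Sketch of one HSP rewiring step on a directed boolean adjacency matrix A
# (no self-loops). Each synapse swaps to a fresh random target with
# probability F*dt, so out-degrees and the total edge count are preserved
# while the wiring diagram changes at rate F.
import numpy as np

def hsp_rewire(A, F, dt, rng):
    """Return a rewired copy of A after one HSP step of length dt."""
    N = A.shape[0]
    B = A.copy()
    for i in range(N):
        for j in np.flatnonzero(A[i]):         # existing synapses of neuron i
            if rng.random() < F * dt:          # this synapse swaps its target
                free = np.flatnonzero(~B[i])   # currently absent targets
                free = free[free != i]         # forbid self-connections
                if free.size:
                    B[i, j] = False
                    B[i, rng.choice(free)] = True
    return B

rng = np.random.default_rng(2)
A = rng.random((50, 50)) < 0.2
np.fill_diagonal(A, False)
B = hsp_rewire(A, F=10.0, dt=0.01, rng=rng)
print(A.sum() == B.sum())   # → True: edge count (and each out-degree) preserved
```

Larger F·dt swaps a larger fraction of synapses per step, which is the knob the F-axis of every parameter plane in the paper varies.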