The stochastic FitzHugh–Nagumo neuron model in the excitable regime embeds a leaky integrate-and-fire model

In this paper, we provide a complete mathematical construction for a stochastic leaky integrate-and-fire (LIF) model mimicking the interspike interval (ISI) statistics of a stochastic FitzHugh–Nagumo (FHN) neuron model in the excitable regime, where the unique fixed point is stable. For specific types of noise, we prove that there exists a global random attractor for the stochastic FHN system. The linearization method is then applied to estimate the firing time and to derive the associated radial equation, which is a LIF equation. This result confirms the prediction of Ditlevsen and Greenwood (J Math Biol 67(2):239–259, 2013) for the Morris-Lecar neuron model in the bistable regime consisting of a stable fixed point and a stable limit cycle.


Introduction
Mathematical modeling has emerged as an important tool to handle the overwhelming structural complexity of neuronal processes and to gain a better understanding of their functioning from the dynamics of their model equations. However, the mathematical analysis of biophysically realistic neuron models such as the 4-dimensional Hodgkin-Huxley (HH) (1952) and the 2-dimensional Morris-Lecar (ML) (1981) equations is difficult, as a result of a large parameter space, strong nonlinearities, and a high dimensional phase space of the model equations. The search for simpler, mathematically tractable (small parameter space, weaker nonlinearities, low dimensional phase space) neuron models that still capture all, or at least some important dynamical behaviors of biophysical neurons (HH and ML) has been an active area of research.
The efforts in this area of research have resulted in easily computable neuron models which mimic some of the dynamics of biophysical neuron models. One of the resulting models is the 2-dimensional FitzHugh-Nagumo (FHN) neuron model (FitzHugh 1961). The FHN model has been so successful because it is mathematically simple yet produces rich dynamical behavior: it reproduces the main dynamical features of the HH model, which makes it a model system in many regards. In fact, the HH model has two types of variables, and each type is combined into a single variable in FHN: the (V, m) variables of HH correspond to the v variable in FHN, whose fast dynamics represents excitability; the (h, n) variables correspond to the w variable, whose slow dynamics represents accommodation and refractoriness.
The fact that the FHN model is low dimensional makes it possible to visualize the solution and to explain in geometric terms important phenomena related to the excitability and action potential generation mechanisms observed in biological neurons. Of course, this comes at the expense of numerical agreement with the biophysical neuron models (Yamakou 2018). The purpose of the model is not a close match with biophysically realistic high dimensional models, but rather a mathematical explanation of the essential dynamical mechanism behind the firing of a neuron. Moreover, the analysis of such simpler neuron models may lead to the discovery of new phenomena, for which we may then search in the biological neuron models and also in experimental preparations.
There is, however, an even simpler model than FHN, the leaky integrate-and-fire (LIF) model. This is the simplest reasonable neuron model. It only requires a few basic facts about nerve cells: they have membranes, they are semipermeable, and they are polarizable. This suffices to deduce a circuit equivalent to that of the membrane potential of the neuron: a resistor-capacitor circuit. Such circuits charge up slowly when presented with a current, cross a threshold voltage (a spike), and then slowly discharge. This behavior is modeled by a simple 1D equation together with a reset mechanism: the leaky integrate-and-fire neuron model equation (Gerstner and Kistler 2002). Combining sub-threshold dynamics with firing rules has led to a variety of 1D leaky integrate-and-fire descriptions of a neuron with a fixed membrane potential firing threshold (Gerstner and Kistler 2002; Lansky and Ditlevsen 2008) or with a firing rate depending more sensitively on the membrane potential (Pfister et al. 2006). In contrast to n-dimensional neuron models, n ≥ 2, such as the HH, ML, and FHN models, the LIF class of neuron models is less expensive in numerical simulations, which is an essential advantage when a large network of coupled neurons is considered.
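The charge-integrate-reset description above can be sketched in a few lines of code. The following is a minimal illustrative sketch (all parameter values are hypothetical, not taken from the literature cited here): the membrane potential follows a leaky RC equation with a small noise term, fires when it crosses a fixed threshold, and is then reset.

```python
import numpy as np

def lif_simulate(I=1.5, tau=10.0, v_th=1.0, v_reset=0.0, dt=0.01, T=100.0, seed=0):
    """Leaky integrate-and-fire sketch: tau * dv/dt = -v + I plus small noise,
    with reset at the threshold.  Returns the membrane trace and spike times."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.empty(n)
    v[0] = v_reset
    spikes = []
    for k in range(1, n):
        # Euler step of the leaky RC dynamics plus an illustrative noise term
        v[k] = v[k-1] + dt * (-v[k-1] + I) / tau \
               + 0.05 * np.sqrt(dt) * rng.standard_normal()
        if v[k] >= v_th:          # spike: threshold crossing ...
            spikes.append(k * dt)
            v[k] = v_reset        # ... followed by reset
    return v, spikes
```

With a supra-threshold input current, the deterministic part alone reaches the threshold repeatedly, so the model produces a regular spike train perturbed by the noise.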
Noise is ubiquitous in neural systems and it may arise from many different sources. One source may come from synaptic noise, that is, the quasi-random release of neurotransmitters by synapses or random synaptic input from other neurons. As a consequence of synaptic coupling, real neurons operate in the presence of synaptic noise. Therefore, most works in computational neuroscience address modifications in neural activity arising from synaptic noise. Its significance can however be judged only if its consequences can be separated from the internal noise, generated by the operations of ionic channels (Calvin and Stevens 1967). The latter is channel noise, that is, the random switching of ion channels. In many papers channel noise is assumed to be minimal, because typically a large number of ion channels is involved and fluctuations should average out, and therefore, the effects of synaptic noise should dominate. Consequently, channel noise is frequently ignored in the mathematical modeling. However, the presence of channel noise can also greatly modify the behavior of neurons (White et al. 2000). Therefore, in this paper, we study the effect of channel noise. Specifically, we add a noise term to the right-hand side of the gating equations (the equation for the ionic current variable).
In the stochastic model, the deterministic fixed point is no longer a solution of the system. The fixed point necessarily needs to vary and adapt to the noise. To account for this, in the theory of random dynamical systems, the notion of a random dynamical attractor was developed as a substitute for deterministic attractors in the presence of noise. In the first part of this paper, we therefore prove that our system admits a global random attractor, for both additive and multiplicative channel noises. This can be seen as a theoretical grounding of our setting.
In Ditlevsen and Greenwood (2013), it was shown that a stochastic LIF model constructed from a radial Ornstein-Uhlenbeck process is embedded in the ML model (in a bistable regime consisting of a fixed point and a limit cycle) as an integral part of it, closely approximating the sub-threshold fluctuations of the ML dynamics. This result suggests that the firing pattern of a stochastic ML model can be recreated using the embedded LIF model together with an ML stochastic firing mechanism. The LIF model embedded in the ML model captures the sub-threshold dynamics of a combination of the membrane potential and ion channels. Therefore, results that can be readily obtained for LIF models can also yield insight into ML models. In the second part of this paper, we address the problem of obtaining a stochastic LIF model mimicking the interspike interval (ISI) statistics of the stochastic FHN model in the excitable regime, where the unique fixed point is stable. Theoretically, we obtain such a LIF model by reducing the 2D FHN model to a one-dimensional system that models the distance of the solution to the random attractor constructed in the first part of the paper. In fact, we show that, up to a rescaling, this distance to the fixed point can be approximated by the Euclidean norm R_t of the solution of the linearization of the stochastic FHN equation at the deterministic equilibrium point, and hence the LIF model is approximated by the equation for R_t. An action potential (a spike) is produced when R_t exceeds a certain firing threshold, R_t ≥ r_0 > 0. After firing, the process is reset and time starts again from zero. The ISI τ_0 is identified with the first-passage time of the threshold, τ_0 = inf{t > 0 : R_t ≥ r_0}, which acts as an upper bound for the spiking time τ of the original system.
By defining firing as a series of first-passage times, the 1D radial process R_t, together with a simple firing mechanism based on the detailed FHN model (in the excitable regime), is shown to reproduce the ISI distribution of the 2D FHN model. We also show that τ and τ_0 share the same distribution.
The rest of the paper is organized as follows: Sect. 2 introduces the deterministic version of the FHN neuron model, where we determine the parameter values for which the model is in the excitable regime. In Sect. 3, we prove the existence of a global random attractor of the random dynamical system generated by the stochastic FHN equation; and furthermore derive a rough estimate for the firing time using the linearization method. The corresponding stochastic LIF equation is then derived in Sect. 4 and its distribution of interspike-intervals is found to numerically match the stochastic FHN model.

The deterministic model and the excitable regime
In the fast time scale t, the deterministic FHN neuron model is

dv_t/dt = v_t − v_t³/3 − w_t + I =: f(v_t, w_t),
dw_t/dt = ε (v_t + α − β w_t) =: g(v_t, w_t),    (2.1)

where v_t is the activity of the membrane potential and w_t is the recovery current that restores the resting state of the model. I is a constant bias current which can be considered as the effective external input current. 0 < ε := t/τ ≪ 1 is a small singular perturbation parameter which determines the time scale separation between the fast time scale t and the slow time scale τ; thus, the dynamics of v_t is much faster than that of w_t. α and β are parameters. The deterministic critical manifold C_0, defining the set of equilibria of the layer problem associated to Eq. (2.1) (i.e., the equation obtained from Eq. (2.1) in the singular limit ε = 0; see Kuehn (2015) for a comprehensive introduction to slow-fast analysis), is obtained by solving f(v, w) = 0 for w. Thus, it is given by

C_0 = {(v, w) ∈ ℝ² : w = v − v³/3 + I}.

We note that for Eq. (2.1), C_0 coincides with the v-nullcline (the red curve in Fig. 1). The stability of points on C_0 as steady states of the layer problem associated to Eq. (2.1) is determined by the Jacobian scalar ∂f/∂v = 1 − v². This shows that on the critical manifold, points with |v| > 1 are stable while points with |v| < 1 are unstable.
The set of fixed points (v_e, w_e), which define the resting states of the neuron, is given by the intersection of C_0 with the w-nullcline w = (v + α)/β. The sign of the discriminant Δ = (1/β − 1)³ + (9/4)(α/β − I)² determines the number of fixed points: C_0 can intersect the w-nullcline at one, two or three different fixed points. We assume in this paper that Δ > 0, in which case there is a unique fixed point (v_e, w_e). Here, we want to consider the neuron in the excitable regime (Ditlevsen and Greenwood 2013). A neuron is in the excitable regime when, starting in the basin of attraction of a unique stable fixed point, an external pulse results in at most one large excursion (spike) into the phase space, after which the phase trajectory returns to this fixed point and stays there (Izhikevich 2007).
In order to have Eq. (2.1) in the excitable regime, we choose I, α, and β such that Δ > 0 (i.e., a unique fixed point) and ε such that the Jacobian (the linearization matrix M) of Eq. (2.1) at the fixed point (v_e, w_e) has a pair of complex conjugate eigenvalues λ = −μ ± iν with negative real part −μ < 0 (i.e., a stable fixed point). In that case, (v_e, w_e) is the only stationary state and there is no limit cycle of system (2.1). In other words, (v_e, w_e) is the global attractor of the system (Izhikevich 2007). Moreover, to apply the averaging technique (Baxendale and Greenwood 2011), it is necessary that μ ≪ ν; we therefore use throughout this paper the following parameters of the system: I = 0.265, α = 0.7, β = 0.75, ε = 0.08, so that (v_e, w_e) = (−1.00125, −0.401665) is the unique stable fixed point and μ/ν = 0.111059 ≪ 1. Figure 1 shows the neuron in the excitable regime. Notice that although every trajectory finally converges to the fixed point, only a small change in the location of the starting point will result in different behavior of the trajectories (see the blue and purple curves).
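These numbers can be verified directly. The sketch below assumes the classical FHN vector field f(v, w) = v − v³/3 − w + I, g(v, w) = ε(v + α − βw) (consistent with the nullclines used in this section), evaluates it at the stated fixed point, and computes μ and ν from the eigenvalues of the Jacobian.

```python
import numpy as np

I, alpha, beta, eps = 0.265, 0.7, 0.75, 0.08
ve, we = -1.00125, -0.401665

# Classical FHN vector field (assumed form, consistent with the nullclines above)
f = lambda v, w: v - v**3 / 3.0 - w + I
g = lambda v, w: eps * (v + alpha - beta * w)

# Jacobian at the fixed point: eigenvalues are -mu +/- i*nu
M = np.array([[1.0 - ve**2, -1.0],
              [eps,         -eps * beta]])
lam = np.linalg.eigvals(M)[0]
mu, nu = -lam.real, abs(lam.imag)

print(f(ve, we), g(ve, we))  # both ~ 0: (ve, we) is a fixed point
print(mu / nu)               # ~ 0.111, so mu << nu
```

The printed ratio matches the value μ/ν ≈ 0.111059 quoted above, confirming that the averaging condition μ ≪ ν holds for these parameters.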

The stochastic model
We consider the stochastic FHN model

dv_t = f(v_t, w_t) dt,
dw_t = g(v_t, w_t) dt + h(w_t) ∘ dB_t,    (3.1)

where the deterministic fields f and g are given in Eq. (2.1). There are two important cases: either h(w) = σ_0 (additive channel noise) or h(w) = σ_0 w (multiplicative channel noise). Here ∘ dB_t stands for the Stratonovich stochastic integral with respect to the Brownian motion B_t. Figure 2 shows the phase portraits of Eq. (3.1) starting with the initial condition (v_0, w_0) = (−1.00125, −0.4), which is in the vicinity of the stable fixed point. Given an initial condition close to the stable fixed point (v_e, w_e) = (−1.00125, −0.401665), the trajectory of the stochastic system might first rotate around the stable fixed point, but then the noise may trigger a spike, that is, a large excursion into the phase space, before returning to the neighbourhood of the fixed point; the process repeats itself, leading to alternations of small and large oscillations. A similar behavior can be observed when a deterministic system with an additional limit cycle is perturbed by noise (as in the bistable system of Ditlevsen and Greenwood 2013). Figure 3 shows that the spiking frequency increases as the amplitude of the noise increases. For a fixed simulation time T = 1000, the system spikes only rarely, if at all, when the amplitude σ_0 ≤ 0.005, but spikes more frequently as σ_0 increases. The behavior is similar for multiplicative noise.
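A direct Euler-Maruyama discretization illustrates this noise dependence of the spiking. For the additive-noise case the Stratonovich and Itô integrals coincide (the diffusion coefficient is constant), so no drift correction is needed; the step size and the spike-detection rule below are illustrative choices, not taken from the paper.

```python
import numpy as np

def fhn_em(sigma0=0.01, T=1000.0, dt=0.01, v0=-1.00125, w0=-0.4, seed=1):
    """Euler-Maruyama for the stochastic FHN model with additive channel noise
    on the recovery (gating) equation.  Returns the v-trace and a spike count,
    where a spike is an upward crossing of the threshold v = 0."""
    I, alpha, beta, eps = 0.265, 0.7, 0.75, 0.08
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, w = v0, w0
    spikes, above = 0, False
    vs = np.empty(n)
    for k in range(n):
        dv = (v - v**3 / 3.0 - w + I) * dt
        dw = eps * (v + alpha - beta * w) * dt \
             + sigma0 * np.sqrt(dt) * rng.standard_normal()
        v, w = v + dv, w + dw
        vs[k] = v
        if v >= 0.0 and not above:   # count each excursion once
            spikes += 1
            above = True
        elif v < -0.5:               # re-arm after returning near rest
            above = False
    return vs, spikes
```

Running this for increasing σ_0 with the same horizon reproduces the qualitative trend of Fig. 3: rare spiking for σ_0 ≤ 0.005 and increasingly frequent spiking for larger amplitudes.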
Let X = (v, w)^T and let F(X), H(X) ∈ ℝ² be the drift and diffusion coefficients of (3.1). The stochastic system is then of the form

dX_t = F(X_t) dt + H(X_t) ∘ dB_t,    (3.2)

where H(X) = (0, σ_0)^T for additive noise and

H(X) = ( 0  0
         0  σ_0 ) X =: BX

for multiplicative noise. It is easy to check that F is dissipative in the weak sense, i.e.,

⟨F(X_1) − F(X_2), X_1 − X_2⟩ ≤ C ‖X_1 − X_2‖²  for some constant C.
On the other hand, we have ‖H(X_1) − H(X_2)‖ ≤ σ_0 ‖X_1 − X_2‖ for multiplicative noise, while ‖H(X_1) − H(X_2)‖ ≡ 0 for additive noise, so H is globally Lipschitz continuous.

The existence of a random attractor
In the sequel, we are going to prove that there exists a unique solution X(·, ω, X 0 ) of (3.1) and the solution then generates a so-called random dynamical system (see e.g. Arnold 1998, Chapters 1-2).
More precisely, let (Ω, F, P) be a probability space on which our Brownian motion B_t is defined. In our setting, Ω can be chosen as C_0(ℝ, ℝ), the space of continuous real functions on ℝ which are zero at zero, equipped with the compact open topology given by uniform convergence on compact intervals in ℝ; F as B(C_0), the associated Borel σ-algebra; and P as the Wiener measure. The Brownian motion B_t can then be constructed as the canonical version B_t(ω) := ω(t).
On this probability space we construct a dynamical system θ as the Wiener shift

θ_t ω(·) := ω(t + ·) − ω(t),  t ∈ ℝ.

Then θ_t(·) : Ω → Ω satisfies the group property, i.e. θ_{t+s} = θ_t ∘ θ_s for all t, s ∈ ℝ, and is P-preserving, i.e. P(θ_t^{−1}(A)) = P(A) for every A ∈ F, t ∈ ℝ. The quadruple (Ω, F, P, (θ_t)_{t∈ℝ}) is called a metric dynamical system.
Given such a probabilistic setting, Theorem 3.1 below proves that the solution mapping ϕ : ℝ × Ω × ℝ² → ℝ² defined by ϕ(t, ω)X_0 := X(t, ω, X_0) is a random dynamical system satisfying ϕ(0, ω)X_0 = X_0 and the cocycle property

ϕ(t + s, ω) = ϕ(t, θ_s ω) ∘ ϕ(s, ω).    (3.7)

To investigate the asymptotic behavior of the system under the influence of noise, we shall first check the effect of the noise amplitude on firing. Under the stochastic scenario, the fixed point X_e = (v_e, w_e) is no longer the stationary state of the stochastic system (3.1). Instead, we need to find the global asymptotic state as a compact random set A(ω) ⊂ ℝ², depending measurably on ω ∈ Ω, such that A is invariant under ϕ, i.e. ϕ(t, ω)A(ω) = A(θ_t ω), and attracts all other compact random sets D(ω) in the pullback sense, i.e.

lim_{t→∞} d(ϕ(t, θ_{−t}ω)D(θ_{−t}ω) | A(ω)) = 0,
where d(B|A) := sup_{b∈B} inf_{a∈A} ‖b − a‖ is the Hausdorff semi-distance. Such a structure is called a random attractor (see e.g. Crauel et al. 1997 or Arnold 1998). The following theorem ensures that the stochastic system (3.1) has a global random pullback attractor. The proof is provided in the "Appendix".
Theorem 3.1 There exists a unique solution of (3.2) which generates a random dynamical system. Moreover, the system possesses a global random pullback attractor.
Theorem 3.1 shows that every trajectory converges in the long run to the global random attractor. The structure and the interior dynamics of the global random attractor remain open questions, whose study might help to understand the firing mechanism.

The normal form at the equilibrium point
One way to study the dynamics of the stochastic system (3.1) is through its linearization. Therefore, in this section, we shall study the dynamics of (3.1) in a small vicinity of the fixed point X_e = (v_e, w_e). To do that, consider the system shifted by the fixed point X_e, which has the form

dY_t = [M Y_t + N(Y_t)] dt + H(Y_t + X_e) ∘ dB_t,    (3.8)

where Y_t := X_t − X_e, M is the linearization matrix of (2.1) at X_e, and the nonlinear remainder satisfies ‖N(Y)‖ ≤ γ(‖Y‖)‖Y‖ for the increasing function γ(·) : ℝ₊ → ℝ₊, r ↦ r²/3 + |v_e| r, which implies that lim_{r→0} γ(r) = 0. Since H(X) is either a constant or a linear function, we prove below that system (3.8) can be well approximated by its linearized system

dX̄_t = M X̄_t dt + H(X̄_t + X_e) ∘ dB_t.    (3.9)

Theorem 3.2 Given ‖X_0 − X_e‖ < r and equations (3.8), (3.9), define the stopping time τ = inf{t > 0 : ‖X_t − X_e‖ ≥ r}. Then there exists a constant C independent of r such that for any t ≥ 0, the following estimates hold:
• For additive noise: sup_{t≤τ} ‖X_t − X_e − X̄_t‖ ≤ C γ(r) r.    (3.10)
• For multiplicative noise: E ‖X_{t∧τ} − X_e − X̄_{t∧τ}‖² ≤ C γ²(r) r².    (3.11)
The proof is provided in the "Appendix". In practice we can even approximate (3.8) by the following linear system with additive noise:

dX̄_t = M X̄_t dt + (0, σ_0)^T dB_t.    (3.12)

By the same arguments as in the proof of Theorem 3.2, we can prove the estimate

E ‖X_{t∧τ} − X_e − X̄_{t∧τ}‖² ≤ C r_0²,    (3.13)

for the stopping time τ = inf{t > 0 : ‖X_t − X_e‖ ≥ r_0}. Another comparison between the processes {X_t − X_e}_t and {X̄_t}_t can be obtained by using power spectral density estimation (see, for example, Fan and Yao 2003, Chapter 7). In Fig. 4, the estimated spectral densities of the shifted original process and the linearized process are plotted. The spectral densities are estimated from paths of subthreshold fluctuations from 0 to 50 ms, and scaled to have the same maximum at 40.
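A quick numerical illustration of these estimates drives the shifted nonlinear system and its linearization with the same Brownian path and records the maximal deviation between the two. The noise amplitude and horizon below are illustrative, and the bound checked at the end is an empirical sanity check, not the constant C of Theorem 3.2; for additive noise the Stratonovich and Itô formulations coincide, so a plain Euler-Maruyama scheme applies.

```python
import numpy as np

I, alpha, beta, eps, sigma0 = 0.265, 0.7, 0.75, 0.08, 0.005
ve, we = -1.00125, -0.401665
M = np.array([[1.0 - ve**2, -1.0], [eps, -eps * beta]])

dt, T = 0.01, 50.0
rng = np.random.default_rng(0)
n = int(T / dt)

Y = np.zeros(2)      # shifted nonlinear process X_t - X_e
Ybar = np.zeros(2)   # linearized process, same Brownian path
max_dev = 0.0
for _ in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    v, w = Y[0] + ve, Y[1] + we
    drift = np.array([v - v**3 / 3.0 - w + I, eps * (v + alpha - beta * w)])
    Y = Y + drift * dt + np.array([0.0, sigma0]) * dB           # full dynamics
    Ybar = Ybar + M @ Ybar * dt + np.array([0.0, sigma0]) * dB  # linearization
    max_dev = max(max_dev, np.linalg.norm(Y - Ybar))
print(max_dev)  # stays small while both paths remain near the fixed point
```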

The embedded LIF model
In this section, we present two constructive methods to obtain 1D LIF models corresponding to the stochastic FHN model (3.1) in the excitable regime. The first method follows Baxendale and Greenwood (2011) (see also Ditlevsen and Greenwood 2013) by constructing the so-called radial Ornstein-Uhlenbeck equation. More precisely, we rewrite the linearized system (3.9) in transformed coordinates Ȳ_t in which the drift matrix takes the rotation normal form associated with the eigenvalues −μ ± iν. We note that μ/ν = 0.111059 ≪ 1; therefore, by applying the time-averaging technique from (Baxendale and Greenwood 2011, Theorem 1), Ȳ_t can be approximated, up to a rotation, by an Ornstein-Uhlenbeck process, which yields the radial LIF equation (4.3).
The second method is to consider Ȳ_t in polar coordinates, with norm R̄_t := ‖Ȳ_t‖ and angle θ̄_t, where h_e = Q_0^{−1} σ_0; this yields the LIF equation (4.4). The quality of this approximation can also be tested by using power spectral density estimation (see Fig. 5).
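The construction of the transformed coordinates can be sketched numerically. Below, Q_0 is built from the real and imaginary parts of a complex eigenvector of M, which is one standard way to bring M to its rotation normal form (the paper's exact choice of Q_0 may differ, e.g. by scaling), and the radial process R̄_t = ‖Ȳ_t‖ is read off a simulated path of the linearized dynamics.

```python
import numpy as np

eps, beta, sigma0 = 0.08, 0.75, 0.005
ve = -1.00125
M = np.array([[1.0 - ve**2, -1.0],
              [eps,         -eps * beta]])

# Real normal form: if M u = (-mu + i*nu) u with u = a + i*b, then
# Q0 = [a b] gives Q0^{-1} M Q0 = [[-mu, nu], [-nu, -mu]].
lams, vecs = np.linalg.eig(M)
i = int(np.argmax(lams.imag))          # pick the eigenvalue with nu > 0
mu, nu = -lams[i].real, lams[i].imag
Q0 = np.column_stack([vecs[:, i].real, vecs[:, i].imag])
Q0inv = np.linalg.inv(Q0)
A = np.array([[-mu, nu], [-nu, -mu]])  # drift of Ybar_t in normal-form coordinates

# Radial process R_t = |Ybar_t| along one simulated path;
# h = Q0^{-1} (0, sigma0)^T plays the role of h_e (up to convention)
rng = np.random.default_rng(2)
dt, n = 0.01, 5000
Ybar = np.zeros(2)
R = np.empty(n)
h = Q0inv @ np.array([0.0, sigma0])
for k in range(n):
    Ybar = Ybar + A @ Ybar * dt + h * np.sqrt(dt) * rng.standard_normal()
    R[k] = np.linalg.norm(Ybar)
```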

Firing mechanism
A spike in Eq. (3.1) corresponds to a large excursion of the trajectory from the vicinity of X_e to the right stable branch of the critical manifold and back to the vicinity of X_e. This spike happens almost surely when a random trajectory with starting point X_0 in the vicinity of X_e crosses the threshold line v = 0. From the phase space of Eq. (3.1) (see Fig. 2), the probability of a spike increases as the starting point X_0 moves farther away from X_e.
In order to construct a firing mechanism for Eq. (4.3) matching that of Eq. (3.1), we calculate the conditional probability that Eq. (3.1) fires given that the trajectory crosses the line L = {(v_e, w) : w ≤ w_e}. Denote by L_i = (v_e, w_e − l_i) the points with l_i = iδ = i |w_e + 0.453| / 20 for i = 0, 1, . . . , 34; then the distance between the equilibrium and L_i is l_i. The value |w_e + 0.453| can be considered as the distance along L between the fixed point (v_e, w_e) and the separatrix (see also Fig. 1). For a given pair (σ_0, l_i), a short trajectory starting at L_i was simulated from (3.1), and it was recorded whether a spike occurred (a crossing of the threshold v = 0) in the first cycle of the stochastic path around (v_e, w_e). This was repeated 1000 times, and the fraction of spikes, denoted by p̂(l_i, σ_0), gives an estimate of the conditional probability of firing p(l, σ_0). The estimation was repeated for σ_0 = 0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.015.
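This Monte Carlo procedure can be sketched as follows. The run counts are reduced for speed, "one cycle" is approximated by a fixed window of length ≈ 2π/ν ≈ 22.3 time units rather than by detecting a full rotation, and the FHN vector field is the classical form assumed throughout these sketches.

```python
import numpy as np

def fhn_spike_from(w_offset, sigma0, seed, T_cycle=22.3, dt=0.01):
    """Simulate the stochastic FHN dynamics from (v_e, w_e - w_offset) on L
    and report whether v crosses the firing threshold 0 within ~one cycle
    (T_cycle ~ 2*pi/nu with nu ~ 0.2814)."""
    I, alpha, beta, eps = 0.265, 0.7, 0.75, 0.08
    ve, we = -1.00125, -0.401665
    rng = np.random.default_rng(seed)
    v, w = ve, we - w_offset
    for _ in range(int(T_cycle / dt)):
        dv = (v - v**3 / 3.0 - w + I) * dt
        dw = eps * (v + alpha - beta * w) * dt \
             + sigma0 * np.sqrt(dt) * rng.standard_normal()
        v, w = v + dv, w + dw
        if v >= 0.0:
            return True
    return False

def p_hat(l, sigma0, n_runs=200):
    """Monte Carlo estimate of the conditional firing probability p(l, sigma0)."""
    return sum(fhn_spike_from(l, sigma0, seed=s) for s in range(n_runs)) / n_runs
```

For small σ_0, the estimate is near 0 when starting at the fixed point (l = 0) and near 1 when starting well beyond the separatrix (l ≈ 0.1).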
From the numerical simulation, for each σ_0, the estimate of the conditional probability is close to zero when we start in the immediate neighborhood of the stable fixed point and close to one when we start at L_34, i.e., sufficiently far from the fixed point. These estimates appear to depend in a sigmoidal way on the distance from the stable fixed point. Therefore, we assumed the conditional probability of firing to be of the form

p(l, σ_0) = (1 + exp(−(l − a)/b))^{−1}.    (4.5)

The parameters a and b are then estimated by non-linear regression from the above simulation data and are plotted in Fig. 6 for several values of the noise amplitude, σ_0 = 0.003, 0.005, 0.007, 0.009, 0.01, and 0.015. We see that the fitted curves match the family of estimates p̂ quite well for each value of σ_0. The regression estimates are given in Table 1. Note that p(a) = 1/2, i.e., a is the distance along L from w_e at which the conditional probability of firing equals one half. For all values of σ_0, the estimate of a is close to the distance along L between w_e and the separatrix, which equals 0.05. In other words, the probability of firing, if the path starts at the intersection of L with the separatrix, is about 1/2. The estimate of b increases with σ_0, and the conditional probability approaches a step function as the amplitude of the noise goes to zero. A step function would correspond to the firing being represented by a first-passage time of a fixed threshold.
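The sigmoidal regression can be illustrated on synthetic data. The parametrization below satisfies p(a) = 1/2 and steepens as b decreases; the data and the grid-search least-squares fit are purely illustrative stand-ins for the paper's simulation estimates and non-linear regression.

```python
import numpy as np

def p_sigmoid(l, a, b):
    """Sigmoidal conditional firing probability with p(a) = 1/2."""
    return 1.0 / (1.0 + np.exp(-(l - a) / b))

# Synthetic stand-in for the Monte Carlo estimates p_hat(l_i, sigma_0)
rng = np.random.default_rng(3)
a_true, b_true = 0.05, 0.01
ls = np.linspace(0.0, 0.0875, 35)                 # grid of distances l_i along L
p_obs = p_sigmoid(ls, a_true, b_true) + 0.02 * rng.standard_normal(35)

# Least-squares fit by grid search (illustrative, not the paper's method)
aa, bb = np.meshgrid(np.linspace(0.02, 0.08, 121), np.linspace(0.002, 0.03, 141))
sse = ((p_sigmoid(ls[None, None, :], aa[..., None], bb[..., None]) - p_obs) ** 2).sum(-1)
j = np.unravel_index(np.argmin(sse), sse.shape)
a_fit, b_fit = aa[j], bb[j]
```

By construction, the fitted curve passes through one half at l = a_fit, mirroring the interpretation of a as the half-probability distance.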
To simplify calculations we will work in the transformed coordinates Ȳ_t. The distance l between (0, l) and (0, 0) in the X̄_t coordinates then transforms to the corresponding distance in the Ȳ_t coordinates.

Table 1: Estimates of regression parameters for the conditional probability of firing in the original space.

ISI distributions
The comparison of the original stochastic FHN model (3.1) and the two LIF models (4.3) and (4.4) can be performed by studying the ISI statistics. Namely, one first simulates the trajectories of the system (3.1) with starting points X 0 close to the fixed point X e until the first spiking time, and thereafter resets to the starting points. Due to Theorem 3.2, we can simplify the simulation by choosing the starting point at exactly X e . This was done 1000 times, and the time of the first firing was recorded. A histogram for this data is shown in Fig. 7. The ISI-distribution of Eq. (4.3) is computed as follows (the ISI-distribution of Eq. (4.4) is computed similarly). Let τ 1 be the first firing time.
We computed the density of the distribution of τ_1 in terms of the conditional hazard rate α(r, t) (Ditlevsen and Greenwood 2013). This function is the density of the conditional probability of a spike occurring in the next small time interval, given that the position on L is r at time t and that a spike has not yet occurred.
Notice that the estimated conditional probability of firing (4.6) is calculated over one cycle of the process, which on average takes 2π/ν time units. Therefore, we estimate the hazard rate as

α(r, t) = α(r) = (ν/2π) p(r, σ_0).

On the other hand, from standard results of survival analysis (see e.g. Aalen and Borgan 2008), we know that the density of the firing time can be calculated as

f(t) = E[ α(R_t) exp(−∫_0^t α(R_s) ds) ].    (4.8)

Due to the law of large numbers, for fixed t, we can numerically determine the density (4.8) up to any desired precision by choosing n and M large enough through the expression

f(t) ≈ (1/M) Σ_{j=1}^M α(R_t^{(j)}) exp(−∫_0^t α(R_s^{(j)}) ds),

where R^{(1)}, . . . , R^{(M)} are independently simulated paths, the time interval is discretized into n points, and the integral has been approximated by the trapezoidal rule. The results are illustrated in Fig. 7 for σ_0 = 0.01, using M = 1000, n = 10. The estimated ISI distributions from our approximate LIF models (4.3) and (4.4) with the firing mechanism compare well with the estimated ISI histogram of the FHN model (3.1), reset to 0 after firings.
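The Monte Carlo evaluation of this density can be sketched end-to-end: simulate M radial paths from the linearized dynamics, apply the hazard α(r) = (ν/2π)p(r) with a sigmoidal p, and average α(R_t) times the survival factor obtained by trapezoidal integration of the hazard along each path. All numerical choices below (a, b, step size, horizon) are illustrative, not the paper's values.

```python
import numpy as np

def isi_density(sigma0=0.01, a=0.05, b=0.01, T=200.0, dt=0.05, M=200, seed=4):
    """Monte Carlo estimate of the ISI density
        f(t) = E[ alpha(R_t) * exp(-int_0^t alpha(R_s) ds) ],
    with hazard alpha(r) = (nu / (2*pi)) * p(r) and sigmoidal p.
    R_t is the norm of a linearized FHN path; a, b are illustrative."""
    eps, beta, ve = 0.08, 0.75, -1.00125
    Mjac = np.array([[1.0 - ve**2, -1.0], [eps, -eps * beta]])
    nu = abs(np.linalg.eigvals(Mjac)[0].imag)
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    alpha_vals = np.empty((M, n))
    for m in range(M):
        Y = np.zeros(2)
        for k in range(n):
            dB = np.sqrt(dt) * rng.standard_normal()
            Y = Y + Mjac @ Y * dt + np.array([0.0, sigma0]) * dB
            r = np.linalg.norm(Y)
            alpha_vals[m, k] = (nu / (2 * np.pi)) / (1.0 + np.exp(-(r - a) / b))
    # cumulative trapezoidal integral of the hazard along each path
    cumhaz = np.concatenate(
        [np.zeros((M, 1)),
         np.cumsum(0.5 * dt * (alpha_vals[:, 1:] + alpha_vals[:, :-1]), axis=1)],
        axis=1)
    f = (alpha_vals * np.exp(-cumhaz)).mean(axis=0)
    return np.arange(1, n + 1) * dt, f
```

Since f is the density of a defective first-passage distribution on a finite horizon, its numerical integral over [0, T] should not exceed one (up to discretization error).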

System (3.2) is then transformed to
Hence, by the comparison principle, the distance of the solution from η_t is bounded by the solution of the scalar equation (5.2), which can be computed explicitly. It is then easy to check that the vector field in (5.1) satisfies the local Lipschitz property and the solution is bounded, and thus of linear growth, on any fixed interval [0, T]; see e.g. Schenk-Hoppé (1996). Hence there exists a unique solution of (5.1) for any initial condition, which also proves the existence and uniqueness of the solution of (3.2). The cocycle property (3.7) follows automatically from (Arnold 1998, Chapter 2). A direct computation shows that there exists a random radius R*(ω), the stationary solution of (5.2), such that X_t(ω, X_0) ∈ B(η_t, R*(θ_t ω)) whenever X_0 ∈ B(η_0, R*(ω)), by the comparison principle, and furthermore, the lim sup of the pullback solution is bounded by R*(ω). Hence the random ball B(η, R*) is a forward invariant pullback absorbing set of the random dynamical system ϕ(t, ω) generated by (3.2). By the classical theorem of Crauel et al. (1997), there exists a global random pullback attractor for (3.2).
• Multiplicative noise: In this case, we introduce the random transformation T(z) given in (5.3). Hence, by the comparison principle, ‖Y_t‖² ≤ R_t whenever ‖Y_0‖² ≤ R_0, where R_t is the solution of a scalar comparison equation that can be solved explicitly. Using similar arguments as in the additive noise case, there exists a unique solution of (5.5) and of (3.2). Also, the solution generates a random dynamical system. On the other hand, observe that by the Birkhoff ergodic theorem, there exists almost surely a stationary random radius R̂(ω). Moreover, ‖Y_t(ω, Y_0)‖² ≤ R̂(θ_t ω) whenever ‖Y_0‖² ≤ R̂(ω), and lim sup_{t→∞} ‖Y_t(θ_{−t}ω, Y_0)‖² ≤ lim sup_{t→∞} R_t(θ_{−t}ω, R_0) = R̂(ω).
Hence, the ball B(0, R̂(ω)) is forward invariant under the random dynamical system generated by (5.5) and is also a pullback absorbing set. Again, by applying Crauel et al. (1997), there exists a random attractor for (5.5). Due to the fact that z_t is the stationary solution of (5.4), it is easy to see that the random linear transformation T(z) given in (5.3) is tempered (see Arnold 1998, pp. 164, 386), i.e.

lim_{t→±∞} (1/|t|) log(1 + 2σ_0 |z_t|) = 0.
Therefore, it follows from Imkeller and Schmalfuss (2001) that the original system (3.2) also possesses a random pullback attractor. Define the difference Z_t := Y_t − Ȳ_t; then Z_t satisfies a differential equation driven by the nonlinear remainder. We analyze the two noise cases separately.