Hydrodynamic Limit for Interacting Neurons

Journal of Statistical Physics 158, 866–902 (2015)

Abstract

This paper studies the hydrodynamic limit of a stochastic process describing the time evolution of a system of N neurons with mean-field interactions produced both by chemical and by electrical synapses. This system can be informally described as follows. Each neuron spikes randomly following a point process with rate depending on its membrane potential. At its spiking time, the membrane potential of the spiking neuron is reset to the value 0 and, simultaneously, the membrane potentials of the other neurons are increased by an amount of potential \(\frac{1}{N} \). This mimics the effect of chemical synapses. Additionally, the effect of electrical synapses is represented by a deterministic drift of all the membrane potentials towards the average value of the system. We show that, as the system size N diverges, the distribution of membrane potentials becomes deterministic and is described by a limit density which obeys a nonlinear PDE, a conservation law of hyperbolic type.
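The finite-\(N\) dynamics described in the abstract is easy to simulate. The following is a minimal sketch (ours, not from the paper), assuming an illustrative rate function \(f(u)=u\), a uniform initial condition and a simple Euler discretization:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=1000, T=10.0, dt=1e-3, lam=1.0, f=lambda u: u):
    """Euler sketch of the N-neuron mean-field dynamics."""
    U = rng.uniform(0.0, 1.0, size=N)   # membrane potentials
    for _ in range(int(T / dt)):
        # electrical synapses: drift of every potential towards the mean
        U += lam * (U.mean() - U) * dt
        # each neuron spikes during [t, t+dt] with probability ~ f(U_i) dt
        spikes = rng.random(N) < f(U) * dt
        k = int(spikes.sum())
        if k:
            U += k / N       # chemical synapses: every spike adds 1/N ...
            U[spikes] = 0.0  # ... and the spiking neurons are reset to 0
            # (simultaneous spikes within one dt are an O(dt^2) event)
    return U

# for large N, the empirical distribution of the returned potentials
# approximates the deterministic limit density of the theorem
potentials = simulate()
```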



Acknowledgments

We thank M. Cassandro and D. Gabrielli for their collaborative participation in the first stage of this work. Lemma 2 emerged from discussions of EL with Nicolas Fournier. We also thank B. Cessac, P. Dai Pra and C. Vargas for many illuminating discussions. This article was produced as part of the activities of FAPESP Research, Dissemination and Innovation Center for Neuromathematics (Grant 2013/07699-0, S. Paulo Research Foundation). This work is part of USP project “Mathematics, computation, language and the brain”, USP/COFECUB project “Stochastic systems with interactions of variable range” and CNPq project “Stochastic modeling of the brain activity” (Grant 480108/2012-9). ADM is partially supported by PRIN 2009 (prot. 2009TA2595-002). AG is partially supported by a CNPq fellowship (Grant 309501/2011-3). AG and EL thank GSSI for hospitality and support.

Author information


Corresponding author

Correspondence to E. Löcherbach.

Appendices

Appendix 1: Proof of Theorem 1

We are working at fixed N and therefore drop the superscript N from \(U^N .\) Recall that the average potential of configuration \(U(t) \) is given by \( \bar{U}_N(t) = \frac{1}{N} \sum _{i=1}^N U_i(t) \) and let

$$\begin{aligned} K (t) = \sum _{ i = 1 }^N \int _0^t 1_{ \{ U_i (s-) \le 2 \} } d N_i ( s) \end{aligned}$$
(8.70)

be the total number of firings in \([0,t]\) of neurons whose potential just before firing is \(\le 2 .\) Recall (2.3). Throughout, \(N(t) = \sum _{i=1}^N \int _0^t d N_i (s)\) denotes the total number of fires in \([0,t].\) The key element of our proof is the following lemma, which emerged from discussions with Nicolas Fournier.

Lemma 2

We have

$$\begin{aligned} \bar{U}_N(t) \le \bar{U}_N(0) + \frac{K(t)}{N} , \quad \frac{N(t)}{N} \le \bar{U}_N ( 0) + 2 \frac{K(t)}{N} \end{aligned}$$
(8.71)

and

$$\begin{aligned} \Vert U(t)\Vert \le 2 \Vert U (0)\Vert + 2 \frac{K(t)}{N} . \end{aligned}$$
(8.72)

Proof

Suppose neuron \(i\) fires at time \(t\); then

$$\begin{aligned} \bar{U}_N(t)= \frac{1}{N} \sum _{j\ne i} \left( U_j(t-) + \frac{1}{N}\right) = \bar{U}_N(t-) + \frac{N-1}{N^2} - \frac{U_i(t-)}{N} . \end{aligned}$$

Thus the average potential decreases at any firing with \(U_i(t-)\ge 1\) (and a fortiori with \( U_i ( t- ) \ge 2 \)); it can only increase at firings with \(U_i(t-) < 1\), each of which increases it by at most \(1/N\) and is counted in \(K(t)\). This implies the first assertion of (8.71). Concerning the second assertion of (8.71), we start with

$$\begin{aligned} \bar{U}_N (t) = \bar{U}_N ( 0 ) + \frac{1}{N} \sum _{i=1}^N \int _0^t \left( \frac{N-1}{N} - U_i (s-) \right) d N_i (s) , \end{aligned}$$

which implies, since \( \bar{U}_N ( t) \ge 0 \) and \( \frac{N-1}{N} \le 1 , \) that

$$\begin{aligned} \frac{1}{N} \sum _{i=1}^N \int _0^t ( U_i ( s- ) - 1 ) d N_i (s) \le \bar{U}_N ( 0 ) . \end{aligned}$$

We use that \( x- 1 \ge \frac{x}{2} 1_{\{ x \ge 2 \}} - 1_{\{ x \le 1 \}} \) and obtain from this that

$$\begin{aligned} \frac{1}{N} \sum _{i=1}^N \int _0^t \frac{U_i ( s- ) }{2} 1_{\{ U_i ( s-) \ge 2 \}} d N_i ( s)&\le \bar{U}_N ( 0) + \frac{1}{N} \sum _{i=1}^N \int _0^t 1_{\{ U_i ( s-) \le 1 \}} d N_i ( s) \\&\le \bar{U}_N ( 0) + \frac{K(t)}{N} . \end{aligned}$$

Observing that \( 1 \le \frac{x}{2} 1_{\{ x \ge 2\}} + 1_{\{x \le 2\}} , \) we deduce from the above that

$$\begin{aligned} \frac{N(t)}{N} = \frac{1}{N} \sum _{i=1}^N \int _0^t d N_i ( s) \le \bar{U}_N (0) + \frac{K(t)}{N} + \frac{K(t)}{N} , \end{aligned}$$

implying the second assertion of (8.71).
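Both elementary inequalities used above can be sanity-checked numerically over a grid of nonnegative potentials; a quick sketch (ours):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 100001)  # membrane potentials are nonnegative
# x - 1 >= (x/2) 1_{x>=2} - 1_{x<=1}
assert (x - 1 >= (x / 2) * (x >= 2) - 1.0 * (x <= 1)).all()
# 1 <= (x/2) 1_{x>=2} + 1_{x<=2}
assert (1.0 <= (x / 2) * (x >= 2) + 1.0 * (x <= 2)).all()
```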

Since between successive jumps the largest \(U_i(t)\) is attracted towards the average potential, we can upper bound its position by neglecting the action of the gap junction, implying that

$$\begin{aligned} \Vert U(t) \Vert \le \Vert U(0) \Vert + \frac{ N(t) }{N} , \end{aligned}$$

which, together with (8.71), gives (8.72), since \(\bar{U}_{N} (0) \le \Vert U (0) \Vert .\) \(\square \)

Proof of Theorem 1

By (8.72) we have

$$\begin{aligned} \Vert U (t) \Vert \le 2 \Vert U(0) \Vert + 2 \frac{ K(T) }{N} , \end{aligned}$$

for all \( t \le T .\) But the process \(K (t) \) is stochastically bounded by a Poisson process with intensity \( N f(2) .\) Therefore, there exist positive constants \(\kappa \) and \(C\) such that

$$\begin{aligned} P \Big [ K (T) \le \kappa N \Big ] \ge 1 - e^{ - C N T } . \end{aligned}$$

This implies (2.4). Finally, notice that the above arguments implicitly prove the existence of the process \(U(t)\): once we know that the number of jumps of the process is almost surely finite on any finite time interval, the process can be constructed explicitly by piecing together trajectories of the deterministic flow between successive jump times. \(\square \)
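For completeness, here is one standard route to the exponential estimate used above; the exponential-moment computation and the choice of constants are ours. If \(\Pi \) is a Poisson variable with mean \(N f(2) T\), then for every \(\theta > 0\),

$$\begin{aligned} P \big [ \Pi \ge \kappa N \big ] \le e^{ - \theta \kappa N } E \big [ e^{ \theta \Pi } \big ] = \exp \Big \{ - N \big ( \theta \kappa - f(2) T ( e^{\theta } - 1 ) \big ) \Big \} , \end{aligned}$$

and choosing for instance \(\theta = 1\) and \(\kappa = 2 (e-1) f(2) T\) gives \( P [ \Pi \ge \kappa N ] \le e^{ - (e-1) f(2) N T } ,\) a bound of the stated form.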

Appendix 2: Proof of Theorem 4

In what follows, \(C\) is a constant which may change from one appearance to another.

1.1 The Stopped Process

A technical difficulty in the proof of Theorem 4 comes from the possible occurrence of an anomalously large number of fires in one of the time steps \([(n-1)\delta ,n\delta ]\). To avoid this problem, we stop the process as soon as this happens and prove the theorem for the stopped process. We then conclude by a large deviation estimate for the probability that the process is stopped before reaching the final time \(T\).

Recalling from Propositions 1 and 3 that the number of fires in an interval \([(n-1)\delta ,n\delta ]\) in either one of the two processes is stochastically bounded by a Poisson variable of intensity \(f^* \delta N\), we stop the algorithm defining the coupled process as soon as the number of fires in either one of the two processes exceeds \(2f^*\delta N\) in one of the time steps \([(n-1)\delta ,n\delta ]\). We call \(E\) the event that the process is stopped before reaching the final time. Then, uniformly in the initial datum \( Y^{(\delta )} (0) = U(0 ) =x\),

$$\begin{aligned} P_x (E) \le 2 \frac{T}{\delta } e^{-C N\delta } . \end{aligned}$$
(8.73)
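A sketch of where (8.73) comes from, assuming only the Poisson domination just recalled (the optimization is ours): if \(\Pi \) is Poisson with mean \(f^* \delta N\), the exponential Chebyshev inequality with the optimal choice \(\theta = \log 2\) gives

$$\begin{aligned} P \big [ \Pi \ge 2 f^* \delta N \big ] \le \inf _{\theta > 0 } e^{ - 2 \theta f^* \delta N } E \big [ e^{ \theta \Pi } \big ] = e^{ - ( 2 \log 2 - 1 ) f^* \delta N } , \end{aligned}$$

and a union bound over the \(T/\delta \) time steps and the two processes yields (8.73) with \(C = (2 \log 2 - 1) f^*\).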

By an abuse of notation we denote the stopped processes by the same symbols; in the sequel, unless otherwise stated, we always refer to the stopped processes. We fix an arbitrary \(A>0\) and consider the process starting from \( Y^{(\delta )} (0) = U(0 ) = x \) with \(\Vert x\Vert \le A .\)

Writing \(B^* := B + A + 2 f^* T,\) we have by (3.21) for all \(t\le T\) and all \(n\delta \le T\)

$$\begin{aligned} \Vert U(t)\Vert \le B^*,\;\; \Vert Y^{(\delta )}(n\delta )\Vert \le B^* \quad \text{ for } \text{ the } \text{ stopped } \text{ processes. } \end{aligned}$$
(8.74)

It follows that the same bounds hold for the unstopped process with probability \(\ge 1 - e^{-CN\delta }\).

Thus by restricting to the stopped process we have

  • The firing rate of each neuron is \(\le f^*\) and the number of fires of all neurons in any of the steps \([(n-1)\delta ,n\delta ]\) is \(\le 2f^*\delta N\).

  • The bounds (8.74) are verified and, as a consequence, the average potentials in the \(U\) and \(Y^{(\delta )}\) processes are \(\le B^*\), so that the gap-junction drift on each neuron is \(\le {\lambda }B^*\).

1.2 Bounds on the Increments of \(M_n\)

We write \( M_n = M_{ n- 1 } + | A_n \cap \mathcal G_{n-1} | + | B_n \cap \mathcal G_{n-1} | \le M_{ n- 1 } + | A_n | + | B_n \cap \mathcal G_{n-1} |\) where, recalling the algorithm given in Sect. 5.2 and Definition 2:

  • \(A_n\) is the set of all labels \(i\) for which a clock associated to label \(i\) rings at least twice during \( [ (n- 1) \delta , n \delta ] .\)

  • \(B_n\) is the set of all labels \(i\) for which a clock associated to label \(i\) rings only once during \( [ (n- 1) \delta , n \delta ] ,\) and it is the clock \(\tau _i^{2}.\)

We shall prove that (for the stopped processes)

$$\begin{aligned}&P \Big [ |A_n | > N (\delta f^* )^2 \Big ] \le e^{ - C N \delta ^2} , \end{aligned}$$
(8.75)
$$\begin{aligned}&P \Big [ | B_n \cap \mathcal G_{n-1} | > 2 C N \delta \left[ \theta _{n-1} + \delta \right] \Big ] \le e^{ - C N \delta ^2 } \end{aligned}$$
(8.76)

(recall \(C\) is a constant whose value may change at each appearance).

It will then follow that with probability \(\ge 1 - 2 e^{ - C N \delta ^2 } \)

$$\begin{aligned} M_n \le M_{n-1} + N (\delta f^* )^2 + 2 C N \delta \left[ \theta _{n-1} + \delta \right] \le M_{n-1} + C N \delta \left[ \theta _{n-1} + \delta \right] . \end{aligned}$$
(8.77)

Iterating the upper bound and using that \( n \delta \le T,\) we will then conclude that with probability \(\ge 1 - 2n e^{ - C N \delta ^2 } \ge 1 - \frac{C}{\delta } e^{ - C N \delta ^2 },\) where \(C\) depends on \(T\),

$$\begin{aligned} \frac{M_n}{N} \le C \delta \sum _{k=1}^{n-1} \theta _k + C \delta \le C (\theta _{n-1} + \delta ) \text{ for } \text{ all } n \le \frac{T}{\delta } , \end{aligned}$$
(8.78)

having used that, by definition, \(\theta _k \le \theta _{n-1} .\)

Proof of (8.75).

\( |A_n |\) is stochastically upper bounded by \(S^*:= \sum _{ i= 1}^N 1_{ \{ N_i^* \ge 2 \} } ,\) where \( N_1^*, \ldots , N_N^* \) are independent Poisson variables of parameter \(f^*\delta \), with \(f^* = \Vert f \Vert _\infty .\) We write \(p^* = P ( N_i^* \ge 2 ) \) and have

$$\begin{aligned} e^{ - \delta f^* } \frac{1}{2} \delta ^2 (f^*)^2 \le p^* \le \frac{1}{2} (\delta f^*)^2,\quad p^* \approx \frac{ 1}{2 }\,(\delta f^*)^2 \text{ as } \delta \rightarrow 0 . \end{aligned}$$

\(S^*\) is a sum of \(N\) independent Bernoulli variables, each with mean \(p^*\). Then, by Hoeffding's inequality, we get (8.75).
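The rate \(N \delta ^2\) in (8.75) is what the multiplicative (relative entropy) form of the Chernoff–Hoeffding bound delivers; a sketch of the computation (ours): with \(a = (\delta f^*)^2\),

$$\begin{aligned} P \big [ S^* \ge N a \big ] \le e^{ - N H ( a \Vert p^* ) } , \qquad H ( a \Vert p ) = a \log \frac{a}{p} + (1-a) \log \frac{1-a}{1-p} , \end{aligned}$$

and since \( p \mapsto H(a \Vert p) \) is nonincreasing for \(p \le a\) while \(p^* \le a/2 ,\)

$$\begin{aligned} H ( a \Vert p^* ) \ge H ( a \Vert a/2 ) = a \log 2 + (1-a) \log \frac{1-a}{1-a/2} = a \Big ( \log 2 - \frac{1}{2} \Big ) + O(a^2) \ge C \delta ^2 \end{aligned}$$

for \(\delta \) small.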

Proof of (8.76).

We shall prove that the random variable \( |B_n \cap \mathcal G_{n-1} | \) (for the stopped process) is stochastically upper bounded by \(\sum _{i= 1}^N \mathbf 1_{\{\bar{N}_i \ge 1\}}, \) where \( \bar{N}_i, i = 1 , \ldots , N , \) are independent Poisson variables of parameter \( C ( \theta _{n-1} + \delta ) \delta .\) (8.76) will then follow directly.

We abbreviate

$$\begin{aligned}&y:= Y^{(\delta )} ( (n-1) \delta ), \;x:= U ( (n-1) \delta ),\;\; y(\delta ):= Y^{(\delta )} ( n\delta ),\\&x(t):= U ( (n-1) \delta +t ), t\in [0,\delta ] , \end{aligned}$$

and introduce independent random times \(\tau _i^2\), \(i=1,\ldots ,N\), of intensity \(|f(y_i)- f(x_i(t))|\), \(t\in [0,\delta [\). Then \(|B_n|\) is stochastically bounded by \(\sum _{i=1}^N \mathbf 1_{\{ \tau _i^2<\delta \}}\) because we are neglecting some of the conditions for being in \(B_n\). We also obviously have

$$\begin{aligned} |B_n \cap \mathcal G_{n-1}| \le \sum _{i=1}^N \mathbf 1_{\{\tau _i^2<\delta , i\in \mathcal G_{n-1}\}} \quad \text {stochastically.} \end{aligned}$$

To control the right hand side we bound

$$\begin{aligned} |f(x_i(t)) - f(y_i) | \le \Vert f \Vert _{Lip} |x_i(t)-y_i|, \end{aligned}$$

\( \Vert f \Vert _{Lip} \) denoting the Lipschitz constant of the function \(f.\) Denote by \(N_j(s,t)\) the number of spikes of \(U_j(\cdot )\) in the time interval \([s,t]\); then, analogously to (5.26),

$$\begin{aligned} |x_i (t) - y_i | \le | x_i - y_i | + \int _0^t \lambda e^{ - \lambda ( t- s) } |\bar{x} (s) - x_i | \, ds + \frac{1}{ N} \sum _{ j \ne i } N_j \big( (n-1)\delta ,(n-1)\delta + t \big) . \end{aligned}$$

We have \( | \bar{x} (s) - x_i | \le B^* \) and \(\sum _{ j \ne i } N_j \big( (n-1)\delta ,(n-1)\delta + t \big)\le 2 f^* \delta N\) because we are considering the stopped process. Then, if \( i \in \mathcal G_{n-1} \),

$$\begin{aligned} |x_i (t) - y_i | \le \theta _{n-1} + B^* \delta + 2 f^* \delta \end{aligned}$$

and therefore

$$\begin{aligned} |f(x_i(t)) - f(y_i) | \le \Vert f \Vert _{Lip} \left( \theta _{n-1} + B^* \delta + 2 f^* \delta \right) \le C ( \theta _{n-1} + \delta ) \end{aligned}$$

so that

$$\begin{aligned} \sum _{i=1}^N \mathbf 1_{\{\tau _i^2<\delta , i\in \mathcal G_{n-1}\} } \le \sum _{i=1}^N \mathbf 1_{\{ \bar{N}_i \ge 1 \}} \quad \text {stochastically,} \end{aligned}$$

where the \(\bar{N}_i\) are independent Poisson random variables of parameter \(C ( \theta _{n-1} + \delta )\delta \). This proves (8.76).

1.3 Bounds on \(\theta _n\)

The final bound on \(\theta _n\) is reported in (8.87) at the end of this subsection. We start by characterizing the elements \(i \in \mathcal{G}_{n}\) as \(i \in \mathcal{G}_{n- 1} \cap (C_n \cup F_n )\) where:

  1. \(C_n\) is the set of all labels \(i\) for which a clock associated to label \(i\) rings only once during \( [ (n- 1) \delta , n \delta ] ,\) and it is a clock \(\tau _i^{1}\).

  2. \(F_n\) is the set of all labels \(i\) which do not have any jump during \( [ (n- 1) \delta , n \delta ] .\)

In other words, we study labels \(i\) which are good at time \( (n-1) \delta \) and which stay good at time \( n \delta \) as well. We shall use in the proofs the following formula for the potential \(U_i(t)\) of a neuron which does not fire in the interval \([t_0,t]\):

$$\begin{aligned} U_i(t) = e^{-{\lambda }(t-t_0)}U_i(t_0) + \int _{t_0}^t \lambda e^{-{\lambda }(t-s)}\{\bar{U}(s) ds + \frac{1}{{\lambda }N} dN(s)\} , \end{aligned}$$
(8.79)

\(N(t)\) denoting the total number of fires till time \(t\). For the \(Y^{(\delta )}\) process we shall instead use (5.26) and the expressions thereafter.
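For completeness, one can check (8.79) by variation of constants (our rewriting): as long as neuron \(i\) does not fire, its potential follows the flow \(\dot{U}_i = - \lambda ( U_i - \bar{U} )\) and receives a kick \(1/N\) at each fire of the other neurons, so that

$$\begin{aligned} d \big ( e^{ \lambda t } U_i (t) \big ) = e^{ \lambda t } \Big \{ \lambda \bar{U} (t) \, dt + \frac{1}{N} \, d N (t) \Big \} ; \end{aligned}$$

integrating over \([t_0, t]\) and multiplying by \(e^{ - \lambda t }\) gives (8.79).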

  • Labels \(i\in C_n \cap \mathcal{G}_{n- 1}\).

For such a label \(i\) there is a random time \((n-1)\delta + t\), with \( t \in [0,\delta [\), at which a \(\tau _i^{1 } \) event happens. Then by (8.79), with time measured from \((n-1)\delta \),

$$\begin{aligned} U_i (n \delta ) = \int _t^{ \delta } \lambda e^{ - \lambda ( \delta - s)} \bar{U}_N (s) ds + e^{ - \lambda \delta }\frac{1}{N} \int _t^\delta e^{\lambda s} d N (s) , \end{aligned}$$

because \(U_i\) is reset to \(0\) at the firing time. Since we are considering the stopped process, \(\bar{U} (\cdot ) \le B^*\) and \(N(n\delta )-N((n-1)\delta ) \le 2f^*\delta N\), so that \(U_i (n \delta ) \le C \delta \). In the same way, \( Y^{(\delta )}_i (n \delta ) \le C \delta ,\) and therefore

$$\begin{aligned} D_i (n ) = |U_i ( n \delta ) - Y^{(\delta )}_i ( n \delta ) | \le C \delta ,\quad \text {for the stopped process} . \end{aligned}$$
(8.80)

Notice that the bound does not depend on \(D_i(n-1)\).

  • Labels \(i\in F_n \cap \mathcal{G}_{n- 1}\).

This means that \(i\) is good at time \( (n-1) \delta \) and jumps neither in the \(U\) nor in the \(Y^{(\delta )}\) process. Let \( U( (n-1) \delta ) = x \) and \( Y^{(\delta )} ( (n-1) \delta ) = y.\) By (8.79) and (5.27), \(D_i(n) = |U_i ( n \delta ) - Y^{(\delta )}_i ( n \delta ) |\) is bounded by

$$\begin{aligned} D_i (n)&\le e^{-\lambda \delta }| x_i - y_i | + (1 - e^{-{\lambda }\delta }) |\bar{x}-\bar{y}| + \int _{(n-1)\delta }^{n\delta } \lambda e^{- \lambda (n\delta -t )} \big|\bar{U}_N (t) - \bar{U}_N((n-1)\delta )\big| \, dt \\&\quad + \frac{1}{N} \Big |\int _{(n-1)\delta }^{n\delta } e^{-{\lambda }(n\delta -t)} dN(t) - Nq\Big | \end{aligned}$$
(8.81)

where \(Nq\) is the total number of fires in the process \(Y^{(\delta )}\) in the step from \((n-1)\delta \) to \(n\delta \).

We bound the right hand side of (8.81) as follows. We have \(e^{-\lambda \delta }| x_i - y_i | \le \theta _{n-1}.\) Moreover,

$$\begin{aligned} (1 - e^{-{\lambda }\delta }) |\bar{x}-\bar{y}| \le {\lambda }\delta \Big (\theta _{n-1} + B^{*}\frac{M_{n-1}}{N}\Big ),\quad B^{*} \text{ as in } (8.74) , \end{aligned}$$

and

$$\begin{aligned} \int _{(n-1)\delta }^{n\delta } \lambda e^{- \lambda (n\delta -t )} \big|\bar{U}_N (t) - \bar{U}_N((n-1)\delta )\big| \, dt \le {\lambda }\delta \frac{1}{N} N((n-1)\delta ,n\delta ) , \end{aligned}$$

where \(N((n-1)\delta ,n\delta )= N(n\delta )-N((n-1)\delta )\). Writing

$$\begin{aligned} \int _{(n-1)\delta }^{n\delta } e^{-{\lambda }(n\delta -t)} dN(t) = N((n-1)\delta ,n\delta ) + \int _{(n-1)\delta }^{n\delta } \{ e^{-{\lambda }(n\delta -t)}-1\} dN(t) , \end{aligned}$$

we bound the last term on the right hand side of (8.81) as

$$\begin{aligned} \frac{1}{N} |\int _{(n-1)\delta }^{n\delta } e^{-{\lambda }(n\delta -t)} dN(t) - Nq| \le \frac{1}{N}\Big ( |N((n-1)\delta ,n\delta )-Nq| + {\lambda }\delta N((n-1)\delta ,n\delta )\Big ) . \end{aligned}$$

Collecting all these bounds we then get

$$\begin{aligned} D_i (n)&\le \theta _{n-1}(1+{\lambda }\delta ) + {\lambda }\delta B^*\frac{M_{n-1}}{N}+ 2{\lambda }\delta \frac{1}{N} N((n-1)\delta ,n\delta ) \\&\quad +\,\,\frac{1}{N} |N((n-1)\delta ,n\delta )-Nq| , \end{aligned}$$

and since we are considering the stopped process

$$\begin{aligned} D_i (n) \le \theta _{n-1}(1+{\lambda }\delta ) + {\lambda }\delta B^* \frac{M_{n-1}}{N}+ 2{\lambda }\delta \cdot 2f^*\delta +\frac{1}{N} |N((n-1)\delta ,n\delta )-Nq| . \end{aligned}$$
(8.82)

By the definition of the sets \(A_n ,\ldots , F_n\) we have

$$\begin{aligned} |N((n-1)\delta ,n\delta ) -Nq| \le \sum _{j\in A_n} N_j((n-1)\delta ,n\delta ) +|B_n| . \end{aligned}$$
(8.83)

What follows is devoted to the control of the rhs of (8.83). We start with \(|B_n|.\) With probability \(\ge 1- e^{-C\delta ^2N}\)

$$\begin{aligned} |B_n| \le |B_n \cap \mathcal G_{n-1}|+ |B_n \cap \mathcal G_{n-1}^c| \le 2C N\delta ^2 +2C N \delta \theta _{n-1}+ 2 f^*\delta \, M_{n-1} , \end{aligned}$$
(8.84)

having used (8.76) and that the number of neurons among the \(M_{n-1}\) bad ones (those with labels outside \(\mathcal G_{n-1}\)) which fire in a time \(\delta \) is bounded by a Poisson variable of intensity \(f^* \delta M_{n-1}\). Moreover,

$$\begin{aligned} P\Big [ \sum _{j\in A_n} N_j((n-1)\delta ,n\delta ) \ge 4 (f^*\delta )^2 N\Big ]&\le P\Big [ \sum _{j\in A_n} N_j((n-1)\delta ,n\delta ) \ge 4 (f^*\delta )^2 N;\; |A_n| \le (f^*\delta )^2 N \Big ] \\&\quad + P\Big [ |A_n| > (f^*\delta )^2 N \Big ] . \end{aligned}$$
(8.85)

The last term is bounded using (8.75).

We now bound the first term on the right hand side of (8.85). To this end, let \(A \subset \{1,\ldots ,N\}\) with \(|A| \le (f^*\delta )^2 N ;\) then

$$\begin{aligned} P\Big [ \sum _{j\in A_n} N_j((n-1)\delta ,n\delta ) \ge 4 (f^*\delta )^2 N\;|\; A_n =A \Big ] \le P^*\Big [ \sum _{j\in A}( N^*_j-2) \ge 2 (f^*\delta )^2 N \Big ] , \end{aligned}$$

where \(P^*\) is the law of independent Poisson variables \(N^*_j\), \(j\in A\), each of parameter \(f^*\delta \) and conditioned on \(N^*_j\ge 2\). Thus the probability that \(N_j^*-2 = k\) is

$$\begin{aligned} P^*[N^*_j-2 = k ]= Z_\xi ^{-1} \frac{\xi ^k}{(k+2)!},\quad Z_\xi = \xi ^{-2} \Big (e^\xi - 1 -\xi \Big ),\quad \xi = f^*\delta . \end{aligned}$$

Denote by \(X_j\) independent Poisson variables of parameter \(\xi .\) Then it is easy to see that \(N^*_j-2 \le X_j\) stochastically for \(\xi \) small enough, hence for \(\delta \) small enough. Notice that \(X=\sum _{j\in A} X_j\) is a Poisson variable of parameter \(|A| \xi \le (f^*\delta )^2 N f^*\delta \), whose expectation satisfies \(E^* ( X) \le (f^*\delta )^2 N\) for \(\delta \) small. As a consequence we may conclude that

$$\begin{aligned} P^*\Big [ \sum _{j\in A}( N^*_j-2) \ge 2 (f^*\delta )^2 N \Big ] \le P^*\Big [X \ge 2 (f^*\delta )^2 N \Big ] \le e^{-CN \delta ^{2}} . \end{aligned}$$
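The stochastic domination \(N^*_j - 2 \le X_j\) invoked above can be checked numerically for a given small \(\xi \); a quick sketch (ours, assuming scipy is available; the value of \(\xi \) is illustrative):

```python
from scipy.stats import poisson

xi = 0.05                            # plays the role of f* delta
Z = poisson.sf(1, xi)                # P[N* >= 2], the conditioning event
for m in range(1, 30):
    lhs = poisson.sf(m + 1, xi) / Z  # P*[N* - 2 >= m]
    rhs = poisson.sf(m - 1, xi)      # P[X_j >= m], X_j ~ Poisson(xi)
    assert lhs <= rhs, (m, lhs, rhs)
```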

In conclusion for \(i\in F_n \cap \mathcal{G}_{n- 1}\):

$$\begin{aligned} D_i (n) \le \theta _{n-1}(1+ C \delta ) + C\delta \frac{M_{n-1}}{N}+ C\delta ^2 \end{aligned}$$
(8.86)

with probability \(\ge 1- e^{-C\delta ^2 N}\). Together with (8.80) this proves that with probability \(\ge 1- e^{-C\delta ^2 N}\)

$$\begin{aligned} \theta _n \le \max \left\{ C\delta ; \theta _{n-1}(1+ C \delta ) + C\delta \frac{M_{n-1}}{N}+ C\delta ^2\right\} . \end{aligned}$$
(8.87)

1.4 Iteration of the Inequalities

By (8.78), \(\frac{M_n}{N} \le C (\theta _{n-1} + \delta )\) for all \(n\delta \le T\) with probability \(\ge 1 - \frac{T}{\delta }e^{-CN\delta ^2}\). By (8.87), with probability \(\ge 1- \frac{T}{\delta }e^{-C\delta ^2 N}\) we have

$$\begin{aligned} \theta _n \le \max \left\{ C\delta ; \theta _{n-1}(1+ C \delta ) + C\delta \frac{M_{n-1}}{N}+ C\delta ^2 \right\} . \end{aligned}$$

Thus, bounding \( M_{n-1}/N \le C (\theta _{n-2} + \delta ) \) by (8.78) and using that \( \theta _{n-2} \le \theta _{n-1} ,\)

$$\begin{aligned} \theta _n \le \max \Big ( C \delta ,\left[ 1 + C \delta \right] \theta _{n-1} + C \delta ^2 \Big ) , \end{aligned}$$

Iterating this inequality we obtain

$$\begin{aligned} \theta _n \le C \sum _{ k=0}^{n-1} \left[ 1 + C \delta \right] ^k \delta ^2 + (1 + C \delta )^n C \delta&= C \frac{ \left[ 1 + C \delta \right] ^n - 1 }{ C \delta } \delta ^2 + (1 + C \delta )^n C \delta \\&\le C e^{ C T } \delta , \end{aligned}$$

where we have used once more that \( n \delta \le T .\) Hence

$$\begin{aligned} \theta _n \le C \delta \end{aligned}$$

for all \( \delta \le \delta _0, \) with probability \(\ge 1 - \frac{C}{\delta ^2 } e^{ - C N \delta ^2 }.\) This finishes the proof of Theorem 4.
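The mechanism of this discrete Gronwall iteration is easy to visualize numerically; a small sketch with arbitrary constants (not taken from the paper):

```python
# iterate theta_n <= max(C*delta, (1 + C*delta)*theta_{n-1} + C*delta**2)
C, T = 2.0, 1.0
for delta in (1e-2, 1e-3, 1e-4):
    theta = 0.0
    for _ in range(int(T / delta)):
        theta = max(C * delta, (1 + C * delta) * theta + C * delta**2)
    # theta/delta stays bounded as delta -> 0, i.e. theta_n = O(delta)
    print(f"delta={delta:.0e}  theta/delta={theta / delta:.2f}")
```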

Appendix 3: Proof of Theorem 5

The proof is by induction on \(n.\) First, (6.61) holds for \( n = 0, \) since \(B_0 = 0 \) and \(x^N_1, \ldots , x^N_N \) are i.i.d. according to \(\psi _0 (x) dx .\) We then suppose that (6.61) holds for all \(j\le n\). We condition on \(\mathcal F_n\) and introduce \(G_n=\bigcap _{j\le n} \{d_j ( Y^{(\delta )} ,\rho ^{(\delta )}) \le \kappa _j \}.\) Then

$$\begin{aligned} S_{x}^{(\delta ,N, \lambda )}\Big [d_{n+1}( Y^{(\delta )} ,\rho ^{(\delta )}) > \kappa _{n+1}\Big ]&\le S_{x}^{(\delta ,N, \lambda )}\Big ( \mathbf 1_{G_n} S_{x}^{(\delta ,N, \lambda )}\Big [d_{n+1}( Y^{(\delta )} ,\rho ^{(\delta )}) > \kappa _{n+1}\;\Big |\; \mathcal F_n\Big ] \Big ) \\&\quad + n c_1(n) e^{ - c_2 N^\gamma } . \end{aligned}$$
(8.88)

Therefore, we need to prove that for some constant \(c\)

$$\begin{aligned} S_{x}^{(\delta ,N, \lambda )}\Big [d_{n+1}( Y^{(\delta )} ,\rho ^{(\delta )}) > \kappa _{n+1}\;\Big |\; \mathcal F_n\Big ] \le c e^{ - c_2 N^\gamma } \quad \text{ on } G_n . \end{aligned}$$
(8.89)

From the conditioning we know that \(d_j( Y^{(\delta )} ,\rho ^{(\delta )}) \le \kappa _j\) for all \(j\le n\); we also know the value of \( Y^{(\delta )} ( n \delta ),\) say \( Y^{(\delta )} ( n \delta )= y ,\) the location of the edges \(R'_j\), \( j\le n , \) and which intervals are good and which are bad at time \(n\delta \).

1.1 Consequences of Being in \(G_n\)

Being in \(G_n\) allows us to control not only the quantities directly involved in the definition of \(d_n\) but also several other quantities. The first one is the difference \(|R'_n-R_n|\). Indeed, recalling (6.43) and (6.52),

$$\begin{aligned} |R'_n-R_n|&\le e^{- \lambda \delta }|R_{n-1}'-R_{n-1}| + \big| V((n-1)\delta ) -p_{(n-1) \delta }^{(\delta )} \big| \, \delta + (1-e^{-\lambda \delta }) \big| \bar{Y}^{(\delta )} ((n-1)\delta ) - {\bar{\rho }}^{(\delta )}_{ (n-1) \delta }\big| \\&\le e^{- \lambda \delta }|R_{n-1}'-R_{n-1}| + \kappa _{n} \ell + \lambda \delta \kappa _{ n-1} \ell . \end{aligned}$$
(8.90)

Iterating this argument yields

$$\begin{aligned} |R'_n-R_n| \le \Big (\sum _{j=1}^{n} \kappa _{j}\Big )(1+\lambda \delta )\ell . \end{aligned}$$

Writing \(I_{i,n}= [a_{i,n},b_{i,n}]\) and \(I'_{i,n}= [a'_{i,n},b'_{i,n}],\) we obtain in particular

$$\begin{aligned} |a_{i,n}-a'_{i,n}| = |b'_{i,n}-b_{i,n}| \le \Big (\sum _{j=1}^{n} \kappa _{j}\Big )(1+\lambda \delta )\ell . \end{aligned}$$
(8.91)

We also get a bound on the increments of the number of bad intervals. Recalling (6.59) for notation we have indeed

$$\begin{aligned} B_{n} \le B_{n-1} + 1+ \frac{ | R_{n} - R'_{n} |}{\ell } . \end{aligned}$$
(8.92)

We finally have bounds on \( N'_{i,n} .\) First suppose that \(I_{i,n}\) is a good interval such that \( I_{ i, 0 } \subset {\mathbb R}_+ .\) Then \(N'_{i,n} \le N'_{i,0}\), whence, for \(N\) large enough, since \( N \mathrm{w}_i = N_{i, 0} , \)

$$\begin{aligned} N'_{i,n} \le N_{i,0} + \kappa _0 \mathrm{w}_i N \ell \le (1+\kappa _0 \ell ) N \mathrm{w}_i \le 2N \mathrm{w}_i . \end{aligned}$$
(8.93)

We also have a lower bound. By (6.46), \(N_{i,n} \ge N_{i,0} e^{-f^* \delta n}\), hence

$$\begin{aligned} N'_{i,n} \ge N_{i,n}- \kappa _n \mathrm{w}_i N \ell \ge \mathrm{w}_i N \Big (e^{-f^* T}-\kappa _n \ell \Big ) \ge c \mathrm{w}_i N \ge c \ell ^3 N = c r^3 N^{ 1 - 3 \alpha } , \end{aligned}$$
(8.94)

for N large enough.

Now consider a good interval \(I_{i,n} \subset {\mathbb R}_+\) such that \( I_{ i, 0 } \subset {\mathbb R}_- .\) Then there exists \(k \le n \) such that \( I_{i, k- 1 } \subset {\mathbb R}_- \) and \(I_{i, k } \subset {\mathbb R}_+ .\) Recalling the definition of \(d_k \) in (5.29) and the definition (5.30), we notice that \( d_k \ge 1/N \) if \( N \delta V (k \delta ) \ge 2 ,\) i.e. if at least two neurons spike. At step \(k,\) the number of neurons falling into the interval \( I_{i, k }\) is upper bounded by \( \frac{ \ell }{d_k } + 1 \) if \( N \delta V (k\delta ) \ge 2\); otherwise there is at most one neuron falling into it. In both cases this yields the upper bound \( \ell N + 1 \) for the number of neurons falling into the interval. After time \(k,\) neurons originally in \( I_{i,k } \) can only disappear due to spiking. Hence,

$$\begin{aligned} N'_{i,n} \le N'_{i, k } \le N ( \ell + N^{ - 1 } ) \le C N \mathrm{w}_i , \end{aligned}$$
(8.95)

by definition of \(\mathrm{w}_i.\) In order to obtain a lower bound, we first use that

$$\begin{aligned} N'_{i,n} \ge N_{i,n}- \kappa _n \mathrm{w}_i N \ell . \end{aligned}$$

Since \( I_{i, k -1} \subset {\mathbb R}_-, \) we have that \( \rho _{k\delta }^{(\delta )} \equiv \rho _{k \delta }^{(\delta )} ( 0 ) \) on \(I_{i, k },\) hence \( N_{i, k } = N \rho _{ k \delta }^{(\delta ) } ( 0) \ell e^{ - \lambda \delta k}.\) Using Proposition 6 and (6.46),

$$\begin{aligned} N_{i, n } \ge N_{i, k } e^{ - f^* \delta (n -k ) } \ge N \rho _{ k \delta }^{(\delta ) } ( 0) \ell e^{ - \lambda T} e^{ - f^* T } \ge C N \psi _0 ( 0 ) \ell = C N \mathrm{w}_i , \end{aligned}$$

where \(C\) depends on \(T\); this allows us to conclude as above.

If \(I_{i,n}\) is a bad interval, it is easy to see that the upper bound

$$\begin{aligned} N'_{i,n } + N_{i, n} \le C N \ell \end{aligned}$$
(8.96)

holds.

1.2 Expected Fires in Good Intervals

Recall that we are working on \(G_n\) and conditionally on \(Y^{(\delta )} (n \delta ) = y.\) Using (5.25) and (6.51), we write

$$\begin{aligned} V ( n \delta ) = \frac{1}{\delta N} \sum _{ i } \Delta _i , \text{ where } \Delta _i = \sum _{ j : y_j \in I'_{i,n } } \Phi _j (n ) , \end{aligned}$$
(8.97)

and call \(\langle \Delta _i \rangle \) its conditional expectation (given \(\mathcal F_n,\) and hence given that \(Y^{(\delta )} ( n \delta ) = y \)). Then

$$\begin{aligned} \langle \Delta _i \rangle = \sum _{ j : y_j \in I'_{i,n } } \left( 1 - e^{ -\delta f( y_j)} \right) . \end{aligned}$$

Recall that \(I'_{i,n}=[a'_{i,n},b'_{i,n}].\) Since \(f\) is nondecreasing, we have

$$\begin{aligned} \langle \Delta _i \rangle \le N'_{i,n} (1- e^{-\delta f(b'_{i,n})}) \le N'_{i,n} (1- e^{-\delta f(a'_{i,n})}) + N'_{i,n}\left| e^{-\delta f(b'_{i,n})}-e^{-\delta f(a'_{i,n})}\right| . \end{aligned}$$

Moreover, \(f(b'_{i,n}) \le f( a'_{i,n}) + \Vert f\Vert _{Lip} \, \ell ,\) which implies that

$$\begin{aligned} \left| e^{-\delta f(b'_{i,n})}-e^{-\delta f(a'_{i,n})}\right| \le \delta C\ell . \end{aligned}$$

Suppose first that \(I'_{i,n}\) is good so that \(|N'_{i,n}- N_{i,n}| \le \kappa _n \mathrm{w}_i N\ell \). Then by (8.93) and (8.95),

$$\begin{aligned} \langle \Delta _i \rangle&\le \{N_{i,n} + \kappa _n \mathrm{w}_i N\ell \}(1- e^{-\delta f(a'_{i,n})})+\delta C\ell N\mathrm{w_i} \\&\le N_{i,n} (1- e^{-\delta f(a'_{i,n})}) + C (\kappa _n +1 ) \delta \mathrm{w}_i N\ell . \end{aligned}$$

Write \(I_{i,n}=[a_{i,n},b_{i,n}],\) so that by (8.91), \(|a'_{i,n}-a_{i,n}| \le K_n \ell ,\) where \(K_n = (\sum _{j=1}^n \kappa _j ) (1 + \lambda \delta ) .\) Then

$$\begin{aligned} \langle \Delta _i \rangle \le N_{i,n} (1- e^{-\delta f(a_{i,n})}) + \big (K_n +C(1+\kappa _n)\big )\delta \mathrm{w}_i N\ell . \end{aligned}$$

Since \(f\) is nondecreasing and \(N_{i,n} = N \int _{I_{i,n}} \rho ^{(\delta )}_{n\delta }(x)\, dx ,\)

$$\begin{aligned} \langle \Delta _i \rangle \le N \int _{I_{i,n}} \rho ^{(\delta )}_{n\delta }(x) \left( 1- e^{-\delta f(x)}\right) dx + \big (K_n +C(1+\kappa _n)\big )\delta \mathrm{w}_i N\ell . \end{aligned}$$
(8.98)

An analogous argument gives

$$\begin{aligned} \langle \Delta _i \rangle \ge N \int _{I_{i,n}} \rho ^{(\delta )}_{n\delta }(x) \left( 1- e^{-\delta f(x)}\right) dx - \big (K_n +C(1+\kappa _n)\big )\delta \mathrm{w}_i N\ell . \end{aligned}$$
(8.99)

1.3 Fires Fluctuations in Good Intervals

Hoeffding’s inequality implies that for any \( b > 0, \)

$$\begin{aligned} P \left[ | \Delta _i - \langle \Delta _i \rangle | \ge (N'_{ i,n })^{b + \frac{1}{2}} \right] \le 2 e^{- 2 (N'_{ i,n })^{2b} } . \end{aligned}$$
(8.100)
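Indeed, \(\Delta _i\) is a sum of \(N'_{i,n}\) independent Bernoulli variables, so Hoeffding's bound \(P [ | \Delta _i - \langle \Delta _i \rangle | \ge t ] \le 2 e^{ - 2 t^2 / N'_{i,n} }\) with the substitution \( t = (N'_{i,n})^{b + \frac{1}{2}} \) (our choice of \(t\)) gives exactly (8.100).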

We introduce the contribution of \(I_{i,n}\) to \( p^{(\delta )}_{n\delta }\):

$$\begin{aligned} p^{(\delta )}_{i,n\delta }= \frac{N}{\delta } \int _{I_{i,n}} \rho ^{(\delta )}_{n\delta }(x) (1- e^{-\delta f(x)})dx . \end{aligned}$$

We then use (8.93), (8.94) and (8.95) together with (6.58), (8.98) and (8.100) to get

$$\begin{aligned} P\left[ \bigcap _{I_{i,n}\;\mathrm{good}}\left\{ \Delta _i \le \delta p^{(\delta )}_{i,n \delta }+ \big (K_n +C(1+\kappa _n)\big )\delta \mathrm{w}_i N\ell +\big (C N \mathrm{w}_i\big )^{\frac{1}{2} + b} \right\} \right] \ge 1- 2 m_n e^{-2 ( c r^3 N^{1-3\alpha })^{2b}} , \end{aligned}$$
(8.101)

where \(m_n \) is the number of good intervals, which can be upper bounded by

$$\begin{aligned} m_n \le \left( \frac{R_n }{e^{ - \lambda n \delta } \ell } \vee \frac{R'_n}{e^{ - \lambda n \delta } \ell } \right) + 1 \le C \left( \frac{R_n}{\ell } + [ \sum _{ j = 1 }^n \kappa _j ] \right) +1 \le C N^\alpha , \end{aligned}$$

because \( R_n \le R_0 + c^* T \) and \(n \delta \le T.\) As a consequence, the right hand side of (8.101) can be lower bounded by \( 1 - C N^\alpha e^{-C ( N^{1-3\alpha })^{2b}} .\) By an analogous argument

$$\begin{aligned} P\left[ \bigcap _{I_{i,n}\;\mathrm{good}}\left\{ \Delta _i \ge \delta p^{(\delta )}_{i,n \delta }- \big (K_n +C(1+\kappa _n)\big )\delta \mathrm{w}_i N\ell -\big (2 N \mathrm{w}_i\big )^{\frac{1}{2} + b} \right\} \right] \ge 1-C N^\alpha e^{- C ( N^{1-3\alpha })^{2b}} . \end{aligned}$$
(8.102)

Now we choose \(b \) and \( \alpha \) sufficiently small such that for \(N\) large enough, \((C N \mathrm{w}_i )^{\frac{1}{2} + b} \le C\delta (1+\kappa _n)\mathrm{w}_i N\ell .\) Then

$$\begin{aligned} P\left[ \bigcap _{I_{i,n}\;\mathrm{good}}\left\{ |\Delta _i- \delta p^{(\delta )}_{i,n \delta }| \le \big (K_n +2C(1+\kappa _n)\big )\delta \mathrm{w}_i N\ell \right\} \right] \ge 1- C N^\alpha e^{-C ( N^{1-3\alpha })^{2b}} \ge 1 - C e^{ - C N^\gamma } , \end{aligned}$$
(8.103)

where \( \gamma = (1 - 3 \alpha ) 2 b \) and \(C\) is a suitable constant.

1.4 The Bounds on \(V (n\delta ) \) and on \(B_{n+1}\)

By (8.96) and (8.103), with probability \(\ge 1 - C e^{ - C N^\gamma } ,\)

$$\begin{aligned} | \delta V (n \delta ) - \delta p_{n\delta }^{(\delta ) } | \le \left[ \sum _{I_{i,n}\;\mathrm{good}} \left( K_n +2C(1+\kappa _n)\right) \delta \mathrm{w}_i \ell \right] + \frac{C N\ell }{N} B_n \le \kappa '_{n +1 } \ell . \end{aligned}$$
(8.104)

Hence, we have proven the desired assertion for \( V_n \) at time \( (n+1 ) \delta .\)

To bound \(B_{n+1},\) the number of bad intervals at time \((n+1) \delta ,\) we use the first inequality in (8.90) with \(n\rightarrow n+1\) together with (8.104), so that by (8.92)

$$\begin{aligned} B_{n+1} \le B_n + 1+ \frac{ | R_{n+1} - R'_{n+1} |}{\ell } \le B_n + 1+ \Big (\sum _{j=1}^{n} \kappa _{j}+\kappa '_{n+1}+ \kappa _n\Big )(1+\lambda \delta ), \end{aligned}$$

whence the assertion concerning \(B_{n+1}\).

1.5 Bounds on \(|N'_{i, n+1} - N_{i, n+1} |\)

Let \(I_{i,n}\) be a good interval at time \(n\delta \) which is contained in \(\mathbb R_+\). Then it is good at time \((n+1)\delta \) as well, and we have

$$\begin{aligned} N'_{i, n+1} = \sum _{ j : y_j \in I'_{i,n} } (1 - \Phi _j (n ) ) = N'_{i, n } - \Delta _i \quad \text{ and } \quad N_{i, n+1} = N_{i, n }- \delta p^{(\delta )}_{i,n\delta } . \end{aligned}$$

Thus

$$\begin{aligned} \frac{|N'_{i,n+1}- N_{i,n+1}| }{ \mathrm{w}_i N\ell } \le \kappa _n + \frac{|\Delta _i- \delta p^{(\delta )}_{i,n\delta }| }{ \mathrm{w}_i N\ell } , \end{aligned}$$

and the desired bound follows from (8.103).

It remains to consider a good interval \( I'_{i, n+1} \) such that \(I'_{i,n} \subset \mathbb R_-\) (and hence also \(I_{i,n} \subset \mathbb R_-\)). Thus \( I'_{i, n+1} \) consists entirely of “new born” neurons, arising from firing events at which the membrane potentials are reset to \(0.\) For such an interval,

$$\begin{aligned} \frac{ N'_{i, n+1}}{N \ell } \in [ \frac{1}{N d_n} e^{ - \lambda \delta n} , \frac{1}{N d_n} e^{ - \lambda \delta n } + 1 ] . \end{aligned}$$

But, recalling the definition of \(d_n\) in (5.29) and of \(\rho _{(n+1) \delta }^{(\delta )} ( 0 ) \) in (6.41), by continuity of \( (u,p ) \rightarrow \frac{ p \delta }{ p \delta + (1- e^{ - \lambda \delta } )u} ,\) we have

$$\begin{aligned} \left| \frac{1}{N d_n} - \rho ^{(\delta )}_{(n+1) \delta } (0) \right| \le C \kappa _n \ell . \end{aligned}$$
(8.105)

Since \(\rho ^{(\delta )}_{(n+1) \delta } (x) 1_{ I_{i, n+1} } (x) \equiv \rho _{(n+1) \delta }^{(\delta )} (0) \) on this interval, this implies that also for such intervals,

$$\begin{aligned} \frac{1}{N \ell } \left| N' _{i, n+1} - N_{i, n+1} \right| \le C \kappa _n \ell = C \kappa _n \mathrm{w}_i , \end{aligned}$$

by definition of \(\mathrm{w}_i .\) This concludes the bound on \(| N' _{i, n+1} - N_{i, n+1} |\).

The bound on \(| \bar{Y}^{(\delta )}( (n+1)\delta ) - {\bar{\rho }}_{(n+1) \delta }^{(\delta ) }| \) follows from the bounds on \(|N'_{i,n+1}- N_{i,n+1}|\) and \(B_{n+1}\); details are omitted. This concludes the proof of Theorem 5.

Appendix 4: Proof of Theorem 2 for General Firing Rates

Let \(f\), \(T\), \(A\), \(B\) be as in Theorem 1, and let \(x^{N}\) be the initial state of the neurons as in Theorem 2, with \(\Vert x^{N}\Vert \le A\). Let \(\psi \) be a bounded continuous function on \( D ( [0, T ] , \mathcal{S}') \). We need to prove that

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathcal{P}^N_{[0, T ] }(\psi )= \psi (\rho ) \end{aligned}$$

where \(\mathcal{P}^N_{[0, T ] }(\psi )\) is the expected value of \(\psi \) under the law of \((\mu _{ U^N})_{ [0, T ] } \) when the process \(U^N\) starts from \(x^{N}\) and \(\psi (\rho )\) is the value of \(\psi \) on the element \(\rho :=(\rho _t dx)_{t\in [0,T]}\) of \( D ( [0, T ] , \mathcal{S}') \).

Let \(\mathbf 1_{\mathcal U}\) be the characteristic function of the event \(\{\Vert U^N(t) \Vert \le B, t\in [0,T]\}\). Then by Theorem 1

$$\begin{aligned} \lim _{N\rightarrow \infty }\big | \mathcal{P}^N_{[0, T ] }(\psi ) - \mathcal{P}^N_{[0, T ] }(\psi \mathbf 1_{{\mathcal U}})\big | =0 . \end{aligned}$$
(8.106)

By an abuse of notation we call \(\mathcal{P}^{*,N}_{[0, T ] }\) the law of the process with a firing rate \(f^*(\cdot )\) which satisfies Assumption 3 and coincides with \(f\) for \(x\le B\). Then

$$\begin{aligned} \mathcal{P}^N_{[0, T ] }(\psi \mathbf 1_{\mathcal U}) = \mathcal{P}^{*,N}_{[0, T ] }(\psi \mathbf 1_{\mathcal U}) . \end{aligned}$$
(8.107)

Since we have proved Theorem 2 under Assumption 3, we have convergence of the process with rate \(f^*(\cdot )\) to a limit density which we call \(\rho ^*=(\rho ^*_t)_{t\in [0,T]}\), so that

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathcal{P}^{*,N}_{[0, T ] }(\psi 1_{\mathcal U}) = \psi (\rho ^* 1_{\mathcal U}) . \end{aligned}$$
(8.108)

As a consequence of (8.106) and (8.107),

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathcal{P}^N_{[0, T ] }(\psi ) = \psi (\rho ^* 1_{\mathcal U} ) . \end{aligned}$$

By the arbitrariness of \(\psi \), \(\rho ^*=\rho ^* \mathbf 1_{{\mathcal U}}.\) Indeed, taking \( \psi (\omega )= \sup \{ \omega _t ( 1 ) , t \le T \} \wedge 1 , \) we have \(\lim _{N\rightarrow \infty } \mathcal{P}^N_{[0, T ] }(\psi ) \equiv 1 , \) which implies that \(\rho ^*\) must have support in \([0,B] .\) As a consequence,

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathcal{P}^N_{[0, T ] }(\psi ) = \psi (\rho ^* 1_{\mathcal U} ) = \psi ( \rho ^* ) , \end{aligned}$$

and the limit \(\rho ^* \) is equal to the solution of the equation with the true firing rate \(f\). This concludes the proof of the theorem.


Cite this article

De Masi, A., Galves, A., Löcherbach, E. et al. Hydrodynamic Limit for Interacting Neurons. J Stat Phys 158, 866–902 (2015). https://doi.org/10.1007/s10955-014-1145-1
