
A Markovian event-based framework for stochastic spiking neural networks

Journal of Computational Neuroscience

Abstract

In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, on the input neurons receive, and on the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical neural network models as the linear integrate-and-fire neuron with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.


Notes

  1. And possibly on other variables, depending on the model considered.

  2. Note that formally most of the derivations done in the present manuscript could be performed using the discontinuous (point-process) noise S i . However, the random variables involved in the dynamics of the countdown process would then be harder to express in closed form, and numerical evaluations of such random variables are generally not as efficient as those based on the powerful tools of stochastic diffusion theory.

  3. The case of PIF neurons is the only one in which both the interaction and the reset random variables can be expressed in closed form in terms of usual functions.

  4. Indeed, if the action of the spike fired by neuron i at time t i affects neuron j after a delay Δ ij , then \(t^*=t_i+\Delta_{ij}\); and if t j is the last firing time of neuron j and the refractory period corresponds to modulating the interaction by a function κ, then the synaptic weight w ij is multiplied by \(\kappa(t^*-t_j)\). The way to include such phenomena in a generic manner is presented in detail in Appendix B.

References

  • Arbib, M. A. (Ed.) (1998). The handbook of brain theory and neural networks. Cambridge: MIT Press.

  • Asmussen, S., & Turova, T. S. (1998). Stationarity properties of neural networks. Journal of Applied Probability, 35, 783–794.

  • Brette, R. (2006). Exact simulation of integrate-and-fire models with synaptic conductances. Neural Computation, 18(8), 2004–2027.

  • Brette, R. (2007). Exact simulation of integrate-and-fire models with exponential currents. Neural Computation, 19(10), 2604–2609.

  • Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007). Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience, 23(3), 349–398.

  • Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11, 1621–1671.

  • Cessac, B. (2008). A discrete time neural network model with spiking neurons. Journal of Mathematical Biology, 56(3), 311–345. doi:10.1007/s00285-007-0117-3.

  • Cessac, B. (2010). A discrete time neural network model with spiking neurons: II: Dynamics with noise. Journal of Mathematical Biology, 1–38. doi:10.1007/s00285-010-0358-4.

  • Claverol, E., Brown, A., & Chad, J. (2002). Discrete simulation of large aggregates of neurons. Neurocomputing, 47, 277–297.

  • Cottrell, M. (1992). Mathematical analysis of a neural network with inhibitory coupling. Stochastic Processes and their Applications, 40, 103–127.

  • Cottrell, M., & Turova, T. (2000). Use of an hourglass model in neuronal coding. Journal of Applied Probability, 37, 168–186.

  • Davis, M. (1984). Piecewise-deterministic Markov processes: A general class of non-diffusion stochastic models. Journal of the Royal Statistical Society, Series B (Methodological), 46, 353–388.

  • Delorme, A., & Thorpe, S. (2001). Face processing using one spike per neuron: Resistance to image degradation. Neural Networks, 14, 795–804.

  • Delorme, A., & Thorpe, S. (2003). SpikeNET: An event-driven simulation package for modelling large networks of spiking neurons. Network, 14(4), 613–627.

  • Fabre-Thorpe, M., Richard, G., & Thorpe, S. (1998). Rapid categorization of natural images by rhesus monkeys. Neuroreport, 9(2), 303–308.

  • Fricker, C., Robert, P., Saada, E., & Tibi, D. (1994). Analysis of some networks with interaction. Annals of Applied Probability, 4, 1112–1128.

  • Gerstner, W., & Kistler, W. (2002a). Spiking neuron models. Cambridge: Cambridge University Press.

  • Gerstner, W., & Kistler, W. M. (2002b). Mathematical formulations of Hebbian learning. Biological Cybernetics, 87, 404–415.

  • Gobet, E. (2000). Weak approximation of killed diffusion using Euler schemes. Stochastic Processes and their Applications, 87(2), 167–197.

  • Goldman, M. (1971). On the first passage of the integrated Wiener process. Annals of Mathematical Statistics, 42, 2150–2155.

  • Gromoll, H., Robert, P., & Zwart, B. (2008). Fluid limits for processor sharing queues with impatience. Mathematics of Operations Research, 33(2), 375–402.

  • Holden, A. (1976). Models of the stochastic activity of neurones. Lecture Notes in Biomathematics, 12, 1–368.

  • Izhikevich, E. M., & Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105(9), 3593–3598.

  • Kandel, E., Schwartz, J., & Jessel, T. (2000). Principles of neural science (4th ed.). New York: McGraw-Hill.

  • Karatzas, I., & Shreve, S. (1987). Brownian motion and stochastic calculus. New York: Springer.

  • Kloeden, P., & Platen, E. (1992). Numerical solution of stochastic differential equations. New York: Springer.

  • Lachal, A. (1991). Sur le premier instant de passage de l'intégrale du mouvement brownien [On the first passage time of the integral of Brownian motion]. Annales de l'IHP, Section B, 27, 385–405.

  • Lachal, A. (1996). Sur la distribution de certaines fonctionnelles de l'intégrale du mouvement brownien avec dérives parabolique et cubique [On the distribution of certain functionals of the integral of Brownian motion with parabolic and cubic drifts]. Communications on Pure and Applied Mathematics, 49, 1299–1338.

  • Makino, T. (2003). A discrete-event neural network simulator for general neuron models. Neural Computing & Applications, 11, 210–223.

  • Marian, I., Reilly, R., & Mackey, D. (2002). Efficient event-driven simulation of spiking neural networks. In Proceedings of the 3rd WSEAS international conference on neural networks and applications.

  • McKean, H. P. (1963). A winding problem for a resonator driven by a white noise. Journal of Mathematics of Kyoto University, 2, 227–235.

  • Plesser, H. E. (1999). Aspects of signal processing in noisy neurons. PhD thesis, Georg-August-Universität.

  • Ricciardi, L., & Smith, C. (1977). Diffusion processes and related topics in biology. New York: Springer.

  • Rolls, E., & Deco, G. (2010). The noisy brain: Stochastic dynamics as a principle of brain function. London: Oxford University Press.

  • Roxin, A., Brunel, N., & Hansel, D. (2005). Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Physical Review Letters, 94(23), 238103.

  • Rudolph, M., & Destexhe, A. (2006). Analytical integrate-and-fire neuron models with conductance-based dynamics for event-driven simulation strategies. Neural Computation, 18, 2146–2210.

  • Shadlen, M. N., & Newsome, W. T. (1994). Noise, neural codes and cortical organization. Current Opinion in Neurobiology, 4(4), 569–579.

  • Softky, W. R., & Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience, 13, 334–350.

  • Thorpe, S., Delorme, A., & VanRullen, R. (2001). Spike based strategies for rapid processing. Neural Networks, 14, 715–726.

  • Tonnelier, A., Belmabrouk, H., & Martinez, D. (2007). Event-driven simulations of nonlinear integrate-and-fire neurons. Neural Computation, 19(12), 3226–3238.

  • Touboul, J. (2008). Nonlinear and stochastic models in neuroscience. PhD thesis, Ecole Polytechnique.

  • Touboul, J., & Faugeras, O. (2007). The spikes trains probability distributions: A stochastic calculus approach. Journal of Physiology, Paris, 101(1–3), 78–98.

  • Touboul, J., & Faugeras, O. (2008). First hitting time of double integral processes to curved boundaries. Advances in Applied Probability, 40(2), 501–528.

  • Tuckwell, H. C. (1988). Introduction to theoretical neurobiology. Cambridge: Cambridge University Press.

  • Turova, T. (2000). Neural networks through the hourglass. BioSystems, 58, 159–165.

  • Turova, T. S. (1996). Analysis of a biological plausible neural network via an hourglass model. Markov Processes and Related Fields, 2, 487–510.

  • Watts, L. (1994). Event-driven simulation of networks of spiking neurons. Advances in Neural Information Processing Systems, 7, 927–934.


Acknowledgements

The authors warmly acknowledge Romain Brette for very insightful discussions on the concepts, Philippe Robert for interesting discussions and for reading suggestions, Olivier Rochel for his introduction to MVA Spike and for sharing his code, and Renaud Keriven and Alexandre Chariot for developing a GPU simulation code (not presented here). This work was partially supported by the ERC advanced grant NerVi number 227747.

Author information

Correspondence to Jonathan D. Touboul.

Additional information

Action Editor: Nicolas Brunel

Appendices

Appendix A: Inhibitory networks for a wider class of integrate-and-fire models

We study more refined descriptions of the neuronal activity and show that in these cases an event-based description of the network activity may fail.

1.1 LIF model with general post-synaptic current pulse

We consider a LIF neuron described by Eq. (5), but now assume that the spike integration is governed by a general postsynaptic potential function α(·), the Green's function of the differential operator \(\mathcal{L}\). In that case, the synaptic input to neuron j, denoted \(I_a^{(j)}\), satisfies \(\mathcal{L} I_a^{(j)} = 0\) between two spikes, and when neuron i fires a spike, the synaptic weight \(w_{ij}\) is added to the synaptic current \(I_a^{(j)}\). In the case of the LIF neuron, the equations of the model read:

$$ \begin{cases} \tau_i\, dV_t^{(i)} = \left(-V_t^{(i)}+I_e(t)+I_a^{(i)}(t)\right)dt + \sigma\, dW^{(i)}_t\\[4pt] \mathcal{L}\, I_a^{(i)}(t) = 0 \end{cases} $$

When the membrane potential \(V_t^{(i)}\) reaches the threshold θ, a spike is fired, the membrane potential is reset to the value \(V_r^{(i)}\), and the synaptic weight \(w_{ij}\) is added to the synaptic current \(I_a^{(j)}\) of each postsynaptic neuron j. The reset random variable is therefore simply the first hitting time of the Ornstein–Uhlenbeck process to the threshold, with synaptic current \(I_a(t) = 0\). If neuron j receives a spike from neuron i at time t *, then \(w_{ij}\) is added to the synaptic current \(I_a^{(j)}\). Therefore, the updated membrane potential at time \(t^* + X^{(j)}(t^*)\) reads:

$$ \theta\left(t^*+X^{(j)}(t^*)\right) + w_{ij}\, e^{-X^{(j)}(t^*)/\tau_j}\int_{0}^{X^{(j)}(t^*)}\alpha(s)\,e^{s/\tau_j}\, ds $$
(15)

and \(I_a^{(j)}(t^*+X^{(j)}(t^*))\) is the value at time \(t^* + X^{(j)}(t^*)\) of the solution of the ordinary differential equation \(\mathcal{L} I_a^{(j)}=0\) with initial condition \(I_a^{(j)}(t^{*-})+w_{ij}\) at time t *. Note that if the operator \(\mathcal{L}\) is of order d greater than one, the initial condition involves the derivatives of \(I_a^{(j)}\) up to order d − 1; these are unchanged by the spike reception. Hence the additional time induced by the reception of a presynaptic spike from neuron i has the law of the first hitting time of the stochastic process V (j) to the boundary θ, with initial condition at time \(t^* + X^{(j)}(t^*)\) given by Eq. (15) and \(I_a^{(j)}(t^*+X^{(j)}(t^*))+w_{ij}\Psi^{j}(X^{(j)}(t^*))\), where Ψ denotes the flow of the linear equation \(\mathcal{L} \Psi = 0\). This random variable has a law independent of the value of the membrane potential at time t *, and therefore one can build a Markov chain governing the spike times. In detail, in order to take this synaptic integration of spikes into account in our framework, we have to extend the phase space of our Markov chain: the Markovian variable we consider is the process \((X_t, I_a(t))_{t \geq 0}\). When a neuron i elicits a spike, i.e. when its countdown reaches 0 at time t *, its countdown value is reset by drawing from the law of the first hitting time to the threshold of the membrane potential with initial condition \((V_r^{(i)}, I_a^{(i)}(t^*))\), and for all neurons \(j \in \mathcal{V}(i)\) the spike-induced current \(I^{(j)}_a(t^*)\) is instantaneously updated by adding the synaptic efficiency \(w_{ij}\): \(I_a^{(j)}(t^*) = I_a^{(j)}(t^{*-}) + w_{ij}\). Simulating this Markov process, which can be sampled at the spike emission times, is equivalent, from the point of view of the spikes, to simulating the whole membrane potential process (see Theorem 2.2).
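To make this event-based description concrete, the following Python sketch (an illustration under stated assumptions, not the authors' code) simulates the extended Markov chain \((X_t, I_a(t))\) for a small inhibitory LIF network with exponentially decaying synaptic currents, i.e. taking \(\mathcal{L} = d/dt + 1/\tau_s\). The exact hitting-time laws derived in the text are replaced by a crude Euler–Maruyama sampler `fpt`, and the interaction update is a deliberately rough stand-in for the law derived from Eq. (15); all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
N = 5                                     # number of neurons
tau, tau_s = 20.0, 5.0                    # membrane and synaptic time constants
theta, v_r, sigma, I_e = 1.0, 0.0, 0.5, 1.2
W = -0.2 * (np.ones((N, N)) - np.eye(N))  # inhibitory all-to-all couplings

def fpt(v0, ia0, dt=0.01, t_max=500.0):
    """Crude sampler of the first time V_t hits theta, starting at v0,
    with synaptic current I_a(t) = ia0 * exp(-t / tau_s)."""
    v, t = v0, 0.0
    while t < t_max:
        ia = ia0 * np.exp(-t / tau_s)
        v += (-v + I_e + ia) * dt / tau + (sigma / tau) * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= theta:
            return t
    return t_max

# State of the chain: countdowns X and spike-induced currents I_a
X = np.array([fpt(v_r, 0.0) for _ in range(N)])   # initialization variables
I_a = np.zeros(N)
t_now, spikes = 0.0, []

for _ in range(30):                     # simulate 30 network spikes
    i = int(np.argmin(X))               # neuron with the lowest countdown fires
    dt_ev = X[i]
    t_now += dt_ev
    X -= dt_ev                          # advance all countdowns
    I_a *= np.exp(-dt_ev / tau_s)       # currents decay between events
    spikes.append((round(t_now, 2), i))
    X[i] = fpt(v_r, I_a[i])             # reset variable of the firing neuron
    for j in range(N):
        if j != i:
            I_a[j] += W[i, j]           # spike increments the postsynaptic current
            X[j] += fpt(theta, I_a[j])  # rough proxy for the interaction variable

print(spikes)
```

The chain is only examined at event times, which is exactly what makes the description event-based: nothing about the membrane potential is stored between two spikes.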

1.2 LIF models with noisy conductances

We consider the case of noisy conductance-based models. In this case, the membrane potential is the solution of the equation:

$$ \begin{cases} dV^{(i)}_t = \left(I_e^{(i)}(t) - \lambda_i \left(V^{(i)}_t-V_{rev}^{(i)}\right) + I_s^{(i)}(t)\right) dt + \sigma_i\, g_i \left(V^{(i)}_t-V_{rev}^{(i)}\right) dW_t^{(i)}\\[4pt] V^{(i)}(t^-) = \theta \;\Rightarrow\; V^{(i)}(t) = V_r^{(i)} \end{cases} $$
(16)

The term \(I_s^{(i)}\) corresponds to the current generated by the reception of spikes emitted from neurons in the network. \(V_{rev}^{(i)}\) is the reversal potential of the synapse.

Between two spikes, the stochastic differential equation can be integrated in closed-form. For instance, provided that \(V^{(i)}_{t_0}\) is known, we have for t ≥ t 0 (see e.g. Karatzas and Shreve 1987):

$$ V^{(i)}_t = Z^{(i)}_{t-t_0} \left( V^{(i)}_{t_0} +\int_{t_0}^t \frac{1}{Z^{(i)}_{u-t_0}} \left(I_e(u)+I_s(u)\right)du \right) $$

where \(Z_t^{(i)}\) is the process:

$$ Z_t^{(i)} = \exp \left(-\left(\lambda_i+\frac{g_i^2\sigma_i^2}{2}\right) \; t + g_i\,\sigma_i W^{(i)}_t\right). $$
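As a sanity check of this closed form, the following sketch (an illustration with arbitrary parameter values, taking \(V_{rev}^{(i)} = 0\), constant \(I_e\) and \(I_s = 0\)) compares the expression built from \(Z_t\) with a direct Euler–Maruyama integration of Eq. (16) on the same Brownian path; the two trajectories agree up to the O(dt) discretization error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions); V_rev = 0, I_s = 0, I_e constant
lam, g, sig, I_e, v0 = 0.1, 0.3, 0.4, 0.5, 0.0
dt, n = 1e-4, 20000
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = dt * np.arange(n + 1)

# Closed form: V_t = Z_t * (v0 + int_0^t Z_u^{-1} I_e du), with
# Z_t = exp(-(lam + g^2 sig^2 / 2) t + g sig W_t)
Z = np.exp(-(lam + (g * sig) ** 2 / 2) * t + g * sig * W)
integral = np.concatenate(([0.0], np.cumsum(I_e / Z[:-1] * dt)))
V_exact = Z * (v0 + integral)

# Euler-Maruyama on dV = (I_e - lam V) dt + sig g V dW  (Eq. (16), V_rev = 0)
V_em = np.empty(n + 1)
V_em[0] = v0
for k in range(n):
    V_em[k + 1] = V_em[k] + (I_e - lam * V_em[k]) * dt + sig * g * V_em[k] * dW[k]

print("max discrepancy:", np.max(np.abs(V_exact - V_em)))  # small, O(dt)
```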

This model makes it natural to consider that spike reception modifies the conductances in the network. This conductance-based synapse model is considered first, and instantaneous current-based synapses next.

1.2.1 Conductance-based synapses

When neuron j receives a spike from one of its neighbors i, a current of value \(w_{ij}\, g\, (V^{(j)} - V_{rev})\) is generated. Note that we artificially introduced V rev in the leak term, which amounts to formally changing the current \(I_e^{(i)}\), in order to integrate the equation in a simpler way. The reception of a spike from neuron i at time t * therefore increases the conductance by a coefficient \(w_{ij}\, g\). The solution of the membrane potential equation after time t * reads:

$$ V^{(j)}(t+t^*) = V_{(j)}^* Z_t^{(j)} + \int_0^t I_e^{(j)}(s+t^*)Z_{t-s}^{(j)}\,ds $$

where \(Z_t^{(j)} = \exp \{ -(\lambda_j+ \frac{\sigma_j^2}{2} -w_{ij} g) (t-t^*) + \sigma_j W_t^{(j)}\}\). The reception of the spike therefore modifies the value of the membrane potential, which would have been equal to

$$ \begin{array}{rll} \widetilde{V}^{(j)}(t+t^*) &=& V_{(j)}^* \, {Z_t}e^{w_{ij}\,g\,t} \\ &&+\, \int_0^t I_e^{(j)}(s+t^*)\,{Z}_{t-s}e^{w_{ij}\,g\,(t-s)}\,ds \end{array} $$

if neuron j did not receive any spike. At time \(X^{(j)}_*\), the value of \(\widetilde{V}^{(j)}(X^{(j)}_*+t^*)\) is equal to \(\theta(X^{(j)}_*+t^*)\), but after inhibition at time t *, the actual value of the membrane potential reads:

$$ \begin{array}{rll} V^{(j)}\left(X^{(j)}_*+t^*\right) &=& \theta\, e^{w_{ij}\,g \,X^{(j)}_*} \\ &&+\, \int_{0}^{X^{(j)}_*}I_e^{(j)}(s+t^*)\,Z_{X^{(j)}_*-s}\left(e^{w_{ij}\,g\,s}-1\right)ds \end{array} $$

This value therefore depends on the history of the Brownian motion from time t * onwards. We are interested in the first spike time if nothing else occurs in the network in the meantime. This is described by the random variable:

$$ \begin{array}{rcl} \eta_{ij} &=& \inf\left\{t>t^*,\; V^{(j)}_{t}=\theta(t) \,\Big\vert\, \widetilde{V}^{(j)}\left(X^{(j)}_{t^*}+t^*\right)=\theta\left(X^{(j)}_{t^*}+t^*\right)\right\}\\[6pt] &=& X^{(j)}_{t^*} + \inf\left\{t>t^*,\; V^{(j)}_{t}=\theta(t) \,\Big\vert\, \widetilde{V}^{(j)}\left(X^{(j)}_{t^*}+t^*\right)=\theta\left(X^{(j)}_{t^*}+t^*\right)\right\} \end{array} $$

This case is therefore substantially more complex than the previous ones. Indeed, conditioning on the event that \(t^*+X^{(j)}_{t^*}\) is the first time \(\widetilde{V}^{(j)}\) crosses the threshold constrains the trajectories of the Brownian motion in the interval \([t^*, t^*+X^{(j)}_{t^*}]\), and we observe that the value of the updated process V (j) depends on the whole history of the Brownian motion in this interval (except in the very particular case where I e  = 0, in which case the problem can be treated as previously). The problem of finding the law of the interaction random variable is therefore very delicate because of this complicated conditioning.

1.2.2 Noisy conductances and current-based synaptic interactions

We now consider the case of instantaneous interactions. In that case, if neuron j receives a spike from neuron i at time t *, its membrane potential is instantaneously increased by a quantity equal to w ij . Similarly to the computations done in the previous case, the updated membrane potential now reads:

$$ V^{(j)}(t+t^*) = \left(V_{(j)}^*+w_{ij}\right) Z_t^{(j)} + \int_0^t I_e^{(j)}(s+t^*)Z_{t-s}^{(j)}\,ds $$

and therefore at time \(X^{(j)}_{t^*}\), this value reads:

$$ V^{(j)}\left(X^{(j)}_{t^*}+t^*\right) = (\theta + w_{ij}) Z_{X^{(j)}_{t^*}}^{(j)} $$

We therefore need to characterize the probability distribution of \(Z_{X^{(j)}_{t^*}}^{(j)}\) conditioned on the fact that the neuron would have elicited a spike at time \(t^*+X^{(j)}_{t^*}\) had it not received a spike from neuron i at time t *. Here again, this problem depends in an intricate fashion on the distribution of the values of the underlying Brownian motion W (j) at this time, conditioned on the fact that the first crossing time of the process with the threshold was \(X^{(j)}_{t^*}\). To the best of our knowledge, no solution to this problem is available so far.
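The following Monte Carlo sketch (illustrative assumptions throughout, with \(V_{rev} = 0\) and constant input) makes the difficulty visible: it simulates the conductance-noise LIF of Eq. (16) up to its first passage time T, records the pair \((T, Z_T)\), and shows that \(Z_T\) retains a substantial spread even among paths whose T falls in a narrow bin, so \(Z_{X^{(j)}_{t^*}}\) is not determined by the countdown value and its conditional law is genuinely needed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions): V_rev = 0, constant external current
lam, g, sig, I_e, theta, v0 = 0.1, 0.3, 0.4, 0.8, 1.0, 0.0
dt, t_max, n_paths = 1e-3, 50.0, 2000

pairs = []
for _ in range(n_paths):
    v, w, t = v0, 0.0, 0.0
    while t < t_max:
        dW = np.sqrt(dt) * rng.standard_normal()
        v += (I_e - lam * v) * dt + sig * g * v * dW   # Eq. (16), V_rev = 0
        w += dW
        t += dt
        if v >= theta:                                 # first passage to theta
            Z = np.exp(-(lam + (g * sig) ** 2 / 2) * t + g * sig * w)
            pairs.append((t, Z))
            break

T = np.array([p[0] for p in pairs])
Z = np.array([p[1] for p in pairs])
in_bin = np.abs(T - np.median(T)) < 0.1        # condition on T in a narrow bin
print("paths in bin:", in_bin.sum())
print("std of Z_T given T in bin:", Z[in_bin].std())   # clearly nonzero
```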

1.3 Conclusion

Current-based synapses are a necessary hypothesis for expressing in closed form the random variables involved in the transitions of the countdown process. The same problem arises in the case of nonlinear models: it remains very difficult to express the interaction random variable conditioned on the value of the countdown process at the time of spike reception.

It is important to note here that this limitation is also present in the deterministic case. Extensions of the framework beyond simple linear integrate-and-fire models are scarce and apply to very particular models. The only successful extension to our knowledge is due to Tonnelier and collaborators (Tonnelier et al. 2007), for the quadratic integrate-and-fire neuron with constant input and constant threshold; it relies on the fact that the authors provide a closed-form expression for the membrane potential. This is no longer possible in the stochastic case: even if we were to find a closed-form expression for the membrane potential, the interaction random variable would be very complex to express because of the conditioning on the value of the countdown variable discussed previously.

Appendix B: Including synaptic delays and the refractory period

In this appendix we include two biologically plausible phenomena in the description of the network activity, and examine how they affect the Markovian framework.

1.1 Refractory period

The refractory period is a transient phase just after a spike during which it is either impossible or difficult to excite the cell. This phenomenon is linked to the dynamics of ion channels and the hyperpolarization phase of spike emission; it lasts a few milliseconds and prevents the neuron from firing at an arbitrarily high rate. It can be decomposed into two phases. The absolute refractory period is a constant period of time, corresponding loosely to the hyperpolarization of the neuron, during which it is impossible to excite or inhibit the cell no matter how great the stimulating current applied is (see for instance Kandel et al. (2000, Chapter 9) for a biological discussion of the phenomenon, and Gerstner and Kistler (2002a) and Arbib (1998) for a discussion of its modeling). Immediately after this phase begins the relative refractory period, during which the initiation of a second action potential is inhibited but still possible. Modeling this relative refractory period amounts to considering that the synaptic inputs received at the level of the cell are weighted by a function of the time elapsed since the spike emission.

For technical reasons, the relative refractory period is taken into account only for the spike integration, and not for the noise integration. This assumption does not significantly affect the dynamics of the network, since the probability that the noisy current integrated during a period as short as 1 or 2 ms is substantial is very small. For the spike integration this remark no longer holds, since a single spike induces large changes in the membrane voltage. During the relative refractory period, we consider that synaptic efficiencies are weighted by a function of the time elapsed since the last spike was fired. We denote this function κ(t), following the notation of Gerstner and Kistler (2002a). In our case this function is left unspecified; it is zero at t = 0 and increases to 1 with a characteristic time of around 2 ms. It can have bounded support or be defined on ℝ (see Fig. 3). We will see how taking this effect into account modifies the above instantaneous analysis, after introducing another essential effect occurring at the same time scale as the refractory period: synaptic delays.

Fig. 3 The refractory period at a spike emission, and the related κ function weighting the synaptic inputs
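Since the text leaves κ unspecified, here is one minimal instance consistent with the stated constraints (zero during the absolute refractory period, then rising to 1 with a characteristic time of about 2 ms); the functional form and parameter values are assumptions of this illustration.

```python
import numpy as np

# One possible kappa, an assumption consistent with the constraints in the
# text (kappa(0) = 0, increasing to 1 with a ~2 ms characteristic time);
# the absolute refractory period R suppresses all input.
def kappa(t, R=1.0, tau_kappa=2.0):
    """Weight applied to a synaptic input arriving t ms after the last spike."""
    t = np.asarray(t, dtype=float)
    return np.where(t < R, 0.0, 1.0 - np.exp(-(t - R) / tau_kappa))

# Effective synaptic weight of a spike arriving dt_last ms after the target
# last fired:  w_eff = w_ij * kappa(dt_last)
print(kappa([0.5, 1.5, 3.0, 10.0]))   # -> [0.  0.22  0.63  0.99] approximately
```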

1.2 Synaptic delays

Delays are known to be very important, for instance in shaping spatio-temporal dynamics of neuronal activity (Roxin et al. 2005) or in generating global oscillations (Brunel and Hakim 1999). Synaptic delays affect the interaction variable in a quite intricate fashion by adding a non-trivial memory-like phenomenon to the network. As a consequence, we are not able to update the countdown variable instantaneously at each spike time: the transmission delays imply that one needs to keep the memory of a certain number of past spike times. Fortunately, because of the absolute refractory period, we only have to take into account a finite number of spikes that can possibly affect a postsynaptic neuron after it elicits a spike (see Fig. 4). The maximal number of spikes concerned is given by \(M \overset{\rm def}{=} {\max_{i,j}} \lfloor \frac{\Delta_{ij}}{R_j} \rfloor\), where R j is the length of the absolute refractory period and \(\lfloor x \rfloor\) is the floor function, i.e. the largest integer smaller than or equal to x.

Fig. 4 There is a finite number of incoming spikes emitted before the postsynaptic cell fires

Instead of considering the last-firing-time variable, which contained the last firing time of each neuron of the network, we now consider the last M firing times. To this end, we define the matrix \(H_M \in \mathbb{R}^{N \times M}\) whose row i contains the M last firing times of neuron i. At the initial time, the M components of this row are set to the value min ij { − R i  − Δ ij }. If the neuron spikes at time t i , the components of the row are shifted: for all k ∈ {2, ..., M}, H i,k − 1 = H i,k and H i,M  = t i ; between two consecutive spikes of neuron i, the elements of row i remain constant.

Provided that the spike times are known, this matrix is a Markov chain.

Indeed, let us denote by X n the countdown process of the network. The updates of the Markov chain (X,H M ) occur at spike emission times and at spike arrivals at the postsynaptic cells. The next spike, if no delayed interaction occurs meanwhile, will be fired after a time \(\tau = \min_{i} X_i^n\), and the first arrival of a pending spike at a cell occurs after a time

$$ \nu = \min_{\substack{i,j \in \{1,\,\ldots\,,N\}\\ k\in \{1,\,\ldots,\,M\}}} \{ x= H_{i,k} + \Delta_{ij} - t; x>0\} $$
  • If this set is empty, the min is set to + ∞. If τ < ν, a spike is fired by the neuron i having the lowest countdown value. At this instant, the countdown variable of neuron i is instantaneously reset by drawing from the law of the reset variable as described in Section 3.1, and row i of the matrix H is updated as indicated above. Other variables, such as the absolute time, are also updated at this point (e.g. the absolute time is updated to t m − 1 + X i , ...).

  • If ν < τ, assume that the minimum is achieved for the value H i,k  + Δ ij for some i, j, k. This means that the k-th latest spike of neuron i reaches cell j, and affects it in the same fashion as if there were no delay and a spike had been emitted at this time. Therefore, the related interaction variable η ij of this connection is added, the countdown value of neuron j is updated together with the other Markov chain variables, and the time is advanced to t + ν. Note that the min can be achieved by several 3-tuples (i,j,k) at the same time, and that an excitatory interaction can make a postsynaptic neuron fire instantaneously upon reception of the spike. All these cases can be treated sequentially, by iterating the mechanism we just described (a code sketch of one step of this loop follows the list). Nevertheless, we are ensured that no avalanche can occur, because of the absolute refractory period and of the delays.
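A sketch of one step of this loop in Python (an illustration, not the authors' implementation): the sampling functions `draw_reset` and `draw_interaction` stand in for the hitting-time laws of Section 3 and Appendix A and are hypothetical placeholders, as is the exact bookkeeping.

```python
import numpy as np

def step(t, X, H, Delta, kappa, draw_reset, draw_interaction):
    """One update of the chain (X, H): X are the countdowns, row i of H
    stores the M last firing times of neuron i, Delta[i, j] the delays."""
    N, M = H.shape
    tau_fire = X.min()                        # next spike if no delayed arrival

    # Pending delayed arrivals: H[i, k] + Delta[i, j] - t, kept only if > 0
    arrivals = H[:, None, :] + Delta[:, :, None] - t     # shape (N, N, M)
    positive = np.where(arrivals > 0, arrivals, np.inf)  # empty set -> +inf
    nu = positive.min()

    if tau_fire < nu:                         # a neuron fires first
        i = int(np.argmin(X))
        t += tau_fire
        X -= tau_fire
        H[i, :-1] = H[i, 1:]                  # shift in the new firing time
        H[i, -1] = t
        X[i] = draw_reset(i)                  # reset variable (includes R_i)
    else:                                     # a delayed spike reaches its target
        i, j, k = np.unravel_index(int(np.argmin(positive)), positive.shape)
        t += nu
        X -= nu
        w_eff_factor = kappa(t - H[j, -1])    # refractory weighting, Appendix B
        X[j] += draw_interaction(i, j, w_eff_factor)
    return t, X, H
```

Simultaneous minima, and instantaneous firings triggered by excitatory arrivals, would be handled by iterating this step at constant t, as described above.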

1.3 The reset random variable

Let us consider the effect of these features from the viewpoint of the countdown process. The reset variable is affected only by the absolute refractory period, and in a very simple way. Indeed, we formally consider that neuron i is stuck at its reset value \(V_r^{(i)}\) during a fixed period of time R i after having fired. After this period, the membrane potential follows the evolution prescribed by the particular model chosen. Therefore, the time of the next spike starting from time t + R i has the same law as the reset variable in the case where neither the refractory period nor the synaptic delay is taken into account, i.e. it has the law of the first hitting time, denoted τ i , of the membrane potential process to the spike threshold with the time-shifted input \(I_e(t+t^*+R_i)\). The new reset variable of the corresponding countdown process simply has the law of Y i  = τ i  + R i .

The initialization random variable is clearly unchanged by taking into account synaptic delays, refractory periods and excitatory interactions.

However, the case of the interaction variable is a little bit more intricate, as shown below.

1.4 Interaction random variable

Let us consider the effect of a spike emitted by neuron i and reaching cell j at time t in an inhibitory network. This spike affects the membrane potential in a way that depends on the connectivity model chosen, but with a synaptic weight κ j (t − t j )w ij , where t j is the time of the last spike emitted by cell j.

Since the connection is assumed inhibitory, the interaction random variable is readily deduced from the analysis of Section 3 and of Appendix A by changing the synaptic weight from w ij to w ij κ j (t i  − t j ). Therefore, in addition to the complementary variables necessary to define the countdown value, we need to keep in memory the last spike time of each neuron in order to define the synaptic weights, and hence the interaction random variable. Note that this random variable is almost surely equal to zero, whatever the neuron model considered, if the spike arrives during the absolute refractory period (i.e. t ∈ [t j , t j  + R j ]), since in that case κ(t − t j ) = 0.

The spike transmission from the presynaptic cell to the postsynaptic one depends on the distance between the two cells, the speed of transmission of the signal along the axon, and the transmission time at the synapse; it has a typical duration of a few milliseconds. To model the synaptic delay, we consider that spikes emitted by a neuron affect the postsynaptic neurons after a delay Δ ij which can depend on both the presynaptic and the postsynaptic neuron.

If neuron i fires a spike at time t i , its effect on the postsynaptic neuron j depends on the synaptic delay Δ ij , the countdown value \(X^{(j)}(t_i)\) of neuron j at this time, and the time of the last spike emitted by j (a small dispatch function implementing the two cases follows the list):

  1. (i) If \(\Delta_{ij} < X^{(j)}(t_i)\), then the spike acts on the postsynaptic neuron at time \(t_i + \Delta_{ij}\) in the same way as discussed for the different models considered in Section 2, except that the effect can now be either excitatory or inhibitory, with a synaptic efficiency \(w_{ij}\,\kappa_j(t_i + \Delta_{ij} - t_j)\).

  2. (ii) If \(\Delta_{ij} > X^{(j)}(t_i)\), the postsynaptic neuron fires, at time \(t_i + X^{(j)}(t_i)\), before receiving the spike from the presynaptic cell i, and the spike acts on the postsynaptic membrane at time \(t_i + \Delta_{ij}\) with an efficiency \(w_{ij}\,\kappa_j\big(\Delta_{ij} - X^{(j)}(t_i)\big)\).
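The two cases translate directly into a small dispatch function (a hypothetical helper following the text's notation; κ j , the countdown X j and the last firing time t j are supplied by the caller):

```python
def spike_effect(t_i, Delta_ij, X_j, t_j, w_ij, kappa_j):
    """When and with what efficiency a spike of neuron i at time t_i,
    delayed by Delta_ij, acts on postsynaptic neuron j."""
    t_arrival = t_i + Delta_ij
    if Delta_ij < X_j:
        t_last = t_j                # case (i): j has not fired in the meantime
    else:
        t_last = t_i + X_j          # case (ii): j fires at t_i + X_j first
    return t_arrival, w_ij * kappa_j(t_arrival - t_last)
```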

Cite this article

Touboul, J.D., Faugeras, O.D. A Markovian event-based framework for stochastic spiking neural networks. J Comput Neurosci 31, 485–507 (2011). https://doi.org/10.1007/s10827-011-0327-y

