
SN Computer Science 1:35

Event-Scheduling Algorithms with Kalikow Decomposition for Simulating Potentially Infinite Neuronal Networks

  • Tien Cuong Phi
  • Alexandre Muzy
  • Patricia Reynaud-Bouret (corresponding author)
Original Research
Part of the following topical collections:
  1. Modelling methods in Computer Systems, Networks and Bioinformatics

Abstract

Event-scheduling algorithms can compute in continuous time the next occurrence of points (as events) of a counting process based on their current conditional intensity. In particular, event-scheduling algorithms can be adapted to simulate the activity of finite neuronal networks. These algorithms are based on Ogata’s thinning strategy (Ogata in IEEE Trans Inf Theory 27:23–31, 1981), which always needs to simulate the whole network to access the behavior of one particular neuron of the network. On the other hand, for discrete time models, theoretical algorithms based on Kalikow decomposition can pick influencing neurons at random and perform a perfect simulation (meaning without approximations) of the behavior of one given neuron embedded in an infinite network, at every time step. These algorithms are currently not computationally tractable in continuous time. To solve this problem, an event-scheduling algorithm with Kalikow decomposition is proposed here for the sequential simulation of point process neuronal models satisfying this decomposition. This new algorithm is applied to infinite neuronal networks, whose finite time simulation is a prerequisite to realistic brain modeling.

Keywords

Kalikow decomposition · Discrete event simulation · Point process · Infinite neuronal networks

Introduction

Point processes in time are stochastic objects that efficiently model event occurrences, with a huge variety of applications: times of death, earthquake occurrences, gene positions on a DNA strand, etc. [1, 20, 22].

Most of the time, point processes are multivariate [7], in the sense that either several processes are considered at the same time, or one process regroups together all the events of the different processes and marks them by their type. A typical example consists in considering either two processes, one counting the wedding events of a given person and one counting the birth dates of that person's children, or a single marked process which regroups all the dates of weddings and births and attaches one mark per point, here wedding or birth.

Consider now a network of neurons, each of them emitting action potentials (spikes). These spike trains can be modeled by a multivariate point process with a potentially infinite number of marks, each mark representing one given neuron. The main difference between classical models of multivariate point processes and the ones considered in particular for neuronal networks is the size of the network. A human brain consists of about \(10^{11}\) neurons, whereas a cockroach brain already contains about \(10^6\) neurons. Therefore, the simulation of the whole network is either impossible or a very difficult and computationally intensive task, for which particular tricks depending on the shape of the network or of the point processes have to be used [6, 14, 19].

Another point of view, which is the one considered here, is to simulate, not the whole network, but the events of one particular node or neuron, embedded in and interacting with the whole network. In this sense, one might consider an infinite network. This is the mathematical point of view considered in a series of papers [9, 10, 18] and based on Kalikow decomposition [12] coupled with perfect simulation theoretical algorithms [4, 8]. However, these works operate in discrete time and only provide a way to decide, at each time step, whether the neuron is spiking or not. They cannot operate in continuous time, i.e., they cannot directly predict the next event (or spike). To our knowledge, there exists only one attempt at using such a decomposition in continuous time [11], but the corresponding simulation algorithm is purely theoretical, in the sense that the corresponding conditional Kalikow decomposition should exist given the whole infinite realization of a multivariate Poisson process with an infinite number of marks, a quantity which is impossible to simulate in practice.

The aim of the present work is to present an algorithm which
  • can operate in continuous time in the sense that it can predict the occurrence of the next event. In this sense, it is an event-scheduling algorithm;

  • can simulate the behavior of one particular neuron embedded in a potentially infinite network without having to simulate the whole network;

  • is based on an unconditional Kalikow decomposition and, in this sense, can only work for point processes with this decomposition.

In “Event-Scheduling Simulation of Point Processes”, we specify the links between event-scheduling algorithms and the classical point process theory. In “Kalikow Decomposition”, we give the Kalikow decomposition. In “Backward Forward Algorithm”, we present the backward–forward perfect simulation algorithm and prove why it almost surely ends under certain conditions. In “Illustration”, we provide simulation results and a conclusion is given in “Conclusion”.

Event-Scheduling Simulation of Point Processes

On the one hand, simulation algorithms for multivariate point processes [17] are quite well known in the statistical community but, as far as we know, little known in the simulation (computer science) community. On the other hand, event-scheduling simulation first appeared in the mid-1960s [21] and was formalized as discrete event systems in the mid-1970s [23] to interpret very general simulation algorithms scheduling “next events”. A basic event-scheduling algorithm “jumps” from one event occurring at a time stamp \(t \in \mathbb {R}_0^+\) to a next event occurring at a next time stamp \(t' \in \mathbb {R}_0^+\), with \(t'\ge t\). In a discrete event system, the state of the system is considered as changing at times \(t,t'\) and conversely unchanging in between [24]. In [14], we have written the main equivalence between point process simulation algorithms and the discrete event simulation setup, which led us to a significant improvement in terms of computational time when huge but finite networks are in play. Usual event-scheduling simulation algorithms have been developed by considering the components (nodes) of a system independently. Our approach considers new algorithms for activity tracking simulation [5]. The event activity is tracked from active nodes to children (influencees).

Here, we just recall the main ingredients that are useful for the sequel.

To define point processes, we need a filtration or history \(({\mathcal {F}}_t)_{t\ge 0}\). Most of the time, and this will be the case here, this filtration (or history) is restricted to the internal history of the multivariate process \(({\mathcal {F}}^{int}_t)_{t\ge 0}\), which means that at time \(t-\), i.e., just before time t, we only have access to the events that have already occurred in the past strictly before time t, in the whole network. The conditional intensity, \(\phi _i(t|{\mathcal {F}}^{int}_{t-})\), of the point process representing neuron i, gives the instantaneous firing rate, that is the frequency of spikes, given the past contained in \({\mathcal {F}}^{int}_{t-}\). Let us just mention two very famous examples.

If \(\phi _i(t|{\mathcal {F}}^{int}_{t-})\) is a deterministic constant, say M, then the spikes of neuron i form a homogeneous Poisson process with intensity M. The occurrence of spikes is completely independent from what occurs elsewhere in the network and from the previous occurrences of spikes of neuron i.
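As a side illustration (not from the original article), a minimal Python sketch of this first example: a homogeneous Poisson process of intensity M can be simulated by drawing independent exponential waiting times of parameter M.

```python
import numpy as np

def homogeneous_poisson(rate, t0, t1, rng=None):
    """Spike times of a homogeneous Poisson process of intensity `rate` on [t0, t1)."""
    rng = np.random.default_rng() if rng is None else rng
    times = []
    t = t0
    while True:
        t += rng.exponential(1.0 / rate)   # exponential waiting time of parameter `rate`
        if t >= t1:
            return np.array(times)
        times.append(t)

# e.g., spikes of one neuron with constant intensity M = 2 on [0, 100)
spikes = homogeneous_poisson(2.0, 0.0, 100.0)
```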

If we denote by \(\mathbf{I}\) the set of neurons, we can also envision the following form for the conditional intensity:
$$\begin{aligned} \phi _i(t|{\mathcal {F}}^{int}_{t-})= \nu _i + \sum _{j\in \mathbf{I}} w_{ij} (\mathbf{Nb}^j_{[t-A,t)}\wedge M). \end{aligned}$$
(1)
This is a particular case of generalized Hawkes processes [3]. More precisely, \(\nu _i\) is the spontaneous rate (assumed to be less than the deterministic upper bound \(M>1\)) of neuron i. Then, every neuron in the network can excite neuron i: more precisely, one counts the number of spikes that have been produced by neuron j just before t, in a window of length A; this is \(\mathbf{Nb}^j_{[t-A,t)}\). We clip it by the upper bound M and modulate its contribution to the intensity by the positive synaptic weight between neuron i and neuron j, \(w_{ij}\). For instance, if there is only one spike in the whole network just before time t, and if this happens on neuron j, then the intensity for neuron i becomes \(\nu _i+w_{ij}\). The sum over all neurons j mimics the synaptic integration that takes place at neuron i. As a renormalization constraint, we assume that \(\sup _{i\in \mathbf{I}}\sum _{j\in \mathbf{I}} w_{ij} <1\). This ensures in particular that such a process always has a conditional intensity bounded by 2M.
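To make (1) concrete, here is a small sketch (ours, not from the paper) that evaluates this conditional intensity from the spike history; the containers `nu`, `w` and `history` are hypothetical choices of data structures for the rates \(\nu _i\), the weights \(w_{ij}\) and the past spike trains.

```python
import numpy as np

def hawkes_intensity(i, t, history, nu, w, A, M):
    """Evaluate (1): nu_i + sum_j w_ij * min(Nb^j_[t-A, t), M).
    `history[j]` is a sorted array of past spike times of neuron j;
    `w[i]` maps j to the synaptic weight w_ij (only non-zero weights stored)."""
    total = nu[i]
    for j, w_ij in w[i].items():
        spikes = history.get(j, np.array([]))
        # number of spikes of neuron j in the window [t - A, t)
        nb = np.searchsorted(spikes, t) - np.searchsorted(spikes, t - A)
        total += w_ij * min(nb, M)
    return total
```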
Hence, starting for instance at time t, and given the whole past, one can compute the next event in the network by computing, for each node of the network, the next event in the absence of any other spike occurrence. To do so, remark that in the absence of any other spike occurrence, the quantity \(\phi _i(s|{\mathcal {F}}^{int}_{s-})\) for \(s>t\) becomes, in the previous example,
$$\begin{aligned} \phi ^{abs}_i(s,t)=\nu _i + \sum _{j\in \mathbf{I}} w_{ij} (\mathbf{Nb}^j_{[s-A,t)}\wedge M), \end{aligned}$$
meaning that we do not count the spikes that may occur after t but before s. This can be extended to more general point processes. The main simulation loop is presented in Algorithm 1.

Note that the quantity \(\phi ^{abs}_i(s,t)\) can also be seen as the hazard rate of the next potential point \(T_i^{(1)}\) after t. It is a discrete event approach, with the state corresponding to the function \(\phi ^{abs}_i(.,t)\).

Ogata [17], inspired by Lewis’ algorithm [13], added a thinning (also called rejection) step on top of this procedure, because the integral \(\int _t^{T_i^{(1)}} \phi ^{abs}_i(s,t) \mathrm{d}s\) can be very difficult to compute. To do so (and simplifying a bit), assume that \(\phi _i(t|{\mathcal {F}}^{int}_{t-})\) is upper bounded by a deterministic constant M. This means that the point process always has fewer points than a homogeneous Poisson process with intensity M. Therefore, Steps 5–6 of Algorithm 1 can be replaced by the generation of an exponential variable \(E'_i\) of parameter M and by deciding whether to accept or reject the point with probability \(\phi ^{abs}_i(t+E'_i,t)/M\). There are many variants of this procedure: Ogata’s original one actually uses the fact that the minimum of exponential variables is still an exponential variable. Therefore, one can propose a next point for the whole system, accept it for the whole system, and then decide on which neuron of the network the event actually appears. More details on the multivariate Ogata’s algorithm can be found in [14].
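A minimal sketch of this thinning step, assuming only that \(\phi ^{abs}_i(\cdot ,t)\) is available as a function bounded by M:

```python
import numpy as np

def next_point_by_thinning(phi_abs, t, M, rng=None):
    """Ogata-style thinning: propose candidate points at rate M and accept a
    candidate s with probability phi_abs(s, t) / M (valid when phi_abs <= M)."""
    rng = np.random.default_rng() if rng is None else rng
    s = t
    while True:
        s += rng.exponential(1.0 / M)            # candidate from a rate-M Poisson process
        if rng.uniform() < phi_abs(s, t) / M:    # rejection (thinning) step
            return s
```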

As we see here, Ogata’s algorithm is very general but clearly needs to simulate the whole system to simulate only one neuron. Moreover, starting at time \(t_0\), it does not go backward and, therefore, cannot simulate a Hawkes process in stationary regime. There have been specific algorithms based on cluster representation that aim at perfectly simulating particular univariate Hawkes processes [16]. The algorithm that we propose here will also overcome this flaw.

Kalikow Decomposition

Kalikow decomposition relies on the concept of neighborhood, denoted by \(\upsilon\), which is picked at random and which gives the portion of time and the subset of neurons that we need to look at in order to move forward. Typically, for a positive constant A, such a \(\upsilon\) can be
  • \(\{(i, [-A,0))\}\), meaning we are interested only by the spikes of neuron i in the window \([-A,0)\);

  • \(\{(i,[-2A,0)),(j,[-2A,-A))\}\), that is, we need the spikes of neuron i in the window \([-2A,0)\) and the spikes of neuron j in the window \([-2A,-A)\);

  • the empty set \(\emptyset\), meaning that we do not need to look at anything to proceed.

We also need to define \(l(\upsilon)\), the total time length of the neighborhood \(\upsilon\), whatever the neurons are. For instance, in the first case \(l(\upsilon) = A\), in the second \(l(\upsilon) = 3 A\) and in the third \(l(\upsilon)= 0\).
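For illustration only (this representation is our own choice, not the paper’s), a neighborhood can be stored as a set of (neuron, window) pairs, which makes \(l(\upsilon)\) a one-line computation:

```python
# A neighborhood is a set of (neuron, (start, end)) pairs, with windows relative to time 0.
def total_length(v):
    """l(v): total time length of the neighborhood, all neurons together."""
    return sum(end - start for _, (start, end) in v)

A = 1.0
v1 = {("i", (-A, 0.0))}
v2 = {("i", (-2 * A, 0.0)), ("j", (-2 * A, -A))}
assert total_length(v1) == A
assert total_length(v2) == 3 * A
assert total_length(set()) == 0
```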

We are only interested in stationary processes, for which the conditional intensity, \(\phi _{i}(t \mid {\mathcal {F}}^{int}_{t^{-}})\), only depends on the intrinsic distance between the previous points and the time t, and not on the precise value of t per se. In this sense, the rule to compute the intensity may be defined at time 0 only and then shifted by t to obtain the conditional intensity at time t. In the same way, the timeline of a neighborhood \(\upsilon\) is defined as a subset of \(\mathbb {R}_-^*\), so that the information contained in the neighborhood is included in \({\mathcal {F}}^{int}_{0-}\), and \(\upsilon\) can be shifted (meaning its timeline is shifted) to position t if need be. We assume that the set of neurons \(\mathbf{I}\) is countable and that we have a countable set \({\mathcal {V}}\) of possibilities for the neighborhoods.

Then, we say that the process admits a Kalikow decomposition with bound M and neighborhood family \({\mathcal {V}}\), if for any neuron \(i \in {\mathbf {I}}\), for all \(v \in {\mathcal {V}}\) there exists a non-negative M-bounded quantity \(\phi _i^{v}\), which is \({\mathcal {F}}^{int}_{0-}\) measurable and whose value only depends on the points appearing in the neighborhood \(\upsilon\), and a probability density function \(\lambda _{i}(.)\) such that
$$\begin{aligned} \phi _{i}(0 \mid {\mathcal {F}}^{int}_{0^{-}}) = \lambda _i(\emptyset ) \phi _i^{\emptyset } + \sum \limits _{v \in {\mathcal {V}}, v\ne \emptyset }\lambda _i(v) \times \phi _i^{v} \end{aligned}$$
(2)
with \(\lambda _i({\emptyset }) + \sum \nolimits _{v \in {\mathcal {V}}, v\ne \emptyset } \lambda _i(v) =1.\)

Note that because of the stationarity assumptions, the rule to compute the \(\phi _i^{v}\)’s can be shifted at time t, which leads to a predictable function that we call \(\phi _i^{v_t}(t)\) which only depends on what is inside \(v_t\), which is the neighborhood \(\upsilon\) shifted by t. Note also that \(\phi _i^{\emptyset }\), because it depends on what happens in an empty neighborhood, is a pure constant.
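The decomposition can be read as a mixture: with probability \(\lambda _i(v)\), the local rule \(\phi _i^{v}\) is used. A minimal sketch of the corresponding random choice, under the assumption that the (possibly truncated) family of neighborhoods and their weights are stored in a dictionary:

```python
import numpy as np

def draw_neighborhood(lambda_i, rng=None):
    """Pick a neighborhood v with probability lambda_i[v]; the empty neighborhood
    is encoded as frozenset(). The values of `lambda_i` are assumed to sum to 1."""
    rng = np.random.default_rng() if rng is None else rng
    neighborhoods = list(lambda_i.keys())
    probs = np.array([lambda_i[v] for v in neighborhoods], dtype=float)
    idx = rng.choice(len(neighborhoods), p=probs)
    return neighborhoods[idx]
```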

The interpretation of (2) is tricky and is not as straightforward as in the discrete case (see [18]). The best way to understand it is to give the theoretical algorithm for simulating the next event on neuron i after time t (cf. Algorithm 2).

This algorithm is close to Algorithm 1 but adds a neighborhood choice (Step 5) together with a thinning step (Steps 6–9).

In Appendix A, we prove that this algorithm indeed provides a point process with an intensity given by (2) shifted at time t.

The previous algorithm cannot be put into practice because the computation of \(\phi _i^{V_{T}}\) depends on the points in \(V_T\) that are not known at this stage. That is why the efficient algorithm that we propose in the next section goes backward in time before moving forward.

Backward Forward Algorithm

Let us now describe the complete Backward Forward algorithm (cf. Algorithm 3). Note that, to do so, the set of points \(\mathbf{P}\) is not reduced, as in the two previous algorithms, to the set of points that we want to simulate, but contains all the points that need to be simulated to perform the task.

In contrast to Algorithm 2, the main point is that, in the backward part, we pick at random all the points that may influence the thinning step. The fact that this loop ends comes from the following proposition.

Proposition 1

If
$$\begin{aligned} \sup _{i\in \mathbf{I}}\sum \limits _{v \in {\mathcal {V}}} \lambda _i(v) l(v) M < 1. \end{aligned}$$
(3)
then the backward part of Algorithm 3 ends almost surely in finite time.

The proof is postponed to Appendix B. It is based on branching process arguments. Basically, if in Steps 8–16 we produce on average less than one point, either because we picked the empty set for \(V_T\) or because the simulation of the Poisson process ended up with a small number of points, possibly none, then the loop ends almost surely because the corresponding branching process goes extinct.
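As an illustration (assuming only finitely many neighborhoods carry positive weight, so that the supremum and the sums can be evaluated exactly), condition (3) can be checked as follows:

```python
def kalikow_condition(lambdas, length, M):
    """Check (3): sup_i sum_v lambda_i(v) * l(v) * M < 1.
    `lambdas[i]` maps each neighborhood v to lambda_i(v);
    `length[v]` is l(v) (0 for the empty neighborhood)."""
    worst = max(sum(p * length[v] for v, p in lam.items()) for lam in lambdas.values())
    return worst * M < 1.0
```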

In the backward part, one of the most delicate points consists in making sure that we add new points only if we have not visited this portion of time/neurons before (see Steps 11–13). If we did not make this verification, the past could differ depending on the neuron we are looking at, and the procedure would not simulate the process we want.

In the forward part, because the backward part has just ended, we are first sure to have assessed all the \(V_T\)’s. Since \(\phi _{j}^{V_t}(t)\) is \({\mathcal {F}}^{int}_{t-}\) measurable for all t, \(\phi _{j_T}^{V_T}(T)\) only depends on the points of \(\mathbf{P}\) with mark \(X_T=1\) inside \(V_T\). The problem in Algorithm 2, phrased differently, is that we do not know the marks \(X_T\) of the previous points when we have to compute \(\phi _{j_T}^{V_T}(T)\). But in the forward part of Algorithm 3, we are sure that the most backward point for which the thinning (\(X_T=\) n.a.) has not yet taken place satisfies
  • either \(V_T=\emptyset\)

  • or \(V_T\ne \emptyset\), but either there are no simulated points in the corresponding \(V_T\) or the points therein come from previous rounds of the loop (Step 5), so that their marks \(X_T\) have already been assigned.

Therefore, with the Backward Forward algorithm, and in contrast to Algorithm 2, we process the points in an order for which we are sure that we know the previously needed marks.
Figure 1 describes an example to go step by step through Algorithm 3. The backward steps determine all the points that may influence the acceptance/rejection of the point \(T_{next}\). Notice that whereas usual activity tracking algorithms for point processes [14] automatically detect the active children (influencees), activity tracking in the backward steps detects the parents (influencers). The forward steps finally select the points.
Fig. 1

Main flow example for Algorithm 3, with backward steps in green (cf. Algorithm 3, Steps 7–17) and forward steps in purple (cf. Algorithm 3, Steps 18–25). Following the arrow numbers: (1) the next point \(T_{next} = t +E\) (cf. Algorithm 3, Step 4) is scheduled; (2) the neighborhood \(V_{T_{next}}\) is selected in the first backward step, and a first generation of three points (a, b on neuron k and c on neuron \(\ell\)) is drawn (cf. Algorithm 3, Step 9), thanks to a Poisson process (cf. Algorithm 3, Steps 11–12), and appended to \(\mathbf{P}\) (cf. Algorithm 3, Step 13); (3) at the second generation, a non-empty neighborhood is found, i.e., \(V_b \ne \emptyset\) (cf. Algorithm 3, Steps 9–11), but the Poisson process simulation does not give any point in it (cf. Algorithm 3, Step 12); (4) at the second generation, the neighborhood \(V_a\) is picked; it is not empty and overlaps the neighborhood of the first generation (cf. Algorithm 3, Steps 9–11): therefore, there is no new simulation in the overlap (c is kept and belongs to \(V_b\) as well as \(V_a\)), but a new Poisson process simulation outside the overlap leads to a new point d (cf. Algorithm 3, Step 12); (5) at the second generation, for point c, the empty neighborhood is picked, i.e., \(V_c = \emptyset\) (cf. Algorithm 3, Step 9), and therefore no Poisson process is simulated; (6) at the third generation, similarly, no point and no interval are generated, i.e., \(V_d = \emptyset\) (cf. Algorithm 3, Step 9). This is the end of the backward steps and the beginning of the forward ones; (7) the point d is not selected, acceptance/selection taking place with probability \(\frac{\phi _{\ell }^{\emptyset }}{M}\) (cf. Algorithm 3, Step 20); (8) the point c is accepted, here again with probability \(\frac{\phi _{\ell }^{\emptyset }}{M}\) (cf. Algorithm 3, Step 20); (9) the point b is not selected, acceptance taking place, here, with probability \(\frac{\phi _{k}^{V_b}(b)}{M}\) (cf. Algorithm 3, Step 20); (10) the point a is selected, acceptance taking place, here, with probability \(\frac{\phi _{k}^{V_a}(a)}{M}\) (cf. Algorithm 3, Step 20); (11) the neighborhood of neuron i contains two points, one on neuron k and one on neuron \(\ell\), and \(T_{next}\) is selected with probability \(\frac{\phi _{i}^{V_{T_{next}}}(T_{next})}{M}\)

Illustration

To illustrate the algorithm in practice, we have simulated a Hawkes process as given in (1). Indeed, such a process has a Kalikow decomposition (2) with bound M and neighborhood family \({\mathcal {V}}\) consisting of the neighborhoods \(\upsilon\) of the form \(v=\{(j,[-A,0))\}\) for some neuron j in \(\mathbf{I}\). To do so, we need the following choices:
$$\begin{aligned} \lambda _i(\emptyset ) = 1 - \sum \limits _{j \in \mathbf{I}} w_{ij} \quad \text{ and } \quad \phi _i^{\emptyset } = \dfrac{\nu _i}{\lambda _i(\emptyset )} \end{aligned}$$
and for \(\upsilon\) of the form \(v=\{(j,[-A,0))\}\) for some neuron j in \(\mathbf{I}\),
$$\begin{aligned} \lambda _i(v)= w_{ij} \quad \text{ and } \quad \phi _i^v=\mathbf{Nb}^j_{[-A,0)}\wedge M. \end{aligned}$$
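A small numerical sanity check of these choices (with made-up weights and spike counts, used only for illustration) verifies that the mixture (2) reproduces the original intensity (1):

```python
# Hypothetical weights and spike counts, only for checking the identity.
w_i = {"j1": 0.2, "j2": 0.3}          # synaptic weights w_ij, with sum < 1
nu_i, M = 0.1, 5.0
nb = {"j1": 2, "j2": 7}               # Nb^j_{[-A,0)} in some toy past

lambda_empty = 1.0 - sum(w_i.values())
phi_empty = nu_i / lambda_empty

mixture = lambda_empty * phi_empty + sum(w_i[j] * min(nb[j], M) for j in w_i)  # right-hand side of (2)
direct = nu_i + sum(w_i[j] * min(nb[j], M) for j in w_i)                       # intensity (1) at time 0
assert abs(mixture - direct) < 1e-12
```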
We have taken \(\mathbf{I}=\mathbb {Z}^2\) and the \(w_{ij}\) proportional to a discretized centred symmetric bivariate Gaussian distribution with standard deviation \(\sigma\). More precisely, once \(\lambda _i(\emptyset )=\lambda _\emptyset\) is fixed, picking according to \(\lambda _i\) consists in (a code sketch follows the list below)
  • choosing whether V is empty or not with probability \(\lambda _\emptyset\)

  • if \(V\ne \emptyset\), choosing \(V=\{(j,[-A,0))\}\) with \(j-i=\text{ round }(W)\), where W follows a bivariate \({\mathcal {N}}(0,\sigma ^2)\).
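A minimal sketch of this choice (our notation; `lambda_empty`, `sigma` and `A` are plain parameters):

```python
import numpy as np

def draw_neighborhood_gaussian(i, lambda_empty, sigma, A, rng=None):
    """Pick V for neuron i in Z^2: empty with probability lambda_empty, otherwise
    {(j, [-A, 0))} with j = i + round(W), where W ~ N(0, sigma^2 I_2)."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.uniform() < lambda_empty:
        return None                                       # empty neighborhood
    offset = np.rint(rng.normal(0.0, sigma, size=2)).astype(int)
    j = (i[0] + int(offset[0]), i[1] + int(offset[1]))
    return (j, (-A, 0.0))
```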

In all that follows, the simulation is made for neuron \(i=(0,0)\), with \(t_0=0\) and \(t_1=100\) (see Algorithm 3). The parameters \(M,\lambda _\emptyset\) and \(\sigma\) vary. The parameters \(\nu _i=\nu\) and A are fixed accordingly by
$$\begin{aligned} \nu =0.9M\lambda _\emptyset \quad \text{ and } \quad A=0.9 M^{-1}(1-\lambda _\emptyset )^{-1}, \end{aligned}$$
to ensure that \(\phi _i^{\emptyset }<M\) and (3), which amounts here to \((1-\lambda _\emptyset )AM<1\).
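For instance, with the values used below, these constraints are easy to verify numerically (a sketch, with the parameter values of Fig. 2):

```python
M, lambda_empty = 2.0, 0.25
nu = 0.9 * M * lambda_empty
A = 0.9 / (M * (1.0 - lambda_empty))

assert nu / lambda_empty < M                  # phi_i^empty < M
assert (1.0 - lambda_empty) * A * M < 1.0     # condition (3) for this decomposition
```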

In Fig. 2a, with \(M=2\), \(\sigma = 1\) and \(\lambda _{\emptyset }\) small, we see the overall spread of the algorithm around the neuron to simulate (here (0, 0)). Because we chose a Gaussian variable with small variance for the \(\lambda _i\)’s, the spread is weak and the neurons very close to the neuron to simulate are requested many times at Steps 9–11 of the algorithm. This is also where the algorithm spends most of its time simulating Poisson processes. Note also that, roughly, to simulate 80 points we need to simulate 10 times more points globally in the infinite network. Remark also in Fig. 2b the avalanche phenomenon, typical of Hawkes processes: for instance, the small cluster of black points on the neuron with label 0 (i.e., (0, 0)) around time 22 is likely due to excitation coming from the spikes generated (and accepted) on the neuron labeled 8, together with self-excitation. The beauty of the algorithm is that we do not need the whole timeline of neuron 8 to trigger neuron 0, but only the small blue pieces: we just request them at random, depending on the Kalikow decomposition.

In Fig. 3, we first observe that when the parameter \(\sigma\), which governs the range of the \(\lambda _i\)’s, increases, the global spread of the algorithm increases. In particular, comparing the top left of Fig. 3 to Fig. 2, where the only parameter that changes is \(\sigma\), we see that the algorithm goes much further away and simulates many more points for a roughly equal number of points to generate (and accept) on neuron (0,0). Moreover, we can observe that
  • From left to right, by increasing \(\lambda _\emptyset\), it is more likely to pick an empty neighborhood and, as a consequence, the spread of the algorithm is smaller. Since \(\nu =0.9M\lambda _\emptyset\) increases as well, this also increases the total number of points produced on neuron (0,0).

  • From top to bottom, by increasing M, more points are simulated in the Poisson processes (Step 12 of Algorithm 3) and the interaction is also stronger (we do not truncate the number of points in \(\phi ^v\) as much). Therefore, the spread becomes larger, and more uniform too, because there are globally more points making requests. Moreover, with a basic rate M that is 10 times bigger, we have to simulate roughly 10 times more points.

Fig. 2

Simulation for \(M=2\), \(\sigma =1\), \(\lambda _{\emptyset } = 0.25\). For each neuron in \(\mathbb {Z}^2\) that has been requested in Steps 9–11, except the neuron of interest (0, 0), we counted the total number of requests, that is, the number of times a \(V_T\) pointed towards this neuron (Steps 9 and 11), and the total time spent at this precise neuron simulating a homogeneous Poisson process (Step 12). Note that since the simulation is on [0, 100], the time spent at position (0, 0) is at least 100. Panel (a) shows the summary for one simulation; below the plot are given the number of points accepted at neuron (0, 0) and the total number of points that have been simulated. Also annotated on (a), with labels between 0 and 8, are the 9 neurons for which the same simulation, restricted to [20, 40], is represented in more detail on (b). More precisely, on (b), time is on the abscissa and the neuron labels are on the ordinate. A plain dot represents a point accepted by the thinning step (Step 20 of Algorithm 3), and an empty circle a rejected point. The blue pieces of line represent the non-empty neighborhoods that are in \(\mathbf{V}\) (color figure online)

Fig. 3

Simulation for 4 other sets of parameters, all of them with \(\sigma =3\). Summaries as explained in Fig. 2. On top, \(M=2\); on bottom, \(M=20\). On the left part, \(\lambda _{\emptyset } =0.25\), on the right part, \(\lambda _{\emptyset }=0.5\)

Conclusion

We derived a new algorithm for simulating the behavior of one neuron embedded in an infinite network. This is possible thanks to the Kalikow decomposition, which allows picking the influencing neurons at random. As seen in the last section, it is computationally tractable in practice and makes it possible to simulate open systems, in the physical sense. A question that remains open for future work is whether we can prove that such a decomposition exists for a wide variety of processes, as has been shown in discrete time (see [9, 10, 18]).


Acknowledgements

This work was supported by the French government, through the UCA\(^{Jedi}\) Investissements d’Avenir managed by the National Research Agency (ANR-15-IDEX-01) and by the interdisciplinary Institute for Modeling in Neuroscience and Cognition (NeuroMod) of the Université Côte d’Azur. The authors would like to thank Professor E. Löcherbach from Paris 1 for great discussions about the Kalikow decomposition and the Backward Forward algorithm.

References

  1. Andersen PK, Borgan O, Gill R, Keiding N. Statistical models based on counting processes. Berlin: Springer; 1996.
  2. Brémaud P. Point processes and queues: martingale dynamics. Berlin: Springer; 1981.
  3. Brémaud P, Massoulié L. Stability of nonlinear Hawkes processes. Ann Probab. 1996;24:1563–88.
  4. Comets F, Fernandez R, Ferrari PA. Processes with long memory: regenerative construction and perfect simulation. Ann Appl Probab. 2002;3:921–43.
  5. Muzy A. Exploiting activity for the modeling and simulation of dynamics and learning processes in hierarchical (neurocognitive) systems. Mag Comput Sci Eng. 2019;21:83–93.
  6. Dassios A, Zhao H. Exact simulation of Hawkes process with exponentially decaying intensity. Electron Commun Probab. 2013;18(62):1–13.
  7. Didelez V. Graphical models of marked point processes based on local independence. J R Stat Soc B. 2008;70(1):245–64.
  8. Fernandéz R, Ferrari P, Galves A. Coupling, renewal and perfect simulation of chains of infinite order. Notes of a course in the V Brazilian School of Probability, Ubatuba, July 30–August 4, 2001. http://www.staff.science.uu.nl/~ferna107/.
  9. Galves A, Löcherbach E. Infinite systems of interacting chains with memory of variable length—a stochastic model for biological neural nets. J Stat Phys. 2013;151(5):896–921.
  10. Galves A, Löcherbach E. Modeling networks of spiking neurons as interacting processes with memory of variable length. Journal de la Société Française de Statistiques. 2016;157:17–32.
  11. Hodara P, Löcherbach E. Hawkes processes with variable length memory and an infinite number of components. Adv Appl Probab. 2017;49:84–107.
  12. Kalikow S. Random Markov processes and uniform martingales. Isr J Math. 1990;71(1):33–54.
  13. Lewis PAW, Shedler GS. Simulation of nonhomogeneous Poisson processes. Monterey, California: Naval Postgraduate School; 1978.
  14. Mascart C, Muzy A, Reynaud-Bouret P. Centralized and distributed simulations of point processes using local independence graphs: a computational complexity analysis. Under finalization for arXiv deposit; 2019.
  15. Méléard S. Aléatoire: introduction à la théorie et au calcul des probabilités. Editions de l’école polytechnique; 2010. p. 185–194.
  16. Møller J, Rasmussen JG. Perfect simulation of Hawkes processes. Adv Appl Probab. 2005;37:629–46.
  17. Ogata Y. On Lewis’ simulation method for point processes. IEEE Trans Inf Theory. 1981;27:23–31.
  18. Ost G, Reynaud-Bouret P. Sparse space-time models: concentration inequalities and Lasso. Under revision. https://arxiv.org/abs/1807.07615.
  19. Peters EAJF, de With G. Rejection-free Monte Carlo sampling for general potentials. Phys Rev E. 2012;85:026703.
  20. Reynaud-Bouret P, Schbath S. Adaptive estimation for Hawkes processes; application to genome analysis. Ann Stat. 2010;38(5):2781–822.
  21. Tocher KD. PLUS/GPS III Specification. United Steel Companies Ltd, Department of Operational Research; 1967.
  22. Vere-Jones D, Ozaki T. Some examples of statistical estimation applied to earthquake data. Ann Inst Stat Math. 1982;34(B):189–207.
  23. Zeigler BP. Theory of modelling and simulation. New York: Wiley-Interscience; 1976.
  24. Zeigler BP, Muzy A, Kofman E. Theory of modeling and simulation: discrete event & iterative system computational foundations. New York: Academic Press; 2018.

Copyright information

© Springer Nature Singapore Pte Ltd 2019

Authors and Affiliations

  • Tien Cuong Phi (1)
  • Alexandre Muzy (2)
  • Patricia Reynaud-Bouret (1, corresponding author)
  1. Université Côte d’Azur, CNRS, LJAD, Nice, France
  2. Université Côte d’Azur, CNRS, I3S, Nice, France
