
Construction and Simulation of Generalized Multivariate Hawkes Processes


Abstract

The main contribution of this paper is a mathematical construction of a generalized multivariate Hawkes process (GMHP), together with a simulation algorithm based on this construction. In particular, we justify the construction by demonstrating that the constructed process is indeed a GMHP with the desired properties. Towards this end we provide some new results regarding conditional Poisson random measures and doubly stochastic marked Poisson processes.


Notes

  1. See Appendix A.2 in Last and Brandt (1995) for the definition of the kernel.

  2. Note that here d is the number of components in \(X_n\), and n is the index of the \(n\)-th element in the sequence \(( X_n)_{n \ge 1}\).

  3. Conditional independence between random measures is understood as conditional independence between random elements taking values in the space of probability measures. We refer to Kallenberg (2002), Chapter 12, for the definition of random elements taking values in the space of \(\sigma\)-finite measures on a measurable space, and to Chapter 6 therein for the definition of conditional independence between random elements.

  4. We refer to the Appendix for some of the concepts used below, and for their properties.

  5. \(T_\infty =\lim _{n\rightarrow \infty } T^{\infty }_n\), where \(T^{\infty }_n:= \inf {\{ t: H^{\infty }((0,t] \times E^\Delta ) \ge n\}}\).

  6. We recall our notational convention that \(\eta =\eta _T:= \mathbb {1}_{[0,T]}\eta\) and \(f =f_T:= \mathbb {1}_{[0,T]}f\).

  7. In the ensuing two formulae \(i=\sqrt{-1}\).

References

  • Bacry E, Delattre S, Hoffmann M, Muzy JF (2013) Modelling microstructure noise with mutually exciting point processes. Quant. Finance. 13(1):65–77


  • Bessy-Roland Y, Boumezoued A, Hillairet C (2020) Multivariate Hawkes process for cyber insurance. Annals of Actuarial Science. 15(1):10–26


  • Bielecki TR, Jakubowski J, Niewęgłowski M (2020) Structured dependence between stochastic processes. Cambridge University Press

  • Bielecki TR, Jakubowski J, Niewęgłowski M (2021) Multivariate Hawkes processes with simultaneous excitations. Preprint

  • Brémaud P, Massoulié L (1996) Stability of nonlinear Hawkes processes. Ann Probab 24(3):1563–1588


  • Brix A, Kendall WS (2002) Simulation of cluster point processes without edge effects. Adv. in Appl. Probab. 34(2):267–280


  • Dassios A, Zhao H (2013) Exact simulation of Hawkes process with exponentially decaying intensity. Electron Commun Probab 18(62):13

  • Hawkes AG (1971a) Point Spectra of Some Mutually Exciting Point Processes. J Royal Stat Soc. Series B (Methodological), 33(3):438–443

  • Hawkes AG (1971b) Spectra of Some Self-Exciting and Mutually Exciting Point Processes. Biometrika. 58(1):83–90

  • Hawkes AG (2017) Hawkes processes and their applications to finance: a review. Quantitative Finance 18(2):193–198


  • Jacod J (1974) Multivariate point processes: predictable projection, Radon-Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 31:235–253, 1974/75

  • Jeanblanc M, Yor M, Chesney M (2009) Mathematical methods for financial markets. Springer Finance. Springer-Verlag, London Ltd, London


  • Kallenberg O (2002) Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition

  • Kelly JD, Park J, Harrigan RJ et al (2019) Real-time predictions of the 2018–2019 Ebola virus disease outbreak in the Democratic Republic of the Congo using Hawkes point process models. Epidemics 28:100354

  • Last G, Brandt A (1995) Marked point processes on the real line: The dynamic approach. Probability and its Applications (New York). Springer-Verlag, New York

  • Lim KW, Lee Y, Hanlen L, Zhao H (2016) Simulation and calibration of a fully Bayesian marked multidimensional Hawkes process with dissimilar decays. In Robert J. Durrant and Kee-Eung Kim, editors, Proceedings of The 8th Asian Conference on Machine Learning, volume 63 of Proceedings of Machine Learning Research, pages 238–253, The University of Waikato, Hamilton, New Zealand, 16–18. PMLR

  • Liniger TJ (2009) Multivariate Hawkes processes. PhD thesis, ETH Zurich

  • Møller J, Rasmussen JG (2005) Perfect simulation of Hawkes processes. Adv Appl Probab 37(3):629–646


  • Møller J, Rasmussen JG (2006) Approximate Simulation of Hawkes Processes. Methodology and Computing in Applied Probability 8(1):53–64


  • Ogata Y (1981) On Lewis’ simulation method for point processes. IEEE Trans Inf Theory 27(1):23–31


  • Ogata Y (1999) Seismicity Analysis through Point-process Modeling: A Review. Pure Appl Geophys 155:471–507


  • Sato KI (2013) Lévy processes and infinitely divisible distributions, volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge. Translated from the 1990 Japanese original, Revised edition of the 1999 English translation

  • Swartz C (2009) Multiplier convergent series. World Scientific


Appendix

In this Appendix we provide some additional concepts and results that are needed in Section 3.

1.1 Conditional Poisson Random Measures and Doubly Stochastic Marked Poisson Processes

We begin with a discussion of conditional Poisson random measures, which are relevant for this paper. Then we relate these measures to doubly stochastic marked Poisson processes.

1.1.1 Conditional Poisson Random Measure: Definition and Specific Construction

Let \((\Omega ,\mathcal {F},\mathbb {P})\) be a probability space and \((\mathcal {X},\varvec {\mathcal{X}})\) be a Borel space. For a given \(\sigma\)-field \(\mathcal {G}\subseteq \mathcal {F}\), we define a \(\mathcal {G}\)-conditionally Poisson random measure on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) as follows:

Definition 5

Let \(\nu\) be a \(\sigma\)-finite random measure on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\). A random measure N on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) is a \(\mathcal {G}\)-conditionally Poisson random measure with intensity measure \(\nu\) if the following two properties are satisfied:

  1. For every \(C \in \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}}\) such that \(\nu (C) < \infty\), we have

     $$\begin{aligned} \mathbb {P}( N(C) = k| \mathcal {G}) = e^{-\nu (C)} \frac{(\nu (C))^k}{k!}. \end{aligned}$$

  2. For arbitrary \(n=1,2,\ldots ,\) and arbitrary disjoint sets \(C_1, \ldots , C_n\) from \(\mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}}\) such that \(\nu (C_m) < \infty\), \(m=1,\ldots ,n\), the random variables \(N(C_1), \ldots , N(C_n)\) are \(\mathcal {G}\)-conditionally independent.

Clearly, \(\nu\) is \(\mathcal {G}\)-measurable. Note that if \(\mathcal {G}\) is the trivial \(\sigma\)-field (or if N is independent of \(\mathcal {G}\)), then N is a Poisson random measure (see Section 4.19 in Sato (2013)), sometimes referred to as a Poisson process on \(\mathbb {R}_+ \times \mathcal {X}\) (see e.g. Kallenberg (2002)). In this case \(\nu\) is a deterministic \(\sigma\)-finite measure. For \(\mathcal {G}=\sigma (\nu )\), the \(\sigma (\nu )\)-conditional Poisson random measure is also known in the literature as a Cox process directed by \(\nu\) (see Chapter 12 in Kallenberg (2002)).
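As a simple numerical illustration of Definition 5 (a sketch for intuition, not taken from the paper), consider the Cox case \(\mathcal {G}= \sigma (\Lambda )\), with \(\Lambda\) Gamma-distributed and directing measure \(\nu (dt,dx) = \Lambda \, dt \, dx\) on \((0,T] \times (0,1]\); with \(\Lambda\) frozen, the empirical mean of \(N(C)\) should match \(\nu (C)\):

```python
# Minimal sketch: a Cox process on (0, T] x (0, 1] directed by
# nu(dt, dx) = Lambda * dt * dx, with Lambda ~ Gamma.  All concrete
# choices here (T, the Gamma parameters, the set C) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 2.0
level = rng.gamma(shape=2.0, scale=1.5)   # one realization of Lambda, then frozen

def sample_points(rng, level):
    # given Lambda = level, N is a Poisson random measure with intensity level*dt*dx
    n = rng.poisson(level * T * 1.0)      # nu((0, T] x (0, 1]) = level * T
    return rng.uniform(0.0, T, n), rng.uniform(0.0, 1.0, n)

# empirical conditional mean of N(C) for C = (0, 1] x (0, 1/2]
vals = []
for _ in range(20_000):
    t, x = sample_points(rng, level)
    vals.append(np.count_nonzero((t <= 1.0) & (x <= 0.5)))
print("empirical mean:", np.mean(vals), "  nu(C):", level * 1.0 * 0.5)
```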

Now we will provide a construction of a \(\mathcal {G}\)-conditional Poisson random measure whose intensity measure is given in terms of a specific kernel g. In fact, the measure constructed below is supported on sets from \(\mathcal {B}((0,T]) \otimes \varvec {\mathcal{X}}\), in the sense that it assigns the value 0 almost surely to any set C having empty intersection with \((0,T]\times \mathcal {X}\).

We begin by letting \(g(t,y,dx)\) be a finite kernel from \((\mathbb {R}_+ \times \mathcal {Y}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{Y}})\) to \((\mathcal {X}, \varvec {\mathcal{X}})\), where \((\mathcal {Y}, \varvec {\mathcal{Y}})\) and \((\mathcal {X}, \varvec {\mathcal{X}})\) are Borel spaces, satisfying

$$\begin{aligned} g(t,y,\mathcal {X}) = 0\quad \text {for}\quad t > T. \end{aligned}$$
(36)

Next, let \(\partial\) be an element external to \(\mathcal {X}\), and define the kernel \(g^\partial\) from \((\mathbb {R}_+ \times \mathcal {Y}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{Y}})\) to \((\mathcal {X}^\partial , \varvec {\mathcal{X}}^\partial )\) as

$$\begin{aligned} g^\partial (t,y,dx) = \lambda (t,y)\gamma (t,y,dx), \end{aligned}$$

where

$$\begin{aligned} \lambda (t,y)= g(t,y, \mathcal {X}), \quad \gamma (t,y,dx)=\frac{g(t,y, dx)}{g(t,y, \mathcal {X})} \mathbb {1}_{\{g(t,y, \mathcal {X}) > 0\}} + \delta _\partial (dx) \mathbb {1}_{\{g(t,y, \mathcal {X}) = 0\}}. \end{aligned}$$

Suppose that

$$\begin{aligned} \sup _{t \in [\ell (y),T] } \lambda (t,y) \le \widehat{\lambda }(y) <\infty , \qquad \gamma (t,y,A) = \int _{{(0,1]}} \mathbb {1}_A(\Gamma (t,y,u)) du, \quad A \in \varvec {\mathcal{X}}, \end{aligned}$$

for some measurable mappings \(\ell : \mathcal {Y}\rightarrow [0,T] \cup {\{ \infty \}}\), \(\widehat{\lambda }: \mathcal {Y}\rightarrow (0, \infty )\) and \(\Gamma : \mathbb {R}_+ \times \mathcal {Y}\times {(0,1]} \rightarrow \mathcal {X}\). The existence of such a mapping \(\Gamma\) is asserted by Lemma 3.22 in Kallenberg (2002). In addition, let \(D:[0, \infty ) \times {(0,1]} \rightarrow \mathbb {N}\) be as in Step 1 of our construction in Section 3.1.

Next, take Y to be a \((\mathcal {Y},\varvec {\mathcal{Y}})\)-valued random element, which is \(\mathcal {G}\)-measurable, and let Z and \((U_m, V_m, W_m)_{m=1}^\infty\) be independent random variables uniformly distributed on (0, 1] and independent of \(\mathcal {G}\). We now define a random measure N on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) as

$$\begin{aligned} \begin{aligned} N (dt,dx)&= \sum _{m =1}^{\infty } \delta _{(T_m, X_m)} (dt,dx) \mathbb {1}_{\{\ell (Y)< T, \, m \le P, \, A_m \le \lambda (T_m, Y) \}}\\&=\sum _{m =1}^P \delta _{(T_m, X_m)} (dt,dx) \mathbb {1}_{\{\ell (Y) < T, \, A_m \le \lambda (T_m, Y) \}}, \end{aligned} \end{aligned}$$
(37)

where we use the usual convention that \(\sum _{m=1}^0 \cdots = 0\), and where \(P\) and \((T_m,A_m, X_m)_{m=1}^{\infty }\) are random variables defined by transforming the sequence \(Z\), \((U_m, V_m, W_m)_{m=1}^\infty\) and the random element Y in the following way:

$$\begin{aligned} P&= D\big ((T - \ell (Y)) \widehat{\lambda }(Y) \mathbb {1}_{\{ \ell (Y)< T\}}, Z \big ) = D\big ((T - \ell (Y)) \widehat{\lambda }(Y) ,Z\big ) \mathbb {1}_{\{ \ell (Y)< T\}},\\ T_m&= \big (\ell (Y) + (T-\ell (Y))U_m \big )\mathbb {1}_{\{ \ell (Y)< T\}} + \infty \mathbb {1}_{\{ \ell (Y) \ge T \}},\\ A_m&=\widehat{\lambda }(Y) V_m \mathbb {1}_{\{ \ell (Y)< T\}},\\ X_m&= \left\{ \begin{array}{ll} \Gamma (T_m, Y, W_m), & \text {if } \ell (Y) < T, \\ \partial , & \text {if } \ell (Y) \ge T. \end{array} \right. \end{aligned}$$
(38)

Using the above set-up we see that, for each \(m=1,2,\ldots ,\)

$$\begin{aligned} \begin{aligned}&\mathbb {P}((T_m, A_m, X_m)\in dt\times da \times dx | \mathcal {G})\\&=\mathbb {1}_{\{ \ell (Y) < T \}}\frac{1}{(T - \ell (Y))\widehat{\lambda }(Y) }\mathbb {1}_{(\ell (Y),T]\times (0,\widehat{\lambda }(Y) ]}(t,a) \gamma (t,Y,dx) dt da\\&\quad +\mathbb {1}_{\{ \ell (Y) \ge T\}} \delta _{(\infty ,0,\partial )}(dt,da,dx), \end{aligned} \end{aligned}$$
(39)

where \(\delta _{(\infty ,0,\partial )}\) is a Dirac measure.

Note that even though the random elements \(X_m\), \(m=1,2,\ldots ,\) may take the value \(\partial\), the measure N given in Eq. (37) is a random measure on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) whose support belongs to \(\mathcal {B}([0,T]) \otimes \varvec {\mathcal{X}}\).
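The construction in Eqs. (37) and (38) translates directly into a simulation routine. Below is a minimal sketch under hypothetical ingredients (\(\ell (y) \equiv 0.5\), \(\lambda (t,y) = y(1+\sin t)\), \(\widehat{\lambda }(y) = 2y\), \(\Gamma (t,y,u) = u\)), and under the assumption, consistent with the role of D in Eq. (38), that \(D(\mu ,z)\) from Step 1 of Section 3.1 (not reproduced here) is the Poisson\((\mu )\) quantile transform of z:

```python
# A minimal sketch (not the authors' code) of Eqs. (37)-(38) with
# hypothetical ingredients; D(mu, z) is assumed to be the Poisson(mu)
# quantile transform of z, matching its role in Eq. (38).
import numpy as np

T = 3.0

def D(mu, z):
    # smallest n with P(Poisson(mu) <= n) >= z
    n, p = 0, np.exp(-mu)
    cdf = p
    while cdf < z:
        n += 1
        p *= mu / n
        cdf += p
    return n

def ell(y): return 0.5                        # hypothetical l(y) < T
def lam(t, y): return y * (1.0 + np.sin(t))   # lambda(t, y) = g(t, y, X)
def lam_hat(y): return 2.0 * y                # bound for lambda(., y) on [l(y), T]
def Gamma(t, y, u): return u                  # mark sampler: gamma(t, y, dx) uniform

def sample_N(y, rng):
    """Atoms (T_m, X_m) of the random measure N in Eq. (37), given Y = y."""
    if ell(y) >= T:
        return []                             # N has no atoms in this case
    P = D((T - ell(y)) * lam_hat(y), rng.uniform())
    atoms = []
    for _ in range(P):
        U, V, W = rng.uniform(size=3)
        Tm = ell(y) + (T - ell(y)) * U        # candidate time, uniform on (l(y), T]
        Am = lam_hat(y) * V                   # uniform level for the thinning test
        if Am <= lam(Tm, y):                  # accept iff A_m <= lambda(T_m, Y)
            atoms.append((Tm, Gamma(Tm, y, W)))
    return sorted(atoms)

rng = np.random.default_rng(1)
print(sample_N(1.3, rng))
```

Conditioning on \(\mathcal {G}\) corresponds here to fixing y; Lemma 2 below identifies the intensity measure of the resulting random measure.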

Given the above, we now have the following result.

Lemma 2

The random measure N defined by Eq. (37) is a \(\mathcal {G}\)-conditionally Poisson random measure with intensity measure \(\nu\) given by

$$\begin{aligned} {\nu (C) = \int _C g(v, Y, dx) \mathbb {1}_{ {(} \ell (Y), {\infty )}} (v) dv}, \qquad C \in \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}}. \end{aligned}$$
(40)

Proof

To prove the result we consider \(N((s,t] \times B)\) for fixed \(0 \le s \le t\), \(B \in \varvec {\mathcal{X}}\). We have

$$\begin{aligned} N((s,t] \times B)&= \sum _{m =1}^{\infty } \delta _{(T_m, X_m)} ((s,t] \times B) \mathbb {1}_{\{ \ell (Y)< T, m \le P , A_m \le \lambda (T_m, Y) \}} \\&= \sum _{m =1}^{P } \mathbb {1}_{\{\ell (Y)< T, s < T_m \le t, X_m \in B , A_m \le \lambda (T_m, Y) \}}. \end{aligned}$$

First we will prove that, conditionally on \(\mathcal {G}\), the random variable \(N((s,t] \times B)\) has the Poisson distribution with mean \(\nu ((s,t] \times B)\). Towards this end we observe that P has, conditionally on \(\mathcal {G}\), the Poisson distribution with mean \((T - \ell (Y))\widehat{\lambda }(Y) \mathbb {1}_{\{\ell (Y) < T\}}\) (see Eq. (38)), so

$$\begin{aligned} \mathbb {P}( P =k | \mathcal {G}) = e^{-(T - \ell (Y))\widehat{\lambda }(Y)\mathbb {1}_{\{\ell (Y)< T\}}} \frac{\Big ((T - \ell (Y))\widehat{\lambda }(Y)\mathbb {1}_{\{\ell (Y) < T\}}\Big )^k}{k!}, \quad k=0,1,\ldots \ . \end{aligned}$$

Moreover, we conclude from Eq. (39) that for \(m=1,2,\ldots ,\)

$$\begin{aligned} \begin{aligned}&\mathbb {P}(\ell (Y)< T ,s< T_m \le t, X_m \in B , A_m \le \lambda (T_m, Y)| \mathcal {G})\\&= \mathbb {1}_{\{\ell (Y)< T\}}\int _s^t\bigg ( \int _B \bigg ( \int _0^{\lambda (u, Y )}\frac{1}{(T - \ell (Y))\widehat{\lambda }(Y) }\mathbb {1}_{(\ell (Y),T]\times (0,\widehat{\lambda }(Y)]}(u,a) \, da \bigg )\gamma (u,Y,dx) \bigg )du\\&= \mathbb {1}_{\{\ell (Y)< T\}} \frac{1}{(T - \ell (Y))\widehat{\lambda }(Y) } \int _s^t\mathbb {1}_{(\ell (Y),T]}(u) \lambda (u, Y )\gamma (u,Y,B) \, du\\&=\mathbb {1}_{\{\ell (Y) < T\}} \frac{1}{(T - \ell (Y))\widehat{\lambda }(Y) } \int _s^t \mathbb {1}_{(\ell (Y), \infty )}(u) g(u,Y,B) \, du =:p(Y), \end{aligned} \end{aligned}$$
(41)

where the last equality follows from Eq. (36).

Note that for \(u\in \mathbb {R}\) and \(m=1,2,\ldots ,\) we have (see Footnote 7)

$$\begin{aligned} \mathbb {E}(e^{iu\mathbb {1}_{\{\ell (Y)< T , s < T_m \le t, X_m \in B , A_m \le \lambda (T_m, Y) \}}}| \mathcal {G})=(1-p(Y))+p(Y)e^{iu}. \end{aligned}$$

Indeed, conditionally on \(\mathcal {G}\) and on \(\{P=r\}\), the count \(N((s,t] \times B)\) is a sum of r i.i.d. Bernoulli random variables with success probability p(Y). This and the \(\mathcal {G}\)-conditional independence of P and \((T_m,A_m, X_m)_{m=1}^\infty\) imply that

$$\begin{aligned} \mathbb {E}( e^{ iu N((s,t] \times B)} | \mathcal {G})&={ e^{(e^{iu} -1) p(Y)(T - \ell (Y))\widehat{\lambda }(Y)\mathbb {1}_{\{\ell (Y) < T\}} }} = e^{(e^{iu} -1) \int _s^t \mathbb {1}_{(\ell (Y), {\infty )}}(v) g(v,Y,B) dv} \\&= e^{(e^{iu} -1) \nu ((s,t] \times B)} . \end{aligned}$$

Thus, the random variable \(N((s,t] \times B)\) has the \(\mathcal {G}\)-conditional Poisson distribution with mean equal to \(\nu ((s,t] \times B)\).

Using standard monotone class arguments we obtain that for arbitrary \(C \in \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}}\) the random variable N(C) has, conditionally on \(\mathcal {G}\), the Poisson distribution with mean \(\nu (C)\).

Next, we will show that for \(0 \le s_1< t_1 \le s_2< t_2 \le \ldots \le s_n < t_n\) and for sets \(B_1, \ldots , B_n \in \varvec {\mathcal{X}}\) the random variables

$$\begin{aligned} N((s_1,t_1] \times B_1) , \ldots , N((s_n,t_n] \times B_n) \end{aligned}$$
(42)

are conditionally independent given \(\mathcal {G}\). Towards this end let us define

$$\begin{aligned} S_r((s,t] \times B) :=\sum _{m=1}^r I_m((s,t] \times B), \end{aligned}$$

for \(r \in \mathbb {N}\), \(0 \le s < t\), \(B \in \varvec {\mathcal{X}}\), where

$$\begin{aligned} I_m((s,t] \times B) : = \mathbb {1}_{\{s < T_m \le t, \, X_m \in B , \, A_m \le \lambda (T_m, Y ) \}}. \end{aligned}$$

Note that the random variable \(N((s,t] \times B)\) can be represented as

$$\begin{aligned} N((s,t] \times B) = S_{P}((s,t] \times B). \end{aligned}$$

Using this representation we obtain that

$$\begin{aligned}&J:=\mathbb {P}( N((s_1,t_1] \times B_1) = l_1 , \ldots , N((s_n,t_n] \times B_n) = l_n | \mathcal {G}) \\&=\sum _{r=0}^\infty \mathbb {P}\bigg ( \bigcap _{j=1}^n {\{ S_r((s_j,t_j] \times B_j) =l_j \}} \cap {\{ P = r \}} \, \Big | \mathcal {G}\bigg ) \\&=\sum _{r=l}^\infty \mathbb {P}\bigg ( \bigcap _{j=1}^n {\{ S_r((s_j,t_j] \times B_j) =l_j \}} \cap \Big \{ S_r\Big (\mathbb {R}_+ \times \mathcal {X} \setminus \bigcup _{j=1}^n (s_j,t_j] \times B_j \Big ) =r - l \Big \} \,\Big | \mathcal {G}\bigg ) \mathbb {P}( P = r | \mathcal {G}) , \end{aligned}$$

where \(l = \sum _{j=1}^n l_j\). Now, from Eq. (41), we see that the random vector

$$\begin{aligned} \bigg ( S_r((s_1,t_1] \times B_1) , \ldots , S_r((s_n,t_n] \times B_n) , S_r\Big (\mathbb {R}_+ \times \mathcal {X} \setminus \bigcup _{j=1}^n (s_j,t_j] \times B_j \Big ) \bigg ) \end{aligned}$$

has, conditionally on \(\mathcal {G}\), the multinomial distribution with parameters \(p_1, \ldots , p_{n+1}\) given by:

$$\begin{aligned} p_j&= p_j(Y) :=\mathbb {P}(\ell (Y)< T , s_j < T_1 \le t_j, X_1 \in B_j , A_1 \le \lambda (T_1, Y ) | \mathcal {G}), \end{aligned}$$
(43)

for \(j=1, \ldots ,n\), and

$$\begin{aligned} p_{n+1} = 1- p_1 - \ldots -p_{n}. \end{aligned}$$

Hence, using the fact that \(l = \sum _{j=1}^n l_j\), we deduce that

$$\begin{aligned} J&=\sum _{r=l}^\infty \frac{r!}{l_1! \ldots l_n! (r-l)!} p_1^{l_{1}} \cdots p_{n}^{l_n} p_{n+1}^{r - l} \mathbb {P}( P = r | \mathcal {G}) \\&= \frac{1}{l_1! \ldots l_n!} p_1^{l_{1}} \cdots p_{n}^{l_n} \sum _{r=0}^\infty \frac{(r+l)!}{r!} p_{n+1}^{r} \mathbb {P}( P = r+l | \mathcal {G}) \\&= \frac{1}{l_1! \ldots l_n!} p_1^{l_{1}} \cdots p_{n}^{l_n} \sum _{r=0}^\infty \frac{(r+l)!}{r!} p_{n+1}^{r} e^{-(T - \ell (Y))\widehat{\lambda }(Y)\mathbb {1}_{\{ \ell (Y)< T \}}} \frac{\big ((T - \ell (Y))\widehat{\lambda }(Y) \mathbb {1}_{\{ \ell (Y)< T \}} \big )^{r+l}}{(r+l)!} \\&=\Bigg [ \prod _{j=1}^n \frac{\Big (p_j (T - \ell (Y))\widehat{\lambda }(Y) \mathbb {1}_{\{ \ell (Y)< T \}}\Big )^{l_j} }{l_j!} \Bigg ] e^{(p_{n+1}-1)(T - \ell (Y))\widehat{\lambda }(Y) \mathbb {1}_{\{ \ell (Y)< T \}}} \\&= \prod _{j=1}^n \frac{\Big (p_j (T - \ell (Y))\widehat{\lambda }(Y)\mathbb {1}_{\{ \ell (Y)< T \}}\Big )^{l_j} }{l_j!} e^{-p_j(T - \ell (Y))\widehat{\lambda }(Y) \mathbb {1}_{\{ \ell (Y) < T \}}} \\&=\prod _{j=1}^n \mathbb {P}( N((s_j,t_j] \times B_j) = l_j | \mathcal {G}), \end{aligned}$$

where the last equality follows from the fact that \(N((s_j,t_j] \times B_j)\) has the \(\mathcal {G}\)-conditional Poisson distribution with mean equal to \(\nu ((s_j,t_j] \times B_j) = p_j (T - \ell (Y))\widehat{\lambda }(Y)\mathbb {1}_{\{ \ell (Y) < T \}}\), which is a consequence of Eqs. (40), (41) and (43). Using standard monotone class arguments we conclude from Eq. (42) that for arbitrary disjoint sets \(C_1, \ldots , C_n \in \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}}\) the random variables \(N(C_1), \ldots , N(C_n)\) are \(\mathcal {G}\)-conditionally independent. The proof is now complete.
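As a Monte Carlo sanity check of Lemma 2 (a sketch under the hypothetical ingredients used above, not part of the proof), one can freeze Y and verify that the counts of N on two disjoint sets have the means prescribed by Eq. (40) and are essentially uncorrelated:

```python
# Monte Carlo check (a sketch) of Lemma 2: Y frozen, ell = 0.5,
# lambda(t) = y*(1 + sin t), marks uniform on (0, 1], as above.
import numpy as np
from scipy.integrate import quad

T, y, ell = 3.0, 1.3, 0.5
lam_hat = 2.0 * y
lam = lambda t: y * (1.0 + np.sin(t))

rng = np.random.default_rng(2)
n_sim = 50_000
c1 = np.empty(n_sim)
c2 = np.empty(n_sim)
for i in range(n_sim):
    P = rng.poisson((T - ell) * lam_hat)               # D(., Z) realized directly
    Tm = ell + (T - ell) * rng.uniform(size=P)
    Am = lam_hat * rng.uniform(size=P)
    Xm = rng.uniform(size=P)
    keep = Am <= lam(Tm)                               # thinning, as in Eq. (37)
    c1[i] = np.sum(keep & (Tm <= 1.5) & (Xm <= 0.5))   # C1 = (0.5, 1.5] x (0, 0.5]
    c2[i] = np.sum(keep & (Tm > 1.5) & (Xm > 0.5))     # C2 = (1.5, 3.0] x (0.5, 1]
m1 = 0.5 * quad(lam, ell, 1.5)[0]                      # nu(C1) from Eq. (40)
m2 = 0.5 * quad(lam, 1.5, T)[0]                        # nu(C2) from Eq. (40)
print(np.mean(c1), "vs", m1)
print(np.mean(c2), "vs", m2)
print("correlation:", np.corrcoef(c1, c2)[0, 1])       # should be close to 0
```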

1.1.2 Relation Between Conditional Poisson Random Measures and Doubly Stochastic Marked Poisson Processes

We begin by recalling (cf. Chapter 6 in Last and Brandt (1995)) the concept of a doubly stochastic marked Poisson process. For this, we consider a filtration \(\mathbb {F}\) on \((\Omega ,\mathcal {F},\mathbb {P})\). A marked point process N on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) is an \(\mathbb {F}\)-doubly stochastic marked Poisson process if there exists an \(\mathcal {F}_0\)-measurable random measure \(\nu\) on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) such that

$$\begin{aligned} \mathbb {P}( N((s,t] \times B ) = k | \mathcal {F}_s ) = e^{-\nu ((s,t] \times B)} \frac{(\nu ((s,t] \times B))^k}{k!}, \qquad 0 \le s < t, \ B \in \varvec {\mathcal{X}}. \end{aligned}$$
(44)

Thus, for \(0 \le s < t\), \(B \in \varvec {\mathcal{X}}\) we have

$$\begin{aligned} \nu ((s,t] \times B) = \mathbb {E}( N((s,t] \times B) | \mathcal {F}_0). \end{aligned}$$
(45)

Hence, by analogy with the concept of the intensity of a Poisson random measure, the measure \(\nu\) is called the \(\mathcal {F}_0\)-intensity kernel of N (see Chapter 6 in Last and Brandt (1995)).

Now let \(\widetilde{N}\) be a marked point process on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) such that its \(\mathbb {F}\)-compensator \(\widetilde{\nu }\) is the \(\mathcal {F}_0\)-intensity kernel, in the sense that the property analogous to Eq. (45) holds, namely

$$\begin{aligned} \widetilde{\nu }((s,t] \times B) = \mathbb {E}(\widetilde{N}((s,t] \times B) | \mathcal {F}_0),\ 0 \le s < t,\ B \in \varvec {\mathcal{X}}. \end{aligned}$$

Then, one can show (see Theorem 6.1.4 in Last and Brandt (1995)) that \(\widetilde{N}\) is an \(\mathbb {F}\)-doubly stochastic marked Poisson process, i.e. the analog of Eq. (44) holds with \(\widetilde{N}\) and \(\widetilde{\nu }\). The converse statement is true as well (see Theorem 6.1.4 in Last and Brandt (1995)): if \(\widetilde{N}\) is an \(\mathbb {F}\)-doubly stochastic marked Poisson process, then the \(\mathbb {F}\)-compensator \(\widetilde{\nu }\) of \(\widetilde{N}\) is an \(\mathcal {F}_0\)-intensity kernel of \(\widetilde{N}\).

Conditional Poisson random measures on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\) are closely related to \(\mathbb {F}\)-doubly stochastic marked Poisson processes. It can be shown that if N is an \(\mathbb {F}\)-doubly stochastic marked Poisson process with \(\mathcal {F}_0\)-intensity kernel \(\nu\), then N considered as a random measure is an \(\mathcal {F}_0\)-conditionally Poisson random measure with intensity measure \(\nu\).

This implies that for sets \(B_1, \ldots , B_n \in \varvec {\mathcal{X}}\) and for \(0 \le s_1< t_1 \le s_2< t_2 \le \ldots \le s_n < t_n \le t\), \(n \in \mathbb {N}\), we have

$$\begin{aligned} \begin{aligned}&\mathbb {P}\Big ( \bigcap _{i=1}^n {\{ N((s_i, t_i] \times B_i )=l_i \}} | \mathcal {F}_0 \Big ) = \prod _{i=1}^{n} e^{-\nu ((s_i,t_i] \times B_i)} \frac{(\nu ((s_i,t_i] \times B_i))^{l_i}}{l_i!} \\&= \prod _{i=1}^n \mathbb {P}\big ({\{ N((s_i, t_i] \times B_i )=l_i \}} | \mathcal {F}_0 \big ) . \end{aligned} \end{aligned}$$
(46)

The next result, in a sense, complements our discussion of conditional Poisson random measures and doubly stochastic marked Poisson processes.

Proposition 6

i) Let M be a marked point process on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\), which is a \(\mathcal {G}\)-conditional Poisson random measure with intensity measure \(\nu\), and let \(\widehat{\mathbb {F}}^M\) be the filtration defined by the family of \(\sigma\)-fields

$$\begin{aligned} \widehat{\mathcal {F}}^M_t = \mathcal {G}\vee \mathcal {F}^{M}_t, \qquad t \ge 0. \end{aligned}$$

Then M is an \(\widehat{\mathbb {F}}^M\)-doubly stochastic marked Poisson process with \(\widehat{\mathcal {F}}^M_0\)-intensity kernel \(\nu\), which is also the \(\widehat{\mathbb {F}}^M\)-compensator of M.

ii) Let \(\mathcal {N}= (N_j)_{j \ge 1}\) be a family of marked point processes on \((\mathbb {R}_+ \times \mathcal {X}, \mathcal {B}(\mathbb {R}_+) \otimes \varvec {\mathcal{X}})\), which are \(\mathcal {G}\)-conditional Poisson random measures (each \(N_j\) with intensity measure \(\nu _j\)), and let \(\widehat{\mathbb {F}}^\mathcal {N}\) be a filtration defined by the family of \(\sigma\)-fields

$$\begin{aligned} \widehat{\mathcal {F}}^\mathcal {N}_t = \mathcal {G}\vee \bigvee _{k \ge 1} \mathcal {F}^{N_k}_t, \qquad t \ge 0. \end{aligned}$$

Suppose that \((N_j)_{j \ge 1}\) are \(\mathcal {G}\)-conditionally independent. Then each \(N_j\) is an \(\widehat{\mathbb {F}}^\mathcal {N}\)-doubly stochastic marked Poisson process with \(\widehat{\mathcal {F}}^\mathcal {N}_0\)-intensity kernel \(\nu _j\), which is also the \(\widehat{\mathbb {F}}^\mathcal {N}\)-compensator of \(N_j\).

In the proof of Proposition 6 we will use the following elementary result, whose derivation is omitted:

Lemma 3

Let \(\mathcal {G}\) be a \(\sigma\)-field and let \(A \in \mathcal {G}\). Then for arbitrary measurable sets B and C which are conditionally independent given \(\mathcal {G}\) we have

$$\begin{aligned} \mathbb {E}( \mathbb {1}_B \mathbb {1}_{A \cap C}) = \mathbb {E}( \mathbb {P}(B|\mathcal {G}) \mathbb {1}_{A \cap C}). \end{aligned}$$

Proof

(of Proposition 6) We will prove ii); the proof of i) is similar in spirit and in fact a bit simpler. Fix an arbitrary \(j \ge 1\). By assumption \(N_j\) is a \(\mathcal {G}\)-conditional Poisson random measure, so for fixed \(0 \le s < t\) and \(D \in \varvec {\mathcal{X}}\) we have

$$\begin{aligned} \mathbb {P}( N_j ((s,t] \times D) = i | \mathcal {G}) = e^{-\nu _j((s,t] \times D)} \frac{(\nu _j((s,t] \times D))^i}{i!}, \end{aligned}$$
(47)

where \(\nu _j\) is a \(\mathcal {G}\,(= \widehat{\mathcal {F}}^{\mathcal {N}}_0)\)-measurable random measure. In view of the definition of an \(\widehat{\mathbb {F}}^\mathcal {N}\)-doubly stochastic marked Poisson process and of the above formula, it suffices to show that for an arbitrary set \(F \in \widehat{\mathcal {F}}^\mathcal {N}_s\) we have

$$\begin{aligned} \mathbb {E}( \mathbb {1}_{\{N_j ((s,t] \times D)=k\}} \mathbb {1}_{F}) = \mathbb {E}( \mathbb {P}( N_j ((s,t] \times D) = k | \mathcal {G})\mathbb {1}_{F}) . \end{aligned}$$
(48)

Indeed, Eqs. (48) and (47) imply

$$\begin{aligned} \mathbb {P}( N_j ((s,t] \times D) = i | \widehat{\mathcal {F}}^\mathcal {N}_s) = e^{-\nu _j((s,t] \times D)} \frac{(\nu _j((s,t] \times D))^i}{i!}. \end{aligned}$$

Hence \(N_j\) is an \(\widehat{\mathbb {F}}^\mathcal {N}\)-doubly stochastic marked Poisson process with \(\widehat{\mathcal {F}}^{\mathcal {N}}_0\)-intensity kernel \(\nu _j\). Then Theorem 6.1.4 in Last and Brandt (1995) implies that \(\nu _j\) is the \(\widehat{\mathbb {F}}^\mathcal {N}\)-compensator of \(N_j\).

To prove Eq. (48) we will use the Monotone Class Theorem. First note that the sets F for which Eq. (48) holds constitute a \(\lambda\)-system. Thus it suffices to show the above equality for a \(\pi\)-system of sets which generates \(\widehat{\mathcal {F}}^\mathcal {N}_s\). Towards this end consider the family of sets

$$\begin{aligned} {\mathcal {A}}_s := \Big \{ A \cap C : \,&A \in \mathcal {G}, C = \cap _{r=1}^n \cap _{l=1}^{p_r} {\{ N_{m_r}( (s_l^r , t_l^r ] \times D_l^r ) = k_l^r\}}, \\&0 \le s_{1}^r< t_1^r \le \ldots \le s_{p_r}^r < t_{p_r}^r \le s , \ D^{r}_1, \ldots D^r_{p_r } \in \varvec {\mathcal{X}}, \ k^r_1, \ldots , k^r_{p_r} \in \mathbb {N}, \\&0 \le p_1 \le \ldots \le p_r, \ 0 \le m_1 \le \ldots \le m_r, \quad r=1, \dots ,n , \, n \in \mathbb {N}\Big \}. \end{aligned}$$

Clearly, \({\mathcal {A}}_s\) is a \(\pi\)-system and \(\sigma ( {\mathcal {A}}_s) = \widehat{\mathcal {F}}^\mathcal {N}_s\). Let us take \(F \in {\mathcal {A}}_s\), so \(F = A \cap C\), and note that \((s,t] \times D\) is disjoint from the sets \((s_l^r , t_l^r ] \times D_l^r\) which define C. This and the \(\mathcal {G}\)-conditional independence of \({\{N_j\}}_{j\ge 1}\) imply that the events \({\{N_j ((s,t] \times D)=k\}}\) and C are conditionally independent given \(\mathcal {G}\). Hence, by applying Lemma 3, we obtain that Eq. (48) holds for \(F \in {\mathcal {A}}_s\). Then, invoking the Monotone Class Theorem, we conclude that Eq. (48) holds for all sets \(F \in \widehat{\mathcal {F}}^\mathcal {N}_s\). The proof is complete.

1.2 Additional Technical Result

Lemma 4

Let \((\mu _k)_{k=1}^\infty\) be a sequence of measures on \((\mathcal {X}, \varvec {\mathcal{X}})\). Let \(\mu : \varvec {\mathcal{X}} \rightarrow [0,\infty ]\) be the mapping defined by

$$\begin{aligned} \mu (A) = \lim _{n \rightarrow \infty } \sum _{k=1}^n \mu _k( A). \end{aligned}$$

Then \(\mu\) is a measure. Moreover, for any measurable nonnegative function \(F: \mathcal {X}\rightarrow \mathbb {R}_+\) we have

$$\begin{aligned} \int _{\mathcal {X}} F d \mu = \lim _{n \rightarrow \infty } \sum _{k=1}^n \int _{\mathcal {X}} F d \mu _k. \end{aligned}$$

Proof

The first part follows from the Nikodym convergence theorem (see e.g. Theorem 7.48 in Swartz (2009)).

To prove the second assertion it suffices to consider simple functions only, i.e. functions F of the form

$$\begin{aligned} F(x) := \sum _{i=1}^n a_i \mathbb {1}_{A_i}(x), \quad a_i \in \mathbb {R}_+, \ A_i \in \varvec {\mathcal{X}}. \end{aligned}$$

For such F we have

$$\begin{aligned} \int _{\mathcal {X}} F d \mu = \sum _{i=1}^n a_i \mu (A_i) = \sum _{i=1}^n a_i \sum _{ k=1}^\infty \mu _k(A_i) = \sum _{ k=1}^\infty \sum _{i=1}^n a_i \mu _k(A_i) = \sum _{k=1}^\infty \int _{\mathcal {X}} F d \mu _k. \end{aligned}$$

Using the usual approximation technique and the monotone convergence theorem we finish the proof.
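For intuition, here is a toy numerical illustration of Lemma 4 (a sketch, not from the paper): on a finite space, measures become nonnegative vectors, integrals become dot products, and the asserted interchange of limit and integral can be checked directly:

```python
# Toy illustration of Lemma 4 on the finite space X = {0, ..., 4}:
# measures mu_k are nonnegative vectors, mu(A) = lim_n sum_{k<=n} mu_k(A),
# and the integral of F against mu is the limit of partial-sum integrals.
import numpy as np

rng = np.random.default_rng(3)
mus = [rng.uniform(0.0, 2.0 ** -k, size=5) for k in range(40)]  # summable tails
mu = np.sum(mus, axis=0)                  # mu as a vector of point masses
F = np.array([1.0, 0.0, 2.5, 3.0, 0.5])   # nonnegative simple function

lhs = F @ mu                              # int_X F d(mu)
rhs = sum(F @ m for m in mus)             # sum_k int_X F d(mu_k)
print(lhs, rhs)                           # agree up to floating-point rounding
```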
