
White-noise driven conditional McKean–Vlasov limits for systems of particles with simultaneous and random jumps

Published in Probability Theory and Related Fields.

Abstract

We study the convergence of N-particle systems described by SDEs driven by Brownian motion and Poisson random measure, where the coefficients depend on the empirical measure of the system. Every particle jumps with a jump rate depending on its position and on the empirical measure of the system. Jumps are simultaneous, that is, at each jump time, all particles of the system are affected by this jump and receive a random jump height that is centred and scaled in \(N^{-1/2}\). This particular scaling implies that the limit of the empirical measures of the system is random, describing the conditional distribution of one particle in the limit system. We call such limits conditional McKean–Vlasov limits. The conditioning in the limit measure reflects the dependencies between coexisting particles in the limit system, so that we are dealing with a conditional propagation of chaos property. As a consequence of the scaling in \(N^{-1/2}\) and of the fact that the limit of the empirical measures is not deterministic, the limit system turns out to solve a non-linear SDE driven by martingale measures and white noises that are not independent and whose intensity depends on the conditional law of the process.

Fig. 1


Availability of data and material

Not applicable.

References

  1. Aldous, D.: Exchangeability and related topics. In: Ecole d’Eté de Probabilités de Saint-Flour: XIII—1983, No. 1117 in Lecture Notes in Mathematics. Springer (1983)

  2. Andreis, L., Dai Pra, P., Fischer, M.: McKean–Vlasov limit for interacting systems with simultaneous jumps. Stoch. Anal. Appl. 36(6), 960–995 (2018). https://doi.org/10.1080/07362994.2018.1486202

  3. Billingsley, P.: Convergence of Probability Measures. Wiley Series in Probability and Statistics, 2nd edn. Wiley, New York (1999)

  4. Chevallier, J., Ost, G.: Fluctuations for spatially extended Hawkes processes. Stoch. Process. Their Appl. 130(9), 5510–5542 (2020). https://doi.org/10.1016/j.spa.2020.03.015

  5. Chevallier, J., Duarte, A., Löcherbach, E., Ost, G.: Mean field limits for nonlinear spatially extended Hawkes processes with exponential memory kernels. Stoch. Process. Their Appl. 129(1), 1–27 (2019). https://doi.org/10.1016/j.spa.2018.02.007

  6. De Masi, A., Galves, A., Löcherbach, E., Presutti, E.: Hydrodynamic limit for interacting neurons. J. Stat. Phys. 158(4), 866–902 (2015). https://doi.org/10.1007/s10955-014-1145-1

  7. Duarte, A., Ost, G., Rodríguez, A.A.: Hydrodynamic limit for spatially structured interacting neurons. J. Stat. Phys. 161(5), 1163–1202 (2015). https://doi.org/10.1007/s10955-015-1366-y

  8. El Karoui, N., Méléard, S.: Martingale measures and stochastic calculus. Probab. Theory Relat. Fields 84(1), 83–101 (1990). https://doi.org/10.1007/BF01288560

  9. Erny, X., Löcherbach, E., Loukianova, D.: Conditional propagation of chaos for mean field systems of interacting neurons. Electron. J. Probab. 26, 1–25 (2021). https://doi.org/10.1214/21-EJP580

  10. Fournier, N., Löcherbach, E.: On a toy model of interacting neurons. Ann. l’Inst. Henri Poincaré Probab. Stat. 52, 1844–1876 (2016)

  11. Fournier, N., Méléard, S.: A stochastic particle numerical method for 3D Boltzmann equations without cutoff. Math. Comput. 71(238), 583–604 (2002)

  12. Fournier, N., Mischler, S.: Rate of convergence of the Nanbu particle system for hard potentials and Maxwell molecules. Ann. Probab. 44(1), 589–627 (2016). https://doi.org/10.1214/14-AOP983

  13. Graham, C.: McKean-Vlasov Ito-Skorohod equations, and nonlinear diffusions with discrete jump sets. Stoch. Process. Their Appl. 40(1), 69–82 (1992). https://doi.org/10.1016/0304-4149(92)90138-G

  14. Graham, C.: Chaoticity for multiclass systems and exchangeability within classes. J. Appl. Probab. 45(4), 1196–1203 (2008). https://doi.org/10.1239/jap/1231340243

  15. Graham, C., Méléard, S.: Stochastic particle approximations for generalized Boltzmann models and convergence estimates. Ann. Probab. 25(1), 115–132 (1997). https://doi.org/10.1214/aop/1024404281

  16. Gärtner, J.: On the McKean–Vlasov limit for interacting diffusions. Math. Nachr. 137(1), 197–248 (1988). https://doi.org/10.1002/mana.19881370116

  17. Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, 2nd edn. Springer, Berlin (2003)

  18. Major, P.: On the invariance principle for sums of independent identically distributed random variables. J. Multivar. Anal. 8(4), 487–517 (1978). https://doi.org/10.1016/0047-259X(78)90029-5

  19. Méléard, S.: Stochastic approximations of the solution of a full Boltzmann equation with small initial data. ESAIM Probab. Stat. 2, 23–40 (1998). https://doi.org/10.1051/ps:1998102

  20. Seppäläinen, T.: Basics in Stochastic Analysis. Lecture Notes. https://people.math.wisc.edu/~seppalai/courses/735/notes2014.pdf

  21. Sznitman, A.S.: Topics in propagation of chaos. In: Ecole d’Eté de Probabilités de Saint-Flour: XIX—1989, No. 1464 in Lecture Notes in Mathematics, pp. 167–251. Springer, Berlin (1989). OCLC: 23253880

  22. Tanaka, H.: Probabilistic treatment of the Boltzmann equation of Maxwellian molecules. Z. Wahrscheinlichkeitstheorie verw. Gebiete 46, 67–105 (1978)

  23. Villani, C.: Optimal Transport, Old and New. Springer, Berlin (2008)

  24. Walsh, J.B.: An introduction to stochastic partial differential equations. In: École d’Été de Probabilités de Saint Flour XIV—1984. Lecture Notes in Mathematics, pp. 265–439. Springer, Berlin (1986)


Funding

Not applicable.

Author information

Contributions

All authors contributed to the conception of the model and to the proofs. All authors read and approved the manuscript.

Corresponding author

Correspondence to Xavier Erny.

Ethics declarations

Conflict of interest

Not applicable.

Code availability

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 A priori estimates

Lemma 6.1

Grant Assumptions 1, 2 and 3. For all \(T>0,\)

$$\begin{aligned} \underset{N\in {\mathbb {N}}^*}{\sup }{\mathbb {E}}\left[ \underset{t\le T}{\sup } \left| X^{N,1}_t\right| ^2\right] <\infty . \end{aligned}$$

Proof

Notice that

$$\begin{aligned} \underset{0\le s\le t}{\sup } |X^{N,1}_s|\le |X^{N,1}_0| + ||b||_\infty t + \underset{0\le s\le t}{\sup }\left| \int _0^s \sigma (X^{N,1}_r,\mu ^N_r)d\beta ^1_r\right| + \frac{1}{\sqrt{N}}\underset{0\le s\le t}{\sup }|M^N_s|, \end{aligned}$$

where \(M^N\) is the local martingale

$$\begin{aligned} M^N_t := \sum _{k=2}^N\int _{[0,t]\times {\mathbb {R}}_+\times E}\varPsi (X^{N,k}_{s-},X^{N,1}_{s-},\mu ^N_{s-},u^k,u^1)\mathbbm {1}_{\left\{ z\le f(X^{N,k}_{s-},\mu ^N_{s-})\right\} }d\pi ^k(s,z,u). \end{aligned}$$

Consequently, by Burkholder–Davis–Gundy’s inequality and Assumption 3,

$$\begin{aligned} {\mathbb {E}}\left[ \underset{0\le s\le t}{\sup } |X^{N,1}_s|^2\right]\le & {} C + C||b||^2_{\infty }t^2 \\&+ ||\sigma ||^2_\infty t + t||f||_\infty \frac{N-1}{N}\int _E\underset{x,y,m}{\sup }\varPsi (x,y,m,u^1,u^2)^2 d\nu (u). \end{aligned}$$

This proves the result. \(\square \)
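The key point of the lemma is that the martingale part \(N^{-1/2}M^N\), although it collects of order N jumps per unit of time, keeps a second moment that is bounded uniformly in N, since each jump is centred and of size \(N^{-1/2}\). The following Monte Carlo sketch illustrates only this step on a toy martingale (not the authors' system): each of the \(N-1\) other particles jumps at the constant rate \(f_{\max }\), and a hypothetical standard Gaussian height stands in for \(\varPsi \).

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_second_moment(N, T=1.0, f_max=1.0, n_rep=2000):
    """Estimate E[ sup_{t<=T} |M^N_t / sqrt(N)|^2 ] for a toy martingale:
    N - 1 particles each jump at rate f_max, and every jump adds a centred
    N(0, 1) height (a stand-in for Psi) to the running sum M^N."""
    out = np.empty(n_rep)
    for i in range(n_rep):
        K = rng.poisson((N - 1) * f_max * T)   # total number of jumps on [0, T]
        # the path of M^N / sqrt(N) is piecewise constant: max over partial sums
        path = np.cumsum(rng.standard_normal(K)) / np.sqrt(N)
        out[i] = np.max(np.abs(path), initial=0.0) ** 2
    return float(out.mean())

moments = {N: sup_second_moment(N) for N in (10, 100, 1000)}
```

By Doob's inequality the exact value lies between \((N-1)f_{\max }T/N\) and four times that quantity, so the estimates stay of order one as N grows, in line with the uniform bound of the lemma.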

1.2 Proof of (20)

Lemma 6.2

Grant Assumptions 1, 2 and 3. With the notation introduced in the proof of Theorem 4.3, we have

$$\begin{aligned} {\mathbb {E}}\left[ F(\mu ^N)\right] \underset{N\rightarrow \infty }{\longrightarrow }{\mathbb {E}}\left[ F(\mu )\right] . \end{aligned}$$

Proof

Let us recall that \(\mu ^N\) denotes the empirical measure of \((X^{N,i})_{1\le i\le N}\) and that \(\mu \) is the limit in distribution of (a subsequence of) \(\mu ^N.\)

Step 1. We first show that almost surely, \( \mu \) is supported by continuous trajectories. To that end, we start by showing that \( P^N := {\mathbb {E}}\left[ \mu ^N \right] = {{\mathcal {L}}} ( X^{N, 1 } ) \) is C-tight. This follows from Proposition VI.3.26 in [17], observing that

$$\begin{aligned} \lim _{N \rightarrow \infty } {\mathbb {E}}\left[ \sup _{s \le T } | \varDelta X_s^{N, 1 } |^3 \right] = 0 , \end{aligned}$$

which follows from our conditions on \( \psi .\) Indeed, writing \( \psi ^* ( u^1, u^2 ) :=\sup _{x, y ,m } \psi ( x, y , m, u^1, u^2 ), \) we can stochastically upper bound

$$\begin{aligned} \sup _{s \le T } | \varDelta X_s^{N, 1 } |^3 \le \sup _{k \le K } | \psi ^* ( U^{k, 1 }, U^{k, 2} ) |^3/N^{3/2} , \end{aligned}$$

where \(K \sim Poiss ( N T \Vert f\Vert _\infty ) \) is Poisson distributed with parameter \( N T \Vert f\Vert _\infty ,\) and where \( (U^{k, 1 }, U^{k,2 } )_k \) is an i.i.d. sequence of \( \nu _1 \otimes \nu _1\)-distributed random variables, independent of K. The conclusion then follows from the fact that due to our Assumption 3, \( {\mathbb {E}}\left[ | \psi ^* ( U^{k, 1 }, U^{k, 2} ) |^3 \right] < \infty \) such that

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{k \le K } | \psi ^* ( U^{k, 1 }, U^{k, 2} ) |^3/N^{3/2} \right] \le {\mathbb {E}}\left[ \frac{1}{N^{3/2}} \sum _{k=1}^K | \psi ^* ( U^{k, 1 }, U^{k, 2} ) |^3 \right] \\&\quad \le \frac{{\mathbb {E}}\left[ | \psi ^* ( U^{k, 1 }, U^{k, 2} ) |^3 \right] }{N^{3/2} } {\mathbb {E}}\left[ K\right] =\frac{{\mathbb {E}}\left[ | \psi ^* ( U^{k, 1 }, U^{k, 2} ) |^3 \right] }{N^{3/2} } N T \Vert f\Vert _\infty \rightarrow 0 \end{aligned}$$

as \( N \rightarrow \infty .\)

As a consequence of the above arguments, we know that \( {\mathbb {E}}\left[ \mu ( \cdot ) \right] \) is supported by continuous trajectories. This means that there exists a Borel set \( G \in {\mathcal {D}} ( {\mathbb {R}}_+, {\mathbb {R}}) \) such that \( G \subset C( {\mathbb {R}}_+, {\mathbb {R}}) \) and \( {\mathbb {E}}\left[ \mu ( G) \right] = 1.\) In particular, almost surely, \( \mu (G) = 1,\) which implies that almost surely, \( \mu \) is supported by continuous trajectories. Indeed, \(\mu (G)\) is a r.v. taking values in [0, 1],  and its expectation equals one. Thus \(\mu (G)\) equals one a.s.
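The Poisson-domination argument of Step 1 can be illustrated numerically. In the sketch below (a stand-in, not the authors' model), the largest of \(K \sim \mathrm {Poiss}(NT\Vert f\Vert _\infty )\) cubed jump sizes, divided by \(N^{3/2}\), is averaged over many replications; hypothetical standard Gaussian variables play the role of \(\psi ^* ( U^{k, 1 }, U^{k, 2} )\), which only need a finite third moment.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_jump_moment(N, T=1.0, f_max=1.0, n_rep=4000):
    """Estimate E[ max_{k<=K} |psi*|^3 / N^{3/2} ] with K ~ Poisson(N*T*f_max)
    and |psi*| replaced by |N(0,1)| (hypothetical, finite third moment)."""
    vals = np.empty(n_rep)
    for i in range(n_rep):
        K = rng.poisson(N * T * f_max)
        cubes = np.abs(rng.standard_normal(K)) ** 3
        vals[i] = cubes.max(initial=0.0) / N ** 1.5
    return float(vals.mean())

decay = {N: max_jump_moment(N) for N in (10, 100, 1000)}
```

The crude bound used in the text replaces the maximum by the sum, which already gives the rate \(T\Vert f\Vert _\infty {\mathbb {E}}[|\psi ^*|^3]\,N^{-1/2}\); the maximum itself decays even faster.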

We now turn to the heart of this proof and show that \({\mathbb {E}}\left[ F( \mu ^N) \right] \rightarrow {\mathbb {E}}\left[ F( \mu ) \right] .\) The latter expression contains terms like

$$\begin{aligned} \int _s^tb(Y^1_r,\mu _r)\partial _{x^1}g(Y^1_r,Y^2 _r)dr \end{aligned}$$

for some bounded smooth function g. However, by our assumptions, the continuity of \( m \mapsto b ( x, m) \) is expressed with respect to the Wasserstein 1-distance, whereas we only have information on the convergence of \(\mu ^N_r\) to \( \mu _r\) in the topology of weak convergence.

In what follows we make use of Skorokhod’s representation theorem and realize all random measures \( \mu ^N \) and \(\mu \) on an appropriate probability space such that we have almost sure convergence of these realizations (we do not change notation), that is, we know that almost surely,

$$\begin{aligned} \mu ^N \rightarrow \mu \end{aligned}$$

as \(N \rightarrow \infty .\) (Recall that we have already chosen a subsequence in the beginning of the proof of Theorem 4.3). Since \(\mu \) is almost surely supported by continuous trajectories, we also know that almost surely, \(\mu _t^N \rightarrow \mu _t\) weakly for all t (this is a consequence of Theorem 12.5.(i) of [3]).

Step 2. We first prove that, a.s., for all t, \(\mu ^N_t\) converges to \(\mu _t\) for the metric \(W_1\). For this, we need to show additionally that almost surely, for all \(t \ge 0, \) \( \int |x| d \mu _t^N(x) \rightarrow \int |x| d \mu _t(x).\)

To prove this last fact, it will be helpful to consider instead the convergence of the triplets \( ( \mu ^N, X^{N, 1 }, \mu ^N (|x|)).\) Since the sequence of laws of these triplets is tight as well (the tightness of \((\mu ^N)_N\) and of \((X^{N,1})_N\) has been stated in Sect. 4.1, and the tightness of \((\mu ^N(|x|))_N\) follows from Aldous’ criterion in a classical way, since \(\mu ^N_t(|x|) = N^{-1}\sum _{k=1}^N |X^{N,k}_t|\)), we may assume that, after choosing another subsequence and then a convenient realization of this subsequence, we obtain a sequence of random triplets such that almost surely, as \(N \rightarrow \infty , \)

$$\begin{aligned} ( \mu ^N, X^{N, 1 }, \mu ^N ( |x| ) ) \rightarrow ( \mu , Y, A), \end{aligned}$$

where \( A = ( A_t)_t\) is some process having càdlàg trajectories. In addition, it can be proven that the sequence \((\mu ^N(|x|))_N\) is C-tight (for similar reasons as \((X^{N,1})_N\)), hence A has continuous trajectories.

Taking a bounded and continuous function \( \varPhi : D( {\mathbb {R}}_+, {\mathbb {R}}) \rightarrow {\mathbb {R}}, \) we observe that, as \( N \rightarrow \infty , \)

$$\begin{aligned} {\mathbb {E}}\left[ \int _{D( {\mathbb {R}}_+, {\mathbb {R}}) } \varPhi d \mu \right] \leftarrow {\mathbb {E}}\left[ \int _{D( {\mathbb {R}}_+, {\mathbb {R}}) } \varPhi d \mu ^N \right] = {\mathbb {E}}\left[ \varPhi ( X^{N, 1} ) \right] \rightarrow {\mathbb {E}}\left[ \varPhi ( Y) \right] , \end{aligned}$$

such that \( {\mathbb {E}}\left[ \mu \right] = {{\mathcal {L}}} ( Y). \)

Notice that it follows from the above that Y is necessarily a continuous process, since \({\mathbb {E}}\left[ \mu \right] \) is supported by continuous trajectories. Notice also that for the moment we do not know whether \( A = \mu ( |x|).\)

Using that \( \sup _N {\mathbb {E}}\left[ \sup _{t \le T } |X_t^{N, 1 }|^2 \right] < \infty \) (see our a priori estimates Lemma 6.1), we deduce that the sequence \( (\sup _{ t \le T} |X_t^{N, 1 }|^{3/2} )_N\) is uniformly integrable. Therefore, \( {\mathbb {E}}\left[ \sup _{t \le T} |X_t^{N, 1 } |^{3/2}\right] \rightarrow {\mathbb {E}}\left[ \sup _{ t \le T} | Y_t|^{3/2}\right] < \infty .\) In particular, we also have that

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{t\le T} \mu _t ( |x|^{3/2})\right]< \infty \quad \text{ and } \text{ thus } \quad \sup _{t\le T} \mu _t ( |x|^{3/2}) < \infty \text{ almost } \text{ surely, } \end{aligned}$$

for all T,  since

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{t\le T} \mu _t ( |x|^{3/2})\right]= & {} {\mathbb {E}}\left[ \sup _{ t \le T } \int _{D({\mathbb {R}}_+, {\mathbb {R}})} | \gamma _t|^{3/2} \mu (d \gamma ) \right] \\\le & {} {\mathbb {E}}\left[ \int _{D({\mathbb {R}}_+, {\mathbb {R}})} \sup _{ t \le T } | \gamma _t|^{3/2} \mu (d \gamma ) \right] \\= & {} {\mathbb {E}}\left[ \sup _{ t \le T } | Y_t|^{3/2}\right] < \infty . \end{aligned}$$

We know that, a.s., \(\mu ^N\) converges weakly to \(\mu \) and \(\mu (G)=1,\) where \( G \subset C( {\mathbb {R}}_+, {\mathbb {R}}).\) Let us fix some \(\omega \in \varOmega \) for which the two previous properties hold. In the following, we omit this \(\omega \) in the notation. Let \(\varepsilon > 0, \) \(t \le T,\) and choose M such that \( \int |x|\wedge M d \mu _t \ge \int |x| d \mu _t - \varepsilon . \) Then, as \(N \rightarrow \infty , \)

$$\begin{aligned} \int |x| d \mu _t^N \ge \int |x| \wedge M d \mu _t^N \rightarrow \int |x| \wedge M d \mu _t . \end{aligned}$$

Thus

$$\begin{aligned} \liminf _N \int |x| d \mu _t^N \ge \int |x| d \mu _t - \varepsilon , \text{ such } \text{ that } \liminf _N \int |x| d \mu _t^N \ge \int |x| d \mu _{t}.\nonumber \\ \end{aligned}$$
(38)

Fatou’s lemma implies that

$$\begin{aligned} {\mathbb {E}}\left[ \liminf _N \int |x| d \mu _t^N \right]\le & {} \liminf _N {\mathbb {E}}\left[ \int |x| d \mu _t^N \right] = \liminf _N {\mathbb {E}}\left[ |X_t ^{N, 1 } |\right] \\= & {} {\mathbb {E}}\left[ |Y_t|\right] = {\mathbb {E}}\left[ \int |x| d \mu _t\right] . \end{aligned}$$

Together with (38) this implies that, almost surely,

$$\begin{aligned} \liminf _N \int |x| d \mu _t^N = \int |x| d \mu _t . \end{aligned}$$

Finally, since \( \int |x| d\mu ^N \rightarrow A \) and since A is continuous, we obtain, for all t,

$$\begin{aligned} \liminf _N \int |x| d \mu _t^N = \limsup _N \int |x| d \mu _t^N = \int |x| d \mu _t . \end{aligned}$$

This implies that almost surely, for all \(t \ge 0, \) \( \int |x| d \mu _t^N(x) \rightarrow \int |x| d \mu _t(x) = A_t < \infty .\) In particular, almost surely, for all \( t \ge 0, \)

$$\begin{aligned} W_1 ( \mu _t^N, \mu _t ) \rightarrow 0 \end{aligned}$$

(see e.g. Theorem 6.9 of [23]).
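The conclusion of Step 2, the almost sure \(W_1\)-convergence of the empirical measures, can be sanity-checked on a toy example. In the paper the limit \(\mu _t\) is random; the sketch below uses a deterministic stand-in (the standard Gaussian law) and the one-dimensional identity \(W_1(\mu ,\nu )=\int _0^1|F_\mu ^{-1}(u)-F_\nu ^{-1}(u)|du\), evaluated on a quantile grid.

```python
import numpy as np

rng = np.random.default_rng(2)
u = (np.arange(2000) + 0.5) / 2000                       # quantile grid for W1
ref_q = np.quantile(rng.standard_normal(200000), u)      # quantiles of the 'limit' N(0,1)

def mean_w1(N, n_rep=100):
    """Average W1 distance between the empirical measure of N iid N(0,1)
    points and the limit law, via W1 = int_0^1 |F^{-1} - G^{-1}| du."""
    d = [np.mean(np.abs(np.quantile(rng.standard_normal(N), u) - ref_q))
         for _ in range(n_rep)]
    return float(np.mean(d))

w1_to_limit = {N: mean_w1(N) for N in (10, 100, 1000)}
```

The distances decrease with N (at rate roughly \(N^{-1/2}\) in dimension one), in line with the almost sure convergence established above.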

Step 3. Now we prove that \({\mathbb {E}}\left[ F(\mu ^N)\right] \) converges to \({\mathbb {E}}\left[ F(\mu )\right] ,\) where we recall that

$$\begin{aligned} F(\mu )= & {} \psi _1(\mu _{s_1}) \cdot \ldots \cdot \psi _k(\mu _{s_k})\int _{D({\mathbb {R}}_+,{\mathbb {R}})^2}\mu \otimes \mu (d\gamma )\varphi _1(\gamma _{s_1})\ldots \varphi _k(\gamma _{s_k})\\&\left[ g(\gamma _t)-g(\gamma _s)-\int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg (\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx)dr\right] , \end{aligned}$$

where \(\psi _i\in C_b({\mathcal {P}}({\mathbb {R}})),\varphi _i\in C_b({\mathbb {R}}^2)\) (\(1\le i\le k\)) and \(g\in C^3_b({\mathbb {R}}^2).\) By the boundedness of the functions \(\psi _i\) (\(1\le i\le k\)) and our boundedness Assumption 3, it is sufficient to prove the following two convergence results:

$$\begin{aligned}&{\mathbb {E}}\left[ |\psi _1(\mu ^N_{s_1}) \cdot \ldots \cdot \psi _k(\mu ^N_{s_k})-\psi _1(\mu _{s_1}) \cdot \ldots \cdot \psi _k(\mu _{s_k})|\right] \underset{N\rightarrow \infty }{\longrightarrow }0, \end{aligned}$$
(39)
$$\begin{aligned}&{\mathbb {E}}\left[ |G(\mu ^N)-G(\mu )|\right] \underset{N\rightarrow \infty }{\longrightarrow }0, \end{aligned}$$
(40)

with

$$\begin{aligned} G(\mu ):= & {} \int _{D({\mathbb {R}}_+,{\mathbb {R}})^2}\mu \otimes \mu (d\gamma )\varphi _1(\gamma _{s_1})\ldots \varphi _k(\gamma _{s_k})\\&\left[ g(\gamma _t)-g(\gamma _s)-\int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}L g (\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx)dr\right] . \end{aligned}$$

Indeed, since the functions \(\psi _i\) (\(1\le i\le k\)) and G are bounded, we have

$$\begin{aligned} {\mathbb {E}}\left[ |F(\mu ^N)-F(\mu )|\right]\le & {} C {\mathbb {E}}\left[ |\psi _1(\mu ^N_{s_1}) \cdot \ldots \cdot \psi _k(\mu ^N_{s_k})-\psi _1(\mu _{s_1}) \cdot \ldots \cdot \psi _k(\mu _{s_k})|\right] \\&+ C {\mathbb {E}}\left[ |G(\mu ^N)-G(\mu )|\right] . \end{aligned}$$

The convergence (39) follows from dominated convergence and the fact that the function

$$\begin{aligned} m\in {\mathcal {P}}(D({\mathbb {R}}_+,{\mathbb {R}}))\mapsto \psi _1(m_{s_1})...\psi _k(m_{s_k})\in {\mathbb {R}}\end{aligned}$$

is bounded and continuous at \(\mu ,\) since \(\mu \) is supported by continuous trajectories. To prove the convergence (40), let us recall that we have already shown that

  1. \(\underset{N}{\sup }\underset{0\le s\le t}{\sup }{\mathbb {E}}\left[ \mu ^N_s(|x|^{3/2}) \right] <\infty ,\)

  2. \(\underset{0\le s\le t}{\sup }{\mathbb {E}}\left[ \mu _s(|x|^{3/2})\right] <\infty ,\)

  3. \(\mu (G)=1~a.s.\) for some Borel subset \( G \subset C({\mathbb {R}}_+,{\mathbb {R}}), \) \(G \in {\mathcal {D}} ({\mathbb {R}}_+, {\mathbb {R}}),\)

  4. a.s. \(\forall r,\) \(\mu ^N_r\) converges to \(\mu _r\) for the metric \(W_1,\)

  5. for all \(x,x'\in {\mathbb {R}},y,y'\in {\mathbb {R}}^2,m,m'\in {\mathcal {P}}_1({\mathbb {R}}),v\in {\mathbb {R}},\)

     $$\begin{aligned}&|Lg(y,m,x,v)-Lg(y',m',x',v)|\\&\quad \le C(v)(||y-y'||_1+|x-x'|+W_1(m,m')), \end{aligned}$$

     such that \(\int _{\mathbb {R}}C(v)\nu _1(dv)<\infty ,\)

  6. $$\begin{aligned} \int _{\mathbb {R}}\underset{x,y,m}{\sup } Lg(y,m,x,v)\nu _1(dv)<\infty . \end{aligned}$$

In order to simplify the presentation, let us assume that the function G is of the form

$$\begin{aligned} G(\mu )=\int _{D^2}\mu \otimes \mu (d\gamma )\int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg(\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx)dr. \end{aligned}$$

Now, let us show that \({\mathbb {E}}\left[ |G(\mu ^N)-G(\mu )|\right] \) vanishes as N goes to infinity. Clearly,

$$\begin{aligned}&|G(\mu )-G(\mu ^N)|\\&\quad \le \left| G(\mu ) - \int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\left( \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg(\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx)dr\right) \right| \\&\qquad +\left| \int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\left( \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg(\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx)dr\right) \right. \\&\qquad \left. - \int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\left( \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg(\gamma _r,\mu _r,x,v)\nu _1(dv)\mu ^N_r(dx)dr\right) \right| \\&\qquad +\left| G(\mu ^N) - \int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\left( \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg(\gamma _r,\mu _r,x,v)\nu _1(dv)\mu ^N_r(dx)dr\right) \right| \\&\quad =:A_1+A_2+A_3. \end{aligned}$$

We first show that \(A_1\) vanishes a.s. (this implies that \({\mathbb {E}}\left[ A_1\right] \) vanishes by dominated convergence). \(A_1\) is of the form

$$\begin{aligned} A_1 = \left| \int _{D^2}\mu \otimes \mu (d\gamma ) H(\gamma ) - \int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )H(\gamma )\right| , \end{aligned}$$

with

$$\begin{aligned} H : \gamma \in D^2 \mapsto \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}Lg (\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx)dr\in {\mathbb {R}}. \end{aligned}$$

We just have to prove that H is continuous and bounded. The boundedness is obvious, so let us verify the continuity. Let \((\gamma ^n)_n\) converge to \(\gamma \) in \(D({\mathbb {R}}_+,{\mathbb {R}})^2\). We have

$$\begin{aligned} |H(\gamma ) - H(\gamma ^n)|&\le \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}|Lg (\gamma _r,\mu _r,x,v) - Lg (\gamma ^n_r,\mu _r,x,v)|\nu _1(dv)\mu _r(dx)dr\\&\le \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}C(v) ||\gamma _r-\gamma ^n_r||_1 \nu _1(dv)\mu _r(dx)dr\\&\le C\int _s^t ||\gamma _r-\gamma ^n_r||_1dr, \end{aligned}$$

which vanishes by dominated convergence: the integrand vanishes at every continuity point r of \(\gamma \) (hence for a.e. r), and, for n large enough, \(\sup _{r\le t}||\gamma ^n_r||_1 \le 2\sup _{r\le t}||\gamma _r||_1.\)

Now we show that \({\mathbb {E}}\left[ A_2\right] \) vanishes. We have

$$\begin{aligned} A_2\le & {} \int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\\&\left( \int _s^t\left| \int _{\mathbb {R}}\int _{\mathbb {R}}Lg(\gamma _r,\mu _r,x,v)\nu _1(dv)\mu _r(dx) \right. \right. \\&\left. \left. - \int _{\mathbb {R}}\int _{\mathbb {R}}Lg (\gamma _r,\mu _r,x,v)\nu _1(dv)\mu ^N_r(dx)\right| dr\right) . \end{aligned}$$

Since the function \(x\in {\mathbb {R}}\mapsto \int _{\mathbb {R}}L g(\gamma _r,\mu _r,x,v)\nu _1(dv)\) is Lipschitz continuous (with Lipschitz constant independent of \(\gamma _r\) and \(\mu _r\)), we have, by Kantorovich-Rubinstein duality (see e.g. Remark 6.5 of [23]),

$$\begin{aligned} A_2\le C\int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\int _s^t W_1(\mu ^N_r,\mu _r)dr=C\int _s^tW_1(\mu ^N_r,\mu _r)dr. \end{aligned}$$

Hence

$$\begin{aligned} {\mathbb {E}}\left[ A_2\right] \le C\int _s^t{\mathbb {E}}\left[ W_1(\mu ^N_r,\mu _r)\right] dr, \end{aligned}$$

which vanishes by dominated convergence: the integrand vanishes thanks to Step 2, and the uniform integrability follows from the fact that

$$\begin{aligned} \underset{N}{\sup }\int _s^t{\mathbb {E}}\left[ W_1(\mu ^N_r,\mu _r)^{3/2}\right] dr\le & {} C(t-s)\underset{N}{\sup }\underset{0\le s\le t}{\sup }{\mathbb {E}}\left[ \mu ^N_s(|x|)^{3/2}\right] \\&+C(t-s)\underset{0\le s\le t}{\sup }{\mathbb {E}}\left[ \mu _s(|x|)^{3/2}\right] . \end{aligned}$$
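The Kantorovich–Rubinstein duality used for the bound on \(A_2\) states that \(|\int \phi \,d\mu - \int \phi \,d\nu | \le \mathrm {Lip}(\phi )\, W_1(\mu ,\nu )\) for every Lipschitz \(\phi \). A quick numerical check on empirical measures (for two samples of equal size, the optimal \(W_1\) coupling in dimension one matches the sorted samples):

```python
import numpy as np

rng = np.random.default_rng(3)

def w1_empirical(x, y):
    # exact W1 between two empirical measures with equally many atoms:
    # the one-dimensional optimal coupling pairs the sorted samples
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

x = rng.standard_normal(500)
y = 1.3 * rng.standard_normal(500) + 0.2

bound = w1_empirical(x, y)
gaps = {
    "sin": abs(np.sin(x).mean() - np.sin(y).mean()),   # Lipschitz constant 1
    "abs": abs(np.abs(x).mean() - np.abs(y).mean()),   # Lipschitz constant 1
}
```

Each gap between the integrals of a 1-Lipschitz test function is dominated by the \(W_1\) distance, exactly as in the estimate for \(A_2\).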

We finally show that \({\mathbb {E}}\left[ A_3\right] \) vanishes.

$$\begin{aligned} A_3\le&\int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\left( \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}\left| Lg(\gamma _r,\mu ^N_r,x,v)\right. \right. \\&\left. \left. -Lg(\gamma _r,\mu _r,x,v)\right| \nu _1(dv)\mu ^N_r(dx)dr\right) \\ \le&\int _{D^2}\mu ^N\otimes \mu ^N(d\gamma )\left( \int _s^t\int _{\mathbb {R}}\int _{\mathbb {R}}C(v)W_1(\mu ^N_r,\mu _r)\nu _1(dv)\mu ^N_r(dx)dr\right) , \end{aligned}$$

implying

$$\begin{aligned} {\mathbb {E}}\left[ A_3\right] \le C\int _s^t{\mathbb {E}}\left[ W_1(\mu ^N_r,\mu _r)\right] dr, \end{aligned}$$

which vanishes for the same reasons as in the previous step where we have shown that \({\mathbb {E}}\left[ A_2\right] \) vanishes. \(\square \)

1.3 Martingale measures

For the reader’s convenience we summarize in this section the definition and the essential properties of martingale measures. This section is largely inspired by [8, 24] and [20]. Let \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space and \((E,{\mathcal {E}})\) a Lusin space. Let \({\mathcal {A}}\subset {\mathcal {E}}\) be a ring, i.e. a family of sets closed under finite unions and set differences. We suppose moreover that \( \sigma ( {\mathcal {A}}) = {\mathcal {E}}.\) Consider a set function \(U:{\mathcal {A}}\times \varOmega \rightarrow {\mathbb {R}}\).

Definition 6.3

\(U:{\mathcal {A}}\times \varOmega \rightarrow {\mathbb {R}}\) is an \({\mathbb {L}}^2\)-valued measure on \({\mathcal {A}}\) if

  1. (i)

    \(\forall A\in {\mathcal {A}},\) \(U(A)\in {\mathbb {L}}^2(\varOmega , {\mathcal {F}},{\mathbb {P}});\)

  2. (ii)

    U is a.s. finitely additive: \(\forall A\in {\mathcal {A}},\; \forall B\in {\mathcal {A}}\) s.t. \(A\cap B=\emptyset ,\) \(U(A\cup B)=U(A)+U(B)\) a.s.;

  3. (iii)

    U is \({\mathbb {L}}^2\)-sigma-additive: \(\forall (A_n)_{n\in {\mathbb {N}}}\) s.t. \( A_n\in {\mathcal {A}}\) and \(A_n\cap A_m=\emptyset \) for \(n\ne m;\)

    $$\begin{aligned} \lim _{n\rightarrow \infty }\Vert U(\bigcup _{ k=1}^n A_k)-\sum _{k=1}^n U(A_k)\Vert _2=0. \end{aligned}$$

If \(\sup \{\Vert U(A)\Vert _2;\; A\in {\mathcal {A}}\}<\infty ,\) we say that U is finite. We say that U is sigma-finite if

  1. (i)

    there exists a sequence \((E_n)_{n\in {\mathbb {N}}} \subset {\mathcal {A}}\) such that \(E_n\subset E_{n+1}\) and \(\bigcup _n E_n=E;\)

  2. (ii)

    for all \(n\in {\mathbb {N}},\) \({\mathcal {E}}|_{E_n}\subset {\mathcal {A}}\) and U is finite on \((E_n,{\mathcal {E}}|_{E_n})\).

Let \(({\mathcal {F}}_t)_{t\ge 0}\) be a right-continuous and complete filtration on \((\varOmega , {\mathcal {F}},{\mathbb {P}}).\)

Definition 6.4

\(M=\{M_t (A),\; t\ge 0,\;A\in {\mathcal {A}}\}\) is an \(({\mathcal {F}}_t)_{t\ge 0}\)-martingale measure on \({\mathbb {R}}_+\times {\mathcal {A}}\) if

  1. (i)

    \(\forall t\ge 0,\) \(M_t\) is an \({\mathbb {L}}^2\)-valued sigma-finite measure on \({\mathcal {A}};\)

  2. (ii)

\(\forall A\in {\mathcal {A}},\) \(M(A)=\{M_t(A),\; t\ge 0\} \) is a \(({\mathcal {F}}_t)_{t\ge 0}\)-martingale with \(M_0(A)=0 \quad a.s.;\)

  3. (iii)

    \(\forall A\in {\mathcal {A}},\) \(\forall B\in {\mathcal {A}},\) such that \(A\cap B=\emptyset ,\) M(A) and M(B) are orthogonal martingales.

To each martingale measure we can associate its intensity measure as stated in the following theorem which is due to [24].

Theorem 6.5

(Theorem 2.7 of [24]) If M is a \(({\mathcal {F}}_t)_{t\ge 0}\)-martingale measure on \( {\mathbb {R}}_+ \times {\mathcal {A}},\) there exists a random sigma-finite positive measure \(\nu (ds,dx)\) on \(({\mathbb {R}}_+\times E,{\mathcal {B}}({\mathbb {R}}_+)\otimes {\mathcal {E}}),\) \(({\mathcal {F}}_t)_{t\ge 0}\)- predictable, such that for each \(A\in {\mathcal {E}},\) the process \((\nu ((0,t]\times A))_{t\ge 0}\) is predictable, right continuous and satisfies

$$\begin{aligned} \forall A\in {\mathcal {A}},\; \forall t>0,\ \nu ((0,t]\times A)=\langle M(A)\rangle _t\; a.s. \end{aligned}$$

The measure \(\nu \) is called the intensity of M.

Since for all \(t\ge 0,\) \(A\mapsto M_t(A) \) is additive, using (iii) of Definition 6.4 and the fact that the angle bracket (predictable covariation) of two orthogonal martingales is zero,

$$\begin{aligned} \langle M(A),M(B)\rangle _t= & {} \langle M(A\setminus B)+M(A\cap B),\\&\times M(B\setminus A)+M(A\cap B)\rangle _t=\langle M(A\cap B)\rangle _t, \end{aligned}$$

which, due to the previous theorem, equals

$$\begin{aligned} \langle M(A),M(B)\rangle _t=\nu ((0,t]\times (A\cap B)). \end{aligned}$$
(41)

Let M be a \(({\mathcal {F}}_t)_{t\ge 0}\)-martingale measure on \({\mathbb {R}}_+\times {\mathcal {A}}\) with intensity \(\nu \) and let \({\mathcal {P}}\) be the predictable sigma-field on \(\varOmega \times {\mathbb {R}}_+.\) We introduce the space

$$\begin{aligned} {\mathbb {L}}_{\nu }^2:= & {} \left\{ f:\varOmega \times {\mathbb {R}}_+\times E\rightarrow {\mathbb {R}}, \; {\mathcal {P}}\otimes {\mathcal {E}}\; \text {measurable},\right. \\&\times \left. {\mathbb {E}}\left[ \int _{{\mathbb {R}}_+\times E}f^2(\omega , s, x)\nu (\omega ,ds,dx)\right] <\infty \right\} \end{aligned}$$

and its dense subset of simple predictable functions defined by

$$\begin{aligned} {\mathcal {S}}=\left\{ h(\omega ,s,x)=\sum _{i=1}^nh_i(\omega )\mathbbm {1}_{]u_i,v_i]}(s)\mathbbm {1}_{B_i}(x),\; B_i\in {\mathcal {A}},\; h_i\in {\mathcal {F}}_{u_i}\; \text {bounded}\right\} . \end{aligned}$$

Using Itô’s method, a stochastic integral \((g\cdot M)_t\) with respect to M can be constructed for any \(g\in {\mathbb {L}}_{\nu }^2 ,\) following [24]. Namely, if \(h\in {\mathcal {S}}, \) the linear mapping \(h\mapsto \{h\cdot M_t(A), \; t\ge 0, A\in {\mathcal {A}}\},\) defined by

$$\begin{aligned} h\cdot M_t(A)=\sum _{i=1}^nh_i(M_{v_i\wedge t }(A\cap B_i)-M_{u_i\wedge t }(A\cap B_i)) , \end{aligned}$$

can be extended to \({\mathbb {L}}^2_{\nu }.\) We denote \((g\cdot M) _t:=(g\cdot M (E))_t.\) It is straightforward to show that the following important property holds.

For all \(f,g \in {\mathbb {L}}^2_{\nu },\) \(\forall t>0,\)

$$\begin{aligned} \langle f\cdot M (A),g\cdot M(B)\rangle _t=\int _{(0,t]}\int _{A\cap B} f(\omega ,s,x)g(\omega ,s,x)\nu (\omega ,ds,dx)\quad a.s. \end{aligned}$$

Example 4

Let \((X,{\mathcal {B}},\mu )\) be a sigma-finite measure space. A white noise is a centred Gaussian process \(W=\{W(A);\; A\in {\mathcal {B}},\; \mu (A)<\infty \}\) with covariance

$$\begin{aligned} {\mathbb {E}}\left[ W(A)W(B)\right] =\mu (A\cap B). \end{aligned}$$

If \(A\cap B=\emptyset ,\) then W(A) and W(B) are independent. Moreover, W is a.s. finitely additive, since

$$\begin{aligned} {\mathbb {E}}\left[ |W(A\cup B)-W(A)-W(B)|^2\right] =0. \end{aligned}$$

Note that W is also \({\mathbb {L}}^2\)-sigma-additive on \({\mathcal {A}}=\{A\in {\mathcal {B}},\ \mu (A)<\infty \}\). Indeed, if \((A_k)_{ k \ge 0 } \) are pairwise disjoint, \(A=\bigcup _{k\ge 0} A_k,\) and \(\mu (A)<\infty ,\) then

$$\begin{aligned} {\mathbb {E}}\left[ (W(A)-\sum _{k=1}^nW(A_k))^2\right] ={\mathbb {E}}\left[ (W(\bigcup _{k\ge n+1} A_k))^2\right] =\mu (\bigcup _{k\ge n+1}A_k)\rightarrow 0. \end{aligned}$$

Also \(\sum _kVar(W(A_k))=\sum _k\mu (A_k)=\mu (A)<\infty ,\) implying that \(\sum _kW(A_k)=W(\cup _kA_k)\) a.s.
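A discretised sketch of this example: white noise on \([0,1]\) with Lebesgue intensity, built from independent Gaussian cell increments, for which the covariance identity \({\mathbb {E}}\left[ W(A)W(B)\right] =\mu (A\cap B)\) can be checked by Monte Carlo (the grid size and the test intervals below are arbitrary choices, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(4)

# White noise on [0, 1] with Lebesgue intensity: W(A) is the sum of
# independent N(0, dx) increments over the grid cells contained in A.
n_cells, n_rep = 200, 20000
dx = 1.0 / n_cells
xs = (np.arange(n_cells) + 0.5) * dx
Z = rng.standard_normal((n_rep, n_cells)) * np.sqrt(dx)

def W(a, b):
    """W([a, b)) for every Monte Carlo replication at once."""
    return Z[:, (xs >= a) & (xs < b)].sum(axis=1)

cov_overlap = float(np.mean(W(0.0, 0.6) * W(0.4, 1.0)))   # target: |(0.4, 0.6)| = 0.2
cov_disjoint = float(np.mean(W(0.0, 0.3) * W(0.7, 1.0)))  # target: 0 (independence)
```

The empirical covariance of the two overlapping intervals is close to the Lebesgue measure of their intersection, while disjoint sets give independent, hence uncorrelated, values.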

We now discuss the particular case \(X={\mathbb {R}}_+\times E,\) \({\mathcal {B}}={\mathcal {B}}({\mathbb {R}}_+)\otimes {\mathcal {E}}. \) Let \(\mu \) be a \(\sigma \)-finite measure on \( (E, {\mathcal {E}}) \) and define \({\mathcal {A}}= \{A\in {\mathcal {E}}\; \text {s.t.}\; \mu ( A)<\infty \} .\) For any \( t \ge 0 \) and \( A \in {\mathcal {A}},\) we put

$$\begin{aligned} W_t(A):=W((0,t]\times A). \end{aligned}$$

For all \(A\in {\mathcal {A}},\) \(\{W_t(A),\; t\ge 0\}\) is a centred Gaussian process with independent increments, hence an \({\mathbb {L}}^2\)-martingale. Thus \(W=\{W_t(A), t\ge 0, A\in {\mathcal {A}}\}\) is an \({\mathbb {L}}^2\)-martingale measure on \({\mathbb {R}}_+\times {\mathcal {A}},\) with intensity \( \nu (ds, dx) = ds\, \mu ( dx),\) with respect to its natural filtration \(({\mathcal {F}}_t)_{t\ge 0},\) where \({\mathcal {F}}_t:=\sigma \{W_u(A),\; u\le t,\; A\in {\mathcal {A}}\}.\) Finally, for any \(f\in {\mathbb {L}}^2_{\nu },\) we denote the stochastic integral by \((f\cdot W)_t=: \int _0^t\int _{E}f(s,x)d W(s,x).\)
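For this white-noise martingale measure, the covariation formula specialises to the isometry \({\mathbb {E}}\left[ (f\cdot W)_t^2\right] = \int _0^t\int _E f^2(s,x)\,ds\,\mu (dx)\). A discretised sketch with the hypothetical integrand \(f(s,x)=sx\) on \((0,1]\times [0,1]\) (grid resolution and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Space-time white noise on (0,1] x [0,1] with intensity ds dx, on a grid.
nt, nx, n_rep = 30, 30, 5000
dt, dx = 1.0 / nt, 1.0 / nx
s = (np.arange(nt) + 0.5) * dt
x = (np.arange(nx) + 0.5) * dx
f = np.outer(s, x)                                  # integrand f(s, x) = s * x
dW = rng.standard_normal((n_rep, nt, nx)) * np.sqrt(dt * dx)

# (f . W)_1 for each replication, then both sides of the isometry
integral = np.tensordot(dW, f, axes=([1, 2], [0, 1]))
var_mc = float(integral.var())
var_exact = float((f ** 2).sum() * dt * dx)         # ~ int s^2 x^2 ds dx = 1/9
```

The sample variance of the stochastic integral matches the deterministic integral of \(f^2\) against the intensity \(ds\,dx\), and the integral itself is centred, as it is a martingale started at zero.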


Cite this article

Erny, X., Löcherbach, E. & Loukianova, D. White-noise driven conditional McKean–Vlasov limits for systems of particles with simultaneous and random jumps. Probab. Theory Relat. Fields 183, 1027–1073 (2022). https://doi.org/10.1007/s00440-022-01139-8
