
Total variation distance between a jump-equation and its Gaussian approximation

Published in: Stochastics and Partial Differential Equations: Analysis and Computations

Abstract

We deal with stochastic differential equations with jumps. In order to obtain an accurate approximation scheme, it is usual to replace the “small jumps” by a Brownian motion. In this paper, we prove that for every fixed time t, the approximate random variable \(X^\varepsilon _t\) converges to the original random variable \(X_t\) in total variation distance and we estimate the error. We also give an estimate of the distance between the densities of the laws of the two random variables. These are done by using some integration by parts techniques in Malliavin calculus.


Data availability statement

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Alfonsi, A., Cancès, E., Turinici, G., Di Ventura, B., Huisinga, W.: Adaptive simulation of hybrid stochastic and deterministic models for biochemical systems. ESAIM Proc. 14, 1–13 (2005)


  2. Alfonsi, A., Cancès, E., Turinici, G., Di Ventura, B., Huisinga, W.: Exact simulation of hybrid stochastic and deterministic models for biochemical systems. Research Report RR-5435, INRIA (2004)

  3. Asmussen, S., Rosinski, J.: Approximations of small jumps of Lévy processes with a view towards simulation. J. Appl. Probab. 38, 482–493 (2001)


  4. Ball, K., Kurtz, T.G., Popovic, L., Rempala, G.: Asymptotic analysis of multiscale approximations to reaction networks. Ann. Appl. Probab. 16, 1925–1961 (2006)


  5. Bally, V., Caramellino, L., Poly, G.: Convergence in distribution norms in the CLT for non identical distributed random variables. Electron. J. Probab. 23, 51 (2018)


  6. Bally, V., Caramellino, L., Poly, G.: Regularization lemmas and convergence in total variation. Preprint hal-02429512 (2020)

  7. Bally, V., Clément, E.: Integration by parts formula and applications to equations with jumps. Probab. Th. Rel. Fields 151, 613–657 (2011)


  8. Bally, V., Goreac, D., Rabiet, V.: Regularity and stability for the semigroup of jump diffusions with state-dependent intensity. Ann. Appl. Probab. 28(5), 3028–3074 (2018)


  9. Bally, V., Pardoux, E.: Malliavin Calculus for white noise driven parabolic SPDEs. Potent. Anal. 9, 27–64 (1998)


  10. Bichteler, K., Gravereaux, J.B., Jacod, J.: Malliavin calculus for processes with jumps. Gordon and Breach (1987)

  11. Carpentier, A., Duval, C., Mariucci, E.: Total variation distance for discretely observed Lévy processes: a Gaussian approximation of the small jumps (2019) arXiv:1810.02998 [math.ST]

  12. Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman & Hall/CRC (2004)

  13. Crudu, A., Debussche, A., Muller, A., Radulescu, O.: Convergence of stochastic gene networks to hybrid piecewise deterministic processes. Ann. Appl. Probab. 22(5), 1822–1859 (2012)


  14. Dereich, S.: Multilevel Monte Carlo algorithms for Lévy-driven SDEs with Gaussian correction. Ann. Appl. Probab. 21(1), 283–311 (2011)


  15. Duval, C., Mariucci, E.: Spectral free estimates of Lévy densities in high level frequency, to appear in Bernoulli. arXiv:1702.08787 [math.PR]

  16. Fournier, N.: Simulation and approximation of Lévy driven stochastic differential equations. ESAIM Probab. Stat. 15, 249–269 (2011)


  17. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, 2nd edn. North-Holland, Amsterdam (1989)

  18. Ishikawa, Y.: Stochastic Calculus of Variations for Jump Processes. De Gruyter, Berlin/Boston (2013)


  19. Jum, E.: Numerical approximation of stochastic differential equations driven by Lévy motion with infinitely many jumps. University of Tennessee, PhD diss. (2015)

  20. Kohatsu-Higa, A., Tankov, P.: Jump-adapted discretization schemes for Lévy-driven SDEs. Stoch. Process. Appl. 120, 2258–2285 (2010)


  21. Kohatsu-Higa, A., Ortiz-Latorre, S., Tankov, P.: Optimal simulation schemes for Lévy driven stochastic differential equations. Math. Comput. 83, 2293–2324 (2014)


  22. Kulik, A.M.: Malliavin calculus for Lévy processes with arbitrary Lévy measures. Theor. Probab. Math. Stat. 72, 75–92 (2006)


  23. Kulik, A.M.: Stochastic calculus of variations for general Lévy processes and its applications to jump-type SDE’s with non-degenerated drift (2007) arXiv:math/0606427

  24. Kunita, H.: Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. In: Rao, M.M. (ed.) Real and Stochastic Analysis, pp. 305–373. Birkhäuser, Boston (2004)


  25. Kunita, H.: Stochastic Flows and Jump-Diffusions. Springer (2019)

  26. Kurtz, T.: Limit theorems for sequences of jump Markov processes approximating ordinary differential processes. J. Appl. Prob. 8, 344–356 (1971)


  27. Kurtz, T.: Strong approximation theorems for density dependent Markov chains. Stoch. Proc. Appl. 6, 223–240 (1978)


  28. Marinelli, C., Röckner, M.: On the maximal inequalities of Burkholder, Davis and Gundy. Expo. Math. 34 (2016)

  29. Mariucci, E., Reiß, M.: Wasserstein and total variation distance between marginals of Lévy processes. Electron. J. Stat. 12, 2482–2514 (2018)


  30. Mordecki, E., Szepessy, A., Tempone, R., Zouraris, G.E.: Adaptive weak approximation of diffusions with jumps (2006). arXiv:math/0609186v1

  31. Nualart, D.: The Malliavin Calculus and Related Topics. Springer (2006)

  32. Protter, P., Talay, D.: The Euler scheme for Lévy driven stochastic differential equations. Ann. Probab. 25(1), 393–423 (1997)


  33. Song, Y., Zhang, X.: Regularity of density for SDEs driven by degenerate Lévy noises (2014). arXiv:1401.4624

  34. Walsh, J.B.: An introduction to stochastic partial differential equations. In: Hennequin, L. (ed.) Ecole d’été de probabilités de Saint Flour, pp. 265–439. Springer (1984)

  35. Zhang, X.: Densities for SDEs driven by degenerate \(\alpha \)-stable processes. Ann. Probab. 42(5), 1885–1910 (2014)



Author information


Corresponding author

Correspondence to Vlad Bally.

Additional information

This article is dedicated to István Gyöngy on the occasion of his 70th birthday.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

5.1 Proof of Lemma 4.1

In the following, we work only with the measure \(\nu \) supported on \([1,\infty )\) and with the processes \((\widehat{X}_t)_{t\in [0,T]}\) and \((\widehat{X}_t^M)_{t\in [0,T]}\). To simplify the notation, from now on we write \(\widehat{X}_t=X_t\) and \(\widehat{X}_t^M=X_t^M\). We remark that \(M=\frac{1}{\varepsilon }\) is generally not an integer; for simplicity, we assume in the following that M is an integer.

Here is the idea of the proof. Since \(X_t^M\) is not a simple functional, we first construct the Euler scheme \((X_{t}^{n,M})_{t\in [0,T]}\) in Sect. 5.1.1 and check that \(X_t^{n,M}\rightarrow X_t^M\) in \(L^1\) as \(n\rightarrow \infty \). We then prove in Sect. 5.1.3 that \(\mathbb {E}\vert X_t^{n,M}\vert _l^p\) and \(\mathbb {E}\vert LX_t^{n,M}\vert _l^p\) are bounded uniformly in n and M. Based on Lemma 3.2, we obtain that \(X^M_t\in \mathcal {D}_\infty \) and that the norms \(\Vert X_t^{M}\Vert _{L,l,p}\) are bounded uniformly in M.
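The substitution underlying \(X^\varepsilon \) replaces the jumps below the threshold by a Gaussian term whose variance matches the variance of the removed small jumps, in the spirit of [3]. A minimal numerical sketch of this variance matching (all names hypothetical; the density \(z^{-3/2}\) is a stand-in, not the measure used in the paper):

```python
def sigma2_small_jumps(eps, levy_density, n=100_000):
    """Gaussian variance matching the removed small jumps:
    sigma^2(eps) = int_0^eps z^2 nu(dz), via midpoint quadrature."""
    h = eps / n
    return sum(((i + 0.5) * h) ** 2 * levy_density((i + 0.5) * h) * h
               for i in range(n))

# Hypothetical stand-in density nu(dz) = z^(-3/2) dz, for which
# sigma^2(eps) = (2/3) * eps^(3/2) in closed form.
eps = 0.1
approx = sigma2_small_jumps(eps, lambda z: z ** -1.5)
exact = (2.0 / 3.0) * eps ** 1.5
```

The quadrature value agrees with the closed-form variance, which is exactly the coefficient one would put in front of the compensating Brownian motion.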

5.1.1 Construction of the Euler scheme

We take a time-partition \(\mathcal {P}_{t}^{n}=\{r_j=\frac{jt}{n}, j=0,\ldots ,n\}\) and a space-partition \(\tilde{\mathcal {P}}_{M}^{n}=\{z_j=M+\frac{j}{n}, j=0,1,\ldots \}\). We denote \(\tau _{n}(r)=r_j\) when \(r\in [r_j,r_{j+1})\), and denote \(\gamma _{n}(z)=z_j\) when \(z\in [z_j,z_{j+1})\). Let

$$\begin{aligned} X_{t}^{n,M}= & {} x+\int _{0}^{t}\int _{[1,M)}\widetilde{c}(\tau _{n}(r),z,X_{\tau _{n}(r)-}^{n,M })N_{\nu }(dr,dz) \nonumber \\&+\int _{0}^{t}b_{M }(\tau _{n}(r),X_{\tau _{n}(r)}^{n,M })dr\nonumber \\&+\int _{0}^{t}\int _{\{z \ge M \}}\widetilde{c}(\tau _{n}(r),\gamma _{n}(z),X_{\tau _{n}(r)}^{n,M })W_\nu (dr,dz). \end{aligned}$$
(52)
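Computationally, the projections \(\tau _n\) and \(\gamma _n\) in (52) are just floor operations onto the left endpoints of the grids; a small sketch (hypothetical helper names):

```python
import math

def tau_n(r, n, t):
    """Left endpoint r_j = j*t/n of the time cell [r_j, r_{j+1}) containing r."""
    return math.floor(r * n / t) * t / n

def gamma_n(z, n, M):
    """Left endpoint z_j = M + j/n of the space cell containing z >= M."""
    return M + math.floor((z - M) * n) / n

# Any r in [r_j, r_{j+1}) is sent back to r_j, and similarly for z.
left_time = tau_n(0.26, n=4, t=1.0)
left_space = gamma_n(5.37, n=10, M=5)
```

As n grows both maps converge pointwise to the identity, which is what drives the convergence of the scheme.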

We then have the following lemma.

Lemma 5.1

Assume that Hypothesis 2.1 holds true with \(q^*\ge 1\). Then for any \(p\ge 1\), \(M\ge 1\), we have \(\mathbb {E}\vert X_{t}^{n,M}-X_{t}^{M}\vert ^p\rightarrow 0\) as \(n\rightarrow \infty \).

Proof

We first notice that since \(\bar{c}(z)\) (in Hypothesis 2.1) is decreasing, \(\sup \limits _{n\in \mathbb {N}}\bar{c}(\gamma _{n}(z))\le \bar{c}(\gamma _{1}(z)).\) So

$$\begin{aligned} \int _{1}^\infty \sup \limits _{n\in \mathbb {N}}\vert \bar{c}(\gamma _{n}(z))\vert ^2\nu (dz)\le & {} \int _{1}^\infty \vert \bar{c}(\gamma _{1}(z))\vert ^2\nu (dz)\le \vert \bar{c}(1)\vert ^2\nu [1,2]\nonumber \\&+\int _{1}^\infty \vert \bar{c}(z)\vert ^2\nu (dz) \le C<\infty . \end{aligned}$$
(53)

Then by the Lebesgue dominated convergence theorem, (53) implies that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\sup \limits _{x\in \mathbb {R}}\int _0^T\int _{[1,\infty )}\vert \widetilde{c}(s,z,x)-\widetilde{c}(\tau _{n}(s),\gamma _{n}(z),x)\vert ^2\nu (dz)ds=0, \end{aligned}$$
(54)

and

$$\begin{aligned} \sup \limits _{n\in \mathbb {N}}\sup \limits _{x\in \mathbb {R}}\int _0^T\int _{[1,\infty )}\vert \widetilde{c}(\tau _{n}(s),\gamma _{n}(z),x)\vert ^2\nu (dz)ds\le C. \end{aligned}$$
(55)

In the following proof, \(C_p(T)\) denotes a constant depending on p and T which may change from line to line. For \(p\ge 2\), we write \(\mathbb {E}\vert X_{t}^{n,M}-X_{t}^{M}\vert ^p\le C_p(T)(E_1+E_2+E_3)\), where

$$\begin{aligned} E_1&=\mathbb {E}\left| \int _0^t\int _{[1,M)}\widetilde{c}(\tau _n(r),z,X_{\tau _{n}(r)-}^{n,M })-\widetilde{c}(r,z,X_{r-}^{M })N_\nu (dr,dz)\right| ^p, \\ E_2&=\mathbb {E}\left| \int _{0}^{t} b_{M }(\tau _{n}(r),X_{\tau _{n}(r)}^{n,M })-b_{M}(r,X_{r}^{M })dr\right| ^p, \\ E_3&=\mathbb {E}\left| \int _{0}^{t}\int _{\{z \ge M \}}\widetilde{c}(\tau _{n}(r),\gamma _{n}(z),X_{\tau _{n}(r)}^{n,M})-\widetilde{c}(r,z,X_{r}^{M })W_\nu (dr,dz)\right| ^p. \end{aligned}$$

Then, compensating \(N_\nu \) and using Burkholder’s inequality (see, for example, Theorem 2.11 in [24]),

$$\begin{aligned} E_1\le & {} C_p(T)\left[ \mathbb {E}\int _0^t(\int _{[1,M)}\vert \widetilde{c}(\tau _n(r),z,X_{\tau _{n}(r)-}^{n,M })-\widetilde{c}(r,z,X_{r-}^{M })\vert ^2\nu (dz))^{\frac{p}{2}}dr\right. \\&\left. +\mathbb {E}\int _0^t\int _{[1,M)}\vert \widetilde{c}(\tau _n(r),z,X_{\tau _{n}(r)-}^{n,M })-\widetilde{c}(r,z,X_{r-}^{M })\vert ^p\nu (dz)dr\right. \\&\left. +\mathbb {E}\int _0^t\vert \int _{[1,M)} \widetilde{c}(\tau _n(r),z,X_{\tau _{n}(r)-}^{n,M })-\widetilde{c}(r,z,X_{r-}^{M })\nu (dz)\vert ^pdr\right] \\\le & {} C_p(T)\left[ R_n^1+((\bar{c}_2)^{\frac{p}{2}}+\bar{c}_p+(\bar{c}_1)^p)\int _0^t\mathbb {E}\vert X_{\tau _{n}(r)}^{n,M }-X_{r}^{M }\vert ^pdr\right] , \end{aligned}$$

with

$$\begin{aligned} R_n^1= & {} \mathbb {E}\int _0^t\left( \int _{[1,M)}\vert \widetilde{c}(\tau _n(r),z,X_{r-}^{M })-\widetilde{c}(r,z,X_{r-}^{M })\vert ^2\nu (dz)\right) ^{\frac{p}{2}}dr\\&+\mathbb {E}\int _0^t\int _{[1,M)}\vert \widetilde{c}(\tau _n(r),z,X_{r-}^{M })-\widetilde{c}(r,z,X_{r-}^{M })\vert ^p\nu (dz)dr\\&+\mathbb {E}\int _0^t\left| \int _{[1,M)} \widetilde{c}(\tau _n(r),z,X_{r-}^{M })-\widetilde{c}(r,z,X_{r-}^{M })\nu (dz)\right| ^pdr. \end{aligned}$$

Since \(\vert \widetilde{c}(\tau _n(r),z,X_{r-}^{M })-\widetilde{c}(r,z,X_{r-}^{M })\vert ^p\le \vert 2\bar{c}(z)\vert ^p\in L^1(\Omega \times [1,\infty )\times [0,T],\mathbb {P}\times \nu \times Leb)\), we apply Lebesgue’s dominated convergence theorem and obtain that \(R_n^1\rightarrow 0\). Next,

$$\begin{aligned} E_2\le & {} {C}_p(T)\mathbb {E}\int _0^t\left| \int _{\{z\ge M \}}\widetilde{c}(\tau _n(r),z,X_{\tau _{n}(r)}^{n,M })-\widetilde{c}(r,z,X_{r}^{M })\nu (dz)\right| ^pdr\\\le & {} {C}_p(T)\left[ R_n^2+(\bar{c}_1)^p\int _0^t\mathbb {E}\vert X_{\tau _{n}(r)}^{n,M }-X_{r}^{M }\vert ^pdr\right] , \end{aligned}$$

with

$$\begin{aligned} R_n^2=\mathbb {E}\int _0^t\left| \int _{\{z\ge M\}} \widetilde{c}(\tau _n(r),z,X_{r}^{M })-\widetilde{c}(r,z,X_{r}^{M })\nu (dz)\right| ^pdr\rightarrow 0. \end{aligned}$$

Finally, using Burkholder’s inequality,

$$\begin{aligned} E_3\le & {} {C}_p(T)\mathbb {E}\int _{0}^{t}\left( \int _{\{z \ge M \}}\vert \widetilde{c}(\tau _{n}(r),\gamma _{n}(z),X_{\tau _{n}(r)}^{n,M})-\widetilde{c}(r,z,X_{r}^{M })\vert ^2\nu (dz)\right) ^\frac{p}{2}dr\\\le & {} {C}_p(T)\left[ R_n^3+(\bar{c}_2)^\frac{p}{2}\int _0^t\mathbb {E}\vert X_{\tau _{n}(r)}^{n,M }-X_{r}^{M }\vert ^pdr\right] , \end{aligned}$$

where (by (54)),

$$\begin{aligned} R_n^3=\mathbb {E}\int _{0}^{t}\left( \int _{\{z \ge M \}}\vert \widetilde{c}(\tau _{n}(r),\gamma _{n}(z),X_{r}^{M})-\widetilde{c}(r,z,X_{r}^{M })\vert ^2\nu (dz)\right) ^\frac{p}{2}dr\rightarrow 0. \end{aligned}$$

Therefore, \(\mathbb {E}\vert X_{t}^{n,M}-X_{t}^{M}\vert ^p\le {C_p}(T)[R_{n}+\int _0^t\mathbb {E}\vert X_{\tau _{n}(r)}^{n,M }-X_{r}^{M }\vert ^pdr]\), with \(R_{n}=R_n^1+R_n^2+R_n^3\rightarrow 0\) as \(n\rightarrow \infty \). One can easily check that \(\mathbb {E}\vert X_{t}^{n,M}-X_{\tau _{n}(t)}^{n,M}\vert ^p\rightarrow 0\). Also there exists a constant \(C_p(T)\) depending on p and T such that for any \(n,M\in \mathbb {N}\) and any \(t\in [0,T]\), \(\mathbb {E}\vert X_{t}^{n,M}\vert ^p\le C_p(T)\) (see (71) for details). Then, by the dominated convergence theorem, these yield \(\int _0^t\mathbb {E}\vert X_{r}^{n,M}-X_{\tau _{n}(r)}^{n,M}\vert ^pdr\rightarrow 0\). So we have \(\mathbb {E}\vert X_{t}^{n,M}-X_{t}^{M}\vert ^p\le {C_p}(T)[\widetilde{R_{n}}+\int _0^t\mathbb {E}\vert X_{r}^{n,M }-X_{r}^{M }\vert ^pdr]\), with \(\widetilde{R_{n}}\rightarrow 0\) as \(n\rightarrow \infty \). We conclude by using Gronwall’s lemma. \(\square \)
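The closing Gronwall argument asserts that \(u(t)\le R+C\int _0^t u(r)dr\) forces \(u(t)\le Re^{Ct}\). A discrete sketch of this mechanism (hypothetical names), iterating the equality case \(u_k=R+Ch\sum _{j<k}u_j\) and checking it never exceeds the exponential bound:

```python
import math

def gronwall_worst_case(R, C, h, steps):
    """Iterate u_k = R + C*h*sum_{j<k} u_j, the equality case of the
    discrete Gronwall inequality, and return the trajectory."""
    u, total = [], 0.0
    for _ in range(steps):
        uk = R + C * h * total
        u.append(uk)
        total += uk
    return u

R, C, h, steps = 1.0, 2.0, 0.001, 1000
u = gronwall_worst_case(R, C, h, steps)
# Each u_k stays below the continuous Gronwall bound R*exp(C*k*h):
ok = all(uk <= R * math.exp(C * (k + 1) * h) for k, uk in enumerate(u))
```

Here the worst case is exactly \(u_k=R(1+Ch)^k\), which sits below \(Re^{Ckh}\); in the proof, \(R=\widetilde{R_n}\rightarrow 0\), so the exponential factor kills the error uniformly on \([0,T]\).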

Remark

Some results on the convergence of the Euler scheme for a jump-diffusion can be found, for example, in [30, 32]. The specific feature of our setting is that we deal with the space-time Brownian motion instead of the classical Brownian motion, and this is why we need (54).

Now we represent the jump part of \((X_t^{n,M})_{t\in [0,T]}\) by means of compound Poisson processes. We recall that for each \(k\in \mathbb {N}\), we denote by \(T_i^k,i\in \mathbb {N}\), the jump times of a Poisson process \((J_t^k)_{t\in [0,T]}\) with parameter \(m_k\), and we consider a sequence of independent random variables \(Z_i^k\sim \mathbb {1}_{I_k}(z)\frac{\nu (dz)}{m_k}, i\in \mathbb {N}\), which are also independent of \(J^k\). Then we write

$$\begin{aligned} X_{t}^{n,M }= & {} x+\int _{0}^{t}\sum _{k=1}^{M-1}\int _{\{z \in I_k\}}\widetilde{c}(\tau _{n}(r),z,X_{\tau _{n}(r)-}^{n,M})N_{\nu }(dr,dz) \nonumber \\&+\int _{0}^{t}b_{M}(\tau _{n}(r),X_{\tau _{n}(r)}^{n,M})dr \nonumber \\&+\int _{0}^{t}\int _{\{z\ge M\}}\widetilde{c}(\tau _{n}(r),\gamma _{n}(z),X_{\tau _{n}(r)}^{n,M})W_\nu (dr,dz) \nonumber \\= & {} x+\sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}\widetilde{c}(\tau _{n}(T_i^k),Z_i^k,X_{\tau _{n}(T_i^k)-}^{n,M}) \nonumber \\&+\int _{0}^{t}b_{M}(\tau _{n}(r),X_{\tau _{n}(r)}^{n,M})dr\nonumber \\&+\int _{0}^{t}\int _{\{z\ge M\}}\widetilde{c}(\tau _{n}(r),\gamma _{n}(z),X_{\tau _{n}(r)}^{n,M})W_\nu (dr,dz) . \end{aligned}$$
(56)

So for every \(t\in [0,T]\), \(X_{t}^{n,M }\) is a simple functional.
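Representation (56) turns the jump integral into a finite sum over the jump times of each band. Simulating one band is straightforward; a sketch with hypothetical names and a made-up mark sampler (uniform on \(I_k=[k,k+1)\)):

```python
import random

def simulate_band(t, m_k, sample_Z, rng):
    """Jump times T_i^k of a Poisson process of intensity m_k on [0, t],
    paired with i.i.d. marks Z_i^k; returns the list of (T_i^k, Z_i^k)."""
    pairs, s = [], 0.0
    while True:
        s += rng.expovariate(m_k)   # i.i.d. exponential inter-arrival times
        if s > t:
            break
        pairs.append((s, sample_Z(rng)))
    return pairs

rng = random.Random(0)
# Hypothetical mark law: uniform on the band I_k = [1, 2).
jumps = simulate_band(t=10.0, m_k=3.0, sample_Z=lambda r: 1.0 + r.random(), rng=rng)
# The jump contribution in (56) is then the sum of c(tau_n(T_i), Z_i, X_-)
# over these pairs, which makes X_t^{n,M} a simple functional.
```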

5.1.2 Preliminary estimates

In order to estimate the Sobolev norms of the Euler scheme, we need the following preliminary lemmas.

Lemma 5.2

We fix \(M\ge 1\). Let \(y:\Omega \times [0,T]\times [M,\infty )\rightarrow \mathbb {R}\) be a function which is piecewise constant with respect to both t and z. We assume that \(y_t(z)\) is progressively measurable with respect to \(\mathcal {F}_t\) (defined in (11)), \(y_t(z)\in \mathcal {S}\), and \(\mathbb {E}(\int _{0}^{t}\int _{\{z\ge M \}}\left| y_r(z)\right| ^{2}\nu (dz)dr)<\infty \). We denote \(I_t({y})=\int _0^t\int _{\{z\ge M\}}{y}_r(z)W_\nu (dr,dz)\). Then for any \(l\ge 1,p\ge 2\), there exists a constant \(C_{l,p}(T)\) such that

$$\begin{aligned}&a)\quad \mathbb {E}\vert I_t(y)\vert _{l}^p\le C_{l,p}(T)\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert y_r(z)\vert _{l}^2\nu (dz)\right) ^{\frac{p}{2}}dr, \\&\quad b)\quad \mathbb {E}\vert LI_t(y)\vert _l^p\le C_{l,p}(T)[\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert Ly_r(z)\vert _{l}^2\nu (dz)\right) ^{\frac{p}{2}}dr\nonumber \\&\qquad +\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert y_r(z)\vert _{l}^2\nu (dz)\right) ^{\frac{p}{2}}dr]. \end{aligned}$$

Proof

Proof of a): Let \(C_{l,p}(T)\) be a constant depending on l, p and T which may change from one line to another. For any \(l\ge 1\), we take \(l_W\ge 0\) and \(l_Z\ge 0\) such that \(0<l_W+l_Z\le l\).

It is easy to check that

$$\begin{aligned} D^{Z,l_Z}_{(k_1,i_1)\cdots (k_{l_Z},i_{l_Z})}I_t(y)=\int _0^t\int _{\{z\ge M\}}D^{Z,{l_Z}}_{(k_1,i_1)\cdots (k_{l_Z},i_{l_Z})} y_r(z)W_\nu (dr,dz). \end{aligned}$$

By induction, one can show that

$$\begin{aligned} D^{W,{l_W}}_{(s_1,z_1)\cdots (s_{l_W},z_{l_W})}I_t(y)&=\int _0^t\int _{\{z\ge M\}}D^{W,{l_W}}_{(s_1,z_1)\cdots (s_{l_W},z_{l_W})} y_r(z)W_\nu (dr,dz)\\&\quad +\sum _{j=1}^{l_W}D^{W,{{l_W}-1}}_{\widehat{(s_j,z_j)}^{{l_W}-1}}y_{s_j}(z_j)\mathbb {1}_{s_j\le t}, \end{aligned}$$

with

$$\begin{aligned} \widehat{(s_j,z_j)}^{{l_W}-1}:=(s_1,z_1)\cdots (s_{j-1},z_{j-1})(s_{j+1},z_{j+1})\cdots (s_{l_W},z_{l_W}). \end{aligned}$$

We denote

$$\begin{aligned} \bar{y}_r(z)(k_1,i_1,\ldots ,k_{l_Z},i_{l_Z}):=D^{Z,{l_Z}}_{(k_1,i_1)\cdots (k_{l_Z},i_{l_Z})} y_r(z),\quad \bar{y}_r^{l_Z}(z):=D^{Z,{l_Z}}y_r(z)\in l_2^{\otimes l_Z}. \end{aligned}$$

Then \(D^{Z,{l_Z}}I_t(y)=I_t(\bar{y}^{l_Z})\), and

$$\begin{aligned}&D^{W,l_W}_{(s_1,z_1)\cdots (s_{l_W},z_{l_W})}D^{Z,l_Z}_{(k_1,i_1)\cdots (k_{l_Z},i_{l_Z})}I_t(y)\\&\quad =\int _0^t\int _{\{z\ge M\}}D^{W,l_W}_{(s_1,z_1)\cdots (s_{l_W},z_{l_W})} \bar{y}_r(z)(k_1,i_1,\ldots ,k_{l_Z},i_{l_Z})W_\nu (dr,dz)\\&\qquad +\sum _{j=1}^{l_W}D^{W,{l_W-1}}_{\widehat{(s_j,z_j)}^{l_W-1}}\bar{y}_{s_j}(z_j)(k_1,i_1,\ldots ,k_{l_Z},i_{l_Z})\mathbb {1}_{s_j\le t}. \end{aligned}$$

Let \({H}_{{l_Z},{l_W},T}=l_2^{\otimes {l_Z}}\otimes L^2([0,T]\times [M,\infty ),Leb\times \nu )^{\otimes {l_W}}.\) We have

$$\begin{aligned}&\vert D^{W,{l_W}}D^{Z,{l_Z}}I_t(y)\vert _{H_{{l_Z},{l_W},T}}^2\\&\quad =\int _{[0,T]^{l_W}}\int _{[M,\infty )^{l_W}}\vert D^{W,{l_W}}_{(s_1,z_1)\cdots (s_{l_W},z_{l_W})} I_t(\bar{y}^{l_Z})\vert _{l_2^{\otimes {l_Z}}}^2\nu (dz_1)ds_1\cdots \nu (dz_{l_W})ds_{l_W}\\&\quad \le 2\left| \int _0^t\int _{\{z\ge M\}}D^{W,{l_W}} \bar{y}^{l_Z}_r(z)W_\nu (dr,dz)\right| _{H_{{l_Z},{l_W},T}}^2 \\&\qquad + {l_W}2^{l_W}\int _0^t\int _{\{z\ge M\}}\vert D^{W,{{l_W}-1}} \bar{y}^{l_Z}_r(z)\vert _{H_{{l_Z},{l_W}-1,T}}^2\nu (dz)dr. \end{aligned}$$

Using Burkholder’s inequality for Hilbert-space-valued martingales (see [28] for example), we have

$$\begin{aligned}&\mathbb {E}\vert D^{W,{l_W}}D^{Z,{l_Z}}I_t(y)\vert _{H_{{l_Z},{l_W},T}}^p\nonumber \\&\quad \le C_{l,p}(T)\left[ \mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert D^{W,{l_W}}D^{Z,{l_Z}}y_r(z)\vert _{H_{{l_Z},{l_W},T}}^2\nu (dz))^{\frac{p}{2}}dr \right. \nonumber \\&\left. \qquad +\mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert D^{W,{{l_W}-1}}D^{Z,{l_Z}}y_r(z)\vert _{H_{{l_Z},{l_W}-1,T}}^2\nu (dz))^{\frac{p}{2}}dr\right] . \end{aligned}$$
(57)

We recall that for \(F\in \mathcal {D}_\infty \), we have \(\vert D^{W,{l_W}}D^{Z,{l_Z}}F\vert _{H_{{l_Z},{l_W},T}}\le \vert F\vert _{{l_Z}+{l_W}}\) (see the definition in (33)). Then (57) gives

$$\begin{aligned} \mathbb {E}\vert I_t(y)\vert _{1,l}^p&\le C_{l,p}(T)\sum \limits _{{l_Z}+{l_W}\le l}\mathbb {E}\vert D^{W,{l_W}}D^{Z,{l_Z}}I_t(y)\vert _{H_{{l_Z},{l_W},T}}^p\nonumber \\&\le {C}_{l,p}(T)\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert y_r(z)\vert _{l}^2\nu (dz)\right) ^{\frac{p}{2}}dr. \end{aligned}$$
(58)

Finally, using Burkholder’s inequality, we have

$$\begin{aligned} \mathbb {E}\vert I_t(y)\vert ^p\le {C}_{l,p}(T)\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert y_r(z)\vert ^2\nu (dz)\right) ^{\frac{p}{2}}dr. \end{aligned}$$
(59)

So a) is proved.

Proof of b): We first show that

$$\begin{aligned} LI_t(y)=I_t(Ly)+I_t(y). \end{aligned}$$
(60)

We denote

$$\begin{aligned}&I_k^t(f_k)=k!\int _0^t\int _0^{s_1}\cdots \int _0^{s_{k-1}}\int _{[M,+\infty )^k}f_k(s_1,\ldots ,s_k,z_1,\ldots ,z_k)\\&\quad W_\nu (ds_k,dz_k)\cdots W_\nu (ds_1,dz_1) \end{aligned}$$

the multiple stochastic integral of a deterministic function \(f_k\), which is square integrable with respect to \((\nu (dz)ds)^{\otimes k}\) and symmetric in the time variables \((s_1,\ldots ,s_k)\) for each fixed \((z_1,\ldots ,z_k)\). Notice that \(L^ZI_k^t(f_k)=0\) and \(L^WI_k^t(f_k)=kI_k^t(f_k)\). So \(LI_k^t(f_k)=kI_k^t(f_k)\). Then by the duality relation (32),

$$\begin{aligned} \mathbb {E}(I_k^t(f_k)L(I_t(y)))=\mathbb {E}(I_t(y)\times LI_k^t(f_k))=k\mathbb {E}(I_t(y)\times I_k^t(f_k)). \end{aligned}$$
(61)

On the other hand, using the isometry property and the duality relation,

$$\begin{aligned} \mathbb {E}(I_k^t(f_k)\times I_t(Ly))= & {} k\mathbb {E}\int _0^t\int _{\{z\ge M\}}I_{k-1}^r(f_k(r,z,\cdot ))Ly_r(z)\nu (dz)dr\\= & {} k\int _0^t\int _{\{z\ge M\}}\mathbb {E}[y_r(z)\times LI_{k-1}^r(f_k(r,z,\cdot ))]\nu (dz)dr\\= & {} k(k-1)\mathbb {E}\int _0^t\int _{\{z\ge M\}}y_r(z)I_{k-1}^r(f_k(r,z,\cdot ))\nu (dz)dr\\= & {} k(k-1)\mathbb {E}(I_t(y)\times \int _0^t\int _{\{z\ge M\}}I_{k-1}^r(f_k(r,z,\cdot ))W_\nu (dr,dz))\\= & {} (k-1)\mathbb {E}(I_t(y)\times I_{k}^t(f_k)) . \end{aligned}$$

Combining this with (61), we get

$$\begin{aligned} \mathbb {E}(I_k^t(f_k)(I_t(y)+I_t(Ly)))=k\mathbb {E}(I_k^t(f_k)I_t(y))=\mathbb {E}[I_k^t(f_k)\times LI_t(y)]. \end{aligned}$$
(62)

Since every element of \(L^2(W)\) (defined in (12)) can be represented as a sum of multiple stochastic integrals, we have for any \(F\in L^2(W)\),

$$\begin{aligned} \mathbb {E}[FLI_t(y)] =\mathbb {E}[F(I_t(Ly)+I_t(y))] . \end{aligned}$$
(63)

For \(G\in L^2(N)\), one has \(L^WG=0\) and \(L^ZG\in L^2(N)\). Then by using duality and (13),

$$\begin{aligned} \mathbb {E}[GLI_t(y)]=\mathbb {E}[I_t(y)LG]=\mathbb {E}[I_t(y)L^ZG]=0, \end{aligned}$$

and by (13),

$$\begin{aligned} \mathbb {E}[G(I_t(Ly)+I_t(y))]=0. \end{aligned}$$

So,

$$\begin{aligned} \mathbb {E}[GLI_t(y)] =\mathbb {E}[G(I_t(Ly)+I_t(y))] . \end{aligned}$$
(64)

Combining (63) and (64), for any \(\tilde{G}\in L^2(W)\otimes L^2(N)\), we have \(\mathbb {E}[\tilde{G}LI_t(y)] =\mathbb {E}[\tilde{G}(I_t(Ly)+I_t(y))]\), which proves (60).

Then, by part a),

$$\begin{aligned} \mathbb {E}\vert LI_t(y)\vert _{l}^p\le & {} 2^{p-1}\left( \mathbb {E}\vert \int _0^t\int _{\{z\ge M\}}Ly_r(z)W_\nu (dr,dz)\vert _l^p+\mathbb {E}\vert \int _0^t\int _{\{z\ge M\}}y_r(z)W_\nu (dr,dz)\vert _l^p\right) \\\le & {} C_{l,p}(T)\left[ \mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert Ly_r(z)\vert _{l}^2\nu (dz))^{\frac{p}{2}}dr+\mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert y_r(z)\vert _{l}^2\nu (dz))^{\frac{p}{2}}dr\right] . \end{aligned}$$

\(\square \)

We will also need the following lemma from [7] (Lemma 8 and Lemma 10), which is a consequence of the chain rule for \(D^q\) and L.

Lemma 5.3

Let \(F\in \mathcal {S}^d\). For every \(l\in \mathbb {N}\), if \(\phi : \mathbb {R}^d\rightarrow \mathbb {R}\) is a \(C^{l}(\mathbb {R}^d)\) function (an \(l\)-times differentiable function), then

$$\begin{aligned} a)\quad \vert \phi (F)\vert _{1,l}\le \vert \nabla \phi (F)\vert \vert F\vert _{1,l}+C_l\sup _{2\le \vert \beta \vert \le l}\vert \partial ^\beta \phi (F)\vert \vert F\vert _{1,l-1}^{l}. \end{aligned}$$

If \(\phi \in C^{l+2}(\mathbb {R}^d)\), then

$$\begin{aligned} b)\quad \vert L\phi (F)\vert _{l}\le \vert \nabla \phi (F)\vert \vert LF\vert _{l}+C_l\sup _{2\le \vert \beta \vert \le l+2}\vert \partial ^\beta \phi (F)\vert (1+\vert F\vert _{l+1}^{l+2})(1+\vert LF\vert _{l-1}). \end{aligned}$$

For \(l=0\), we have

$$\begin{aligned} c)\quad \vert L\phi (F)\vert \le \vert \nabla \phi (F)\vert \vert LF\vert +\sup _{\vert \beta \vert =2}\vert \partial ^\beta \phi (F)\vert \vert F\vert _{1,1}^{2}. \end{aligned}$$

We finish this section with a first estimate concerning the operator L.

Lemma 5.4

Under Hypothesis 2.4 (either 2.4(a) or 2.4(b)), for every \(p\ge 2\), \(\tilde{p}\ge 1\), \(l\ge 0\), there exists a constant \(C_{l,p,\tilde{p}}(T)\) such that

$$\begin{aligned} \sup _{M\in \mathbb {N}}\mathbb {E} \left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}\bar{c}(Z_i^k)\vert LZ_i^k\vert _l^{\tilde{p}}\right) ^{p}\le C_{l,p,\tilde{p}}(T). \end{aligned}$$
(65)

Proof

We notice that (with \(\psi _k\) given in (44)), \(LZ_i^k=\xi _i^k(\ln {\psi _k})^{\prime }(V_i^k)\) and \(D^{W,l}LZ^k_i=0\). Moreover,

$$\begin{aligned}&D^{Z,l}_{(r_1,m_1)\cdots (r_l,m_l)}LZ_i^k\\&=\prod \limits _{j=1}^l(\delta _{r_jk}\delta _{m_ji})\xi _i^k(\ln {\psi _k})^{(l+1)}(V_i^k), \end{aligned}$$

with \(\delta _{rk}\) the Kronecker delta, so that

$$\begin{aligned} \vert LZ_i^k\vert _l=\xi _i^k\sum \limits _{0\le \tilde{l}\le l}\vert (\ln {\psi _k})^{(\tilde{l}+1)}(V_i^k)\vert . \end{aligned}$$
(66)

It follows that

$$\begin{aligned}&\mathbb {E} \left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}\bar{c}(Z_i^k)\vert LZ_i^k\vert _l^{\tilde{p}}\right) ^{p}\\&\quad \le C_{l,p,\tilde{p}}\sum \limits _{0\le \tilde{l}\le l}\mathbb {E} \left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}\bar{c}(Z_i^k)\xi _i^k\vert (\ln {\psi _k})^{(\tilde{l}+1)}(V_i^k)\vert ^{\tilde{p}}\right) ^{p}. \end{aligned}$$

Since \(\bar{c}(Z_i^k)\xi _i^k=\bar{c}(V_i^k)\xi _i^k\), we may replace \(Z_i^k\) by \(V_i^k\) in the right hand side of the above estimate. This gives

$$\begin{aligned}&C_{l,p,\tilde{p}}\sum \limits _{0\le \tilde{l}\le l}\mathbb {E} \left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}\bar{c}(V_i^k)\xi _i^k\vert (\ln {\psi _k})^{(\tilde{l}+1)}(V_i^k)\vert ^{\tilde{p}}\right) ^{p}\\&\quad =C_{l,p,\tilde{p}}\sum \limits _{0\le \tilde{l}\le l}\mathbb {E} \vert \int _0^t\int _{[1,M)}\int _{\{0,1\}}\bar{c}(v)\xi \vert (\ln {\bar{\psi }})^{(\tilde{l}+1)}(v)\vert ^{\tilde{p}}\Lambda (ds,d\xi ,dv)\vert ^{p}, \end{aligned}$$

where \(\bar{\psi }(v):=\sum \limits _{k=1}^\infty \mathbb {1}_{I_k}(v)\psi (v-(k+\frac{1}{2}))\) and \(\Lambda \) is a Poisson point measure on \([0,T]\times \{0,1\}\times [1,\infty )\) with compensator

$$\begin{aligned} \widehat{\Lambda }(ds,d\xi ,dv)=\sum _{k=1}^\infty [\frac{\psi (v-(k+\frac{1}{2}))}{m(\psi )}\mathbb {1}_{I_k}(v)dv\times b(v,d\xi )]ds, \end{aligned}$$

with \(b(v,d\xi )\) the Bernoulli probability measure on \(\{0,1\}\) with parameter \(\varepsilon _km(\psi )\), if \(v\in I_k\). Then by compensating \(\Lambda \) and using Burkholder’s inequality (the same proof as for (5)),

$$\begin{aligned}&\mathbb {E} \left| \int _0^t\int _{[1,M)\times \{0,1\}}\bar{c}(v)\xi \vert (\ln {\bar{\psi }})^{(\tilde{l}+1)}(v)\vert ^{\tilde{p}}\Lambda (ds,d\xi ,dv)\right| ^{p}\nonumber \\&\quad \le C_{l,p,\tilde{p}}(T)\left[ \mathbb {E}\left( \int _0^t\int _{[1,M)\times \{0,1\}}\vert \bar{c}(v)\vert ^2\xi \vert (\ln {\bar{\psi }})^{(\tilde{l}+1)}(v)\vert ^{2\tilde{p}}\widehat{\Lambda }(ds,d\xi ,dv)\right) ^{\frac{p}{2}}\right. \nonumber \\&\qquad +\mathbb {E}\int _0^t\int _{[1,M)\times \{0,1\}}\vert \bar{c}(v)\vert ^p\xi \vert (\ln {\bar{\psi }})^{(\tilde{l}+1)}(v)\vert ^{p\tilde{p}}\widehat{\Lambda }(ds,d\xi ,dv)\nonumber \\&\qquad \left. +\mathbb {E}\left( \int _0^t\int _{[1,M)\times \{0,1\}}\bar{c}(v)\xi \vert (\ln {\bar{\psi }})^{(\tilde{l}+1)}(v)\vert ^{\tilde{p}}\widehat{\Lambda }(ds,d\xi ,dv)\right) ^{p}\right] . \end{aligned}$$
(67)

We notice that by (45),

$$\begin{aligned}&\int _0^t\int _{[1,M)\times \{0,1\}}\vert \bar{c}(v)\vert ^p\xi \vert (\ln {\bar{\psi }})^{(\tilde{l}+1)}(v)\vert ^{p\tilde{p}}\widehat{\Lambda }(ds,d\xi ,dv)\\&\quad =t\sum _{k=1}^{M-1}\int _{I_k}\varepsilon _km(\psi )\vert \bar{c}(v)\vert ^p\vert (\ln {{\psi _k}})^{(\tilde{l}+1)}(v)\vert ^{p\tilde{p}}\frac{\psi _k(v)}{m(\psi )}dv\\&\quad \le C_{\tilde{l},p,\tilde{p}}(T)\sum _{k=1}^{M-1}\varepsilon _k\int _{I_k}\vert \bar{c}(v)\vert ^pdv. \end{aligned}$$

Similar upper bounds hold for the two other terms in the right hand side of (67), so (67) is upper bounded by

$$\begin{aligned}&C_{l,p,\tilde{p}}(T)\left[ \left( \sum _{k=1}^{M-1}\varepsilon _k\int _{I_k}\vert \bar{c}(v)\vert ^2dv\right) ^{\frac{p}{2}}+\sum _{k=1}^{M-1}\varepsilon _k\int _{I_k}\vert \bar{c}(v)\vert ^pdv\right. \nonumber \\&\quad \left. +\left( \sum _{k=1}^{M-1}\varepsilon _k\int _{I_k}\vert \bar{c}(v)\vert dv\right) ^p\right] . \end{aligned}$$
(68)

If we assume Hypothesis 2.4(a), then we have \(\varepsilon _k=\varepsilon _*/(k+1)^{1-\alpha _0}\), with \(\alpha _0\) given in (7). So the above term is less than

$$\begin{aligned} C_{l,p,\tilde{p}}(T)\left[ \left( \int _{1}^\infty \frac{\vert \bar{c}(v)\vert ^2}{v^{1-\alpha _0}}dv\right) ^{\frac{p}{2}}+\int _{1}^\infty \frac{\vert \bar{c}(v)\vert ^p}{v^{1-\alpha _0}}dv+\left( \int _{1}^\infty \frac{\vert \bar{c}(v)\vert }{v^{1-\alpha _0}} dv\right) ^p\right] , \end{aligned}$$

which is upper bounded by a constant \(C_{l,p,\tilde{p}}(T)\) thanks to (7). On the other hand, if we assume Hypothesis 2.4(b), then \(\varepsilon _k=\varepsilon _*/(k+1)\). So (68) is upper bounded by a constant \(C_{l,p,\tilde{p}}(T)\) thanks to (9). \(\square \)
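The summability behind (68) can also be checked numerically: with \(\varepsilon _k=\varepsilon _*/(k+1)^{1-\alpha _0}\) and a polynomially decaying \(\bar{c}\), the band sums are dominated by a convergent integral. A sketch with hypothetical stand-ins \(\bar{c}(v)=v^{-2}\), \(\alpha _0=\frac{1}{2}\), \(\varepsilon _*=1\) (not the paper's data):

```python
def band_sum(p, alpha0, K, c_bar=lambda v: v ** -2.0, n=200):
    """sum_{k=1}^{K} eps_k * int_{I_k} |c_bar(v)|^p dv with I_k = [k, k+1)
    and eps_k = 1/(k+1)^(1-alpha0), via midpoint quadrature on each band."""
    total = 0.0
    for k in range(1, K + 1):
        eps_k = (k + 1.0) ** (alpha0 - 1.0)
        h = 1.0 / n
        total += eps_k * sum(abs(c_bar(k + (i + 0.5) * h)) ** p * h
                             for i in range(n))
    return total

# Partial sums stabilize as K grows, reflecting the finiteness of (68):
s_small, s_large = band_sum(2.0, 0.5, 50), band_sum(2.0, 0.5, 500)
```

The difference between the two partial sums is already negligible at \(K=50\), mirroring the integral bound used in the proof.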

5.1.3 Estimates of \(\Vert X_t^{n,M}\Vert _{L,l,p}\)

In this section, our aim is to prove the following lemma.

Lemma 5.5

Under Hypothesis 2.1 with \(q^*\ge 2\) and Hypothesis 2.4 (either 2.4(a) or 2.4(b)), for all \(p\ge 2\), \(0\le l\le q^*\), there exists a constant \(C_{l,p}(T)\) depending on l, p, x and T, such that

$$\begin{aligned} \mathrm{(a)}\quad \sup \limits _{n}\sup \limits _{M}\mathbb {E}\vert X_t^{n,M}\vert _{l}^p\le C_{l,p}(T), \end{aligned}$$
(69)

and for \(0\le l\le q^*-2\),

$$\begin{aligned} \mathrm{(b)}\quad \sup \limits _{n}\sup \limits _{M}\mathbb {E}\vert LX_t^{n,M}\vert _{l}^p\le C_{l,p}(T). \end{aligned}$$
(70)

Proof

In the following proof, \(C_{l,p}(T)\) will be a constant which depends on l, p, x and T, and which may change from line to line. The value \(q^*\ge 2\) is fixed throughout the proof.

(a) We prove (69) for \(0\le l\le q^*\) by induction on l.

Step 1 For \(l=0\), using Burkholder’s inequality, Hypothesis 2.1 and (55),

$$\begin{aligned} \mathbb {E}\vert X_t^{n,M}\vert ^p&\le C_{0,p}(T)\left[ x^p+\mathbb {E}\vert \int _0^tb_M(\tau _n(r),X_{\tau _n(r)}^{n,M})dr\vert ^p\right. \nonumber \\&\left. \quad +\mathbb {E}\vert \int _0^t\int _{\{z\ge M\}}\widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})W_\nu (dr,dz)\vert ^p \right. \nonumber \\&\left. \quad +\mathbb {E}\vert \int _0^t\int _{[1,M)}\widetilde{c}(\tau _n(r),z,X_{\tau _n(r)-}^{n,M})N_\nu (dr,dz)\vert ^p\right] \nonumber \\&\le C_{0,p}(T)\left[ 1+\mathbb {E}\int _0^t\vert \int _{\{z\ge M\}}\widetilde{c}(\tau _n(r),z,X_{\tau _n(r)}^{n,M})\nu (dz)\vert ^pdr\right. \nonumber \\&\left. \quad +\mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert \widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert ^2\nu (dz))^{\frac{p}{2}}dr\right. \nonumber \\&\left. \quad +\mathbb {E}\int _0^t(\int _{[1,M)}\vert \widetilde{c}(\tau _n(r),z,X_{\tau _n(r)-}^{n,M})\vert ^2\nu (dz))^{\frac{p}{2}}dr\right. \nonumber \\&\left. \quad +\mathbb {E}\int _0^t\int _{[1,M)}\vert \widetilde{c}(\tau _n(r),z,X_{\tau _n(r)-}^{n,M})\vert ^p\nu (dz)dr\right. \nonumber \\&\left. \quad +\mathbb {E}\int _0^t\vert \int _{[1,M)}\widetilde{c}(\tau _n(r),z,X_{\tau _n(r)-}^{n,M})\nu (dz)\vert ^pdr\right] \nonumber \\&\le C_{0,p}(T). \end{aligned}$$
(71)

Step 2 Now we assume that (69) holds for \(l-1\), with \(l\ge 1\), for every \(p\ge 2\), and we prove that it holds for l for every \(p\ge 2\). We write \(\mathbb {E}\vert X_t^{n,M}\vert _{l}^p\le C_{l,p}(T)(A_1+A_2+A_3)\), with

$$\begin{aligned} A_1= & {} \mathbb {E}\left| \int _0^tb_M(\tau _n(r),X_{\tau _n(r)}^{n,M})dr\right| _{l}^p, \\ A_2= & {} \mathbb {E}\left| \int _0^t\int _{\{z\ge M\}}\widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})W_\nu (dr,dz)\right| _{l}^p, \\ A_3= & {} \mathbb {E}\left| \int _0^t\int _{[1,M)}\widetilde{c}(\tau _n(r),z,X_{\tau _n(r)-}^{n,M})N_\nu (dr,dz)\right| _{l}^p. \end{aligned}$$

We notice that by Hypothesis 2.1, \(\Vert b_M\Vert _{l,\infty }\le \bar{c}_1\). Then, using Lemma 5.3a) and the induction hypothesis, we get

$$\begin{aligned} A_1\le & {} {C}_{l,p}(T)\mathbb {E}\int _0^t\vert b_M(\tau _n(r),X_{\tau _n(r)}^{n,M})\vert _{l}^pdr\nonumber \\\le & {} {C}_{l,p}(T)\left[ (\bar{c}_1)^p+\mathbb {E}\int _0^t\vert \partial _x b_M(\tau _n(r),X_{\tau _n(r)}^{n,M})\vert ^p\vert X_{\tau _n(r)}^{n,M}\vert _{1,l}^{p}dr\right. \nonumber \\&\left. \quad +\mathbb {E}\int _0^t\sup \limits _{2\le \vert \beta \vert \le l}\vert \partial _x^\beta b_M(\tau _n(r),X_{\tau _n(r)}^{n,M})\vert ^p\vert X_{\tau _n(r)}^{n,M}\vert _{1,l-1}^{lp}dr\right] \nonumber \\\le & {} {C}_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert X_{\tau _n(r)}^{n,M}\vert _{l}^{p}dr\right] . \end{aligned}$$
(72)

Next, we estimate \(A_2\). By Hypothesis 2.1, for every n, \(\Vert \widetilde{c}(\tau _n(r),\gamma _n(z),\cdot )\Vert _{l,\infty }\le \vert \bar{c}(\gamma _n(z))\vert \). Then, using Lemma 5.2a), Lemma 5.3a), (53) and the induction hypothesis, we get

$$\begin{aligned} A_2\le & {} {C}_{l,p}(T)\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert \widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert _{l}^2\nu (dz)\right) ^{\frac{p}{2}}dr\nonumber \\\le & {} {C}_{l,p}(T)\Big [\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert \partial _{x}\widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert ^2\vert X_{\tau _n(r)}^{n,M}\vert _{1,l}^2\nu (dz)\right) ^{\frac{p}{2}}dr\nonumber \\&\quad +\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\sup \limits _{2\le \vert \beta \vert \le l}\vert \partial _{x}^\beta \widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert ^2\vert X_{\tau _n(r)}^{n,M}\vert _{1,l-1}^{2l}\nu (dz)\right) ^{\frac{p}{2}}dr\nonumber \\&\quad +\mathbb {E}\int _0^t\left( \int _{\{z\ge M\}}\vert \widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert ^2\nu (dz)\right) ^{\frac{p}{2}}dr\Big ]\nonumber \\\le & {} {C}_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert X_{\tau _n(r)}^{n,M}\vert _{l}^{p}dr\right] . \end{aligned}$$
(73)

Finally we estimate \(A_3\). We notice that \(D^{Z}_{(r,m)}Z_i^k=\xi _i^k\delta _{rk}\delta _{mi}\), \(D^W_{(s,z)}Z_i^k=0,\) and, for \(l\ge 2\), \(D^{Z,W,l}_{(r_1,m_1)\cdots (r_l,m_l),(s_1,z_1)\cdots (s_l,z_l)}Z_i^k=0\). So we have \(\vert Z_i^k\vert _{1,l}^p= \vert \xi _i^k\vert ^p\le 1\). By Lemma 5.3a) for \(d=2\) and Hypothesis 2.1, for any \(k,i\in \mathbb {N}\),

$$\begin{aligned} \vert \widetilde{c}(\tau _n(T_i^k),Z_i^k,X^{n,M}_{\tau _n(T_i^k)-})\vert _{l}&\le \vert \bar{c}(Z_i^k)\vert \\&\quad +(\vert \partial _{z}\widetilde{c}(\tau _n(T_i^k),Z_i^k,X_{\tau _n(T_i^k)-}^{n,M})\vert \\&\quad +\vert \partial _{x}\widetilde{c}(\tau _n(T_i^k),Z_i^k,X_{\tau _n(T_i^k)-}^{n,M})\vert )(\vert Z_i^k\vert _{1,l}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{1,l})\\&\quad +{C}_{l,p}(T)\sup \limits _{2\le \vert \beta _1+\beta _2\vert \le l}(\vert \partial _{z}^{\beta _2}\partial _{x}^{\beta _1}\widetilde{c}(\tau _n(T_i^k),Z_i^k,X_{\tau _n(T_i^k)-}^{n,M})\vert )\\&\qquad \times (\vert Z_i^k\vert _{1,l-1}^{l}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{1,l-1}^{l})\\&\le {C}_{l,p}(T)\bar{c}(Z_i^k)(1+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l-1}^{l}). \end{aligned}$$

It follows that

$$\begin{aligned} A_3\le & {} \mathbb {E}\left( \sum _{k=1}^{M-1}\sum _{i=1}^{J^k_t}\vert \widetilde{c}(\tau _n(T_i^k),Z_i^k,X^{n,M}_{\tau _n(T_i^k)-})\vert _{l}\right) ^p\nonumber \\\le & {} {C}_{l,p}(T)\mathbb {E}\left( \sum _{k=1}^{M-1}\sum _{i=1}^{J^k_t}\bar{c}(Z_i^k)(1+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l-1}^{l})\right) ^p \nonumber \\= & {} {C}_{l,p}(T)\mathbb {E}\left( \int _0^t\int _{[1,M)}{\bar{c}}(z)(1+\vert X_{\tau _n(r)-}^{n,M}\vert _{l}+\vert X_{\tau _n(r)-}^{n,M}\vert _{l-1}^{l})N_\nu (dr,dz)\right) ^p\nonumber \\\le & {} {C}_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert X_{\tau _n(r)}^{n,M}\vert _{l}^pdr\right] , \end{aligned}$$
(74)

where the last inequality is obtained by using (5) and the induction hypothesis. Then combining (72),(73) and (74),

$$\begin{aligned} \mathbb {E}\vert X_t^{n,M}\vert _{l}^p\le {C}_{l,p}(T)[1+\int _0^t\mathbb {E}\vert X_{\tau _n(r)}^{n,M}\vert _{l}^pdr]. \end{aligned}$$
(75)

So \(\mathbb {E}\vert X_{\tau _n(t)}^{n,M}\vert _{l}^p\le {C}_{l,p}(T)[1+\int _0^{\tau _n(t)}\mathbb {E}\vert X_{\tau _n(r)}^{n,M}\vert _{l}^pdr]\le {C}_{l,p}(T)[1+\int _0^{t}\mathbb {E}\vert X_{\tau _n(r)}^{n,M}\vert _{l}^pdr]\). Denoting temporarily \(g(t)=\mathbb {E}\vert X_{\tau _n(t)}^{n,M}\vert _{l}^p\), we have \(g(t)\le {C}_{l,p}(T)[1+\int _0^{t}g(r)dr]\). By Gronwall’s lemma, \(g(t)\le {C}_{l,p}(T)e^{T{C}_{l,p}(T)}\), which means that

$$\begin{aligned} \mathbb {E}\vert X_{\tau _n(t)}^{n,M}\vert _{l}^p\le {C}_{l,p}(T)e^{T{C}_{l,p}(T)}. \end{aligned}$$
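For the reader’s convenience, here is the form of Gronwall’s lemma used above (and again for (83) below); this is a standard statement, recalled as a side remark:

```latex
% Gronwall's lemma, integral form: if g:[0,T]\to\mathbb{R}_+ is bounded,
% measurable and satisfies, for some constant C>0,
g(t)\le C\Big[1+\int _0^{t}g(r)\,dr\Big],\qquad t\in [0,T],
% then
g(t)\le C e^{Ct}\le C e^{CT},\qquad t\in [0,T].
```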

Substituting into (75), we conclude that

$$\begin{aligned} \sup _{n,M}\mathbb {E}\vert X_t^{n,M}\vert _{l}^p\le C_{l,p}(T). \end{aligned}$$
(76)

To summarize the induction argument: the bound on the operator D, uniform in n and M, holds for \(l=0\) thanks to Hypothesis 2.1, and it propagates to larger l thanks to Lemma 5.3a).

(b) Now we prove (70) for \(0\le l\le q^*-2\), by induction on l.

Step 1 One has to check that (70) holds for \(l=0\). The proof is analogous to that in the following Step 2, but simpler. It is done by using Lemma 5.3c), (60), Burkholder’s inequality, Hypothesis 2.1, 2.4, (53), (69) and Gronwall’s lemma. So we skip it.

Step 2 Now we assume that (70) holds for \(l-1\), with \(l\ge 1\) and for any \(p\ge 2\), and we prove that it holds for l and for any \(p\ge 2\). We write \(\mathbb {E}\vert LX_t^{n,M}\vert _{l}^p\le C_{l,p}(T)(B_1+B_2+B_3)\), with

$$\begin{aligned} B_1= & {} \mathbb {E}\left| L\int _0^tb_M(\tau _n(r),X_{\tau _n(r)}^{n,M})dr\right| _{l}^p, \\ B_2= & {} \mathbb {E}\left| L\int _0^t\int _{\{z\ge M\}}\widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})W_\nu (dr,dz)\right| _{l}^p, \\ B_3= & {} \mathbb {E}\left| L\int _0^t\int _{[1,M)}\widetilde{c}(\tau _n(r),z,X_{\tau _n(r)-}^{n,M})N_\nu (dr,dz)\right| _{l}^p. \end{aligned}$$

Using Lemma 5.3b), Hypothesis 2.1, the induction hypothesis and (69), we get

$$\begin{aligned} B_1\le & {} {C}_{l,p}(T)\mathbb {E}\int _0^t\vert Lb_M(\tau _n(r),X_{\tau _n(r)}^{n,M})\vert _{l}^pdr \nonumber \\\le & {} {C}_{l,p}(T)\left[ \mathbb {E}\int _0^t\vert \partial _x b_M(\tau _n(r),X_{\tau _n(r)}^{n,M})\vert ^p\vert LX_{\tau _n(r)}^{n,M}\vert _{l}^{p}dr \right. \nonumber \\&\left. \quad +\mathbb {E}\int _0^t\sup \limits _{2\le \vert \beta \vert \le l+2}\vert \partial _x^\beta b_M(\tau _n(r),X_{\tau _n(r)}^{n,M})\vert ^p\right. \nonumber \\&\left. \quad \times (1+\vert X_{\tau _n(r)}^{n,M}\vert _{l+1}^{(l+2)p})(1+\vert LX_{\tau _n(r)}^{n,M}\vert _{l-1}^{p})dr\right] \nonumber \\\le & {} {C}_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert LX_{\tau _n(r)}^{n,M}\vert _{l}^{p}dr\right] . \end{aligned}$$
(77)

Then by Lemma 5.2b), we get

$$\begin{aligned} B_2\le & {} C_{l,p}(T)\left[ \mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert L\widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert _{l}^2\nu (dz))^{\frac{p}{2}}dr\right. \\&\left. \quad +\mathbb {E}\int _0^t(\int _{\{z\ge M\}}\vert \widetilde{c}(\tau _n(r),\gamma _n(z),X_{\tau _n(r)}^{n,M})\vert _{l}^2\nu (dz))^{\frac{p}{2}}dr\right] \\:= & {} C_{l,p}(T)[B_{2,1}+B_{2,2}]. \end{aligned}$$

As a consequence of Lemma 5.3b) applied to \(B_{2,1}\) and of Lemma 5.3a) applied to \(B_{2,2}\), together with Hypothesis 2.1, (53), (69) and the induction hypothesis,

$$\begin{aligned} B_2\le C_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert LX_{\tau _n(r)}^{n,M}\vert _{l}^{p}dr\right] . \end{aligned}$$
(78)

Now we estimate \(B_3\). By Lemma 5.3b) for \(d=2\) and Hypothesis 2.1, for any \(k,i\in \mathbb {N}\),

$$\begin{aligned}&\vert L\widetilde{c}(\tau _n(T_i^k),Z_i^k,X^{n,M}_{\tau _n(T_i^k)-})\vert _{l}\\&\quad \le (\vert \partial _{z}\widetilde{c}(\tau _n(T_i^k),Z_i^k,X_{\tau _n(T_i^k)-}^{n,M})\vert \\&\qquad +\vert \partial _{x}\widetilde{c}(\tau _n(T_i^k),Z_i^k,X_{\tau _n(T_i^k)-}^{n,M})\vert )(\vert LZ_i^k\vert _{l}+\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l})\\&\qquad +C_{l,p}(T)\sup \limits _{2\le \vert \beta _1+\beta _2\vert \le l+2}(\vert \partial _{z}^{\beta _2}\partial _{x}^{\beta _1}\widetilde{c}(\tau _n(T_i^k),Z_i^k,X_{\tau _n(T_i^k)-}^{n,M})\vert )\\&\qquad \times (1+\vert Z_i^k\vert _{l+1}^{l+2}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2})(1+\vert LZ_i^k\vert _{l-1}+\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l-1})\\&\quad \le C_{l,p}(T){\bar{c}}(Z_i^k)(1+\vert LZ_i^k\vert _{l}+\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l}\\&\qquad +\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}\times (\vert LZ_i^k\vert _{l-1}+\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l-1})). \end{aligned}$$

Then

$$\begin{aligned} B_3&\le \mathbb {E}\left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}\vert L\widetilde{c}(\tau _n(T_i^k),Z_i^k,X^{n,M}_{\tau _n(T_i^k)-})\vert _{l}\right) ^p \\&\le C_{l,p}(T)\mathbb {E}\left| \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)(1+\vert LZ_i^k\vert _{l}+\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}\right. \\&\left. \quad +\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}\times (\vert LZ_i^k\vert _{l-1}+\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l-1}))\right| ^p \\&\le C_{l,p}(T)(B_{3,1}+B_{3,2}+B_{3,3}), \end{aligned}$$

where

$$\begin{aligned} B_{3,1}= & {} \mathbb {E}\left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)\vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l}\right) ^p, \\ B_{3,2}= & {} \mathbb {E}\left| \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)(\vert LZ_i^k\vert _{l}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}\times \vert LZ_i^k\vert _{l-1})\right| ^p, \\ B_{3,3}= & {} \mathbb {E}\left| \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)(1+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}+\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}\times \vert LX_{\tau _n(T_i^k)-}^{n,M}\vert _{l-1})\right| ^p. \end{aligned}$$

By (5),

$$\begin{aligned} B_{3,1}= & {} \mathbb {E}\vert \int _0^t\int _{[1,M)}{\bar{c}}(z)\vert LX_{\tau _n(r)-}^{n,M}\vert _{l}N_\nu (dr,dz)\vert ^p \nonumber \\\le & {} C_{l,p}(T)\int _0^t\mathbb {E}\vert LX_{\tau _n(r)-}^{n,M}\vert _{l}^pdr. \end{aligned}$$
(79)

Using Schwarz’s inequality, (5) and (69), we have

$$\begin{aligned}&\mathbb {E}\left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{l+2}\times \vert LZ_i^k\vert _{l-1}\right) ^p\\&\le \left[ \mathbb {E}(\sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)\vert X_{\tau _n(T_i^k)-}^{n,M}\vert _{l+1}^{2(l+2)})^p\right] ^{\frac{1}{2}}\times \left[ \mathbb {E}(\sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)\vert LZ_i^k\vert _{l-1}^{2})^p\right] ^{\frac{1}{2}}\\&= \left[ \mathbb {E}\vert \int _0^t\int _{[1,M)}{\bar{c}}(z)\vert X_{\tau _n(r)-}^{n,M}\vert _{l+1}^{2(l+2)}N_\nu (dr,dz)\vert ^p\right] ^{\frac{1}{2}}\\&\quad \times \left[ \mathbb {E}(\sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)\vert LZ_i^k\vert _{l-1}^{2})^p\right] ^{\frac{1}{2}}\\&\le C_{l,p}(T)\left[ \mathbb {E}(\sum _{k=1}^{M-1}\sum _{i=1}^{J_t^k}{\bar{c}}(Z_i^k)\vert LZ_i^k\vert _{l-1}^{2})^p\right] ^{\frac{1}{2}}. \end{aligned}$$

Then applying Lemma 5.4, we get

$$\begin{aligned} B_{3,2}\le {C}_{l,p}(T). \end{aligned}$$
(80)

By (5), (69) and the induction hypothesis, we have

$$\begin{aligned} B_{3,3}= & {} \mathbb {E}\left| \int _0^t\int _{[1,M)}{\bar{c}}(z)(1+\vert X_{\tau _n(r)-}^{n,M}\vert _{l+1}^{l+2}+\vert X_{\tau _n(r)-}^{n,M}\vert _{l+1}^{l+2}\times \vert LX_{\tau _n(r)-}^{n,M}\vert _{l-1})N_\nu (dr,dz)\right| ^p\nonumber \\\le & {} {C}_{l,p}(T). \end{aligned}$$
(81)

So by (79),(80) and (81),

$$\begin{aligned} B_3\le {C}_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert LX_{\tau _n(r)-}^{n,M}\vert _{l}^pdr\right] . \end{aligned}$$
(82)

Then combining (77),(78) and (82),

$$\begin{aligned} \mathbb {E}\vert LX_t^{n,M}\vert _{l}^p\le {C}_{l,p}(T)\left[ 1+\int _0^t\mathbb {E}\vert LX_{\tau _n(r)}^{n,M}\vert _{l}^pdr\right] . \end{aligned}$$
(83)

Applying Gronwall’s lemma to (83) as we did for (75), we conclude that

$$\begin{aligned} \sup _{n,M}\mathbb {E}\vert LX_t^{n,M}\vert _{l}^p\le C_{l,p}(T). \end{aligned}$$
(84)

To summarize the induction argument: the bound on the operator L, uniform in n and M, holds for \(l=0\) thanks to Hypotheses 2.1 and 2.4 and Lemma 5.3c), and it propagates to larger l thanks to Lemma 5.3b). \(\square \)

Proof of Lemma 4.1

By Lemmas 5.1 and 5.5, as a consequence of Lemma 3.2, we have \(X_t^M\in \mathcal {D}_{l,p}\) and \(\sup \limits _M\Vert X_t^M\Vert _{L,l,p}\le C_{l,p}(T)\). \(\square \)

5.2 Proof of Lemma 4.2

In the following, we turn to the non-degeneracy of \(X_t^M\). We consider the approximate Eq. (49)

$$\begin{aligned} X_{t}^{M }&=x+\int _{0}^{t}\int _{[1,M)}\widetilde{c}(r,z,X_{r-}^{M })N_\nu (dr,dz)+\int _{0}^{t}b _{M}(r,X_{r}^{M })dr\\&\quad +\int _{0}^{t}\int _{\{z\ge M\}}\widetilde{c} (r,z,X_{r}^{M })W_\nu (dr,dz). \end{aligned}$$

We can calculate the Malliavin derivatives of the Euler scheme and then, by passing to the limit, we obtain

$$\begin{aligned} D^{Z}_{(k,i)}X_{t}^{M}&=\mathbb {1}_{\{k\le M-1\}}\mathbb {1}_{\{i\le J_t^k\}}\xi ^{k}_{i}\partial _{z}\widetilde{c}(T_i^k,Z_i^k,X_{T_i^k-}^M)\nonumber \\&\quad +\int _{T_{i}^{k}}^{t}\int _{[1,M)}\partial _{x}\widetilde{c}(r,z,X^M_{r-})D^{Z}_{(k,i)}X_{r-}^{M}N_{\nu }(dr,dz) \nonumber \\&\quad +\int _{T_{i}^{k}}^{t}\partial _xb _{M}(r,X_{r}^{M })D^{Z}_{(k,i)}X_{r}^{M}dr\nonumber \\&\quad +\int _{T_{i}^{k}}^{t}\int _{\{z\ge M\}}\partial _x\widetilde{c}(r,z,X_{r}^{M })D^{Z}_{(k,i)}X_{r}^{M}W_\nu (dr,dz). \end{aligned}$$
(85)
$$\begin{aligned} D^{W}_{(s,z_0)}X_{t}^{M}&=\int _{s}^{t}\int _{[1,M)}\partial _{x}\widetilde{c}(r,z,X^M_{r-})D^{W}_{(s,z_0)}X_{r-}^{M}N_{\nu }(dr,dz)\nonumber \\&\quad +\int _{s}^{t}\partial _xb _{M}(r,X_{r}^{M })D^{W}_{(s,z_0)}X_{r}^{M}dr \nonumber \\&\quad +\mathbb {1}_{\{s\le t\}}\mathbb {1}_{\{ z_0\ge M\}}\widetilde{c}(s,z_0,X_{s}^{M })\nonumber \\&\quad +\int _{s}^{t}\int _{\{z\ge M\}}\partial _x\widetilde{c}(r,z,X_{r}^{M })D^{W}_{(s,z_0)}X_{r}^{M}W_\nu (dr,dz). \end{aligned}$$
(86)

We now obtain some explicit expressions for the Malliavin derivatives. We consider the tangent flow \((Y^M_t)_{t\in [0,T]}\), which is the solution of the linear equation

$$\begin{aligned} Y_{t}^{M}&=1+\int _{0}^{t}\int _{[1,M)}\partial _{x}\widetilde{c}(r,z,X^M_{r-})Y_{r-}^{M}N_{\nu }(dr,dz)+\int _{0}^{t}\partial _xb _{M}(r,X_{r}^{M })Y_{r}^{M}dr\\&\quad +\int _{0}^{t}\int _{\{z\ge M\}}\partial _x\widetilde{c}(r,z,X_{r}^{M })Y_{r}^{M}W_\nu (dr,dz). \end{aligned}$$

And using Itô’s formula, \(\overline{Y}_{t}^{M}=1/Y^M_t\) satisfies the equation

$$\begin{aligned} \overline{Y}_{t}^{M}&=1-\int _{0}^{t}\int _{[1,M)}\partial _{x}\widetilde{c}(r,z,X^M_{r-})(1+\partial _{x}\widetilde{c}(r,z,X^M_{r-}))^{-1}\overline{Y}_{r-}^{M}N_{\nu }(dr,dz)\\&\quad -\int _{0}^{t}\partial _xb _{M}(r,X_{r}^{M })\overline{Y}_{r}^{M}dr\\&\quad -\int _{0}^{t}\int _{\{z\ge M\}}\partial _x\widetilde{c}(r,z,X_{r}^{M })\overline{Y}_{r}^{M}W_\nu (dr,dz)\\&\quad +\int _{0}^{t}\int _{\{z\ge M\}}\vert \partial _x\widetilde{c}(r,z,X_{r}^{M })\vert ^2\overline{Y}_{r}^{M}\nu (dz)dr . \end{aligned}$$

Applying Hypothesis 2.1 with \(q^*\ge 1\) and Hypothesis 2.2, one also has, with \(K_p\) a constant depending only on p (the proof is standard),

$$\begin{aligned} \mathbb {E}(\sup _{s\le t}(\left| Y_{s}^{M}\right| ^{p}+\left| \overline{Y} _{s}^{M}\right| ^{p}))\le K_p<\infty . \end{aligned}$$
(87)

Remark

Due to (4), we have

$$\begin{aligned} \max \Big \{\int _{[1,M)}\vert \bar{c}(z)\vert ^p\nu (dz),\int _{[M,\infty )}\vert \bar{c}(z)\vert ^p\nu (dz)\Big \} \le \int _{[1,\infty )}\vert \bar{c}(z)\vert ^p\nu (dz)= \bar{c}_p, \end{aligned}$$

so the constant in (87) is uniform with respect to M.

Then, using the uniqueness of solutions to Eqs. (85) and (86), one obtains

$$\begin{aligned} D^{Z}_{(k,i)}X_{t}^{M}= & {} \mathbb {1}_{\{k\le M-1\}}\mathbb {1}_{\{i\le J_t^k\}}\xi ^{k}_{i}Y_{t}^{M}\overline{Y}^{M}_{T^{k}_{i}-}\partial _{z}\widetilde{c}(T^{k}_{i},Z^{k}_{i},X^M_{T^{k}_{i}-}),\nonumber \\ D^{W}_{(s,z_0)}X_{t}^{M}= & {} \mathbb {1}_{\{s\le t\}}\mathbb {1}_{\{ z_0\ge M\}}Y_{t}^{M}\overline{Y}^{M}_{s}\widetilde{c}(s,z_0,X_{s}^{M}). \end{aligned}$$
(88)

And the Malliavin covariance of \( X_{t}^{M}\) is

$$\begin{aligned} \sigma _{X_{t}^{M}}&=\left\langle DX_{t}^{M},DX_{t}^{M}\right\rangle _{\mathcal {H}}=\sum _{k=1}^{M-1 }\sum _{i=1}^{J_{t}^{k}}\vert D^{Z}_{(k,i)}X_{t}^{M}\vert ^2\nonumber \\&\quad +\int _0^T\int _{\{z\ge M\}}\vert D^{W}_{(s,z)}X_{t}^{M}\vert ^2\nu (dz)ds . \end{aligned}$$
(89)

In the following, we denote \(\lambda _{t}^{M}= \sigma _{X_t^M}\). So the aim is to prove that for every \(p\ge 1\),

$$\begin{aligned} \mathbb {E}(\vert \lambda _{t}^{M}\vert ^{-p})\le C_p. \end{aligned}$$
(90)

We proceed in 5 steps.

Step 1 We notice that by (88) and (89)

$$\begin{aligned} \lambda _{t}^{M}&=\sum _{k=1}^{M-1}\sum _{i=1}^{J_{t}^{k}}\xi _i^k \vert Y_{t}^{M}\vert ^2\vert \overline{Y} _{T_{i}^{k}-}^{M}\vert ^2\vert \partial _{z}\widetilde{c}(T_{i}^{k},Z_{i}^{k},X_{T_{i}^{k}-}^{M })\vert ^{2}\\&\quad +\vert Y_{t}^M\vert ^{2}\int _{0}^{t}\vert \overline{Y}_{s}^{M}\vert ^2\int _{\{z\ge M\}}\vert \widetilde{c}(s,z,X_{s}^{M })\vert ^2\nu (dz)ds. \end{aligned}$$

We recall the ellipticity hypothesis (Hypothesis 2.3): There exists a function \(\underline{c}(z)\) such that

$$\begin{aligned} \left| \partial _{z}\widetilde{c}(s,z,x)\right| ^{2}\ge \underline{c}(z)\quad and\quad \left| \widetilde{c}(s,z,x)\right| ^{2}\ge \underline{c}(z). \end{aligned}$$

In particular

$$\begin{aligned} \int _{\{z\ge M\}}\vert \widetilde{c}(s,z,x)\vert ^2\nu (dz)\ge \int _{\{z \ge M\}}\underline{c}(z)\nu (dz), \end{aligned}$$

so that

$$\begin{aligned} \lambda _{t}^{M}\ge Q_{t}^{-2}\times \!\left( \sum _{k=1}^{M-1}\sum _{i=1}^{J_{t}^{k}} \xi _i^k\underline{c}(Z_{i}^{k})+t\int _{\{z \ge M\}} \underline{c}(z)\nu (dz)\right) \! with\quad Q_{t}=\sup _{s\le t}\vert Y_{s}^M \overline{Y}_{t}^M\vert . \end{aligned}$$
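The lower bound above only uses that \(\overline{Y}^{M}=1/Y^{M}\), so that each prefactor appearing in \(\lambda _t^M\) is controlled as follows (a one-line verification):

```latex
\vert Y_{t}^{M}\overline{Y}_{s}^{M}\vert
=\vert Y_{s}^{M}\overline{Y}_{t}^{M}\vert ^{-1}
\ge \Big(\sup _{r\le t}\vert Y_{r}^{M}\overline{Y}_{t}^{M}\vert \Big)^{-1},
\qquad 0\le s\le t,
```

applied with \(s=T_i^k-\) for the jump terms and integrated in s for the Gaussian term.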

We denote

$$\begin{aligned} \rho _{t}^{M}=\sum _{k=1}^{M-1}\sum _{i=1}^{J_{t}^{k}}\xi _i^k\underline{c}(Z_{i}^{k}),\quad \bar{\rho } _{t}^{M}=\sum _{k=M}^{\infty }\sum _{i=1}^{J_{t}^{k}}\xi _i^k\underline{c}(Z_{i}^{k}),\quad \alpha ^{M}=\int _{\{z\ge M\}}\underline{c}(z)\nu (dz). \end{aligned}$$

By (87), \((\mathbb {E}\sup \limits _{s\le t}\left| Y_{s}^M\overline{Y}_{t}^M\right| ^{4p})^{1/2}\le C<\infty ,\) so that

$$\begin{aligned} \mathbb {E}(\vert \lambda _t^M\vert ^{-p})\le C(\mathbb {E}(\vert \rho _{t}^{M}+t\alpha ^{M}\vert ^{-2p}))^{\frac{1}{2}}. \end{aligned}$$
(91)
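Spelled out, (91) is the Cauchy–Schwarz step: from \(\lambda _t^M\ge Q_t^{-2}(\rho _t^M+t\alpha ^M)\),

```latex
\mathbb {E}(\vert \lambda _t^M\vert ^{-p})
\le \mathbb {E}\big[Q_t^{2p}(\rho _t^M+t\alpha ^M)^{-p}\big]
\le \big(\mathbb {E}\,Q_t^{4p}\big)^{\frac{1}{2}}
\big(\mathbb {E}(\rho _t^M+t\alpha ^M)^{-2p}\big)^{\frac{1}{2}},
```

and the first factor is bounded by (87).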

Step 2 Let \(\Gamma (p)=\int _0^\infty s^{p-1}e^{-s}ds\). By a change of variables, we have the equality

$$\begin{aligned} \frac{1}{(\rho _{t}^{M}+t\alpha ^{M})^{p}}=\frac{1}{\Gamma (p)}\int _{0}^{\infty }s^{p-1}e^{-s(\rho _{t}^{M}+t\alpha ^{M})}ds \end{aligned}$$

which, by taking expectation, gives

$$\begin{aligned} \mathbb {E}\left( \frac{1}{(\rho _{t}^{M}+t\alpha ^{M})^{p}}\right) =\frac{1}{\Gamma (p)}\int _{0}^{\infty }s^{p-1}\mathbb {E}(e^{-s(\rho _{t}^{M}+t\alpha ^{M})})ds. \end{aligned}$$
(92)
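Identity (92) is the change of variables \(u=s(\rho _{t}^{M}+t\alpha ^{M})\) in \(\Gamma (p)=\int _0^\infty u^{p-1}e^{-u}du\). As a purely numerical sanity check (a sketch, not part of the proof; the function name and the midpoint discretization are ours), one may verify the deterministic identity \(a^{-p}=\frac{1}{\Gamma (p)}\int _0^\infty s^{p-1}e^{-sa}ds\) for a fixed \(a>0\):

```python
import math

def inv_power_via_gamma(a: float, p: int, s_max: float = 200.0, n: int = 200_000) -> float:
    """Approximate a**(-p) via (1/Gamma(p)) * int_0^s_max s^(p-1) e^(-s*a) ds.

    Midpoint rule on [0, s_max]; the tail beyond s_max is negligible
    once s_max * a is large.
    """
    h = s_max / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h  # midpoint of the i-th subinterval
        total += s ** (p - 1) * math.exp(-s * a)
    return total * h / math.gamma(p)

# For a = 2, p = 3 the identity gives a**(-p) = 1/8 = 0.125.
```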

Step 3 (splitting). In order to compute \(\mathbb {E}(e^{-s(\rho _{t}^{M}+t\alpha ^{M})})\) we have to interpret \(\rho _{t}^M\) in terms of Poisson measures. We recall that we suppose the “splitting hypothesis” (40):

$$\begin{aligned} \mathbb {1}_{I_k}(z)\frac{\nu (dz)}{m_k}\ge \mathbb {1}_{I_k}(z)\varepsilon _{k}dz, \end{aligned}$$

with \(I_k=[k,k+1),\ m_k=\nu (I_k)\). We also have the function \(\psi \) and \(m(\psi )=\int _{\mathbb {R}} \psi (t)dt.\) And we use the basic decomposition

$$\begin{aligned} Z_{i}^{k}= \xi _{i}^{k}V_{i}^{k}+(1-\xi _{i}^{k})U_{i}^{k} \end{aligned}$$

where \(V_{i}^{k},U_{i}^{k},\xi _{i}^{k}, k,i\in \mathbb {N}\) are some independent random variables with laws given in (47).
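For the reader’s convenience, the laws referred to in (47) can be read off the compensator \(\widehat{M}_k\) below; explicitly (our transcription):

```latex
\mathbb {P}(\xi _i^k=1)=\varepsilon _k m(\psi ),\qquad
\mathbb {P}(V_i^k\in dv)=\mathbb {1}_{I_k}(v)\frac{1}{m(\psi )}
\psi \Big(v-\big(k+\tfrac{1}{2}\big)\Big)dv,
\\
\mathbb {P}(U_i^k\in du)=\frac{1}{1-\varepsilon _k m(\psi )}\mathbb {1}_{I_k}(u)
\Big(\mathbb {P}(Z_1^k\in du)-\varepsilon _k\psi \big(u-(k+\tfrac{1}{2})\big)du\Big).
```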

For every k we consider a Poisson point measure \(N_{k}(ds,d\xi ,dv,du)\) with \(\xi \in \{0,1\},v,u\in [1,\infty ),s\in [0,T]\) with compensator

$$\begin{aligned} \widehat{N_{k}}(ds,d\xi ,dv,du)= & {} \widehat{M}_{k}(d\xi ,dv,du)\times ds \\ with\quad \widehat{M}_{k}(d\xi ,dv,du)= & {} b_k(d\xi )\times \mathbb {1}_{I_k}(v)\frac{1}{m(\psi )}\psi (v-(k+\frac{1}{2}))dv \\&\times \frac{1}{1-\varepsilon _{k}m(\psi )}\mathbb {1}_{I_k}(u)(\mathbb {P}(Z_{1}^{k}\in du) \\&-\varepsilon _{k}\psi (u-(k+\frac{1}{2}))du). \end{aligned}$$

Here \(b_k(d\xi )\) is the Bernoulli law of parameter \(\varepsilon _{k}m(\psi )\). The intervals \(I_k,k\in \mathbb {N}\) are disjoint so the Poisson point measures \(N_{k},k=1,\ldots ,M-1\) are independent. Then

$$\begin{aligned} \sum _{i=1}^{J_{t}^{k}}\xi _{i}^{k}{\underline{c}}(Z_{i}^{k})&= \sum _{i=1}^{J_{t}^{k}}\xi _{i}^{k}{\underline{c}}(\xi _{i}^{k}V_{i}^{k}+(1-\xi _{i}^{k})U_{i}^{k})\\&=\int _{0}^{t}\int _{\{0,1\}}\int _{[1,\infty )^2} \xi {\underline{c}}(\xi v+(1-\xi )u)N_{k}(ds,d\xi ,dv,du). \end{aligned}$$

In order to get compact notation, we put together all the measures \( N_{k},k\le M-1.\) Since they are independent we get a new Poisson point measure that we denote by \(\Theta .\) And we have

$$\begin{aligned} \rho _{t}^M= & {} \sum _{k=1}^{M-1}\sum _{i=1}^{J_{t}^{k}}\xi _{i}^{k} {\underline{c}}(Z_{i}^{k})=\int _{0}^{t}\int _{\{0,1\}}\int _{[1,\infty )^2} \xi {\underline{c}}(\xi v+(1-\xi )u)\Theta (ds,d\xi ,dv,du). \end{aligned}$$

Step 4 Using Itô’s formula,

$$\begin{aligned} \mathbb {E}(e^{-s\rho _{t}^M})= & {} 1+\mathbb {E}\int _{0}^{t}\int _{\{0,1\}}\int _{[1,\infty )^2} (e^{-s(\rho _{r-}^M+\xi {\underline{c}} (\xi v+(1-\xi )u))}-e^{-s\rho _{r-}^M})\widehat{\Theta }(dr,d\xi ,dv,du) \\= & {} 1-\int _{0}^{t}\mathbb {E}(e^{-s\rho _{r-}^M})dr\int _{\{0,1\}}\int _{[1,\infty )^2} (1-e^{-s\xi {\underline{c}}(\xi v+(1-\xi )u)})\sum _{k=1}^{M-1 }\widehat{M}_{k}(d\xi ,dv,du). \end{aligned}$$

Solving the above equation we obtain

$$\begin{aligned} \mathbb {E}(e^{-s\rho _{t}^M})= & {} \exp \left( -t\sum _{k=1}^{M-1 }\int _{\{0,1\}}\int _{[1,\infty )^2} (1-e^{-s\xi {\underline{c}}(\xi v+(1-\xi )u)})\widehat{M}_{k}(d\xi ,dv,du)\right) . \end{aligned}$$

We compute

$$\begin{aligned}&\int _{\{0,1\}\times [1,\infty )^2}(1-e^{-s\xi {\underline{c}}(\xi v+(1-\xi )u)})\widehat{M}_{k}(d\xi ,dv,du) \\&\quad = \varepsilon _{k}m(\psi )\int _{k}^{k+1}(1-e^{-s{\underline{c}}(v)})\frac{1}{m(\psi )}\psi (v-(k+\frac{1}{2}))dv. \end{aligned}$$

Since \(\psi \ge 0\) and \(\psi (z)=1\) if \(\left| z\right| \le \frac{1 }{4}\), it follows that the above term is larger than

$$\begin{aligned} \varepsilon _{k}\int _{k+\frac{1}{4}}^{k+\frac{3}{4}}(1-e^{-s{\underline{c}} (v)})dv. \end{aligned}$$

Finally this gives

$$\begin{aligned} \mathbb {E}(e^{-s\rho _{t}^M})\le & {} \exp \left( -t\sum _{k=1}^{M-1 }\varepsilon _{k}\int _{k+\frac{1}{4}}^{k+\frac{3}{4}}(1-e^{-s{\underline{c}}(v)})dv\right) \\= & {} \exp \left( -t\int _{1}^{M }(1-e^{-s{\underline{c}}(v)})m(dv)\right) , \end{aligned}$$

with

$$\begin{aligned} m(dv)=\sum _{k=1}^{\infty }\varepsilon _{k}1_{(k+\frac{1}{4},k+\frac{3}{4} )}(v)dv. \end{aligned}$$
(93)

In the same way, we get

$$\begin{aligned} \mathbb {E}(e^{-s\bar{\rho } _{t}^{M}})\le \exp \left( -t\int _{M}^{\infty }(1-e^{-s\underline{{c}} (v)})m(dv)\right) . \end{aligned}$$

Notice that \(t\alpha ^M\ge \mathbb {E}(\bar{\rho }_t^M)\). Then using Jensen’s inequality for the convex function \(f(x)=e^{-sx},s,x>0\), we have

$$\begin{aligned} e^{-st\alpha ^{M}}\le e^{-s\mathbb {E}\bar{\rho } _{t}^{M}}\le \mathbb {E}(e^{-s\bar{\rho } _{t}^{M}})\le \exp \left( -t\int _{M}^{\infty }(1-e^{-s\underline{{c}} (v)})m(dv)\right) . \end{aligned}$$

So for every \(M\in \mathbb {N}\), we get

$$\begin{aligned} \mathbb {E}(e^{-s(\rho _{t}^{M}+t\alpha ^{M})})= & {} e^{-st\alpha ^{M}}\times \mathbb {E}(e^{-s\rho _{t}^{M}}) \nonumber \\\le & {} \exp \left( -t\int _{M}^{\infty }(1-e^{-s\underline{{c}} (v)})m(dv)\right) \nonumber \\&\times \exp (-t\int _{1}^{M}\left( 1-e^{-s\underline{{c}} (v)})m(dv)\right) \nonumber \\= & {} \exp \left( -t\int _{1}^{\infty }(1-e^{-s\underline{{c}} (v)})m(dv)\right) , \end{aligned}$$
(94)

and the last term does not depend on M.

Now we will use Lemma 14 from [7], which states the following.

Lemma 5.6

We consider an abstract measurable space E, a \(\sigma \)-finite measure \(\eta \) on this space, and a non-negative measurable function \(f:E\rightarrow \mathbb {R}_+\) such that \(\int _Efd\eta <\infty .\) For \(t>0\) and \(p\ge 1\), we denote

$$\begin{aligned} \alpha _f(t)=\int _E(1-e^{-tf(a)})\eta (da)\quad and\quad I_t^p(f)=\int _0^\infty s^{p-1}e^{-t\alpha _f(s)}ds. \end{aligned}$$

If, for some \(t>0\) and \(p\ge 1\),

$$\begin{aligned} \underline{\lim }_{u\rightarrow \infty }\frac{1}{\ln u}\eta (f\ge \frac{1}{u})>p/t, \end{aligned}$$
(95)

then \(I_t^p(f)<\infty .\)

We will use the above lemma for \(\eta =m\) and \(f=\underline{c}\). So if we have

$$\begin{aligned} \underline{\lim }_{u\rightarrow \infty }\frac{1}{\ln u}m({\underline{c}}\ge \frac{1}{u})=\infty , \end{aligned}$$
(96)

then for every \(p\ge 1,t>0,M\ge 1\), (92),(94) and Lemma 5.6 give

$$\begin{aligned} \mathbb {E}\left( \frac{1}{\rho _{t}^{M}+t\alpha ^{M}}\right) ^{2p}= & {} \frac{1}{\Gamma (2p)}\int _{0}^{\infty }s^{2p-1}\mathbb {E}(e^{-s(\rho _{t}^{M}+t\alpha ^{M})})ds \nonumber \\\le & {} \frac{1}{\Gamma (2p)}\int _{0}^{\infty }s^{2p-1}\exp \left( -t\int _{1}^{\infty }(1-e^{-s{\underline{c}}(v)})m(dv)\right) ds<\infty .\nonumber \\ \end{aligned}$$
(97)

Finally using (91), we conclude that if (96) holds, then

$$\begin{aligned} \sup _{M}\mathbb {E}(\lambda _t^M)^{-p}<\infty . \end{aligned}$$
(98)

Step 5 It remains to estimate \(m({\underline{c}}\ge \frac{1}{u}).\) It seems difficult to do this in a completely abstract framework, so we suppose Hypothesis 2.4 (a): there exist a constant \(\varepsilon _{*}>0\) and some \(\alpha _1>\alpha _2>0\) such that for every \(k\in \mathbb {N}\),

$$\begin{aligned}&\mathbb {1}_{I_k}(z)\frac{\nu (dz)}{m_k}\ge \mathbb {1}_{I_k}(z)\varepsilon _kdz \quad with\quad \varepsilon _{k}=\frac{\varepsilon _{*}}{{(k+1)}^{1-{{\alpha }} }},\ for\ any\ {\alpha }\in (\alpha _2,\alpha _1],\\&\quad and\quad {\underline{c}}(z)\ge e^{- z^{\alpha _2 }}. \end{aligned}$$

Then \(\{z:{\underline{c}}(z)\ge \frac{1}{u}\}\supseteq \{z:z\le (\ln u)^{1/{\alpha _2} }\}.\) In particular, for \(k\le \lfloor (\ln u)^{1/\alpha _2}\rfloor -1=:k(u)\), one has \(I_k\subseteq \{z:\underline{c}(z)\ge \frac{1}{u}\}\). Then for u large enough, we compute

$$\begin{aligned} m\left( {\underline{c}} \ge \frac{1}{u}\right)\ge & {} \sum _{k=1}^{k(u)}m(I_k)\ge \frac{1}{2}\sum _{k=1}^{k(u)}\varepsilon _{k}\ge \frac{1}{2}\varepsilon _{*}\sum _{k=1}^{k(u)}\frac{1}{ (k+1)^{1-{{\alpha }} }}\\\ge & {} \frac{1}{2}\varepsilon _{*}\int _{2}^{(\ln u)^{1/{\alpha _2} }}\frac{1}{ z^{1-{{\alpha }} }}dz \\= & {} \frac{\varepsilon _{*}}{{2{\alpha }} }((\ln u)^{{\alpha } /{\alpha _2} }-2^{{{\alpha }} }). \end{aligned}$$

Since \({\alpha }>\alpha _2\), (96) is verified and we obtain (98).

Now we consider Hypothesis 2.4 (b): we suppose that there exist a constant \(\varepsilon _{*}>0\) and some \(\alpha >0\) such that for every \(k\in \mathbb {N}\),

$$\begin{aligned} \mathbb {1}_{I_k}(z)\frac{\nu (dz)}{m_k}\ge \mathbb {1}_{I_k}(z)\varepsilon _kdz\quad with\quad \varepsilon _{k}=\frac{\varepsilon _{*}}{k+1},\quad and\quad {\underline{c}}(z)\ge \frac{1}{ z^{\alpha }}. \end{aligned}$$

Now \(\{z:{\underline{c}}(z)\ge \frac{1}{u}\} \supseteq \{z:z\le u^{1/\alpha }\} \). Then for u large enough,

$$\begin{aligned} m\left( {\underline{c}}\ge \frac{1}{u}\right) \ge \frac{1}{2}\varepsilon _{*}\sum _{k=1}^{\lfloor u^{1/\alpha }\rfloor -1}\frac{1 }{k+1}\ge \frac{1}{2}\varepsilon _{*}\int _{2}^{u^{1/\alpha }}\frac{dz}{z}= \frac{1}{2}\varepsilon _{*}\left( \frac{1}{\alpha }\ln u-\ln {2}\right) . \end{aligned}$$

And consequently

$$\begin{aligned} \underline{\lim }_{u\rightarrow \infty }\frac{1}{\ln u}m\left( {\underline{c}}\ge \frac{1}{u}\right) \ge \frac{\varepsilon _{*}}{2\alpha }. \end{aligned}$$

Using Lemma 5.6, this gives: if

$$\begin{aligned} \frac{2p}{t}< \frac{\varepsilon _{*}}{2\alpha }\quad \Leftrightarrow \quad t> \frac{4p \alpha }{\varepsilon _{*}} \end{aligned}$$

then

$$\begin{aligned} \sup _M\mathbb {E}\left( \frac{1}{\rho _{t}^{M}+t\alpha ^{M}}\right) ^{2p}<\infty , \end{aligned}$$

and we have \(\sup \limits _M\mathbb {E}(\lambda _t^{M})^{-p}<\infty \). \(\square \)

5.3 Some proofs concerning Section 4.2

We will prove that the triplet \((\mathcal {S},D,L)\) defined in Sect. 4.2 is an IbP framework. Here, we only show that \(D^q\) is closable and that L satisfies the duality formula (32). To do so, we introduce the divergence operator \(\delta \). We denote the space of simple processes by

$$\begin{aligned} \mathcal {P}=\left\{ u=\left( (\bar{u}^k_i)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},\sum _{r=1}^{n}u_r\varphi _r\right) :\bar{u}^k_i,u_r\in \mathcal {S},\varphi _r\in L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb),m^\prime ,m,n\in \mathbb {N}\right\} . \end{aligned}$$

For \(u=((\bar{u}^k_i)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},\sum _{r=1}^{n}u_r\varphi _r)\in \mathcal {P}\), we denote \(u^Z=(\bar{u}^k_i)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}}\) and \(u^W=\sum _{r=1}^{n}u_r\varphi _r\), so that \(u=(u^Z,u^W)\).

We notice that \(\mathcal {P}\) is dense in \(L^2(\Omega ;\mathcal {H})\), with \(\mathcal {H}=l_{2}\otimes L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb)\).

Then we define the divergence operator \(\delta : \mathcal {P}\rightarrow \mathcal {S}\) by

$$\begin{aligned}&\delta (u)=\delta ^Z(u^Z)+\delta ^W(u^W)\\&\quad with\,\,\, \delta ^Z(u^Z)=-\sum _{k=1}^m\sum _{i=1}^{m^\prime } (D^{Z}_{(k,i)}\bar{u}^k_i+\xi ^k_i \bar{u}^k_i\times \theta _{k}(V^{k}_i))\\&\quad \delta ^W(u^W)=\sum _{r=1}^{n}u_rW_\nu (\varphi _r)-\sum _{r=1}^{n} \langle D^Wu_r,\varphi _r\rangle _{L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb)}. \end{aligned}$$

We will show that \(\delta \) satisfies the following duality formula: For every \(F\in \mathcal {S},u\in \mathcal {P}\),

$$\begin{aligned} \mathbb {E}\langle DF,u\rangle _{\mathcal {H}}=\mathbb {E}F\delta (u). \end{aligned}$$
(99)

In fact, if we denote by \(\hat{V}_{i}^{k }(x)\) the sequence \((V_{i_0}^{k_0})_{\begin{array}{c} 1\le i_0\le m^\prime \\ 1\le k_0\le m \end{array}}\) with \(V_{i}^{k}\) replaced by x, then for any \(m^\prime ,m\in \mathbb {N}\),

$$\begin{aligned}&\mathbb {E}\langle D^ZF,u^Z\rangle _{l_2}=\mathbb {E}\sum _{k=1}^m\sum _{i=1}^{m^\prime } D_{(k,i)}^ZF\times \bar{u}^k_i\\&=\sum _{k=1}^m\sum _{i=1}^{m^\prime }\mathbb {E}\xi ^k_i\partial _{v^k_i}f(\omega ,(V_{i_0}^{k_0})_{\begin{array}{c} 1\le i_0\le m^\prime \\ 1\le k_0\le m \end{array}},(W_\nu (\varphi _{j}))_{j=1}^{n})\bar{u}^k_i(\omega ,(V_{i_0}^{k_0})_{\begin{array}{c} 1\le i_0\le m^\prime \\ 1\le k_0\le m \end{array}},(W_\nu (\varphi _{j}))_{j=1}^{n})\\&=\sum _{k=1}^m\sum _{i=1}^{m^\prime }\mathbb {E}\int _{\mathbb {R}}\xi ^k_i\partial _{v^k_i}f(\omega ,\hat{V}_i^k(x),(W_\nu (\varphi _{j}))_{j=1}^{n})\times \bar{u}^k_i(\omega ,\hat{V}_i^k(x),(W_\nu (\varphi _{j}))_{j=1}^{n})\frac{\psi _{k}(x)}{m(\psi )}dx\\&=-\sum _{k=1}^m\sum _{i=1}^{m^\prime }\mathbb {E}\int _{\mathbb {R}}\xi ^k_if(\omega ,\hat{V}_i^k(x),(W_\nu (\varphi _{j}))_{j=1}^{n})\times [\partial _{v^k_i}\bar{u}^k_i(\omega ,\hat{V}_i^k(x),(W_\nu (\varphi _{j}))_{j=1}^{n})\\&\qquad +\bar{u}^k_i(\omega ,\hat{V}_i^k(x),(W_\nu (\varphi _{j}))_{j=1}^{n})\frac{\partial _{x}\psi _{k}(x)}{\psi _{k}(x)}]\frac{\psi _{k}(x)}{m(\psi )}dx\\&=-\sum _{k=1}^m\sum _{i=1}^{m^\prime }\mathbb {E}F[D^Z_{(k,i)}\bar{u}^k_i+\xi ^k_i\bar{u}^k_i\partial _{x}(\ln \psi _{k}(V^k_i))]=\mathbb {E}(F\delta ^Z(u^Z)). \end{aligned}$$

On the other hand, since \(L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb)\) is a separable Hilbert space, we may assume without loss of generality that, in the definition of simple functionals, \((\varphi _1,\ldots ,\varphi _m,\ldots )\) is an orthogonal basis of \(L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb)\).

Then with \(p_r=\int _{\mathbb {R}_{+}\times \mathbb {R}_{+}}\varphi _r^2(s,z)\nu (dz)ds\), for any \(n\in \mathbb {N}\),

$$\begin{aligned}&\mathbb {E}\langle D^WF,u^W\rangle _{L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb)}=\mathbb {E}\int _{\mathbb {R}_{+}\times \mathbb {R}_{+}}D^{W}_{(s,z)}F\times \sum _{r=1}^{n}u_r\varphi _r(s,z)\ \nu (dz)ds\\&=\mathbb {E}\sum _{r=1}^{n}\partial _{ w_r}f(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},(W_\nu (\varphi _{j}))_{j=1}^{n})u_r(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},(W_\nu (\varphi _{j}))_{j=1}^{n})p_r\\&=\sum _{r=1}^{n}\mathbb {E}\int _{\mathbb {R}}\partial _{ w_r}f(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},W_\nu (\varphi _{1}),\ldots ,W_\nu (\varphi _{r-1}),y,W_\nu (\varphi _{r+1}),\ldots ,W_\nu (\varphi _{n}))\\&\quad \times u_r(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},W_\nu (\varphi _{1}),\ldots ,W_\nu (\varphi _{r-1}),y,W_\nu (\varphi _{r+1}),\ldots ,\\&\quad W_\nu (\varphi _{n}))\frac{1}{\sqrt{2\pi p_r}}e^{-\frac{y^2}{2p_r}} dy\times p_r\\&=-\sum _{r=1}^{n}\mathbb {E}\int _{\mathbb {R}}f(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},W_\nu (\varphi _{1}),\ldots ,W_\nu (\varphi _{r-1}),y,W_\nu (\varphi _{r+1}),\ldots ,W_\nu (\varphi _{n}))\\&\quad \times [\partial _{w_r}u_r(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},W_\nu (\varphi _{1}),\ldots ,W_\nu (\varphi _{r-1}),y,W_\nu (\varphi _{r+1}),\ldots ,W_\nu (\varphi _{n}))\\&\quad -\frac{y}{p_r}u_r(\omega ,(V_{i}^k)_{\begin{array}{c} 1\le i\le m^\prime \\ 1\le k\le m \end{array}},W_\nu (\varphi _{1}),\ldots ,W_\nu (\varphi _{r-1}),y,W_\nu (\varphi _{r+1}),\ldots ,W_\nu (\varphi _{n}))]\\&\quad \frac{1}{\sqrt{2\pi p_r}}e^{-\frac{y^2}{2p_r}} dy\times p_r\\&=\mathbb {E}F(\sum _{r=1}^{n}u_rW_\nu (\varphi _r)-\sum _{r=1}^{n} \langle D^Wu_r,\varphi _r\rangle _{L^2(\mathbb {R}_{+}\times \mathbb {R}_{+},\nu \times Leb)})=\mathbb {E}(F\delta ^W(u^W)). \end{aligned}$$
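The Gaussian integration by parts used for each coordinate above reduces, in one variable, to \(\mathbb {E}[f^\prime (Y)u(Y)]=\mathbb {E}[f(Y)(u(Y)Y/p-u^\prime (Y))]\) for \(Y\sim N(0,p)\), with Y playing the role of \(W_\nu (\varphi _r)\) and \(p=p_r\). The following quadrature sketch checks it; the choices \(p=2\), \(f(y)=y^2\), \(u(y)=y\) are ours, for illustration only.

```python
import numpy as np

# Quadrature check of the Gaussian integration by parts behind the Wiener
# part: for Y ~ N(0, p),
#   E[f'(Y) u(Y)] = E[ f(Y) ( u(Y) * Y / p - u'(Y) ) ].
# Illustrative choices (not from the paper): p = 2, f(y) = y^2, u(y) = y.

def trapezoid(y, x):
    """Composite trapezoidal rule (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

p = 2.0
y = np.linspace(-15.0, 15.0, 300_001)
gauss = np.exp(-y**2 / (2 * p)) / np.sqrt(2 * np.pi * p)  # N(0, p) density

f, df = y**2, 2 * y                     # f(y) = y^2
u, du = y, np.ones_like(y)              # u(y) = y

lhs = trapezoid(df * u * gauss, y)                 # E[f'(Y) u(Y)] = 2p
rhs = trapezoid(f * (u * y / p - du) * gauss, y)   # E[Y^4]/p - E[Y^2] = 2p

print(lhs, rhs)  # both close to 4 for p = 2
```

For this choice both sides equal \(\mathbb {E}[Y^4]/p-\mathbb {E}[Y^2]=3p-p=2p=4\), matching the quadrature.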

Then (99) is proved. Using this duality formula recursively, we can show that \(D^q\) is closable. Suppose that \(F_n\in \mathcal {S}\) satisfies \(F_n\rightarrow 0\) in \(L^2(\Omega )\) and \(D^qF_n\rightarrow u\) in \(L^2(\Omega ;\mathcal {H}^{\otimes q})\) for some \(u\in L^2(\Omega ;\mathcal {H}^{\otimes q})\). Then for any \(h_1,\ldots ,h_q\in \mathcal {P}\), \(\mathbb {E}\langle u,h_1\otimes \cdots \otimes h_q\rangle _{\mathcal {H}^{\otimes q}}=\lim \limits _{n\rightarrow \infty }\mathbb {E}\langle D^qF_n,h_1\otimes \cdots \otimes h_q\rangle _{\mathcal {H}^{\otimes q}}=\lim \limits _{n\rightarrow \infty }\mathbb {E}F_n\delta (h_1\delta (h_2(\cdots \delta (h_q))))=0\). Since \(\mathcal {P}^{\otimes q}\) is dense in \(L^2(\Omega ;\mathcal {H}^{\otimes q})\), we conclude that \(u=0\), so \(D^q\) is closable.

We note that the definitions of \(\delta \) and L immediately give \(LF=\delta (DF)\) for every \(F\in \mathcal {S}\). Moreover, taking \(u=DG\) in (99) with \(G\in \mathcal {S}\) yields the duality formula (32) for L.
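The identity \(LF=\delta (DF)\) is easy to check in one Gaussian coordinate: restricting the formula for \(\delta ^W\) above to a single basis element with \(p_r=1\) gives \(\delta (u)=u(Y)Y-u^\prime (Y)\) for \(Y\sim N(0,1)\) and \(Df=f^\prime \), hence \(Lf(y)=yf^\prime (y)-f^{\prime \prime }(y)\), and the duality of L reads \(\mathbb {E}[f^\prime (Y)g^\prime (Y)]=\mathbb {E}[f(Y)Lg(Y)]\). A quadrature sketch (the choice \(f=g=y^2\) is ours, for illustration):

```python
import numpy as np

# One-coordinate sanity check of L F = delta(D F): for Y standard normal
# (a single Wiener coordinate, p_r = 1), D f = f' and
# delta(u) = u(Y) * Y - u'(Y), hence L f(y) = y f'(y) - f''(y).
# The duality of L then reads E[f'(Y) g'(Y)] = E[f(Y) L g(Y)].
# Illustrative choice (not from the paper): f = g = y^2.

def trapezoid(y, x):
    """Composite trapezoidal rule (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

y = np.linspace(-10.0, 10.0, 200_001)
gauss = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density

g, dg, d2g = y**2, 2 * y, np.full_like(y, 2.0)   # g, g', g''
Lg = y * dg - d2g                                # L g(y) = 2 y^2 - 2

lhs = trapezoid(dg * dg * gauss, y)   # E[g'(Y)^2] = 4 E[Y^2] = 4
rhs = trapezoid(g * Lg * gauss, y)    # 2 E[Y^4] - 2 E[Y^2] = 6 - 2 = 4

print(lhs, rhs)  # both close to 4
```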


Cite this article

Bally, V., Qin, Y. Total variation distance between a jump-equation and its Gaussian approximation. Stoch PDE: Anal Comp 10, 1211–1260 (2022). https://doi.org/10.1007/s40072-022-00270-w
