Abstract
We prove the existence of classical solutions to parabolic linear stochastic integro-differential equations with adapted coefficients using Feynman–Kac transformations, conditioning, and the interlacing of space-inverses of stochastic flows associated with the equations. The equations are forward and the derivation of existence does not use the “general theory” of SPDEs. Uniqueness is proved in the class of classical solutions with polynomial growth.
References
Brzeźniak, Z., van Neerven, J.M.A.M., Veraar, M.C., Weis, L.: Itô’s formula in UMD Banach spaces and regularity of solutions of the Zakai equation. J. Differ. Equ. 245(1), 30–58 (2008)
Chen, Z.-Q., Kim, K.-H.: An Lp-theory of non-divergence form SPDEs driven by Lévy processes. arXiv:1007.3295 (2010)
De Marco, G., Gorni, G., Zampieri, G.: Global inversion of functions: an introduction. Nonlinear Differ. Equ. Appl. NoDEA 1(3), 229–248 (1994)
Da Prato, G., Menaldi, J.-L., Tubaro, L.: Some results of backward Itô formula. Stoch. Anal. Appl. 25(3), 679–703 (2007)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge (1992)
Grigelionis, B., Mikulevicius, R.: Nonlinear filtering equations for stochastic processes with jumps. The Oxford Handbook of Nonlinear Filtering, pp. 95–128. Oxford University Press, Oxford (2011)
Grigelionis, B.: Reduced stochastic equations of nonlinear filtering of random processes. Litovsk. Mat. Sb. 16(3), 51–63 (1976)
Gyöngy, I.: On stochastic equations with respect to semimartingales III. Stochastics 7(4), 231–254 (1982)
Hausenblas, E.: Existence, uniqueness and regularity of parabolic SPDEs driven by Poisson random measure. Electron. J. Probab. 10, 1496–1546 (2005)
Holden, H., Øksendal, B., Ubøe, J., Zhang, T.: Stochastic Partial Differential Equations: A Modeling, White Noise Functional Approach, 2nd edn. Universitext. Springer, New York (2010)
Jacod, J.: Calcul Stochastique et Problèmes de Martingales. Lecture Notes in Mathematics. Springer, Berlin (1979)
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, volume 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Berlin (2003)
Kallenberg, O.: Foundations of Modern Probability. Probability and its Applications. Springer, New York (1997)
Krylov, N.V.: The Cauchy problem for linear stochastic partial differential equations. Izv. Akad. Nauk SSSR Ser. Mat. 41(6), 1329–1347 (1977)
Krylov, N.V., Rozovskiĭ, B.L.: On the first integrals and Liouville equations for diffusion processes. Stochastic differential systems (Visegrád, 1980). Lecture Notes in Control and Information Sciences, pp. 117–125. Springer, Berlin (1981)
Krylov, N.V.: An analytic approach to SPDEs. Stochastic Partial Differential Equations: Six Perspectives of Mathematical Surveys and Monographs, pp. 185–242. American Mathematical Society, Providence (1999)
Krylov, N.V.: On the Itô-Wentzell formula for distribution-valued processes and related topics. Probab. Theory Relat. Fields 150(1–2), 295–319 (2011)
Kunita, H.: On the decomposition of solutions of stochastic differential equations. In Stochastic Integrals (Proc. Sympos., Univ. Durham, Durham, 1980). Lecture Notes in Mathematics, pp. 213–255. Springer, Berlin (1981)
Kunita, H.: Lectures on Stochastic Flows and Applications. Published for the Tata Institute of Fundamental Research, Bombay (1986)
Kunita, H.: Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. Real and Stochastic Analysis, Trends in Mathematics, pp. 305–373. Birkhäuser Boston, Boston (2004)
Leahy, J.-M., Mikulevicius, R.: On degenerate linear stochastic evolution equations driven by jump processes. Stoch. Process Appl. 125(10), 3748–3784 (2015)
Leahy, J.-M., Mikulevičius, R.: On some properties of space inverses of stochastic flows. Stoch. PDE: Anal. 3(4), 1–34 (2015)
Liptser, R.S., Shiryayev, A.N.: Theory of Martingales, Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht (1989). Translated from the Russian by K. Dzjaparidze
Meyer-Brandis, T.: Stochastic Feynman–Kac equations associated to Lévy-Itô diffusions. Stoch. Anal. Appl. 25(5), 913–932 (2007)
Meyer, P.A.: La théorie de la prédiction de F. Knight. Séminaire de Probabilités, X (Première partie, Univ. Strasbourg, Strasbourg, année universitaire 1974/1975). Lecture Notes in Mathematics, pp. 86–103. Springer, Berlin (1976)
Mikulevičius, R.: Properties of solutions of stochastic differential equations. Litovsk. Mat. Sb. 23(4), 18–31 (1983)
Mikulevičius, R.: On the Cauchy problem for parabolic SPDEs in Hölder classes. Ann. Probab. 28(1), 74–103 (2000)
Mikulevičius, R., Pragarauskas, H.: On Hölder solutions of the integro-differential Zakai equation. Stoch. Process. Appl. 119(10), 3319–3355 (2009)
Métivier, M., Pistone, G.: Une formule d’isométrie pour l’intégrale stochastique hilbertienne et équations d’évolution linéaires stochastiques. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 33(1), 1–18 (1975/1976)
Novikov, A.A.: Discontinuous martingales. Teor. Verojatnost. i Primenen. 20, 13–28 (1975)
Pardoux, É.: Sur des équations aux dérivées partielles stochastiques monotones. C. R. Acad. Sci. Paris Sér 275, A101–A103 (1972)
Pardoux, É.: Équations aux dérivées partielles stochastiques de type monotone. Séminaire sur les Équations aux Dérivées Partielles (1974–1975), 2nd edn, p. 10. Collège de France, Paris (1975)
Priola, E.: Pathwise uniqueness for singular SDEs driven by stable processes. Osaka J. Math. 49(2), 421–447 (2012)
Priola, E.: Stochastic flow for SDEs with jumps and irregular drift term. arXiv:1405.2575, (2014)
Protter, P.E.: Stochastic integration and differential equations. Stochastic Modelling and Applied Probability, 2nd edn. Springer, Berlin (2005). Version 2.1, Corrected third printing
Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise: An Evolution Equation Approach. Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge (2007)
Rozovskiĭ, B.L.: Stochastic Evolution Systems: Linear Theory and Applications to Nonlinear Filtering, volume 35 of Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht (1990). Translated from the Russian by A. Yarkho
Röckner, M., Zhang, T.: Stochastic evolution equations of jump type: existence, uniqueness and large deviation principles. Potential Anal. 26(3), 255–279 (2007)
Tinfavicius, E.: Linearized stochastic equations of nonlinear filtering of random processes. Lith. Math. J. 17(3), 321–334 (1977)
Veraar, M.: The stochastic Fubini theorem revisited. Stochastics 84(4), 543–551 (2012)
Walsh, J.B.: École d’été de Probabilités de Saint-Flour, XIV—1984. An Introduction to Stochastic Partial Differential Equations. Lecture Notes in Mathematics, pp. 265–439. Springer, Berlin (1986)
Zhang, X.: Degenerate irregular SDEs with jumps and application to integro-differential equations of Fokker–Planck type. Electron. J. Probab. 18(55), 25 (2013)
Zhou, G.: Global well-posedness of a class of stochastic equations with jumps. Adv. Differ. Equ. 2013, 175 (2013)
Appendix
1.1 Martingale and point measure moment estimates
Set \((Z,{\mathcal {Z}},\pi )=(Z^{1},{\mathcal {Z}}^{1},\pi ^{1})\), \( p(dt,dz)=p^{1}(dt,dz)\), and \(q(dt,dz)=q^{1}(dt,dz)\). The following moment estimates are used to derive the estimates of \(\Gamma _t\) and \(\Psi _t\) in Lemma 3.1. The notation \(a\underset{p,T}{ \sim }b\) indicates that a is bounded above and below by b times a constant depending only on p and T.
Lemma 4.1
Let \(h:\Omega \times [ 0,T]\times Z\rightarrow \mathbf {R}^{d_1}\) be \({\mathcal {P}}_{T}\otimes {\mathcal {Z}}\) -measurable
-
(1)
For any stopping time \(\tau \le T\) and \(p\ge 2\),
$$\begin{aligned} \mathbf {E}\left[ \sup _{t\le \tau }\left| \int _{]0,t] }\int _{Z}h_s(z)q(ds,dz)\right| ^{p}\right]&\underset{p,T}{\sim }\mathbf {E}\left[ \int _{]0,\tau ] }\int _{Z}\left| h_s(z)\right| ^{p}\pi (dz)ds\right] \\&\quad +\mathbf {E}\left[ \left( \int _{]0,\tau ] }\int _{Z}\left| h_s(z)\right| ^{2}\pi (dz)ds\right) ^{p/2}\right] . \end{aligned}$$ -
(2)
For any stopping time \(\tau \le T\) and \(\bar{p}\ge 1\),
$$\begin{aligned} \mathbf {E}\left[ \sup _{t\le \tau }\left( \int _{]0,t] }\int _{Z}|h_s(z)|p(ds,dz)\right) ^{\bar{p}}\right]&\underset{\bar{p},T}{\sim }\mathbf {E}\left[ \int _{]0,\tau ] }\int _{Z}\left| h_s(z)\right| ^{\bar{p}}\pi (dz)ds\right] \\&\quad +\mathbf {E}\left[ \left( \int _{]0,\tau ] }\int _{Z}|h_s(z)|\pi (dz)ds\right) ^{\bar{p}}\right] . \end{aligned}$$
Proof
We will only prove part (2), since part (1) is well-known (see, e.g., [20] or [30]). Assume that \(h_t(\omega ,z)>0\) for all \(\omega ,t\) and z. Let
It suffices to prove (2) for \(p>1\), since the case \(p= 1\) is obvious. Fix an arbitrary stopping time \(\tau \le T\) and \(p>1\). For all t, we have
Thus, by the inequality
we get
and
Since \(A_t\) is increasing, we obtain
It is easy to see that
Applying Young’s inequality, for any \(\varepsilon >0\), we get
Combining the estimates for any \(\varepsilon _1\in (0,\frac{1}{p})\), we have
and for any \(\varepsilon _2 \in (0,\frac{1}{p2^{p-2}})\)
which completes the proof.\(\square \)
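As a quick sanity check of part (2), take \(h\equiv 1\) with a Poisson measure of finite intensity \(\lambda \) on \([0,T]\): the integral against \(p(ds,dz)\) is then a Poisson random variable N with mean \(\lambda T\), and the right-hand side reduces to \(\lambda T+(\lambda T)^{\bar{p}}\). The following sketch (all parameters are illustrative) checks numerically that \(\mathbf {E}[N^{\bar{p}}]\) is comparable to this sum; for \(\bar{p}=2\) the two sides are in fact equal, since \(\mathbf {E}[N^{2}]=\lambda T+(\lambda T)^{2}\).

```python
import math

def poisson_moment(lam, p, kmax=200):
    """E[N^p] for N ~ Poisson(lam), via the truncated series sum_k k^p * pmf(k)."""
    pmf = math.exp(-lam)  # P(N=0)
    total = 0.0
    for k in range(1, kmax):
        pmf *= lam / k    # P(N=k) from P(N=k-1)
        total += (k ** p) * pmf
    return total

# For h = 1, the integral against p(ds,dz) is N ~ Poisson(lam*T), and the
# right-hand side of Lemma 4.1(2) reduces to lam*T + (lam*T)^p.
for lam in [0.1, 1.0, 5.0]:       # values of lam*T
    for p in [2, 3, 4]:           # values of p-bar
        lhs = poisson_moment(lam, p)
        rhs = lam + lam ** p
        assert 0.5 <= lhs / rhs <= 20, (lam, p)
        print(f"lam*T={lam}, p={p}: E[N^p]={lhs:.4f}, lam*T+(lam*T)^p={rhs:.4f}")
```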
1.2 Optional projection
The following lemma concerning the optional projection plays a central role in Sect. 3.4 and the proof of Theorem 2.2. For more information on the Skorokhod \(\mathcal {J}_{1}\)-topology, we refer the reader to Chapter 6, Sect. 1 of [23]. Also, we refer the reader to Theorem 5.3 of [13] for the construction of regular conditional probability measures on Borel spaces.
Lemma 4.2
([25, cf. Theorem 1]) Let \(\mathcal {X}\) be a Polish space and \(D([0,T];\mathcal {X})\) be the space of \(\mathcal {X}\)-valued càdlàg trajectories with the Skorokhod \(\mathcal {J}_{1}\)-topology. If \(\mathfrak {A}\) is a random variable taking values in \(D([0,T];\mathcal {X})\), then there exists a family of \(\mathcal {B}([0,T])\times \mathcal {F}\)-measurable non-negative measures \(E^{t}(dU),\) \((\omega ,t)\in \Omega \times [0,T],\) on \(D([0,T];\mathcal {X})\) and a random variable \(\zeta \) satisfying \(\mathbf {P}\left( \zeta <T\right) =0\) such that \(E^{t}(D([0,T];\mathcal {X}))=1\) for \(t<\zeta \) and \(E^{t}(D([0,T];\mathcal {X}))=0\) for \(t\ge \zeta .\) In addition, \(E^{t}\) is càdlàg in the topology of weak convergence, \(E^{t}=E^{t+}\) for all \(t\in [0,T]\), and for each continuous and bounded functional F on \(D([0,T];\mathcal {X}),\) the process \(E^{t}\left( F\right) \) is the càdlàg version of \(\mathbf {E}[ F\left( \mathfrak {A}\right) | \mathcal {F}_{t}] \). If \(G:\Omega \times [0,T]\times [0,T]\times D([0,T];\mathcal {X})\rightarrow \mathbf {R}^{d_2}\) is bounded and \(\mathcal {O}\times \mathcal {B}\left( [0,T]\right) \times {\mathcal {B}}\left( D\left( [0,T];\mathcal {X}\right) \right) \)-measurable, then
is the optional projection of \(G_{t}(\mathfrak {A})=G_t(\omega ,t,\mathfrak {A} )\). Furthermore, if \(G=G_t(\omega ,t,U)\) is bounded and \({\mathcal {P}}\times {\mathcal {B}}( [0,T]) \times {\mathcal {B}}( D( [0,T]; \mathcal {X})) \) -measurable, then \(E^{t-}(G_{t}) \) is the predictable projection of \(G_{t}( \mathfrak {A}) =G_t(\omega ,t,\mathfrak {A}).\)
Proof
We follow the proof of Theorem 1 in [25]. Since \(D([0,T];\mathcal {X})\) is a Polish space, for each \(t\in [0,T]\), there is a family of probability measures \(\tilde{E}_{\omega }^{t}(dw)\), \(\omega \in \Omega \), on \(D([0,T];\mathcal {X})\) such that for each \(A\in {\mathcal {B}}(D([0,T];\mathcal { X})),\) \(\tilde{E}^{t}(A)\) is \(\mathcal {F}_{t}\)-measurable and \(\mathbf {P}\) -a.s.
For each \(\omega \in \Omega \), let \(I\left( \omega \right) \) be the set of all \(t\in (0,T]\) such that for any bounded continuous function F on \( D([0,T];\mathcal {X})\), the function
has a right-hand limit on \([0,s)\cap \mathbf {Q}\) and a left-hand limit on \( (0,s]\cap \mathbf {Q}\) for every rational \(s\in [0,t]\cap \mathbf {Q}\). Let \(\zeta \left( \omega \right) =\sup \left( t:t\in I(\omega )\right) \wedge T.\) It is easy to see that \(\mathbf {P}\left( \zeta <T\right) =0.\) We set \(\tilde{E}_{\omega }^{t}=0\) if \(\zeta (\omega )<t\le T\). The function \( \tilde{E}_{\omega }^{t}\) has left-hand and right-hand limits for all \(t\in \mathbf {Q}\cap \left[ 0,T\right] \). We define \(E_{\omega }^{t}=\tilde{E} _{\omega }^{t+}\) for each \(t\in [0,T)\) (the limit is taken along the rationals), and \(E_{\omega }^{T}\) is the left-hand limit at T along the rationals. The statement follows by repeating the proof of Theorem 1 in [25] in an obvious way.\(\square \)
1.3 Estimates of Hölder continuous functions
For a Banach space V with norm \(|\cdot |_{V}\) and a continuous function \( f:\mathbf {R}^{d_1}\rightarrow V\), we define
$$\begin{aligned} |f|_{0;V}:=\sup _{x\in \mathbf {R}^{d_1}}|f(x)|_{V} \end{aligned}$$
and, for \(\mu \in (0,1]\),
$$\begin{aligned}{}[f]_{\mu ;V}:=\sup _{x,y\in \mathbf {R}^{d_1},\,x\ne y}\frac{|f(x)-f(y)|_{V}}{|x-y|^{\mu }}. \end{aligned}$$
For each real number \(\beta \in \mathbf {R}\), we write \(\beta =[\beta ]^-+\{\beta \}^+\), where \([\beta ]^-\) is an integer and \(\{\beta \}^+\in (0,1]\). For a Banach space V with norm \(|\cdot |_{V}\) and real number \(\beta >0 \), we denote by \(\mathcal {C}^{\beta }(\mathbf {R}^{d_1};V)\) the Banach space of all \([\beta ]^-\)-times continuously differentiable bounded functions \(f:\mathbf {R}^{d_1}\rightarrow V\) having finite norm
$$\begin{aligned} |f|_{\beta ;V}:=\sum _{|\gamma |\le [\beta ]^-}|\partial ^{\gamma }f|_{0;V}+\sum _{|\gamma |=[\beta ]^-}[\partial ^{\gamma }f]_{\{\beta \}^+;V}. \end{aligned}$$
When V is clear from the context, we drop the subscript V from the norm \( | \cdot |_{\beta ;V}\) and semi-norm \([\cdot ]_{\{\beta \}^+;V}\) and write \( |\cdot |_{\beta }\) and \([\cdot ]_{\{\beta \}^+}\), respectively.
In the coming lemmas, we establish some properties of weighted Hölder spaces that are used in Sect. 3.5 and the proof of Theorem 2.4.
Lemma 4.3
Let \(\beta \in (0,1]\) and \( \theta _1,\theta _2\in \mathbf {R}\) with \(\theta _{1}-\theta _{2}\le \beta .\)
-
(1)
There is a constant \(c_{1}=c_{1}\left( \theta _{2},\beta \right) \) such that for all \(\phi :\mathbf {R}^{d_1}\rightarrow \mathbf {R}\) with \( |r_{1}^{-\theta _{1}}\phi |_{0}+[r_{1}^{-\theta _{2}}\phi ]_{\beta }=:N_{1}<\infty , \)
$$\begin{aligned} |\phi (x)-\phi (y)|\le c_{1}N_{1}(r_{1}(x)^{\theta _{2}}\vee r_{1}(y)^{\theta _{2}})|x-y|^{\beta }, \end{aligned}$$for all \(x,y\in \mathbf {R}^{d_1}\), where \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
-
(2)
Conversely, if \(\phi :\mathbf {R}^{d_1}\rightarrow \mathbf {R}\) satisfies \(|r_{1}^{-\theta _{1}}\phi |_{0}<\infty \) and there is a constant \(N_{2}\) such that for all \(x,y\in \mathbf {R}^{d_1}\),
$$\begin{aligned} |\phi (x)-\phi (y)|\le N_{2}( r_{1}(x)^{\theta _{2}}\vee r_{1}(y)^{\theta _{2}}) |x-y|^{\beta }, \end{aligned}$$then
$$\begin{aligned}{}[r_{1}^{-\theta _{2}}\phi ]_{\beta }\le c_{1}|r_{1}^{-\theta _{1}}\phi |_{0}+N_{2}. \end{aligned}$$
Proof
(1) For all x, y such that \(r_{1}\left( x\right) ^{\theta _{2}}\ge r_{1}\left( y\right) ^{\theta _{2}}\), we have
where \(c_{1} :=1+\sup _{t\in (0,1) }\frac{1-t^{\theta _{2}}}{( 1-t)^{\beta }}\) if \(\theta _{2}\ge 0\) and \(c_{1} :=1+\sup _{t\in (1,\infty )}\frac{ (t^{\theta _{2}}-1)t^{\beta }}{( t-1) ^{\beta }}\) if \(\theta _{2}<0,\) which proves the first claim. (2) For all x and y with \(r_{1}(x)^{\theta _{2}}>r_{1}(y)^{\theta _{2}}\), we have
which proves the second claim.\(\square \)
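To illustrate part (1) concretely, take \(d_1=1\), \(\phi (x)=1+x^{2}=r_{1}(x)^{2}\), \(\theta _{1}=2\), and \(\theta _{2}=\beta =1\); then \(N_{1}=2\), and the bound even holds with constant 1, since \(|x^{2}-y^{2}|=|x+y||x-y|\le 2(r_{1}(x)\vee r_{1}(y))|x-y|\). A numerical spot-check of this instance (the sample points are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r1 = lambda x: np.sqrt(1.0 + x ** 2)

# phi(x) = r1(x)^2 = 1 + x^2: here theta_1 = 2, theta_2 = beta = 1, so
# |r1^{-2} phi|_0 = 1 and [r1^{-1} phi]_1 = [r1]_1 = 1, i.e. N_1 = 2.
phi = lambda x: 1.0 + x ** 2

x = rng.uniform(-50.0, 50.0, size=10_000)
y = rng.uniform(-50.0, 50.0, size=10_000)
lhs = np.abs(phi(x) - phi(y))
rhs = 2.0 * np.maximum(r1(x), r1(y)) * np.abs(x - y)  # c_1 * N_1 with c_1 = 1
assert np.all(lhs <= rhs + 1e-9)
print("weighted Hölder bound verified on", x.size, "random pairs")
```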
Lemma 4.4
Let \(\beta ,\mu \in (0,1]\) and \( \theta _1,\theta _2,\theta _3,\theta _4\in \mathbf {R}\) with \(\theta _{1}-\theta _{2}\le \beta \), \(\theta _{3}-\theta _{4}\le \mu \), and \(\theta _{3}\ge 0.\) If \(\phi :\mathbf {R}^{d_1}\rightarrow \mathbf {R}\) and \(H:\mathbf {R} ^{d_1}\rightarrow \mathbf {R}^{d_1}\) are such that
then
and there is a constant \(N=N(\beta ,\mu ,\theta _{1},\theta _{2})\) such that
where \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
Proof
For all x, we have
and hence
Using Lemma 4.3, for all x and y, we get
for some constant \(N=N(\beta ,\mu ,\theta _{1},\theta _{2})\). Noting that
we apply Lemma 4.3 to complete the proof.\(\square \)
Remark 4.5
Let \(\beta \in (0,1]\) and \(\theta _1,\theta _2\in \mathbf {R}.\) Then there is a constant \(N=N(\beta ,\theta _1,\theta _2)\) such that for all \(\phi : \mathbf {R}^{d_1}\rightarrow \mathbf {R}\) with \(|r_{1}^{-\theta _{1}}\phi |_{0}+[r_{1}^{-\theta _{2}}\phi ]_{\beta }=:N_{1}<\infty , \) we have \( |r_{1}^{-\theta }\phi |_{\beta }\le NN_{1}\), where \(\theta =\max \left\{ \theta _{1},\theta _{2}\right\} .\) Thus, imposing the assumptions of Lemma 4.4 and assuming further that \(\theta _{1}=\theta _{2}\) and \(\theta _{4}\ge 0\), we have
Proof
If \(\theta _{2}\ge \theta _{1}\), then the claim is obvious. If \(\theta _{1}>\theta _{2}\), then for all x and y we find
where \(c_{1}:=\sup _{t\in (0,1)}\frac{1-t^{\theta _{1}-\theta _{2}}}{\left( 1-t\right) ^{\beta }}.\) \(\square \)
Lemma 4.6
Let \(\theta \ge 0\) and \(\beta >1\). Then there are constants \(N_{1}=N_{1}(d_1,\theta ,\beta )\) and \(N_{2}=N_{2}(d_1,\theta ,\beta )\) such that for all \(\phi :\mathbf {R}^{d_1}\rightarrow \mathbf {R}\) with \( r_{1}^{-\theta }\phi \in \mathcal {C}^{\beta }(\mathbf {R}^{d_1},\mathbf {R}) \),
where \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
Proof
For any multi-index \(\gamma \) with \(|\gamma |\le [\beta ]^{-}\) and x , we have
It is easy to show by induction that for all multi-indices \(\gamma \), \( |r_{1}^{\theta }\partial ^{\gamma }(r_{1}^{-\theta })|_{1}<\infty .\) Moreover, for all multi-indices \(\gamma \) with \(|\gamma |<[\beta ]^{-}\),
Thus, for any multi-index \(\gamma \) with \(|\gamma |\le [\beta ]^{-}\) ,
and for any multi-index \(\gamma \) with \(|\gamma |=[\beta ]^{-}\),
This proves the leftmost inequality in (4.1). For all \(i\in \{1,\ldots ,d_1\}\) and x,
It follows by induction that for all multi-indices \(\gamma \) with \(|\gamma |\le [\beta ]^{-}\) and x, \(r_{1}^{-\theta }\partial ^{\gamma }\phi (x)\) is a sum of \(\partial ^{\gamma }(r_{1}^{-\theta }\phi )(x),\) a finite sum of terms, each of which is a product of one term of the form \( \partial ^{\tilde{\gamma }}(r_{1}^{-\theta }\phi )(x),\) \(|\tilde{\gamma } |<|\gamma |\), and a finite number of terms of the form \(\partial ^{\gamma _{1}}(r_{1}^{\theta })\partial ^{\gamma _{2}}(r_{1}^{-\theta }),\) \(|\gamma _{1}|,|\gamma _{2}|\le |\gamma |\). Since for all multi-indices \(\gamma _{1}\) and \(\gamma _{2}\), we have \(|\partial ^{\gamma _{1}}(r_{1}^{\theta })\partial ^{\gamma _{2}}(r_{1}^{-\theta })|_{1}<\infty ,\) the rightmost inequality in (4.1) follows.\(\square \)
Corollary 4.7
For any \(\theta \ge 0\) and \(\beta >1\), there are constants \( N_{1}=N_{1}(d_1,\theta ,\beta )\) and \(N_{2}=N_{2}(d_1,\theta ,\beta )\) such that for all \(\phi :\mathbf {R}^{d_1}\rightarrow \mathbf {R}\) with \(r_{1}^{-\theta }\phi \in \mathcal {C}^{\beta }(\mathbf {R}^{d_1},\mathbf {R})\),
where \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
Proof
It is well known that for an arbitrary unit ball \(B\subset \mathbf {R}^{d_1}\) and any \(1\le k<[\beta ]^{-}\), there is a constant N such that for any \( \varepsilon >0,\)
Let \(U_{0}=\{ x\in \mathbf {R}^{d_1}:|x|\le 1\}\) and \(U_{j}=\{ x\in \mathbf {R }^{d_1}:2^{j-1}\le | x| \le 2^{j}\} ,j\ge 1.\) For all j, we have
Since for every j,
we see that
and the statement follows.\(\square \)
Remark 4.8
If \(\phi :\mathbf {R}^{d_1}\rightarrow \mathbf {R}\) is such that \(| r_{1}^{-\theta _{1}}\phi | _{0}+| r_{1}^{-\theta _{2}}\nabla \phi | _{0}<\infty \) for \( \theta _1,\theta _2\in \mathbf {R}\) with \(\theta _{1}-\theta _{2}\le 1\), then
where \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
Proof
Indeed, for all x and y, we have
and hence the claim follows from Lemma 4.3.\(\square \)
Lemma 4.9
Let \(n\in \mathbf {N}\), \(\beta ,\mu \in (0,1]\), \(\theta _1,\theta _{3},\theta _{4}\ge 0\) be such that \(\theta _{3}-\theta _{4}\le 1\). There is a constant \(N=N(d_1,\theta _{1},\theta _{3},\theta _{4},n,\beta ) \) such that for all \(\phi :\mathbf {R} ^{d_1}\rightarrow \mathbf {R}\) with \(r_{1}^{-\theta _{1}}\phi \in \mathcal {C} ^{n+\beta }(\mathbf {R}^{d_1},\mathbf {R})\) and \(H:\mathbf {R}^{d_1}\rightarrow \mathbf {R}^{d_1}\) with
we have
and
where \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
Proof
It follows immediately from Lemma 4.4 and Remark 4.8 that
Using induction, we get that for all x and \(|\gamma |=n\),
where
\(\mathcal {I}_{2}^{\gamma }(x)\) is a finite sum of terms of the form
with \(i_{1},\ldots ,i_{|\gamma |}\in \{1,2,\ldots ,d_1\}\), \(|\tilde{\gamma } _{1}|=\cdots =|\tilde{\gamma }_{|\gamma |}|=1\), and \(\sum _{k=1}^{|\gamma |} \tilde{\gamma }_{k}=\gamma \), if \(n\ge 2\) and zero otherwise, and where \( \mathcal {I}_{3}^{\gamma }(x)\) is a finite sum of terms of the form
with \(2\le k<n\), \(i_{1},i_{2},\ldots ,i_{k}\in \{1,\ldots ,d_1\}\), and \( \sum _{j=1}^{k}\tilde{\gamma }_{j}=\gamma \), \(1\le |\tilde{\gamma } _{j}|<|\gamma |,\) if \(n\ge 3\), and zero otherwise. Thus, owing to Lemmas 4.4 and 4.6, for any multi-index \(\gamma \) with \(|\gamma |=n\), we have
and
and hence
Since
by Lemma 4.4, we have
Thus, since \([r_1^{-\theta _4}\partial ^{\gamma }H^{i}]_{\mu }\le N_2\), we obtain
Appealing to Lemma 4.6, for all multi-indices \(\gamma \) with \(|\gamma |=n\), we get
Similarly, applying Lemmas 4.4 and 4.6, we have
Another application of Lemma 4.6 completes the proof.\(\square \)
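The decomposition of \(\partial ^{\gamma }(\phi (x+H(x)))\) used in the proof is the chain rule organized by the order of the derivatives hitting \(\phi \). A one-dimensional, second-order instance, with the illustrative choices \(\phi =\sin \) and \(H(x)=e^{x}/10\), can be checked against a finite-difference derivative:

```python
import math

# Illustrative choices: phi = sin, H(x) = exp(x)/10, composed as phi(x + H(x)).
H = lambda x: math.exp(x) / 10.0
comp = lambda x: math.sin(x + H(x))

def second_diff(f, x, h=1e-4):
    # Central second-order finite difference.
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

for x in [-1.0, 0.0, 0.5, 1.5]:
    u = x + H(x)
    Hp = math.exp(x) / 10.0    # H'
    Hpp = math.exp(x) / 10.0   # H''
    # Hand-expanded chain rule: phi''(u)(1 + H')^2 + phi'(u) H'' -- the
    # analogues of the top-order and lower-order terms in the proof.
    expected = -math.sin(u) * (1.0 + Hp) ** 2 + math.cos(u) * Hpp
    assert abs(second_diff(comp, x) - expected) < 1e-5
print("second-derivative decomposition verified")
```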
We will now provide some useful estimates of composite functions of diffeomorphisms.
Lemma 4.10
Let \(H:\mathbf {R}^{d_1}\rightarrow \mathbf {R} ^{d_1} \) be continuously differentiable and assume that for all \(x\in \mathbf {R}^{d_1} \),
Assume that for all \(x\in \mathbf {R}^{d_1}\), \(\kappa (x)=(I_{d_1}+\nabla H(x))^{-1}\) exists and \(|\kappa (x)|\le N_{\kappa }\). In what follows, \(r_{1 }(x):=\sqrt{ 1+| x|^{2}},\;x\in \mathbf {R}^{d_1}\).
-
(1)
Then the mapping \(\tilde{H}(x):=x+H(x)\) is a diffeomorphism with \(\tilde{H}^{-1}(x)=x-H( \tilde{H}^{-1}(x))=:x+F(x)\) and for all \(x\in \mathbf {R}^{d_1}\),
$$\begin{aligned}&| F(x)| \le L_0+L_1L_0N_{\kappa }+L_1N_{\kappa }| x|, \\&|\nabla F(x)| \le N_{\kappa }L_{2}, \quad \quad |\left( I_{d_1}+\nabla F(x)\right) ^{-1}| \le 1+L_{2}. \end{aligned}$$For all \(p\in \mathbf {R}\), there is a constant \(N=N(L_0,L_1,N_{\kappa },p)\) such that for all \(x\in \mathbf {R}^{d_1}\),
$$\begin{aligned} \frac{r_1^p(\tilde{H}(x))}{r_1^p(x)}+\frac{r_1^p(\tilde{H}^{-1}(x))}{r_1^p(x)}\le N, \quad r_1^{-1}(x)|H^{i}(x)+F^{i}(x)|\le N[H]_1|r_1^{-1}H|_0. \end{aligned}$$Moreover, there is a constant \(N=N(L_0,L_1,N_{\kappa },p)\) such that
$$\begin{aligned}&\left| \frac{r_1^p(\tilde{H})}{r_1^p}-1+\mathbf {1}_{(1,2]}(\alpha )pH^{i} r_1^{-2}x^i\right| _\alpha +\left| \frac{r_1^p(\tilde{H}^{-1})}{r_1^p}-1-\mathbf {1}_{(1,2]}(\alpha )pF^{i}r_1^{-2}x^i\right| _{\alpha }\\&\le N(|r_1^{-1}H|_0^{[\alpha ]^-+1}+[H]^{[\alpha ]^-+1}_1). \end{aligned}$$ -
(2)
If for some \(\beta >1\), \(|\nabla H|_{\beta -1}\le L_3\), then there is a constant \(N=N(d_1,\beta ,N_{\kappa },L_{3})\) such that
$$\begin{aligned} |\nabla F| _{\beta -1}\le N|\nabla H| _{\beta -1}. \end{aligned}$$(4.2) -
(3)
If for some \(\beta \ge 1\), \(|\nabla H|_{\beta -1}\le L_3\), then for all \( \theta \ge 0\), there is a constant \(N=N(d_1,\) \(\beta ,N_{\kappa },\) \(L_1,L_3,\theta )\) such that
$$\begin{aligned} \left| \frac{r^{\theta }_1\circ \tilde{H}^{-1}}{r_1^{\theta }} -1\right| _{\beta }\le N( |r_1^{-1}H|_0+|\nabla H|_{\beta -1}). \end{aligned}$$ -
(4)
If \(|H|_{0 }\le L_{4}\), and for some \(\beta >0\), \(|\nabla H|_{\beta \vee 1-1 }\le L_{5}\) and \(\phi :\mathbf {R} ^{d_1}\rightarrow \mathbf {R}\) is such that for some \(\mu \in (0,1]\) and \( \theta \ge 0\), \(r_1^{-\theta }\phi \in C^{\beta +\mu }( \mathbf {R}^{d_1};\mathbf {R})\), then there is a constant \(N=N(d_1,\beta ,\mu ,N_{ \kappa },L_{4},L_5,\theta )\) such that
$$\begin{aligned}&|r_1^{-\theta }(\phi \circ \tilde{H}^{-1}-\phi )| _{\beta } \le N|r_1^{-\theta }\phi |_{\beta }(|H|_0+|\nabla H|_{\beta \vee 1-1})\\&\quad + NL_4^{\mu }\mathbf {1}_{(0,1]}(\{\beta \}^++\mu )\sum _{|\gamma |=[\beta ]^-} [\partial ^{\gamma }(r_1^{-\theta }\phi )]_{\{\beta \}^++\mu } \\&\quad +N\mathbf {1}_{(1,2]}(\{\beta \}^++\mu )\sum _{|\gamma |=[\beta ]^-}\left( L_4^{\mu }[\nabla \partial ^{\gamma }(r_1^{-\theta }\phi )]_{\{\beta \}^++\mu -1}+|\nabla \partial ^{\gamma }(r_1^{-\theta }\phi )|_{0}|\nabla H|_0 \right) . \end{aligned}$$
Proof
(1) Since \((I_{d_1}+\nabla H(x))^{-1}\) exists for all x, it follows from Theorem 0.2 in [3] that the mapping \(\tilde{H}\) is a global diffeomorphism. For all x, we easily verify \(\tilde{H}^{-1}(x)=x-H(\tilde{H }^{-1} (x))\) by substituting \(\tilde{H}(x)\) into the expression. Simple computations show that for all x, we have
For all x and y, we easily obtain
and hence
Making use of (4.3), for all x, we get
and thus
The rest of the estimates then follow easily from the above estimates and Taylor’s theorem.
(2) Using the chain rule, for all x, we obtain
and hence \(|\nabla F|_{0}\le N_{\kappa }|\nabla H|_{0}.\) For all x and y , we have
and thus since \([\tilde{H}^{-1}]_{1}\le (1+ N_{\kappa }L_{3})\) by part (1), we have for all \(\delta \in (0,1\wedge (\beta -1)],\)
It follows that there is a constant \(N=N(N_{\kappa },L_{3})\) such that for all \(\delta \in (0,1\wedge (\beta -1)]\),
It is well-known that the inverse map \(\mathfrak {I}\) on the set of invertible \(d_1\times d_1\) matrices is infinitely differentiable and for each n, there exists a constant \(N=N(n,d_1)\) such that for all invertible matrices A , the n-th derivative of \(\mathfrak {I}\) evaluated at A, denoted \(\mathfrak {I}^{(n)}(A)\), satisfies
Using induction we find that for all multi-indices \(\gamma \) with \(|\gamma |\le [\beta ]^{-}\) and for all x, \(\partial ^{\gamma }F(x)\) is a finite sum of terms, each of which is a finite product of
Therefore, differentiating (4.4) and estimating directly we easily obtain (4.2).
(3) For each x, we have
where \(G_s(x): =x+sF(x)\), \(s\in [0,1]\), and \(J(x):=r_{1}(x)^{-1}x.\) According to parts (1) and (2), we have \(| r_{1}^{-1}F| _{0}\le N| r_{1}^{-1}H| _{0}\) and \(|\nabla F|_{\beta -1}\le N|\nabla H|_{\beta -1}\), and hence
and
for some constant N independent of s. Moreover, using Lemma 4.9 we find
The statement then follows.
(4) First, we will consider the case \(\theta =0\). By part (1), we have that for all \(\bar{\mu }\in (0,(\beta +\mu )\wedge 1] \),
Let us consider the case \(\beta \le 1\). For each x, let \(\mathcal {J} (x)=\phi (\tilde{H}^{-1}(x))-\phi (x)\). For all x and y, it is clear that
where
and
Moreover, owing to part (1), if \(\beta +\mu \le 1\), then for all x and y, we have
and
for some constant \(N=N(\mu ,N_{\kappa },L_4)\). Using the identity
and part (1), if \(\beta +\mu >1\), then there is a constant \( N=N(\mu ,N_{\kappa },L_4)\) such that for all x and y,
Moreover, since
by part (1) and (4.2), if \(\beta +\mu >1\), we find that there is a constant \(N=N(\mu ,N_{\kappa },L_4)\) such that for all x and y,
Combining the above estimates, we get that for all \(\beta \le 1\) and \( \mu \in (0,1]\), there is a constant \(N=N(\mu ,N_{\kappa },L_4)\) such that
This proves the desired estimate for \(\beta \le 1\) and \(\theta =0\). We now consider the case \(\beta >1\). For \(\beta >1\), it is straightforward to prove by induction that for all multi-indices \(\gamma \) with \(1\le |\gamma |\le [\beta ]^-\) and for all x,
where
\(\mathcal {J}^{\gamma }_3(x)\) is a finite sum of terms of the form
with \(1\le k<[\beta ]^-\), \(j_1,\ldots ,j_k\in \{1,\ldots ,d_1\}\), and \( \sum _{j=1}^{k}\tilde{\gamma }_{j}=\gamma \), and \(\mathcal {J}_4(x)\) is a finite sum of terms of the form
with \(i_1,j_1,\ldots ,i_{[\beta ]^-},j_{[\beta ]^-}\in \{1,\ldots ,d_1\}\) and at least one pair \(i_{k}\ne j_{k}\). Since for all x,
and (4.2) holds, there is a constant \(N=N(d_1,\beta )\) such that
If \(\beta >2\), then for all multi-indices \(\gamma \) with \(1\le |\gamma |<[\beta ]^-\), we get
It is easy to see that there is a constant \(N=N(L_4,N_{\kappa })\) such that for all \(\gamma \) with \(|\gamma |=[\beta ]^-\) and all \(\bar{\mu }\in (0,(\{\beta \}^++\mu )\wedge 1] \),
Moreover, appealing to the estimate (4.5) we obtain
Let us now consider the case \(\theta >0\). The following decomposition obviously holds for all x:
where \(\hat{\phi }=r_1^{-\theta }\phi \in C^{\beta }( \mathbf {R}^{d_1};\mathbf {R}).\) Thus, to complete the proof we require
The latter inequality was proved in part (3) and the first inequality follows from part (2) and Lemma 4.9.\(\square \)
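Part (1) can be exercised numerically: when \([H]_{1}<1\), the inverse \(\tilde{H}^{-1}(y)\) is the fixed point of \(z\mapsto y-H(z)\), and the bound \(|\nabla F|_{0}\le N_{\kappa }|\nabla H|_{0}\) from part (2) can be checked by finite differences. The map \(H(x)=0.3\sin x\) below is an illustrative choice, for which \(N_{\kappa }=1/0.7\):

```python
import numpy as np

# Illustrative map: H(x) = 0.3 sin x, so [H]_1 = 0.3 < 1 and
# |kappa(x)| = |(1 + H'(x))^{-1}| <= 1/0.7 =: N_kappa.
H = lambda x: 0.3 * np.sin(x)

def Htilde_inv(y, n_iter=200):
    # tilde H^{-1}(y) is the fixed point of the contraction z -> y - H(z).
    z = np.array(y, dtype=float)
    for _ in range(n_iter):
        z = y - H(z)
    return z

y = np.linspace(-10.0, 10.0, 2001)
z = Htilde_inv(y)
assert np.max(np.abs(z + H(z) - y)) < 1e-12       # tilde H(tilde H^{-1}(y)) = y

F = z - y                                         # F = tilde H^{-1} - identity
dF = np.gradient(F, y)                            # finite-difference gradient of F
assert np.max(np.abs(dF)) <= 0.3 / 0.7 + 1e-3     # |grad F|_0 <= N_kappa |grad H|_0
print("max |F'| =", round(float(np.max(np.abs(dF))), 4))
```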
Remark 4.11
Let \(H:\mathbf {R}^{d_1}\rightarrow \mathbf {R} ^{d_1}\) be continuously differentiable and assume that for all x,
Then for all \(x\in \mathbf {R}^{d_1}\),
1.4 Stochastic Fubini theorem
Let \(m=(m_t^{\varrho })_{t\le T}\), \(\varrho \ge 1\), be a sequence of \(\mathbf {F}\)-adapted locally square integrable continuous martingales issuing from zero such that \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\), \(\langle m^{\varrho _1},m^{\varrho _2}\rangle _t=0\) for \(\varrho _1\ne \varrho _2\) and \( \langle m^{\varrho }\rangle _t=N_t\) for \(\varrho \ge 1\), where \(N_t\) is a \( {\mathcal {P}}_T\)-measurable continuous increasing process issuing from zero. Let \(\eta (dt,dz)\) be an \(\mathbf {F}\)-adapted integer-valued random measure on \(([0,T]\times U,{\mathcal {B}}([0,T])\otimes \mathcal {U})\), where \((U,\mathcal {U})\) is a Blackwell space. We assume that \(\eta (dt,dz)\) is optional, \( {\mathcal {P}}_T\otimes \mathcal {U}\)-sigma-finite, and quasi-left continuous. Thus, there exists a unique (up to a \(\mathbf {P}\)-null set) dual predictable projection (or compensator) \(\eta ^p(dt,dz)\) of \(\eta (dt,dz)\) such that \( \eta ^p(\omega ,\{t\}\times U)=0\) for all \(\omega \) and t. We refer the reader to Chapter II, Sect. 1, in [12] for any unexplained concepts relating to random measures.
Let \((X,\Sigma ,\mu )\) be a sigma-finite measure space; that is, there is an increasing sequence of \(\Sigma \)-measurable sets \(X_n\), \(n\in \mathbf {N}\), such that \(X=\cup _{n=1}^{\infty } X_n\) and \(\mu (X_n)<\infty \) for each n. Let \(f:\Omega \times [0,T] \times X\rightarrow \mathbf {R}^{d_2}\) be \(\mathcal { R}_{T}\otimes \Sigma \)-measurable, \(g:\Omega \times [0,T] \times X\rightarrow \ell _{2}(\mathbf {R}^{d_2})\) be \(\mathcal {R}_{T}\otimes \Sigma /{\mathcal {B}} (\ell _{2}(\mathbf {R}^{d_2}))\)-measurable, and \(h:\Omega \times [0,T] \times X\times U\rightarrow \mathbf {R}^{d_2}\) be \({\mathcal {P}}_{T}\otimes \Sigma \otimes \mathcal {U}\)-measurable. Moreover, assume that for all \(t\in [0,T]\) and \(x\in X\), \(\mathbf {P}\)-a.s.
Let \(F=F_t(x):\Omega \times [0,T]\times X\rightarrow \mathbf {R}^{d_2}\) be \( \mathcal {O}_T\otimes {\mathcal {B}}(X)\)-measurable and assume that for \(d \mathbf {P} \mu \)-almost all \((t,x)\in [0,T]\times X\),
where \(\tilde{\eta }(dt,dz)=\eta (dt,dz)-\eta ^p(dt,dz)\).
The following version of the stochastic Fubini theorem is a straightforward extension of [17, Lemma 2.6] and [26, Corollary 1]. See also [43, Proposition 3.1], [40, Theorem 2.2], and [37, Theorem 1.4.8]. Indeed, to prove it for a bounded measure we can use a monotone class argument as in [35, Theorem 64]. To handle the general setting with possibly infinite \(\mu \), we use assumptions (2) and (3) below and take limits on the sets \(X_n\) using the Lenglart domination lemma [23, Theorem 1.4.5] and the following well-known inequalities:
where \(\tau \le T\) is an arbitrary stopping time and \(N=N(T)\) is a constant independent of g and h.
Proposition 4.12
(cf. Corollary 1 in [26] and Lemma 2.6 in [17]) Assume that
-
(1)
\(\mathbf {P}\)-a.s. for all \(n\ge 1\),
$$\begin{aligned}&\int _{X_n}\left( \int _{]0,T]}|g_t(x)|^2dN_t\right) ^{1/2}\mu (dx)\\&\quad +\int _{X_n}\left( \int _{]0,T]}\int _{U}|h_t(x,z)|^2\eta ^p (dt,dz)\right) ^{1/2}\mu (dx)<\infty ; \end{aligned}$$ -
(2)
\(\mathbf {P}\)-a.s.
$$\begin{aligned} \int _{]0,T]} \left( \int _X |g_t(x)|\mu (dx)\right) ^2dt+\int _{]0,T]} \int _U \left( \int _X |h_t(x,z)|\mu (dx)\right) ^2\eta ^p(dt,dz)<\infty ; \end{aligned}$$ -
(3)
\(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
$$\begin{aligned} \int _X |F_t(x)|\mu (dx)<\infty . \end{aligned}$$
Then \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
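In discrete time, the interchange asserted by the proposition reduces to swapping two finite sums. A minimal numerical sketch; the integrand \(g_t(x)=e^{-x}\sin (2\pi t)\), the grid sizes, and the identification of \(X\) with \([0,1]\) carrying Lebesgue measure \(\mu \) are all hypothetical choices made only for illustration:

```python
import numpy as np

# Discrete sketch of the stochastic Fubini interchange: integrating g against
# the Brownian increments first and then over x agrees with integrating over x
# first. All choices (g, grids, X = [0,1] with Lebesgue mu) are hypothetical.
rng = np.random.default_rng(0)
T, n_t, n_x = 1.0, 1000, 50
dt, dx = T / n_t, 1.0 / n_x
t = np.linspace(0.0, T, n_t, endpoint=False)      # left endpoints in time
x = np.linspace(0.0, 1.0, n_x, endpoint=False)    # grid on X = [0, 1]
dW = rng.normal(0.0, np.sqrt(dt), n_t)            # Brownian increments

g = np.exp(-x)[None, :] * np.sin(2.0 * np.pi * t)[:, None]  # shape (n_t, n_x)

# stochastic integral in t first, then mu(dx):
lhs = ((g * dW[:, None]).sum(axis=0) * dx).sum()
# mu(dx) first, then the stochastic integral in t:
rhs = (g.sum(axis=1) * dx) @ dW
```

Both orders produce the same double sum, so `lhs` and `rhs` agree up to floating-point rounding.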
We obtain the following corollary by applying Minkowski’s integral inequality.
Corollary 4.13
Assume that \(\mathbf {P}\)-a.s.
Then \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
Remark 4.14
If \(\mu \) is a finite measure and \(\mathbf {P}\)-a.s.
then (4.6) holds by Hölder’s inequality.
1.5 Itô-Wentzell formula
Definition 4.15
We say that an \(\mathbf {R}^{d_1}\)-valued \(\mathbf {F}\)-adapted quasi-left continuous semimartingale \(L_{t}=(L_t^k)_{1\le k\le d_1}\), \(t\ge 0\), is of \( \alpha \)-order for \(\alpha \in (0,2]\), if \(\mathbf {P}\)-a.s. for all \(t\ge 0\),
and
where \(p^{L}(dt,dz)\) is the jump measure of L with dual predictable projection \(\pi ^L (dt,dz)\), \(q^{L}(dt,dz)=p^{L}(dt,dz)-\pi ^L (dt,dz)\) is a martingale measure, \(A_{t}=(A_t^i)_{1\le i\le d_1}\) is a continuous process of finite variation with \(A_0=0\), and \(L_{t}^{c}=(L_t^{c;i})_{1\le i\le d_1}\) is a continuous local martingale issuing from zero.
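For orientation, the two displays defining an \(\alpha \)-order semimartingale were lost in extraction; judging from the terms named in the surrounding text, they presumably take the following form (a hedged reconstruction, with the small-jump/large-jump split at \(|z|=1\) an assumption):

```latex
% Hedged sketch (original displays lost in extraction): canonical
% decomposition with compensated small jumps, and the alpha-order
% moment condition on the jumps
\begin{aligned}
L_t &= L_0 + A_t + L^{c}_t
   + \int_{]0,t]}\int_{|z|\le 1} z\, q^{L}(ds,dz)
   + \int_{]0,t]}\int_{|z|>1} z\, p^{L}(ds,dz),\\
&\quad \int_{]0,t]}\int_{\mathbf{R}^{d_1}}\big(|z|^{\alpha}\wedge 1\big)\,
   \pi^{L}(ds,dz)<\infty .
\end{aligned}
```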
Set \((w^{\varrho })_{\varrho \ge 1}=(w^{1;\varrho })_{\varrho \ge 1}\), \((Z, {\mathcal {Z}},\pi )=(Z^{1},{\mathcal {Z}}^{1},\pi ^{1}),\) \(p(dt,dz )=p^{1}(dt,dz )\), and \(q(dt,dz )=q^{1}(dt,dz )\). Also, set \(D=D^{1}\), \(E=E^{1},\) and assume \(Z=D\cup E\).
Let \(f:\Omega \times [0,T] \times \mathbf {R}^{d_1}\rightarrow \mathbf {R}^{d_2} \) be \(\mathcal {R}_{T}\otimes {\mathcal {B}}(\mathbf {R}^{d_1})\)-measurable, \( g:\Omega \times [0,T] \times \mathbf {R}^{d_1}\rightarrow \ell _{2}(\mathbf {R} ^{d_2})\) be \(\mathcal {R}_{T}\otimes {\mathcal {B}}(\mathbf {R}^{d_1})/{\mathcal {B}} (\ell _{2}(\mathbf {R}^{d_2}))\)-measurable, and \(h:\Omega \times [0,T] \times \mathbf {R}^{d_1}\times Z\rightarrow \mathbf {R}^{d_2}\) be \({\mathcal {P}} _{T}\otimes {\mathcal {B}}(\mathbf {R}^{d_1})\otimes {\mathcal {Z}}\)-measurable. Moreover, assume that, \(\mathbf {P}\)-a.s. for all \(x\in \mathbf {R}^{d_1}\),
Let \(F=F_t(x):\Omega \times [0,T]\times \mathbf {R}^{d_1}\rightarrow \mathbf {R} ^{d_2}\) be \(\mathcal {O}_T\otimes {\mathcal {B}}(\mathbf {R}^{d_1})\)-measurable and assume that for all x, \(\mathbf {P}\)-a.s. for all t,
For each \(n\in \{1,2\}\), let \(\bar{C} ^{n}_{loc}(\mathbf {R}^{d_1};\mathbf {R}^{d_2})\) be the space of n-times continuously differentiable functions \(f: \mathbf {R}^{d_1}\rightarrow \mathbf {R}^{d_2}\). We now state our version of the Itô-Wentzell formula. For each \(\omega ,t\), and x, we denote \(\Delta F_t(x)=F_t(x)-F_{t-}(x)\).
Proposition 4.16
([26, cf. Proposition 1]) Let \((L_t)_{t\ge 0}\) be an \(\mathbf {R}^{d_1}\) -valued quasi-left continuous semimartingale of order \(\alpha \in (0,2]\). Assume that:
-
(1)
-
(a)
\(\mathbf {P}\)-a.s. \(F\in D([0,T];\mathcal {C}_{loc}^{\alpha }(\mathbf {R}^{d_1};\mathbf {R}^{d_2}))\) if \(\alpha \) is fractional and \(F\in D([0,T];\bar{C}_{loc}^{\alpha }(\mathbf {R}^{d_1};\mathbf {R}^{d_2}))\) if \(\alpha \in \{1,2\}\);
-
(b)
for \(d\mathbf {P}dt\)-almost-all \((\omega ,t)\in \Omega \times [0,T]\), \(f_t(x)\) and \(g_t(x)=(g^{i\varrho }_t(x))_{\varrho \ge 1}\in \ell _2(\mathbf {R}^{d_2})\) are continuous in x and
$$\begin{aligned}&d\mathbf {P}dt-\lim _{y\rightarrow x}\left[ \int _{D}|h_t(y,z)-h_t(x,z)| ^{2}\pi (dz)\right. \\&\left. \quad +\int _{E}|h_t(y,z)-h_t(x,z)|\pi (dz)\right] = 0; \end{aligned}$$ -
(c)
for all \(\varrho \ge 1\) and \(i\in \{1,\ldots ,d_1\}\) and for \(d\mathbf {P}d |\langle L^{c;i},w^{\varrho }\rangle |_t\)-almost-all \((\omega ,t)\in \Omega \times [0,T]\), \(g^{i\varrho }_t\in C^1_{loc}(\mathbf {R}^{d_1};\mathbf {R})\) if \(\alpha = 2\);
-
(d)
-
(2)
for all compact subsets K of \(\mathbf {R}^{d_1}\), \(\mathbf {P}\)-a.s.
$$\begin{aligned}&\int _{]0,T]}\sup _{x\in K} \left( |f_t(x)|+|g_t(x)|^2+\int _{D}|h_t(x,z)|^2\pi (dz) + \int _{E}|h_t(x,z)|\pi (dz)\right) dt<\infty ,\\&\quad \sum _{\varrho \ge 1}\int _{]0,T]}\sup _{x\in K}| \nabla g^{i\varrho }_t(x)|d|\langle L^{c;i},w^{\varrho }\rangle |_t <\infty ,\\&\quad \sum _{t\le T}|\Delta F_t|_{\alpha \wedge 1;K}|\Delta L_t|^{\alpha \wedge 1}<\infty . \end{aligned}$$
Then \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
Proof
Since both sides have identical jumps and we can always interlace a finite set of jumps, we may assume that \(|\Delta L_{t}|\le 1\) for all \(t\in [0,T]\); that is, it is enough to prove the statement for \(\tilde{L} _{t}=L_{t}-\sum _{s\le t}\mathbf {1}_{[1,\infty )}(|\Delta L_s|)\Delta L_{s}\), \(t\in [0,T]\). It suffices to assume that for some K and all \(\omega \), \( |L_0|\le K\). For each \(R>K\), let
and note that \(\mathbf {P}\)-a.s. \(\tau _R\uparrow T\) as R tends to infinity. If instead of L, f, g, h, and F, we take \(L_{\cdot \wedge \tau _R}\), \(f\mathbf {1}_{(0,\tau _R]}\), \(g^{\varrho }\mathbf {1}_{(0,\tau _R]}\), \( h\mathbf {1}_{(0,\tau _R]}\), \(F\mathbf {1}_{(0,\tau _R]}\), then the assumptions of the proposition hold for this new set of processes. Moreover, if we can prove (4.7) for this new set of processes, then by taking the limit as R tends to infinity, we obtain (4.7). Therefore, we may assume that for some \(R>0\) , \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
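The first reduction in this argument, interlacing away the finitely many jumps of size \(\ge 1\), can be sketched on a toy pure-jump path; the jump times and sizes below are hypothetical:

```python
import numpy as np

# Toy sketch of the reduction  tilde L_t = L_t - sum_{s<=t} 1_{[1,oo)}(|dL_s|) dL_s:
# removing the finitely many jumps of size >= 1 from a pure-jump path.
# The jump times and sizes are hypothetical.
jump_times = np.array([0.2, 0.5, 0.7, 0.9])
jump_sizes = np.array([0.4, -1.5, 0.3, 2.0])

def path(t, times, sizes):
    """Pure-jump path: L_t = sum of the jumps occurring at or before t, L_0 = 0."""
    return float(sizes[times <= t].sum())

big = np.abs(jump_sizes) >= 1.0
small_times, small_sizes = jump_times[~big], jump_sizes[~big]

L_T = path(1.0, jump_times, jump_sizes)         # original path at T = 1
tildeL_T = path(1.0, small_times, small_sizes)  # interlaced path: small jumps only
```

The interlaced path differs from the original by finitely many jumps, and all its jumps have size strictly below 1.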
Let \(\phi \in C_{c}^{\infty }( \mathbf {R}^{d_1},\mathbf {R})\) have support in the unit ball in \(\mathbf {R}^{d_1}\) and satisfy \(\int _{\mathbf {R} ^{d_1}}\phi (x)dx=1,\phi (x)=\phi (-x),\) and \(\phi (x)\ge 0\), for all \(x\in \mathbf {R}^{d_1}\). For each \(\varepsilon \in (0,1)\), let \(\phi _{\varepsilon }(x)=\varepsilon ^{-d_1}\phi \left( x/\varepsilon \right) ,x\in \mathbf {R}^{d_1} \). By Itô’s formula, for all \(x\in \mathbf {R}^{d_1}\), \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
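A numerical sketch of such a mollifier in dimension \(d_1=1\), using the standard bump \(\phi (x)=c\,e^{-1/(1-|x|^{2})}\mathbf {1}_{\{|x|<1\}}\); this particular choice of \(\phi \) is an assumption, since any function with the stated properties works:

```python
import numpy as np

# Sketch of a mollifier with the stated properties (d_1 = 1): the standard
# bump c * exp(-1/(1-x^2)) on (-1, 1), normalized numerically to unit mass.
def bump(x):
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

grid = np.linspace(-1.0, 1.0, 200001)
c = 1.0 / (bump(grid).sum() * (grid[1] - grid[0]))   # normalizing constant

def phi_eps(x, eps):
    """phi_eps(x) = eps^{-1} * phi(x / eps), supported in [-eps, eps]."""
    return c * bump(x / eps) / eps

eps = 0.1
wide = np.linspace(-0.5, 0.5, 40001)
mass = phi_eps(wide, eps).sum() * (wide[1] - wide[0])  # ~1: unit mass preserved
```

Symmetry, nonnegativity, and the shrinking support \([-\varepsilon ,\varepsilon ]\) can all be read off directly from `phi_eps`.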
Appealing to assumption (2) and (4.8) (i.e., to bound the integrals against F), we integrate both sides of the above in x, apply Corollary 4.13 (see also Remark 4.14) and the deterministic Fubini theorem, and then integrate by parts to get that \(\mathbf {P}\)-a.s. for all \(t\in [0,T]\),
where for all \(\omega ,t,x,\) and z,
and \(*\) denotes the convolution operator. Let \( B_{R+1}=\{x\in \mathbf {R}^{d_1}: |x|\le R+1\}\). Owing to assumption (1)(a) and standard properties of mollifiers, for any multi-index \(\gamma \) with \( |\gamma |\le \alpha \), \(\mathbf {P}\)-a.s. for all t,
and for all x,
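The mollifier convergence invoked here, \(\phi _{\varepsilon }*F\rightarrow F\) uniformly on compacts as \(\varepsilon \rightarrow 0\), can be checked numerically; the test function \(F(x)=\cos x\) and the grids below are hypothetical choices:

```python
import numpy as np

# Numerical sketch of "phi_eps * F -> F uniformly on compacts": mollify
# F(x) = cos(x) and watch the sup-norm error on [-1, 1] shrink with eps.
def bump(u):
    out = np.zeros_like(u, dtype=float)
    m = np.abs(u) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - u[m] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 4001)
h = x[1] - x[0]
F = np.cos(x)

sup_errors = []
for eps in (0.4, 0.2, 0.1):
    u = np.arange(-eps, eps + h / 2, h)     # symmetric grid on [-eps, eps]
    kernel = bump(u / eps)
    kernel /= kernel.sum()                  # discrete unit mass
    F_eps = np.convolve(F, kernel, mode="same")
    interior = np.abs(x) <= 1.0             # compact set, away from the boundary
    sup_errors.append(np.max(np.abs(F_eps[interior] - F[interior])))
# sup_errors decreases monotonically as eps shrinks
```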
Similarly, by assumption 1(b), for \(d\mathbf {P} dt\)-almost-all \((\omega ,t)\in \Omega \times [0,T]\),
and for all x,
and
where in the last line we have also used Minkowski’s integral inequality and a standard mollifying convergence argument. Using assumption 1(d), for all \( \varrho \ge 1\) and \(i\in \{1,\ldots ,d_1\}\) and for \(d\mathbf {P} d |\langle L^{c;i},w^{\varrho }\rangle |_t\)-almost-all \((\omega ,t)\in \Omega \times [0,T]\),
and for all x,
Owing to assumption 1(a) and (4.8), \(\mathbf {P}\)-a.s.
Since \(\mathbf {P}\)-a.s. \(F\in D([0,T];\mathcal {C}^{\alpha }(\mathbf {R}^{d_1};\mathbf {R}^{d_2}))\), it follows that for all x, \(\mathbf {P}\)-a.s. for all t,
By assumption (2), \(\mathbf {P}\)-a.s. for all t, we have
Combining the above and using assumptions (1)(a) and (2), the bounds given in (4.8), and the deterministic and stochastic dominated convergence theorems, we obtain convergence of all the terms in (4.9), which completes the proof.\(\square \)
Leahy, JM., Mikulevičius, R. On classical solutions of linear stochastic integro-differential equations. Stoch PDE: Anal Comp 4, 535–591 (2016). https://doi.org/10.1007/s40072-016-0070-5