Abstract
In this paper we prove Burkholder–Davis–Gundy inequalities for a general martingale M with values in a UMD Banach space X. Assuming that \(M_0=0\), we show that the following two-sided inequality holds for all \(1\le p<\infty \):
$$ {\mathbb {E}}\sup _{0\le s\le t}\Vert M_s\Vert ^p \eqsim _{p, X} {\mathbb {E}}\,\gamma ([\![M]\!]_t)^p,\quad t\ge 0. \qquad (\star ) $$
Here \( \gamma ([\![M]\!]_t) \) is the \(L^2\)-norm of the unique Gaussian measure on X having \([\![M]\!]_t(x^*,y^*):= [\langle M,x^*\rangle , \langle M,y^*\rangle ]_t\) as its covariance bilinear form. This extends to general UMD spaces a recent result by Veraar and the author, where a pointwise version of (\(\star \)) was proved for UMD Banach function spaces X. We show that for continuous martingales, (\(\star \)) holds for all \(0<p<\infty \), and that for purely discontinuous martingales the right-hand side of (\(\star \)) can be expressed more explicitly in terms of the jumps of M. For martingales with independent increments, (\(\star \)) is shown to hold more generally in reflexive Banach spaces X with finite cotype. In the converse direction, we show that the validity of (\(\star \)) for arbitrary martingales implies the UMD property for X. As an application we prove various Itô isomorphisms for vector-valued stochastic integrals with respect to general martingales, which extend earlier results by van Neerven, Veraar, and Weis for vector-valued stochastic integrals with respect to a Brownian motion. We also provide Itô isomorphisms for vector-valued stochastic integrals with respect to compensated Poisson and general random measures.
1 Introduction
In the celebrated paper [12] Burkholder, Davis, and Gundy proved that if \(M = (M_t)_{t\ge 0}\) is a real-valued martingale satisfying \(M_0=0\), then for all \(1\le p<\infty \) and \(t\ge 0\) one has the two-sided inequality
$$ {\mathbb {E}}\sup _{0\le s\le t}|M_s|^p \eqsim _{p} {\mathbb {E}}\, [M]_t^{p/2}, \qquad (1.1) $$
where [M] is the quadratic variation of M, i.e.,
$$ [M]_t := {\mathbb {P}}\hbox {-}\lim _{\mathrm {mesh}(\pi )\rightarrow 0} \sum _{n=1}^{N} |M_{t_n}-M_{t_{n-1}}|^2, \qquad (1.2) $$
where the limit in probability is taken over partitions \(\pi = \{0=t_0< \cdots < t_N = t\}\) whose mesh approaches 0. Later, Burkholder [9, 10] and Kallenberg and Sztencel [39] extended (1.1) to Hilbert space-valued martingales (see also [52]). They showed that if M is a martingale with values in a Hilbert space H satisfying \(M_0=0\), then for all \(1\le p<\infty \) and \(t\ge 0\) one has
$$ {\mathbb {E}}\sup _{0\le s\le t}\Vert M_s\Vert ^p \eqsim _{p} {\mathbb {E}}\, [M]_t^{p/2}, \qquad (1.3) $$
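Although no numerics are needed for any of the proofs, the scalar inequality (1.1) with \(p=1\) is easy to illustrate by simulation. For a simple symmetric random walk the quadratic variation after N steps equals N exactly, so the right-hand side is \(\sqrt{N}\), and the ratio of the two sides stays bounded; the parameters below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 2000, 400
# +/-1 increments form a martingale difference sequence; M is a random walk
d = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.cumsum(d, axis=1)
lhs = np.abs(M).max(axis=1).mean()          # E sup_n |M_n|  with p = 1
rhs = np.sqrt((d ** 2).sum(axis=1)).mean()  # E [M]_N^{1/2}; here [M]_N = N exactly
```

The ratio `lhs / rhs` is a constant of order one, as (1.1) predicts.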
where the quadratic variation [M] is defined as in (1.2) with absolute values replaced by norms in H. A further result along these lines was obtained recently by Veraar and the author [80], who showed that if M is an \(L^p\)-bounded martingale, \(1<p<\infty \), with \(M_0=0\), that takes values in a UMD Banach function space X over a measure space \((S,\Sigma ,\mu )\) (see Sections 2 and 8 for the definition), then for all \(t\ge 0\):
where the quadratic variation \([M(\sigma )]_t\) is taken pointwise in \(\sigma \in S\). Although this inequality is particularly useful from a practical point of view, it gives no indication of how to treat a general Banach space, since not every (UMD) Banach space has a Banach function space structure (e.g. noncommutative \(L^q\)-spaces).
Notice that inequalities of type (1.3) for general Banach spaces could be of considerable interest in mathematical physics, for the following two reasons. First, vector-valued stochastic analysis is closely tied to vector-valued harmonic analysis; in particular, inequalities of the form (1.3) could yield sharp bounds for Fourier multipliers (i.e. operators of the form \(f\mapsto \mathcal F^{-1}(m{\mathcal {F}} f)\), where \({\mathcal {F}}\) is the Fourier transform, \({\mathcal {F}}^{-1}\) is its inverse, and m is a bounded function). Such operators acting on \(L^p\), Sobolev, Hölder, and Besov spaces appear naturally in PDE theory when one works in the frequency domain (see e.g. [1, 32, 34, 36, 41, 48]). A notable example of such an interaction was demonstrated by Bourgain [4] and Burkholder [7] in the case of the Hilbert transform (see also [64, 65, 82]).
Second, as we will show in Section 7, (1.3) [and its Banach space-valued analogue (1.5)] provides sharp bounds for Banach space-valued stochastic integrals with respect to a general martingale. This in turn may help in establishing existence and uniqueness of solutions, together with basic \(L^p\)-estimates, for SPDEs driven by the non-Gaussian noise regularly exploited in models in physics and economics (such as \(\alpha \)-stable or more general Lévy processes, see e.g. [21, 31]). There is a rich set of instruments (see e.g. those for stochastic evolution equations with Wiener noise explored by van Neerven et al. in [60]) which could help one to convert Burkholder–Davis–Gundy inequalities and stochastic integral estimates into the corresponding assertions needed. We refer the reader to Section 7 and [14, 30, 43, 44, 59, 60, 79] for further details on stochastic integration in infinite dimensions and its applications to SPDEs.
In view of the above, the following natural question arises: given a Banach space X, is there an analogue of (1.3) for a general X-valued local martingale M, and what should the right-hand side of (1.3) then look like? In the current article we present the following complete solution to this problem for local martingales M with values in a UMD Banach space X.
Theorem 1.1
Let X be a UMD Banach space. Then for any local martingale \(M:{\mathbb {R}}_+\times \Omega \rightarrow X\) with \(M_0=0\) and any \(t\ge 0\) the covariation bilinear form \([\![M]\!]_t\) is well-defined and bounded almost surely, and for all \(1\le p<\infty \) we have
$$ {\mathbb {E}}\sup _{0\le s\le t}\Vert M_s\Vert ^p \eqsim _{p, X} {\mathbb {E}}\,\gamma ([\![M]\!]_t)^p. \qquad (1.5) $$
Here \(\gamma (V)\), where \(V:X^*\times X^* \rightarrow {\mathbb {R}}\) is a given nonnegative symmetric bilinear form, is the \(L^2\)-norm of an X-valued Gaussian random variable \(\xi \) with
$$ {\mathbb {E}}\langle \xi , x^*\rangle \langle \xi , y^*\rangle = V(x^*, y^*),\quad x^*, y^*\in X^*. $$
We call \(\gamma (V)\) the Gaussian characteristic of V (see Section 3).
Let us explain briefly the main steps of the proof of Theorem 1.1. This discussion will also clarify the meaning of the term on the right-hand side, which is equivalent to the right-hand side of (1.3) if X is a Hilbert space, and of (1.4) (up to a multiplicative constant) if X is a UMD Banach function space.
In Section 2 we start by proving the discrete-time version of Theorem 1.1, which takes the following simple form
$$ {\mathbb {E}}\sup _{1\le n\le N}\Bigl\Vert \sum _{k=1}^{n} d_k\Bigr\Vert ^p \eqsim _{p, X} {\mathbb {E}}{\mathbb {E}}_{\gamma }\Bigl\Vert \sum _{n=1}^{N}\gamma _n d_n\Bigr\Vert ^p, \qquad (1.6) $$
where \((d_n)_{n= 1}^N\) is an X-valued martingale difference sequence and \((\gamma _n)_{n= 1}^N\) is a sequence of independent standard Gaussian random variables defined on a probability space \((\Omega _\gamma ,{\mathbb {P}}_\gamma )\). (1.6) follows from a decoupling inequality due to Garling [22] and a martingale transform inequality due to Burkholder [8] (each of which holds if and only if X has the UMD property) together with the equivalence of Rademacher and Gaussian random sums with values in spaces with finite cotype due to Maurey and Pisier (see [53]).
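In the scalar case the Gaussian randomization on the right-hand side of (1.6) collapses to the classical square function: for a fixed path \((d_n)_{n=1}^N\), the sum \(\sum _n \gamma _n d_n\) is centered Gaussian with variance \(\sum _n d_n^2\), so \({\mathbb {E}}_\gamma |\sum _n \gamma _n d_n| = \sqrt{2/\pi }\,(\sum _n d_n^2)^{1/2}\). A seeded Monte Carlo check of this identity, included purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.array([0.5, -1.0, 2.0, 0.25])     # one fixed path of scalar differences
g = rng.normal(size=(200_000, d.size))   # independent standard Gaussians gamma_n
mc = np.abs(g @ d).mean()                # E_gamma | sum_n gamma_n d_n |
exact = np.sqrt(2 / np.pi) * np.sqrt((d ** 2).sum())
```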
Theorem 1.1 is derived from (1.6) by finite-dimensional approximation and discretization. This is a rather intricate procedure and depends on some elementary but nevertheless important properties of the Gaussian characteristic \(\gamma (\cdot )\). In particular, in Section 3 we show that for a finite dimensional Banach space X there exists a proper continuous extension of the Gaussian characteristic to all (not necessarily nonnegative) symmetric bilinear forms \(V:X^* \times X^* \rightarrow \mathbb R\), with the bound
Next, in Section 5, under the assumptions of Theorem 1.1 we show that M has a well-defined covariation bilinear form, i.e. for each \(t\ge 0\) and for almost all \(\omega \in \Omega \) there exists a symmetric bilinear form \([\![M]\!]_t(\omega ):X^* \times X^* \rightarrow {\mathbb {R}}\) such that for all \(x^*, y^*\in X^*\) one has
$$ [\![M]\!]_t(x^*, y^*) = [\langle M, x^*\rangle , \langle M, y^*\rangle ]_t \quad \text {almost surely}. $$
The existence of such a covariation bilinear form in the non-Hilbertian setting had been an open problem since the 1970s (see e.g. Meyer [56, p. 448] and Métivier [54, p. 156]; see also [2, 28, 74, 79]). In Section 5 we show that such a covariation exists in the UMD case. Moreover, in Proposition 5.5 we show that the process \([\![M]\!]\) has an increasing adapted càdlàg version.
Next we prove that the bilinear form \([\![M]\!]_t(\omega )\) has a finite Gaussian characteristic \(\gamma ([\![M]\!]_t)\) for almost all \(\omega \in \Omega \). After these preparations we prove Theorem 1.1. We also show that the UMD property is necessary for the conclusion of the theorem to hold true (see Subsection 7.3).
In Section 6 we develop three ramifications of our main result:
-
If M is continuous, the conclusion of Theorem 1.1 holds for all \(0< p< \infty \).
-
If M is purely discontinuous, the theorem can be reformulated in terms of the jumps of M.
-
If M has independent increments, the UMD assumption on X can be weakened to reflexivity and finite cotype.
The first two cases are particularly important in view of the fact that any UMD space-valued local martingale has a unique Meyer–Yoeurp decomposition into a sum of a continuous local martingale and a purely discontinuous local martingale (see [84, 85]).
A substantial part of the paper, namely Section 7, is devoted to applications of Theorem 1.1 and related results. Let us outline some of them. In Subsection 7.1 we develop a theory of vector-valued stochastic integration. Our starting point is a result of van Neerven, Veraar, and Weis [59]. They proved that if \(W_H\) is a cylindrical Brownian motion in a Hilbert space H and \(\Phi :{\mathbb {R}}_+\times \Omega \rightarrow {\mathcal {L}}(H, X)\) is an elementary predictable process, then for all \(0<p<\infty \) and \(t\ge 0\) one has the two-sided inequality
$$ {\mathbb {E}}\sup _{0\le s\le t}\Bigl\Vert \int _0^s \Phi \,\mathrm {d}W_H\Bigr\Vert ^p \eqsim _{p, X} {\mathbb {E}}\Vert \Phi \Vert _{\gamma (L^2([0,t]; H),X)}^p. \qquad (1.7) $$
Here \(\Vert \Phi \Vert _{\gamma (L^2([0,t]; H),X)}\) is the \(\gamma \)-radonifying norm of \(\Phi \) as an operator from a Hilbert space \(L^2([0,t]; H)\) into X (see (2.1) for the definition); this norm coincides with the Hilbert–Schmidt norm when X is a Hilbert space. This result was extended to continuous local martingales in [77, 79].
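In the special case \(X = H = {\mathbb {R}}\), \(p=2\), and a deterministic step integrand, the identity underlying (1.7) is the classical Itô isometry \({\mathbb {E}}|\int \Phi \,\mathrm {d}W|^2 = \Vert \Phi \Vert _{L^2}^2\). A seeded simulation of this scalar specialization (step size and values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
phi = np.array([1.0, -0.5, 2.0])     # deterministic step process, mesh dt = 1
dW = rng.normal(size=(100_000, 3))   # Brownian increments over three unit intervals
stoch_int = dW @ phi                 # the elementary integral  int_0^3 phi dW
lhs = (stoch_int ** 2).mean()        # E | int_0^3 phi dW |^2
rhs = (phi ** 2).sum()               # ||phi||^2_{L^2([0,3])}
```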
Theorem 1.1 directly implies (1.7). More generally, if \(M = \int \Phi \,\mathrm {d}{\widetilde{M}}\) for some H-valued martingale \({\widetilde{M}}\) and elementary predictable process \(\Phi :{\mathbb {R}}_+\times \Omega \rightarrow {\mathcal {L}}(H, X)\), then it follows from Theorem 1.1 that for all \(1\le p<\infty \) and \(t\ge 0\) one has
Here \(q_{{\widetilde{M}}}\) is the quadratic variation derivative of \({\widetilde{M}}\) and \(\gamma (L^2(0,t;[{\widetilde{M}}]), X)\) is a suitable space of \(\gamma \)-radonifying operators associated with \({\widetilde{M}}\) (see Subsection 7.1 for details). This represents a significant improvement of (1.7).
In Subsection 7.2 we apply our results to vector-valued stochastic integrals with respect to a compensated Poisson random measure \({\widetilde{N}}\). We show that if N is a Poisson random measure on \({\mathbb {R}}_+ \times J\) for some measurable space \((J, {\mathcal {J}})\), \(\nu \) is its compensator, and \({\widetilde{N}} := N - \nu \) is the corresponding compensated Poisson random measure, then for any UMD Banach space X, for any elementary predictable \(F:J\times {\mathbb {R}}_+ \times \Omega \rightarrow X\), and for any \(1\le p<\infty \) one has
We also show that (1.9) holds if one considers a general quasi-left continuous random measure \(\mu \) instead of N.
In Subsection 7.4 we prove the following martingale domination inequality: for all local martingales M and N with values in a UMD Banach space X such that
$$ \Vert N_0\Vert \le \Vert M_0\Vert \quad \text {almost surely}, $$
and
$$ [\langle N, x^*\rangle ]_t \le [\langle M, x^*\rangle ]_t\quad \text {almost surely},\quad t\ge 0,\ x^*\in X^*, $$
for all \(1\le p<\infty \) we have that
$$ {\mathbb {E}}\sup _{t\ge 0}\Vert N_t\Vert ^p \lesssim _{p, X} {\mathbb {E}}\sup _{t\ge 0}\Vert M_t\Vert ^p. $$
This extends the weak differential subordination \(L^p\)-estimates obtained in [82, 84] (which were previously known to hold only for \(1<p<\infty \), see [65, 82, 84]).
Finally, in Section 8, we prove that for any UMD Banach function space X over a measure space \((S, \Sigma , \mu )\), any X-valued local martingale M has a pointwise local martingale version \(M(\sigma )\), \(\sigma \in S\), such that if \(1\le p<\infty \), then for \(\mu \)-almost all \(\sigma \in S\) one has
for all \(t\ge 0\), which extends (1.4) to the case \(p=1\) and general local martingales.
In conclusion we note that it remains open whether one can find a predictable right-hand side in (1.5): so far such a predictable right-hand side has been explored only in the real-valued case and in the case \(X = L^q(S)\), \(1<q<\infty \); see the Burkholder–Novikov–Rosenthal inequalities in the forthcoming paper [18]. This problem might be resolved by using the recently discovered decoupled tangent martingales, see [83].
2 Burkholder–Davis–Gundy Inequalities: The Discrete Time Case
In this section we prove discrete Burkholder–Davis–Gundy inequalities. First we recall the definitions of UMD Banach spaces and \(\gamma \)-radonifying operators. A Banach space X is called a UMD space if for some (equivalently, for all) \(p \in (1,\infty )\) there exists a constant \(\beta >0\) such that for every \(n \ge 1\), every martingale difference sequence \((d_j)^n_{j=1}\) in \(L^p(\Omega ; X)\), and every \(\{-1,1\}\)-valued sequence \((\varepsilon _j)^n_{j=1}\) we have
$$ \Bigl({\mathbb {E}}\Bigl\Vert \sum _{j=1}^{n}\varepsilon _j d_j\Bigr\Vert ^p\Bigr)^{\frac{1}{p}} \le \beta \Bigl({\mathbb {E}}\Bigl\Vert \sum _{j=1}^{n} d_j\Bigr\Vert ^p\Bigr)^{\frac{1}{p}}. $$
The least admissible constant \(\beta \) is denoted by \(\beta _{p,X}\) and is called the UMD constant. It is well known (see [32, Chapter 4]) that \(\beta _{p, X}\ge p^*-1\) and that \(\beta _{p, H} = p^*-1\) for a Hilbert space H. We refer the reader to [11, 25, 32, 33, 49, 66, 70] for details.
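When X = H is a Hilbert space and \(p=2\), the equality \(\beta _{2,H} = 1\) reflects the orthogonality of martingale differences in \(L^2\): flipping signs leaves \({\mathbb {E}}\Vert \sum _j \varepsilon _j d_j\Vert ^2\) unchanged. A minimal sketch with an illustrative Paley–Walsh-type martingale difference sequence of our own choosing, where the expectation over the \(2^3\) equally likely Rademacher paths is computed exactly:

```python
import itertools

# d_j = r_j * phi_j(r_1, ..., r_{j-1}): a martingale difference sequence
def diffs(r1, r2, r3):
    return [r1, r2 * (1 + 0.5 * r1), r3 * (r1 - r2)]

paths = list(itertools.product([-1, 1], repeat=3))

def second_moment(signs):
    # exact E |sum_j eps_j d_j|^2 over the 8 equally likely Rademacher paths
    return sum(
        sum(e * d for e, d in zip(signs, diffs(*p))) ** 2 for p in paths
    ) / len(paths)

unsigned = second_moment((1, 1, 1))
flipped = second_moment((1, -1, 1))
# orthogonality of the d_j gives E d_1^2 + E d_2^2 + E d_3^2 = 1 + 1.25 + 2 = 4.25
```

Both values equal 4.25, independently of the chosen signs.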
Let H be a separable Hilbert space, X be a Banach space, \(T\in {\mathcal {L}}(H, X)\). Then T is called \(\gamma \)-radonifying if
$$ \Vert T\Vert _{\gamma (H,X)} := \Bigl({\mathbb {E}}\Bigl\Vert \sum _{n=1}^{\infty }\gamma _n Th_n\Bigr\Vert ^2\Bigr)^{\frac{1}{2}} < \infty , \qquad (2.1) $$
where \((h_n)_{n\ge 1}\) is an orthonormal basis of H, and \((\gamma _n)_{n\ge 1}\) is a sequence of standard Gaussian random variables (otherwise we set \(\Vert T\Vert _{\gamma (H,X)}:=\infty \)). Note that \(\Vert T\Vert _{\gamma (H,X)}\) does not depend on the choice of \((h_n)_{n\ge 1}\) (see [33, Section 9.2] and [58] for details). Often we will call \(\Vert T\Vert _{\gamma (H, X)}\) the \(\gamma \)-norm of T. \(\gamma \)-norms are exceptionally important in analysis as they are easily computable and enjoy a number of useful properties such as the ideal property, \(\gamma \)-multiplier theorems, Fubini-type theorems, etc.; see [33, 58].
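In finite dimensions the \(\gamma \)-norm is indeed easy to compute: if T is a matrix and g is a standard Gaussian vector, then \(\sum _n \gamma _n Th_n = Tg\), so \(\Vert T\Vert _{\gamma (H,X)}^2 = {\mathbb {E}}\Vert Tg\Vert ^2 = \mathrm {tr}(TT^*)\), the squared Hilbert–Schmidt (Frobenius) norm, in line with the Hilbert-space case mentioned above. A seeded Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.normal(size=(5, 3))                   # a finite-rank operator R^3 -> R^5
hs2 = (T ** 2).sum()                          # Hilbert-Schmidt norm squared
g = rng.normal(size=(50_000, 3))              # Gaussian coordinates in an ONB of H
gamma2 = ((g @ T.T) ** 2).sum(axis=1).mean()  # E || sum_n gamma_n T h_n ||^2
```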
Now we are able to state and prove discrete UMD-valued Burkholder–Davis–Gundy inequalities.
Theorem 2.1
Let X be a UMD Banach space, \((d_n)_{n\ge 1}\) be an X-valued martingale difference sequence. Then for any \(1\le p<\infty \)
$$ {\mathbb {E}}\sup _{n\ge 1}\Bigl\Vert \sum _{k=1}^{n} d_k\Bigr\Vert ^p \eqsim _{p, X} {\mathbb {E}}\Vert (d_k)_{k\ge 1}\Vert _{\gamma (\ell ^2, X)}^p. \qquad (2.2) $$
For the proof we will need Rademacher random variables.
Definition 2.2
A real-valued random variable r is called Rademacher if \({\mathbb {P}}(r=1) = {\mathbb {P}}(r=-1) = 1/2\).
Proof of Theorem 2.1
Without loss of generality we may assume that there exists \(N\ge 1\) such that \(d_n=0\) for all \(n>N\). Let \((r_n)_{n\ge 1}\) be a sequence of independent Rademacher random variables, \((\gamma _n)_{n\ge 1}\) be a sequence of independent standard Gaussian random variables. Then
where (i) follows from [8, (8.22)], (ii) holds by [33, Proposition 6.1.12], (iii) follows from [33, Corollary 7.2.10 and Proposition 7.3.15], and (iv) follows from [33, Proposition 6.3.1]. \(\quad \square \)
Remark 2.3
Note that if we collect all the constants in (2.3), then the final constant will depend only on p and \(\beta _{2, X}\) (or \(\beta _{q, X}\) for any fixed \(1<q<\infty \)).
Remark 2.4
If we collect all the constants in (2.3) then one can see that those constants behave well as \(p\rightarrow 1\), i.e. for any \(1<r<\infty \) there exist positive \(C_{r, X}\) and \(c_{r,X}\) such that for any \(1\le p\le r\)
Remark 2.5
Fix \(1<p<\infty \) and a UMD Banach space X. By Doob’s maximal inequality (4.1) and Theorem 2.1 we have that
Let us find the constants in the equivalence
Since X is UMD, it has a finite cotype q (see [33, Definition 7.1.1 and Proposition 7.3.15]), and therefore by modifying (2.3) (using decoupling inequalities [32, p. 282] instead of [8, (8.22)] and [33, Proposition 6.1.12]) one can show that
where \(c_{p, X}\) depends on p, the cotype of X, and the Gaussian cotype constant of X (see [33, Proposition 7.3.15]), while \(\kappa _{p,q}\) is the Kahane–Khinchin constant (see [33, Section 6.2]).
Remark 2.6
Theorem 2.1 can be extended to general convex functions. Indeed, let X be a UMD Banach space, \(\phi :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) be a convex increasing function such that \(\phi (0)=0\) and
$$ \phi (2\lambda ) \le c\,\phi (\lambda ),\quad \lambda \ge 0, $$
for some fixed \(c>0\). Then from a standard good-\(\lambda \) inequality argument due to Burkholder (see [8, Remark 8.3], [6, Lemma 7.1], and [7, pp. 1000–1001]) we deduce that
where (i) and (iii) follow from good-\(\lambda \) inequalities [8, (8.22)], (ii) follows from [33, Proposition 6.1.12], (iv) holds by [16, Corollary 2.7.9], Doob’s maximal inequality (4.1), and (2.4), and (v) follows from (2.4) and Kahane–Khinchin inequalities [33, Theorem 6.2.6]. Note that as in Remark 2.3 the final constant in (2.5) will depend only on \(\phi \) and \(\beta _{2, X}\) (or \(\beta _{q, X}\) for any fixed \(1<q<\infty \)).
In the following theorem we show that X having the UMD property is necessary for Theorem 2.1 to hold.
Theorem 2.7
Let X be a Banach space and \(1\le p<\infty \) be such that (2.2) holds for any martingale difference sequence \((d_n)_{n\ge 1}\). Then X is UMD.
Proof
Note that for any set \((x_n)_{n=1}^N\) of elements of X and for any \([-1,1]\)-valued sequence \((\varepsilon _n)_{n=1}^N\) we have that \(\Vert (\varepsilon _n x_n)_{n=1}^N\Vert _{\gamma (\ell ^2_N, X)} \le \Vert (x_n)_{n=1}^N\Vert _{\gamma (\ell ^2_N, X)}\) by the ideal property (see [33, Theorem 9.1.10]). Therefore if (2.2) holds for any X-valued martingale difference sequence \((d_n)_{n\ge 1}\), then we have that for any \([-1,1]\)-valued sequence \((\varepsilon _n)_{n\ge 1}\)
$$ {\mathbb {E}}\sup _{n\ge 1}\Bigl\Vert \sum _{k=1}^{n}\varepsilon _k d_k\Bigr\Vert ^p \lesssim _{p, X} {\mathbb {E}}\sup _{n\ge 1}\Bigl\Vert \sum _{k=1}^{n} d_k\Bigr\Vert ^p. \qquad (2.6) $$
If \(p>1\), then (2.6) together with (4.1) implies the UMD property. If \(p=1\), then (2.6) for \(p=1\) implies (2.6) for any \(p>1\) (see [32, Theorem 3.5.4]), and hence it again implies UMD. \(\quad \square \)
Now we turn to the continuous-time case. It turns out that in this case the right-hand side of (2.2) transforms to a so-called Gaussian characteristic of a certain bilinear form generated by a quadratic variation of the corresponding martingale. Therefore before proving our main result (Theorem 5.1) we will need to outline some basic properties of a Gaussian characteristic (see Section 3). We will also need some preliminaries concerning continuous-time Banach space-valued martingales (see Section 4).
3 Gaussian Characteristics
The current section is devoted to the definition and basic properties of one of the main objects of the paper: the Gaussian characteristic of a bilinear form. Many of the statements here may seem obvious to the reader; nevertheless, we need to establish them before proving our main result, Theorem 5.1.
3.1 Basic definitions
Let us first recall some basic facts on Gaussian measures. Let X be a Banach space. An X-valued random variable \(\xi \) is called Gaussian if \(\langle \xi , x^*\rangle \) has a Gaussian distribution for all \(x^*\in X^*\). Gaussian random variables enjoy a number of useful properties (see [3, 45]). We will need the following Gaussian covariance domination inequality (see [3, Corollary 3.3.7] and [33, Theorem 6.1.25] for the case \(\phi = \Vert \cdot \Vert ^p\)).
Lemma 3.1
Let X be a Banach space, \(\xi ,\eta \) be centered X-valued Gaussian random variables. Assume that \({\mathbb {E}} \langle \eta , x^*\rangle ^2 \le {\mathbb {E}} \langle \xi , x^*\rangle ^2 \) for all \(x^* \in X^*\). Then \({\mathbb {E}} \phi (\eta ) \le {\mathbb {E}} \phi (\xi )\) for any convex symmetric continuous function \(\phi :X \rightarrow {\mathbb {R}}_+\).
Let X be a Banach space. We denote the linear space of all continuous \({\mathbb {R}}\)-valued bilinear forms on \(X\times X\) by \(X^*\otimes X^*\). Note that this linear space can be endowed with the following natural norm:
$$ \Vert V\Vert := \sup _{\Vert x\Vert , \Vert y\Vert \le 1} |V(x, y)|, \qquad (3.1) $$
where the latter expression is finite due to bilinearity and continuity of V. A bilinear form V is called nonnegative if \(V(x, x)\ge 0\) for all \(x\in X\), and V is called symmetric if \(V(x, y) = V(y, x)\) for all \(x, y\in X\).
Let X be a Banach space, \(\xi \) be a centered X-valued Gaussian random variable. Then \(\xi \) has a covariance bilinear form \(V:X^*\times X^*\rightarrow {\mathbb {R}}\) such that
$$ V(x^*, y^*) = {\mathbb {E}}\langle \xi , x^*\rangle \langle \xi , y^*\rangle ,\quad x^*, y^*\in X^*. $$
Notice that a covariance bilinear form is always continuous, symmetric, and nonnegative. Usually one considers instead a covariance operator \(Q:X^* \rightarrow X^{**}\) defined by
$$ \langle Q x^*, y^*\rangle := {\mathbb {E}}\langle \xi , x^*\rangle \langle \xi , y^*\rangle ,\quad x^*, y^*\in X^*. $$
But since there exists a simple one-to-one correspondence between bilinear forms and \({\mathcal {L}}(X^*, X^{**})\), we will work with covariance bilinear forms instead. We refer the reader to [3, 14, 27, 75] for details.
Let \(V:X^*\times X^*\rightarrow {\mathbb {R}}\) be a symmetric continuous nonnegative bilinear form. Then V is said to have a finite Gaussian characteristic \(\gamma (V)\) if there exists a centered X-valued Gaussian random variable \(\xi \) such that V is the covariance bilinear form of \(\xi \). In this case we set \(\gamma (V) := ({\mathbb {E}} \Vert \xi \Vert ^2)^{\frac{1}{2}}\) (this value is finite due to the Fernique theorem, see [3, Theorem 2.8.5]); otherwise we set \(\gamma (V) = \infty \). Note that for all \(x^*, y^*\in X^*\) one then has the following control of continuity of V:
$$ |V(x^*, y^*)| = |{\mathbb {E}}\langle \xi , x^*\rangle \langle \xi , y^*\rangle | \le \gamma (V)^2\,\Vert x^*\Vert \Vert y^*\Vert . \qquad (3.2) $$
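In finite dimensions the Gaussian characteristic is easy to compute, and it genuinely depends on the norm of X, not only on V. If V is represented by a covariance matrix C, then for the Euclidean norm \(\gamma (V)^2 = {\mathbb {E}}\Vert \xi \Vert _2^2 = \mathrm {tr}\, C\), while for the sup norm on the same space one obtains a smaller value, since \(\Vert x\Vert _\infty \le \Vert x\Vert _2\) pointwise. A seeded illustration (the matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 4))
C = A @ A.T                                  # covariance matrix representing V
xi = rng.multivariate_normal(np.zeros(4), C, size=50_000)
gamma2_l2 = (xi ** 2).sum(axis=1).mean()     # E ||xi||_2^2; the exact value is tr(C)
gamma2_linf = (np.abs(xi).max(axis=1) ** 2).mean()  # same V, X = (R^4, sup norm)
```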
Remark 3.2
Note that for any V with \(\gamma (V)<\infty \) the distribution of the corresponding centered X-valued Gaussian random variable \(\xi \) is uniquely determined (see [3, Chapter 2]).
Remark 3.3
Note that if X is finite dimensional, then \(\gamma (V)<\infty \) for any nonnegative symmetric bilinear form V. Indeed, in this case X is isomorphic to a finite dimensional Hilbert space H, so there exists an eigenbasis \((h_n)_{n=1}^d\) making V diagonal, and then the corresponding Gaussian random variable equals \(\xi := \sum _{n=1}^d V(h_n, h_n)^{1/2} \gamma _n h_n\), where \((\gamma _n)_{n=1}^d\) are independent standard Gaussians.
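With the normalization \(\xi := \sum _{n=1}^d V(h_n, h_n)^{1/2}\gamma _n h_n\) (note the square root, which makes the variances rather than the standard deviations match), the covariance bilinear form of \(\xi \) is indeed V. Writing \(\lambda _n := V(h_n, h_n)\) and using \({\mathbb {E}}\gamma _m\gamma _n = \delta _{mn}\) together with the diagonality of V in the basis \((h_n)_{n=1}^d\):

```latex
\mathbb{E}\langle \xi, x\rangle\langle \xi, y\rangle
  = \sum_{m,n=1}^{d} \lambda_m^{1/2}\lambda_n^{1/2}\,
    \mathbb{E}[\gamma_m\gamma_n]\,\langle h_m, x\rangle\langle h_n, y\rangle
  = \sum_{n=1}^{d} \lambda_n \langle h_n, x\rangle\langle h_n, y\rangle
  = V(x, y).
```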
3.2 Basic properties of \(\gamma (\cdot )\)
Later we will need the following technical lemmas.
Lemma 3.4
Let X be a reflexive (separable) Banach space, \(V:X^* \times X^* \rightarrow {\mathbb {R}}\) be a symmetric continuous nonnegative bilinear form. Then there exist a (separable) Hilbert space H and \(T\in \mathcal L(H,X)\) such that
$$ V(x^*, y^*) = \langle T^* x^*, T^* y^*\rangle ,\quad x^*, y^*\in X^*. $$
Proof
See [5, pp. 57–58] or [45, p. 154]. \(\quad \square \)
The following lemma connects Gaussian characteristics and \(\gamma \)-norms [see (2.1)] and it can be found e.g. in [58, Theorem 7.4] or in [5, 61].
Lemma 3.5
Let X be a separable Banach space, H be a separable Hilbert space, \(T\in {\mathcal {L}}(H, X)\), \(V:X^*\times X^* \rightarrow {\mathbb {R}}\) be a symmetric continuous nonnegative bilinear form such that \(V(x^*, y^*) = \langle T^* x^*, T^*y^*\rangle \) for all \(x^*,y^*\in X^*\). Then \(\gamma (V) =\Vert T\Vert _{\gamma (H,X)}\).
Remark 3.6
Fix a Hilbert space H and a Banach space X. Note that even though by the lemma above there exists a natural embedding of \(\gamma \)-radonifying operators from \({\mathcal {L}}(H, X)\) to the space of symmetric nonnegative bilinear forms on \(X^*\times X^*\), this embedding is neither injective nor linear. This also explains why we need to use bilinear forms with finite Gaussian characteristics instead of \(\gamma \)-radonifying operators: in the proof of our main result—Theorem 5.1—we will need various statements (like triangular inequalities and convergence theorems) for bilinear forms, not operators.
Now we will prove some statements about approximation of nonnegative symmetric bilinear forms by finite dimensional ones in \(\gamma (\cdot )\).
Lemma 3.7
Let X be a reflexive Banach space, \(Y \subset X^*\) be a finite dimensional subspace. Let \(P:Y \hookrightarrow X^*\) be an inclusion operator. Let \(V:X^* \times X^* \rightarrow {\mathbb {R}}\) and \(V_0:Y\times Y \rightarrow {\mathbb {R}}\) be symmetric continuous nonnegative bilinear forms such that \(V_0(x_0^*, y_0^*) = V(Px_0^*, Py_0^*)\) for all \(x_0^*, y_0^*\in Y\). Then \(\gamma (V_0)\) is well-defined and \(\gamma (V_0) \le \gamma (V)\).
Proof
First of all notice that \(\gamma (V_0)\) is well-defined since Y is finite dimensional, hence reflexive, and thus has a predual space coinciding with its dual. Without loss of generality assume that \(\gamma (V)<\infty \). Let \(\xi _V\) be a centered X-valued Gaussian random variable with V as the covariance bilinear form. Define \(\xi _{V_0}:= P^*\xi _V\) (note that \(Y^*\hookrightarrow X\) due to the Hahn-Banach theorem). Then for all \(x_0^*, y_0^* \in Y\)
$$ {\mathbb {E}}\langle \xi _{V_0}, x_0^*\rangle \langle \xi _{V_0}, y_0^*\rangle = {\mathbb {E}}\langle \xi _V, Px_0^*\rangle \langle \xi _V, Py_0^*\rangle = V(Px_0^*, Py_0^*) = V_0(x_0^*, y_0^*), $$
so \(V_0\) is the covariance bilinear form of \(\xi _{V_0}\), and since \(\Vert P^*\Vert = \Vert P\Vert =1\),
$$ \gamma (V_0) = ({\mathbb {E}}\Vert P^*\xi _V\Vert ^2)^{\frac{1}{2}} \le ({\mathbb {E}}\Vert \xi _V\Vert ^2)^{\frac{1}{2}} = \gamma (V). $$
\(\quad \square \)
Proposition 3.8
Let X be a separable reflexive Banach space, \(V:X^* \times X^* \rightarrow {\mathbb {R}}\) be a symmetric continuous nonnegative bilinear form. Let \(Y_1\subset Y_2\subset \ldots \subset Y_m\subset \ldots \) be a sequence of finite dimensional subspaces of \(X^*\) with \(\overline{\cup _m Y_m}=X^*\). Then for each \(m\ge 1\) a symmetric continuous nonnegative bilinear form \(V_m = V|_{Y_m\times Y_m}\) is well-defined and \(\gamma (V_m) \rightarrow \gamma (V)\) as \(m\rightarrow \infty \).
Proof
First of all notice that \(V_m\)’s are well-defined since each of the \(Y_m\) is finite dimensional, hence reflexive, and thus has a predual space coinciding with its dual (which we will call \(X_m\) and which can even be embedded into X due to the Hahn-Banach theorem). Let \(P_m:Y_m \hookrightarrow X^*\) be the inclusion operator (thus in particular \(\Vert P_m\Vert \le 1\)). Let a Hilbert space H and an operator \(T\in {\mathcal {L}}(H,X)\) be as constructed in Lemma 3.4. Let \((h_n)_{n\ge 1}\) be an orthonormal basis of H, and \((\gamma _n)_{n\ge 1}\) be a sequence of standard Gaussian random variables. For each \(N\ge 1\) define a centered Gaussian random variable \(\xi _N :=\sum _{n=1}^{N} \gamma _n Th_n\). Then for each \(m\ge 1\) the centered Gaussian random variable \(\sum _{n=1}^{\infty } \gamma _n P_m^*Th_n\) is well-defined (since \(P_m^*T\) has a finite rank, and every finite rank operator has a finite \(\gamma \)-norm, see [33, Section 9.2]), and for any \(x^*\in Y_m\) we have that
so \(V_m\) is the covariance bilinear form of \(\sum _{n=1}^{\infty } \gamma _n P_m^* Th_n\), and
The latter expression converges to \(\gamma (V)\) by Lemma 3.5 and due to the fact that \(\Vert P^*_m x\Vert \rightarrow \Vert x\Vert \) monotonically for each \(x\in X\) as \(m\rightarrow \infty \). \(\quad \square \)
The next lemma provides the Gaussian characteristic with the triangular inequality.
Lemma 3.9
Let X be a reflexive Banach space, \(V, W:X^*\times X^* \rightarrow {\mathbb {R}}\) be symmetric continuous nonnegative bilinear forms. Then \(\gamma (V+W) \le \gamma (V) +\gamma (W)\).
Proof
If \(\max \{\gamma (V), \gamma (W)\} = \infty \) then the lemma is obvious, so let \(\gamma (V), \gamma (W) < \infty \). Let \(\xi _V\) and \(\xi _W\) be X-valued centered Gaussian random variables corresponding to V and W respectively; without loss of generality we may take \(\xi _V\) and \(\xi _W\) independent. Let \(\xi _{V+W} = \xi _V + \xi _W\). Then \(\xi _{V+W}\) is an X-valued centered Gaussian random variable (see [3]) and for any \(x^*\in X^*\), due to the independence of \(\xi _V\) and \(\xi _W\),
$$ {\mathbb {E}}\langle \xi _{V+W}, x^*\rangle ^2 = {\mathbb {E}}\langle \xi _V, x^*\rangle ^2 + {\mathbb {E}}\langle \xi _W, x^*\rangle ^2 = (V+W)(x^*, x^*). $$
So \(\xi _{V+W}\) has \(V+W\) as its covariance bilinear form, and therefore
$$ \gamma (V+W) = ({\mathbb {E}}\Vert \xi _V+\xi _W\Vert ^2)^{\frac{1}{2}} \le ({\mathbb {E}}\Vert \xi _V\Vert ^2)^{\frac{1}{2}} + ({\mathbb {E}}\Vert \xi _W\Vert ^2)^{\frac{1}{2}} = \gamma (V) + \gamma (W). $$
\(\quad \square \)
We now discuss two further important properties of \(\gamma (\cdot )\): monotonicity and monotone continuity.
Lemma 3.10
Let X be a separable Banach space, \(V, W:X^*\times X^* \rightarrow \mathbb R\) be symmetric continuous nonnegative bilinear forms such that \(W(x^*, x^*) \le V(x^*, x^*)\) for all \(x^* \in X^*\). Then \(\gamma (W) \le \gamma (V)\).
Proof
The lemma follows from Lemma 3.5 and [33, Theorem 9.4.1]. \(\quad \square \)
Lemma 3.11
Let X be a separable reflexive Banach space, \(Y\subset X^*\) be a dense subset, \((V_n)_{n\ge 1}\) be symmetric continuous nonnegative bilinear forms on \(X^* \times X^*\) such that \(V_n(x^*, x^*) \rightarrow 0\) for any \(x^*\in Y\) monotonically as \(n\rightarrow \infty \). Assume additionally that \(\gamma (V_n) <\infty \) for some \(n\ge 1\). Then \(\gamma (V_n)\rightarrow 0\) monotonically as \(n\rightarrow \infty \).
Proof
Without loss of generality assume that \(\gamma (V_1)<\infty \). Note that by Lemma 3.10 the sequence \((\gamma (V_n))_{n\ge 1}\) is monotone and bounded by \(\gamma (V_1)\). First of all notice that \(V_n(x^*, x^*) \rightarrow 0\) for any \(x^*\in X^*\) monotonically as \(n\rightarrow \infty \). Indeed, fix \(x^*\in X^*\). For any \(\varepsilon >0\) fix \(x^*_{\varepsilon } \in Y\) such that \(\Vert x^*-x^*_{\varepsilon }\Vert < \varepsilon \). Then \((V_n(x^*_{\varepsilon },x^*_{\varepsilon }))_{n\ge 1}\) vanishes monotonically, and
by (3.2). Thus \((V_n(x^*,x^*))_{n\ge 1}\) vanishes monotonically if we let \(\varepsilon \rightarrow 0\).
By Lemma 3.4 we may assume that there exists a separable Hilbert space H and a sequence of operators \((T_n)_{n\ge 1}\) from H to X such that \(V_n(x^*, x^*) = \Vert T_n^* x^*\Vert ^2\) for all \(x^*\in X^*\) (note that we are working with one Hilbert space since all the separable Hilbert spaces are isometrically isomorphic). Let \(T\in {\mathcal {L}}(H,X)\) be the zero operator. Then \(T_n^* x^* \rightarrow T^*x^* = 0\) as \(n\rightarrow \infty \) for all \(x^*\in X^*\), and hence by [33, Theorem 9.4.2], Lemma 3.5, and the fact that \(\Vert T_n^*x^*\Vert \le \Vert T_1^*x^*\Vert \) for all \(x^*\in X^*\)
\(\quad \square \)
The following lemma follows from Lemmas 3.9 and 3.11.
Lemma 3.12
Let X be a separable reflexive Banach space, \(Y\subset X^*\) be a dense subset, V, \((V_n)_{n\ge 1}\) be symmetric continuous nonnegative bilinear forms on \(X^* \times X^*\) such that \(V_n(x^*, x^*) \nearrow V(x^*, x^*)\) for any \(x^*\in Y\) monotonically as \(n\rightarrow \infty \). Then \(\gamma (V_n)\nearrow \gamma (V)\) monotonically as \(n\rightarrow \infty \).
3.3 \(\gamma (\cdot )\) and \(\gamma (\cdot )^2\) are not norms
Notice that \(\gamma (\cdot )\) is not a norm. Indeed, it is easy to see that \(\gamma (\alpha V) = \sqrt{\alpha }\, \gamma (V)\) for any \(\alpha \ge 0\) and any nonnegative symmetric bilinear form V: if we fix any X-valued Gaussian random variable \(\xi \) having V as its covariance bilinear form, then \(\sqrt{\alpha }\, \xi \) has \(\alpha V\) as its covariance bilinear form, so that \(\gamma (\alpha V) = ({\mathbb {E}}\Vert \sqrt{\alpha }\,\xi \Vert ^2)^{\frac{1}{2}} = \sqrt{\alpha }\,\gamma (V)\).
It is a natural question whether \(\gamma (\cdot )^2\) satisfies the triangle inequality and hence has the norm properties. The triangle inequality is easy to check if X is a Hilbert space: indeed, for any V and W,
$$ \gamma (V+W)^2 = {\mathbb {E}}\Vert \xi _V + \xi _W\Vert ^2 = {\mathbb {E}}\Vert \xi _V\Vert ^2 + 2\,{\mathbb {E}}\langle \xi _V, \xi _W\rangle + {\mathbb {E}}\Vert \xi _W\Vert ^2 = \gamma (V)^2 + \gamma (W)^2, $$
where \(\xi _V\), \(\xi _W\), and \(\xi _{V+W}\) are as in the proof of Lemma 3.9, and the cross term vanishes since \(\xi _V\) and \(\xi _W\) are independent and centered.
It turns out that if such a triangle inequality holds for some Banach space X, then this Banach space must have Gaussian type 2 (see [33, Subsection 7.1.d]). Indeed, let X be such that for all nonnegative symmetric bilinear forms V and W on \(X^*\times X^*\)
$$ \gamma (V+W)^2 \le \gamma (V)^2 + \gamma (W)^2. \qquad (3.4) $$
Fix \((x_i)_{i=1}^n\subset X\) and a sequence of independent standard Gaussian random variables \((\xi _i)_{i=1}^n\). For each \(i=1,\ldots ,n\) define a symmetric bilinear form \(V_i:X^* \times X^* \rightarrow {\mathbb {R}}\) as \(V_i(x^*, y^*) := \langle x_i, x^*\rangle \cdot \langle x_i, y^*\rangle \). Let \(V = V_1+ \cdots + V_n\). Then by (3.4) and the induction argument
where \((*)\) follows from the fact that \(\sum _{i=1}^n \xi _i x_i\) is a centered Gaussian random variable with V as its covariance bilinear form, since for all \(x^*, y^*\in X^*\)
while \((**)\) follows analogously by exploiting the fact that \(\xi _i x_i\) is a centered Gaussian random variable with the covariance bilinear form \(V_i\). Therefore by [33, Definition 7.1.17], X has Gaussian type 2 with the corresponding Gaussian type constant \(\tau _{2,X}^{\gamma }=1\). In the following proposition we show that this condition implies that X is a Hilbert space, and thus we conclude that \(\gamma (\cdot )^2\) defines a norm if and only if X is a Hilbert space.
Proposition 3.13
Let X be a Banach space such that its Gaussian type 2 constant equals 1. Then X is Hilbert.
Proof
Due to the parallelogram identity it is sufficient to show that every two-dimensional subspace of X is Hilbert; consequently, without loss of generality we may assume that X is two-dimensional. We need to show that the unit ball of X is an ellipse, as any ellipse corresponds to an inner product (see e.g. [15]). Let \(B\subset X \simeq {\mathbb {R}}^2\) be the unit ball of X. Then by [71, Theorem 1] there exists an ellipse \(E \subset X\) containing B such that \(\partial B\) and \(\partial E\) intersect in at least two pairs of points. Let us denote these pairs by \((x_1, -x_1)\) and \((x_2, -x_2)\). Notice that \(x_1\) and \(x_2\) are nonzero and not collinear. Let \({\left| \left| \left| \cdot \right| \right| \right| }\) be the norm associated to E. Then
as \(B\subset E\), and \({\left| \left| \left| x_1 \right| \right| \right| } = \Vert x_1\Vert ={\left| \left| \left| x_2 \right| \right| \right| } = \Vert x_2\Vert =1\) (as both points lie in \(\partial B \cap \partial E\)). Note that X endowed with \({\left| \left| \left| \cdot \right| \right| \right| }\) is a Hilbert space by [15], so it has an inner product \(\langle \cdot , \cdot \rangle _E\). Let \(\gamma _1\) and \(\gamma _2\) be independent standard Gaussian random variables. Then we have that
where \((*)\) holds by (3.5), and \((**)\) holds since \(\tau _{2, X}^{\gamma } = 1\) (see [33, Definition 7.1.17]). Therefore every inequality in the estimate above is in fact an equality, and hence \({\mathbb {E}} {\left| \left| \left| \gamma _1 x_1 + \gamma _2x_2 \right| \right| \right| }^2 = {\mathbb {E}} \Vert \gamma _1 x_1 + \gamma _2x_2\Vert ^2\). Thus by (3.5) \({\left| \left| \left| \gamma _1 x_1 + \gamma _2x_2 \right| \right| \right| } = \Vert \gamma _1 x_1 + \gamma _2x_2\Vert \) a.s., and since \(x_1\) and \(x_2\) are not collinear and X is two-dimensional, \(\gamma _1 x_1 + \gamma _2x_2\) has a strictly positive distribution density on the whole of X. Hence \({\left| \left| \left| x \right| \right| \right| } = \Vert x\Vert \) for a.e. \(x\in X\) (and by continuity for every \(x\in X\)), and the desired conclusion follows. \(\quad \square \)
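To see that the hypothesis of Proposition 3.13 genuinely fails outside Hilbert spaces, one can compute the relevant Gaussian average exactly in a simple non-Hilbert example. The following sketch (illustrative, not from the paper) takes \(X = ({\mathbb {R}}^2, \Vert \cdot \Vert _1)\) with \(x_1 = e_1\), \(x_2 = e_2\), where \({\mathbb {E}}\Vert \gamma _1 e_1 + \gamma _2 e_2\Vert _1^2 = {\mathbb {E}}(|\gamma _1|+|\gamma _2|)^2 = 2 + 4/\pi \), strictly exceeding \(\Vert x_1\Vert ^2 + \Vert x_2\Vert ^2 = 2\):

```python
import math

# X = R^2 with the l^1 norm, x1 = e1, x2 = e2 (an illustrative non-Hilbert example).
# E||g1*e1 + g2*e2||_1^2 = E(|g1| + |g2|)^2 = 2 + 2*(E|g|)^2 with E|g| = sqrt(2/pi).
lhs = 2 + 2 * (math.sqrt(2 / math.pi)) ** 2   # = 2 + 4/pi ~ 3.273
rhs = 1.0 ** 2 + 1.0 ** 2                     # sum of ||x_i||^2
tau_lower = math.sqrt(lhs / rhs)              # lower bound for the Gaussian type 2 constant
assert tau_lower > 1.27
```

So the Gaussian type 2 constant of \(\ell ^1_2\) is at least about 1.28, consistent with the proposition: a constant equal to 1 forces the Hilbert space structure.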
Remark 3.14
Assume that the Gaussian cotype 2 constant of X equals 1. Then the same proof yields that X is Hilbert, except that now one needs to find an ellipse E inside B such that \(\partial B\) and \(\partial E\) intersect in at least two pairs of points. In order to find such an ellipse it is sufficient to find an ellipse \(E'\subset X^*\) containing the unit ball \(B' \subset X^*\) such that \(\partial B'\) and \(\partial E'\) intersect in at least two pairs of points, and then set B to be the unit ball of the space \(Y^*\), where Y is a Hilbert space having \(E'\) as its unit ball. Then (3.6) will hold true but with \(\ge \) instead of \(\le \).
3.4 Finite dimensional case
Even though the Gaussian characteristic is well-defined only for some nonnegative symmetric forms, it can be extended in a natural continuous way to all symmetric forms provided X is finite dimensional. Let X be a finite dimensional Banach space. Notice that in this case \(\gamma (V)<\infty \) for any nonnegative symmetric bilinear form V (see Remark 3.3). Let us define \(\gamma (V)\) for a general symmetric \(V\in X^{**}\otimes X^{**} = X\otimes X\) in the following way:
Notice that \(\gamma (V)\) is well-defined and finite for any symmetric V. Indeed, by a well-known linear algebra fact (see e.g. [73, Theorems 6.6 and 6.10]) any symmetric bilinear form V has an eigenbasis \((x_n^*)_{n=1}^d\) of \(X^*\) that diagonalizes V, i.e. there exist \((\lambda _n)_{n= 1}^d \in {\mathbb {R}}\) such that for all \((a_n)_{n= 1}^d, (b_n)_{n= 1}^d \in {\mathbb {R}}\) we have that for \(x^* = \sum _{n=1}^d a_n x_n^*\) and \(y^* = \sum _{n=1}^d b_n x_n^*\)
Therefore it is sufficient to define
and then \(\gamma (V) \le \gamma (V^+) + \gamma (V^-)<\infty \), since \(V^+\) and \(V^-\) are nonnegative and by Remark 3.3. (In fact, one can check that \(\gamma (V) = \gamma (V^+) + \gamma (V^-)\), but as we will not need this later, we leave this fact without proof.)
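The decomposition \(V = V^+ - V^-\) into nonnegative parts via an eigenbasis can be made concrete numerically. The sketch below (illustrative; it uses an orthonormal eigenbasis of a symmetric matrix as a stand-in for the eigenbasis of \(X^*\)) splits a symmetric indefinite form into its positive and negative parts and checks both are nonnegative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
V = (A + A.T) / 2                      # a symmetric (in general indefinite) bilinear form

# Diagonalize and split the spectrum: V = V_plus - V_minus
lam, Q = np.linalg.eigh(V)
V_plus  = Q @ np.diag(np.maximum(lam, 0)) @ Q.T
V_minus = Q @ np.diag(np.maximum(-lam, 0)) @ Q.T

assert np.allclose(V, V_plus - V_minus)
assert np.min(np.linalg.eigvalsh(V_plus))  >= -1e-12   # V_plus  is nonnegative
assert np.min(np.linalg.eigvalsh(V_minus)) >= -1e-12   # V_minus is nonnegative
```

Since both parts are nonnegative, each has a finite Gaussian characteristic by Remark 3.3, which is exactly what makes the extended \(\gamma (V)\) finite.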
Now we collect some basic but important properties of this extended \(\gamma (\cdot )\).
Lemma 3.15
Let \(V:X^* \times X^* \rightarrow {\mathbb {R}}\) be a nonnegative symmetric bilinear form. Then \(\gamma (V)\) defined by (3.7) coincides with \(\gamma (V)\) defined in Subsection 3.1. In other words, these definitions agree given V is nonnegative.
Proof
Fix nonnegative \(V^+\) and \(V^-\) such that \(V = V^+ - V^-\). Then \(\gamma (V^+) + \gamma (V^-) = \gamma (V + V^-) + \gamma (V^-) \ge \gamma (V) + \gamma (V^-) \ge \gamma (V)\) by Lemma 3.10, so the two definitions of \(\gamma (V)\) coincide. \(\quad \square \)
Lemma 3.16
Let \(V, W:X^*\times X^* \rightarrow {\mathbb {R}}\) be symmetric bilinear forms. Then \(\gamma (V) - \gamma (W) \le \gamma (V-W)\).
Proof
Denote \(V-W\) by U. Fix \(\varepsilon >0\). Then there exist symmetric nonnegative bilinear forms \(W^+,W^-,U^+,U^-\) such that \(W= W^+-W^-\), \(U=U^+-U^-\), and
Then, since \(V = U + W\), by (3.7) and Lemma 3.9
and by sending \(\varepsilon \rightarrow 0\) we conclude the desired. \(\quad \square \)
Lemma 3.17
Let \(V:X^*\times X^* \rightarrow {\mathbb {R}}\) be a symmetric bilinear form. Then \(\gamma (V) = \gamma (-V)\) and \(\gamma (\alpha V) = \sqrt{\alpha }\gamma (V)\) for any \(\alpha \ge 0\).
Proof
The first part follows directly from (3.7). For the second part, by (3.7) it is enough to verify \(\gamma (\alpha V) = \sqrt{\alpha }\gamma (V)\) for nonnegative V, which was done in Subsection 3.3. \(\quad \square \)
Proposition 3.18
The function \(\gamma (\cdot )\) defined by (3.7) is continuous on the linear space of all symmetric bilinear forms endowed with \(\Vert \cdot \Vert \) defined by (3.1). Moreover, \(\gamma (V)^2 \lesssim _X \Vert V\Vert \) for any symmetric bilinear form \(V:X^* \times X^* \rightarrow {\mathbb {R}}\).
Proof
Due to Lemmas 3.16 and 3.17, in order to prove the first part of the proposition it is sufficient to show that \(\gamma (\cdot )\) is bounded on the unit ball with respect to the norm \(\Vert \cdot \Vert \). Let us show this boundedness. Let U be a fixed symmetric nonnegative element of \(X\otimes X\) such that \(U+V\) is nonnegative and \(U(x^*, x^*)\ge V(x^*, x^*)\) for any symmetric V with \(\Vert V\Vert \le 1\) (since X is finite dimensional, one can take \(U(x^*, x^*):= c{\left| \left| \left| x^* \right| \right| \right| }^2\) for some Euclidean norm \({\left| \left| \left| \cdot \right| \right| \right| }\) on \(X^*\) and a sufficiently large constant \(c>0\)). Fix a symmetric \(V:X^* \times X^* \rightarrow {\mathbb {R}}\) with \(\Vert V\Vert \le 1\). Then \(V= (U+V) - U\), and by (3.7)
which does not depend on V.
Let us show the second part. By the above there exists a constant \(C_X\) depending only on X such that \(\gamma (V)\le C_X \) whenever \(\Vert V\Vert \le 1\). Therefore by Lemma 3.17 we have that for a general symmetric V
\(\quad \square \)
Later we will also need the following elementary lemma.
Lemma 3.19
There exist vectors \((x_i^*)_{i=1}^n\) in \(X^*\) such that
defines a norm on the space of all symmetric bilinear forms on \(X^*\times X^*\). In particular we have that \(\Vert V\Vert \eqsim _X {\left| \left| \left| V \right| \right| \right| }\) for any symmetric bilinear form \(V:X^* \times X^*\rightarrow {\mathbb {R}}\).
We include the proof for the convenience of the reader.
Proof
First notice that \({\left| \left| \left| \cdot \right| \right| \right| }\) clearly satisfies the triangle inequality. Let us show that there exists a set \((x_i^*)_{i=1}^n\) such that \({\left| \left| \left| V \right| \right| \right| }=0\) implies \(V=0\). Let \((y_i^*)_{i=1}^d\) be a basis of \(X^*\). Then there exist \(i, j \in \{1,\ldots ,d\}\) such that
(otherwise \(V = 0\)). This means that for these i and j
so in particular
It remains to notice that the latter sum has the form (3.8) for a proper choice of \((x_i^*)_{i=1}^n\) independent of V.
In order to show the last part of the lemma it suffices to notice that the space of symmetric bilinear forms is finite dimensional whenever X is, so all norms on this space are equivalent, and therefore \(\Vert V\Vert \eqsim _X {\left| \left| \left| V \right| \right| \right| }\) for any symmetric bilinear form \(V:X^* \times X^*\rightarrow {\mathbb {R}}\). \(\quad \square \)
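The reason finitely many evaluations of the quadratic form \(x^* \mapsto V(x^*, x^*)\) can pin down all of V is polarization: off-diagonal values are recovered from diagonal ones. A small illustrative check (hypothetical matrices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
V = (B + B.T) / 2                       # symmetric bilinear form V(x, y) = x^T V y

def q(x):
    """The quadratic form x -> V(x, x)."""
    return x @ V @ x

x, y = rng.standard_normal(3), rng.standard_normal(3)
# Polarization: V(x, y) = (1/4) * (V(x+y, x+y) - V(x-y, x-y)) for symmetric V,
# so diagonal evaluations at sums/differences of basis vectors determine V.
assert np.isclose(x @ V @ y, 0.25 * (q(x + y) - q(x - y)))
```

This is why a sum of the form (3.8), over a suitable finite collection \((x_i^*)_{i=1}^n\) built from a basis and its pairwise sums and differences, vanishes only when V itself vanishes.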
4 Preliminaries
We continue with some preliminaries concerning continuous-time martingales.
4.1 Banach space-valued martingales
Let \((\Omega ,{\mathcal {F}}, {\mathbb {P}})\) be a probability space with a filtration \({\mathbb {F}} = ({\mathcal {F}}_t)_{t\ge 0}\) satisfying the usual conditions; in particular, \({\mathbb {F}}\) is right-continuous (see [35, 37] for details).
Let X be a Banach space. An adapted process \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) is called a martingale if \(M_t\in L^1(\Omega ; X)\) and \({\mathbb {E}} (M_t|{\mathcal {F}}_s) = M_s\) for all \(0\le s\le t\) (we refer the reader to [32] for the details on vector-valued integration and vector-valued conditional expectation). It is well known that in the real-valued case any martingale is càdlàg (i.e. has a version which is right-continuous with left limits). The same holds for a general X-valued martingale M as well (see [76, 82]), so one can define \(\Delta M_{\tau } := M_{\tau } - \lim _{\varepsilon \searrow 0} M_{0 \vee (\tau - \varepsilon )}\) on \(\{\tau <\infty \}\) for any stopping time \(\tau \).
Let \(1\le p\le \infty \). A martingale \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) is called an \(L^p\)-bounded martingale if \(M_t \in L^p(\Omega ; X)\) for each \(t\ge 0\) and there exists a limit \(M_{\infty } := \lim _{t\rightarrow \infty } M_t\in L^p(\Omega ; X)\) in \(L^p(\Omega ; X)\)-sense. Since \(\Vert \cdot \Vert :X \rightarrow {\mathbb {R}}_+\) is a convex function, and M is a martingale, \(\Vert M\Vert \) is a submartingale by Jensen’s inequality, and hence by Doob’s inequality (see e.g. [40, Theorem 1.3.8(i)]) we have that for all \(1<p\le \infty \)
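Doob's \(L^p\) maximal inequality invoked above can be observed numerically. The following Monte Carlo sketch (illustrative, with an arbitrary seed) simulates a simple random walk, the prototypical real-valued martingale, and checks that \({\mathbb {E}}\sup _k |M_k|^p \le (p/(p-1))^p\, {\mathbb {E}}|M_n|^p\) for \(p=2\), where the constant is 4:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, p = 20_000, 100, 2
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.cumsum(steps, axis=1)                      # simple random walk martingale

sup_p = np.max(np.abs(M), axis=1) ** p            # sup_k |M_k|^p along each path
end_p = np.abs(M[:, -1]) ** p                     # |M_n|^p at the terminal time
doob_const = (p / (p - 1)) ** p                   # = 4 for p = 2
assert sup_p.mean() <= doob_const * end_p.mean()  # Doob's L^p maximal inequality
```

In this experiment the empirical ratio is well below the constant 4, consistent with the inequality being far from tight for the random walk.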
4.2 Quadratic variation
Let H be a Hilbert space and let \(M:{\mathbb {R}}_+ \times \Omega \rightarrow H\) be a local martingale. We define the quadratic variation of M in the following way:
where the limit in probability is taken over partitions \(0= t_0< \ldots < t_N = t\). Note that [M] exists and is nondecreasing a.s. The reader can find more on quadratic variations in [54, 55, 79] for the vector-valued setting, and in [37, 55, 67] for the real-valued setting.
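The convergence of the approximating sums along refining partitions can be illustrated with Brownian motion, for which \([B]_t = t\). A minimal Monte Carlo sketch (illustrative choices of t, mesh, and seed):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 100_000
dB = rng.standard_normal(n) * np.sqrt(t / n)   # Brownian increments on a uniform partition
qv = np.sum(dB ** 2)                           # sum of squared increments along the partition
# For Brownian motion [B]_t = t, and the sums converge in probability as the mesh -> 0
assert abs(qv - t) < 0.05
```

The fluctuation of the sum around t has standard deviation of order \(\sqrt{2/n}\,t\), so refining the partition drives the approximation error to zero.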
As was shown in [56, Proposition 1] (see also [69, Theorem 2.13] and [79, Example 3.19] for the continuous case), for any H-valued martingale M there exists an adapted process \(q_M:{\mathbb {R}}_+ \times \Omega \rightarrow {\mathcal {L}}(H)\), which we will call the quadratic variation derivative, such that the trace of \(q_M\) does not exceed 1 on \({\mathbb {R}}_+ \times \Omega \), \(q_M\) is self-adjoint and nonnegative on \({\mathbb {R}}_+ \times \Omega \), and for any \(h,g\in H\) a.s.
For any martingales \(M, N:{\mathbb {R}}_+ \times \Omega \rightarrow H\) we can define a covariation \([M,N]:{\mathbb {R}}_+ \times \Omega \rightarrow {\mathbb {R}}\) as \([M,N] := \frac{1}{4}([M+N]-[M-N])\). Since M and N have càdlàg versions, [M, N] has a càdlàg version as well (see [35, Theorem I.4.47] and [54]).
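The polarization formula \([M,N] = \frac{1}{4}([M+N]-[M-N])\) is a pathwise algebraic identity, which a discrete-time sketch makes transparent (illustrative increments, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
dM = rng.standard_normal(1000)
dN = 0.5 * dM + rng.standard_normal(1000)        # increments of two correlated processes

cov = np.sum(dM * dN)                            # discrete covariation sum
polar = 0.25 * (np.sum((dM + dN) ** 2) - np.sum((dM - dN) ** 2))
assert np.isclose(cov, polar)                    # [M,N] = (1/4)([M+N] - [M-N]) pathwise
```

Since the identity holds for every partition sum, it passes to the limit defining the covariation.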
Let X be a Banach space and \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) a local martingale. Fix \(t\ge 0\). Then M is said to have a covariation bilinear form \([\![M]\!]_t\) at t if there exists a continuous bilinear form-valued random variable \([\![M]\!]_t:X^* \times X^* \times \Omega \rightarrow {\mathbb {R}}\) such that for any fixed \(x^*, y^*\in X^*\) a.s. \([\![M]\!]_t(x^*, y^*) = [\langle M, x^*\rangle , \langle M, y^*\rangle ]_t\).
Remark 4.1
Let us outline some basic properties of covariation bilinear forms, which follow directly from [37, Theorem 26.6] (here we presume the existence of \([\![M]\!]_t\) and \([\![N]\!]_t\) for all \(t\ge 0\)):
-
(i)
\(t\mapsto [\![M]\!]_t\) is nondecreasing, i.e. \([\![M]\!]_t(x^*, x^*) \ge [\![M]\!]_s(x^*, x^*)\) a.s. for all \(0\le s\le t\) and \(x^*\in X^*\),
-
(ii)
\( [\![M]\!]^{\tau } = [\![M^{\tau }]\!]\) a.s. for any stopping time \(\tau \),
-
(iii)
\( \Delta [\![M]\!]_{\tau }(x^*, x^*) =| \langle \Delta M_{\tau }, x^*\rangle |^2\) a.s. for any stopping time \(\tau \).
Remark 4.2
If X is finite dimensional, then it is isomorphic to a Hilbert space, and hence existence of \([\![M]\!]_t\) follows from existence of \([M]_t\) with the following estimate a.s.
For a general infinite dimensional Banach space the existence of \([\![M]\!]_t\) remains an open problem. In Theorem 5.1 we show that if X has the UMD property, then the existence of \([\![M]\!]_t\) follows automatically; moreover, in this case \( \gamma ([\![M]\!]_t) <\infty \) a.s. (see Section 3 and Theorem 5.1), which is much stronger than continuity.
5 Burkholder–Davis–Gundy Inequalities: The Continuous-Time Case
The following theorem is the main theorem of the paper.
Theorem 5.1
Let X be a UMD Banach space. Then for any local martingale \(M:{\mathbb {R}}_+\times \Omega \rightarrow X\) with \(M_0=0\) and any \(t\ge 0\) the covariation bilinear form \([\![M]\!]_t\) is well-defined and bounded almost surely, and for all \(1\le p<\infty \)
Proof of Theorem 5.1
Step 1: finite dimensional case. First note that in this case \([\![M]\!]_t\) exists and is bounded a.s. due to Remark 4.2. Fix \(1\le p<\infty \). By the multidimensional Burkholder–Davis–Gundy inequalities we may assume that both \({\mathbb {E}} \sup _{0\le s\le t}\Vert M_s\Vert ^p\) and \({\mathbb {E}} \gamma ([\![M]\!]_t)^p\) are finite. For each \(N\ge 1\) fix a partition \(0= t_0^N<\ldots < t_{n_N}^N = t\) with mesh not exceeding 1/N. For each \(\omega \in \Omega \) and \(N\ge 1\) define a bilinear form \(V_N:X^*\times X^* \rightarrow {\mathbb {R}}\) as follows:
Note that \((M_{t_{i}^N} - M_{t_{i-1}^N})_{i=1}^{n_N}\) is a martingale difference sequence with respect to the filtration \(({\mathcal {F}}_{t_{i}^N})_{i=1}^{n_N}\), so by Theorem 2.1
where \((\gamma _i)_{i=1}^{n_N}\) is a sequence of independent standard Gaussian random variables, and the latter equality holds due to the fact that for any fixed \(\omega \in \Omega \) the random variable \( \sum _{i=1}^{n_N} \gamma _i( M_{t_{i}^N} - M_{t_{i-1}^N})(\omega )\) is Gaussian and by (5.2)
Therefore it is sufficient to show that \(\gamma (V_N - [\![M]\!]_t)\rightarrow 0\) in \(L^p(\Omega )\) as \(N\rightarrow \infty \). Indeed, if this is the case, then by (5.3) and by Lemma 3.16
where the latter holds by the dominated convergence theorem as any martingale has a càdlàg version (see Subsection 4.1). Let us show this convergence. Note that by Proposition 3.18 and Lemma 3.19 a.s.
(where \({\left| \left| \left| \cdot \right| \right| \right| }\) is as in (3.8)). Therefore we need to show that \({\left| \left| \left| V_N - [\![M]\!]_t \right| \right| \right| } \rightarrow 0\) in \(L^{\frac{p}{2}}(\Omega )\), which follows from the fact that for any \(x_i^*\) from Lemma 3.19, \(i=1,\ldots ,n\), we have that
in \(L^{\frac{p}{2}}\)-sense by [20, Théorème 2] and [12, Theorem 5.1].
Step 2: infinite dimensional case. First assume that M is an \(L^p\)-bounded martingale. Without loss of generality we may assume X to be separable. Since X is UMD, X is reflexive, so \(X^*\) is separable as well. Let \(Y_1 \subset Y_2 \subset \ldots \subset Y_n \subset \ldots \) be an increasing family of finite dimensional subspaces of \(X^*\) such that \(\overline{\cup _n Y_n} = X^*\). For each \(n\ge 1\) let \(P_n:Y_n \rightarrow X^*\) be the inclusion operator. Then \(\Vert P_n^*\Vert \le 1\) and \(P_n^* M\) is a well-defined \(Y_n^*\)-valued \(L^p\)-bounded martingale. By Step 1 this martingale a.s. has a covariation bilinear form \([\![P_n^* M]\!]_t\) acting on \(Y_n \times Y_n\) and
where the implicit constant in \((*)\) is independent of n due to [32, Proposition 4.2.17] and Remark 2.3. Note that a.s. \([\![P_n^* M]\!]_t\) and \([\![P_m^* M]\!]_t\) agree for all \(m\ge n\ge 1\), i.e. a.s.
Let \(\Omega _0\subset \Omega \) be a subset of measure 1 such that (5.5) holds for all \(m\ge n\ge 1\). Fix \(\omega \in \Omega _0\). Then by (5.5) we can define a bilinear form (not necessarily continuous!) V on \(Y\times Y\) (where \(Y:= \cup _{n}Y_n \subset X^*\)) such that \(V(x^*,y^*)=[\![P_n^* M]\!]_t(x^*,y^*)\) for all \(x^*, y^*\in Y_n\) and \(n\ge 1\).
Let us show that V is continuous (and hence has a continuous extension to \(X^* \times X^*\)) and \(\gamma (V) <\infty \) a.s. on \(\Omega _0\). Notice that by Lemma 3.7 the sequence \((\gamma ([\![P_n^* M]\!]_t))_{n\ge 1}\) is increasing a.s. on \(\Omega _0\). Moreover, by the monotone convergence theorem and (5.4), \((\gamma ([\![P_n^* M]\!]_t))_{n\ge 1}\) has a limit a.s. on \(\Omega _0\). Let \({\Omega _1}\subset \Omega _0\) be a subset of full measure such that \((\gamma ([\![P_n^* M]\!]_t))_{n\ge 1}\) has a limit on \({\Omega _1}\). Then by (3.2) V is continuous on \(\Omega _1\) and hence has a continuous extension to \(X^* \times X^*\) (which we will denote by V as well for simplicity). Then by Proposition 3.8, \(\gamma (V) = \lim _{n\rightarrow \infty }\gamma ([\![P_n^* M]\!]_t)\) monotonically on \(\Omega _1\), and hence by the monotone convergence theorem and the fact that \(\Vert P_n^* x\Vert \rightarrow \Vert x\Vert \) as \(n\rightarrow \infty \) monotonically for all \(x\in X\)
It remains to show that \(V = [\![M]\!]_t\) a.s., i.e. \(V(x^*, x^*) = [\langle M, x^*\rangle ]_t\) a.s. for any \(x^*\in X^*\). If \(x^* \in Y\), then the desired follows from the construction of V. Fix \(x^*\in X^*{\setminus } Y\). Since Y is dense in \(X^*\), there exists a Cauchy sequence \((x_n^*)_{n\ge 1}\) in Y converging to \(x^*\). Then since \(V(x_n^*,x_n^*) = [\langle M, x^*_n\rangle ]_t\) a.s. for all \(n\ge 1\),
so due to a.s. continuity of V, \(V(x^*, x^*)\) and \([\langle M, x^*\rangle ]_t\) coincide a.s.
Now let M be a general local martingale. By a stopping time argument we can assume that M is an \(L^1\)-bounded martingale, and then the existence of \([\![M]\!]_t\) follows from the case \(p=1\).
Let us now show (5.1). If the left-hand side is finite, then M is an \(L^p\)-bounded martingale and the desired estimate follows from the previous part of the proof. Let the left-hand side be infinite. Then it is sufficient to notice that by Step 1
for any (finite or infinite) left-hand side, and the desired will follow as \(n\rightarrow \infty \) by the fact that \(\Vert P_n^*M_s\Vert \rightarrow \Vert M_s\Vert \) and \(\gamma ([\![P_n^*M]\!]_t) \rightarrow \gamma ([\![M]\!]_t)\) monotonically a.s. as \(n\rightarrow \infty \), and the monotone convergence theorem. \(\quad \square \)
Remark 5.2
Note that X being a UMD Banach space is necessary in Theorem 5.1 (see Theorem 2.7 and [59]).
Remark 5.3
Because of Lemma 3.5 the reader may wonder whether, if X is a UMD Banach space, then for any X-valued local martingale M, any \(t\ge 0\), and a.a. \(\omega \in \Omega \) there exists a natural choice of a Hilbert space \(H(\omega )\) and a natural choice of an operator \(T(\omega )\in \mathcal L(H(\omega ), X)\) such that for all \(x^*, y^*\in X^*\) a.s.
If this is the case, then by Lemma 3.5 and Theorem 5.1
Such a natural pair of \(H(\omega )\) and \(T(\omega )\), \(\omega \in \Omega \), is known for purely discontinuous local martingales (see Theorem 6.5) and for stochastic integrals (see Subsections 7.1 and 7.2). Unfortunately, it remains open what such H and T should look like for a general local martingale M.
Remark 5.4
As in Remark 2.6, by the limiting argument shown in the proof of Theorem 5.1 one can prove that for any UMD Banach space X, any martingale \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\), and any convex increasing function \(\phi :\mathbb R_+ \rightarrow {\mathbb {R}}_+\) with \(\phi (0)=0\) and \(\phi (2\lambda ) \le c\phi (\lambda )\) for all \(\lambda >0\) and some fixed \(c>0\), one has that
To this end, one first needs to prove the finite-dimensional case by using the proof of [12, Theorem 5.1] and the fact that for any convex increasing \(\psi :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) with \(\psi (0)=0\) and \(\psi (2\lambda ) \le c\psi (\lambda )\) the composition \(\psi \circ \phi \) satisfies the same properties (perhaps with a different constant c), and then apply the same extension argument.
Let X be a UMD Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a martingale. Then by Theorem 5.1 there exists a process \([\![M]\!]:{\mathbb {R}}_+\times \Omega \rightarrow X \otimes X\) such that for any \(x^*, y^*\in X^*\) and a.e. \((t,\omega ) \in {\mathbb {R}}_+ \times \Omega \)
In our final proposition we show that this process is adapted and has a càdlàg version (i.e. a version which is right-continuous with left limits).
Proposition 5.5
Let X be a UMD Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a local martingale. Then there exists an increasing adapted càdlàg process \([\![M]\!]:{\mathbb {R}}_+\times \Omega \rightarrow X \otimes X\) such that (5.6) holds true. Moreover, \(\gamma ([\![M]\!])\) is then increasing, adapted, and càdlàg.
Proof
Existence of such a process follows from the considerations above. Let us show that this process has an increasing, adapted, and càdlàg version. First, by a stopping time argument we may assume that M is a martingale (so \({\mathbb {E}} \gamma ([\![M]\!]_{\infty }) < \infty \) and hence \(\gamma ([\![M]\!]_{\infty }) < \infty \) a.s.) and that there exists \(T>0\) such that \(M_t = M_T\) for any \(t\ge T\). Let \((Y_n)_{n\ge 1}\) and \((P_n)_{n\ge 1}\) be as in the proof of Theorem 5.1. Then \(P_nM\) takes values in a finite dimensional space \(Y_n^*\) and hence \([\![P_nM]\!]\) has an increasing, adapted, and càdlàg version. Therefore we can fix \(\Omega _0\subset \Omega \) of full measure which is the intersection of the following sets:
-
(1)
\([\![P_nM]\!]\) is increasing càdlàg for any \(n\ge 1\),
-
(2)
\([\![M]\!]_T(x^*, y^*) = [\![P_nM]\!]_T(x^*, y^*)\) for any \(x^*, y^*\in Y_n\) and for any \(n\ge 1\),
-
(3)
\([\![P_mM]\!]_r(x^*, y^*) = [\![P_nM]\!]_r(x^*, y^*)\) for any \(r\in {\mathbb {Q}}\), for any \(x^*, y^*\in Y_{m \wedge n}\), and for any \(m, n\ge 1\),
-
(4)
\(\gamma ([\![M]\!]_T) = \gamma ([\![M]\!]_{\infty }) <\infty \).
First notice that since all \([\![P_nM]\!]\), \(n\ge 1\), are increasing càdlàg on \(\Omega _0\), for any \(t\ge 0\) (not necessarily rational) we have that
Let \(F:{\mathbb {R}}_+ \times \Omega \rightarrow X\otimes X\) be a bilinear form-valued process such that
for any \(n\ge 1\), whose existence can be shown analogously to the proof of Theorem 5.1.
First note that F is adapted by definition. Let us show that F is increasing and càdlàg on \(\Omega _0\). Fix \(\omega \in \Omega _0\). Then \(F_t(x^*, x^*) \ge F_s(x^*, x^*)\) for any \(t\ge s\ge 0\) and any \(x^* \in Y := \cup _{n}Y_n \subset X^*\), and thus the same holds for any \(x^*\in X^*\) by continuity of \(F_t\) and \(F_s\) and the fact that Y is dense in \(X^*\).
Now let us show that F is right-continuous. By (5.7) and the fact that \([\![P_nM]\!]\) is càdlàg we have that
so by Lemma 3.11 and the fact that \(\gamma (F_T) = \gamma ([\![M]\!]_T) <\infty \) we have that \(\gamma (F_{t+\varepsilon }-F_t) \rightarrow 0\) as \(\varepsilon \rightarrow 0\), and thus the desired right continuity follows from (3.2).
Finally, F has left-hand limits. Indeed, fix \(t> 0\) and let \(F_{t-}\) be a bilinear form defined by
Then \(\Vert F_{t-}-F_{t-{\varepsilon }}\Vert \rightarrow 0\) as \(\varepsilon \rightarrow 0\) by Lemma 3.11, (3.2), and the fact that \(\gamma (F_T) = \gamma ([\![M]\!]_T) <\infty \), so F has left-hand limits.
It remains to show that F is a version of \([\![M]\!]\), which follows from the fact that by (5.7) for any fixed \(t\ge 0\) a.s.
so by a.e. continuity of \(F_t\) and \( [\![M]\!]_t\) on \(X^*\times X^*\) we have the same for all \(x^*, y^*\in X^*\), and thus \(F_t = [\![M]\!]_t\) a.s.
The process \(\gamma (F)\) is finite a.s. since \(\gamma (F_T)<\infty \) a.s.; increasing a.s. since F is increasing a.s. and by Lemma 3.10; and adapted and càdlàg since F is adapted and càdlàg and the map \(V \mapsto \gamma (V)\) is continuous by (3.2). \(\quad \square \)
6 Ramifications of Theorem 5.1
Let us outline some ramifications of Theorem 5.1.
6.1 Continuous and purely discontinuous martingales
In the following theorems we will consider separately the cases of continuous and purely discontinuous martingales. Recall that an X-valued martingale is called purely discontinuous if \([\langle M, x^*\rangle ]\) is a.s. a pure jump process for any \(x^*\in X^*\) (see [35, 37, 82, 84] for details).
First we show that if M is continuous, then Theorem 5.1 holds for the whole range \(0<p<\infty \).
Theorem 6.1
Let X be a UMD Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a continuous local martingale. Then we have that for any \(0<p<\infty \)
For the proof we will need the following technical lemma, which extends Proposition 5.5 in the case of a continuous martingale.
Lemma 6.2
Let X be a UMD Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a continuous local martingale. Then the processes \([\![M]\!]\) and \(\gamma ([\![M]\!])\) have continuous versions.
Proof
The proof is entirely the same as the proof of Proposition 5.5; one only needs to use the fact that by [37, Theorem 26.6(iv)] we may assume that \([\![P_nM]\!]\) is increasing and continuous for any \(n\ge 1\) on \(\Omega _0\), so both \([\![M]\!]\) and \(\gamma ([\![M]\!])\) will not just have left-hand limits, but be left-continuous, and thus continuous (as these processes are already right-continuous). \(\quad \square \)
Proof of Theorem 6.1
The case \(p\ge 1\) follows from Theorem 5.1. Let us treat the case \(0<p<1\). First we show that \((\gamma ([\![M]\!]_t))_{t\ge 0}\) is a predictable process: \((\gamma ([\![M]\!]_t))_{t\ge 0}\) is a monotone limit of the processes \((\gamma ([\![P_n^*M]\!]_t))_{t\ge 0}\) (where the \(P_n\)'s are as in the proof of Theorem 5.1), which are predictable due to the fact that \(([\![P_n^* M]\!]_t)_{t\ge 0}\) is a \(Y_n^*\otimes Y_n^*\)-valued predictable process and \(\gamma :Y_n^*\otimes Y_n^* \rightarrow {\mathbb {R}}_+\) is a fixed measurable function. Moreover, by Lemma 6.2, \((\gamma ([\![M]\!]_t))_{t\ge 0}\) is continuous a.s., and by Remark 4.1 and Lemma 3.10, \((\gamma ([\![M]\!]_t))_{t\ge 0}\) is increasing a.s.
Now since \((\gamma ([\![M]\!]_t))_{t\ge 0}\) is continuous, predictable, and increasing, (6.1) follows from the case \(p\ge 1\) and Lenglart's inequality (see [47] and [68, Proposition IV.4.7]). \(\quad \square \)
Theorem 6.3
Let X be a UMD Banach space, \((M^n)_{n\ge 1}\) be a sequence of X-valued continuous local martingales such that \(M^n_0=0\) for all \(n\ge 1\). Then \(\sup _{t\ge 0} \Vert M^n_t\Vert \rightarrow 0\) in probability as \(n\rightarrow \infty \) if and only if \(\gamma ([\![M^n]\!]_{\infty }) \rightarrow 0\) in probability as \(n\rightarrow \infty \).
Proof
The proof follows from the classical argument due to Lenglart (see [47]), which we recall for the convenience of the reader. We will show only one direction; the other direction follows analogously. Fix \(\varepsilon , \delta >0\). For each \(n\ge 1\) define a stopping time \(\tau _n\) in the following way:
Then by (5.1) and Chebyshev’s inequality
and the latter vanishes for any fixed \(\delta >0\) as \(\varepsilon \rightarrow 0\) and \(n\rightarrow \infty \). \(\quad \square \)
Remark 6.4
Note that Theorem 6.3 does not hold for general martingales even in the real-valued case, see [37, Exercise 26.5].
For the next theorem recall that \(\ell ^2([0, t])\) is the nonseparable Hilbert space consisting of all functions \(f:[0,t] \rightarrow {\mathbb {R}}\) whose support \(\{s\in [0,t]:f(s)\ne 0\}\) is countable and \(\Vert f\Vert _{\ell ^2([0, t])} := \bigl (\sum _{0\le s\le t} |f(s)|^2\bigr )^{1/2} <\infty \).
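Since elements of \(\ell ^2([0,t])\) have countable support, they admit a sparse representation. The following minimal sketch (the dictionary representation and function name are illustrative, not from the paper) stores such a function as a mapping from support points to values and computes its norm:

```python
import math

def l2_norm(f: dict) -> float:
    """l^2([0, t]) norm of a countably supported function stored as
    {support point: value}; unlisted points carry the value 0."""
    return math.sqrt(sum(v * v for v in f.values()))

f = {0.1: 3.0, 0.5: 4.0}        # supported on two points of [0, 1]
assert l2_norm(f) == 5.0        # sqrt(3^2 + 4^2)
```

Uncountably many support points would force the norm to be infinite, which is why the space, while nonseparable as a whole, only ever "sees" a separable (countably supported) part of each element.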
Theorem 6.5
Let X be a UMD Banach space, \(1\le p<\infty \), \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a purely discontinuous martingale. Then for any \(t\ge 0\)
(In this case the \(\gamma \)-norm is well-defined. Indeed, if H is nonseparable, then an operator \(T\in {\mathcal {L}}(H, X)\) is said to have infinite \(\gamma \)-norm if there exists an uncountable orthonormal system \((h_{\alpha })_{\alpha \in \Lambda }\) such that \(Th_{\alpha } \ne 0\) for any \(\alpha \in \Lambda \). Otherwise, there exists a separable Hilbert subspace \(H_0 \subset H\) such that \(T|_{H_{0}^{\perp }} = 0\), and then we set \(\Vert T\Vert _{\gamma (H, X)} := \Vert T|_{H_0}\Vert _{\gamma (H_0, X)}\).)
Proof
It is sufficient to notice that for any \(x^*\in X^*\) a.s.
and apply Theorem 5.1 and Lemma 3.5. \(\quad \square \)
Remark 6.6
Note that martingales in Theorems 6.1 and 6.5 cover all the martingales if X is UMD. More specifically, if X has the UMD property, then any X-valued local martingale M has a unique decomposition \(M = M^c + M^d\) into a sum of a continuous local martingale \(M^c\) and a purely discontinuous local martingale \(M^d\) (such a decomposition is called the Meyer–Yoeurp decomposition, and it characterizes the UMD property, see [84, 85]).
6.2 Martingales with independent increments
Here we show that both Theorems 2.1 and 5.1 hold in much more general Banach spaces provided the corresponding martingale has independent increments.
Proposition 6.7
Let X be a Banach space, \((d_n)_{n\ge 1}\) be an X-valued martingale difference sequence with independent increments. Then for any \(1< p<\infty \)
Moreover, if X has finite cotype, then
Proof
Let \((r_n)_{n\ge 1}\) be a sequence of independent Rademacher random variables, \((\gamma _n)_{n\ge 1}\) be a sequence of independent standard Gaussian random variables. Then
where (i) follows from (4.1), (ii) follows from [46, Lemma 6.3], (iii) holds by [33, Proposition 6.3.2], and finally (iv) follows from [33, Proposition 6.3.1].
If X has finite cotype, then one has \(\eqsim _{p, X}\) instead of \(\lesssim _{p}\) in (iii) (see [33, Corollary 7.2.10]), and the second part of the proposition follows. \(\quad \square \)
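The Rademacher-to-Gaussian comparison used in step (iii) always holds in one direction, with constant \(\sqrt{\pi /2}\), in any Banach space (cf. [33, Proposition 6.3.2] cited above); finite cotype is what makes the reverse direction hold. A Monte Carlo sketch of the one-sided bound in \(({\mathbb {R}}^2, \Vert \cdot \Vert _1)\), with illustrative vectors and a small tolerance for sampling error:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 8, 50_000
x = rng.standard_normal((n, 2))                 # fixed vectors x_1, ..., x_n in R^2

r = rng.choice([-1.0, 1.0], size=(reps, n))     # Rademacher signs
g = rng.standard_normal((reps, n))              # standard Gaussians
rad_mean   = np.abs(r @ x).sum(axis=1).mean()   # E || sum r_i x_i ||_1
gauss_mean = np.abs(g @ x).sum(axis=1).mean()   # E || sum g_i x_i ||_1

# Gaussian sums dominate Rademacher sums up to sqrt(pi/2) in any Banach space
assert rad_mean <= np.sqrt(np.pi / 2) * gauss_mean * 1.02   # 2% Monte Carlo slack
```

In practice the two averages are close here, reflecting that in spaces of finite cotype the two randomizations are equivalent up to constants, which is exactly the improvement from \(\lesssim _p\) to \(\eqsim _{p,X}\) in the proof.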
Based on Proposition 6.7 and the proof of Theorem 5.1 one can show the following assertion. Notice that we assume the reflexivity of X since it was assumed throughout Section 3.
Proposition 6.8
Let X be a reflexive Banach space, \(1\le p<\infty \), \(M:\mathbb R_+\times \Omega \rightarrow X\) be an \(L^p\)-bounded martingale with independent increments such that \(M_0=0\). Let \(t\ge 0\). If M has a covariation bilinear form \([\![M]\!]_t\) at t, then
Moreover, if X has finite cotype, then the existence of \([\![M]\!]_t\) is guaranteed, and
Proof
The proof coincides with the proof of Theorem 5.1, but one needs to use Proposition 6.7 instead of Theorem 2.1. \(\quad \square \)
6.3 One-sided estimates
In practice one often needs only the upper bound in (2.2). It turns out that the existence of such estimates for a fixed Banach space X is equivalent to X having the UMD\(^-\) property.
Definition 6.9
A Banach space X is called a UMD\(^-\) space if for some (equivalently, for all) \(p \in (1,\infty )\) there exists a constant \(\beta >0\) such that for every \(n \ge 1\), every martingale difference sequence \((d_j)^n_{j=1}\) in \(L^p(\Omega ; X)\), and every sequence \((r_j)^n_{j=1}\) of independent Rademacher random variables we have
The least admissible constant \(\beta \) is denoted by \(\beta _{p,X}^{-}\) and is called the UMD\(^-\) constant.
By the definition of the UMD property and the triangle inequality one can show that UMD implies UMD\(^-\). Moreover, UMD\(^-\) spaces form a strictly larger class, which includes nonreflexive Banach spaces such as \(L^1\). The reader can find more information on UMD\(^-\) spaces in [13, 22,23,24, 32, 78].
The following theorem presents the desired equivalence.
Theorem 6.10
Let X be a Banach space and \(1\le p<\infty \). Then X has the UMD\(^-\) property if and only if for any X-valued martingale difference sequence \((d_n)_{n=1}^m\) one has
Proof
Assume that X has the UMD\(^-\) property. Let \((d_n)_{n=1}^m\) be an X-valued martingale difference sequence. Then we have that for a sequence \((r_n)_{n\ge 1}\) of independent Rademacher random variables and for a sequence \((\gamma _n)_{n\ge 1}\) of independent standard Gaussian random variables
where (i) follows from [8, (8.22)], (ii) holds by [33, Proposition 6.1.12], (iii) follows from [33, Corollaries 7.2.10 and 7.3.14], and (iv) follows from [33, Proposition 6.3.1].
Let us show the converse. Assume that (6.2) holds for any X-valued martingale difference sequence \((d_n)_{n=1}^m\). Then X has finite cotype by [33, Corollary 7.3.14], and the desired UMD\(^-\) property follows from [33, Corollary 7.2.10]. \(\quad \square \)
Remark 6.11
Unfortunately, it remains open whether one can prove the upper bound of (5.1) given X has the UMD\(^{-}\) property. The problem lies in the approximation argument employed in the proof of (5.1): we cannot use an increasing sequence \((Y_n)_{n\ge 1}\) of finite-dimensional subspaces of \(X^*\) since we cannot guarantee that \(\beta _{p, Y_n^*}^-\) does not blow up as \(n\rightarrow \infty \) (recall that \(\beta _{p, Y_n^*} \le \beta _{p, X}\) by a duality argument, see [32, Proposition 4.2.17]). Nonetheless, such an upper bound can be shown for \(X=L^1\) by an ad hoc argument (using an increasing sequence of projections onto finite-dimensional \(L^1\)-spaces).
7 Applications and Miscellanea
Here we provide further applications of Theorem 5.1.
7.1 Itô isomorphism: general martingales
Let H be a Hilbert space, X be a Banach space. For each \(x\in X\) and \(h \in H\) we denote the linear operator \(g\mapsto \langle g, h\rangle x\), \(g\in H\), by \(h\otimes x\). The process \(\Phi : \mathbb R_+ \times \Omega \rightarrow {\mathcal {L}}(H,X)\) is called elementary predictable with respect to the filtration \({\mathbb {F}} = (\mathcal F_t)_{t \ge 0}\) if it is of the form
where \(0 = t_0< \cdots< t_K <\infty \), for each \(k = 1,\ldots , K\) the sets \(B_{1k},\ldots ,B_{Mk}\) are in \({\mathcal {F}}_{t_{k-1}}\) and the vectors \(h_1,\ldots ,h_N\) are in H. Let \({\widetilde{M}}:\mathbb R_+ \times \Omega \rightarrow H\) be a local martingale. Then we define the stochastic integral \(\Phi \cdot {\widetilde{M}}:{\mathbb {R}}_+ \times \Omega \rightarrow X\) of \(\Phi \) with respect to \({\widetilde{M}}\) as follows:
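Concretely, a sketch of the intended formulas (the precise index structure is an assumption consistent with the notation above): the elementary predictable \(\Phi \) has the form

```latex
\Phi = \sum_{k=1}^{K} \sum_{m=1}^{M} \mathbf{1}_{(t_{k-1}, t_k] \times B_{mk}}
       \sum_{n=1}^{N} h_n \otimes x_{kmn}, \qquad x_{kmn} \in X,
\\[4pt]
(\Phi \cdot \widetilde{M})_t
  = \sum_{k=1}^{K} \sum_{m=1}^{M} \mathbf{1}_{B_{mk}}
    \sum_{n=1}^{N} \bigl\langle \widetilde{M}_{t_k \wedge t}
      - \widetilde{M}_{t_{k-1} \wedge t}, h_n \bigr\rangle\, x_{kmn},
  \qquad t \ge 0,
```

so that \(\Phi \cdot {\widetilde{M}}\) is a finite sum of X-valued martingales built from the increments of \({\widetilde{M}}\).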
Notice that for any \(t\ge 0\) the stochastic integral \(\Phi \cdot {\widetilde{M}}\) admits a covariation bilinear form \([\![\Phi \cdot {\widetilde{M}}]\!]_t\) which is a.s. continuous on \(X^*\times X^*\) and which, due to (4.3) and (7.2), has the following form
Remark 7.1
If \(X = {\mathbb {R}}\), then by the real-valued Burkholder–Davis–Gundy inequality and the fact that for any elementary predictable \(\Phi \)
one has an isomorphism
so one can extend the definition of a stochastic integral to all predictable \(\Phi :{\mathbb {R}}_+ \times \Omega \rightarrow H\) with
by extending the stochastic integral operator from a dense subspace of all elementary predictable processes satisfying (7.4). We refer the reader to [37, 55, 56] for details.
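As an illustrative sanity check (not part of the paper's argument), the \(p=2\) case of this real-valued isomorphism, the Itô isometry, can be verified numerically in the simplest setting \(H = {\mathbb {R}}\), \({\widetilde{M}} = W\) a Brownian motion, with a deterministic piecewise-constant integrand:

```python
import numpy as np

# Monte Carlo check of the Ito isometry E|(Phi . W)_t|^2 = int_0^t Phi(s)^2 ds
# for a deterministic elementary predictable integrand Phi on [0, t].
rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 200, 50_000
dt = t / n_steps
grid = np.linspace(0.0, t, n_steps + 1)

phi = np.sin(2 * np.pi * grid[:-1])          # value of Phi on (t_{k-1}, t_k]
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
stoch_int = (phi * dW).sum(axis=1)           # sum Phi(t_{k-1}) (W_{t_k} - W_{t_{k-1}})

lhs = (stoch_int ** 2).mean()                # E |(Phi . W)_t|^2
rhs = (phi ** 2).sum() * dt                  # int_0^t Phi(s)^2 ds, here = 1/2
print(lhs, rhs)
```

For deterministic \(\Phi \) the integral is Gaussian with variance \(\int _0^t \Phi ^2(s)\,\mathrm ds\), so the two printed numbers agree up to Monte Carlo error.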
Remark 7.2
Let \(X = {\mathbb {R}}^d\) for some \(d\ge 1\). Then analogously to Remark 7.1 one can extend the definition of a stochastic integral to all predictable processes \(\Phi :{\mathbb {R}}_+ \times \Omega \rightarrow {\mathcal {L}}(H, {\mathbb {R}}^d)\) with
where \((e_n)_{n=1}^d\) is a basis of \({\mathbb {R}}^d\), \(\Vert T\Vert _{HS(H_1, H_2)}\) is the Hilbert–Schmidt norm of an operator T acting from a Hilbert space \(H_1\) to a Hilbert space \(H_2\), and \( L^2({\mathbb {R}}_+; A)\) for a given increasing \(A:{\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) is the Hilbert space of all functions \(f:{\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) such that \( \int _{{\mathbb {R}}_+} |f(s)|^2 \,\mathrm {d}A(s)<\infty \).
Now we present the Itô isomorphism for vector-valued stochastic integrals with respect to general martingales, which extends [59, 77, 79].
Theorem 7.3
Let H be a Hilbert space, X be a UMD Banach space, \(\widetilde{M}:{\mathbb {R}}_+ \times \Omega \rightarrow H\) be a local martingale, \(\Phi :{\mathbb {R}}_+ \times \Omega \rightarrow {\mathcal {L}}(H,X)\) be elementary predictable. Then for all \(1\le p<\infty \)
where \([{\widetilde{M}}]\) is the quadratic variation of \({\widetilde{M}}\), \(q_{{\widetilde{M}}}\) is the quadratic variation derivative (see Subsection 4.2), and \(\Vert \Phi q_{\widetilde{M}}^{1/2}\Vert ^p_{\gamma (L^2([0,t], [{\widetilde{M}}];H),X)}\) is the \(\gamma \)-norm (see (2.1)).
Proof
Fix \(t\ge 0\). Then the theorem holds by Theorem 5.1, Lemma 3.5, and the fact that by (7.3) for any fixed \(x^*\in X^*\) a.s.
\(\quad \square \)
Theorem 7.3 allows us to provide the following general stochastic integration result. Recall that a predictable process \(\Phi :{\mathbb {R}}_+ \times \Omega \rightarrow \mathcal L(H,X)\) is called strongly predictable if there exists a sequence \((\Phi _n)_{n\ge 1}\) of elementary predictable \(\mathcal L(H,X)\)-valued processes such that \(\Phi \) is the pointwise limit of \((\Phi _n)_{n\ge 1}\).
Corollary 7.4
Let H be a Hilbert space, X be a UMD Banach space, \(\widetilde{M}:{\mathbb {R}}_+ \times \Omega \rightarrow H\) be a local martingale, \(\Phi :{\mathbb {R}}_+ \times \Omega \rightarrow {\mathcal {L}}(H,X)\) be strongly predictable such that \({\mathbb {E}} \Vert \Phi q_{\widetilde{M}}^{1/2}\Vert _{\gamma (L^2({\mathbb {R}}_+, [{\widetilde{M}}];H),X)}<\infty \). Then there exists a martingale \(\Phi \cdot {\widetilde{M}}\), which coincides with the stochastic integral whenever \(\Phi \) is elementary predictable, such that
where the latter integral is defined as in Remark 7.1. Moreover, then we have that for any \(1\le p<\infty \)
For the proof we will need the following technical lemma.
Lemma 7.5
Let X be a reflexive separable Banach space, \(Y_1 \subset Y_2 \subset \ldots \subset Y_n \subset \ldots \subset X^*\) be finite dimensional subspaces such that \(\overline{\cup _n Y_n} = X^*\). Let \(P_n : Y_n\hookrightarrow X^*\), \(n\ge 1\), and \(P_{n,m}: Y_n\hookrightarrow Y_m\), \(m\ge n\ge 1\), be the inclusion operators. For each \(n\ge 1\) let \(x_n\in Y_n^*\) be such that \(P_{n,m}^* x_m = x_n\) for all \(m\ge n\ge 1\). Assume also that \(\sup _n \Vert x_n\Vert <\infty \). Then there exists \(x\in X\) such that \(P_{n}^* x = x_n\) for all \(n\ge 1\) and \(\Vert x\Vert = \lim _{n\rightarrow \infty } \Vert x_n\Vert \) monotonically.
Proof
Set \(C = \sup _{n}\Vert x_n\Vert \). First notice that \((x_n)_{n\ge 1}\) defines a bounded linear functional on \(Y = \cup _n Y_n\). Indeed, fix \(y\in Y_n\) for some fixed \(n\ge 1\) (then automatically \(y\in Y_m\) for any \(m\ge n\)). Define \(\ell (y) = \langle x_n, y\rangle \). Then this definition of \(\ell \) agrees for different n’s since for any \(m\ge n\) we have that
Moreover, this linear functional is bounded since \(|\ell (y)| = |\langle x_n, y\rangle | \le \Vert x_n\Vert \Vert y\Vert \le C \Vert y\Vert \). So it can be continuously extended to the whole space \(X^*\). Since X is reflexive, there exists \(x\in X\) such that \(\ell (x^*) = \langle x^*, x\rangle \) for any \(x^*\in X^*\). Then for any fixed \(n\ge 1\) and for any \(y\in Y_n\) we have that
so \(P_n^*x = x_n\). The convergence \(\Vert x_n\Vert \rightarrow \Vert x\Vert \) follows from the fact that \(\Vert P_n^* x\Vert \rightarrow \Vert x\Vert \) monotonically as \(n\rightarrow \infty \) for any \(x\in X\). \(\quad \square \)
Proof of Corollary 7.4
We will first consider the finite dimensional case and then deduce the infinite dimensional case.
Finite dimensional case. Since X is finite dimensional, it is isomorphic to a finite dimensional Euclidean space, and so the \(\gamma \)-norm is equivalent to the Hilbert–Schmidt norm (see e.g. [33, Proposition 9.1.9]). Then \(\Phi \) is stochastically integrable with respect to \({\widetilde{M}}\) due to Remark 7.2, so (7.5) clearly holds and we have that for any \(x^* \in X^*\) a.s.
thus (7.6) follows from Theorem 5.1 and Lemma 3.5.
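The norm equivalence used in the finite dimensional case can be illustrated numerically (an illustrative sketch; for Euclidean spaces the equivalence is in fact an identity): for a linear map \(T:{\mathbb {R}}^k \rightarrow {\mathbb {R}}^d\) one has \({\mathbb {E}}\Vert \sum _n \gamma _n Te_n\Vert ^2 = \Vert T\Vert _{HS}^2\), where \((\gamma _n)\) are independent standard Gaussians.

```python
import numpy as np

# For T : R^k -> R^d, the gamma-norm squared E || sum_n g_n T e_n ||^2
# equals the Hilbert-Schmidt (Frobenius) norm squared of T.
rng = np.random.default_rng(1)
k, d, n_samples = 4, 3, 200_000
T = rng.normal(size=(d, k))

g = rng.normal(size=(n_samples, k))          # rows: independent Gaussian vectors
vals = g @ T.T                               # each row equals sum_n g_n T e_n
gamma_sq = (vals ** 2).sum(axis=1).mean()    # Monte Carlo estimate of E ||...||^2
hs_sq = (T ** 2).sum()                       # ||T||_HS^2
print(gamma_sq, hs_sq)
```

In infinite dimensions the \(\gamma \)-norm and the Hilbert–Schmidt norm are genuinely different objects; the identity above is what makes the finite dimensional case reduce to Remark 7.2.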
Infinite dimensional case. Let now X be general. Since \(\Phi \) is strongly predictable, it takes values in a separable subspace of X, so we may assume that X is separable. Since X is UMD, it is reflexive, so \(X^*\) is separable as well, and there exists a sequence \(Y_1\subset Y_2 \subset \ldots \subset Y_n \subset \ldots \subset X^*\) of finite-dimensional subspaces of \(X^*\) such that \(\overline{\cup _n Y_n} = X^*\). For each \(m\ge n\ge 1\) define the inclusion operators \(P_n:Y_n \hookrightarrow X^*\) and \(P_{n,m}:Y_n \hookrightarrow Y_m\). Notice that by the ideal property [33, Theorem 9.1.10] \({\mathbb {E}} \Vert P_n^* \Phi q_{\widetilde{M}}^{1/2}\Vert _{\gamma (L^2({\mathbb {R}}_+, [\widetilde{M}];H),Y_n^*)}<\infty \) for any \(n\ge 1\), so since \(Y_n^*\) is finite dimensional, the stochastic integral \((P_n^* \Phi ) \cdot \widetilde{M}\) is well-defined by the case above and
where the equivalence is independent of n since \(Y_n \subset X^*\) for all \(n\ge 1\) and due to [32, Proposition 4.2.17] and Theorem 5.1. Denote the stochastic integral \((P_n^* \Phi ) \cdot {\widetilde{M}}\) by \(Z^n\). Note that \(Z^n\) is \(Y_n^*\)-valued, and since \(P_{n,m}^* P_m^*\Phi = P_n^* \Phi \) for all \(m\ge n\ge 1\), we have \(P_{n,m}^* Z^m_t = Z^n_t\) a.s. for any \(t\ge 0\). Therefore by Lemma 7.5 there exists a process \(Z:{\mathbb {R}}_+ \times \Omega \rightarrow X\) such that \(P_n^*Z = Z^n\) for all \(n\ge 1\). Let us show that Z is integrable. Fix \(t\ge 0\). Notice that by Lemma 7.5 the limit \(\Vert Z_t\Vert = \lim _{n\rightarrow \infty } \Vert P_n^* Z_t\Vert = \lim _{n\rightarrow \infty } \Vert Z^n_t\Vert \) is monotone, so by the monotone convergence theorem, (7.7), and the ideal property [33, Theorem 9.1.10]
Now let us show that Z is a martingale. Since Z is integrable, due to [32, Section 2.6] it is sufficient to show that \({\mathbb {E}} (\langle Z_t, x^*\rangle |{\mathcal {F}}_s) = \langle Z_s, x^*\rangle \) for all \(0\le s\le t\) for all \(x^*\) from some dense subspace Y of \(X^*\). Set \(Y = \cup _n Y_n\) and \(x^* \in Y_n\) for some \(n\ge 1\). Then for all \(0\le s\le t\)
so Z is a martingale. Finally, let us show (7.6). First notice that for any \(n\ge 1\) and \(x^* \in Y_n \subset X^*\) a.s.
the same holds for a general \(x^*\in X^*\) by a density argument. Then (7.6) follows from Theorem 5.1 and Lemma 3.5. \(\quad \square \)
Remark 7.6
The assumptions on \(\Phi \) in Corollary 7.4 can be weakened by a stopping time argument. Namely, one can assume that \(\Phi q_{{\widetilde{M}}}^{1/2}\) is locally in \(L^1(\Omega , \gamma (L^2({\mathbb {R}}_+, [{\widetilde{M}}];H),X))\) (i.e. there exists an increasing sequence \((\tau _n)_{n\ge 1}\) of stopping times such that \(\tau _n \rightarrow \infty \) a.s. as \(n\rightarrow \infty \) and \(\Phi q_{{\widetilde{M}}}^{1/2} {\mathbf {1}}_{[0,\tau _n]}\) is in \(L^1(\Omega , \gamma (L^2({\mathbb {R}}_+, [{\widetilde{M}}];H),X))\) for all \(n\ge 1\)). Notice that such an assumption is a natural generalization of the classical assumptions for stochastic integration in the real-valued case (see e.g. [37, p. 526]).
In the case when \({\widetilde{M}}\) is continuous, a standard localization argument (using that \(t\mapsto \Vert \Phi q_{\widetilde{M}}^{1/2}\Vert _{\gamma (L^2([0, t], [{\widetilde{M}}];H),X)}\) is continuous) allows an even weaker assumption, namely that \(\Phi q_{{\widetilde{M}}}^{1/2}\) is locally in \(\gamma (L^2({\mathbb {R}}_+, [{\widetilde{M}}];H),X)\); see e.g. [37, 59, 79].
In the theory of stochastic integration one is often interested in one-sided estimates. In the following proposition we show that such estimates are possible if X satisfies the UMD\(^-\) property (see Subsection 6.3).
Proposition 7.7
Let H be a Hilbert space, X be a UMD\(^-\) Banach space, \({\widetilde{M}}:{\mathbb {R}}_+ \times \Omega \rightarrow H\) be a local martingale, \(1 \le p<\infty \), \(\Phi :{\mathbb {R}}_+ \times \Omega \rightarrow {\mathcal {L}}(H,X)\) be strongly predictable such that \({\mathbb {E}} \Vert \Phi q_{{\widetilde{M}}}^{1/2}\Vert _{\gamma (L^2({\mathbb {R}}_+, [\widetilde{M}];H),X)}^p<\infty \) and such that there exists a sequence \((\Phi _n)_{n\ge 1}\) of elementary predictable \({\mathcal {L}}(H, X)\)-valued processes such that
Then there exists an \(L^p\)-bounded martingale \(\Phi \cdot \widetilde{M}\), defined as the strong \(L^p\)-limit of \((\Phi _n \cdot {\widetilde{M}})_{n\ge 1}\), and we have
Proof
Inequality (7.8) for \(\Phi = \Phi _n\) follows from Theorem 6.10, while the proposition together with (7.8) for a general \(\Phi \) follows from a simple limiting argument. \(\quad \square \)
7.2 Itô isomorphism: Poisson and general random measures
Let \((J, {\mathcal {J}})\) be a measurable space, N be a Poisson random measure on \(J \times {\mathbb {R}}_+\), \({\widetilde{N}}\) be the corresponding compensated Poisson random measure (see e.g. [17, 26, 37, 42, 72] for details). Then by Theorem 6.5 for any UMD Banach space X, for any \(1\le p<\infty \), and for any elementary predictable \(F:J\times {\mathbb {R}}_+ \times \Omega \rightarrow X\) we have that
The same holds for a general quasi-left continuous random measure (see [19, 29, 35, 38, 51, 62] for the definition and the details): if \(\mu \) is a general quasi-left continuous random measure on \(J \times {\mathbb {R}}_+\), \(\nu \) is its compensator, and \(\bar{\mu } := \mu -\nu \), then for any \(1\le p<\infty \)
The disadvantage of the right-hand sides of (7.9) and (7.10) is that neither is predictable or a.s. continuous in time (therefore they seem not to be useful from the SPDE point of view, since one cannot set up a fixed point argument). If \(X = L^q\) for some \(1<q<\infty \), however, such predictable, a.s. continuous-in-time right-hand sides do exist (see [17, 19]). For general UMD Banach spaces they can so far be provided only by decoupled tangent martingales, see [83].
7.3 Necessity of the UMD property
As it follows from Remark 5.2, Theorem 5.1 holds only in the UMD setting. The natural question is whether there exists an appropriate right-hand side of (5.1) in terms of \(([\langle M, x^*\rangle ,\langle M, y^*\rangle ])_{x^*, y^*\in X^*}\) for some non-UMD Banach space X and some \(1\le p<\infty \). Here we show that this is impossible.
Assume that for some Banach space X and some \(1\le p<\infty \) there exists a function G acting on families of stochastic processes parametrized by \(X^*\times X^*\) (i.e. each family has the form \(V = (V_{x^*, y^*})_{x^*, y^*\in X^*}\)) taking values in \({\mathbb {R}}\) such that for any X-valued local martingale M starting in zero we have that
where we denote \([\![M]\!] = ([\langle M, x^*\rangle ,\langle M, y^*\rangle ])_{x^*, y^*\in X^*}\) for simplicity (note that the latter might not have a proper bilinear structure). Let us show that then X must have the UMD property.
Fix any X-valued \(L^p\)-bounded martingale difference sequence \((d_n)_{n= 1}^N\) and any \(\{-1,1\}\)-valued sequence \((\varepsilon _n)_{n= 1}^N\). Let \(e_n:= \varepsilon _nd_n\) for all \(n=1,\ldots ,N\). For every \(x^*, y^*\in X^*\) define a stochastic process \(V_{x^*,y^*}:{\mathbb {R}}_+ \times \Omega \rightarrow {\mathbb {R}}\) as
(recall that [t] is the integer part of t). Let \(V := (V_{x^*, y^*})_{x^*,y^*\in X^*}\). Then by (7.11)
Since N, \((d_n)_{n= 1}^N\), and \((\varepsilon _n)_{n= 1}^N\) are general, (7.12) implies that X is a UMD Banach space (see the proof of Theorem 2.7).
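In summary, the mechanism of the argument is the following (a sketch using the notation of this subsection): set \(M_t := \sum _{n=1}^{[t]\wedge N} d_n\) and \(M^{\varepsilon }_t := \sum _{n=1}^{[t]\wedge N} e_n\). Since \(\varepsilon _n^2 = 1\), both discrete martingales generate the same family of covariations:

```latex
[\langle M, x^* \rangle, \langle M, y^* \rangle]_t
  = \sum_{n=1}^{[t] \wedge N} \langle d_n, x^* \rangle \langle d_n, y^* \rangle
  = \sum_{n=1}^{[t] \wedge N} \langle e_n, x^* \rangle \langle e_n, y^* \rangle
  = [\langle M^{\varepsilon}, x^* \rangle, \langle M^{\varepsilon}, y^* \rangle]_t .
```

Applying (7.11) to both M and \(M^{\varepsilon }\), and combining the two resulting two-sided estimates through the common quantity \({\mathbb {E}}\, G([\![M]\!])\), yields \({\mathbb {E}}\Vert \sum _n \varepsilon _n d_n\Vert ^p \eqsim {\mathbb {E}}\Vert \sum _n d_n\Vert ^p\), which is precisely the UMD property.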
7.4 Martingale domination
The next theorem shows that under some natural domination assumptions on martingales one gets \(L^p\)-estimates.
Theorem 7.8
Let X be a UMD Banach space, \(M,N:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be local martingales such that \(\Vert N_0\Vert \le \Vert M_0\Vert \) a.s. and \([\langle N, x^*\rangle ]_{\infty } \le [\langle M, x^*\rangle ]_{\infty }\) a.s. for all \(x^*\in X^*\). Then for all \(1\le p<\infty \)
Note that the assumptions in Theorem 7.8 are far more general than the weak differential subordination assumptions (recall that N is weakly differentially subordinate to M if \([\langle M, x^*\rangle ] - [\langle N, x^*\rangle ]\) is nondecreasing a.s. for any \(x^*\in X^*\), see [65, 82, 84]), so Theorem 7.8 significantly improves the \(L^p\)-bounds obtained previously for weakly differentially subordinated martingales in [82, 84] and extends them to the case \(p=1\) as well.
Proof of Theorem 7.8
First notice that by the triangle inequality
Consequently we can reduce the statement to the case \(M_0=N_0=0\) a.s. (by setting \(M := M-M_0\), \(N:= N-N_0\)), and then the proof follows directly from Theorem 5.1 and Lemma 3.10. \(\quad \square \)
Remark 7.9
It is not known what the sharp constant in (7.13) is. Nevertheless, sharp inequalities of this type have been discovered in the scalar case by Osękowski in [63]. It was shown there that if M and N are real-valued \(L^p\)-bounded martingales such that a.s.
then
where \(p^* := \max \{p, \tfrac{p}{p-1}\}\).
7.5 Martingale approximations
The current subsection is devoted to the approximation of martingales. Namely, we will extend the following lemma of Weisz (see [81, Theorem 6]) to general UMD Banach space-valued martingales.
Lemma 7.10
Let X be a finite dimensional Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a martingale such that \({\mathbb {E}} \sup _{t\ge 0} \Vert M_t\Vert <\infty \). Then there exists a sequence \((M^n)_{n\ge 1}\) of X-valued uniformly bounded martingales such that \({\mathbb {E}} \sup _{t\ge 0}\Vert M_t-M^n_t\Vert \rightarrow 0\) as \(n\rightarrow \infty \).
Here is the main theorem of the current subsection.
Theorem 7.11
Let X be a UMD Banach space, \(1\le p<\infty \), \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a martingale such that \({\mathbb {E}} \sup _{t\ge 0} \Vert M_t\Vert ^p <\infty \). Then there exists a sequence \((M^n)_{n\ge 1}\) of X-valued \(L^{\infty }\)-bounded martingales such that \({\mathbb {E}} \sup _{t\ge 0}\Vert M_t-M_t^n\Vert ^p \rightarrow 0\) as \(n\rightarrow \infty \).
Though the theorem follows easily from Doob's maximal inequality (4.1) in the case \(p>1\), the case \(p=1\) (which is the most important one for the main application of Theorems 7.11 and 8.2) remains problematic and requires some work.
For the proof of the theorem we will need to find its analogues for purely discontinuous martingales. Let us first recall some definitions.
A random variable \(\tau :\Omega \rightarrow {\mathbb {R}}_+\) is called an optional stopping time (or just a stopping time) if \(\{\tau \le t\} \in {\mathcal {F}}_t\) for each \(t\ge 0\). With an optional stopping time \(\tau \) we associate a \(\sigma \)-field \({\mathcal {F}}_{\tau } = \{A\in {\mathcal {F}}_{\infty }: A\cap \{\tau \le t\}\in {\mathcal {F}}_{t}, t\in {\mathbb {R}}_+\}\). Note that \(M_{\tau }\) is strongly \({\mathcal {F}}_{\tau }\)-measurable for any local martingale M. We refer to [37, Chapter 7] for details.
Recall that due to the existence of a càdlàg version of a martingale \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\), we can define X-valued random variables \(M_{\tau -}\) and \(\Delta M_{\tau }\) for any stopping time \(\tau \) in the following way: \(M_{\tau -} = \lim _{\varepsilon \rightarrow 0}M_{(\tau - \varepsilon )\vee 0}\), \(\Delta M_{\tau } = M_{\tau } - M_{\tau -}\).
A stopping time \(\tau \) is called predictable if there exists a sequence of stopping times \((\tau _n)_{n\ge 1}\) such that \(\tau _n<\tau \) a.s. on \(\{\tau >0\}\) for each \(n\ge 1\) and \(\tau _n \rightarrow \tau \) monotonically a.s. A stopping time \(\tau \) is called totally inaccessible if \({\mathbb {P}}\{\tau = \sigma < \infty \} =0\) for each predictable stopping time \(\sigma \). (For instance, every deterministic time is a predictable stopping time, while the jump times of a Poisson process are totally inaccessible.)
Definition 7.12
Let X be a Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a local martingale. Then M is called quasi-left continuous if \(\Delta M_{\tau } = 0\) a.s. for any predictable stopping time \(\tau \). M is said to have accessible jumps if \(\Delta M_{\tau }=0\) a.s. for any totally inaccessible stopping time \(\tau \).
The reader can find more information on quasi-left continuous martingales and martingales with accessible jumps in [19, 35, 37, 84, 85].
In order to prove Theorem 7.11 we will need to show similar approximation results for quasi-left continuous purely discontinuous martingales and purely discontinuous martingales with accessible jumps. Both cases will be considered separately.
7.5.1 Quasi-left continuous purely discontinuous martingales
Before stating the corresponding approximation theorem let us show the following proposition.
Proposition 7.13
Let X be a Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a purely discontinuous quasi-left continuous martingale. Then there exist sequences of positive numbers \((a_n)_{n\ge 1}\), \((b_n)_{n\ge 1}\), and a sequence of X-valued purely discontinuous quasi-left continuous martingales \((M^n)_{n\ge 1}\) such that
and
Sketch of the proof. The construction of such a family of martingales was essentially provided in the proof of [19, Lemma 5.20]. We recall the construction here for the convenience of the reader. We refer the reader to [19, 35, 37, 38] for the basic definitions and facts on random measures, the presentation of which we omit here for brevity. Let \(\mu ^M\) be a random measure defined on \(({\mathbb {R}}_+ \times X, {\mathcal {B}}({\mathbb {R}}_+) \otimes {\mathcal {B}}(X))\) by
Let \(\nu ^M\) be the corresponding compensator and \(\bar{\mu }^M := \mu ^M - \nu ^M\). Due to the proof of [19, Lemma 5.20] there exists an a.s. increasing sequence \((\tau _n)_{n\ge 1}\) of stopping times such that \(\tau _n \rightarrow \infty \) a.s. as \(n\rightarrow \infty \), and such that there exist positive sequences \((a_n)_{n\ge 1}\), \((b_n)_{n\ge 1}\), with \((a_n)_{n\ge 1}\) an increasing sequence of natural numbers, and with
Define a predictable set \(A_n := [0, \tau _n]\times B_n \subset {\mathbb {R}}_+ \times X\), where \(B_n := \{x\in X: \Vert x\Vert \in [1/a_n, a_n]\}\). Then the desired \(M^n\) equals the stochastic integral
where the latter is a well-defined martingale since by [19, Subsection 5.4] it is sufficient to check that for any \(t\ge 0\)
All the properties of the sequence \((M^n)_{n\ge 1}\) then follow from the construction, namely from the facts that the sets \(A_n\) are a.s. increasing with \(\cup _n A_n = {\mathbb {R}}_+ \times X {\setminus } \{0\}\) a.s., and that \(\nu ^M\) is non-atomic in time since M is quasi-left continuous (see [19, Subsection 5.4]). \(\quad \square \)
In the next theorem we show that the martingales obtained in Proposition 7.13 approximate M in the strong \(L^p\)-sense.
Theorem 7.14
Let X be a UMD Banach space, M be an X-valued martingale, \((M^n)_{n\ge 1}\) be a sequence of X-valued martingales constructed in Proposition 7.13. Assume that for some fixed \(1\le p<\infty \), \({\mathbb {E}} \sup _{t\ge 0} \Vert M_t\Vert ^p <\infty \). Then \({\mathbb {E}} \sup _{t\ge 0} \Vert M^n_t\Vert ^p <\infty \) for all \(n\ge 1\) and
Proof
First of all notice that by Theorem 6.5, (7.15), and [33, Proposition 6.1.5] for any \(n\ge 1\)
Let us show the second part of the theorem. Note that by (7.15) a.s. for all \(x^*\in X^*\)
which vanishes monotonically as \(n\rightarrow \infty \) by (7.14) and (7.16). Consequently, the desired assertion follows from Theorem 5.1, Lemma 3.11, and the monotone convergence theorem. \(\quad \square \)
7.5.2 Purely discontinuous martingales with accessible jumps
Now let us turn to purely discontinuous martingales with accessible jumps. First notice that by [37, Proposition 25.4], [37, Theorem 25.14], and by [19, Subsection 5.3] the following lemmas hold.
Lemma 7.15
Let X be a Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a local martingale with accessible jumps. Then there exists a sequence \((\tau _n)_{n\ge 0}\) of predictable stopping times with disjoint graphs (i.e. \(\tau _n\ne \tau _m\) a.s. for all \(m\ne n\)) such that a.s.
Lemma 7.16
Let X be a Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be an \(L^1\)-bounded martingale, and \(\tau \) be a predictable stopping time. Then
defines an \(L^1\)-bounded martingale.
Let X be a Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a purely discontinuous martingale with accessible jumps, \((\tau _n)_{n\ge 0}\) be a sequence of predictable stopping times with disjoint graphs such that (7.17) holds. Thanks to Lemma 7.16, for each \(n\ge 1\) we can define a martingale
Does \((M^n)_{n\ge 1}\) converge to M in the strong \(L^p\)-sense? The following theorem answers this question in the UMD case.
Theorem 7.17
Let X be a UMD Banach space, \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a martingale with accessible jumps, \((M^n)_{n\ge 1}\) be as in (7.18). Assume that \({\mathbb {E}} \sup _{t\ge 0} \Vert M_t\Vert ^p <\infty \) for some fixed \(1\le p<\infty \). Then \({\mathbb {E}} \sup _{t\ge 0} \Vert M^n_t\Vert ^p <\infty \) for all \(n\ge 1\) and
Proof
The proof is fully analogous to the proof of Theorem 7.14. \(\quad \square \)
7.5.3 Proof of Theorem 7.11
Let us now prove Theorem 7.11. Since X is a UMD Banach space, M has the canonical decomposition, i.e. there exist an X-valued continuous local martingale \(M^c\), an X-valued purely discontinuous quasi-left continuous local martingale \(M^q\), and an X-valued purely discontinuous local martingale \(M^a\) with accessible jumps such that \(M^c_0=M^q_0=0\) and \(M=M^c+M^q+M^a\) (see Subsection 7.6 for details). Moreover, by (7.20) and a triangle inequality
so it is sufficient to show Theorem 7.11 for each of these three cases separately. By [32, Theorems 1.3.2 and 3.3.16] M converges a.s., so we can assume that there exists \(T>0\) such that \(M_t = M_T\) a.s. for all \(t\ge T\).
- Case 1::
-
M is continuous. The theorem follows from the fact that every continuous martingale is locally bounded and the fact that \(M_t= M_T\) for all \(t\ge T\).
- Case 2::
-
M is purely discontinuous quasi-left continuous. By Theorem 7.14 one can assume that M has uniformly bounded jumps. Then the theorem follows from the fact that any adapted càdlàg process with uniformly bounded jumps is locally uniformly bounded and the fact that \(M_t= M_T\) for all \(t\ge T\).
- Case 3::
-
M is purely discontinuous with accessible jumps. By Theorem 7.17 we can assume that there exist predictable stopping times \((\tau _n)_{n=1}^N\) with disjoint graphs such that
$$\begin{aligned} M_t = \sum _{n=1}^N \Delta M_{\tau _n} {\mathbf {1}}_{[0,t]}(\tau _n),\quad t\ge 0. \end{aligned}$$
Fix \(\varepsilon >0\). Without loss of generality we may assume that the stopping times \((\tau _n)_{n=1}^N\) are bounded a.s. Due to [19, Subsection 5.3] we may additionally assume that \((\tau _n)_{n=1}^N\) is a.s. increasing. Then by [19, Subsection 5.3] (or [37, Lemma 26.18] in the real-valued case) the sequence \((0,\Delta M_{\tau _1}, 0, \Delta M_{\tau _2},\ldots , 0,\Delta M_{\tau _N})\) is a martingale difference sequence with respect to the filtration
(see [37, Lemma 25.2] for the definition of \(\mathcal F_{\tau -}\)). Like any discrete \(L^p\)-bounded martingale difference sequence, \((0,\Delta M_{\tau _1}, 0, \Delta M_{\tau _2},\ldots , 0,\Delta M_{\tau _N})\) can be approximated in the strong \(L^p\)-sense by a uniformly bounded X-valued \({\mathbb {G}}\)-martingale difference sequence \((0, d_1^{\varepsilon }, 0, d_2^{\varepsilon }, \ldots , 0, d_N^{\varepsilon })\) such that
The martingale difference sequence \((0, d_1^{\varepsilon }, 0, d_2^{\varepsilon }, \ldots , 0, d_N^{\varepsilon })\) can be translated back to a martingale on \({\mathbb {R}}_+\) in the same way as in [19, Subsection 5.3], i.e. one can define a process \(N^{\varepsilon }:{\mathbb {R}}_+ \times \Omega \rightarrow X\) such that
which is a martingale by [19, Subsection 5.3] (or see [37, Lemma 26.18] for the real valued version) with
which completes the proof. \(\quad \square \)
Remark 7.18
Clearly Theorem 7.11 holds true if X has a Schauder basis. It remains open whether Theorem 7.11 holds true for a general Banach space.
7.6 The canonical decomposition
Let X be a Banach space. Then X has the UMD property if and only if any X-valued local martingale M has the so-called canonical decomposition, i.e. there exist an X-valued continuous local martingale \(M^c\) (a Wiener-like part), an X-valued purely discontinuous quasi-left continuous local martingale \(M^q\) (a compensated Poisson-like part), and an X-valued purely discontinuous local martingale \(M^a\) with accessible jumps (a discrete-like part) such that \(M^c_0=M^q_0=0\) and \(M=M^c+M^q+M^a\). We refer the reader to [19, 37, 84, 85] for the details on the canonical decomposition.
As was shown in [19, 84, 85], the canonical decomposition is unique, and by [84, Section 3] together with (4.1) we have that for any \(1<p<\infty \) and for any \(i=c,q,a\)
Theorem 7.8 allows us to extend (7.19) to the case \(p=1\). Indeed, it is known due to [84, 85] that for any \(x^*\in X^*\) a.s.
so by Theorem 7.8
for all \(1\le p<\infty \) and any \(i=c,q,a\).
7.7 Covariation bilinear forms for pairs of martingales
Let X be a UMD Banach space, \(M, N:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be local martingales. Then for any fixed \(t\ge 0\) and any \(x^*, y^*\in X^*\) we have that by [37, Theorem 26.6(iii)] a.s.
Thus, analogously to the proof of Theorem 5.1 (by exploiting a subspace Y of \(X^*\) that is the linear span of a countable subset of \(X^*\)), there exists a bounded bilinear form-valued random variable \([\![M, N]\!]_t:\Omega \rightarrow X\otimes X\) such that \([\langle M, x^*\rangle , \langle N, y^*\rangle ]_t =[\![M, N]\!]_t(x^*, y^*) \) for any \(x^*, y^*\in X^*\) a.s.
Now let X and Y be UMD Banach spaces (possibly different), \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\), \(N:{\mathbb {R}}_+ \times \Omega \rightarrow Y\) be local martingales. Then we can show that for any \(t\ge 0\) there exists a bilinear form-valued random variable \([\![M,N]\!]_t:\Omega \rightarrow X \otimes Y\) such that \([\![M,N]\!]_t(x^*, y^*) = [\langle M, x^*\rangle , \langle N, y^*\rangle ]_t\) a.s. for any \(x^*\in X^*\) and \(y^*\in Y^*\). Indeed, one can work in the Banach space \(X\times Y\) and extend both M and N to take values in this space. Then by the first part of the present subsection there exists a bilinear form \([\![M,N]\!]_t\) acting on \((X \times Y)^* \times (X \times Y)^*\) such that for any \(x^*\in X^*\) and \(y^*\in Y^*\) a.s.
It remains to restrict \([\![M,N]\!]_t\) from \((X\times Y)\otimes (X\times Y)\) back to \(X\otimes Y\), which is possible by (7.21).
An interesting situation occurs for \(Y={\mathbb {R}}\). In this case \([\![M, N]\!]_t\) takes values in \(X\otimes {\mathbb {R}} \simeq X\), so \([\![M, N]\!]_t\) is simply X-valued, and it is easy to see that
where the limit in probability is taken over partitions \(0= t_0< \ldots < t_n = t\) whose mesh tends to 0, and it is taken in the weak sense (i.e. (7.22) holds under the action of any linear functional \(x^*\in X^*\)). It remains open whether (7.22) holds in the strong sense.
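For illustration (a numerical sketch, not part of the paper's argument), the defining limit of the covariation can be checked in the scalar case \(X = Y = {\mathbb {R}}\) for two correlated Brownian motions M and N with correlation \(\rho \), for which \([M, N]_t = \rho t\):

```python
import numpy as np

# Approximate [M, N]_t by the partition sums defining the covariation,
# for M = W1 and N = rho * W1 + sqrt(1 - rho^2) * W2 (correlated BMs).
rng = np.random.default_rng(2)
t, n_steps, rho = 1.0, 100_000, 0.5
dt = t / n_steps

dW1 = rng.normal(0.0, np.sqrt(dt), n_steps)
dW2 = rng.normal(0.0, np.sqrt(dt), n_steps)
dM = dW1
dN = rho * dW1 + np.sqrt(1 - rho ** 2) * dW2

# sum over the partition of (M_{t_i} - M_{t_{i-1}})(N_{t_i} - N_{t_{i-1}})
covariation = (dM * dN).sum()
print(covariation)   # close to rho * t = 0.5 up to Monte Carlo error
```

As the mesh \(t/n\) tends to 0 the partition sums converge in probability to \(\rho t\), in line with (7.22).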
8 UMD Banach Function Spaces
Here we extend (1.4) to the case \(p=1\). Let us first recall some basic definitions concerning Banach function spaces. For a given measure space \((S,\Sigma ,\mu )\), the linear space of all real-valued measurable functions is denoted by \(L^0(S)\). We endow \(L^0(S)\) with the topology of local convergence in measure.
Definition 8.1
Let \((S,\Sigma ,\mu )\) be a measure space. Let \(n:L^0(S)\rightarrow [0,\infty ]\) be a function which satisfies the following properties:
(i) \(n(x) = 0\) if and only if \(x=0\),
(ii) for all \(x,y\in L^0(S)\) and \(\lambda \in {{\mathbb {R}}}\), \(n(\lambda x) = |\lambda | n(x)\) and \(n(x+y)\le n(x)+n(y)\),
(iii) if \(x \in L^0(S), y \in L^0(S)\), and \(|x| \le |y|\), then \(n(x) \le n(y)\),
(iv) there exists \(\zeta \in L^0(S)\) with \(\zeta >0\) and \(n(\zeta )<\infty \),
(v) if \(0 \le x_n \uparrow x\) with \((x_n)_{n=1}^\infty \) a sequence in \(L^0(S)\) and \(x \in L^0(S)\), then \(n(x) = \sup _{n \in {{\mathbb {N}}}}n(x_n)\).
Let X denote the space of all \(x\in L^0(S)\) for which \(\Vert x\Vert :=n(x)<\infty \). Then X is called a normed function space associated to n. It is called a Banach function space when \((X,\Vert \cdot \Vert _X)\) is complete. We will additionally assume the following natural property of X:
(vi) X is continuously embedded into \(L^0(S)\) endowed with the topology of local convergence in measure.
Notice that condition (vi) holds automatically if one changes the measure on \((S, \Sigma )\) in an appropriate way (see [50, Theorem 1.b.14]). We refer the reader to [50, 57, 70, 80, 86] for further details on Banach function spaces.
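As a basic example, for \(1\le q<\infty \) the norm \(n(x) = \Vert x\Vert _{L^q(S)}\) satisfies (i)–(v); the Fatou-type property (v) follows from the monotone convergence theorem:

```latex
% Assume 0 \le x_k \uparrow x pointwise, with x_k, x \in L^0(S).
n(x)^q = \int_S |x|^q \,\mathrm{d}\mu
       = \int_S \sup_{k\in\mathbb{N}} |x_k|^q \,\mathrm{d}\mu
       = \sup_{k\in\mathbb{N}} \int_S |x_k|^q \,\mathrm{d}\mu
       = \sup_{k\in\mathbb{N}} n(x_k)^q .
```

Hence \(L^q(S)\) is a Banach function space in the sense of Definition 8.1.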
Given a Banach function space X over a measure space S and a Banach space E, let X(E) denote the space of all strongly measurable functions \(f:S\rightarrow E\) with \(\Vert f\Vert _E \in X\). The space X(E) becomes a Banach space when equipped with the norm \(\Vert f\Vert _{X(E)} = \big \Vert \sigma \mapsto \Vert f(\sigma )\Vert _E\big \Vert _X\).
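For instance, if \(X = L^q(S)\) with \(1\le q<\infty \), then X(E) is the usual Lebesgue–Bochner space \(L^q(S;E)\):

```latex
\Vert f\Vert_{X(E)}
  = \bigl\Vert \sigma \mapsto \Vert f(\sigma)\Vert_E \bigr\Vert_{L^q(S)}
  = \Bigl( \int_S \Vert f(\sigma)\Vert_E^q \,\mathrm{d}\mu(\sigma) \Bigr)^{1/q}.
```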
Let X be a UMD Banach function space over a \(\sigma \)-finite measure space \((S, \Sigma ,\mu )\). According to [80] any X-valued \(L^p\)-bounded martingale M, \(1<p<\infty \), has a pointwise martingale version, i.e. there exists a process \(N:{\mathbb {R}}_+ \times \Omega \times S \rightarrow {\mathbb {R}}\) such that
(i) \(N|_{[0,t]\times \Omega \times S}\) is \({\mathcal {B}}([0, t])\otimes {\mathcal {F}}_t \otimes \Sigma \)-measurable for all \(t\ge 0\),
(ii) \(N(\cdot , \cdot , \sigma )\) is a local martingale for a.e. \(\sigma \in S\),
(iii) \(N(\omega , t,\cdot ) = M_t(\omega )\) for any \(t\ge 0\) for a.a. \(\omega \in \Omega \).
A process N satisfying (i) and (ii) is called a local martingale field. Moreover, it was shown in [80] that for any \(1<p<\infty \)
where \(\sigma \mapsto [N(\cdot , \cdot , \sigma )]_{\infty }^{1/2}\), \(\sigma \in S\), defines an element of X a.s. The goal of the present subsection is to show that (8.1) holds for \(p=1\).
Theorem 8.2
Let X be a UMD Banach function space over a \(\sigma \)-finite measure space \((S, \Sigma ,\mu )\), and let \(M:{\mathbb {R}}_+ \times \Omega \rightarrow X\) be a local martingale. Then there exists a local martingale field \(N:{\mathbb {R}}_+ \times \Omega \times S \rightarrow {\mathbb {R}}\) such that \(N(\omega , t,\cdot ) = M_t(\omega )\) for all \(t\ge 0\) for a.a. \(\omega \in \Omega \), and for all \(1\le p<\infty \)
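The two-sided estimate asserted in Theorem 8.2 can be sketched as follows, in the notation of (8.1) (the constants \(c_{p,X}\), \(C_{p,X}\) are indicative; their behavior is discussed in the proof below):

```latex
c_{p,X}\,{\mathbb E}\,\bigl\Vert \sigma \mapsto [N(\cdot,\cdot,\sigma)]_{\infty}^{1/2} \bigr\Vert_X^p
\;\le\; {\mathbb E}\sup_{t\ge 0}\Vert M_t\Vert_X^p
\;\le\; C_{p,X}\,{\mathbb E}\,\bigl\Vert \sigma \mapsto [N(\cdot,\cdot,\sigma)]_{\infty}^{1/2} \bigr\Vert_X^p .
```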
Let us first prove the discrete analogue of Theorem 8.2, which was shown in [70, Theorem 3] for the case \(p\in (1,\infty )\).
Proposition 8.3
Let X be a UMD Banach function space over a measure space \((S, \Sigma ,\mu )\), \((d_n)_{n\ge 1}\) be an X-valued martingale difference sequence. Then for all \(1\le p<\infty \)
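In the notation of [70, Theorem 3], the asserted square-function equivalence can be sketched as follows, with the square and the square root taken pointwise on S:

```latex
{\mathbb E}\,\Bigl\Vert \sum_{n\ge 1} d_n \Bigr\Vert_X^p
\;\eqsim_{p,X}\;
{\mathbb E}\,\Bigl\Vert \Bigl( \sum_{n\ge 1} d_n^2 \Bigr)^{1/2} \Bigr\Vert_X^p ,
\qquad 1\le p<\infty .
```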
Proof
The proof follows from Theorem 2.1 and the equivalence [33, (9.26)] between the \(\gamma \)-norm and the square function. \(\quad \square \)
Remark 8.4
By Remark 2.4 and [33, (9.26)] one has that for any \(r\in (1,\infty )\) there exist positive \(C_{r, X}\) and \(c_{r,X}\) such that for any \(1\le p\le r\)
We will also need the following technical lemma proved in [80, Section 4]. Recall that \({\mathcal {D}}_b([0,\infty );X)\) is the Banach space of all bounded X-valued càdlàg functions on \({\mathbb {R}}_+\), also known as the Skorohod space.
Lemma 8.5
Let X be a Banach function space over a \(\sigma \)-finite measure space \((S, \Sigma ,\mu )\). Let
where
Then \((\text {MQ}^1(X), \Vert \cdot \Vert _{\text {MQ}^1(X)})\) is a Banach space. Moreover, if \(N^n\rightarrow N\) in \(\text {MQ}^1\), then there exists a subsequence \((N^{n_k})_{k\ge 1}\) such that pointwise a.e. in S, we have \(N^{n_k}\rightarrow N\) in \(L^1(\Omega ;\mathcal D_b([0,\infty )))\).
Proof of Theorem 8.2
We will consider separately the cases \(p>1\) and \(p=1\).
Case \(p>1\). This case was covered in [80]. Nevertheless, let us note that by modifying the proof from [80] with the help of Proposition 8.3 one obtains better behavior of the equivalence constants in (8.2). Namely, by the same proof together with Proposition 8.3 and Remark 8.4, for any \(p'\in (1,\infty )\) there exist positive constants \(C_{p', X}\) and \(c_{p',X}\) (the same as in Remark 8.4) such that for any \(1< p\le p'\)
Case \(p=1\). By Theorem 7.11 there exists a sequence \((M^n)_{n\ge 1}\) of uniformly bounded X-valued martingales such that
Since each \(M^n\), \(n\ge 1\), is uniformly bounded, \({\mathbb {E}} \sup _{t\ge 0} \Vert M^n_t\Vert ^2 <\infty \), so by the case \(p>1\) there exists a local martingale field \(N^n\) such that \(N^n(\omega , t,\cdot ) = M^n_t(\omega )\) for all \(t\ge 0\) for a.a. \(\omega \in \Omega \). By (8.4) there exist positive constants \(C_X\) and \(c_X\) such that for all \(m, n\ge 1\)
hence, due to (8.5), \((N^n)_{n\ge 1}\) is a Cauchy sequence in \(\text {MQ}^1(X)\). Since by Lemma 8.5 the linear space \(\text {MQ}^1(X)\) endowed with the norm (8.3) is a Banach space, the sequence \((N^n)_{n\ge 1}\) has a limit N in \(\text {MQ}^1(X)\).
Let us show that N is the desired local martingale field. Fix \(t\ge 0\). We need to show that \(N(\cdot ,t,\cdot ) = M_t\) a.s. on \(\Omega \). First notice that by the last part of Lemma 8.5 there exists a subsequence of \((N^n)_{n\ge 1}\), which we again denote by \((N^n)_{n\ge 1}\), such that \(N^n(\cdot , t, \sigma ) \rightarrow N(\cdot , t, \sigma )\) in \(L^1(\Omega )\) for a.e. \(\sigma \in S\). On the other hand, by Jensen’s inequality
Hence \(N^n(\cdot ,t,\cdot )\rightarrow M_t\) in \(X(L^1(\Omega ))\), and thus by Definition 8.1(vi) in \(L^0(S;L^1(\Omega ))\). Therefore we can find a subsequence of \((N^n)_{n\ge 1}\) (again denoted by \((N^n)_{n\ge 1}\)) such that \(N^n(\cdot ,t,\sigma )\rightarrow M_t(\sigma )\) in \(L^1(\Omega )\) for a.e. \(\sigma \in S\) (here we use the fact that \(\mu \) is \(\sigma \)-finite), so \(N(\cdot , t, \cdot ) = M_t\) a.s. on \(\Omega \times S\), and consequently, by Definition 8.1(iii), \(N(\omega , t, \cdot ) = M_t(\omega )\) for a.a. \(\omega \in \Omega \).
Let us finally show (8.2). Since \(N^n \rightarrow N\) in \(\text {MQ}^1(X)\) and by (8.5)
which completes the proof. \(\quad \square \)
Remark 8.6
It was shown in [80] that in the case \(p>1\) the equivalence (8.2) can be strengthened. Namely, in this case one can show that
i.e. one has the same equivalence with a pointwise supremum in S. The techniques providing such an improvement were discovered by Rubio de Francia in [70]. Unfortunately, it remains open whether (8.6) holds for \(p=1\). Surprisingly, (8.6) does hold for \(p=1\) when \(X=L^1(S)\), by a simple Fubini-type argument, so it might be that (8.6) holds for \(p=1\) for other nonreflexive Banach function spaces as well.
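For orientation, here is a sketch of the strengthened equivalence (8.6), reconstructed from the description above (the supremum is taken pointwise in S), for \(1<p<\infty \):

```latex
{\mathbb E}\,\bigl\Vert \sigma \mapsto \sup_{t\ge 0} |N(t,\cdot,\sigma)| \bigr\Vert_X^p
\;\eqsim_{p,X}\;
{\mathbb E}\,\bigl\Vert \sigma \mapsto [N(\cdot,\cdot,\sigma)]_{\infty}^{1/2} \bigr\Vert_X^p .
```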
References
Bahouri, H., Chemin, J.-Y., Danchin, R.: Fourier Analysis and Nonlinear Partial Differential Equations. Volume 343 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Heidelberg (2011)
Björk, T., Di Masi, G., Kabanov, Y., Runggaldier, W.: Towards a general theory of bond markets. Finance Stoch. 1(2), 141–174 (1997)
Bogachev, V.I.: Gaussian measures. Volume 62 of Mathematical Surveys and Monographs. American Mathematical Society, Providence (1998)
Bourgain, J.: Some remarks on Banach spaces in which martingale difference sequences are unconditional. Ark. Mat. 21(2), 163–168 (1983)
Brzeźniak, Z., van Neerven, J.M.A.M.: Stochastic convolution in separable Banach spaces and the stochastic linear Cauchy problem. Studia Math. 143(1), 43–74 (2000)
Burkholder, D.L.: Distribution function inequalities for martingales. Ann. Probab. 1, 19–42 (1973)
Burkholder, D.L.: A geometrical characterization of Banach spaces in which martingale difference sequences are unconditional. Ann. Probab. 9(6), 997–1011 (1981)
Burkholder, D.L.: Martingales and Fourier Analysis in Banach Spaces. In Probability and Analysis (Varenna, 1985), Volume 1206 of Lecture Notes in Mathematics, pp. 61–108. Springer, Berlin (1986)
Burkholder, D.L.: Sharp inequalities for martingales and stochastic integrals. Astérisque 157–158, 75–94 (1988). Colloque Paul Lévy sur les Processus Stochastiques (Palaiseau, 1987)
Burkholder, D.L.: Explorations in Martingale Theory and Its Applications. In École d’Été de Probabilités de Saint-Flour XIX—1989, Volume 1464 of Lecture Notes in Mathematics, pp. 1–66. Springer, Berlin (1991)
Burkholder, D.L.: Martingales and Singular Integrals in Banach Spaces. In Handbook of the Geometry of Banach spaces, Vol. I, pp. 233–269. North-Holland, Amsterdam (2001)
Burkholder, D.L., Davis, B.J., Gundy, R.F.: Integral inequalities for convex functions of operators on martingales. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. II: Probability Theory, pages 223–240. University California Press, Berkeley, California (1972)
Cox, S.G., Geiss, S.: On decoupling in Banach spaces (2018). arXiv:1805.12377
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Volume 152 of Encyclopedia of Mathematics and its Applications, 2nd edn. Cambridge University Press, Cambridge (2014)
Day, M.M.: Some characterizations of inner-product spaces. Trans. Am. Math. Soc. 62, 320–337 (1947)
de la Peña, V.H., Giné, E.: Decoupling: From Dependence to Independence. Randomly Stopped Processes, \(U\)-Statistics and Processes, Martingales and Beyond. Probability and its Applications (New York). Springer, New York (1999)
Dirksen, S.: Itô isomorphisms for \(L^p\)-valued Poisson stochastic integrals. Ann. Probab. 42(6), 2595–2643 (2014)
Dirksen, S., Marinelli, C., Yaroslavtsev, I.S.: Stochastic evolution equations in \({L}^p\)-spaces driven by jump noise. In preparation
Dirksen, S., Yaroslavtsev, I.S.: \({L}^{q}\)-valued Burkholder-Rosenthal inequalities and sharp estimates for stochastic integrals. Proc. Lond. Math. Soc. (3) 119(6), 1633–1693 (2019)
Doléans, C.: Variation quadratique des martingales continues à droite. Ann. Math. Stat. 40, 284–289 (1969)
Fama, E.F.: Mandelbrot and the stable Paretian hypothesis. J. Bus. 36(4), 420–429 (1963)
Garling, D.J.H.: Brownian motion and UMD-spaces. In Probability and Banach spaces (Zaragoza, 1985), volume 1221 of Lecture Notes in Mathematics, pp. 36–49. Springer, Berlin (1986)
Garling, D.J.H.: Random martingale transform inequalities. In Probability in Banach Spaces 6 (Sandbjerg, 1986), Volume 20 of Progress in Probability, pp. 101–119. Birkhäuser, Boston (1990)
Geiss, S.: A counterexample concerning the relation between decoupling constants and UMD-constants. Trans. Am. Math. Soc. 351(4), 1355–1375 (1999)
Geiss, S., Montgomery-Smith, S., Saksman, E.: On singular integral and martingale transforms. Trans. Am. Math. Soc. 362(2), 553–575 (2010)
Geiss, S., Yaroslavtsev, I.S.: Dyadic and stochastic shifts and Volterra-type operators. In preparation
Goldys, B., van Neerven, J.M.A.M.: Transition semigroups of Banach space-valued Ornstein–Uhlenbeck processes. Acta Appl. Math. 76(3), 283–330 (2003)
Gravereaux, J.B., Pellaumail, J.: Formule de Ito pour des processus non continus à valeurs dans des espaces de Banach. Ann. Inst. H. Poincaré Sect. B (N.S.) 10, 399–422 (1974/1975)
Grigelionis, B.: The representation of integer-valued random measures as stochastic integrals over the Poisson measure. Litovsk. Mat. Sb. 11, 93–108 (1971)
Gyöngy, I., Krylov, N.V.: On stochastic equations with respect to semimartingales. I. Stochastics 4(1):1–21 (1980/81)
Halpin-Healy, T., Zhang, Y-Ch.: Kinetic roughening phenomena, stochastic growth, directed polymers and all that. Aspects of multidisciplinary statistical mechanics. Phys Rep 254(4–6), 215–414 (1995)
Hytönen, T.P., van Neerven, J.M.A.M., Veraar, M.C., Weis, L.: Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer, (2016)
Hytönen, T.P., van Neerven, J.M.A.M., Veraar, M.C., Weis, L.: Analysis in Banach Spaces. Vol. II. Probabilistic Methods and Operator Theory, Volume 67 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer, Cham (2017)
Hytönen, T.P., van Neerven, J.M.A.M., Veraar, M.C., Weis, L.: Analysis in Banach spaces. Vol. III. Harmonic and Spectral Theory. Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer, in preparation
Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, Volume 288 of Grundlehren der Mathematischen Wissenschaften. 2nd edn, Springer, Berlin (2003)
Kak, S.C.: The discrete Hilbert transform. Proc. IEEE 58(4), 585–586 (1970)
Kallenberg, O.: Foundations of Modern Probability. Probability and its Applications (New York). 2nd edn, Springer, New York (2002)
Kallenberg, O.: Random Measures, Theory and Applications, Volume 77 of Probability Theory and Stochastic Modelling. Springer, Cham (2017)
Kallenberg, O., Sztencel, R.: Some dimension-free features of vector-valued martingales. Probab. Theory Related Fields 88(2), 215–247 (1991)
Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus, Volume 113 of Graduate Texts in Mathematics. 2nd edn, Springer, New York (1991)
Keyantuo, V., Lizama, C.: Fourier multipliers and integro-differential equations in Banach spaces. J. Lond. Math. Soc. (2) 69(3), 737–750 (2004)
Kingman, J.F.C.: Poisson Processes, Volume 3 of Oxford Studies in Probability. Oxford University Press, New York (1993)
Krylov, N.V.: On SPDE’s and superdiffusions. Ann. Probab. 25(4), 1789–1809 (1997)
Krylov, N.V.: An Analytic Approach to SPDEs. In Stochastic Partial Differential Equations: Six Perspectives, Volume 64 of Mathematics Surveys Monograph, pp. 185–242. American Mathematics Society, Providence (1999)
Kuo, H.H.: Gaussian Measures in Banach Spaces. Lecture Notes in Mathematics, vol. 463. Springer, Berlin (1975)
Ledoux, M., Talagrand, M.: Probability in Banach Spaces, Volume 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3). Springer, Berlin (1991)
Lenglart, E.: Relation de domination entre deux processus. Ann. Inst. H. Poincaré Sect. B (N.S.) 13(2), 171–179 (1977)
Lindemulder, N.: Weighted Function Spaces with Applications to Boundary Value Problems. Ph.D. thesis, Delft University of Technology (2019)
Lindemulder, N., Veraar, M.C., Yaroslavtsev, I.S.: The UMD property for Musielak–Orlicz Spaces. In Positivity and Noncommutative Analysis, Trends Math., pp. 349–363. Springer, Cham (2019)
Lindenstrauss, J., Tzafriri, L.: Classical Banach Spaces. II, Volume 97 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer, Berlin (1979)
Marinelli, C.: On maximal inequalities for purely discontinuous \(L_q\)-valued martingales (2013). arXiv:1311.7120
Marinelli, C., Röckner, M.: On the maximal inequalities of Burkholder, Davis and Gundy. Expo. Math. 34(1), 1–26 (2016)
Maurey, B., Pisier, G.: Séries de variables aléatoires vectorielles indépendantes et propriétés géométriques des espaces de Banach. Studia Math. 58(1), 45–90 (1976)
Métivier, M.: Semimartingales, volume 2 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin (1982)
Métivier, M., Pellaumail, J.: Stochastic Integration. Probability and Mathematical Statistics. Academic Press [Harcourt Brace Jovanovich, Publishers], New York (1980)
Meyer, P.A.: Notes sur les intégrales stochastiques. I. Intégrales hilbertiennes. Lecture Notes in Mathematics, Vol. 581, pp. 446–462 (1977)
Meyer-Nieberg, P.: Banach Lattices. Universitext. Springer, Berlin (1991)
van Neerven, J.M.A.M.: \(\gamma \)-radonifying operators—a survey. In: The AMSI-ANU Workshop on Spectral Theory and Harmonic Analysis, Volume 44 of Proceedings of the Centre for Mathematics and its Applications, Australian National University, pp. 1–61 (2010)
van Neerven, J.M.A.M., Veraar, M.C., Weis, L.W.: Stochastic integration in UMD Banach spaces. Ann. Probab. 35(4), 1438–1478 (2007)
van Neerven, J.M.A.M., Veraar, M.C., Weis, L.W.: Stochastic evolution equations in UMD Banach spaces. J. Funct. Anal. 255(4), 940–993 (2008)
van Neerven, J.M.A.M., Weis, L.W.: Stochastic integration of functions with values in a Banach space. Studia Math. 166(2), 131–170 (2005)
Novikov, A.A.: Discontinuous martingales. Teor. Verojatnost. i Primenen. 20, 13–28 (1975)
Osȩkowski, A.: On relaxing the assumption of differential subordination in some martingale inequalities. Electron. Commun. Probab. 16, 9–21 (2011)
Osȩkowski, A.: Sharp Martingale and Semimartingale Inequalities, Volume 72 of Instytut Matematyczny Polskiej Akademii Nauk. Monografie Matematyczne (New Series). Springer, Basel (2012)
Osȩkowski, A., Yaroslavtsev, I.S.: The Hilbert transform and orthogonal martingales in Banach spaces. Int. Math. Res. Notices IMRN (2019). https://doi.org/10.1093/imrn/rnz187
Pisier, G.: Martingales in Banach Spaces, vol. 155. Cambridge University Press, Cambridge (2016)
Protter, P.E.: Stochastic Integration and Differential Equations, Volume 21 of Stochastic Modelling and Applied Probability. 2nd edn, Springer, Berlin (Version 2.1, Corrected third printing) (2005)
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, Volume 293 of Grundlehren der Mathematischen Wissenschaften. 3rd edn, Springer, Berlin (1999)
Rozovskiĭ, B.L.: Stochastic Evolution Systems, Volume 35 of Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, (1990). Linear theory and applications to nonlinear filtering, Translated from the Russian by A. Yarkho
Rubio de Francia, J.L.: Martingale and integral transforms of Banach space valued functions. In Probability and Banach spaces (Zaragoza, 1985), Volume 1221 of Lecture Notes in Mathematics, pp. 195–222. Springer, Berlin (1986)
Santos, F.: Inscribing a symmetric body in an ellipse. Inform. Process. Lett. 59(4), 175–178 (1996)
Sato, K.-i.: Lévy Processes and Infinitely Divisible Distributions, Volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, (2013). Translated from the 1990 Japanese original, Revised edition of the 1999 English translation
Shafarevich, I.R., Remizov, A.O.: Linear Algebra and Geometry. Springer, Heidelberg, (2013). Translated from the 2009 Russian original by David Kramer and Lena Nekludova
Shiryaev, A.N., Chernyĭ, A.S.: A vector stochastic integral and the fundamental theorem of asset pricing. Tr. Mat. Inst. Steklova, 237(Stokhast. Finans. Mat.):12–56 (2002)
van Neerven, J.M.A.M.: Nonsymmetric Ornstein-Uhlenbeck semigroups in Banach spaces. J. Funct. Anal. 155(2), 495–535 (1998)
Veraar, M.C.: Stochastic integration in Banach spaces and applications to parabolic evolution equations. Ph.D. thesis, TU Delft, Delft University of Technology (2006)
Veraar, M.C.: Continuous local martingales and stochastic integration in UMD Banach spaces. Stochastics 79(6), 601–618 (2007)
Veraar, M.C.: Randomized UMD Banach spaces and decoupling inequalities for stochastic integrals. Proc. Am. Math. Soc. 135(5), 1477–1486 (2007)
Veraar, M.C., Yaroslavtsev, I.S.: Cylindrical continuous martingales and stochastic integration in infinite dimensions. Electron. J. Probab., 21: Paper No. 59, 53 (2016)
Veraar, M.C., Yaroslavtsev, I.S.: Pointwise properties of martingales with values in Banach function spaces. In: High Dimensional Probability VIII, pp. 321–340. Springer, Berlin (2019)
Weisz, F.: Martingale Hardy Spaces with Continuous Time. In Probability Theory and Applications, Volume 80 of Mathematics Application, pp. 47–75. Kluwer Academic Publication, Dordrecht (1992)
Yaroslavtsev, I.S.: Fourier multipliers and weak differential subordination of martingales in UMD Banach spaces. Studia Math. 243(3), 269–301 (2018)
Yaroslavtsev, I.S.: Local characteristics and tangency of vector-valued martingales (2019). arXiv:1907.11588
Yaroslavtsev, I.S.: Martingale decompositions and weak differential subordination in UMD Banach spaces. Bernoulli 25(3), 1659–1689 (2019)
Yaroslavtsev, I.S.: On the martingale decompositions of Gundy, Meyer, and Yoeurp in infinite dimensions. Ann. Inst. Henri Poincaré Probab. Stat. 55(4), 1988–2018 (2019)
Zaanen, A.C.: Integration. North-Holland Publishing Co., Amsterdam; Interscience Publishers Wiley, New York (1967)
Acknowledgements
The author would like to thank Mark Veraar and Jan van Neerven for inspiring conversations, careful reading of the paper, and useful suggestions; Adam Osękowski for discussing inequality (7.13); Stanisław Kwapień for his hint on Proposition 3.13; Emiel Lorist for his helpful comments on Banach function spaces (see Section 8); and Ben Goldys, Carlo Marinelli, and the anonymous referees for their useful suggestions.
Funding
Open Access funding provided by Projekt DEAL.
Communicated by M. Hairer.
Yaroslavtsev, I. Burkholder–Davis–Gundy Inequalities in UMD Banach Spaces. Commun. Math. Phys. 379, 417–459 (2020). https://doi.org/10.1007/s00220-020-03845-7