On Maximal Inequalities for Purely Discontinuous Martingales in Infinite Dimensions

  • Carlo Marinelli
  • Michael Röckner
Chapter
Part of the Lecture Notes in Mathematics book series (LNM, volume 2123)

Abstract

The purpose of this paper is to give a survey of a class of maximal inequalities for purely discontinuous martingales, as well as for stochastic integrals and convolutions with respect to Poisson random measures, in infinite-dimensional spaces. Such maximal inequalities are important in the study of stochastic partial differential equations with noise of jump type.

Keywords

Stochastic Integral · Stochastic Partial Differential Equation · Maximal Inequality · Poisson Random Measure · Stochastic Convolution

1 Introduction

The purpose of this work is to collect several proofs, in part revisited and extended, of a class of maximal inequalities for stochastic integrals with respect to compensated random measures, including Poissonian integrals as a special case. The precise formulation of these inequalities can be found in Sects. 3–5 below. Their main advantage over the maximal inequalities of Burkholder, Davis and Gundy is that their right-hand side is expressed in terms of predictable “ingredients”, rather than in terms of the quadratic variation. Since our main motivation is the application to stochastic partial differential equations (SPDE), in particular to questions of existence, uniqueness, and regularity of solutions (cf. [25, 26, 27, 28, 29]), we focus on processes in continuous time taking values in infinite-dimensional spaces. Corresponding estimates for finite-dimensional processes have been used in many areas, for instance in connection with Malliavin calculus for processes with jumps, flow properties of solutions to SDEs, and numerical schemes for Lévy-driven SDEs (see e.g. [2, 16, 18]). Very recent extensions to vector-valued settings have been used to develop the theory of stochastic integration with jumps in (certain) Banach spaces (see [8] and references therein).

We have tried to reconstruct the historical developments around this class of inequalities (an investigation which led us to quite a few surprises), providing relevant references, and we hope that our account can at least serve to correct some terminology that seems, from a historical point of view, inappropriate. In fact, while we refer to Sect. 6 below for details, it seems important to remark already at this stage that the estimates which we termed “Bichteler-Jacod’s inequalities” in our previous article [27] should probably more rightfully have been called “Novikov’s inequalities”, in recognition of the contribution [33].

Let us conclude this introductory section with a brief outline of the remaining content: after fixing some notation and collecting a few elementary (but useful) results in Sect. 2, we state and prove several upper and lower bounds for purely discontinuous Hilbert-space-valued continuous-time martingales in Sect. 3. We actually present several proofs, adapting, simplifying, and extending arguments of the existing literature. The proofs in Sects. 3.2 and 3.3 might be, at least in part, new. On the issue of who proved what and when, however, we refer to the (hopefully) comprehensive discussion in Sect. 6. Section 4 deals with \(L_{q}\)-valued processes that can be written as stochastic integrals with respect to compensated Poisson random measures. Unfortunately, to keep this survey within a reasonable length, it has not been possible to reproduce the proof, for which we refer to the original contribution [8]. The (partial) extension to the case of stochastic convolutions is discussed in Sect. 5.

2 Preliminaries

Let \((\varOmega,\mathcal{F}, \mathbb{F} = (\mathcal{F}_{t})_{t\geq 0}, \mathbb{P})\) be a filtered probability space satisfying the “usual” conditions, on which all random elements will be defined, and H a real (separable) Hilbert space with norm \(\Vert \cdot \Vert\). If ξ is an E-valued random variable, with E a normed space, and p > 0, we shall use the notation
$$\displaystyle{\Vert \xi \Vert _{\mathbb{L}_{p}(E)}:={\bigl ( \mathbb{E}\Vert \xi \Vert _{E}^{p}\bigr )}^{1/p}.}$$
Let μ be a random measure on a measurable space \((Z,\mathcal{Z})\), with dual predictable projection (compensator) ν. We shall use throughout the paper the symbol M to denote a martingale of the type \(M = g\star \bar{\mu }\), where \(\bar{\mu }:=\mu -\nu\) and g is a vector-valued (predictable) integrand such that the stochastic integral
$$\displaystyle{(g\star \bar{\mu })_{t}:=\int _{(0,t]}\!\int _{Z}g(s,z)\,\bar{\mu }(\mathit{ds},\mathit{dz})}$$
is well defined. We shall deal only with the case that g (hence M) takes values in H or in an L q space. Integrals with respect to μ, ν and \(\bar{\mu }\) will often be written in abbreviated form, e.g. \(\int _{0}^{t}g\,d\bar{\mu }:= (g\star \bar{\mu })_{t}\) and \(\int g\,d\bar{\mu }:= (g\star \bar{\mu })_{\infty }\). If M is H-valued, the following well-known identities hold for the quadratic variation [M, M] and the Meyer process \(\langle M,M\rangle\):
$$\displaystyle{[M,M]_{T} =\sum _{s\leq T}\Vert \varDelta M_{s}\Vert ^{2} =\int _{ 0}^{T}\Vert g\Vert ^{2}\,d\mu,\qquad \langle M,M\rangle _{ T} =\int _{ 0}^{T}\Vert g\Vert ^{2}\,d\nu }$$
for any stopping time T. Moreover, we shall need the fundamental Burkholder-Davis-Gundy’s (BDG) inequality:
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \eqsim\Vert [M,M]_{\infty }^{1/2}\Vert _{ \mathbb{L}_{p}}\qquad \forall p \in [1,\infty [,}$$
where \(M_{\infty }^{{\ast}}:=\sup _{t\geq 0}\Vert M_{t}\Vert\). An expression of the type \(a \lesssim b\) means that there exists a (positive) constant N such that a ≤ Nb. If N depends on the parameters \(p_{1},\ldots,p_{n}\), we shall write \(a \lesssim _{p_{1},\ldots,p_{n}}b\). Moreover, if \(a \lesssim b\) and \(b \lesssim a\), we shall write \(a \eqsim b\).

The following lemma about (Fréchet) differentiability of powers of the norm of a Hilbert space is elementary and its proof is omitted.

Lemma 1

Let \(\phi: H \rightarrow \mathbb{R}\) be defined as \(\phi: x\mapsto \Vert x\Vert ^{p}\) , with p > 0. Then \(\phi \in C^{\infty }(H\setminus \{0\})\) , with first and second Fréchet derivatives
$$\displaystyle\begin{array}{rcl} \phi ^{{\prime}}(x):\eta \mapsto p\Vert x\Vert ^{p-2}\langle x,\eta \rangle,& &{}\end{array}$$
(1)
$$\displaystyle\begin{array}{rcl} \phi ^{{\prime\prime}}(x): (\eta,\zeta )\mapsto p(p - 2)\Vert x\Vert ^{p-4}\langle x,\eta \rangle \langle x,\zeta \rangle +p\Vert x\Vert ^{p-2}\langle \eta,\zeta \rangle.& &{}\end{array}$$
(2)
In particular, \(\phi \in C^{1}(H)\) if p > 1, and \(\phi \in C^{2}(H)\) if p > 2.

It should be noted that, here and in the following, for \(p \in \left [1,2\right [\) and \(p \in \left [2,4\right [\), the linear form \(\Vert x\Vert ^{p-2}\langle x,\cdot \rangle\) and the bilinear form \(\Vert x\Vert ^{p-4}\langle x,\cdot \rangle \langle x,\cdot \rangle\), respectively, have to be interpreted as the zero form if x = 0.
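As an illustration (not part of the original text), formula (1) can be checked numerically by finite differences in a finite-dimensional stand-in for H; all names below (`phi`, `dphi_formula`, `dphi_fd`) and the chosen test points are ours.

```python
# Finite-difference sanity check of the first Frechet derivative in Lemma 1:
# for phi(x) = ||x||^p on R^3 (a stand-in for H), formula (1) reads
#   phi'(x)(eta) = p * ||x||^(p-2) * <x, eta>.
import math

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def phi(x, p):
    return norm(x) ** p

def dphi_formula(x, eta, p):
    # right-hand side of (1)
    return p * norm(x) ** (p - 2) * dot(x, eta)

def dphi_fd(x, eta, p, h=1e-6):
    # symmetric difference quotient along the direction eta
    xp = [c + h * e for c, e in zip(x, eta)]
    xm = [c - h * e for c, e in zip(x, eta)]
    return (phi(xp, p) - phi(xm, p)) / (2 * h)

x, eta = [1.0, -2.0, 0.5], [0.3, 0.1, -0.7]
for p in (1.5, 3.0, 4.0):
    assert abs(dphi_formula(x, eta, p) - dphi_fd(x, eta, p)) < 1e-4
```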

The estimate contained in the following lemma is simple but perhaps not entirely trivial.

Lemma 2

Let 1 ≤ p ≤ 2. One has, for any x,y ∈ H,
$$\displaystyle{ 0 \leq \Vert x + y\Vert ^{p} -\Vert x\Vert ^{p} - p\Vert x\Vert ^{p-2}\langle x,y\rangle \lesssim _{ p}\Vert y\Vert ^{p}. }$$
(3)

Proof

We can clearly assume x, y ≠ 0, otherwise (3) trivially holds. Since the function \(\phi: x\mapsto \Vert x\Vert ^{p}\) is convex and Fréchet differentiable on \(H\setminus \{0\}\) for all p ≥ 1, one has
$$\displaystyle{\phi (x + y) -\phi (x) \geq \langle \nabla \phi (x),y\rangle,}$$
hence, by (1),
$$\displaystyle{\Vert x + y\Vert ^{p} -\Vert x\Vert ^{p} - p\Vert x\Vert ^{p-2}\langle x,y\rangle \geq 0.}$$
To prove the upper bound we distinguish two cases: if \(\Vert x\Vert \leq 2\Vert y\Vert\), it is immediately seen that (3) is true; if \(\Vert x\Vert > 2\Vert y\Vert\), Taylor’s formula applied to the function \([0,1] \ni t\mapsto \Vert x + ty\Vert ^{p}\) implies
$$\displaystyle{\Vert x + y\Vert ^{p} -\Vert x\Vert ^{p} - p\Vert x\Vert ^{p-2}\langle x,y\rangle \lesssim _{ p}\Vert x +\theta y\Vert ^{p-2}\Vert y\Vert ^{2}}$$
for some θ ∈ ]0, 1[ (in particular x +θ y ≠ 0). Moreover, we have
$$\displaystyle{\Vert x +\theta y\Vert \geq \Vert x\Vert -\Vert y\Vert > 2\Vert y\Vert -\Vert y\Vert =\Vert y\Vert,}$$
hence, since p − 2 ≤ 0, \(\Vert x +\theta y\Vert ^{p-2} \leq \Vert y\Vert ^{p-2}\), so that the right-hand side of the previous display is bounded by a constant (depending only on p) times \(\Vert y\Vert ^{p}\). □ 
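The inequality (3) lends itself to a quick numerical sanity check; the sketch below (ours, not from the paper) samples random points in R^4 and uses the deliberately crude constant N = 10, which is not claimed to be sharp.

```python
# Numerical sanity check of inequality (3) in Lemma 2 on R^4: for 1 <= p <= 2,
#   0 <= ||x+y||^p - ||x||^p - p ||x||^(p-2) <x, y>  <=  N ||y||^p,
# tested here with the crude (non-sharp) constant N = 10.
import math
import random

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gap(x, y, p):
    # left-hand side of (3)
    return norm([a + b for a, b in zip(x, y)]) ** p \
        - norm(x) ** p - p * norm(x) ** (p - 2) * dot(x, y)

rng = random.Random(0)
for p in (1.0, 1.5, 2.0):
    for _ in range(1000):
        x = [rng.uniform(-5, 5) for _ in range(4)]
        y = [rng.uniform(-5, 5) for _ in range(4)]
        g = gap(x, y, p)
        assert g >= -1e-9                      # convexity lower bound
        assert g <= 10 * norm(y) ** p + 1e-9   # upper bound with N = 10
```

For p = 2 the gap is exactly \(\Vert y\Vert^2\), which is a useful special case to keep in mind.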

For the purposes of the following lemma only, let \((X,\mathcal{A},m)\) be a measure space, and denote \(L_{p}(X,\mathcal{A},m)\) simply by \(L_{p}\).

Lemma 3

Let 1 < q < p. For any α ≥ 0, one has
$$\displaystyle{\Vert f\Vert _{L_{q}}^{\alpha } \leq \Vert f\Vert _{L_{1}}^{\alpha } +\Vert f\Vert _{L_{p}}^{\alpha }.}$$

Proof

By a well-known consequence of Hölder’s inequality one has
$$\displaystyle{\Vert f\Vert _{L_{q}} \leq \Vert f\Vert _{L_{1}}^{r}\,\Vert f\Vert _{ L_{p}}^{1-r},}$$
for some 0 < r < 1 (determined by \(1/q = r + (1 - r)/p\)). Raising to the power α and applying Young’s inequality (see e.g. [13, Sect. 4.8]) with conjugate exponents \(s:= 1/r\) and \(s^{{\prime}}:= 1/(1 - r)\) yields
$$\displaystyle{\Vert f\Vert _{L_{q}}^{\alpha } \leq \Vert f\Vert _{L_{1}}^{r\alpha }\,\Vert f\Vert _{ L_{p}}^{(1-r)\alpha } \leq r\Vert f\Vert _{ L_{1}}^{\alpha } + (1 - r)\Vert f\Vert _{L_{p}}^{\alpha } \leq \Vert f\Vert _{L_{1}}^{\alpha } +\Vert f\Vert _{L_{p}}^{\alpha }.}$$
 □ 
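Lemma 3 is also easy to probe numerically on a finite measure space; the check below is an illustrative sketch of ours (function name `lp_norm` and all parameter grids are assumptions, not from the paper).

```python
# Sanity check of Lemma 3 on a finite measure space X = {0,...,n-1} with
# weights m_i: for 1 < q < p and alpha > 0,
#   ||f||_{L_q}^alpha <= ||f||_{L_1}^alpha + ||f||_{L_p}^alpha.
import random

def lp_norm(f, m, p):
    # L_p norm of f on the weighted finite measure space (f: values, m: weights)
    return sum(w * abs(v) ** p for v, w in zip(f, m)) ** (1.0 / p)

rng = random.Random(1)
for _ in range(500):
    n = rng.randint(1, 8)
    f = [rng.uniform(-3, 3) for _ in range(n)]
    m = [rng.uniform(0.1, 5) for _ in range(n)]
    for (q, p) in ((1.5, 2.0), (2.0, 4.0), (3.0, 7.5)):
        for alpha in (0.5, 1.0, 2.0):
            lhs = lp_norm(f, m, q) ** alpha
            rhs = lp_norm(f, m, 1) ** alpha + lp_norm(f, m, p) ** alpha
            assert lhs <= rhs + 1e-9
```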

3 Inequalities for Martingales with Values in Hilbert Spaces

The following domination inequality, due to Lenglart [21], will be used several times.

Lemma 4

Let X and A be a positive adapted right-continuous process and an increasing predictable process, respectively, such that \(\mathbb{E}[X_{T}\vert \mathcal{F}_{0}] \leq \mathbb{E}[A_{T}\vert \mathcal{F}_{0}]\) for any bounded stopping time T. Then one has
$$\displaystyle{\mathbb{E}(X_{\infty }^{{\ast}})^{p} \lesssim _{ p}\mathbb{E}A_{\infty }^{p}\qquad \forall p \in ]0,1[.}$$

Theorem 1

Let α ∈ [1,2]. One has
$$\displaystyle{ \mathbb{E}(M_{\infty }^{{\ast}})^{p} \lesssim _{\alpha,p}\left \{\begin{array}{@{}l@{\quad }l@{}} \mathbb{E}{\biggl (\int \Vert g\Vert ^{\alpha }\,d\nu \biggr )}^{p/\alpha } \quad &\quad \forall p \in \left ]0,\alpha \right ], \\ \mathbb{E}{\biggl (\int \Vert g\Vert ^{\alpha }\,d\nu \biggr )}^{p/\alpha } + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu \quad &\quad \forall p \in \left [\alpha,\infty \right [, \end{array} \right. }$$
(BJ)
and
$$\displaystyle{ \mathbb{E}(M_{\infty }^{{\ast}})^{p} \gtrsim _{ p}\mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu \qquad \forall p \in \left [2,\infty \right [. }$$
(4)

Sometimes we shall use the notation BJ α, p to denote the inequality BJ with parameters α and p.

Several proofs of BJ will be given below. Before doing that, a few remarks are in order. Choosing α = 2 and α = p, respectively, one obtains the probably more familiar expressions
$$\displaystyle{\mathbb{E}(M_{\infty }^{{\ast}})^{p} \lesssim _{ p}\left \{\begin{array}{@{}l@{\quad }l@{}} \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} \quad &\quad \forall p \in \left ]0,2\right ], \\ \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu \quad &\quad \forall p \in [1,2], \\ \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu \quad &\quad \forall p \in \left [2,\infty \right [. \end{array} \right.}$$
In more compact notation, BJ may equivalently be written as
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \lesssim _{\alpha,p}\left \{\begin{array}{@{}l@{\quad }l@{}} \Vert g\Vert _{\mathbb{L}_{p}(L_{\alpha }(\nu ))} \quad &\quad \forall p \in \left ]0,\alpha \right ], \\ \Vert g\Vert _{\mathbb{L}_{p}(L_{\alpha }(\nu ))} +\Vert g\Vert _{\mathbb{L}_{p}(L_{p}(\nu ))}\quad &\quad \forall p \in \left [\alpha,\infty \right [, \end{array} \right.}$$
where
$$\displaystyle{\Vert g\Vert _{\mathbb{L}_{p}(L_{\alpha }(\nu ))}:=\Vert \Vert g\Vert _{L_{\alpha }(\nu )}\Vert _{\mathbb{L}_{p}},\qquad \Vert g\Vert _{L_{\alpha }(\nu )}:={\biggl (\int \Vert g\Vert ^{\alpha }\,d\nu \biggr )}^{1/\alpha }.}$$
This notation is convenient but slightly abusive, as it is not standard (nor clear how) to define L p spaces with respect to a random measure. However, if μ is a Poisson measure, then ν is “deterministic” (i.e. it does not depend on ω ∈ Ω), and the above notation is thus perfectly lawful. In particular, if ν is deterministic, it is rather straightforward to see that the above estimates imply
$$\displaystyle\begin{array}{rcl} & & \Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \lesssim _{p}\inf _{g_{1}+g_{2}=g}\Vert g_{1}\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))} +\Vert g_{2}\Vert _{\mathbb{L}_{p}(L_{p}(\nu ))} {}\\ & & \quad =:\Vert g\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))+\mathbb{L}_{p}(L_{p}(\nu ))},\qquad 1 \leq p \leq 2, {}\\ \end{array}$$
as well as
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \lesssim _{p}\max {\Bigl (\Vert g\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))},\Vert g\Vert _{\mathbb{L}_{p}(L_{p}(\nu ))}\Bigr )} =:\Vert g\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))\cap \mathbb{L}_{p}(L_{p}(\nu ))},\quad p \geq 2}$$
(for the notions of sum and intersection of Banach spaces see e.g. [17]). Moreover, since the dual space of \(\mathbb{L}_{p}(L_{2}(\nu )) \cap \mathbb{L}_{p}(L_{p}(\nu ))\) is \(\mathbb{L}_{p^{{\prime}}}(L_{2}(\nu )) + \mathbb{L}_{p^{{\prime}}}(L_{p^{{\prime}}}(\nu ))\) for any \(p \in \left [1,\infty \right [\), where \(1/p + 1/p^{{\prime}} = 1\), by a duality argument one can obtain the lower bound
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \gtrsim \Vert g\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))+\mathbb{L}_{p}(L_{p}(\nu ))}\qquad \forall p \in ]1,2].}$$
One thus has
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \eqsim _{p}\left \{\begin{array}{@{}l@{\quad }l@{}} \Vert g\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))+\mathbb{L}_{p}(L_{p}(\nu ))}\quad &\quad \forall p \in ]1,2], \\ \Vert g\Vert _{\mathbb{L}_{p}(L_{2}(\nu ))\cap \mathbb{L}_{p}(L_{p}(\nu ))}\quad &\quad \forall p \in [2,\infty [. \end{array} \right.}$$
By virtue of the Lévy-Itô decomposition and of the BDG inequality for stochastic integrals with respect to Wiener processes, the above maximal inequalities admit corresponding versions for stochastic integrals with respect to Lévy processes (cf. [15, 25]). We do not dwell on details here.
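Before turning to the proofs, it may help to see (BJ) in the simplest conceivable instance. The following toy computation is ours and not taken from the paper: take \(H = \mathbb{R}\), μ a Poisson random measure on \((0,1] \times \{z_0\}\) with intensity λ, and g ≡ 1, so that \(M_\infty = N -\lambda\) with N ∼ Poisson(λ); for p ≥ 2, BJ then reduces to \(\mathbb{E}\vert N -\lambda \vert^p \lesssim_p \lambda^{p/2} +\lambda\). The helper `centered_poisson_abs_moment` and the crude constant 10 are assumptions of this sketch.

```python
# Toy instance of (BJ): E|N - lam|^p <= C_p * (lam^{p/2} + lam) for
# N ~ Poisson(lam) and p >= 2, with the crude constant C_p = 10.
# The left-hand side is evaluated exactly via a far-truncated series.
import math

def centered_poisson_abs_moment(lam, p):
    # E|N - lam|^p for N ~ Poisson(lam), truncating deep in the tail
    kmax = int(lam + 30 * math.sqrt(lam) + 60)
    logpmf = -lam  # log P(N = 0)
    total = 0.0
    for k in range(kmax + 1):
        total += math.exp(logpmf) * abs(k - lam) ** p
        logpmf += math.log(lam) - math.log(k + 1)  # advance to log P(N = k+1)
    return total

for lam in (0.1, 1.0, 10.0, 100.0):
    for p in (2.0, 3.0, 4.0):
        assert centered_poisson_abs_moment(lam, p) <= 10 * (lam ** (p / 2) + lam)
```

For p = 2 the left-hand side is exactly λ (the Poisson variance), and for p = 4 it is \(\lambda + 3\lambda^2\), which makes both regimes of the right-hand side (small and large λ) visible.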

3.1 Proofs

We first prove the lower bound (4). The proof is taken from [24] (we recently learned, however, cf. Sect. 6 below, that the same argument already appeared in [10]).

Proof (Proof of (4))

Since p∕2 ≥ 1, one has

$$\displaystyle{\mathbb{E}[M,M]_{\infty }^{p/2} = \mathbb{E}{\Bigl (\sum \Vert \varDelta M\Vert ^{2}\Bigr )}^{p/2} \geq \mathbb{E}\sum \Vert \varDelta M\Vert ^{p} = \mathbb{E}\int \Vert g\Vert ^{p}\,d\mu = \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu,}$$
as well as, since \(x\mapsto x^{p/2}\) is convex,
$$\displaystyle{\mathbb{E}[M,M]_{\infty }^{p/2} \geq \mathbb{E}\langle M,M\rangle _{ \infty }^{p/2} = \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2},}$$
see e.g. [22]. Therefore, recalling the BDG inequality,
$$\displaystyle{\mathbb{E}(M_{\infty }^{{\ast}})^{p} \gtrsim \mathbb{E}[M,M]_{ \infty }^{p/2} \gtrsim \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu.}$$
 □ 
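The two elementary facts driving this lower bound are the superadditivity of \(t\mapsto t^{p/2}\) on positive reals (used in the first display) and the convexity of the same map (used in the second). A quick numerical check of both, ours and purely illustrative:

```python
# Elementary steps behind the lower bound (4): for r = p/2 >= 1 and a_i >= 0,
#   (sum a_i)^r >= sum a_i^r        (superadditivity of t -> t^r),
#   E X^r >= (E X)^r                (Jensen, here for a finite uniform law).
import random

rng = random.Random(2)
for p in (2.0, 3.0, 5.0):
    r = p / 2
    for _ in range(500):
        a = [rng.uniform(0, 4) for _ in range(rng.randint(1, 10))]
        # superadditivity, as used to bound E[M,M]^{p/2} below by E sum ||dM||^p
        assert sum(a) ** r >= sum(x ** r for x in a) - 1e-9
        # Jensen's inequality for X uniform on the sample a
        mean = sum(a) / len(a)
        assert sum(x ** r for x in a) / len(a) >= mean ** r - 1e-9
```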

We now give several alternative arguments for the upper bounds.

The first proof we present is based on Itô’s formula and Lenglart’s domination inequality. It does not rely, in particular, on the BDG inequality, and it is probably, in this sense, the most elementary.

Proof (First Proof of BJ)

Let α ∈ ]1, 2], and \(\phi: H \ni x\mapsto \Vert x\Vert ^{\alpha } = h(\Vert x\Vert ^{2})\), with \(h: y\mapsto y^{\alpha /2}\). Furthermore, let \((h_{n})_{n\in \mathbb{N}}\) be a sequence of functions of class \(C_{c}^{\infty }(\mathbb{R})\) such that \(h_{n} \rightarrow h\) pointwise, and define \(\phi _{n}: x\mapsto h_{n}(\Vert x\Vert ^{2})\), so that \(\phi _{n} \in C_{b}^{2}(H)\). Itô’s formula (see e.g. [31]) then yields
$$\displaystyle{\phi _{n}(M_{\infty }) =\int _{ 0}^{\infty }\phi _{ n}^{{\prime}}(M_{ -})\,\mathit{dM} +\sum {\bigl (\phi _{n}(M_{-} +\varDelta M) -\phi _{n}(M_{-}) -\phi _{n}^{{\prime}}(M_{ -})\varDelta M\bigr )}.}$$
Taking expectation and passing to the limit as \(n \rightarrow \infty \), one has, by estimate (3) and the dominated convergence theorem,
$$\displaystyle\begin{array}{rcl} \mathbb{E}\Vert M_{\infty }\Vert ^{\alpha }& &\leq \mathbb{E}\sum {\bigl (\Vert M_{ -} +\varDelta M\Vert ^{\alpha } -\Vert M_{ -}\Vert ^{\alpha }-\alpha \Vert M_{ -}\Vert ^{\alpha -2}\langle M_{ -},\varDelta M\rangle \bigr )} {}\\ & & \lesssim _{\alpha }\mathbb{E}\sum \Vert \varDelta M\Vert ^{\alpha } = \mathbb{E}\int \Vert g\Vert ^{\alpha }\,d\mu = \mathbb{E}\int \Vert g\Vert ^{\alpha }\,d\nu, {}\\ \end{array}$$
which implies, by Doob’s inequality,
$$\displaystyle{\mathbb{E}(M_{\infty }^{{\ast}})^{\alpha } \lesssim _{\alpha }\mathbb{E}\int \Vert g\Vert ^{\alpha }\,d\nu.}$$
If α = 1 we cannot use Doob’s inequality, but we can argue by a direct calculation:
$$\displaystyle\begin{array}{rcl} \mathbb{E}M_{\infty }^{{\ast}} = \mathbb{E}\sup _{ t\geq 0}\Vert \int _{0}^{t}g\,d\bar{\mu }\Vert & \leq & \mathbb{E}\sup _{ t\geq 0}\Vert \int _{0}^{t}g\,d\mu \Vert + \mathbb{E}\sup _{ t\geq 0}\Vert \int _{0}^{t}g\,d\nu \Vert {}\\ & \leq & \mathbb{E}\sup _{t\geq 0}\int _{0}^{t}\Vert g\Vert \,d\mu + \mathbb{E}\sup _{ t\geq 0}\int _{0}^{t}\Vert g\Vert \,d\nu {}\\ & \leq & 2\mathbb{E}\int \Vert g\Vert \,d\nu. {}\\ \end{array}$$
An application of Lenglart’s domination inequality finishes the proof of the case α ∈ [1, 2], p ∈ ]0, α].
Let us now consider the case α = 2, p > 2. We apply Itô’s formula to a \(C_{b}^{2}\) approximation of \(x\mapsto \Vert x\Vert ^{p}\), as in the first part of the proof, then take expectation and pass to the limit, obtaining
$$\displaystyle{\mathbb{E}\Vert M_{\infty }\Vert ^{p} \leq \mathbb{E}\sum {\bigl (\Vert M_{ -} +\varDelta M\Vert ^{p} -\Vert M_{ -}\Vert ^{p} - p\Vert M_{ -}\Vert ^{p-2}\langle M_{ -},\varDelta M\rangle \bigr )}.}$$
Applying Taylor’s formula to the function \(t\mapsto \Vert M_{-} + t\varDelta M\Vert ^{p}\) we obtain, in view of (2),
$$\displaystyle\begin{array}{rcl} & & \Vert M_{-} +\varDelta M\Vert ^{p} -\Vert M_{ -}\Vert ^{p} - p\Vert M_{ -}\Vert ^{p-2}\langle M_{ -},\varDelta M\rangle {}\\ & & \quad = \frac{1} {2}p(p - 2)\Vert M_{-} +\theta \varDelta M\Vert ^{p-4}\langle M_{ -} +\theta \varDelta M,\varDelta M\rangle ^{2} {}\\ & & \qquad + \frac{1} {2}p\Vert M_{-} +\theta \varDelta M\Vert ^{p-2}\Vert \varDelta M\Vert ^{2} {}\\ & & \quad \leq \frac{1} {2}p(p - 1)\Vert M_{-} +\theta \varDelta M\Vert ^{p-2}\Vert \varDelta M\Vert ^{2}, {}\\ \end{array}$$
where \(\theta \equiv \theta _{s} \in \left ]0,1\right [\). Since \(\Vert M_{-} +\theta \varDelta M\Vert \leq \Vert M_{-}\Vert +\Vert \varDelta M\Vert\), we also have
$$\displaystyle{\Vert M_{-} +\theta \varDelta M\Vert ^{p-2} \lesssim _{ p}\Vert M_{-}\Vert ^{p-2} +\Vert \varDelta M\Vert ^{p-2} \leq (M_{ -}^{{\ast}})^{p-2} +\Vert \varDelta M\Vert ^{p-2}.}$$
Appealing to Doob’s inequality, one thus obtains
$$\displaystyle\begin{array}{rcl} \mathbb{E}{\bigl (M_{\infty }^{{\ast}}\bigr )}^{p} \lesssim _{ p}\mathbb{E}\Vert M_{\infty }\Vert ^{p}& \lesssim _{ p}& \mathbb{E}\sum {\bigl ((M_{-}^{{\ast}})^{p-2}\Vert \varDelta M\Vert ^{2} +\Vert \varDelta M\Vert ^{p}\bigr )} {}\\ & = & \mathbb{E}\int {\bigl ((M_{-}^{{\ast}})^{p-2}\Vert g\Vert ^{2} +\Vert g\Vert ^{p}\bigr )}\,d\mu {}\\ & = & \mathbb{E}\int {\bigl ((M_{-}^{{\ast}})^{p-2}\Vert g\Vert ^{2} +\Vert g\Vert ^{p}\bigr )}\,d\nu {}\\ & \leq & \mathbb{E}(M_{\infty }^{{\ast}})^{p-2}\int \Vert g\Vert ^{2}\,d\nu + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu. {}\\ \end{array}$$
By Young’s inequality in the form
$$\displaystyle{ab \leq \varepsilon a^{ \frac{p} {p-2} } + N(\varepsilon )b^{p/2},}$$
we are left with
$$\displaystyle{\mathbb{E}(M_{\infty }^{{\ast}})^{p} \leq \varepsilon N(p)\mathbb{E}(M_{ \infty }^{{\ast}})^{p} + N(\varepsilon,p)\mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu.}$$
The proof of the case p > α = 2 is completed choosing \(\varepsilon\) small enough.
We are thus left with the case α ∈ [1, 2[, p > α. Note that, by Lemma 3,
$$\displaystyle{\Vert \cdot \Vert _{L_{2}(\nu )} \leq \Vert \cdot \Vert _{L_{2}(\nu )} +\Vert \cdot \Vert _{L_{p}(\nu )} \lesssim \Vert \cdot \Vert _{L_{\alpha }(\nu )} +\Vert \cdot \Vert _{L_{p}(\nu )},}$$
hence the desired result follows immediately by the cases with α = 2 proved above. □ 

Remark 1

The proof of BJ 2, p , p ≥ 2, just given is a (minor) adaptation of the proof in [27], while the other cases are taken from [24]. However (cf. Sect. 6 below), essentially the same result with a very similar proof was already given by Novikov [33]. In the latter paper the author treats the finite-dimensional case, but the constants are explicitly dimension-free. Moreover, he deduces the case p < α from the case p = α using the extrapolation principle of Burkholder and Gundy [6], where we used instead Lenglart’s domination inequality. However, the proof of the latter is based on the former.

Proof (Second Proof of BJ α, p (p ≤ α))

An application of the BDG inequality to M, taking into account that α∕2 ≤ 1, yields
$$\displaystyle\begin{array}{rcl} \mathbb{E}(M_{T}^{{\ast}})^{\alpha }& \lesssim _{\alpha }& \mathbb{E}{\Bigl (\sum \nolimits _{s\leq T}\Vert \varDelta M\Vert ^{2}\Bigr )}^{\alpha /2} {}\\ & \leq & \mathbb{E}\sum \nolimits _{s\leq T}\Vert \varDelta M\Vert ^{\alpha } = \mathbb{E}{\bigl (\Vert g\Vert ^{\alpha } \star \mu \bigr )} _{T} = \mathbb{E}{\bigl (\Vert g\Vert ^{\alpha } \star \nu \bigr )} _{T} {}\\ \end{array}$$
for any stopping time T. The result then follows by Lenglart’s domination inequality. □ 

We are now going to present several proofs for the case p > α. As seen at the end of the first proof of BJ, it suffices to consider the case p > α = 2.

Proof (Second Proof of BJ 2, p (p > 2))

Let us show that BJ 2, 2p holds if BJ 2, p does: the identity
$$\displaystyle{[M,M] =\Vert g\Vert ^{2}\star \mu =\Vert g\Vert ^{2} \star \bar{\mu } +\Vert g\Vert ^{2}\star \nu,}$$
the BDG inequality, and BJ 2, p imply
$$\displaystyle\begin{array}{rcl} \mathbb{E}(M_{\infty }^{{\ast}})^{2p}& \lesssim _{ p}& \mathbb{E}[M,M]_{\infty }^{p} \lesssim \mathbb{E}\big\vert {\bigl (\Vert g\Vert ^{2} \star \bar{\mu }\bigr )} _{ \infty }\big\vert ^{p} + \mathbb{E}{\bigl (\Vert g\Vert ^{2} \star \nu \bigr )}_{ \infty }^{p} \\ & \lesssim _{p}& \mathbb{E}\int \Vert g\Vert ^{2p}\,d\nu + \mathbb{E}{\biggl (\int \Vert g\Vert ^{4}\,d\nu \biggr )}^{p/2} + \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{\frac{1} {2} \,2p} \\ & = & \mathbb{E}\Vert g\Vert _{L_{2p}(\nu )}^{2p} + \mathbb{E}\Vert g\Vert _{ L_{4}(\nu )}^{2p} + \mathbb{E}\Vert g\Vert _{ L_{2}(\nu )}^{2p}. {}\end{array}$$
(5)
Since 2 < 4 < 2p, one has, by Lemma 3,
$$\displaystyle{\Vert g\Vert _{L_{4}(\nu )}^{2p} \leq \Vert g\Vert _{ L_{2p}(\nu )}^{2p} +\Vert g\Vert _{ L_{2}(\nu )}^{2p},}$$
which immediately implies that BJ 2, 2p holds true. Let us now show that BJ 2, p implies BJ 2, 2p also for any p ∈ [1, 2]. Recalling that BJ 2, p does indeed hold for p ∈ [1, 2], this proves that BJ 2, p holds for all p ∈ [2, 4], hence for all p ≥ 2, thus completing the proof. In fact, completely similarly as above, one has, for any p ∈ [1, 2],
$$\displaystyle\begin{array}{rcl} \mathbb{E}(M_{\infty }^{{\ast}})^{2p}& \lesssim _{ p}& \mathbb{E}\big\vert {\bigl (\Vert g\Vert ^{2} \star \bar{\mu }\bigr )} _{ \infty }\big\vert ^{p} + \mathbb{E}{\bigl (\Vert g\Vert ^{2} \star \nu \bigr )}_{ \infty }^{p} {}\\ & \lesssim _{p}& \mathbb{E}\int \Vert g\Vert ^{2p}\,d\nu + \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{\frac{1} {2} \,2p}. {}\\ \end{array}$$
 □ 

Remark 2

The above proof, with p > 2, is adapted from [3], where the authors assume \(H = \mathbb{R}\) and p = 2 n , \(n \in \mathbb{N}\), mentioning that the extension to any p ≥ 2 can be obtained by an interpolation argument.

Proof (Third Proof of BJ 2, p (p > 2))

Let \(k \in \mathbb{N}\) be such that \(2^{k} \leq p < 2^{k+1}\). Applying the BDG inequality twice, one has
$$\displaystyle{\mathbb{E}\Vert (g\star \bar{\mu })_{\infty }\Vert ^{p} \lesssim _{ p}\mathbb{E}{\bigl (\Vert g\Vert ^{2} \star \mu \bigr )}_{ \infty }^{p/2} \lesssim _{ p}\mathbb{E}\big\vert {\bigl (\Vert g\Vert ^{2} \star \bar{\mu }\bigr )} _{ \infty }\big\vert ^{p/2} + \mathbb{E}{\bigl (\Vert g\Vert ^{2} \star \nu \bigr )}_{ \infty }^{p/2},}$$
where
$$\displaystyle{\mathbb{E}\big\vert {\bigl (\Vert g\Vert ^{2} \star \bar{\mu }\bigr )} _{ \infty }\big\vert ^{p/2} \lesssim _{ p}\mathbb{E}{\bigl (\Vert g\Vert ^{2} \star \mu \bigr )}_{ \infty }^{p/4} \lesssim _{ p}\mathbb{E}\big\vert {\bigl (\Vert g\Vert ^{4} \star \bar{\mu }\bigr )} _{ \infty }\big\vert ^{p/4} + \mathbb{E}{\bigl (\Vert g\Vert ^{4} \star \nu \bigr )}_{ \infty }^{p/4}.}$$
Iterating we are left with
$$\displaystyle{\mathbb{E}\Vert (g\star \bar{\mu })_{\infty }\Vert ^{p} \lesssim _{ p}\mathbb{E}{\bigl (\Vert g\Vert ^{2^{k+1} } \star \mu \bigr )}_{ \infty }^{p/2^{k+1} } +\sum _{ i=1}^{k}\mathbb{E}{\biggl (\int \Vert g\Vert ^{2^{i} }\,d\nu \biggr )}^{p/2^{i} },}$$
where, recalling that \(p/2^{k+1} < 1\),
$$\displaystyle\begin{array}{rcl} \mathbb{E}{\bigl (\Vert g\Vert ^{2^{k+1} } \star \mu \bigr )}_{ \infty }^{p/2^{k+1} }& =& \mathbb{E}{\Bigl (\sum \Vert \varDelta M\Vert ^{2^{k+1} }\Bigr )}^{p/2^{k+1} } {}\\ & \leq & \mathbb{E}\sum \Vert \varDelta M\Vert ^{p} = \mathbb{E}\int \Vert g\Vert ^{p}\,d\mu = \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu. {}\\ \end{array}$$
The proof is completed observing that, since \(2 \leq 2^{i} \leq p\) for all 1 ≤ i ≤ k, one has, by Lemma 3,
$$\displaystyle\begin{array}{rcl} \mathbb{E}{\biggl (\int \Vert g\Vert ^{2^{i} }\,d\nu \biggr )}^{p/2^{i} }& =& \mathbb{E}\Vert g\Vert _{L_{ 2^{i}}(\nu )}^{p} \leq \mathbb{E}\Vert g\Vert _{ L_{2}(\nu )}^{p} + \mathbb{E}\Vert g\Vert _{ L_{p}(\nu )}^{p} {}\\ & =& \mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu. {}\\ \end{array}$$
 □ 

Remark 3

The above proof, which can be seen as a variation of the previous one, is adapted from [37, Lemma 4.1] (which was translated to the H-valued case in [25]). In [37] the interpolation step at the end of the proof is obtained in a rather tortuous (but interesting) way, which is not reproduced here.

The next proof is adapted from [16].

Proof (Fourth Proof of BJ 2, p (p > 2))

Let us start again from the BDG inequality:
$$\displaystyle{\mathbb{E}(M_{\infty }^{{\ast}})^{p} \lesssim _{ p}\mathbb{E}[M,M]_{\infty }^{p/2}.}$$
Since [M, M] is a real, positive, increasing, purely discontinuous process with \(\varDelta [M,M] =\Vert \varDelta M\Vert ^{2}\), one has
$$\displaystyle\begin{array}{rcl} [M,M]_{\infty }^{p/2}& =& \sum {\bigl ([M,M]^{p/2} - [M,M]_{ -}^{p/2}\bigr )} {}\\ & =& \sum {\Bigl ({\bigl ([M,M]_{-} +\Vert \varDelta M\Vert ^{2}\bigr )}^{p/2} - [M,M]_{ -}^{p/2}\Bigr )}. {}\\ \end{array}$$
For any a, b ≥ 0, the mean value theorem applied to the function \(x\mapsto x^{p/2}\) yields the inequality
$$\displaystyle\begin{array}{rcl} (a + b)^{p/2} - a^{p/2}& =& (p/2)\xi ^{p/2-1}b \leq (p/2)(a + b)^{p/2-1}b {}\\ & \leq & (p/2)2^{p/2-1}(a^{p/2-1}b + b^{p/2}), {}\\ \end{array}$$
where \(\xi \in \left ]a,a + b\right [\), hence also
$$\displaystyle{{\bigl ([M,M]_{-} +\Vert \varDelta M\Vert ^{2}\bigr )}^{p/2} - [M,M]_{ -}^{p/2} \lesssim _{ p}[M,M]_{-}^{p/2-1}\Vert \varDelta M\Vert ^{2} +\Vert \varDelta M\Vert ^{p}.}$$
This in turn implies
$$\displaystyle\begin{array}{rcl} \mathbb{E}[M,M]_{\infty }^{p/2}& \lesssim _{ p}& \mathbb{E}\sum {\Bigl ([M,M]_{-}^{p/2-1}\Vert \varDelta M\Vert ^{2} +\Vert \varDelta M\Vert ^{p}\Bigr )} {}\\ & = & \mathbb{E}\int {\Bigl ([M,M]_{-}^{p/2-1}\Vert g\Vert ^{2} +\Vert g\Vert ^{p}\Bigr )}\,d\mu {}\\ & = & \mathbb{E}\int {\Bigl ([M,M]_{-}^{p/2-1}\Vert g\Vert ^{2} +\Vert g\Vert ^{p}\Bigr )}\,d\nu {}\\ & \leq & \mathbb{E}[M,M]_{\infty }^{p/2-1}\int \Vert g\Vert ^{2}\,d\nu + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu. {}\\ \end{array}$$
By Young’s inequality in the form
$$\displaystyle{a^{p/2-1}b \leq \varepsilon a^{p/2} + N(\varepsilon )b^{p/2},\qquad a,\,b \geq 0,}$$
one easily infers
$$\displaystyle{\mathbb{E}[M,M]_{\infty }^{p/2} \lesssim _{ p}\mathbb{E}{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2} + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu,}$$
thus concluding the proof. □ 

3.2 A (Too?) Sophisticated Proof

In this subsection we prove a maximal inequality valid for any H-valued local martingale M (that is, we do not assume that M is purely discontinuous), from which BJ 2, p , p > 2, follows immediately.

Theorem 2

Let M be any local martingale with values in H. One has, for any p ≥ 2,
$$\displaystyle{\mathbb{E}(M_{\infty }^{{\ast}})^{p} \lesssim _{ p}\mathbb{E}\langle M,M\rangle _{\infty }^{p/2} + \mathbb{E}{\bigl ((\varDelta M)_{ \infty }^{{\ast}}\bigr )}^{p}.}$$

Proof

We are going to use Davis’ decomposition (see [32] for a very concise proof in the case of real martingales, a detailed “transliteration” of which to the case of Hilbert-space-valued martingales can be found in [30]): setting \(S:= (\varDelta M)^{{\ast}}\), one has \(M = L + K\), where L and K are martingales satisfying the following properties:
  (i) \(\Vert \varDelta L\Vert \lesssim S_{-}\);
  (ii) K has integrable variation and \(K = K^{1} +\widetilde{ K^{1}}\), where \(\widetilde{K^{1}}\) is the predictable compensator of \(K^{1}\) and \(\int \vert dK^{1}\vert \lesssim S_{\infty }\).
Since \(M^{{\ast}}\leq L^{{\ast}} + K^{{\ast}}\), we have
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \leq \Vert L_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} +\Vert K_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}},}$$
where, by the BDG inequality, \(\Vert K_{\infty }^{{\ast}}\Vert _{\mathbb{L}_{p}} \lesssim _{p}\Vert [K,K]^{1/2}\Vert _{\mathbb{L}_{p}}\). Moreover, by the maximal inequality for martingales with predictably bounded jumps in [22, p. 37] and the elementary estimate \(\langle L,L\rangle ^{1/2} \leq \langle M,M\rangle ^{1/2} +\langle K,K\rangle ^{1/2}\), one has
$$\displaystyle\begin{array}{rcl} \Vert L_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}}& \lesssim _{p}& \Vert \langle L,L\rangle _{\infty }^{1/2}\Vert _{ \mathbb{L}_{p}} +\Vert S_{\infty }\Vert _{\mathbb{L}_{p}} {}\\ & \leq &\Vert \langle M,M\rangle _{\infty }^{1/2}\Vert _{ \mathbb{L}_{p}} +\Vert \langle K,K\rangle _{\infty }^{1/2}\Vert _{ \mathbb{L}_{p}} +\Vert (\varDelta M)_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}}. {}\\ \end{array}$$
Since p ≥ 2, the inequality between moments of a process and of its dual predictable projection in [22, Theoreme 4.1] yields \(\Vert \langle K,K\rangle ^{1/2}\Vert _{\mathbb{L}_{p}} \lesssim _{p}\Vert [K,K]^{1/2}\Vert _{\mathbb{L}_{p}}\). In particular, we are left with
$$\displaystyle{\Vert M_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} \lesssim _{p}\Vert \langle M,M\rangle _{\infty }^{1/2}\Vert _{ \mathbb{L}_{p}} +\Vert (\varDelta M)_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}} +\Vert [K,K]_{\infty }^{1/2}\Vert _{ \mathbb{L}_{p}}.}$$
Furthermore, applying a version of Stein’s inequality between moments of a process and of its predictable projection (see e.g. [30], and [39, p. 103] for the original formulation), one has, for p ≥ 2,
$$\displaystyle{\Vert [\widetilde{K^{1}},\widetilde{K^{1}}]^{1/2}\Vert _{ \mathbb{L}_{p}} \lesssim _{p}\Vert [K^{1},K^{1}]^{1/2}\Vert _{ \mathbb{L}_{p}},}$$
hence, recalling property (ii) above and that the quadratic variation of a process is bounded by its first variation, we are left with
$$\displaystyle\begin{array}{rcl} \Vert [K,K]^{1/2}\Vert _{ \mathbb{L}_{p}}& \leq &\Vert [K^{1},K^{1}]^{1/2}\Vert _{ \mathbb{L}_{p}} +\Vert [\widetilde{K^{1}},\widetilde{K^{1}}]^{1/2}\Vert _{ \mathbb{L}_{p}} {}\\ & \lesssim _{p}& \Vert [K^{1},K^{1}]^{1/2}\Vert _{ \mathbb{L}_{p}} \leq \Vert \int \vert dK^{1}\vert \Vert _{ \mathbb{L}_{p}} \lesssim \Vert (\varDelta M)_{\infty }^{{\ast}}\Vert _{ \mathbb{L}_{p}}. {}\\ \end{array}$$
 □ 
It is easily seen that Theorem 2 implies \(BJ_{2,p}\) (for p ≥ 2): in fact, one has
$$\displaystyle{\mathbb{E}{\bigl ((\varDelta M)^{{\ast}}\bigr )}^{p} \leq \mathbb{E}\sum \Vert \varDelta M\Vert ^{p} = \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu.}$$

Remark 4

The above proof is a simplified version of an argument from [24]. As we recently learned, however, a similar argument was given in [10]. As a matter of fact, their proof is somewhat shorter than ours, as they claim that [K, L] = 0. Unfortunately, we have not been able to prove this claim.

3.3 A Conditional Proof

The purpose of this subsection is to show that if \(BJ_{2,p}\), p ≥ 2, holds for real (local) martingales, then it also holds for (local) martingales with values in H. For this we are going to use Khinchine's inequality: let x ∈ H, let \(\{e_{k}\}_{k\in \mathbb{N}}\) be an orthonormal basis of H, and set \(x_{k}:=\langle x,e_{k}\rangle\). Then one has
$$\displaystyle{\Vert x\Vert ={\Bigl (\sum _{k}x_{k}^{2}\Bigr )}^{1/2} =\Vert \sum _{ k}x_{k}\varepsilon _{k}\Vert _{L_{2}(\bar{\varOmega })} \eqsim\Vert \sum _{k}x_{k}\varepsilon _{k}\Vert _{L_{p}(\bar{\varOmega })},}$$

where \((\bar{\varOmega },\bar{\mathcal{F}},\bar{ \mathbb{P}})\) is an auxiliary probability space, on which a sequence \((\varepsilon _{k})\) of i.i.d. Rademacher random variables is defined.
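As an illustration (not part of the argument), the identities and the equivalence above can be checked numerically in finite dimensions: for \(x \in \mathbb{R}^{d}\), the \(L_{2}(\bar{\varOmega })\) norm of \(\sum x_{k}\varepsilon _{k}\) reproduces \(\Vert x\Vert\) exactly, while for p = 4 the comparison constant can even be computed in closed form, since \(\mathbb{E}{\bigl (\sum x_{k}\varepsilon _{k}\bigr )}^{4} = 3{\bigl (\sum x_{k}^{2}\bigr )}^{2} - 2\sum x_{k}^{4}\). The sketch below (plain Python, with helper names of our own choosing) enumerates all sign patterns, so every moment is computed exactly rather than by simulation.

```python
import itertools
import math

def rademacher_lp_norm(x, p):
    # Exact L_p(Omega-bar) norm of sum_k x_k * eps_k, computed by
    # enumerating all 2^d equally likely Rademacher sign patterns.
    d = len(x)
    total = sum(
        abs(sum(s * xi for s, xi in zip(signs, x))) ** p
        for signs in itertools.product((-1.0, 1.0), repeat=d)
    )
    return (total / 2 ** d) ** (1.0 / p)

x = [0.7, -1.3, 0.2, 2.1, -0.5, 0.9]
l2 = math.sqrt(sum(xi * xi for xi in x))

# The L_2 norm of the Rademacher sum reproduces ||x|| exactly ...
assert abs(rademacher_lp_norm(x, 2) - l2) < 1e-12

# ... and the L_4 norm is equivalent to it: by the fourth-moment
# identity above, the ratio always lies in [1, 3^(1/4)].
ratio = rademacher_lp_norm(x, 4) / l2
assert 1.0 <= ratio <= 3 ** 0.25
print(ratio)
```

The enumeration is feasible only for small d, of course, but it makes the equivalence of moments tangible without any appeal to asymptotics.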

Writing
$$\displaystyle{M_{k}:=\langle M,e_{k}\rangle = g_{k}\star \bar{\mu },\qquad g_{k}:=\langle g,e_{k}\rangle,}$$
one has \(\sum _{k}M_{k}\varepsilon _{k} ={\Bigl (\sum _{k}g_{k}\varepsilon _{k}\Bigr )}\star \bar{\mu }\), hence Khinchine’s inequality, Tonelli’s theorem, and Theorem 1 for real martingales yield
$$\displaystyle\begin{array}{rcl} \mathbb{E}\Vert M\Vert ^{p}& \eqsim & \mathbb{E}\Vert {\Bigl (\sum g_{ k}\varepsilon _{k}\Bigr )} \star \bar{\mu }\Vert _{ L_{p}(\bar{\varOmega })}^{p} =\bar{ \mathbb{E}}\,\mathbb{E}{\biggl |{\Bigl (\sum g_{ k}\varepsilon _{k}\Bigr )} \star \bar{\mu }\biggr |} ^{p} {}\\ & \lesssim _{p}& \bar{\mathbb{E}}\,\mathbb{E}{\biggl (\int {\Bigl \vert\sum g_{k}\varepsilon _{k}\Bigr \vert}^{2}\,d\nu \biggr )}^{p/2} +\bar{ \mathbb{E}}\,\mathbb{E}\int {\Bigl \vert\sum g_{ k}\varepsilon _{k}\Bigr \vert}^{p}\,d\nu {}\\ & =: & I_{1} + I_{2}. {}\\ \end{array}$$
Tonelli's theorem, together with Minkowski's and Khinchine's inequalities, yields
$$\displaystyle\begin{array}{rcl} I_{1}& =& \mathbb{E}\,\bar{\mathbb{E}}{\biggl (\int {\Bigl \vert\sum g_{k}\varepsilon _{k}\Bigr \vert}^{2}\,d\nu \biggr )}^{p/2} = \mathbb{E}\Vert \int {\Bigl \vert\sum g_{ k}\varepsilon _{k}\Bigr \vert}^{2}\,d\nu \Vert _{ L_{p/2}(\bar{\varOmega })}^{p/2} {}\\ & \leq & \mathbb{E}{\biggl (\int \Vert {\Bigl \vert\sum g_{k}\varepsilon _{k}\Bigr \vert}^{2}\Vert _{ L_{p/2}(\bar{\varOmega })}\,d\nu \biggr )}^{p/2} {}\\ & =& \mathbb{E}{\biggl (\int \Vert \sum g_{k}\varepsilon _{k}\Vert _{L_{p}(\bar{\varOmega })}^{2}\,d\nu \biggr )}^{p/2} \eqsim{\biggl (\int \Vert g\Vert ^{2}\,d\nu \biggr )}^{p/2}. {}\\ \end{array}$$
Similarly, one has
$$\displaystyle{I_{2} = \mathbb{E}\int \Vert \sum g_{k}\varepsilon _{k}\Vert _{L_{p}(\bar{\varOmega })}^{p}\,d\nu \eqsim \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu.}$$
The proof is completed by appealing to Doob's inequality. □

Remark 5

This conditional proof has probably not appeared in published form, although the idea is contained in [36].

4 Inequalities for Poisson Stochastic Integrals with Values in L q Spaces

Even though there exist in the literature some maximal inequalities for stochastic integrals with respect to compensated Poisson random measures and Banach-space-valued integrands, here we limit ourselves to reporting on (very recent) two-sided estimates in the case of \(L_{q}\)-valued integrands. Throughout this section we assume that μ is a Poisson random measure, so that its compensator ν is of the form \(\mbox{ Leb} \otimes \nu _{0}\), where Leb stands for the one-dimensional Lebesgue measure and \(\nu _{0}\) is a (non-random) σ-finite measure on Z. Let \((X,\mathcal{A},n)\) be a measure space, and denote \(L_{q}\) spaces on X simply by \(L_{q}\), for any q ≥ 1. Moreover, let us introduce the following spaces, where \(p_{1},p_{2},p_{3} \in [1,\infty [\):

$$\displaystyle{L_{p_{1},p_{2},p_{3}}:= \mathbb{L}_{p_{1}}(L_{p_{2}}(\mathbb{R}_{+}\times Z \rightarrow L_{p_{3}}(X))),\quad \tilde{L}_{p_{1},p_{2}}:= \mathbb{L}_{p_{1}}(L_{p_{2}}(X \rightarrow L_{2}(\mathbb{R}_{+}\times Z))).}$$
Then one has the following result, due to Dirksen [8]:
$$\displaystyle{ {\Bigl (\mathbb{E}\sup _{t\geq 0}\Vert (g\star \bar{\mu })_{t}\Vert _{L_{q}}^{p}\Bigr )}^{1/p} \eqsim _{ p,q}\Vert g\Vert _{\mathcal{I}_{p,q}}, }$$
(6)
where
$$\displaystyle{ \mathcal{I}_{p,q}:= \left \{\begin{array}{@{}l@{\quad }l@{}} L_{p,p,q} + L_{p,q,q} +\tilde{ L}_{p,q}, \quad &\quad 1 < p \leq q \leq 2, \\ (L_{p,p,q} \cap L_{p,q,q}) +\tilde{ L}_{p,q},\quad &\quad 1 < q \leq p \leq 2, \\ L_{p,p,q} \cap (L_{p,q,q} +\tilde{ L}_{p,q}),\quad &\quad 1 < q < 2 \leq p, \\ L_{p,p,q} + (L_{p,q,q} \cap \tilde{ L}_{p,q}),\quad &\quad 1 < p < 2 \leq q, \\ (L_{p,p,q} + L_{p,q,q}) \cap \tilde{ L}_{p,q},\quad &\quad 2 \leq p \leq q, \\ L_{p,p,q} \cap L_{p,q,q} \cap \tilde{ L}_{p,q}, \quad &\quad 2 \leq q \leq p. \end{array} \right. }$$
(7)
The proof of this result is too long to be included here. We limit ourselves instead to briefly recalling the main “ingredients”: the core of the argument is to establish suitable extensions of the classical Rosenthal inequality
$$\displaystyle{\mathbb{E}\Big\vert \sum \xi _{k}\Big\vert ^{p} \lesssim _{ p}\max {\biggl (\mathbb{E}\sum \vert \xi _{k}\vert ^{p},{\Bigl ( \mathbb{E}\sum \vert \xi _{ k}\vert ^{2}\Bigr )}^{p/2}\biggr )},}$$
where p ≥ 2 and \(\xi = (\xi _{k})_{k}\) is any (finite) sequence of centered independent real random variables (see [38]). In particular, if \(\xi = (\xi _{k})_{k}\) is a finite sequence of independent centered random variables taking values in L q , one has
$$\displaystyle{ {\Bigl (\mathbb{E}\Vert \sum \xi _{k}\Vert _{L_{q}}^{p}\Bigr )}^{1/p} \eqsim _{ p,q}\Vert \xi \Vert _{s_{p,q}}\qquad \forall p,\,q \in ]1,\infty [, }$$
(8)
where the space \(s_{p,q}\) is defined by replacing, in the above definition of \(\mathcal{I}_{p,q}\), the spaces \(L_{p,q,q}\), \(L_{p,p,q}\) and \(\tilde{L}_{p,q}\) by the spaces \(D_{q,q}\), \(D_{p,q}\) and \(S_{p,q}\), respectively, with
$$\displaystyle{\Vert \xi \Vert _{D_{p,q}}:={\Bigl (\sum \mathbb{E}\Vert \xi _{k}\Vert _{L_{q}}^{p}\Bigr )}^{1/p},\qquad \Vert \xi \Vert _{ S_{p,q}}:=\Vert {\Bigl (\sum \mathbb{E}\vert \xi _{k}\vert ^{2}\Bigr )}^{1/2}\Vert _{ L_{q}}.}$$
The proof of the vector-valued Rosenthal inequality (8) combines in a clever and elegant way classical inequalities for sums of independent random variables in Banach spaces (see e.g. [7, 20]) with geometric properties of L q spaces (in particular in connection with the notions of type and cotype). An important role is also played by the duality of sums and intersections of Banach spaces. The maximal inequalities (6) for stochastic integrals of step processes with respect to compensated Poisson random measures are then implied by (8), via a simple argument based on decoupling techniques and on Doob’s inequality (for the decoupling approach to stochastic integration cf. [19]).
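For readers who want a concrete feel for the classical Rosenthal inequality above, the following small computation (an illustration only; all names are ours) verifies it exactly for p = 4, where independence and centering give \(\mathbb{E}{\bigl \vert \sum \xi _{k}\bigr \vert }^{4} =\sum \mathbb{E}\xi _{k}^{4} + 3\sum _{j\neq k}\mathbb{E}\xi _{j}^{2}\,\mathbb{E}\xi _{k}^{2}\), so the inequality holds with constant 4. The (finite) product distribution of a few centered two-point variables is enumerated exactly.

```python
import itertools

def centered_two_point(a, q):
    # A random variable taking value a with probability q and b with
    # probability 1-q, where b is chosen so that the mean is zero.
    b = -q * a / (1.0 - q)
    return [(a, q), (b, 1.0 - q)]

xis = [centered_two_point(a, q)
       for a, q in [(1.0, 0.3), (2.0, 0.5), (0.5, 0.7), (1.5, 0.4)]]

def exact_moment(p):
    # Exact E|xi_1 + ... + xi_n|^p over the finite product distribution.
    total = 0.0
    for outcome in itertools.product(*xis):
        prob, s = 1.0, 0.0
        for value, pr in outcome:
            prob *= pr
            s += value
        total += prob * abs(s) ** p
    return total

lhs = exact_moment(4)                                          # E|sum xi_k|^4
sum_p = sum(pr * abs(v) ** 4 for d in xis for v, pr in d)      # sum E|xi_k|^4
sum_2 = sum(pr * v ** 2 for d in xis for v, pr in d) ** 2      # (sum E xi_k^2)^2
assert lhs <= 4.0 * max(sum_p, sum_2)   # Rosenthal for p = 4, constant 4
```

The fourth-moment expansion shows where the two regimes on the right-hand side come from: the diagonal terms produce \(\sum \mathbb{E}\vert \xi _{k}\vert ^{4}\), the off-diagonal ones are dominated by \({\bigl (\sum \mathbb{E}\xi _{k}^{2}\bigr )}^{2}\).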

5 Inequalities for Stochastic Convolutions

In this section we show how one can extend, under certain assumptions, maximal inequalities from stochastic integrals to stochastic convolutions using dilations of semigroups. As is well known, stochastic convolutions are in general not semimartingales, hence establishing maximal inequalities for them is, in general, not an easy task. Usually one tries to approximate stochastic convolutions by processes that can be written as solutions to stochastic differential equations in either a Hilbert or a Banach space, for which one can (try to) obtain estimates using tools of stochastic calculus. As a final step, one tries to show, by establishing suitable convergence properties, that such estimates carry over to stochastic convolutions as well. At present it does not seem possible to claim that either of the two methods is superior to the other (cf., e.g., the discussion in [41]). We choose to concentrate on the dilation technique for its simplicity and elegance.

We shall say that a linear operator A on a Banach space E, such that − A is the infinitesimal generator of a strongly continuous semigroup S, is of class D if there exist a Banach space \(\bar{E}\), an isomorphic embedding \(\iota: E \rightarrow \bar{ E}\), a projection \(\pi:\bar{ E} \rightarrow \iota (E)\), and a strongly continuous bounded group \((U(t))_{t\in \mathbb{R}}\) on \(\bar{E}\) such that the following diagram commutes for all t > 0:

$$\displaystyle{\iota \circ S(t) =\pi \circ U(t) \circ \iota,}$$
i.e., S(t) is recovered by embedding E into \(\bar{E}\) via \(\iota\), letting the group U(t) act there, and projecting back onto \(\iota (E)\).

As far as we know there is no general characterization of operators of class D.4 Several sufficient conditions, however, are known.

We begin with the classical dilation theorem by Sz.-Nagy (see e.g. [40]).

Proposition 1

Let A be a linear m-accretive operator on a Hilbert space H. Then A is of class D.
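To make Sz.-Nagy's theorem concrete, consider the simplest instance \(H = \mathbb{C}\) (complex scalars, for the sake of this standard illustration, which is not taken from [40] verbatim) with A = 1, so that \(S(t) = e^{-t}\). Let μ be the Cauchy distribution \(d\mu (\lambda ) = \frac{d\lambda } {\pi (1 +\lambda ^{2})}\), whose characteristic function is \(e^{-\vert t\vert }\); set \(\bar{E}:= L_{2}(\mathbb{R},\mu )\), let \(\iota\) send a scalar to the corresponding constant function (an isometry, as μ is a probability measure), let \(\pi f:={\bigl (\int f\,d\mu \bigr )}\iota (1)\) be the orthogonal projection onto the constants, and let U(t) be multiplication by \(e^{\mathrm{i}t\lambda }\), a strongly continuous unitary group. Then, for every t > 0,
$$\displaystyle{\pi U(t)\iota (c) ={\Bigl ( c\int _{\mathbb{R}}e^{\mathrm{i}t\lambda }\,d\mu (\lambda )\Bigr )}\iota (1) = e^{-t}\,\iota (c) =\iota (S(t)c),}$$
so the diagram defining class D commutes.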

The next result, due to Fendler [11], is analogous to Sz.-Nagy’s dilation theorem in the context of L q spaces, although it requires an extra positivity assumption. Here and in the following X stands for a measure space and m for a measure on it.

Proposition 2

Let E = L q (X,m), with \(q \in \left ]1,\infty \right [\) . Assume that A is a linear densely defined m-accretive operator on E such that \(S(t):= e^{-tA}\) is positivity preserving for all t > 0. Then A is of class D, with \(\bar{E} = L_{q}(Y )\) , where Y is another measure space.

The following very recent result, due to Fröhlich and Weis [12], allows one to consider classes of operators that are not necessarily accretive (for many interesting examples, see e.g. [41]). For all unexplained notions of functional calculus for operators we refer to, e.g., [42].

Proposition 3

Let E = L q (X,m), with \(q \in \left ]1,\infty \right [\) , and assume that A is sectorial and admits a bounded \(H^{\infty }\) -calculus with \(\omega _{H^{\infty }}(A) <\pi /2\) . Then A is of class D, and one can choose \(\bar{E} = L_{q}([0,1] \times X,\mbox{ Leb} \otimes m)\) .

We are now going to show how certain maximal estimates for stochastic integrals yield maximal estimates for convolutions involving the semigroup generated by an operator of class D. As mentioned at the beginning of the section, the problem is that stochastic convolutions are not martingales, hence maximal inequalities for the latter class of processes cannot be directly used. The workaround presented here is, roughly speaking, based on the idea of embedding the stochastic convolution in the larger space \(\bar{E}\), where the semigroup S can be replaced by the group U, to the effect that inequalities for martingales can be applied. In particular, note that, since U is a strongly continuous group of contractions and the operator norm of π is less than or equal to one, we have
$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\sup _{t\geq 0}\Vert \int _{0}^{t}\!\!\!\int _{ Z}S(t - s)g(s,z)\,\bar{\mu }(\mathit{ds},\mathit{dz})\Vert _{E}^{p} \\ & & \quad = \mathbb{E}\sup _{t\geq 0}\Vert \pi \int _{0}^{t}\!\!\!\int _{ Z}U(t - s)\iota (g(s,z))\,\bar{\mu }(\mathit{ds},\mathit{dz})\Vert _{\bar{E}}^{p} \\ & & \quad = \mathbb{E}\sup _{t\geq 0}\Vert \pi U(t)\int _{0}^{t}\!\!\!\int _{ Z}U(-s)\iota (g(s,z))\,\bar{\mu }(\mathit{ds},\mathit{dz})\Vert _{\bar{E}}^{p} \\ & & \quad \leq \Vert \pi \Vert _{\infty }^{p}\;\sup _{ t\geq 0}\Vert U(t)\Vert _{\infty }^{p}\;\mathbb{E}\sup _{ t\geq 0}\Vert \int _{0}^{t}\!\!\!\int _{ Z}U(-s)\iota (g(s,z))\,\bar{\mu }(\mathit{ds},\mathit{dz})\Vert _{\bar{E}}^{p} \\ & & \quad \leq \mathbb{E}\sup _{t\geq 0}\Vert \int _{0}^{t}\!\!\!\int _{ Z}U(-s)\iota (g(s,z))\,\bar{\mu }(\mathit{ds},\mathit{dz})\Vert _{\bar{E}}^{p}, {}\end{array}$$
(9)
where \(\Vert \cdot \Vert _{\infty }\) denotes the operator norm. We have thus reduced the problem to finding a maximal estimate for a stochastic integral, although involving a different integrand and on a larger space.

If E is a Hilbert space we can proceed rather easily.

Proposition 4

Let A be of class D on a Hilbert space E. Then one has, for any α ∈ [1,2],

$$\displaystyle{\mathbb{E}\sup _{t\geq 0}\Vert \int _{0}^{t}S(t-\cdot )g\,d\bar{\mu }\Vert _{ E}^{p} \lesssim _{\alpha,p}\left \{\begin{array}{@{}l@{\quad }l@{}} \mathbb{E}{\biggl (\int \Vert g\Vert ^{\alpha }\,d\nu \biggr )}^{p/\alpha } \quad &\forall p \in \left ]0,\alpha \right ], \\ \mathbb{E}{\biggl (\int \Vert g\Vert ^{\alpha }\,d\nu \biggr )}^{p/\alpha } + \mathbb{E}\int \Vert g\Vert ^{p}\,d\nu \quad &\forall p \in \left [\alpha,\infty \right [. \end{array} \right.}$$

Proof

We consider only the case p > α, as the other one is actually simpler. The estimate \(BJ_{\alpha,p}\) and (9) yield
$$\displaystyle\begin{array}{rcl} & & \mathbb{E}\sup _{t\geq 0}\Vert \int _{0}^{t}S(t -\cdot )g\,d\bar{\mu }\Vert _{ E}^{p} {}\\ & & \quad \lesssim _{\alpha,p}\mathbb{E}\int \Vert U(-\cdot )\iota \circ g\Vert _{\bar{E}}^{p}\,d\nu + \mathbb{E}{\biggl (\int \Vert U(-\cdot )\iota \circ g\Vert _{\bar{ E}}^{\alpha }\,d\nu \biggr )}^{p/\alpha } {}\\ & & \quad \leq \mathbb{E}\int \Vert g\Vert _{E}^{p}\,d\nu + \mathbb{E}{\biggl (\int \Vert g\Vert _{ E}^{\alpha }\,d\nu \biggr )}^{p/\alpha }, {}\\ \end{array}$$
because U is a unitary group and the embedding \(\iota\) is isometric. □ 
If \(E = L_{q}(X)\), the transposition of maximal inequalities from stochastic integrals to stochastic convolutions is not so straightforward. In particular, (9) implies that the corresponding upper bounds will be functions of the norms of \(U(-\,\cdot )\,\iota \circ g\) in three spaces of the type \(L_{p,p,q}\), \(L_{p,q,q}\) and \(\tilde{L}_{p,q}\) (with X replaced by a different measure space Y, so that \(\bar{E} = L_{q}(Y )\)). In analogy to the previous proposition, it is not difficult to see that
$$\displaystyle{ \Vert U(-\,\cdot )\,\iota \circ g\Vert _{\mathbb{L}_{p_{ 1}}L_{p_{2}}(\mathbb{R}_{+}\times Z\rightarrow L_{p_{3}}(Y ))} \leq \Vert g\Vert _{\mathbb{L}_{p_{ 1}}L_{p_{2}}(\mathbb{R}_{+}\times Z\rightarrow L_{p_{3}}(X))}. }$$
(10)
However, estimating the norm of \(U(-\,\cdot )\,\iota \circ g\) in \(\tilde{L}_{p,q}(Y )\) in terms of the norm of g in \(\tilde{L}_{p,q}\) does not seem to be possible without further assumptions. Nonetheless, the following sub-optimal estimates can be obtained.

Proposition 5

Let A be of class D on \(E = L_{q}:= L_{q}(X)\) and μ a Poisson random measure. Then one has
$$\displaystyle{\mathbb{E}\sup _{t\geq 0}\Vert \int _{0}^{t}S(t -\cdot )g\,d\bar{\mu }\Vert _{ L_{q}}^{p} \lesssim _{ p,q}\Vert g\Vert _{\mathcal{J}_{p,q}},}$$
where
$$\displaystyle{\mathcal{J}_{p,q}:= \left \{\begin{array}{@{}l@{\quad }l@{}} L_{p,p,q} + L_{p,q,q}, \quad &\quad 1 < p \leq q \leq 2, \\ L_{p,p,q} \cap L_{p,q,q}, \quad &\quad 1 < q \leq p \leq 2, \\ L_{p,p,q} \cap L_{p,q,q}, \quad &\quad 1 < q < 2 \leq p, \\ L_{p,p,q} + (L_{p,q,q} \cap L_{p,2,q}),\quad &\quad 1 < p < 2 \leq q, \\ (L_{p,p,q} + L_{p,q,q}) \cap L_{p,2,q},\quad &\quad 2 \leq p \leq q, \\ L_{p,p,q} \cap L_{p,q,q} \cap L_{p,2,q}, \quad &\quad 2 \leq q \leq p. \end{array} \right.}$$

Proof

Note that, if q < 2, one has, by definition, \(\Vert \cdot \Vert _{\mathcal{I}_{p,q}} \leq \Vert \cdot \Vert _{\mathcal{J}_{p,q}}\) [where the spaces \(\mathcal{I}_{p,q}\) have been defined in (7)]; if q ≥ 2, by Minkowski's inequality,
$$\displaystyle{\Vert {\biggl (\int \vert g\vert ^{2}\,d\nu \biggr )}^{1/2}\Vert _{ L_{q}} =\Vert \int \vert g\vert ^{2}\,d\nu \Vert _{ L_{q/2}}^{1/2} \leq {\biggl (\int \Vert g\Vert _{ L_{q}}^{2}\,d\nu \biggr )}^{1/2},}$$
that is, \(\Vert \cdot \Vert _{\tilde{L}_{p,q}} \leq \Vert \cdot \Vert _{L_{p,2,q}}\). This implies \(\Vert \cdot \Vert _{\mathcal{I}_{p,q}} \leq \Vert \cdot \Vert _{\mathcal{J}_{p,q}}\) for all q ≥ 2, hence for all (admissible) values of p and q. Therefore (9) and the maximal estimate (6) yield the desired result. □
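The Minkowski step used in the proof, namely \(\Vert \cdot \Vert _{\tilde{L}_{p,q}} \leq \Vert \cdot \Vert _{L_{p,2,q}}\) for q ≥ 2, is easy to test numerically when both ν and the measure on X are counting measures on finite sets; the sketch below (our notation, not the paper's) checks it on random discrete data for q = 4, where the inequality reads \(\Vert {\bigl (\sum _{j}g_{j}^{2}\bigr )}^{1/2}\Vert _{\ell_{q}} \leq {\bigl (\sum _{j}\Vert g_{j}\Vert _{\ell_{q}}^{2}\bigr )}^{1/2}\).

```python
import random

def lq_norm(v, q):
    # l_q norm of a vector, i.e. L_q norm w.r.t. counting measure
    return sum(abs(x) ** q for x in v) ** (1.0 / q)

random.seed(0)
q = 4.0
for _ in range(200):
    # g_j in L_q(X) with |X| = 5; nu is counting measure on 3 points
    g = [[random.uniform(-1.0, 1.0) for _ in range(5)] for _ in range(3)]
    sq = [sum(gj[i] ** 2 for gj in g) for i in range(5)]    # pointwise sum of g_j^2
    lhs = lq_norm(sq, q / 2.0) ** 0.5                       # ||(sum g_j^2)^(1/2)||_q
    rhs = sum(lq_norm(gj, q) ** 2 for gj in g) ** 0.5       # (sum ||g_j||_q^2)^(1/2)
    assert lhs <= rhs + 1e-12
print("ok")
```

The check is just Minkowski's inequality in \(L_{q/2}\) applied to the functions \(g_{j}^{2}\), which is exactly how the displayed estimate is obtained.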

Remark 6

The above maximal inequalities for stochastic convolutions continue to hold if A is only quasi-m-accretive and g has compact support in time. In this case the inequality sign \(\lesssim _{p,q}\) has to be replaced by \(\lesssim _{p,q,\eta,T}\), where T is a finite time horizon. One simply has to repeat the same arguments using the m-accretive operator A +η I, for some η > 0.

6 Historical and Bibliographical Remarks

In this section we try to reconstruct, at least in part, the historical developments around the maximal inequalities presented above. Before doing that, however, let us briefly explain how we became interested in this class of maximal inequalities: the first-named author used in [25] a Hilbert-space version of a maximal inequality in [37] to prove well-posedness for a Lévy-driven SPDE arising in the modeling of the term structure of interest rates. The second-named author pointed out that such an inequality, possibly adapted to the more general case of integrals with respect to compensated Poisson random measures (rather than with respect to Lévy processes), was needed to solve a problem he was interested in, namely to establish regularity with respect to initial conditions of solutions to SPDEs with jumps: our joint efforts led to the results in [27], where we proved a slightly less general version of the inequality \(BJ_{2,p}\), p ≥ 2, using an argument involving only Itô's formula. At the time of writing [27] we did not realize that, as demonstrated in the present paper, it would have been possible to obtain the same result by adapting one of the two arguments (for Lévy-driven integrals) we were aware of, i.e. those in [3] and [37].

The version in [27] of the inequality \(BJ_{2,p}\), p ≥ 2, was called in that paper “Bichteler-Jacod inequality”, as we believed it had appeared (in dimension one) for the first time in [3]. This is what we still believed until a few days ago (which explains the label BJ), when, after this paper as well as the first drafts of [23] and [24] were completed, we found a reference to [33] in [44]. This is one of the surprises we alluded to in the introduction. Namely, Novikov proved (in 1975, hence well before Bichteler and Jacod, not to mention how long before ourselves) the upper bound \(BJ_{\alpha,p}\) for all values of α and p, assuming \(H = \mathbb{R}^{n}\), but with constants that are independent of the dimension. For this reason it seems that, if one wants to give a name (as we do) to the inequality BJ and its extensions, they should be called Novikov's inequalities.5 Unfortunately Novikov's paper [33] was probably unknown to Kunita as well, who proved in [18] (in 2004) a slightly weaker version of \(BJ_{2,p}\), p ≥ 2, in \(H = \mathbb{R}^{n}\), also using Itô's formula. Moreover, Applebaum [1] calls these inequalities “Kunita's estimates”, but, again, they are just a version of what we called (and are going to call) Novikov's inequality.

Even though the proofs in [2, 3] are only concerned with the real-valued case, the authors explicitly say that they knew how to get the constant independent of the dimension (see, in particular, [2, Lemma 5.1 and Remark 5.2]). The proofs in [16, 37] are actually concerned with integrals with respect to Lévy processes, but the adaptation to the more general case presented here is not difficult. Moreover, the inequalities in [2, 3, 16, 37] are of the type

$$\displaystyle{\mathbb{E}\sup _{t\leq T}\,\Vert (g\star \bar{\mu })_{t}\Vert ^{p} \lesssim _{p,d,T}\mathbb{E}\int _{0}^{T}{\biggl (\int _{Z}\Vert g(s,\cdot )\Vert ^{2}\,d\nu _{0}\biggr )}^{p/2}\mathit{ds} + \mathbb{E}\int _{0}^{T}\!\!\!\int _{Z}\Vert g(s,\cdot )\Vert ^{p}\,d\nu _{0}\,\mathit{ds},}$$

where μ is a Poisson random measure with compensator \(\nu = \mbox{ Leb} \otimes \nu _{0}\). Our proofs show that all their arguments can be improved to yield a constant depending only on p and that the first term on the right-hand side can be replaced by \(\mathbb{E}{\bigl (\Vert g\Vert ^{2} \star \nu \bigr )}_{ T}^{p/2}\).

Again through [44] we also became aware of the Novikov-like inequality by Dzhaparidze and Valkeila [10], where Theorem 2 is proved with \(H = \mathbb{R}\). It should be observed that the inequality in the latter theorem is apparently more general than, but actually equivalent to, BJ (cf. [24]).

Another method to obtain Novikov-type inequalities, also in vector-valued settings, goes through their analogs in discrete time, i.e. the Burkholder-Rosenthal inequality. We have not touched upon this method, as we are rather interested in “direct” methods in continuous time. We refer the interested reader to the very recent preprints [8, 9], as well as to [35, 43] and references therein.

The idea of using dilation theorems to extend results from stochastic integrals to stochastic convolutions was introduced, to the best of our knowledge, in [14]. This method has since been generalized in various directions, see e.g. [15, 25, 41]. In this respect, it should be mentioned that the “classical” direct approach, which goes through approximations by regular processes and avoids dilations (here “classical” refers to equations on Hilbert spaces driven by Wiener processes), has been (partially) extended to Banach-space-valued stochastic convolutions with jumps in [4]. The two methods are complementary, in the sense that neither is more general than the other. Furthermore, it is well known (see e.g. [34]) that the factorization method breaks down when applied to stochastic convolutions with respect to jump processes.

Footnotes

  1. Just to avoid (unlikely) confusion, we note that \(\mathbb{E}(\cdots \,)^{\alpha }\) always stands for the expectation of \((\cdots \,)^{\alpha }\), and not for \([\mathbb{E}(\cdots \,)]^{\alpha }\).

  2. The subscript ⋅ c means “with compact support”, and \(C_{b}^{2}(H)\) denotes the set of twice continuously differentiable functions \(\varphi: H \rightarrow \mathbb{R}\) such that \(\varphi\), \(\varphi ^{{\prime}}\) and \(\varphi ^{{\prime\prime}}\) are bounded.

  3. One can verify that the proof in [22] goes through without any change also for Hilbert-space-valued martingales.

  4. The definition of class D is not standard and is introduced just for the sake of concision.

  5. It should be mentioned that there are discrete-time real-valued analogs of \(BJ_{2,p}\), p ≥ 2, that go under the name of Burkholder-Rosenthal inequalities (in alphabetical but reverse chronological order: Rosenthal [38] proved it for sequences of independent random variables in 1970, then Burkholder [5] extended it to discrete-time (real) martingales in 1973), and some authors speak of continuous-time Burkholder-Rosenthal inequalities. One may then also propose to use the expression Burkholder-Rosenthal-Novikov inequality, which, however, seems too long.

Acknowledgements

A large part of the work for this paper was carried out during visits of the first-named author to the Interdisziplinäres Zentrum für Komplexe Systeme, Universität Bonn, invited by S. Albeverio. The second-named author is supported by the DFG through the SFB 701.

References

  1. D. Applebaum, Lévy Processes and Stochastic Calculus, 2nd edn. (Cambridge University Press, Cambridge, 2009). MR 2512800 (2010m:60002)
  2. K. Bichteler, J.-B. Gravereaux, J. Jacod, Malliavin Calculus for Processes with Jumps (Gordon and Breach Science Publishers, New York, 1987). MR 1008471 (90h:60056)
  3. K. Bichteler, J. Jacod, Calcul de Malliavin pour les diffusions avec sauts: existence d’une densité dans le cas unidimensionnel, in Seminar on Probability, XVII. Lecture Notes in Math., vol. 986 (Springer, Berlin, 1983), pp. 132–157. MR 770406 (86f:60070)
  4. Z. Brzeźniak, E. Hausenblas, J. Zhu, Maximal inequality of stochastic convolution driven by compensated Poisson random measures in Banach spaces, arXiv:1005.1600 (2010)
  5. D.L. Burkholder, Distribution function inequalities for martingales. Ann. Probab. 1, 19–42 (1973). MR 0365692 (51 #1944)
  6. D.L. Burkholder, R.F. Gundy, Extrapolation and interpolation of quasi-linear operators on martingales. Acta Math. 124, 249–304 (1970). MR 0440695 (55 #13567)
  7. V.H. de la Peña, E. Giné, Decoupling (Springer, New York, 1999). MR 1666908 (99k:60044)
  8. S. Dirksen, Itô isomorphisms for \(L_{p}\)-valued Poisson stochastic integrals. Ann. Probab. 42(6), 2595–2643 (2014). doi:10.1214/13-AOP906
  9. S. Dirksen, J. Maas, J. van Neerven, Poisson stochastic integration in Banach spaces. Electron. J. Probab. 18(100), 28 pp. (2013)
  10. K. Dzhaparidze, E. Valkeila, On the Hellinger type distances for filtered experiments. Probab. Theory Relat. Fields 85(1), 105–117 (1990). MR 1044303 (91d:60102)
  11. G. Fendler, Dilations of one parameter semigroups of positive contractions on \(L_{p}\) spaces. Can. J. Math. 49(4), 736–748 (1997). MR 1471054 (98i:47035)
  12. A.M. Fröhlich, L. Weis, \(H^{\infty }\) calculus and dilations. Bull. Soc. Math. France 134(4), 487–508 (2006). MR 2364942 (2009a:47091)
  13. G.H. Hardy, J.E. Littlewood, G. Pólya, Inequalities, 2nd edn. (Cambridge University Press, Cambridge, 1988). MR 0046395 (13,727e)
  14. E. Hausenblas, J. Seidler, A note on maximal inequality for stochastic convolutions. Czech. Math. J. 51(126)(4), 785–790 (2001). MR 1864042 (2002j:60092)
  15. E. Hausenblas, J. Seidler, Stochastic convolutions driven by martingales: maximal inequalities and exponential integrability. Stoch. Anal. Appl. 26(1), 98–119 (2008). MR 2378512 (2009a:60066)
  16. J. Jacod, Th.G. Kurtz, S. Méléard, Ph. Protter, The approximate Euler method for Lévy driven stochastic differential equations. Ann. Inst. H. Poincaré Probab. Stat. 41(3), 523–558 (2005). MR 2139032 (2005m:60149)
  17. S.G. Kreĭn, Yu.Ī. Petunı̄n, E.M. Semënov, Interpolation of Linear Operators. Translations of Mathematical Monographs, vol. 54 (American Mathematical Society, Providence, 1982). MR 649411 (84j:46103)
  18. H. Kunita, Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms, in Real and Stochastic Analysis (Birkhäuser Boston, Boston, 2004), pp. 305–373. MR 2090755 (2005h:60169)
  19. S. Kwapień, W.A. Woyczyński, Random Series and Stochastic Integrals: Single and Multiple (Birkhäuser, Boston, 1992). MR 1167198 (94k:60074)
  20. M. Ledoux, M. Talagrand, Probability in Banach Spaces (Springer, Berlin, 1991). MR 1102015 (93c:60001)
  21. E. Lenglart, Relation de domination entre deux processus. Ann. Inst. H. Poincaré Sect. B (N.S.) 13(2), 171–179 (1977). MR 0471069 (57 #10810)
  22. E. Lenglart, D. Lépingle, M. Pratelli, Présentation unifiée de certaines inégalités de la théorie des martingales, in Séminaire de Probabilités, XIV (Paris, 1978/1979). Lecture Notes in Math., vol. 784 (Springer, Berlin, 1980), pp. 26–52. MR 580107 (82d:60087)
  23. C. Marinelli, On maximal inequalities for purely discontinuous \(L_{q}\)-valued martingales. arXiv:1311.7120v1 (2013)
  24. C. Marinelli, On regular dependence on parameters of stochastic evolution equations, in preparation
  25. C. Marinelli, Local well-posedness of Musiela’s SPDE with Lévy noise. Math. Finance 20(3), 341–363 (2010). MR 2667893
  26. C. Marinelli, Approximation and convergence of solutions to semilinear stochastic evolution equations with jumps. J. Funct. Anal. 264(12), 2784–2816 (2013). MR 3045642
  27. C. Marinelli, C. Prévôt, M. Röckner, Regular dependence on initial data for stochastic evolution equations with multiplicative Poisson noise. J. Funct. Anal. 258(2), 616–649 (2010). MR 2557949
  28. C. Marinelli, M. Röckner, On uniqueness of mild solutions for dissipative stochastic evolution equations. Infinite Dimens. Anal. Quantum Probab. Relat. Top. 13(3), 363–376 (2010). MR 2729590 (2011k:60220)
  29. C. Marinelli, M. Röckner, Well-posedness and asymptotic behavior for stochastic reaction-diffusion equations with multiplicative Poisson noise. Electron. J. Probab. 15(49), 1528–1555 (2010). MR 2727320
  30. C. Marinelli, M. Röckner, On the maximal inequalities of Burkholder, Davis and Gundy, arXiv preprint (2013)
  31. M. Métivier, Semimartingales (Walter de Gruyter & Co., Berlin, 1982). MR 688144 (84i:60002)
  32. P.A. Meyer, Le dual de \(H^{1}\) est BMO (cas continu), in Séminaire de Probabilités, VII (Univ. Strasbourg). Lecture Notes in Math., vol. 321 (Springer, Berlin, 1973), pp. 136–145. MR 0410910 (53 #14652a)
  33. A.A. Novikov, Discontinuous martingales. Teor. Verojatnost. i Primenen. 20, 13–28 (1975). MR 0394861 (52 #15660)
  34. Sz. Peszat, J. Zabczyk, Stochastic Partial Differential Equations with Lévy Noise (Cambridge University Press, Cambridge, 2007). MR 2356959
  35. Io. Pinelis, Optimum bounds for the distributions of martingales in Banach spaces. Ann. Probab. 22(4), 1679–1706 (1994). MR 1331198 (96b:60010)
  36. C. Prévôt (Knoche), Mild solutions of SPDE’s driven by Poisson noise in infinite dimensions and their dependence on initial conditions. Ph.D. thesis, Universität Bielefeld, 2005
  37. Ph. Protter, D. Talay, The Euler scheme for Lévy driven stochastic differential equations. Ann. Probab. 25(1), 393–423 (1997). MR 1428514 (98c:60063)
  38. H.P. Rosenthal, On the subspaces of \(L^{p}\) (p > 2) spanned by sequences of independent random variables. Isr. J. Math. 8, 273–303 (1970). MR 0271721 (42 #6602)
  39. E.M. Stein, Topics in Harmonic Analysis Related to the Littlewood-Paley Theory (Princeton University Press, Princeton, 1970). MR 0252961 (40 #6176)
  40. B. Sz.-Nagy, C. Foias, H. Bercovici, L. Kérchy, Harmonic Analysis of Operators on Hilbert Space, 2nd edn. (Springer, New York, 2010). MR 2760647 (2012b:47001)
  41. M. Veraar, L. Weis, A note on maximal estimates for stochastic convolutions. Czech. Math. J. 61(136)(3), 743–758 (2011). MR 2853088
  42. L. Weis, The \(H^{\infty }\) holomorphic functional calculus for sectorial operators—a survey, in Partial Differential Equations and Functional Analysis (Birkhäuser, Basel, 2006), pp. 263–294. MR 2240065 (2007c:47018)
  43. A.T.A. Wood, Rosenthal’s inequality for point process martingales. Stoch. Process. Appl. 81(2), 231–246 (1999). MR 1694561 (2000f:60073)
  44. A.T.A. Wood, Acknowledgement of priority: Rosenthal’s inequality for point process martingales. Stoch. Process. Appl. 93(2), 349 (2001). MR 1828780

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Mathematics, University College London, London, UK
  2. Fakultät für Mathematik, Universität Bielefeld, Bielefeld, Germany
