1 Introduction

Cylindrical Lévy processes are a natural generalisation of cylindrical Brownian motions to the non-Gaussian setting, and they can serve as a model of random perturbations of partial differential equations or other dynamical systems. As generalised random processes, cylindrical Lévy processes do not attain values in the underlying space, and they do not enjoy a Lévy–Itô decomposition in general. Since conventional approaches to stochastic integration rely on either stopping time arguments or a semimartingale decomposition in one form or another, a completely novel method for stochastic integration has been introduced in the work [12] by one of us with Jakubowski. This method is based purely on tightness arguments, since the typical moment estimates are not available without utilising stopping time arguments or a semimartingale decomposition. As a consequence, although this method guarantees the existence of the stochastic integral for a large class of random integrands, it does not provide any control of the integrals. Since many applications, such as modelling dynamical systems or control problems, require upper estimates of the stochastic integrals, this method seems to be difficult to use for such applications.

In order to provide a control of the stochastic integral, we develop in this work a theory of stochastic integration for random integrands with respect to cylindrical Lévy processes with finite p-th weak moments for \(p\in [1,2]\). Our approach enables us to develop the theory on a large class of general Banach spaces. We apply the obtained estimates to establish the existence of a solution of an abstract partial differential equation driven by a cylindrical Lévy process with finite p-th weak moments.

Stochastic integration with respect to a cylindrical Wiener process is well developed in Hilbert spaces and various classes of Banach spaces. Typical Banach spaces which permit a development of stochastic integration are martingale type 2 Banach spaces, see, for example, Dettweiler [7, 8], or UMD spaces, see, for example, van Neerven, Veraar and Weis [32]. Veraar and Yaroslavtsev [33] extend the approach for UMD spaces in [32] to continuous cylindrical local martingales by utilising the Dambis–Dubins–Schwarz Theorem. Stochastic integration in Hilbert spaces with respect to genuine Lévy processes is, for example, presented by Peszat and Zabczyk in [18], and with respect to cylindrical Lévy processes, the theory is developed in [12]. Stochastic integration with respect to a Poisson random measure in Banach spaces is developed, for example, by Mandrekar and Rüdiger in [17] and by Brzeźniak, Zhu and Hausenblas [34].

In this work, we are faced with a similar problem to that in [12]. Conventional approaches to stochastic integration utilise either stopping times or the Lévy–Itô decomposition to show continuity of the integral operator separately: firstly with respect to the martingale part with finite second moments and secondly with respect to the bounded variation part. However, since these approaches are excluded for cylindrical Lévy processes, we show continuity of the integral operator “in one piece”, i.e. without applying the semimartingale decomposition of the integrator. For this purpose, we utilise a generalised form of Pietsch’s factorisation theorem, originating from the work of Schwartz [29].

More specifically, the space of admissible integrands in this work consists of predictable stochastic processes with values in the space of p-summing operators and with integrable p-summing norm for \(p\in [1,2]\). Due to results by Kwapień and Schwartz, for \(p>1\) the space of p-summing operators coincides with the space of p-radonifying operators, which are exactly the operators which map each cylindrical random variable with finite p-th weak moments to a genuine random variable. In this way, stochastic processes with values in the space of p-summing operators are the natural class of integrands, as they map the cylindrical increments of the integrator to genuine random variables. Furthermore, the class of p-summing operators coincides with the class of Hilbert–Schmidt operators in Hilbert spaces, and as such, the aforementioned space of admissible integrands is a natural generalisation of the integration theory in Hilbert spaces with respect to genuine Lévy processes in, for example, [18]. In typical applications such as the heat equation, the p-summing norm of the operators appearing in the equation can be explicitly estimated, see [3].

In Sect. 2, we recall the concepts of cylindrical measures and cylindrical Lévy processes. In Sect. 3, we present the generalised Pietsch’s factorisation theorem due to Schwartz and derive a result on the strong convergence of p-summing operators, which is needed in the proof of the stochastic continuity of the stochastic convolution. Section 4 is devoted to the construction of the stochastic integral. This is done in two steps as in the article [22] by the second author. Firstly, we radonify the increments of the cylindrical Lévy process by random p-summing operators. Secondly, we define the integral for simple integrands and extend it by continuity to the general ones. We also present some examples of the processes covered by our theory. In Sect. 5, we apply our results to establish existence and uniqueness of the solution to the evolution equation driven by a cylindrical Lévy noise with finite p-th weak moments for \(p\in [1,2]\).

2 Preliminaries

Let E and F be Banach spaces with separable duals \(E^*\) and \(F^*\). The operator norm of an operator \(u:E\rightarrow F\) is denoted with \(\left\| u\right\| _{\mathcal {L}(E,F)}\) or simply \(\left\| u\right\| \). We write \(B_E\) for the closed unit ball in E. The Borel \(\sigma \)-field is denoted with \(\mathcal {B}(E)\).

Fix a probability space \((\Omega ,\mathcal {F},P)\) with a filtration \((\mathcal {F}_t)\). We denote the space of equivalence classes of real-valued random variables equipped with the topology of convergence in probability by \(L^0(\Omega ,\mathcal {F},P;{\mathbb {R}})\). The Bochner space of equivalence classes of E-valued random variables with finite p-th moment is denoted with \(L^p(\Omega ,\mathcal {F},P;E)\). In case the codomain is not separable, we take only separably valued random variables.

Cylindrical sets are sets of the form

$$\begin{aligned} C(x_1^*,\ldots ,x_n^*;B) = \{ x \in E : (x_1^*(x),\ldots , x_n^*(x)) \in B \} \end{aligned}$$

for \(x_1^*,\ldots ,x_n^* \in E^*\) and \(B \in \mathcal {B}({\mathbb {R}}^n)\). For \(\Gamma \subseteq E^*\), we denote with \(\mathcal {Z}(E,\Gamma )\) the collection of all cylindrical sets with \(x_1^*,\ldots , x_n^* \in \Gamma \), \(B\in \mathcal {B}({\mathbb {R}}^n)\) and \(n\in {\mathbb {N}}\). If \(\Gamma =E^*\), we write \(\mathcal {Z}(E)\) to denote the collection of all cylindrical subsets of E. Note that \(\mathcal {Z}(E)\) is an algebra and that if \(\Gamma \) is finite, then \(\mathcal {Z}(E,\Gamma )\) is a \(\sigma \)-algebra. A function \(\mu :\mathcal {Z}(E) \rightarrow [0,\infty ]\) is called a cylindrical measure if its restriction to \(\mathcal {Z}(E,\Gamma )\) is a measure for every finite subset \(\Gamma \subseteq E^*\). If \(\mu (E)=1\), we call it a cylindrical probability measure. A cylindrical random variable is a linear and continuous mapping:

$$\begin{aligned} Y :E^* \rightarrow L^0(\Omega ,\mathcal {F},P;{\mathbb {R}}). \end{aligned}$$

The cylindrical distribution of a cylindrical random variable Y is defined by

$$\begin{aligned} \mu (C(x_1^*,\ldots ,x_n^*;B)) = P((Yx_1^*,\ldots ,Yx_n^*) \in B), \end{aligned}$$

which defines a cylindrical probability measure on \(\mathcal {Z}(E)\). The characteristic function of a cylindrical random variable (resp. cylindrical probability measure) is given by

$$\begin{aligned} \phi _{Y}(x^*) = {\mathbb {E}}\left[ e^{iYx^*} \right] , \qquad \left( \text {resp. }\, \phi _\mu (x^*) = \int _E e^{ix^*(x)} \, \mu ({\mathrm {d}}x)\right) , \end{aligned}$$

for \(x^* \in E^*\). We say that a cylindrical random variable Y (resp. cylindrical measure \(\mu \)) is of weak order p or has finite p-th weak moments if \({\mathbb {E}}\left[ \left| Yx^*\right| ^p \right] < \infty \) for every \(x^* \in E^*\) (resp. \(\int _E \left| x^*(x)\right| ^p \mu ({\mathrm {d}}x)< \infty \)). We say that Y is induced by an E-valued random variable \(X:\Omega \rightarrow E\) if

$$\begin{aligned} Yx^* = x^*(X) \qquad \text { for all } x^* \in E^*. \end{aligned}$$
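
A standard example is the canonical Gaussian cylindrical random variable Y on a separable Hilbert space H, identified with its dual, which is determined by requiring each \(Yx^*\) to be centred Gaussian with variance \(\left\| x^*\right\| ^2\), that is,

$$\begin{aligned} \phi _{Y}(x^*) = \exp \left( -\tfrac{1}{2} \left\| x^*\right\| ^2 \right) \qquad \text {for all } x^* \in H. \end{aligned}$$

It has finite weak moments of every order but, if H is infinite dimensional, it is not induced by an H-valued random variable.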

A family of cylindrical random variables \((L(t) : t \ge 0)\) is called a cylindrical Lévy process if, for every \(x_1^*,\ldots ,x_n^* \in E^*\) and \(n\in {\mathbb {N}}\), we have that

$$\begin{aligned} \big ( (L(t)x_1^*,\ldots , L(t)x_n^*) : t\ge 0 \big ) \end{aligned}$$

is a Lévy process in \({\mathbb {R}}^n\) with respect to the filtration \((\mathcal {F}_t)\). We say that L is weakly p-integrable if \({\mathbb {E}}\left[ \left| L(1)x^*\right| ^p\right] <\infty \) for every \(x^* \in E^*\). The characteristic function of L(1) can be written in the form:

$$\begin{aligned} \phi _{L(1)}(x^*) = \exp \left( i b(x^*) - \tfrac{1}{2} q(x^*) + \int _E \left( e^{i x^*(x)} -1 -ix^*(x)\mathbb {1}_{B_{\mathbb {R}}}(x^*(x)) \right) \nu ({\mathrm {d}}x) \right) , \end{aligned}$$

where \(b:E^* \rightarrow {\mathbb {R}}\) is a continuous function with \(b(0)=0\), \(q:E^* \rightarrow {\mathbb {R}}\) is a quadratic form, and \(\nu \) is a finitely additive set function on cylindrical sets of the form \(C(x_1^*, \ldots , x_n^*;B)\) for \(x_1^*,\dots , x_n^*\in E^*\) and \(B\in \mathcal {B}({\mathbb {R}}^n\setminus \{0\})\), such that for every \(x^* \in E^*\), it satisfies

$$\begin{aligned} \int _{{\mathbb {R}}\setminus \{0\}} \left( \left| \beta \right| ^2 \wedge 1 \right) \,(\nu \circ (x^{*})^{-1})({\mathrm {d}}\beta ) < \infty . \end{aligned}$$

Cylindrical Lévy processes are introduced in [1], and further details on the characteristic function can be found in [21].

An operator \(u:E\rightarrow F\) is called p-summing if there exists a constant c such that

$$\begin{aligned} \left( \sum _{k=1}^n \left\| u(x_k)\right\| ^p \right) ^{1/p} \le c \sup \left\{ \left( \sum _{k=1}^n \left| x^*(x_k)\right| ^p \right) ^{1/p} : x^* \in B_{E^*}\right\} \end{aligned}$$
(1)

for all \(x_1,\ldots ,x_n \in E\) and \(n\in {\mathbb {N}}\); see [9]. We denote with \(\pi _p(u)\) its p-summing norm, which is the smallest possible constant c in (1). The space of p-summing operators is denoted with \(\Pi _p(E,F)\). If E and F are Hilbert spaces, this space coincides with the space of Hilbert–Schmidt operators denoted by \(L_{\mathrm{HS}}(E,F)\) with the norm \(\left\| \cdot \right\| _{L_{\mathrm{HS}}(E,F)}\); see [9, Th. 4.10 and Cor. 4.13]. Moreover, the p-summing norms and the Hilbert–Schmidt norm in \(L_{\mathrm{HS}}(E,F)\) are equivalent.
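
For instance, if \(E=F=\ell ^2\) and u is the diagonal operator given by \(ue_k=\sigma _k e_k\) for the canonical basis \((e_k)\) and a bounded sequence \((\sigma _k)\), then

$$\begin{aligned} \left\| u\right\| _{L_{\mathrm{HS}}(\ell ^2,\ell ^2)}^2 = \sum _{k=1}^\infty \left\| ue_k\right\| ^2 = \sum _{k=1}^\infty \sigma _k^2, \end{aligned}$$

so that, by the coincidence above, u is p-summing for \(p\in [1,2]\) if and only if \((\sigma _k)\in \ell ^2\).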

A Banach space E is of martingale type \(p\in [1,2]\) if there exists a constant \(C_p\) such that for all finite E-valued martingales \((M_k)_{k=1}^n\) the following inequality is satisfied:

$$\begin{aligned} \sup _{k=1,\ldots ,n} {\mathbb {E}}\big [ \left\| M_k\right\| ^p \big ] \le C_p \sum _{k=1}^n {\mathbb {E}}\big [ \left\| M_k-M_{k-1}\right\| ^p \big ], \end{aligned}$$
(2)

where we use the convention that \(M_{0}=0\); see [11].
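
For example, every Hilbert space is of martingale type 2 with \(C_2=1\): the orthogonality of the martingale increments yields

$$\begin{aligned} {\mathbb {E}}\big [ \left\| M_n\right\| ^2 \big ] = \sum _{k=1}^n {\mathbb {E}}\big [ \left\| M_k-M_{k-1}\right\| ^2 \big ], \end{aligned}$$

and since \(\big ({\mathbb {E}}\big [ \left\| M_k\right\| ^2\big ]\big )_{k=1,\ldots ,n}\) is non-decreasing, the supremum in (2) is attained at \(k=n\).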

We use the notation \(u(\mu )\) for the push-forward cylindrical measure \(\mu \circ u^{-1}\) for a continuous linear function \(u:E \rightarrow F\) and a cylindrical measure \(\mu \). An operator \(u :E \rightarrow F\) is called p-radonifying for some \(p\ge 0\) if for every cylindrical measure \(\mu \) on E of weak order p, the measure \(u(\mu )\) extends to a Radon measure on F with finite p-th strong moment. Equivalently, the mapping u is p-radonifying if for every cylindrical random variable Y on \(E^*\) with finite weak p-th moment, the cylindrical random variable \(Y \circ u^*\) is induced by an F-valued random variable with finite p-th strong moment; see [31, Prop. VI.5.2].

A Banach space E has the approximation property if for every compact set \(K \subseteq E\) and for every \(\epsilon >0\) there exists a finite rank operator \(u :E \rightarrow E\) such that \(\left\| u(x)-x\right\| \le \epsilon \) for \(x \in K\).

A Banach space E has the Radon–Nikodym property if for any probability space \((\Omega ,\mathcal {F},P)\) and vector-valued measure \(\mu :\mathcal {F} \rightarrow E\), which is absolutely continuous with respect to P, there exists \(f\in L^1(\Omega ,\mathcal {F},P;E)\) such that

$$\begin{aligned} \mu (A) = \int _A f(\omega ) \, P({\mathrm {d}}\omega ) \qquad \text {for all } A \in \mathcal {F}. \end{aligned}$$

It is well known that every reflexive Banach space has the Radon–Nikodym property; see [31, Cor. 2, p. 219]. It follows from [31, Th. VI.5.4 and Th. VI.5.5] that if either \(p>1\) or \(p=1\) and F has the Radon–Nikodym property, then the classes of p-radonifying and p-summing operators between E and F coincide.

3 Some Results on p-Summing Operators

Our approach to stochastic integration with respect to a cylindrical Lévy process is based on a generalisation of Pietsch’s factorisation theorem, which is due to Schwartz; see [30, pp. 23–28] and [28]. For a measure \(\mu \) on \(\mathcal {B}(E)\) and \(p\in [1,2]\), we define

$$\begin{aligned} \left\| \mu \right\| _p := \left( \int _E \left\| x\right\| ^p \mu ({\mathrm {d}}x)\right) ^{1/p}, \end{aligned}$$

and say that \(\mu \) is of order p if \(\left\| \mu \right\| _p<\infty \). For a cylindrical measure \(\mu \) on \(\mathcal {Z}(E)\), we define

$$\begin{aligned} \left\| \mu \right\| _p^* = \sup _{x^* \in B_{E^*}} \left\| x^*(\mu )\right\| _p, \end{aligned}$$

and we say that \(\mu \) is of weak order p if \(\left\| \mu \right\| _p^*<\infty \).

Theorem 1

Let \(p\in [1,2]\), and assume either that \(p>1\), or that \(p=1\) and F has the Radon–Nikodym property. Then each cylindrical probability measure \(\mu \) on \(\mathcal {Z}(E)\) and each p-summing operator \(u :E \rightarrow F\) satisfy

$$\begin{aligned} \left\| u(\mu )\right\| _p \le \pi _p(u) \left\| \mu \right\| _p^*. \end{aligned}$$
(3)

Proof

See [30] or [28, 29]. \(\square \)

Remark 2

Pietsch’s factorisation theorem states that if \(u:E\rightarrow F\) is a p-summing map, then there exists a regular probability measure \(\rho \) on \(B_{E^*}\) such that

$$\begin{aligned} \left\| ux\right\| \le \pi _p(u)\left( \int _{B_{E^*}}\left| x^*(x)\right| ^p\, \rho ({\mathrm {d}}x^*)\right) ^{1/p} \qquad \text {for all }x\in E. \end{aligned}$$

If X is a genuine random variable \(X:\Omega \rightarrow E\) with probability distribution \(\mu \) on \(\mathcal {B}(E)\), Pietsch’s factorisation theorem immediately implies

$$\begin{aligned} \left\| u(\mu )\right\| _p^p= {\mathbb {E}}\left[ \left\| uX\right\| ^p\right] \le \left( \pi _p(u)\right) ^p {\mathbb {E}}\left[ \int _{B_{E^*}}\left| x^*(X)\right| ^p\, \rho ({\mathrm {d}}x^*)\right] \le \left( \pi _p(u)\right) ^p \left( \left\| \mu \right\| _p^*\right) ^p. \end{aligned}$$

For this reason, we refer to Theorem 1 as generalised Pietsch’s factorisation theorem.

For establishing the stochastic continuity of the stochastic convolution in Sect. 5, we need a result on the convergence of p-summing operators between Banach spaces. In the case of Hilbert spaces, this convergence result can easily be seen: suppose that U and H are separable Hilbert spaces and let \(\psi :U \rightarrow H\) be a Hilbert–Schmidt operator. If \((\phi _n)\) is a sequence of operators \(\phi _n :H \rightarrow H\) converging strongly to 0 as \(n \rightarrow \infty \), then the composition \(\phi _n \psi \) converges to 0 in the Hilbert–Schmidt norm. Indeed, take an orthonormal basis \((e_k)\) of U and calculate

$$\begin{aligned} \left\| \phi _n \psi \right\| _{L_{\mathrm{HS}}(U,H)}^2 = \sum _{k=1}^\infty \left\| \phi _n \psi e_k\right\| ^2. \end{aligned}$$

Every term in the above sum converges to 0 as \(n \rightarrow \infty \) due to the strong convergence of \((\phi _n)\). Moreover, the uniform boundedness principle yields \(c:=\sup _{n\in {\mathbb {N}}}\left\| \phi _n\right\| <\infty \), so that each term is dominated by \(c^2\left\| \psi e_k\right\| ^2\), which is summable over k. By Lebesgue’s dominated convergence theorem, we obtain \(\left\| \phi _n \psi \right\| _{L_{\mathrm{HS}}(U,H)}^2 \rightarrow 0\). The following result extends this conclusion from Hilbert spaces to the Banach space setting by approximating p-summing operators with finite rank operators.

Theorem 3

Suppose that E is a reflexive Banach space or a Banach space with separable dual and that \(E^{**}\) has the approximation property. If \(\psi :E\rightarrow F\) is a p-summing operator and \((\phi _n)\) is a sequence of operators \(\phi _n :F\rightarrow F\) converging strongly to 0, then we have

$$\begin{aligned} \pi _p(\phi _n \psi ) \rightarrow 0. \end{aligned}$$
(4)

Proof

We first prove the assertion for finite rank operators \(\psi :E\rightarrow F\), in which case we can assume that \(\psi = \sum \nolimits _{k=1}^N x_k^* \otimes y_k\) for some \(x_k^*\in E^*\) and \(y_k\in F\). Then, \(\phi _n \psi = \sum \nolimits _{k=1}^N x_k^* \otimes (\phi _n y_k)\) and since \(\pi _p(x^*\otimes y)=\left\| x^*\right\| \left\| y\right\| \) by a simple argument (see [9, p. 37]), we estimate

$$\begin{aligned} \pi _p(\phi _n \psi ) \le \sum _{k=1}^N \pi _p(x_k^* \otimes (\phi _n y_k)) = \sum _{k=1}^N \left\| x_k^*\right\| \left\| \phi _n y_k\right\| \rightarrow 0, \end{aligned}$$

because \(\left\| \phi _n y_k\right\| \rightarrow 0\) for every \(k\in \{1,\dots , N\}\).

Consider now the case of a general p-summing operator \(\psi \). Under the assumptions on E and F, by Corollary 1 in [26], the finite rank operators are dense in the space of p-summing operators. That is, there exists a sequence of finite rank operators \((\psi _k)\) such that \(\pi _p(\psi _k-\psi ) \rightarrow 0\) as \(k\rightarrow \infty \). It follows that

$$\begin{aligned} \pi _p(\phi _n \psi ) \le \pi _p(\phi _n\psi _k) + \pi _p(\phi _n(\psi -\psi _k))\qquad \text {for all }k,\, n\in {\mathbb {N}}. \end{aligned}$$
(5)

Fix \(\epsilon >0\), and let \(c:=\sup \{\left\| \phi _n\right\| :\, n\in {\mathbb {N}}\}\), which is finite by the uniform boundedness principle and which we may assume to be positive. Choose \(k\in {\mathbb {N}}\) such that \(\pi _p(\psi -\psi _k) \le \frac{\epsilon }{2c}\). Since \(\psi _k\) is a finite rank operator, the argument above guarantees that there exists \(n_0\in {\mathbb {N}}\) such that for all \(n\ge n_0\), we have \(\pi _p(\phi _n\psi _k) \le \frac{\epsilon }{2}\). Since the ideal property of the p-summing norm yields \(\pi _p(\phi _n(\psi -\psi _k))\le \left\| \phi _n\right\| \pi _p(\psi -\psi _k)\le \frac{\epsilon }{2}\), inequality (5) implies for all \(n\ge n_0\) that \(\pi _p(\phi _n \psi ) \le \tfrac{\epsilon }{2} + \tfrac{\epsilon }{2}=\epsilon \), which completes the proof. \(\square \)

Remark 4

The proof of Theorem 3 relies on the density of finite rank operators in the space of p-summing operators. This holds under more general assumptions than those made in Theorem 3; see [26, pp. 384 and 388].

However, the result of Theorem 3 does not hold for arbitrary Banach spaces, as the following example shows. Choose \(E=\ell ^1({\mathbb {R}})\) and \(F=\ell ^2({\mathbb {R}})\), and equip both spaces with the canonical basis \((e_n)\), where \(e_n=(0,\ldots ,0,1,0,\ldots )\). We take \(\psi = {{\,\mathrm{{\mathrm {Id}}}\,}}:E\rightarrow F\), which is 1-summing by Grothendieck’s theorem; see [9, pp. 38–39]. Furthermore, we define \(\phi _n = e_n \otimes e_n\), i.e. \(\phi _n(x) =x(n)e_n=(0,\ldots ,0,x(n),0,\ldots )\) for a sequence \(x=(x(n))\in \ell ^2({\mathbb {R}})\). Then, \(\phi _n\) converges to 0 strongly as \(n\rightarrow \infty \), but since \(\phi _n \psi \) is a rank-one operator, we have \(\pi _1(\phi _n \psi )=\left\| e_n\right\| \left\| e_n\right\| =1\) for all \(n\in {\mathbb {N}}\). This counterexample shows that the assumptions on the space E in Theorem 3 cannot be dropped.

4 Radonification of Increments and Stochastic Integral

In this section, we fix \(p\in [1,2]\) and assume that the cylindrical Lévy process L has finite p-th weak moments; moreover, we assume either that \(p>1\), or that \(p=1\) and F has the Radon–Nikodym property.

Fix \(0\le s< t\le T\). An \(\mathcal {F}_s\)-measurable random variable \(\Psi :\Omega \rightarrow \Pi _p(E,F)\) is called simple if it is of the form:

$$\begin{aligned} \Psi = \sum _{k=1}^m \mathbb {1}_{A_k} \psi _k, \end{aligned}$$
(6)

for some disjoint sets \(A_1,\ldots ,A_m \in \mathcal {F}_s\) and \(\psi _1,\ldots ,\psi _m \in \Pi _p(E,F)\). The space of simple, \(\mathcal {F}_s\)-measurable random variables is denoted with \(S:=S(\Omega ,\mathcal {F}_s;\Pi _p(E,F))\), and it is equipped with the norm \(\left\| \Psi \right\| _{S,p} := \left( {\mathbb {E}}\left[ \pi _p(\Psi )^p \right] \right) ^{1/p}\).
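
Since the sets \(A_k\) in (6) are disjoint, this norm is given explicitly by

$$\begin{aligned} \left\| \Psi \right\| _{S,p}^p = {\mathbb {E}}\big [ \pi _p(\Psi )^p \big ] = \sum _{k=1}^m P(A_k)\, \pi _p(\psi _k)^p, \end{aligned}$$

which is the expression appearing in estimate (12) below.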

Since for \(p>1\) or for \(p=1\) with F having the Radon–Nikodym property, each p-summing operator \(\psi _k:E\rightarrow F\) is p-radonifying, it follows that the cylindrical random variable \((L(t)-L(s))\psi _k^*\) is induced by a classical, F-valued random variable \(X_{\psi _k}:\Omega \rightarrow F\):

$$\begin{aligned} \big ( L(t)-L(s) \big ) \psi _k^* (x^*) = x^*(X_{\psi _k}) \qquad \text { for all } x^* \in F^*. \end{aligned}$$

This enables us to define the operator

$$\begin{aligned} J_{s,t} :S(\Omega ,\mathcal {F}_s;\Pi _p(E,F)) \rightarrow L^p(\Omega , \mathcal {F},P;F), \qquad J_{s,t}(\Psi ) := \sum _{k=1}^m \mathbb {1}_{A_k} X_{\psi _k}. \end{aligned}$$
(7)

The following result allows us to extend the operator \(J_{s,t}\) to \(L^p(\Omega ,\mathcal {F}_s,P;\Pi _p(E,F))\).

Lemma 5

(Radonification of the increments) For fixed \(0\le s <t\le T\), the operator \(J_{s,t}\) defined in (7) is continuous and satisfies

$$\begin{aligned} \left\| J_{s,t}\right\| _{\mathcal {L}(S,L^p)} \le \left\| L(t-s)\right\| _{\mathcal {L}(E^*,L^p(\Omega ;{\mathbb {R}}))}, \end{aligned}$$
(8)

and thus, \(J_{s,t}\) can be extended to \(J_{s,t} :L^p(\Omega ,\mathcal {F}_s,P;\Pi _p(E,F)) \rightarrow L^p(\Omega , \mathcal {F}_t, P; F)\).

Proof

Let \(\Psi \) be of the form (6). Since the sets \(A_k\) are disjoint, it follows that

$$\begin{aligned} {\mathbb {E}}\left[ \left\| J_{s,t}(\Psi )\right\| ^p \right] = {\mathbb {E}}\left[ \left\| \sum _{k=1}^m \mathbb {1}_{A_k} X_{\psi _k}\right\| ^{p} \right] = {\mathbb {E}}\left[ \sum _{k=1}^m \mathbb {1}_{A_k} \left\| X_{\psi _k}\right\| ^p \right] . \end{aligned}$$

Using the fact that each \(A_k\) is \(\mathcal {F}_s\)-measurable and that \(X_{\psi _k}\) is independent of \(\mathcal {F}_s\), we can calculate further

$$\begin{aligned} {\mathbb {E}}\big [ \left\| J_{s,t}(\Psi )\right\| ^p \big ] = \sum _{k=1}^m {\mathbb {E}}\big [ {\mathbb {E}}\big [ \mathbb {1}_{A_k} \left\| X_{\psi _k}\right\| ^p \vert \mathcal {F}_s \big ] \big ] = \sum _{k=1}^m P(A_k){\mathbb {E}}\big [ \left\| X_{\psi _k}\right\| ^p \big ].\qquad \end{aligned}$$
(9)

In order to estimate \({\mathbb {E}}\left[ \left\| X_{\psi _k}\right\| ^p \right] \), we apply Theorem 1 to obtain that

$$\begin{aligned} \left( {\mathbb {E}}\left[ \left\| X_{\psi _k}\right\| ^p \right] \right) ^{1/p} \le \pi _p(\psi _k) \left\| L(t)-L(s)\right\| _p^*. \end{aligned}$$
(10)

Since the stationarity of the increments of the real-valued Lévy processes yields

$$\begin{aligned} \left( {\mathbb {E}}\left[ \left| (L(t)-L(s))x^*\right| ^p\right] \right) ^{1/p} =\left( {\mathbb {E}}\left[ \left| L(t-s)x^*\right| ^p\right] \right) ^{1/p} \qquad \text {for all }x^*\in E^*, \end{aligned}$$

it follows that

$$\begin{aligned} \left\| L(t)-L(s)\right\| _p^* = \sup _{x^*\in B_{E^*}} \left( {\mathbb {E}}\left[ \left| L(t-s)x^*\right| ^p\right] \right) ^{1/p} =\left\| L(t-s)\right\| _{\mathcal {L}(E^*,L^p(\Omega ;{\mathbb {R}}))}. \end{aligned}$$
(11)

Note that by the closed graph theorem and the continuity of \(L(t-s) :E^* \rightarrow L^0(\Omega ,\mathcal {F},P;{\mathbb {R}})\), the mapping \(L(t-s) :E^* \rightarrow L^p(\Omega ,\mathcal {F},P;{\mathbb {R}})\) is continuous. This shows that the last expression in (11) is finite. Applying estimates (10) and (11) to (9) results in

$$\begin{aligned} \begin{aligned} \left( {\mathbb {E}}\left[ \left\| J_{s,t}(\Psi )\right\| ^p \right] \right) ^{1/p}&\le \left( \sum _{k=1}^m P(A_k) \pi _p(\psi _k)^p \left\| L(t-s)\right\| _{\mathcal {L}(E^*,L^p(\Omega ;{\mathbb {R}}))}^p \right) ^{1/p} \\&= \left\| L(t-s)\right\| _{\mathcal {L}(E^*,L^p(\Omega ;{\mathbb {R}}))} \left( {\mathbb {E}}\big [ \pi _p(\Psi )^p \big ]\right) ^{1/p}, \end{aligned} \end{aligned}$$
(12)

which proves (8). \(\square \)

For defining the stochastic integral below, let \(\Lambda (\Pi _p(E,F))\) denote the space of predictable processes \(\Psi :[0,T]\times \Omega \rightarrow \Pi _p(E,F)\) such that

$$\begin{aligned} \left\| \Psi \right\| _\Lambda := \left( {\mathbb {E}}\left[ \int _0^T \pi _p(\Psi (s))^p \,\, {\mathrm {d}}s\right] \right) ^{1/p} <\infty , \end{aligned}$$

that is, \(\Lambda (\Pi _p(E,F)) = L^p\big ([0,T] \times \Omega , \mathcal {P}, {\mathrm {d}}t \otimes P; \Pi _p(E,F)\big )\), where \(\mathcal {P}\) denotes the predictable \(\sigma \)-algebra on \([0,T] \times \Omega \). A simple stochastic process is of the form:

$$\begin{aligned} \Psi (t) = \Psi _0\mathbb {1}_{\{0\}}(t)+ \sum _{k=1}^{N-1} \Psi _k \mathbb {1}_{(t_k,t_{k+1}]}(t), \end{aligned}$$
(13)

where \(0=t_1<\cdots <t_N=T\), and each \(\Psi _k\) is an \(\mathcal {F}_{t_k}\)-measurable, \(\Pi _p(E,F)\)-valued random variable with \(E[ \pi _p(\Psi _k)^p]<\infty \). We denote with \(\Lambda _0^S(\Pi _p(E,F))\) the space of simple processes of the form (13) where each \(\Psi _k\) is a simple random variable of the form (6), i.e. taking only a finite number of values.

Since for stochastic processes in \(\Lambda _0^S(\Pi _p(E,F))\), the radonification of the increments is defined by the operator \(J_{s,t}\), we can define the integral operator by

$$\begin{aligned} I:\Lambda _0^S(\Pi _p(E,F)) \rightarrow L^p(\Omega ,\mathcal {F}_T,P;F),\qquad I(\Psi ) := \sum _{k=1}^{N-1} J_{t_k,t_{k+1}}(\Psi _k), \end{aligned}$$
(14)

where \(\Psi \) is assumed to be of the form (13).

Lemma 6

The space \(\Lambda _0^S(\Pi _p(E,F))\) is dense in \(\Lambda (\Pi _p(E,F))\) w.r.t. \(\left\| \cdot \right\| _{\Lambda }\).

Proof

The result follows from the construction in the proof of [6, Prop. 4.22(ii)]. \(\square \)

Theorem 7

(Stochastic integration) Assume that the cylindrical Lévy process L has the characteristics \((b,0,\nu )\) and satisfies

$$\begin{aligned} \int _E \left| x^*(x)\right| ^p \, \nu ({\mathrm {d}}x) < \infty , \qquad \text { for all } x^* \in E^* \end{aligned}$$
(15)

and that F is of martingale type p and has the Radon–Nikodym property if \(p=1\). Then, the integral operator I defined in (14) is continuous and extends to the operator

$$\begin{aligned} I :\Lambda (\Pi _p(E,F)) \rightarrow L^p(\Omega ,\mathcal {F}_T,P;F). \end{aligned}$$

Proof

Let \(\Psi \) in \(\Lambda _0^S(\Pi _p(E,F))\) be given by (13) where \(\Psi _k\) is of the form:

$$\begin{aligned} \Psi _k = \sum _{i=1}^{m_k} \mathbb {1}_{A_{k,i}} \psi _{k,i}, \end{aligned}$$

for some disjoint sets \(A_{k,1},\ldots ,A_{k,m_k} \in \mathcal {F}_{t_k}\) and \(\psi _{k,1},\ldots ,\psi _{k,m_k} \in \Pi _p(E,F)\) for all \(k\in \{0,\dots , N-1\}\).

The cylindrical Lévy process L can be decomposed into \(L(t)x^* = B(t)x^* + M(t)x^*\) for all \(x^*\in E^*\), where \(B(t)x^* := t\, {\mathbb {E}}\left[ L(1)x^* \right] \) and \(M(t)x^* := L(t)x^*-B(t)x^* \) for all \(x^*\in E^*\) and \(t\ge 0\). Both \(B(t):E^*\rightarrow L^1(\Omega ,\mathcal {F},P;{\mathbb {R}})\) and \(M(t):E^*\rightarrow L^1(\Omega ,\mathcal {F},P;{\mathbb {R}})\) are linear and continuous since \(L(1) :E^* \rightarrow L^1(\Omega ,\mathcal {F},P;{\mathbb {R}})\) is continuous due to the closed graph theorem. In particular, both B and M are cylindrical Lévy processes, and we can integrate separately with respect to B and M:

$$\begin{aligned} I(\Psi ) = I_B(\Psi ) + I_M(\Psi ). \end{aligned}$$
(16)

For the first integral in (16), we calculate

$$\begin{aligned} \left\| I_B(\Psi )\right\| ^p = \sup _{y^* \in B_{F^*}} \left| {y^*\left( \int _0^T \Psi (s) \, {\mathrm {d}}B(s) \right) }\right| ^p = \sup _{y^* \in B_{F^*}} \left| \int _0^T B(1)(\Psi ^*(s)y^*)\, {\mathrm {d}}s\right| ^p. \end{aligned}$$

By Hölder’s inequality with \(q=\frac{p}{p-1}\) and \(q=\infty \) if \(p=1\), we obtain

$$\begin{aligned}\begin{aligned} \left\| I_B(\Psi )\right\| ^p&\le \sup _{y^* \in B_{F^*}} T^{p/q} \int _0^T \left| B(1)(\Psi ^*(s)y^*)\right| ^p \, {\mathrm {d}}s\\&\le T^{p/q}\left\| B(1)\right\| _{\mathcal {L}(E^*,{\mathbb {R}})}^p \int _0^T \left\| \Psi ^*(s)\right\| _{\mathcal {L}(F^*,E^*)}^p \, {\mathrm {d}}s. \end{aligned} \end{aligned}$$

Since \(\left\| \Psi ^*(s)\right\| _{\mathcal {L}(F^*,E^*)} = \left\| \Psi (s)\right\| _{\mathcal {L}(E,F)} \le \pi _p(\Psi (s))\) according to [9, p. 31], it follows that

$$\begin{aligned} {\mathbb {E}}\left[ \left\| I_B(\Psi )\right\| ^p\right] \le T^{p/q} \left\| B(1)\right\| _{\mathcal {L}(E^*,{\mathbb {R}})}^p {\mathbb {E}}\left[ \int _0^T \pi _p(\Psi (s))^p \, {\mathrm {d}}s\right] . \end{aligned}$$
(17)

For estimating the second term in (16), define the Banach space

$$\begin{aligned} R_p = \bigg \{ X:(0,T] \times \Omega \rightarrow {\mathbb {R}}: \text { measurable and } \sup \limits _{t \in (0,T]} \frac{1}{t^{1/p}} \big ( {\mathbb {E}}\left[ \left| X(t)\right| ^p \right] \big )^{1/p} <\infty \bigg \} \end{aligned}$$

with the norm \(\left\| X\right\| _{R_p} = \sup _{t \in (0,T]} \frac{1}{t^{1/p}} \left( {\mathbb {E}}\left[ \left| X(t)\right| ^p \right] \right) ^{1/p}\). By standard properties of the real-valued Lévy martingales, see, for example, [18, Th. 8.23(i)], it follows that there exists a constant \(c>0\) such that

$$\begin{aligned} {\mathbb {E}}\big [ \left| M(t)x^*\right| ^p \big ] \le ct \int _{\mathbb {R}}\left| \beta \right| ^p \,(\nu \circ (x^*)^{-1}) ({\mathrm {d}}\beta ) \qquad \text {for all }x^*\in E^*. \end{aligned}$$
(18)

Here, we use that the Lévy measure of \(M(1)x^*\) is given by \(\nu \circ (x^*)^{-1}\). It follows that we can consider the map \(M:E^{*}\rightarrow R_p\) defined by \(Mx^*=(M(t)x^*:\, t\in (0,T])\). To show that M is continuous, let \(x_n^*\) converge to \(x^*\) in \(E^*\) and \(Mx_n^*\) converge to some Y in \(R_p\). It follows that \(M(t)x_n^* \rightarrow Y(t)\) in \(L^p(\Omega ;{\mathbb {R}})\) for every \(t\in (0,T]\). On the other hand, continuity of \(M(t):E^*\rightarrow L^1(\Omega ,\mathcal {F},P;{\mathbb {R}})\) implies \(M(t)x_n^* \rightarrow M(t)x^*\) in \(L^0(\Omega ;{\mathbb {R}})\). Thus, \(Y(t)=M(t)x^*\) for all \(t\in (0,T]\) a.s., and the closed graph theorem implies that \(M:E^{*}\rightarrow R_p\) is continuous. It follows that

$$\begin{aligned} \left\| M(t_{k+1}-t_k)\right\| _{L(E^*;L^p(\Omega ;{\mathbb {R}}))}^p \le (t_{k+1}-t_k) \left\| M\right\| _{\mathcal {L}(E^{*},R_p)}^p. \end{aligned}$$
(19)

Let \(J_{t_k,t_{k+1}}\) be the operators defined in (7) with L replaced by M. Since F is of martingale type p, there exists a constant \(C_p>0\) such that Lemma 5 and inequality (19) imply

$$\begin{aligned} {\mathbb {E}}\left[ \left\| I_M(\Psi )\right\| ^p \right]&= {\mathbb {E}}\left[ \left\| {\sum _{k=1}^{N-1} J_{t_k,t_{k+1}}(\Psi _k)}\right\| ^p \right] \\&\le C_p {\mathbb {E}}\left[ \sum _{k=1}^{N-1} \left\| J_{t_k,t_{k+1}}(\Psi _k)\right\| ^p \right] \\&\le C_p \sum _{k=1}^{N-1} \left\| M(t_{k+1}-t_k)\right\| _{\mathcal {L}(E^*;L^p(\Omega ;{\mathbb {R}}))}^p {\mathbb {E}}\left[ \pi _p(\Psi _k)^p \right] \\&\le C_p \left\| M\right\| _{\mathcal {L}(E^{*},R_p)}^p {\mathbb {E}}\left[ \int _0^T \pi _p(\Psi (s))^p \, {\mathrm {d}}s\right] . \end{aligned}$$

Together with (17), this completes the proof. \(\square \)
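
In particular, the proof yields the following explicit bound: combining the decomposition (16) with the elementary inequality \(\left\| a+b\right\| ^p\le 2^{p-1}\big (\left\| a\right\| ^p+\left\| b\right\| ^p\big )\), estimate (17) and the final estimate for \(I_M\) imply

$$\begin{aligned} {\mathbb {E}}\left[ \left\| I(\Psi )\right\| ^p \right] \le 2^{p-1}\left( T^{p/q} \left\| B(1)\right\| _{\mathcal {L}(E^*,{\mathbb {R}})}^p + C_p \left\| M\right\| _{\mathcal {L}(E^{*},R_p)}^p \right) \left\| \Psi \right\| _\Lambda ^p \end{aligned}$$

for every \(\Psi \in \Lambda _0^S(\Pi _p(E,F))\); by density, the same bound holds for every \(\Psi \in \Lambda (\Pi _p(E,F))\).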

By rewriting Condition (15) as

$$\begin{aligned}&\int _{B_{{\mathbb {R}}}}\left| \beta \right| ^p\, (\nu \circ (x^{*})^{-1})({\mathrm {d}}\beta )< \infty \quad \text {and}\quad \\&\quad \int _{B_{{\mathbb {R}}}^c}\left| \beta \right| ^p\, (\nu \circ (x^{*})^{-1})({\mathrm {d}}\beta ) <\infty \quad \text {for all }x^*\in E^*, \end{aligned}$$

it follows that Condition (15) is equivalent to

$$\begin{aligned} (L(t)x^* : t\ge 0) \text { is } p\text {-integrable and has finite } p\text {-variation for each } x^*\in E^*. \end{aligned}$$

This is a natural requirement if we want to control the moments, see [2, 16] and Remark 9. Condition (15) implies in particular that the Blumenthal–Getoor index of \((L(t)x^* : t\ge 0)\) is at most p. The interplay between the integrability of the Lévy process and its Blumenthal–Getoor index was observed also in [4, 5].
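
Recall that the Blumenthal–Getoor index of the real-valued Lévy process \((L(t)x^* : t\ge 0)\) is defined by

$$\begin{aligned} \beta _{x^*} := \inf \left\{ r>0 : \int _{B_{{\mathbb {R}}}} \left| \beta \right| ^r \, (\nu \circ (x^{*})^{-1})({\mathrm {d}}\beta ) < \infty \right\} , \end{aligned}$$

so that the finiteness of the first integral in the rewritten form of Condition (15) above indeed implies \(\beta _{x^*}\le p\).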

Example 8

(Gaussian case) Note that if \(p<2\), then L cannot have a Gaussian part for the assertion of Theorem 7 to hold. Indeed, let W be a one-dimensional Brownian motion and suppose for contradiction that

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\int _0^T \Psi (t) \, {\mathrm {d}}W(t)}\right| ^p \right] \le C {\mathbb {E}}\left[ \int _0^T \left| \Psi (t)\right| ^p \, {\mathrm {d}}t\right] \end{aligned}$$
(20)

for some constant C and every real-valued predictable process \(\Psi \) with \({\mathbb {E}}\left[ \int _0^T \left| \Psi (t)\right| ^2 \, {\mathrm {d}}t\right] < \infty \). Choose for each \(n\in {\mathbb {N}}\) the stochastic process \(\Psi _n(t) = \mathbb {1}_{[0,1/n]}(t)\) for \(t \in [0,T]\). By [10, Sec. 3.478], we calculate

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\int _0^T \Psi _n(t) \, {\mathrm {d}}W(t)}\right| ^p \right] = {\mathbb {E}}\left[ \left| {W\left( \frac{1}{n}\right) }\right| ^p \right] = \left( \frac{1}{n} \right) ^{\frac{p}{2}} \frac{2^{\frac{p}{2}} \Gamma \left( \frac{p+1}{2}\right) }{\sqrt{\pi }}. \end{aligned}$$

On the other hand, since \({\mathbb {E}}\left[ \int _0^T \left| \Psi _n(t)\right| ^p \, {\mathrm {d}}t\right] = \frac{1}{n}\), solving (20) for n yields

$$\begin{aligned} n^{1-\frac{p}{2}} \le \frac{C\sqrt{\pi }}{2^\frac{p}{2} \Gamma \left( \frac{p+1}{2}\right) }, \end{aligned}$$

which results in a contradiction by taking the limit as \(n\rightarrow \infty \).

Example 9

(Stable case) Let \(E=L^{p^\prime }({\mathcal O})\) for \(p^\prime =p/(p-1)\) and some \({\mathcal O}\subseteq {\mathbb {R}}^d\). The canonical \(\alpha \)-stable cylindrical Lévy process has the characteristic function \(\phi _{L(1)}(x^*)=\exp (-\left\| x^*\right\| ^\alpha )\) for each \(x^*\in E^*\); see [24]. It follows that the real-valued Lévy process \((L(t)x^*:t\ge 0)\) is symmetric \(\alpha \)-stable with Lévy measure \(\nu \circ (x^*)^{-1}(\mathrm {d}\beta )=c\frac{1}{\left| \beta \right| ^{1+\alpha }} \mathrm {d}\beta \) for a constant \(c>0\). Condition (15) fails to hold since

$$\begin{aligned}&\int _{B_{\mathbb {R}}} \left| \beta \right| ^p \, \nu \circ (x^*)^{-1} ( \mathrm {d}\beta ) = \infty \, \text { for } p\le \alpha , \qquad \\&\quad \int _{B_{\mathbb {R}}^c} \left| \beta \right| ^p \, \nu \circ (x^*)^{-1} (\mathrm {d} \beta ) = \infty \, \text { for } p\ge \alpha . \end{aligned}$$

One can observe in a similar way to the Gaussian case that the stochastic integral operator with respect to the \(\alpha \)-stable cylindrical Lévy process L is not continuous. If \(\Psi _n(t) = \mathbb {1}_{[0,1/n]}(t)\), then in the inequality

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\int _0^T \Psi _n(t) \, \, {\mathrm {d}}L(t)}\right| ^p \right] \le C {\mathbb {E}}\left[ \int _0^T \left| \Psi _n(t)\right| ^p \, {\mathrm {d}}t\right] , \end{aligned}$$
(21)

the left-hand side is infinite for \(p \ge \alpha \). For \(p<\alpha \), we use the self-similarity of the stable processes to calculate

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\int _0^T \Psi _n(t) \, \, {\mathrm {d}}L(t)}\right| ^p \right] = {\mathbb {E}}\left[ \left| {L\left( \frac{1}{n} \right) }\right| ^p \right] = {\mathbb {E}}\left[ \frac{1}{n^{p/\alpha }} \left| L(1)\right| ^p \right] . \end{aligned}$$

Solving (21) for n yields

$$\begin{aligned} n^{(\alpha -p)/\alpha } \le \frac{C}{{\mathbb {E}}\left[ \left| L(1)\right| ^p \right] }, \end{aligned}$$

which results in a contradiction by taking the limit as \(n\rightarrow \infty \).

Therefore, the stochastic integral mapping with respect to the \(\alpha \)-stable process cannot be continuous as a mapping from \(L^p([0,T]\times \Omega ,\mathcal {P},{\mathrm {d}}t \otimes P;{\mathbb {R}})\) to \(L^p(\Omega ,\mathcal {F}_T,P;{\mathbb {R}})\) for any \(p>0\). A moment inequality with different powers on the left- and right-hand sides was proven in the case of real-valued integrands and vector-valued integrators in [25]. They prove for any \(\alpha \)-stable Lévy process L and \(p<\alpha \) that

$$\begin{aligned} {\mathbb {E}}\left[ \bigg ( \sup _{t\le T} \left\| {\int _0^t \Psi (s) \, {\mathrm {d}}L(s)}\right\| \bigg )^p \right] \le C {\mathbb {E}}\left[ \left( \int _0^T \left| \Psi (s)\right| ^\alpha \, {\mathrm {d}}s\right) ^{p/\alpha } \right] . \end{aligned}$$

Example 10

In various publications, e.g. [15, 19, 20, 23], specific examples of the following kind of a cylindrical Lévy process have been studied: let E be a Hilbert space with an orthonormal basis \((e_k)\) and let L be given by

$$\begin{aligned} L(t)x = \sum _{k=1}^\infty \langle x,e_k \rangle \ell _k(t) \qquad \text {for all } x \in E, \end{aligned}$$
(22)

where \((\ell _k)\) is a sequence of independent, one-dimensional Lévy processes \(\ell _k\) with characteristics \((b_k,0,\rho _k)\). Precise conditions under which the sum converges and related results can be found in [23]. In this case, we claim that Condition (15) is satisfied if and only if

$$\begin{aligned} \sum _{k=1}^\infty \left( \int _{\mathbb {R}}\left| \beta \right| ^p \, \rho _k({\mathrm {d}}\beta ) \right) ^{\frac{2}{2-p}} < \infty . \end{aligned}$$

It is shown in [23, Lem. 4.2] that the cylindrical Lévy measure \(\nu \) of L is given by

$$\begin{aligned} \nu (A) = \sum _{k=1}^\infty \rho _k \circ m_{e_k}^{-1}(A) \qquad \text {for all } A\in \mathcal {Z}(E), \end{aligned}$$

where \(m_{e_k} :{\mathbb {R}}\rightarrow E\) is given by \(m_{e_k}(x) = xe_k\). Condition (15) simplifies to

$$\begin{aligned} \int _E \left| \langle y,x \rangle \right| ^p \, \nu ({\mathrm {d}}x) &= \sum _{k=1}^\infty \int _E \left| \langle y,x \rangle \right| ^p \, (\rho _k \circ m_{e_k}^{-1})({\mathrm {d}}x) \\ &= \sum _{k=1}^\infty \left| \langle y,e_k \rangle \right| ^p \int _{\mathbb {R}}\left| \beta \right| ^p \, \rho _k({\mathrm {d}}\beta ) <\infty \end{aligned}$$

for any \(y \in E\). This is equivalent to

$$\begin{aligned} \sum _{k=1}^\infty \alpha _k \int _{\mathbb {R}}\left| \beta \right| ^p \, \rho _k({\mathrm {d}}\beta ) <\infty \qquad \text { for any } (\alpha _k) \in \ell ^{2/p}({\mathbb {R}}_+), \end{aligned}$$

which results in \(\left( \int _{\mathbb {R}}\left| \beta \right| ^p \, \rho _k({\mathrm {d}}\beta ) \right) _{k \in {\mathbb {N}}} \in \ell ^{2/p}({\mathbb {R}})^* = \ell ^{2/(2-p)}({\mathbb {R}})\).
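
For instance, if each \(\ell _k\) is a compensated Poisson process with intensity \(\lambda _k>0\) and unit jumps, that is, \(\rho _k=\lambda _k\delta _1\), then \(\int _{\mathbb {R}}\left| \beta \right| ^p \, \rho _k({\mathrm {d}}\beta )=\lambda _k\), and the condition above reduces to

$$\begin{aligned} \sum _{k=1}^\infty \lambda _k^{\frac{2}{2-p}} < \infty . \end{aligned}$$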

Example 11

Another example is a cylindrical compound Poisson process; see, for example, [1, Ex. 3.5]. These are cylindrical Lévy processes of the form:

$$\begin{aligned} L(t)x^* = \sum _{k=1}^{N(t)} Y_kx^* \qquad \text {for all } x^*\in E^*, \end{aligned}$$

where N is a real-valued Poisson process with intensity \(\lambda \), and \((Y_k)\) is a sequence of identically distributed cylindrical random variables, independent of N, with common cylindrical distribution \(\rho \), say. Since the Lévy measure of \((L(t)x^*:t \ge 0)\) is given by \(\lambda \rho \circ (x^*)^{-1}\), it follows that Condition (15) is satisfied if and only if

$$\begin{aligned} \int _E \left| x^*(x)\right| ^p \, \rho ({\mathrm {d}}x) < \infty \qquad \text {for all } x^* \in E^*. \end{aligned}$$
(23)
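
In particular, if each \(Y_k\) is induced by a genuine E-valued random variable \(X_k:\Omega \rightarrow E\) with \({\mathbb {E}}\left[ \left\| X_1\right\| ^p\right] <\infty \), then Condition (23) is satisfied, since

$$\begin{aligned} \int _E \left| x^*(x)\right| ^p \, \rho ({\mathrm {d}}x) \le \left\| x^*\right\| ^p \int _E \left\| x\right\| ^p \, \rho ({\mathrm {d}}x) = \left\| x^*\right\| ^p \, {\mathbb {E}}\left[ \left\| X_1\right\| ^p\right] \qquad \text {for all } x^* \in E^*. \end{aligned}$$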

Remark 12

If \(p=2\) and E and F are Hilbert spaces, the space of admissible integrands \(\Lambda (\Pi _2(E,F))\) is given by

$$\begin{aligned} \left\{ \Psi :[0,T] \times \Omega \rightarrow L_{\mathrm{HS}}(E,F) : \Psi \text { is predictable and } {\mathbb {E}}\left[ \int _0^T \left\| \Psi (s)\right\| _{L_{\mathrm{HS}}(E,F)}^2 \, {\mathrm {d}}s\right] < \infty \right\} , \end{aligned}$$

as in the work [22]. This is, however, suboptimal, as it is known that in this case, the space of admissible integrands can be enlarged to predictable processes satisfying

$$\begin{aligned} {\mathbb {E}}\left[ {\displaystyle \int _0^T} \left\| \Psi (s)Q^{1/2}\right\| _{L_{\mathrm{HS}}(E,F)}^2 \, {\mathrm {d}}s\right] < \infty , \end{aligned}$$

where Q is the covariance operator associated with L; see [13]. In this way, the space of integrands depends on the Lévy triplet of the integrator. One can ask whether a similar result is possible in our more general setting for \(p<2\) and for Banach spaces by replacing the covariance operator by the quadratic variation.

5 Existence and Uniqueness of Solution

In this section, we apply the developed integration theory to derive the existence of a solution of an evolution equation in a Banach space under standard assumptions. For this purpose, we consider

$$\begin{aligned} \begin{aligned} \, {\mathrm {d}}X(t)&= \big (AX(t) + B(X(t))\big )\, {\mathrm {d}}t+ G\big (X(t)\big )\, {\mathrm {d}}L(t), \\ X(0)&= X_0, \end{aligned} \end{aligned}$$
(24)

where \(X_0\) is an \(\mathcal {F}_0\)-measurable random variable in a Banach space F and the driving noise L is a cylindrical Lévy process in a Banach space E with finite p-th weak moments and finite p-variation. The operator A is the generator of a \(C_0\)-semigroup \((S(t))_{t\ge 0}\) on F, and \(B:F\rightarrow F\) and \(G:F \rightarrow \Pi _p(E,F)\) are some functions.

Definition 13

A mild solution of (24) is a predictable process X such that

$$\begin{aligned} \sup _{t\in [0,T]} {\mathbb {E}}\left[ \left\| X(t)\right\| ^p \right] < \infty \end{aligned}$$
(25)

for some \(p\ge 1\), and such that, for all \(t\in [0,T]\), we have P-a.s.

$$\begin{aligned} X(t) = S(t) X_0 + \int _0^t S(t-s) B(X(s)) \, {\mathrm {d}}s+ \int _0^t S(t-s) G(X(s)) \, {\mathrm {d}}L(s). \end{aligned}$$

We assume Lipschitz and linear growth conditions on the coefficients B and G and an integrability assumption on the initial condition.

Assumption 14

For fixed \(p\in [1,2]\), we assume:

  1. (A1)

    there exists a function \(b\in L^1([0,T];{\mathbb {R}})\) such that for any \(x,x_1,x_2 \in F\) and \(t\in [0,T]\):

    $$\begin{aligned} \left\| S(t)B(x)\right\|&\le b(t)(1+\left\| x\right\| ),\\ \left\| S(t)(B(x_1)-B(x_2))\right\|&\le b(t)\left\| x_1-x_2\right\| . \end{aligned}$$
  2. (A2)

    there exists a function \(g\in L^p([0,T];{\mathbb {R}})\) such that for any \(x,x_1,x_2 \in F\) and \(t\in [0,T]\) (an illustration is given below):

    $$\begin{aligned} \pi _p\big (S(t)G(x)\big )&\le g(t)(1+\left\| x\right\| ), \\ \pi _p\big (S(t)(G(x_1)-G(x_2))\big )&\le g(t)\left\| x_1-x_2\right\| . \end{aligned}$$
  3. (A3)

    \(X_0 \in L^p(\Omega , \mathcal {F}_0,P;F)\).
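
A simple situation in which Assumption (A2) can be verified is the following: suppose that \(G(x)=h(x)\psi \) for a fixed operator \(\psi \in \Pi _p(E,F)\) and a function \(h:F\rightarrow {\mathbb {R}}\) satisfying \(\left| h(x)\right| \le 1+\left\| x\right\| \) and \(\left| h(x_1)-h(x_2)\right| \le \left\| x_1-x_2\right\| \). The ideal property of the p-summing norm yields

$$\begin{aligned} \pi _p\big (S(t)G(x)\big ) = \left| h(x)\right| \pi _p\big (S(t)\psi \big ) \le \left| h(x)\right| \left\| S(t)\right\| \pi _p(\psi ) \le \pi _p(\psi ) \sup _{r\in [0,T]}\left\| S(r)\right\| \, (1+\left\| x\right\| ), \end{aligned}$$

and analogously for the Lipschitz estimate, so that (A2) holds with the constant function \(g(t)=\pi _p(\psi )\sup _{r\in [0,T]}\left\| S(r)\right\| \). More refined estimates of the p-summing norms, exploiting the smoothing effect of the semigroup as in the case of the heat equation, can be found in [3].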

Theorem 15

Let \(p\in [1,2]\) and suppose that the Banach spaces E and F satisfy that

  1. (a)

    E is reflexive or has separable dual, and \(E^{**}\) has the approximation property,

  2. (b)

    F is of martingale type p,

  3. (c)

    if \(p=1\), then F has the Radon–Nikodym property.

If L is a cylindrical Lévy process such that (15) holds for this p, then conditions (A1)–(A3) imply that there exists a unique mild solution of (24).

Proof

We define the space

$$\begin{aligned} \tilde{\mathcal {H}}_{T} := \bigg \{ X:[0,T]\times \Omega \rightarrow F \text { is predictable and } \sup _{t \in [0,T]} {\mathbb {E}}\left[ \left\| X(t)\right\| ^p \right] < \infty \bigg \}, \end{aligned}$$

and a family of seminorms for \(\beta \ge 0\):

$$\begin{aligned} \left\| X\right\| _{T,\beta } := \left( \sup _{t \in [0,T]} e^{-\beta t} {\mathbb {E}}\left[ \left\| X(t)\right\| ^p \right] \right) ^{1/p}. \end{aligned}$$
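
Note that these seminorms are mutually equivalent: since \(e^{-\beta T}\le e^{-\beta t}\le 1\) for \(t\in [0,T]\), we have

$$\begin{aligned} e^{-\beta T/p} \left\| X\right\| _{T,0} \le \left\| X\right\| _{T,\beta } \le \left\| X\right\| _{T,0} \qquad \text {for all } X\in \tilde{\mathcal {H}}_{T} \text { and } \beta \ge 0. \end{aligned}$$

In particular, a contraction with respect to \(\left\| \cdot \right\| _{T,\beta }\) for some \(\beta >0\) yields a fixed point which is unique with respect to \(\left\| \cdot \right\| _{T,0}\).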

Let \(\mathcal {H}_T\) be the set of equivalence classes of elements of \(\tilde{\mathcal {H}}_T\) relative to \(\left\| \cdot \right\| _{T,0}\). Define an operator \(K :\mathcal {H}_T \rightarrow \mathcal {H}_T\) by \(K(X) := K_0(X)+ K_1(X) + K_2(X)\), where

$$\begin{aligned} K_0(X)(t)&:= S(t)X_0, \\ K_1(X)(t)&:= \int _0^t S(t-s) B(X(s)) \, {\mathrm {d}}s, \\ K_2(X)(t)&:= \int _0^t S(t-s) G(X(s)) \, {\mathrm {d}}L(s). \end{aligned}$$

The Bochner integral and the stochastic integral above are well defined because X is predictable and for every \(t\in [0,T]\) the mappings

$$\begin{aligned} {[}0,t]\times F \ni (s,x) \mapsto S(t-s)B(x), \qquad [0,t]\times F \ni (s,x) \mapsto S(t-s)G(x) \end{aligned}$$

are continuous. The appropriate integrability condition follows from (26) and (27).

For applying Banach’s fixed point theorem, we first show that K indeed maps to \(\mathcal {H}_T\). Choose constants \(m\ge 1\) and \(\omega \in {\mathbb {R}}\) such that \(\left\| S(t)\right\| \le m e^{\omega t}\) for each \(t\ge 0\). It follows that

$$\begin{aligned} \sup _{t \in [0,T]} {\mathbb {E}}\left[ \left\| S(t) X_0\right\| ^p \right] \le m^p e^{p\left| \omega \right| T} {\mathbb {E}}\left[ \left\| X_0\right\| ^p \right] < \infty . \end{aligned}$$

By Assumption (A1) and Hölder’s inequality, we obtain with \(q=\frac{p}{p-1}\) that

$$\begin{aligned}&\sup _{t \in [0,T]} {\mathbb {E}}\left[ \left\| {\int _0^t S(t-s)B(X(s)) \, {\mathrm {d}}s}\right\| ^p \right] \nonumber \\&\qquad \le \sup _{t \in [0,T]} {\mathbb {E}}\left[ \left( \int _0^t b(t-s)(1+\left\| X(s)\right\| ) \, {\mathrm {d}}s\right) ^p \right] \nonumber \\&\qquad \le \sup _{t \in [0,T]} {\mathbb {E}}\left[ \left( \int _0^t b(t-s) \, {\mathrm {d}}s\right) ^{p/q}\int _0^t b(t-s)(1+\left\| X(s)\right\| )^p \, {\mathrm {d}}s\right] \nonumber \\&\qquad \le \left( \int _0^T b(s) \, {\mathrm {d}}s\right) ^{p/q} 2^{p-1} (1+ \left\| X\right\| _{T,0}^p) \sup _{t \in [0,T]} \int _0^t b(t-s) \, {\mathrm {d}}s\nonumber \\&\qquad = \left( \int _0^T b(s) \, {\mathrm {d}}s\right) ^{1+p/q} 2^{p-1} (1+ \left\| X\right\| _{T,0}^p) \nonumber \\&\qquad <\infty . \end{aligned}$$
(26)

Similarly, we conclude from Assumption (A2) and Theorem 7 that there exists a constant \(c>0\) such that

$$\begin{aligned} \begin{aligned}&\sup _{t \in [0,T]} {\mathbb {E}}\left[ \left\| \int _0^t S(t-s)G(X(s)) \, {\mathrm {d}}L(s)\right\| ^p \right] \\&\qquad \le c \sup _{t \in [0,T]} {\mathbb {E}}\left[ \int _0^t \pi _p(S(t-s)G(X(s)))^p \, {\mathrm {d}}s\right] \\&\qquad \le c \sup _{t \in [0,T]} {\mathbb {E}}\left[ \int _0^t g(t-s)^p(1+\left\| X(s)\right\| )^p \, {\mathrm {d}}s\right] \\&\qquad \le c 2^{p-1}(1+\left\| X\right\| _{T,0}^p) \int _0^T g(s)^p \, {\mathrm {d}}s\\&\qquad < \infty . \end{aligned} \end{aligned}$$
(27)

Next, we establish that K is stochastically continuous. For this purpose, let \(\epsilon >0\). For each \(t\ge 0\), we obtain

$$\begin{aligned}&{\mathbb {E}}\left[ \left\| K_1(t+\epsilon )-K_1(t)\right\| \right] \\&\quad = {\mathbb {E}}\left[ \left\| \int _0^{t+\epsilon } S(t+\epsilon -s)B(X(s)) \, {\mathrm {d}}s- \int _0^t S(t-s)B(X(s)) \, {\mathrm {d}}s\right\| \right] \\&\quad = {\mathbb {E}}\left[ \left\| \int _t^{t+\epsilon } S(t+\epsilon -s)B(X(s)) \, {\mathrm {d}}s+ \int _0^t (S(\epsilon )-\mathrm {Id})S(t-s)B(X(s)) \, {\mathrm {d}}s\right\| \right] \\&\quad \le {\mathbb {E}}\left[ \int _t^{t+\epsilon } \left\| S(t+\epsilon -s)B(X(s))\right\| \, {\mathrm {d}}s+ \int _0^t \left\| (S(\epsilon )-\mathrm {Id})S(t-s)B(X(s))\right\| \, {\mathrm {d}}s\right] \\&\quad =: I_1 + I_2. \end{aligned}$$

Since \(\left\| X(s)\right\| \le 1+ \left\| X(s)\right\| ^p \) for all \(s\ge 0\), it follows, for \(\epsilon \rightarrow 0\), that

$$\begin{aligned} I_1 \le {\mathbb {E}}\left[ \int _t^{t+\epsilon } b(t+\epsilon -s) (1+\left\| X(s)\right\| ) \, {\mathrm {d}}s\right] \le (2+\left\| X\right\| _{T,0}^p) \int _0^\epsilon b(s) \, {\mathrm {d}}s\rightarrow 0. \end{aligned}$$

With the same estimate \(\left\| X(s)\right\| \le 1+ \left\| X(s)\right\| ^p \), we obtain

$$\begin{aligned} \left\| (S(\epsilon )-\mathrm {Id})S(t-s)B(X(s))\right\|&\le (1+me^{\left| \omega \right| })b(t-s)(1+\left\| X(s)\right\| ) \\&\le (2+\left\| X\right\| _{T,0}^p)(1+me^{\left| \omega \right| }) b(t-s). \end{aligned}$$

Since the integrand of \(I_2\) tends to 0 as \(\epsilon \rightarrow 0\) by the strong continuity of the semigroup, Lebesgue’s dominated convergence theorem shows that \(I_2\) tends to 0 as \(\epsilon \rightarrow 0\).

For \(K_2\), we obtain by Theorem 7 that there exists a constant \(c>0\) such that

$$\begin{aligned}&{\mathbb {E}}\left[ \left\| K_2(t+\epsilon )-K_2(t)\right\| ^p \right] \\&\quad = {\mathbb {E}}\left[ \left\| \int _0^{t+\epsilon } S(t+\epsilon -s)G(X(s)) \, {\mathrm {d}}L(s) - \int _0^t S(t-s)G(X(s)) \, {\mathrm {d}}L(s)\right\| ^p \right] \\&\quad = {\mathbb {E}}\left[ \left\| \int _t^{t+\epsilon } S(t+\epsilon -s)G(X(s)) \, {\mathrm {d}}L(s) \right. \right. \\&\qquad \left. \left. + \int _0^t (S(\epsilon )-\mathrm {Id})S(t-s)G(X(s)) \, {\mathrm {d}}L(s)\right\| ^p \right] \\&\quad \le 2^{p-1} {\mathbb {E}}\left[ \left\| \int _t^{t+\epsilon } S(t+\epsilon -s)G(X(s)) \, {\mathrm {d}}L(s)\right\| ^p \right. \\&\qquad \left. + \left\| \int _0^t (S(\epsilon )-\mathrm {Id})S(t-s)G(X(s)) \, {\mathrm {d}}L(s)\right\| ^p \right] \\&\quad \le c 2^{p-1} {\mathbb {E}}\left[ \int _t^{t+\epsilon } \pi _p(S(t+\epsilon -s)G(X(s)))^p \, {\mathrm {d}}s\right. \\&\qquad \left. + \int _0^t \pi _p((S(\epsilon )-\mathrm {Id})S(t-s)G(X(s)))^p \, {\mathrm {d}}s\right] \\&\quad \le c 2^{p-1} {\mathbb {E}}\left[ \int _t^{t+\epsilon } 2^{p-1} g(t+\epsilon -s)^p (1+\left\| X(s)\right\| ^p) \, {\mathrm {d}}s\right. \\&\qquad \left. + \int _0^t \pi _p((S(\epsilon )-\mathrm {Id})S(t-s)G(X(s)))^p \, {\mathrm {d}}s\right] \\&\quad =: c 2^{p-1}(2^{p-1}J_1 + J_2), \end{aligned}$$

where

$$\begin{aligned} J_1 &= {\mathbb {E}}\left[ \int _t^{t+\epsilon } g(t+\epsilon -s)^p (1+\left\| X(s)\right\| ^p) \, {\mathrm {d}}s\right] \\ &\le (1+\left\| X\right\| _{T,0}^p) \int _t^{t+\epsilon } g(t+\epsilon -s)^p \, {\mathrm {d}}s\rightarrow 0 \end{aligned}$$

as \(\epsilon \rightarrow 0\), and

$$\begin{aligned} J_2 = {\mathbb {E}}\left[ \int _0^t \pi _p(({{\,\mathrm{{\mathrm {Id}}}\,}}-S(\epsilon ))S(t-s)G(X(s)))^p \, {\mathrm {d}}s\right] . \end{aligned}$$

By Theorem 3, the integrand \(\pi _p(({{\,\mathrm{{\mathrm {Id}}}\,}}-S(\epsilon ))S(t-s)G(X(s)))^p\) converges to 0 as \(\epsilon \rightarrow 0\) for all \(s\in [0,t]\) and \(\omega \in \Omega \). Moreover, it is bounded by \((1+me^{\left| \omega \right| })^p g(t-s)^p(1+\left\| X(s)\right\| )^p\), which is \({\mathrm {d}}t\otimes P\)-integrable. Thus, Lebesgue’s theorem on dominated convergence implies that \(J_2\rightarrow 0\) as \(\epsilon \rightarrow 0\), which completes the proof of the stochastic continuity of K. In particular, stochastic continuity guarantees the existence of a predictable modification of K by [18, Prop. 3.21]. In summary, we obtain that K maps \(\mathcal {H}_T\) to \(\mathcal {H}_T\).

For applying Banach’s fixed point theorem, it is enough to show that K is a contraction for some \(\beta \). We have

$$\begin{aligned} \left\| K(X_1) - K(X_2)\right\| _{T,\beta }^p \le 2^{p-1} \left( \left\| K_1(X_1)-K_1(X_2)\right\| _{T,\beta }^p + \left\| K_2(X_1)-K_2(X_2)\right\| _{T,\beta }^p \right) . \end{aligned}$$

For the part corresponding to the drift, we calculate similarly to [18, Th. 9.29]

$$\begin{aligned}&\left\| K_1(X_1)-K_1(X_2)\right\| _{T,\beta }^p \\&\quad \le \sup _{t \in [0,T]} e^{-\beta t} {\mathbb {E}}\left[ \left( \int _0^t b(t-s) \left\| X_1(s)-X_2(s)\right\| \, {\mathrm {d}}s\right) ^p \right] \\&\quad = \sup _{t \in [0,T]} e^{-\beta t} {\mathbb {E}}\left[ \left( \int _0^t b(t-s)^{1/q} b(t-s)^{1/p} \left\| X_1(s)-X_2(s)\right\| \, {\mathrm {d}}s\right) ^p \right] \\&\quad \le \left( \int _0^T b(s) \, {\mathrm {d}}s\right) ^{p/q} \sup _{t \in [0,T]} e^{-\beta t} \int _0^t b(t-s) {\mathbb {E}}\left[ \left\| X_1(s)-X_2(s)\right\| ^p \right] \, {\mathrm {d}}s\\&\quad = \left( \int _0^T b(s) \, {\mathrm {d}}s\right) ^{p/q} \sup _{t \in [0,T]} e^{-\beta t} \int _0^t b(t-s)e^{\beta s} e^{-\beta s}{\mathbb {E}}\left[ \left\| X_1(s)-X_2(s)\right\| ^p \right] \, {\mathrm {d}}s\\&\quad \le \left( \int _0^T b(s) \, {\mathrm {d}}s\right) ^{p/q} \left\| X_1-X_2\right\| _{T,\beta }^p \sup _{t \in [0,T]} \int _0^t b(t-s)e^{-\beta (t-s)} \, {\mathrm {d}}s\\&\quad = C(\beta ) \left\| X_1-X_2\right\| _{T,\beta }^p \end{aligned}$$

with \(C(\beta )= \left( \int _0^T b(s) \, {\mathrm {d}}s\right) ^{p/q} \int _0^T b(s)e^{-\beta s} \, {\mathrm {d}}s\rightarrow 0\) as \(\beta \rightarrow \infty \).

In the following calculation for the part corresponding to the diffusion, we use in the first inequality the continuity of the stochastic integral formulated in Theorem 7:

$$\begin{aligned}&\left\| K_2(X_1)-K_2(X_2)\right\| _{T,\beta }^p\\&\quad \le c \sup _{t \in [0,T]} e^{-\beta t} {\mathbb {E}}\left[ \int _0^t \pi _p( S(t-s)(G(X_1(s))-G(X_2(s))))^p \, {\mathrm {d}}s\right] \\&\quad \le c \sup _{t \in [0,T]} e^{-\beta t} {\mathbb {E}}\left[ \int _0^t g(t-s)^p \left\| X_1(s)-X_2(s)\right\| ^p \, {\mathrm {d}}s\right] \\&\quad = c \sup _{t \in [0,T]} e^{-\beta t} {\mathbb {E}}\left[ \int _0^t g(t-s)^p e^{\beta s} e^{-\beta s}\left\| X_1(s)-X_2(s)\right\| ^p \, {\mathrm {d}}s\right] \\&\quad \le c \left\| X_1-X_2\right\| _{T,\beta }^p \sup _{t \in [0,T]} \int _0^t e^{-\beta (t-s)} g(t-s)^p \, {\mathrm {d}}s\\&\quad = C'(\beta ) \left\| X_1-X_2\right\| _{T,\beta }^p, \end{aligned}$$

where \(C'(\beta ) = c \int _0^T e^{-\beta s} g(s)^p \, {\mathrm {d}}s\rightarrow 0\) as \(\beta \rightarrow \infty \). Consequently, Banach’s fixed point theorem implies that there exists a unique \(X\in \mathcal {H}_T\) such that \(K(X)=X\) which completes the proof. \(\square \)

Remark 16

Note that if E and F are Hilbert spaces, then they satisfy the assumptions of Theorem 15; see, for example, [27, Cor. 1, p. 109]. Thus, for \(p=2\), we recover [22].

Remark 17

For processes of the form (22), the integrability assumption can be relaxed to include, for example, stable processes in the same way as in [14] where the existence of variational solutions is demonstrated. The details can be found in the PhD thesis of the first author.