1 Introduction

There are many works studying the regularity of different kinds of stochastic noise. Oftentimes, regularity results are formulated in terms of Besov spaces. Classical results on the Hölder regularity of sample paths of a Brownian motion have been improved using Besov spaces and Besov–Orlicz spaces in [8, 9]. Similar results have been obtained for Feller processes in [31,32,33], for a summary see [6, Section 5.5], and for Brownian motions with values in Banach spaces in [21]. Closely related to these works are characterizations of the Besov regularity of white noise. For a Gaussian white noise on the torus, such characterizations are given in [45]. Lévy white noise on the torus was studied in [15]. Global regularity results for Gaussian and Lévy white noise are given in [4, 14].

Most of these works have in common that the regularity results are sharp, up to minor improvements in some of the references. For an n-dimensional Gaussian white noise, for example, it is shown that it has smoothness exactly or almost \(-\frac{n}{2}\), but not more than \(-\frac{n}{2}\), depending on the scale of isotropic function spaces. In particular, regularity seems to get worse with increasing dimension. The aim of this paper is to show that these results can be improved for Gaussian as well as Lévy white noise if one works with spaces of dominating mixed smoothness. Roughly speaking, the following result states that an n-dimensional Gaussian white noise has local smoothness \(-\frac{1}{2}-\varepsilon \) separately in each direction, while previous results state that it has regularity \(-\frac{n}{2}\) simultaneously in all directions.

Theorem 1.1

Let \(1<p<\infty \) and \(\varepsilon ,T>0\). Then the restriction of an n-dimensional Gaussian white noise on \({\mathbb {R}}^n\) to \([0,T]^n\) has a modification \(\eta \) such that

$$\begin{aligned} {\mathbb {P}}\left( \eta \in S^{\left(-\tfrac{1}{2}-\varepsilon ,\ldots ,-\tfrac{1}{2}-\varepsilon \right)}_{p,p}B([0,T]^n)\right) =1. \end{aligned}$$

In this theorem, \(S^{(-\tfrac{1}{2}-\varepsilon ,\ldots ,-\tfrac{1}{2}-\varepsilon )}_{p,p}B([0,T]^n)\) denotes a Besov space with dominating mixed smoothness. It can be identified with the iterated Besov space

$$\begin{aligned} B^{-\tfrac{1}{2}-\varepsilon }_{p,p}\bigg ([0,T];\;B^{-\tfrac{1}{2}-\varepsilon }_{p,p}\big ([0,T];\ldots B^{-\tfrac{1}{2}-\varepsilon }_{p,p}([0,T])\ldots \big )\bigg ) \end{aligned}$$

and with the tensor product

$$\begin{aligned} B^{-\tfrac{1}{2}-\varepsilon }_{p,p}([0,T])\otimes _{\alpha _p}\cdots \otimes _{\alpha _p}B^{-\tfrac{1}{2}-\varepsilon }_{p,p}([0,T]), \end{aligned}$$

which is defined as the closure of the algebraic tensor product with respect to the so-called p-nuclear tensor norm. We will explain these identifications later in this paper.
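The gain can be made plausible by the following heuristic counting argument, which is not part of the formal proofs below. In an \(L_2\)-normalized wavelet characterization of \(B^s_{p,p}([0,1]^n)\), a Gaussian white noise has i.i.d. standard normal coefficients and there are \(2^{jn}\) of them at level \(j\in {\mathbb {N}}_0\), so the expected contribution of level j to the p-th power of the norm is

$$\begin{aligned} 2^{jp\left( s+\frac{n}{2}-\frac{n}{p}\right) }\cdot 2^{jn}\cdot {\mathbb {E}}[|N|^p]=2^{jp\left( s+\frac{n}{2}\right) }\,{\mathbb {E}}[|N|^p]\quad (N\sim {\mathcal {N}}(0,1)), \end{aligned}$$

which is summable over j precisely when \(s<-\frac{n}{2}\). In a hyperbolic (tensor) wavelet characterization of \(S^{(s,\ldots ,s)}_{p,p}B([0,1]^n)\), the levels are indexed by \(\overline{j}=(j_1,\ldots ,j_n)\in {\mathbb {N}}_0^n\) and there are only \(2^{j_1+\cdots +j_n}\) coefficients per level, so the same computation gives the expected contribution \(2^{(j_1+\cdots +j_n)p(s+\frac{1}{2})}\,{\mathbb {E}}[|N|^p]\), which is summable precisely when \(s<-\frac{1}{2}\).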

If one component is viewed as time, then a white noise is also sometimes called space-time white noise. In this case, it can also be insightful to split space and time in the description of the smoothness. This way, we obtain that a Gaussian space-time white noise has smoothness \(-\tfrac{1}{2}\) in time and \(-\frac{n-1}{2}-\varepsilon \) in (the \((n-1)\)-dimensional) space. More precisely, we have the following result:

Theorem 1.2

Let \(1<p,{\widetilde{p}}<\infty \) and \(\varepsilon ,T>0\). Then an n-dimensional Gaussian white noise on \({\mathbb {R}}^n\) has a modification \(\eta \) such that

$$\begin{aligned} {\mathbb {P}}\left( \eta \in B^{-1/2}_{{\widetilde{p}},\infty }\big ([0,T];\,B^{-\frac{n-1}{2}-\varepsilon }_{p,p}({\mathbb {R}}^{n-1},\langle \cdot \rangle ^{1-n-\varepsilon })\big )\right) =1. \end{aligned}$$

Here, \(\langle \xi \rangle ^{1-n-\varepsilon }:=(1+|\xi |^2)^{\frac{1-n-\varepsilon }{2}}\) is a weight function. The interval [0, T] corresponds to the time direction, while \({\mathbb {R}}^{n-1}\) corresponds to the space direction.

Note that, compared to Theorem 1.1, we can now also include growth bounds in space. Theorem 1.2 can be useful if one studies parabolic partial differential equations driven by noise. We will illustrate this by deriving regularity results for the heat equation with Dirichlet and Neumann boundary noise. Power weights were the main tool for analyzing solutions of equations with boundary noise in previous works such as [1, 7, 10, 36]. These weights measure the distance to the boundary and are well suited to describe the singularities of solutions at the boundary. Our approach, however, adds more flexibility to the description of these singularities, as it allows one to treat regularity in time, tangential and normal directions separately. It will also enable us to analyze the behavior of solutions at the boundary in spaces of higher regularity.
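The counting heuristic above can also be probed numerically. The following sketch is only an illustration and not part of our arguments; it assumes that the coefficients of a two-dimensional Gaussian white noise against an \(L_2\)-orthonormal tensor wavelet basis of \([0,1]^2\) are i.i.d. standard normal and it ignores boundary corrections, so the computed quantity is merely a wavelet-type proxy for \(\Vert \eta \Vert _{S^{(s,s)}_{p,p}B([0,1]^2)}^p\).

```python
import numpy as np

# Monte Carlo proxy (illustration only) for the mixed Besov norm of a 2-d Gaussian
# white noise: against an L_2-orthonormal tensor wavelet basis the noise has i.i.d.
# N(0,1) coefficients, and a wavelet-type proxy for the p-th power of the
# S^{(s,s)}_{p,p}B-norm is
#   sum_{j1,j2} 2^{(j1+j2) p (s + 1/2 - 1/p)} * sum_k |c_{(j1,j2),k}|^p,
# with 2^{j1+j2} coefficients k per level pair (boundary terms ignored).

rng = np.random.default_rng(0)
p = 2.0

def mixed_norm_proxy(s, jmax):
    total = 0.0
    for j1 in range(jmax + 1):
        for j2 in range(jmax + 1):
            c = rng.standard_normal(2 ** (j1 + j2))      # i.i.d. N(0,1) coefficients
            weight = 2.0 ** ((j1 + j2) * p * (s + 0.5 - 1.0 / p))
            total += weight * np.sum(np.abs(c) ** p)
    return total

for s in (-0.75, -0.25):   # s < -1/2 should stabilize, s > -1/2 should blow up
    print(f"s = {s}:", [round(mixed_norm_proxy(s, J), 1) for J in (4, 6, 8, 10)])
```

For \(s<-\tfrac{1}{2}\) the partial sums stabilize as the maximal level grows, while for \(s>-\tfrac{1}{2}\) they keep growing, in line with Theorem 1.1.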

This paper is structured as follows:

  • In Sect. 2, we introduce weighted Besov spaces with dominating mixed smoothness, Lévy white noise and vector-valued Lévy processes and cite the most important results we need throughout the paper. While most of the results are well-known, it seems like the description of the dual spaces of Besov spaces with dominating mixed smoothness on the domain \([0,T]^n\) given in Proposition 2.15 has not been available in the literature before.

  • Section 3 is the main part of this paper. Therein, we derive regularity results for Lévy white noise in spaces with dominating mixed smoothness.

  • As an application of some of our results, we derive new regularity properties of the solutions of Poisson and heat equation with Dirichlet and Neumann boundary noise in Sect. 4.

1.1 Notations and assumptions

We write \({\mathbb {N}}=\{1,2,\ldots \}\) for the natural numbers starting from 1 and \({\mathbb {N}}_0=\{0,1,2,\ldots \}\) for the natural numbers starting from 0. Throughout the paper, we take \(n\in {\mathbb {N}}\) and write

$$\begin{aligned} {\mathbb {R}}^n_+:=\{x=(x_1,\ldots ,x_n)\in {\mathbb {R}}^n: x_n>0\}. \end{aligned}$$

If \(n=1\), we also just write \({\mathbb {R}}_+:={\mathbb {R}}^1_+\). Given a real number \(x\in {\mathbb {R}}\), we write

$$\begin{aligned} x_+:=[x]_+:=\max \{0,x\}. \end{aligned}$$

The Bessel potential will be denoted by

$$\begin{aligned} \langle x\rangle :=(1+|x|^2)^{1/2}\quad (x\in {\mathbb {R}}^n). \end{aligned}$$

Given a Banach space E, we will write \(E'\) for its topological dual. By \({\mathscr {D}}({\mathbb {R}}^n;\,E)\), \({\mathscr {S}}({\mathbb {R}}^n;\,E)\) and \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\,E)\), we denote the spaces of E-valued test functions, E-valued Schwartz functions and E-valued tempered distributions, respectively. If \(E\in \{{\mathbb {R}},{\mathbb {C}}\}\), then we will omit it in the notation. On \({\mathscr {S}}({\mathbb {R}}^n;\,E)\), we define the Fourier transform

$$\begin{aligned} ({\mathscr {F}}f)(\xi ):=\frac{1}{(2\pi )^{n/2}}\int _{{\mathbb {R}}^n} e^{-ix\xi } f(x)\,{\mathrm{d}}x\quad (f\in {\mathscr {S}}({\mathbb {R}}^n;\;E)). \end{aligned}$$

As usual, we extend it to \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\;E)\) by \([{\mathscr {F}}u](f):=u({\mathscr {F}}f)\) for \(u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\;E)\) and \(f\in {\mathscr {S}}({\mathbb {R}}^n)\). Given two topological spaces X, Y, we write \(X\hookrightarrow Y\) if there is a canonical continuous embedding. We write \(X{\mathop {\hookrightarrow }\limits ^{d}} Y\) if the range of this embedding is dense in Y. If \(E_0\) and \(E_1\) are two locally convex spaces, then the space of continuous linear operators from \(E_0\) to \(E_1\) will be denoted by \({\mathcal {B}}(E_0,E_1)\). If \(E_0=E_1\), then we also write \({\mathcal {B}}(E_0)\).

Throughout the paper, we will assume that \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) is a complete probability space.

2 Preliminaries

2.1 Weights

A weight w on \({\mathbb {R}}^n\) is a measurable function \(w:{\mathbb {R}}^n\rightarrow [0,\infty ]\) which takes values in \((0,\infty )\) almost everywhere with respect to the Lebesgue measure. There are several interesting classes of weights one can consider.

Definition 2.1

Let \(w:{\mathbb {R}}^n\rightarrow [0,\infty ]\) be a weight.

  1. (a)

    We say that w is an admissible weight if \(w\in C^{\infty }({\mathbb {R}}^n;(0,\infty ))\) with the following properties:

    1. (i)

      For all \(\alpha \in {\mathbb {N}}_0^n\), there is a constant \(C_{\alpha }\) such that

      $$\begin{aligned} |D^{\alpha }w(x)|\le C_{\alpha } w(x)\quad \text {for all }x\in {\mathbb {R}}^n. \end{aligned}$$
      (1)
    2. (ii)

      There are two constants \(C>0\) and \(s\ge 0\) such that

      $$\begin{aligned} 0<w(x)\le C w(y)\langle x-y\rangle ^s\quad \text {for all }x,y\in {\mathbb {R}}^n. \end{aligned}$$
      (2)

    We write \(W({\mathbb {R}}^n)\) for the set of all admissible weights on \({\mathbb {R}}^n\).

  2. (b)

    Let \(1<p<\infty \). Then, w is called an \(A_p\) weight if

    $$\begin{aligned} {[}w]_{A_p}=\sup _{Q\text { cube in }{\mathbb {R}}^n}\left( \frac{1}{{\text {Leb}}_n(Q)}\int _Q w(x)\,{\mathrm{d}}x\right) \left( \frac{1}{{\text {Leb}}_n(Q)}\int _Q w(x)^{-\frac{1}{p-1}}\,{\mathrm{d}}x\right) ^{p-1}<\infty . \end{aligned}$$

    The set of all \(A_p\) weights on \({\mathbb {R}}^n\) will be denoted by \(A_p({\mathbb {R}}^n)\). Moreover, we write \(A_{\infty }({\mathbb {R}}^n):=\bigcup _{1<p<\infty } A_p({\mathbb {R}}^n)\). Such weights are also called Muckenhoupt weights.

  3. (c)

    Let \(1<p<\infty \). Then, w is called an \(A_p^{\rm{loc}}\) weight if

    $$\begin{aligned} [w]_{A_p^{\rm{loc}}}=\sup _{Q\text { cube in }{\mathbb {R}}^n,\,{\text {Leb}}_n(Q)\le 1}\left( \tfrac{1}{{\text {Leb}}_n(Q)}\int _Q w(x)\,{\mathrm{d}}x\right) \left( \tfrac{1}{{\text {Leb}}_n(Q)}\int _Q w(x)^{-\frac{1}{p-1}}\,{\mathrm{d}}x\right) ^{p-1}<\infty . \end{aligned}$$

    The set of all \(A_p^{\rm{loc}}\) weights on \({\mathbb {R}}^n\) will be denoted by \(A_p^{\rm{loc}}({\mathbb {R}}^n)\). Moreover, we write \(A_{\infty }^{\rm{loc}}({\mathbb {R}}^n):=\bigcup _{1<p<\infty } A_p^{\rm{loc}}({\mathbb {R}}^n)\). Such weights are also called local Muckenhoupt weights.

Remark 2.2

The class of local Muckenhoupt weights \(A_{\infty }^{\rm{loc}}({\mathbb {R}}^n)\) was introduced in [30] with the aim of unifying Littlewood–Paley theories for function spaces with admissible weights and Muckenhoupt weights. Accordingly, we have that \(W({\mathbb {R}}^n)\cup A_{\infty }({\mathbb {R}}^n)\subset A_{\infty }^{\rm{loc}}({\mathbb {R}}^n)\).

Example

In this paper, we will mainly work with weights of the form

$$\begin{aligned} \langle \,\cdot \,\rangle ^{\rho }:{\mathbb {R}}^n\rightarrow {\mathbb {R}},\,\xi \mapsto (1+|\xi |^2)^{\rho /2} \end{aligned}$$

for some \(\rho \in {\mathbb {R}}\). It will be important for us to which class of weights this function belongs for different choices of \(\rho \in {\mathbb {R}}\).

  1. (a)

    For all \(\rho \in {\mathbb {R}}\), we have that \(\langle \,\cdot \,\rangle ^{\rho }\in W({\mathbb {R}}^n)\), i.e. \(\langle \,\cdot \,\rangle ^{\rho }\) is an admissible weight. This can either be computed directly or one can use the following abstract arguments which in turn are based on simple direct computations:

    For (1) one can recall that \(\langle \,\cdot \,\rangle ^{\rho }\) is the standard example of a so-called Hörmander symbol of order \(\rho \), see for example [23, Chapter 2, §1, Example 2]. Thus, we even have

    $$\begin{aligned} |D^{\alpha } \langle \xi \rangle ^{\rho }|\le C_{\alpha ,\rho } \langle \xi \rangle ^{\rho -|\alpha |} \end{aligned}$$

    which trivially implies (1). In (2) one can take \(C=2^{|\rho |}\) and \(s=|\rho |\) by Peetre’s inequality, see for example [29, Proposition 3.3.31].

  2. (b)

    It holds that \(\langle \,\cdot \,\rangle ^{\rho }\in A_p({\mathbb {R}}^n)\) if and only if \(-n<\rho <(p-1)n\). Again, one can directly verify this for example by a similar computation as in [17, Example 9.1.7]. We also refer to [18, Example 1.3] where this has been observed for the equivalent weight

    $$\begin{aligned} w_{0,\rho }(\xi ):={\left\{ \begin{array}{ll} 1&{}\text {if }|\xi |\le 1,\\ |\xi |^{\rho }&{}\text {if }|\xi |\ge 1. \end{array}\right. } \end{aligned}$$
  3. (c)

    It follows directly from part (b) that \(\langle \,\cdot \,\rangle ^{\rho }\in A_{\infty }({\mathbb {R}}^n)\) if and only if \(-n<\rho \).
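The threshold in part (b) can be observed numerically. The following sketch is only an illustration for \(n=1\) and \(p=2\): it evaluates the \(A_p\) product over the symmetric intervals \(Q=[-R,R]\), with a midpoint Riemann sum standing in for the integrals. For \(\rho \) inside \((-1,1)=(-n,(p-1)n)\) the product stays bounded as R grows, while outside this range it grows without bound, so the supremum over all cubes cannot be finite.

```python
import numpy as np

# Numerical illustration (not a proof) of the A_p condition for w = <.>^rho on R with
# p = 2: the A_p product over Q = [-R, R] stays bounded for -1 < rho < 1 = (p-1)*n
# and grows without bound otherwise. Integrals are midpoint Riemann sums.

def ap_product(rho, R, p=2.0, m=200_000):
    x = np.linspace(-R, R, m, endpoint=False) + R / m        # midpoints of the cells
    w = (1.0 + x ** 2) ** (rho / 2.0)
    mean_w = np.mean(w)                                       # (1/|Q|) int_Q w dx
    mean_w_dual = np.mean(w ** (-1.0 / (p - 1.0)))            # (1/|Q|) int_Q w^{-1/(p-1)} dx
    return mean_w * mean_w_dual ** (p - 1.0)

for rho in (-1.5, 0.5, 1.5):
    values = [round(ap_product(rho, R), 2) for R in (10, 100, 1000, 10_000)]
    print(f"rho = {rho:+.1f}:", values)
```

Only the middle choice \(\rho =0.5\) produces a bounded sequence, matching part (b).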

Definition 2.3

Let E be a Banach space, \(w:{\mathbb {R}}^n\rightarrow [0,\infty ]\) a weight and \(1\le p\le \infty \). Then, the weighted Lebesgue–Bochner space \(L_p({\mathbb {R}}^n,w;\;E)\) is defined as the space of all strongly measurable functions \(f:{\mathbb {R}}^n\rightarrow E\) such that

$$\begin{aligned} \Vert f \Vert _{L_p({\mathbb {R}}^n,w;\;E)}:=\left( \int _{{\mathbb {R}}^n} \Vert f(x)\Vert _{E}^p w(x)\,{\mathrm{d}}x\right) ^{1/p}<\infty \end{aligned}$$

with the usual modification for \(p=\infty \). As usual, functions which coincide on sets of measure 0 are considered as equal.

Remark 2.4

For this work, it is important to note that there are different conventions in the literature concerning the definition of weighted Lebesgue–Bochner spaces. Oftentimes, the expression \(\Vert f \Vert _{L_p({\mathbb {R}}^n,w;\;E)}\) is defined by \(\Vert wf \Vert _{L_p({\mathbb {R}}^n;\;E)}\), whereas in our case, it is defined by \(\Vert w^{1/p}f \Vert _{L_p({\mathbb {R}}^n;\;E)}\). Unfortunately, we will have to refer to some articles which use one convention and to others which use the other. Thus, we will explicitly mention if a certain reference does not use the convention of Definition 2.3.

2.2 Weighted function spaces with dominating mixed smoothness

As general references for the theory of spaces with dominating mixed smoothness, we would like to mention [34, 43, 46]. These spaces are mainly used in approximation theory. They can also be used to study boundary value problems with rough boundary data, see [20]. Our aim here is to derive sharper regularity results for the sample paths of white noise.

In this section, let \(l\in {\mathbb {N}}\) and \(d=(d_1,\ldots ,d_l)\in {\mathbb {N}}^l\) with \(d_1+\cdots +d_l=n\). We write \({\mathbb {R}}^n_d\) if we split \({\mathbb {R}}^n\) according to \(d\), i.e.

$$\begin{aligned} {\mathbb {R}}^n_d:={\mathbb {R}}^{d_1}\times \cdots \times {\mathbb {R}}^{d_l}. \end{aligned}$$

Moreover, if we have such a splitting then for \(x\in {\mathbb {R}}^n_d\) we write \(x=(x_{1,d},\ldots ,x_{l,d})\) with \(x_{j,d}\in {\mathbb {R}}^{d_j}\), \(j=1,\ldots ,l\).

Definition 2.5

  1. (a)

    Let \(\varphi _0\in {\mathscr {D}}({\mathbb {R}}^n)\) be a smooth function with compact support such that \(0\le \varphi _0\le 1\),

    $$\begin{aligned} \varphi _0(\xi )=1\quad \text {if }|\xi |\le 1,\quad \varphi _0(\xi )=0\quad \text {if }|\xi |\ge 3/2. \end{aligned}$$

    For \(\xi \in {\mathbb {R}}^n\) and \(k\in {\mathbb {N}}\) let further

    $$\begin{aligned} \varphi (\xi )&:=\varphi _0(\xi )-\varphi _{0}(2\xi ),\\ \varphi _k(\xi )&:=\varphi (2^{-k}\xi ). \end{aligned}$$

    We call such a sequence \((\varphi _k)_{k\in {\mathbb {N}}_0}\) a smooth dyadic resolution of unity and write \(\Phi ({\mathbb {R}}^n)\) for the space of all such sequences.

  2. (b)

    Let E be a Banach space. To a smooth dyadic resolution of unity \((\varphi _k)_{k\in {\mathbb {N}}_0}\in \Phi ({\mathbb {R}}^n)\), we associate the sequence of operators \((S_k)_{k\in {\mathbb {N}}_0}\) on the space of tempered distributions \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\;E)\) by means of

    $$\begin{aligned} S_kf:={\mathscr {F}}^{-1}\varphi _k{\mathscr {F}} f\quad (f\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\;E)). \end{aligned}$$

    The sequence \((S_kf)_{k\in {\mathbb {N}}_0}\) is called the dyadic decomposition of f.

  3. (c)

    For \(j\in \{1,\ldots ,l\}\) let \((\varphi ^{(j)}_{k_j})_{k_j\in {\mathbb {N}}_0}\in \Phi ({\mathbb {R}}^{d_j})\) be a smooth dyadic resolution of unity on \({\mathbb {R}}^{d_j}\). Then we define

    $$\begin{aligned} \varphi _{\overline{k}}:=\bigotimes _{j=1}^l \varphi ^{(j)}_{k_j},\quad S_{\overline{k}}={\mathscr {F}}^{-1}\varphi _{\overline{k}}{\mathscr {F}}\quad (\overline{k}=(k_1,\ldots ,k_l)\in {\mathbb {N}}_0^l). \end{aligned}$$

    We write \(\Phi ({\mathbb {R}}^n_d)\) for the set of all such \((\varphi _{\overline{k}})_{\overline{k}\in {\mathbb {N}}_0^l}\).
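The construction of Definition 2.5 can be illustrated numerically. The following sketch is only an illustration in dimension one: it builds a smooth dyadic resolution of unity as in part (a) and applies the operators \(S_k\) from part (b) to a sample function, with the FFT on a periodic grid standing in for the Fourier transform on \({\mathbb {R}}\).

```python
import numpy as np

# Numerical sketch (illustration only) of Definition 2.5 in dimension one: build a
# smooth dyadic resolution of unity (phi_k) and apply S_k f = F^{-1} phi_k F f, with
# the FFT on a periodic grid standing in for the Fourier transform on R.

def bump_piece(s):
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    pos = s > 0
    out[pos] = np.exp(-1.0 / s[pos])       # C^infty, vanishes to all orders at s = 0
    return out

def phi0(xi):
    # smooth, 0 <= phi0 <= 1, phi0 = 1 for |xi| <= 1 and phi0 = 0 for |xi| >= 3/2
    t = np.clip((np.abs(xi) - 1.0) / 0.5, 0.0, 1.0)
    return bump_piece(1.0 - t) / (bump_piece(1.0 - t) + bump_piece(t))

def phi(xi, k):
    return phi0(xi) if k == 0 else phi0(2.0 ** (-k) * xi) - phi0(2.0 ** (-k + 1) * xi)

N = 2 ** 12
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
xi = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi    # integer frequencies
f = np.sign(np.sin(x))                                     # a function with jumps
Fhat = np.fft.fft(f)

blocks = [np.fft.ifft(phi(xi, k) * Fhat).real for k in range(13)]
print("reconstruction error:", np.max(np.abs(f - sum(blocks))))
print("L_2 size of the blocks:",
      [round(float(np.linalg.norm(b) / np.sqrt(N)), 3) for b in blocks[:8]])
```

The blocks sum back to f up to rounding errors, and their \(L_2\) size decays roughly like \(2^{-k/2}\), reflecting the limited smoothness of the jump function.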

Definition 2.6

Let \(w:{\mathbb {R}}^n\rightarrow [0,\infty ]\) be a weight, E a Banach space, \((\varphi _{\overline{k}})_{\overline{k}\in {\mathbb {N}}_0^l}\in \Phi ({\mathbb {R}}^n_d)\), \(\overline{s}=(s_1,\ldots ,s_l)\in {\mathbb {R}}^l\) and \(p,q\in [1,\infty ]\).

  1. (a)

    The Besov space with dominating mixed smoothness \(S^{\overline{s}}_{p,q}B({\mathbb {R}}^n_d,w;\;E)\) is defined as the space of all tempered distributions \(f\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n_d;\;E)\) such that

    $$\begin{aligned} \Vert f\Vert _{S^{\overline{s}}_{p,q}B({\mathbb {R}}^n_d,w;\;E)}:=\bigg (\sum _{\overline{k}\in {\mathbb {N}}_0^l}2^{q\overline{s}\cdot \overline{k}}\Vert S_{\overline{k}} f\Vert ^q_{L_p({\mathbb {R}}^n_d,w;\;E)}\bigg )^{1/q}<\infty \end{aligned}$$

    with the usual modification for \(q=\infty \).

  2. (b)

    The respective space on some domain \({\mathcal {O}}_d\subset {\mathbb {R}}^n_d\) is defined by restriction:

    $$\begin{aligned} S^{\overline{s}}_{p,q}B({\mathcal {O}}_d,w;\;E):=\{ f\vert _{{\mathcal {O}}_d}:f\in S^{\overline{s}}_{p,q}B({\mathbb {R}}^n_d,w;\;E)\} \end{aligned}$$

    and

    $$\begin{aligned} \Vert f \Vert _{S^{\overline{s}}_{p,q}B({\mathcal {O}}_d,w;\;E)}:=\inf _{g\in S^{\overline{s}}_{p,q}B({\mathbb {R}}^n_d,w;\;E),\,g\vert _{{\mathcal {O}}_d}=f}\Vert g\Vert _{S^{\overline{s}}_{p,q}B({\mathbb {R}}^n_d,w;\;E)}. \end{aligned}$$
  3. (c)

    The space \(S^{\overline{s}}_{p,q,0}B({\mathcal {O}}_d,w;\;E)\) is defined as the closure of the space \({\mathscr {S}}_0({\mathcal {O}}_d)\) of Schwartz functions with support in \(\overline{{\mathcal {O}}_d}\) in the space \(S^{\overline{s}}_{p,q}B({\mathbb {R}}^n_d,w;\;E)\).

Remark 2.7

  1. (a)

    If \(l=1\), then we obtain the usual definition of isotropic weighted vector-valued Besov spaces. In this case, following the usual convention we write \(B^s_{p,q}\) and \(B^s_{p,q,0}\) instead of \(S^s_{p,q}B\) and \(S^s_{p,q,0}B\), respectively.

  2. (b)

    It is intentional that in the definition of \(S^s_{p,q,0}B({\mathcal {O}}_d,w;\;E)\) we take the closure in the space \(S^s_{p,q}B({\mathbb {R}}^n_d,w;\;E)\) and not in \(S^s_{p,q}B({\mathcal {O}}_d,w;\;E)\). Even in the isotropic case, there is a subtle difference between the two definitions for \(s-\frac{1}{p}\in {\mathbb {N}}_0\) . We refer for example to [39, Section 4.3.2], where this is carefully discussed for isotropic spaces. Therein, the spaces \({\widetilde{B}}^s_{p,q}\) correspond to the definition with the closure in \(B^s_{p,q}({\mathbb {R}}^n,w;\;E)\), while \(\mathring{B}^s_{p,q}\) corresponds to the definition with the closure in \(B^s_{p,q}({\mathcal {O}},w;\;E)\).

  3. (c)

    There are special representations if \(p=q<\infty \). For example, it was shown in [37] that for \(l=n\), we have the tensor product representation

    $$\begin{aligned} S^{\overline{s}}_{p,p}B({\mathbb {R}}^n_d)\cong B^{s_1}_{p,p}({\mathbb {R}})\otimes _{\alpha _p} S^{(s_2,\ldots ,s_n)}_{p,p}B({\mathbb {R}}^{n-1}_{(d_2,\ldots ,d_n)}) \cong B^{s_1}_{p,p}({\mathbb {R}})\otimes _{\alpha _p}\cdots \otimes _{\alpha _p} B^{s_n}_{p,p}({\mathbb {R}}), \end{aligned}$$

    where the tensor product is the closure of the unique tensor product on tempered distributions in the sense of [37, Lemma B.3] with respect to the p-nuclear tensor norm \(\alpha _p\), see [37, Appendix B]. For two Banach spaces \(E_1, E_2\) the p-nuclear tensor norm is defined by

    $$\begin{aligned}&\alpha _p(h,E_1,E_2)\\&\quad := \inf \left\{ \left( \sum _{j=1}^N \Vert x_j\Vert _{E_1}^p\right) ^{1/p}\cdot \sup \bigg \{\left( \sum _{j=1}^N|\lambda (y_j)|^{p'}\right) ^{1/p'}:\lambda \in E_2',\ \Vert \lambda \Vert _{E_2'}\le 1\bigg \}\right\} , \end{aligned}$$

    where \(p'\) denotes the conjugated Hölder index and where the infimum is taken over all representations \(h=\sum _{j=1}^N x_j\otimes y_j\) for \(N\in {\mathbb {N}}\), \(x_{1},\ldots ,x_N\in E_1\) and \(y_{1},\ldots ,y_N\in E_2\).

  4. (d)

    In a certain parameter range, one can also view a Besov space with dominating mixed smoothness as a Besov space with values in another Besov space. Since it seems like this has not been formulated in the literature so far, we make this more precise in the following.
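Before making this precise, we record a quick sanity check for the p-nuclear tensor norm from (c); the computation is elementary and not taken from [37]. For an elementary tensor \(h=x\otimes y\), the representation with \(N=1\) immediately gives \(\alpha _p(x\otimes y,E_1,E_2)\le \Vert x\Vert _{E_1}\Vert y\Vert _{E_2}\). Conversely, for \(\lambda \in E_1'\), \(\mu \in E_2'\) with \(\Vert \lambda \Vert _{E_1'},\Vert \mu \Vert _{E_2'}\le 1\) and any representation \(h=\sum _{j=1}^N x_j\otimes y_j\), Hölder's inequality yields

$$\begin{aligned} |\lambda (x)\mu (y)|=\bigg |\sum _{j=1}^N \lambda (x_j)\mu (y_j)\bigg |\le \left( \sum _{j=1}^N \Vert x_j\Vert _{E_1}^p\right) ^{1/p}\sup _{\Vert {\widetilde{\mu }}\Vert _{E_2'}\le 1}\left( \sum _{j=1}^N|{\widetilde{\mu }}(y_j)|^{p'}\right) ^{1/p'}, \end{aligned}$$

so that \(\alpha _p(x\otimes y,E_1,E_2)\ge \Vert x\Vert _{E_1}\Vert y\Vert _{E_2}\) after taking suprema over \(\lambda \) and \(\mu \). Hence \(\alpha _p\) is a cross-norm, i.e. \(\alpha _p(x\otimes y,E_1,E_2)=\Vert x\Vert _{E_1}\Vert y\Vert _{E_2}\).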

Theorem 2.8

Let E be a reflexive Banach space and \(l=2\). Then there are unique isomorphisms

$$\begin{aligned} I_1:{\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\;E)\rightarrow {\mathcal {B}}({\mathscr {S}}({\mathbb {R}}^{d_1}_{x_{1,d}}),{\mathscr {S\,}}^{\prime}({\mathbb {R}}^{d_2}_{x_{2,d}};\;E)),\\ I_2:{\mathscr {S\,}}^{\prime}({\mathbb {R}}^n;\;E)\rightarrow {\mathcal {B}}({\mathscr {S}}({\mathbb {R}}^{d_2}_{x_{2,d}}),{\mathscr {S\,}}^{\prime}({\mathbb {R}}^{d_1}_{x_{1,d}};\;E)), \end{aligned}$$

such that for all \(u\in \mathscr {S'}({\mathbb {R}}^n;\;E)\) and all \(\varphi _1\in {\mathscr {S}}({\mathbb {R}}^{d_1}_{x_{1,d}}),\varphi _2\in {\mathscr {S}}({\mathbb {R}}^{d_2}_{x_{2,d}})\) it holds that

$$\begin{aligned}{}[[I_1(u)](\varphi _1)](\varphi _2)=u(\varphi _1\otimes \varphi _2)=[[I_2(u)](\varphi _2)](\varphi _1). \end{aligned}$$

Proof

This is one of the kernel theorems from [3, Appendix, Theorem 1.8.9]. \(\square \)

Proposition 2.9

Let E be a Banach space, \(\overline{s}=(s_1,s_2)\in {\mathbb {R}}^2\) and let \(w_j:{\mathbb {R}}^{d_j}\rightarrow [0,\infty ]\) \((j=1,2)\) be weights. Suppose that \(w=w_1\otimes w_2\) and that \(1< p<\infty \). The mappings \(I_1,I_2\) from Theorem 2.8 yield the following isomorphisms:

$$\begin{aligned} B^{s_1}_{pp}\big ({\mathbb {R}}^{d_1}_{x_{1,d}},w_1;\;B^{s_2}_{pp}({\mathbb {R}}^{d_2}_{x_{2,d}},w_2;\;E)\big )&{\mathop {\cong }\limits ^{I_1}}S^{\overline{s}}_{p,p}B({\mathbb {R}}^n_d,w,E)\\&{\mathop {\cong }\limits ^{I_2}}B^{s_2}_{pp}\big ({\mathbb {R}}^{d_2}_{x_{2,d}},w_2;\;B^{s_1}_{pp}({\mathbb {R}}^{d_1}_{x_{1,d}},w_1;\;E)\big ). \end{aligned}$$

Proof

The assertion follows from Theorem 2.8 and

$$\begin{aligned} \Vert f\Vert _{S^{\overline{s}}_{p,p}B({\mathbb {R}}^n_d,w;\;E)}^p&=\sum _{\overline{k}\in {\mathbb {N}}_0^2}2^{p{\overline{s}}\cdot {\overline{k}}}\int _{{\mathbb {R}}^{d_1}}\int _{{\mathbb {R}}^{d_2}} \Vert S_{\overline{k}}f(x)\Vert _E^pw_2(x_{2,d})\,{\mathrm{d}}x_{2,d}\,w_1(x_{1,d})\,{\mathrm{d}}x_{1,d}\\&=\sum _{k_1\in {\mathbb {N}}_0}2^{ps_1k_1}\int _{{\mathbb {R}}^{d_1}}\sum _{k_2\in {\mathbb {N}}_0}2^{ps_2k_2}\int _{{\mathbb {R}}^{d_2}} \Vert S_{k_2}S_{k_1}f(x)\Vert _E^pw_2(x_{2,d})\,{\mathrm{d}}x_{2,d}\,w_1(x_{1,d})\,{\mathrm{d}}x_{1,d}\\&=\sum _{k_1\in {\mathbb {N}}_0}2^{ps_1k_1}\int _{{\mathbb {R}}^{d_1}}\Vert S_{k_1}f(x_{1,d},\,\cdot \,)\Vert ^p_{B^{s_2}_{pp}({\mathbb {R}}^{d_2},w_2;\;E)}w_1(x_{1,d})\,{\mathrm{d}}x_{1,d}\\&=\Vert f\Vert _{B^{s_1}_{pp}({\mathbb {R}}^{d_1}_{x_{1,d}},w_1;\;B^{s_2}_{pp}({\mathbb {R}}^{d_2}_{x_{2,d}},w_2;\;E))}^p. \end{aligned}$$

\(\square \)

Remark 2.10

  1. (a)

    In Theorem 2.8 and Proposition 2.9, we took \(l=2\) only for notational convenience. The same arguments also work for \(l\in \{3,\ldots ,n\}\).

  2. (b)

    In this work, we frequently use the representation in Proposition 2.9 of Besov spaces with dominating mixed smoothness. In the following, we omit the isomorphisms \(I_1\) and \(I_2\) in the notation and consider the spaces in Proposition 2.9 as equal.

Corollary 2.11

Let \(T>0\), \(l=n\), \(\overline{s}=(s_1,\ldots ,s_n)\in {\mathbb {R}}^n\) and \(p\in [1,\infty )\). Then we have the isomorphisms

$$\begin{aligned}&B^{s_1}_{p,p}([0,T];\;B^{s_2}_{p,p}([0,T];\ldots B^{s_l}_{p,p}([0,T])\ldots ))\cong S^{\overline{s}}_{p,p}B([0,T]^{n})\\&\quad \cong B^{s_1}_{p,p}([0,T])\otimes _{\alpha _p}\cdots \otimes _{\alpha _p} B^{s_n}_{p,p}([0,T]). \end{aligned}$$

Proof

With [0, T] replaced by \({\mathbb {R}}\), these are the statements of Proposition 2.9 and Remark 2.7 (c). Thus, the assertion follows by composing the isomorphisms with a suitable extension operator and the restriction to \([0,T]^n\). \(\square \)

Proposition 2.12

Let \(1<p,q<\infty \), \(s\in {\mathbb {R}}\) and let \(w:{\mathbb {R}}^n\rightarrow (0,\infty )\) be an admissible weight. Let further \(p',q'\in (1,\infty )\) be the conjugated Hölder indices of p and q, respectively. Then, we have

$$\begin{aligned} (B^s_{p,q}({\mathbb {R}}^n,w))'=B^{-s}_{p',q'}({\mathbb {R}}^n,w^{1-p'}). \end{aligned}$$

Proof

This result is taken from [35, Chapter 5.1.2]. Note, however, that therein a different convention concerning the notation of weighted spaces is used. The space \(B^s_{p,q}({\mathbb {R}}^n,w)\) in the notation of [35] corresponds to \(B^s_{p,q}({\mathbb {R}}^n,w^p)\) in our notation. Note also that the weights being considered in [35] are even much more general than the admissible weights we consider here. \(\square \)

Lemma 2.13

Let \(T>0\), \(1<p<\infty \), \(l=n\) and \(\overline{s}=(s_1,\ldots ,s_n)\in {\mathbb {R}}^n\). Then we have that \({\mathscr {S}}_0([0,T]^n)\) is dense in \(B^{s_1}_{p,p,0}([0,T];\;B^{s_2}_{p,p,0}([0,T];\ldots B^{s_n}_{p,p,0}([0,T])\ldots ))\).

Proof

It holds that the algebraic tensor product \({\mathscr {S}}_0([0,T]^{n-1})\otimes B^{s_n}_{p,p,0}([0,T];\;E)\) is dense in \({\mathscr {S}}_0([0,T]^{n-1};\;B^{s_n}_{p,p,0}([0,T];\;E))\) for a given Banach space E, see for example [2, Theorem 1.3.6]. On the other hand, \({\mathscr {S}}_0([0,T];\;E)\) is by definition dense in \(B^{s_n}_{p,p,0}([0,T];\;E)\). Thus, we have the dense embeddings

$$\begin{aligned}&{\mathscr {S}}_0([0,T]^{n-1})\otimes {\mathscr {S}}_0([0,T];\;E) {\mathop {\hookrightarrow }\limits ^{d}} {\mathscr {S}}_0([0,T]^{n-1})\otimes B^{s_n}_{p,p,0}([0,T];\;E) \\&\quad {\mathop {\hookrightarrow }\limits ^{d}}{\mathscr {S}}_0([0,T]^{n-1};\;B^{s_n}_{p,p,0}([0,T];\;E)). \end{aligned}$$

Since

$$\begin{aligned} {\mathscr {S}}_0([0,T]^{n-1})\otimes {\mathscr {S}}_0([0,T];\;E)\subset {\mathscr {S}}_0([0,T]^n;\;E)\subset {\mathscr {S}}_0([0,T]^{n-1};\;B^{s_n}_{p,p,0}([0,T];\;E)) \end{aligned}$$

we obtain that

$$\begin{aligned} {\mathscr {S}}_0([0,T]^n;\;E){\mathop {\hookrightarrow }\limits ^{d}} {\mathscr {S}}_0([0,T]^{n-1};\;B^{s_n}_{p,p,0}([0,T];\;E)). \end{aligned}$$

Repeating the same argument for the space \({\mathscr {S}}_0([0,T]^{n-1};\;B^{s_n}_{p,p,0}([0,T];\;E))\) instead of \({\mathscr {S}}_0([0,T]^n;\;E)\) and iterating it, we obtain the assertion. \(\square \)

Corollary 2.14

Let \(T>0\), \(1<p<\infty \), \(l=n\) and \(\overline{s}=(s_1,\ldots ,s_n)\in {\mathbb {R}}^n\). Then we have that

$$\begin{aligned} B^{s_1}_{p,p,0}([0,T];\;B^{s_2}_{p,p,0}([0,T];\ldots B^{s_n}_{p,p,0}([0,T])\ldots ))\cong S^{\overline{s}}_{p,p,0}B([0,T]^{n}) \end{aligned}$$

where the isomorphism is the same as in Corollary 2.11.

Proof

By iteration, we define \(R_n:={\mathscr {S}}_0([0,T])\) and

$$\begin{aligned} R_{j-1}:=\{u\in {\mathscr {S}}_0([0,T];\,\,S^{(s_j,\ldots ,s_n)}_{p,p}B([0,T]^{n+1-j}))\;\vert \; \forall t\in [0,T]:u(t)\in R_j\} \end{aligned}$$

for \(j=2,\ldots ,n\). Then, we have \({\mathscr {S}}_0([0,T]^n)\subset R_1\) so that it follows together with Lemma 2.13 that

$$\begin{aligned}&B^{s_1}_{p,p,0}([0,T];\ldots B^{s_n}_{p,p,0}([0,T];\;E) \ldots )\subset \overline{{\mathscr {S}}_0([0,T]^n)}\subset \overline{R_1}\\&\quad \subset B^{s_1}_{p,p,0}([0,T];\ldots B^{s_n}_{p,p,0}([0,T];\;E)\ldots ), \end{aligned}$$

where the closures are taken with respect to the topology of the iterated Besov space \( B^{s_1}_{p,p,0}([0,T];\ldots B^{s_n}_{p,p,0}([0,T];\;E)\ldots )\). Hence, we have that

$$\begin{aligned} B^{s_1}_{p,p,0}([0,T];\ldots B^{s_n}_{p,p,0}([0,T];\;E)\ldots )=\overline{{\mathscr {S}}_0([0,T]^n)} \end{aligned}$$

On the other hand, \(S^{\overline{s}}_{p,p,0}B([0,T]^{n})\) is defined as the closure of \({\mathscr {S}}_0([0,T]^n)\) and thus, the assertion follows. \(\square \)

Proposition 2.15

Let \(T>0\), \(1<p<\infty \), \(p'\) the conjugated Hölder index, \(l=n\) and \(\overline{s}=(s_1,\ldots ,s_n)\in {\mathbb {R}}^n\). Then we have that

$$\begin{aligned} (S^{\overline{s}}_{p,p}B([0,T]^{n}))'\cong S^{-\overline{s}}_{p',p',0}B([0,T]^{n}),\quad (S^{\overline{s}}_{p,p,0}B([0,T]^{n}))'\cong S^{-\overline{s}}_{p',p'}B([0,T]^{n}). \end{aligned}$$

Proof

It follows from Corollary 2.11 together with Corollary 2.14 that we may show the assertion on the level of iterated Besov spaces. Since they are defined by iteration, it suffices to show the assertion for the usual isotropic but vector-valued Besov spaces on [0, T], i.e. it suffices to show that

$$\begin{aligned} (B^{s}_{p,p}([0,T];\;E))'=B^{-s}_{p',p',0}([0,T];\;E'),\quad (B^{s}_{p,p,0}([0,T];\;E))'=B^{-s}_{p',p'}([0,T];\;E'), \end{aligned}$$

where E is a reflexive Banach space. For these relations, we refer to [3, Chapter VII, Theorem 2.8.4] or [25, Theorem 11]. Even though the former reference considers different domains and the latter treats the scalar-valued situation, their extension-restriction methods also work in our setting. \(\square \)

Proposition 2.16

Let \(s,\rho \in {\mathbb {R}}\) and \(1\le p,q<\infty \). Then the mapping

$$\begin{aligned} B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\rightarrow B^{s}_{p,q}({\mathbb {R}}^n),\;f\mapsto \langle \,\cdot \,\rangle ^{\rho /p}f \end{aligned}$$

is an isomorphism.

Proof

Recall that \(\langle \,\cdot \,\rangle ^{\rho /p}\) is an admissible weight. This proposition actually holds for all admissible weights, see for example [42, Theorem 6.5]. Note that in this reference a different convention concerning the notation of weighted spaces is used. \(\square \)

Theorem 2.17

Let \(p_0,p_1,q_0,q_1\in [1,\infty )\), \(s_0,s_1\in {\mathbb {R}}\) and \(w_0,w_1\in A_{\infty }^{\rm{loc}}({\mathbb {R}}^n)\). Let further \(\theta \in (0,1)\) and

$$\begin{aligned} s=(1-\theta )s_0+\theta s_1,\quad \frac{1}{p}=\frac{1-\theta }{p_0}+\frac{\theta }{p_1},\quad \frac{1}{q}=\frac{1-\theta }{q_0}+\frac{\theta }{q_1},\quad w=w_0^{\frac{(1-\theta )p}{p_0}}w_1^{\frac{\theta p}{p_1}}. \end{aligned}$$

Then, we have that

$$\begin{aligned} {[}B^{s_0}_{p_0,q_0}({\mathbb {R}}^n,w_0),B^{s_1}_{p_1,q_1}({\mathbb {R}}^n,w_1)]_{\theta }=B^{s}_{p,q}({\mathbb {R}}^n,w), \end{aligned}$$

where \([\cdot ,\cdot ]_{\theta }\) denotes the complex interpolation functor. In particular, it holds that

$$\begin{aligned} B^{s_0}_{p_0,q_0}({\mathbb {R}}^n,w_0) \cap B^{s_1}_{p_1,q_1}({\mathbb {R}}^n,w_1) \subset B^{s}_{p,q}({\mathbb {R}}^n,w) \end{aligned}$$

Proof

This is part of the statement of [38, Theorem 4.5]. \(\square \)

In one proof, we also need Bessel potential spaces as a technical tool.

Definition 2.18

Let \(\overline{s}=(s_1,\ldots ,s_l)\in {\mathbb {R}}^l\) and \(p\in (1,\infty )\). Then we define \(S^{\overline{s}}_pH({\mathbb {R}}^n_d)\) by

$$\begin{aligned} S^{\overline{s}}_pH({\mathbb {R}}^n_d):=\bigg \{f\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n): {\mathscr {F}}^{-1}\prod _{j=1}^l\langle \xi _{j,d}\rangle ^{s_j}{\mathscr {F}}f\in L_p({\mathbb {R}}^n)\bigg \} \end{aligned}$$

and endow it with the norm

$$\begin{aligned} \Vert f \Vert _{S^{\overline{s}}_pH({\mathbb {R}}^n_d)}:= \bigg \Vert {\mathscr {F}}^{-1}\prod _{j=1}^l\langle \xi _{j,d}\rangle ^{s_j}{\mathscr {F}}f\bigg \Vert _{L_p({\mathbb {R}}^n)}. \end{aligned}$$

If \(l=1\), then we obtain the standard isotropic Bessel potential spaces and write \(H^{s}_p({\mathbb {R}}^n)\) instead.

For Bessel potential spaces, we have the following embeddings: for some \(s\in {\mathbb {R}}\) let \(\overline{s}=(s,\ldots ,s)\in {\mathbb {R}}^n\). Then, we have

$$\begin{aligned} H^{sn}_p({\mathbb {R}}^n)\hookrightarrow S^{\overline{s}}_pH({\mathbb {R}}^n)\hookrightarrow H^{s}_p({\mathbb {R}}^n). \end{aligned}$$
(3)

This can for example be found in [34, (1.7)] and [43, (1.554)]. If \(p\in (1,2]\), then we have

$$\begin{aligned} B^{s}_{p,p}({\mathbb {R}}^n)\hookrightarrow H^s_p({\mathbb {R}}^n) \end{aligned}$$
(4)

for which we refer to [5, Theorem 6.4.4]. Moreover, for all \(\varepsilon >0\), we have that

$$\begin{aligned} H^s_p({\mathbb {R}}^n)\hookrightarrow B^{s-\varepsilon }_{p,p}({\mathbb {R}}^n), \end{aligned}$$
(5)

which can be obtained as a combination of [40, Section 2.3.2, Proposition 2] and [5, Theorem 6.4.4]. As for Besov spaces, we have the tensor product representation

$$\begin{aligned} S^{\overline{s}'}_pH({\mathbb {R}}^{n-1}_{d'})\otimes _{\alpha _p}H^{s}_p({\mathbb {R}})\cong S^{\overline{s}}_pH({\mathbb {R}}^{n}_d), \end{aligned}$$
(6)

where \(\overline{s}'=(s,\ldots ,s)\in {\mathbb {R}}^{n-1}\), \(\overline{s}=(s,\ldots ,s)\in {\mathbb {R}}^{n}\), \(d'=(1,\ldots ,1)\in {\mathbb {N}}^{n-1}\) and \(d=(1,\ldots ,1)\in {\mathbb {N}}^{n}\). This has also been derived in [37].

2.3 Lévy white noise

Now, we briefly introduce Lévy white noise as a generalized random process and collect some of the known properties. In the following, \(\nu \) will be a Lévy measure, i.e. a measure on \({\mathbb {R}}{\setminus }\{0\}\) such that \(\int _{{\mathbb {R}}{\setminus }\{0\}} \min \{1, x^2\}\,{\mathrm{d}}\nu (x)<\infty \). Moreover, we take \(\gamma \in {\mathbb {R}}\) and \(\sigma ^2\ge 0\). We call the triplet \((\gamma ,\sigma ^2,\nu )\) a Lévy triplet and the function

$$\begin{aligned} \Psi (\xi ):= i\gamma \xi -\frac{\sigma ^2\xi ^2}{2}+\int _{{\mathbb {R}}{\setminus }\{0\}} (e^{ix\xi }-1-i\xi x\mathbb {1}_{|x|\le 1})\,{\mathrm{d}}\nu (x) \end{aligned}$$

is called the Lévy exponent corresponding to the Lévy triplet \((\gamma ,\sigma ^2,\nu )\). Functions of the form \(\exp \circ \Psi \) for some Lévy exponent \(\Psi \) are exactly the characteristic functions of infinitely divisible random variables. An important special case is the one with Lévy triplet (0, 1, 0), as in this case \(\exp \circ \Psi \) is the characteristic function of a Gaussian random variable with mean 0 and variance 1. This case will lead to a standard Gaussian white noise later in Definition 2.23.
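Indeed, for the triplet (0, 1, 0) the integral term vanishes and

$$\begin{aligned} \Psi (\xi )=-\frac{\xi ^2}{2},\qquad \exp (\Psi (\xi ))=e^{-\xi ^2/2}={\mathbb {E}}\big [e^{i\xi X}\big ]\quad \text {for }X\sim {\mathcal {N}}(0,1). \end{aligned}$$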

We endow the space of tempered distributions \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)\) with the cylindrical \(\sigma \)-field \({\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n))\) generated by the cylindrical sets, i.e. sets of the form

$$\begin{aligned} \{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n): (\langle u,\varphi _1\rangle ,\ldots , \langle u,\varphi _N\rangle )\in B\} \end{aligned}$$

for some \(N\in {\mathbb {N}}\), \(\varphi _1,\ldots ,\varphi _N\in {\mathscr {S}}({\mathbb {R}}^n)\) and some Borel set \(B\in {\mathcal {B}}({\mathbb {R}}^N)\). We will also consider \(({\mathscr {S\,}}^{\prime}({\mathcal {O}}),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathcal {O}})))\) for certain domains \({\mathcal {O}}\subset {\mathbb {R}}^n\). We define this by restriction. More precisely, we write \({\mathscr {S}}_0({\mathcal {O}})\) for the closed subspace of \({\mathscr {S}}({\mathbb {R}}^n)\) which consists of functions with support in \(\overline{{\mathcal {O}}}\). \({\mathscr {S\,}}^{\prime}({\mathcal {O}})\) is defined by

$$\begin{aligned} {\mathscr {S\,}}^{\prime}({\mathcal {O}}):=\{ u\vert _{{\mathscr {S}}_0({\mathcal {O}})} : u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n) \}, \end{aligned}$$

and \({\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathcal {O}}))\) is the \(\sigma \)-field generated by sets of the form

$$\begin{aligned} \{u\in {\mathscr {S\,}}^{\prime}({\mathcal {O}}): (\langle u,\varphi _1\rangle ,\ldots , \langle u,\varphi _N\rangle )\in B\} \end{aligned}$$

for some \(N\in {\mathbb {N}}\), \(\varphi _1,\ldots ,\varphi _N\in {\mathscr {S}}_0({\mathcal {O}})\) and some Borel set \(B\in {\mathcal {B}}({\mathbb {R}}^N)\). We also just write

$$\begin{aligned} u\vert _{{\mathcal {O}}}:=u\vert _{{\mathscr {S}}_0({\mathcal {O}})}\quad (u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)). \end{aligned}$$

This way, the mapping \(u\mapsto u\vert _{{\mathcal {O}}}\) is a measurable mapping

$$\begin{aligned} ({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)))\rightarrow ({\mathscr {S\,}}^{\prime}({\mathcal {O}}),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathcal {O}}))). \end{aligned}$$

Definition 2.19

Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space. A generalized random process s is a measurable function

$$\begin{aligned} s:(\Omega ,{\mathcal {F}})\rightarrow ({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n))). \end{aligned}$$

The pushforward measure \({\mathbb {P}}_s\) defined by

$$\begin{aligned} {\mathbb {P}}_s(B):={\mathbb {P}}(s^{-1}(B))\quad (B\in {\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n))) \end{aligned}$$

is called the probability law of s. Moreover, the characteristic functional \(\widehat{{\mathbb {P}}}_s\) of s is defined by

$$\begin{aligned} \widehat{{\mathbb {P}}}_s(\varphi ):=\int _{{\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)} \exp (i\langle u , \varphi \rangle )\,d{\mathbb {P}}_s(u). \end{aligned}$$

We will write \(s(\omega )\) for the tempered distribution at \(\omega \in \Omega \) and \(\langle s, \varphi \rangle \) for the random variable which one obtains by testing s against the Schwartz function \( \varphi \in {\mathscr {S}}({\mathbb {R}}^n)\).

In certain situations, we also speak of a generalized random process if there is merely a null set \(N\subset \Omega \) such that \(s\vert _{\Omega {\setminus } N}\) takes values in \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)\). But since we assume our probability space to be complete, we may modify every measurable mapping \(f:(\Omega ,{\mathcal {F}})\rightarrow (M,{\mathcal {A}})\) into a measurable space \((M,{\mathcal {A}})\) on an arbitrary null set without affecting its measurability. Thus, for our purposes, we can neglect the difference between a generalized random process and a mapping which is a generalized random process only after some change on a null set. This also applies to the following definition:

Definition 2.20

Let

$$\begin{aligned} s_1,s_2:(\Omega ,{\mathcal {F}})\rightarrow ({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n))) \end{aligned}$$

be two generalized random processes. We say that \(s_2\) is a modification of \(s_1\), if

$$\begin{aligned} {\mathbb {P}}(\langle s_1,\varphi \rangle =\langle s_2,\varphi \rangle )=1 \end{aligned}$$

for all \(\varphi \in {\mathscr {S}}({\mathbb {R}}^n)\).

Similar to Bochner’s theorem for random variables, the Bochner–Minlos theorem gives a necessary and sufficient condition for a mapping \(C:{\mathscr {S}}({\mathbb {R}}^n)\rightarrow {\mathbb {C}}\) to be the characteristic functional of a generalized random process.

Theorem 2.21

(Bochner–Minlos) A mapping \(C:{\mathscr {S}}({\mathbb {R}}^n)\rightarrow {\mathbb {C}}\) is the characteristic functional of a generalized random process if and only if C is continuous, \(C(0)=1\) and C is positive definite, i.e. for all \(N\in {\mathbb {N}}\), all \(z_1,\ldots , z_N\in {\mathbb {C}}\), and all \(\varphi _1,\ldots ,\varphi _N\in {\mathscr {S}}({\mathbb {R}}^n)\) it holds that

$$\begin{aligned} \sum _{j,k=1}^N z_j\overline{z_k}C(\varphi _j-\varphi _k)\ge 0. \end{aligned}$$

Remark 2.22

  1. (a)

    The Bochner–Minlos theorem also holds if \({\mathscr {S}}({\mathbb {R}}^n)\) is replaced by another nuclear space such as the space of test functions \({\mathscr {D}}({\mathbb {R}}^n)\). It seems that the Bochner–Minlos theorem was first formulated and proved in [24].

  2. (b)

    An important example of a characteristic functional is given by

    $$\begin{aligned} C(\varphi ):=\exp \left( \int _{{\mathbb {R}}^n}\Psi (\varphi (x))\,{\mathrm{d}}x \right) \end{aligned}$$

    for a Lévy exponent \(\Psi \). This is always a characteristic functional on the space of test functions \({\mathscr {D}}({\mathbb {R}}^n)\), see for example [16, Chapter III, Theorem 5]. However, this is not always true for the Schwartz space \({\mathscr {S}}({\mathbb {R}}^n)\). In fact, C is a characteristic functional on \({\mathscr {S}}({\mathbb {R}}^n)\) if and only if it has positive absolute moments, i.e. if there is an \(\varepsilon >0\) such that \({\mathbb {E}}[|X|^{\varepsilon }]<\infty \), where X is an infinitely divisible random variable corresponding to the Lévy exponent \(\Psi \). We refer the reader to [13, Theorem 3] for the sufficiency and to [11] for the necessity.

Definition 2.23

Let \((\gamma ,\sigma ^2,\nu )\) be a Lévy triplet such that the corresponding infinitely divisible random variable has positive absolute moments. A Lévy white noise \(\eta :\Omega \rightarrow {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)\) with Lévy triplet \((\gamma ,\sigma ^2,\nu )\) is the generalized random process with characteristic functional

$$\begin{aligned} \widehat{{\mathbb {P}}}_{\eta }(\varphi )=\exp \left( \int _{{\mathbb {R}}^n} \Psi (\varphi (x))\,{\mathrm{d}}x\right) \quad (\varphi \in {\mathscr {S}}({\mathbb {R}}^n)). \end{aligned}$$

If we speak of a Lévy white noise on a domain \({\mathcal {O}}\subset {\mathbb {R}}^n\), then we mean that it is given by \(\eta \vert _{{\mathcal {O}}}\) for a Lévy white noise \(\eta \) on \({\mathbb {R}}^n\).

Remark 2.24

From a modeling point of view, there are some minimal requirements a random process has to satisfy in order to be called a white noise. For example, a white noise should satisfy the following properties, and the white noise from Definition 2.23 indeed does:

  1. (a)

    A white noise is invariant under Euclidean motions in the sense that for \(f\in {\mathscr {D}}({\mathbb {R}}^n)\) and for a Euclidean motion A the random variables \(\langle \eta ,f\rangle \) and \(\langle \eta ,f\circ A\rangle \) have the same distribution. This can for example be seen by comparing their characteristic functions:

    $$\begin{aligned} {\mathbb {E}}[e^{i\xi \langle \eta ,f\rangle }]&=\exp \bigg (\int _{{\mathbb {R}}^n}\Psi (\xi f(x))\,{\mathrm{d}}x\bigg )\\&=\exp \bigg (\int _{{\mathbb {R}}^n}\Psi (\xi f(Ax))\,{\mathrm{d}}x\bigg )={\mathbb {E}}[e^{i\xi \langle \eta ,f\circ A\rangle }]\quad (\xi \in {\mathbb {R}}). \end{aligned}$$

    For the representation of the characteristic function, see for example [27, Theorem 2.7 (iv)].

  2. (b)

    The random variables \(\langle \eta ,f\rangle \) and \(\langle \eta ,g\rangle \) are independent if \(f,g\in {\mathscr {D}}({\mathbb {R}}^n)\) have disjoint supports. Indeed, if f, g have disjoint supports, then \(\Psi (f+g)=\Psi (f) +\Psi (g)\) and, therefore,

    $$\begin{aligned} {\mathbb {E}}[e^{i(\xi _1\langle \eta ,f\rangle +\xi _2\langle \eta ,g\rangle )}]&=\exp \bigg (\int _{{\mathbb {R}}^n}\Psi (\xi _1 f(x)+\xi _2g(x))\,{\mathrm{d}}x\bigg )\\&=\exp \bigg (\int _{{\mathbb {R}}^n}\Psi (\xi _1 f(x))+\Psi (\xi _2g(x))\,{\mathrm{d}}x\bigg ) ={\mathbb {E}}[e^{i\xi _1\langle \eta ,f\rangle }]{\mathbb {E}}[e^{i\xi _2\langle \eta ,g\rangle }] \end{aligned}$$
  3. (c)

    If second moments exist, then we have the relation

    $$\begin{aligned} {\text {cov}}(\langle \eta ,f\rangle ,\langle \eta ,g\rangle )=\left( \sigma ^2+\int _{{\mathbb {R}}{\setminus }\{0\}}z^2\,{\mathrm{d}}\nu (z)\right) \langle f,g \rangle _{L_2({\mathbb {R}}^n)}\quad (f,g\in {\mathscr {D}}({\mathbb {R}}^n)). \end{aligned}$$

    It seems that this has not been stated in this form for Lévy white noise in the literature before. We therefore refer the reader to the author's Ph.D. thesis [19, Proposition 3.33].
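In the Gaussian case \((0,1,0)\), the identity in (c) is easy to probe numerically. The following discretized sketch is only an illustration: the integral of the white noise over a grid cell of area \({\mathrm{d}}A\) is modeled as an independent \({\mathcal {N}}(0,{\mathrm{d}}A)\) variable, and \(\langle \eta ,f\rangle \) is approximated by the corresponding Riemann sum.

```python
import numpy as np

# Discretized sanity check (illustration only, Gaussian triplet (0, 1, 0)) of the
# covariance identity cov(<eta,f>, <eta,g>) = <f,g>_{L_2([0,1]^2)}: the integral of
# white noise over a grid cell of area dA is modeled as an independent N(0, dA) variable.

rng = np.random.default_rng(1)
m = 64                                          # grid points per axis on [0,1]^2
dA = (1.0 / m) ** 2
xs = (np.arange(m) + 0.5) / m
X, Y = np.meshgrid(xs, xs, indexing="ij")
f = (X * Y).ravel()
g = (X + Y).ravel()

ef, eg = [], []
for _ in range(200):                            # 20 000 samples in memory-friendly chunks
    W = np.sqrt(dA) * rng.standard_normal((100, m * m))   # cell integrals of the noise
    ef.append(W @ f)                            # <eta, f> as a Riemann sum
    eg.append(W @ g)
ef, eg = np.concatenate(ef), np.concatenate(eg)

print("empirical covariance:", round(float(np.cov(ef, eg)[0, 1]), 4))
print("<f, g>_{L_2}        :", round(float(np.sum(f * g) * dA), 4))
```

Both numbers agree up to the Monte Carlo error.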

Remark 2.25

By an approximation procedure, it is possible to test a white noise against many more functions than just test functions or Schwartz functions. For example, it is always possible to apply a Lévy white noise to elements of \(L_2({\mathbb {R}}^n)\) with compact support. In particular, this includes indicator functions \(\mathbb {1}_A\) for bounded Borel sets \(A\in {\mathcal {B}}({\mathbb {R}}^n)\), which is useful for the construction of a stochastic integral. The idea for the construction of such an integral goes back to [44] and was further refined in [27]. We also refer the reader to [12], in which the extension of the domain of definition is carried out in full detail. We will now briefly summarize the results we need in this work.

Definition 2.26

Let \(\eta \) be a Lévy white noise with triplet \((\gamma ,\sigma ^2,\nu )\) and let \(p\ge 0\).

  1. (a)

    The pth order Rajput–Rosiński exponent \(\Psi _p\) of \(\eta \) is defined by

    $$\begin{aligned} \Psi _p(\xi )&:=\bigg |\gamma \xi +\int _{{\mathbb {R}}{\setminus }\{0\}}x\xi (\mathbb {1}_{|x\xi |\le 1}-\mathbb {1}_{|x|\le 1})\,{\mathrm{d}}\nu (x)\bigg |+\sigma ^2\xi ^2\\&\quad +\int _{{\mathbb {R}}{\setminus }\{0\}} |x\xi |^p\mathbb {1}_{|x\xi |>1}+|x\xi |^2\mathbb {1}_{|x\xi |\le 1}\,{\mathrm{d}}\nu (x) \end{aligned}$$

    for \(\xi \in {\mathbb {R}}\).

  2. (b)

    We define the space \(L_p(\eta )\) by

    $$\begin{aligned} L_p(\eta ):=\bigg \{f\in L_0({\mathbb {R}}^n):\int _{{\mathbb {R}}^n}\Psi _p(f(x))\,{\mathrm{d}}x<\infty \bigg \} \end{aligned}$$

    and endow it with the metric

    $$\begin{aligned} d_{\Psi _p}(f,g):=\inf \bigg \{\lambda >0:\int _{{\mathbb {R}}^n}\Psi _p\big (\tfrac{f(x)-g(x)}{\lambda }\big )\,{\mathrm{d}}x<\lambda \bigg \}. \end{aligned}$$

    The elements of \(L_0(\eta )\) will be called \(\eta \)-integrable.
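As a simple illustration, consider the Gaussian triplet \((0,\sigma ^2,0)\) with \(\sigma ^2>0\): the drift and jump terms in Definition 2.26 vanish, so

$$\begin{aligned} \Psi _p(\xi )=\sigma ^2\xi ^2\quad \text {for every }p\ge 0,\qquad \text {and hence}\qquad L_p(\eta )=\bigg \{f\in L_0({\mathbb {R}}^n):\sigma ^2\int _{{\mathbb {R}}^n}f(x)^2\,{\mathrm{d}}x<\infty \bigg \}=L_2({\mathbb {R}}^n), \end{aligned}$$

in accordance with Remark 2.28 (a) below.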

Proposition 2.27

Let \(\eta \) be a Lévy white noise with triplet \((\gamma ,\sigma ^2,\nu )\) and \(p\ge 0\).

  1. (a)

    The space \(L_p(\eta )\) is a complete linear metric space.

  2. (b)

    The space of test functions \({\mathscr {D}}({\mathbb {R}}^n)\) is dense in \(L_0(\eta )\).

  3. (c)

    The Lévy white noise \(\eta \) extends to a continuous linear mapping

    $$\begin{aligned} \eta :L_p(\eta )\rightarrow L_p(\Omega ),\,f\mapsto \langle \eta ,f\rangle . \end{aligned}$$
  4. (d)

    Let \(f\in L_0(\eta )\). Then the characteristic function of \(\langle \eta ,f\rangle \) is again given by

    $$\begin{aligned} {\mathbb {E}}[e^{i\xi \langle \eta ,f\rangle }]=\exp \bigg (\int _{{\mathbb {R}}^n}\Psi (\xi f(x))\,{\mathrm{d}}x\bigg ). \end{aligned}$$

Proof

This is a collection of the statements given in [12, Proposition 3.5], [28, Chapter X, Theorem 2, Proposition 5 & Corollary 6] and [27, Theorem 2.7, Lemma 3.1 & Theorem 3.3]. \(\square \)

Remark 2.28

  1. (a)

    In the general case, it can be difficult to give a nice characterization of the space \(L_0(\eta )\). However, as already mentioned in Remark 2.25, elements of \(L_2({\mathbb {R}}^n)\) with compact support are always contained in \(L_0(\eta )\). Moreover, \({\mathscr {S}}({\mathbb {R}}^n)\) is contained in \(L_0(\eta )\) if the white noise \(\eta \) admits positive absolute moments, see Remark 2.22. We also refer the reader to [12, Table 1], which contains a list of examples. For instance, in the Gaussian case we have \(L_0(\eta )=L_2({\mathbb {R}}^n)\). The same holds if the Lévy triplet is given by \((0,\sigma ^2,\nu )\) with \(\nu \) being symmetric and having finite variance, see [12, Proposition 4.10]. If the Lévy triplet is given by \((\gamma ,0,0)\) (\(\gamma \ne 0\)), then we have \(L_0(\eta )=L_1({\mathbb {R}}^n)\), and for \((\gamma ,\sigma ^2,0)\) (\(\gamma \ne 0\), \(\sigma ^2>0\)) it is given by \(L_1({\mathbb {R}}^n)\cap L_2({\mathbb {R}}^n)\).

  2. (b)

    If one wants to work with paths of a Lévy white noise, then a characterization of the Besov regularity of these paths might be more useful than Proposition 2.27 in certain situations. Fortunately, a lot of nice work has already been done in this direction. For example, local regularity of Gaussian white noise has been studied in [45]. In [15], similar results have been obtained for Lévy white noise. Global smoothness properties of Lévy white noise in weighted spaces have been established in [4, 14]. Results as in the latter two references will be important for the derivation of mixed smoothness properties. But before we can formulate them, we first need to introduce the Blumenthal–Getoor indices and the moment index of a Lévy white noise.

Definition 2.29

Let \(\eta \) be a Lévy white noise with Lévy exponent \(\Psi \). Then, the Blumenthal–Getoor indices are defined by

$$\begin{aligned} \beta _{\infty }&:=\inf \bigg \{p>0:\lim _{|\xi |\rightarrow \infty }\frac{|\Psi (\xi )|}{|\xi |^p}=0\bigg \},\\ \underline{\beta }{}_{\infty }&:=\inf \bigg \{p>0:\liminf _{|\xi |\rightarrow \infty }\frac{|\Psi (\xi )|}{|\xi |^p}=0\bigg \}. \end{aligned}$$

In addition, the moment index is defined by

$$\begin{aligned} p_{\max }:=\sup \{p>0:{\mathbb {E}}[|\langle \eta ,\mathbb {1}_{[0,1]^n}\rangle |^p]<\infty \}. \end{aligned}$$

In general, it holds that \(0\le \underline{\beta }{}_{\infty }\le \beta _{\infty } \le 2\).
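Two standard examples, which are not spelled out here but help to read the following theorem: for a Gaussian white noise with triplet \((0,\sigma ^2,0)\), \(\sigma ^2>0\), and for a symmetric \(\alpha \)-stable white noise, \(\alpha \in (0,2)\), one has

$$\begin{aligned} |\Psi (\xi )|=\frac{\sigma ^2\xi ^2}{2}\;\Rightarrow \;\underline{\beta }{}_{\infty }=\beta _{\infty }=2,\;p_{\max }=\infty ,\qquad \text {and}\qquad \Psi (\xi )=-|\xi |^{\alpha }\;\Rightarrow \;\underline{\beta }{}_{\infty }=\beta _{\infty }=\alpha ,\;p_{\max }=\alpha , \end{aligned}$$

respectively.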

Theorem 2.30

Let \(\eta \) be a Lévy white noise with Lévy triplet \((\gamma ,\sigma ^2,\nu )\), Blumenthal–Getoor indices \(\beta _{\infty }, \underline{\beta }{}_{\infty }\) and moment index \(p_{\max }\). Let further \(p\in (0,\infty )\).

  1. (a)

    Gaussian case: Suppose that \(\nu =0\). Then it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))&=1,\quad \text {if }s<-\tfrac{n}{2} \text { and }\rho <-n,\\ {\mathbb {P}}(\eta \notin B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))&=1,\quad \text {if }s\ge -\tfrac{n}{2}\text { or }\rho \ge -n. \end{aligned}$$
  2. (b)

    Compound Poisson case: Suppose that \(\nu \) is a finite measure on \({\mathcal {B}}({\mathbb {R}}^n{\setminus }\{0\})\) and that \(\sigma ^2=0\). Then it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))&=1,\quad \text {if }s<n\bigg (\tfrac{1}{p}-1\bigg ) \text { and }\rho <-\tfrac{np}{\min \{p,p_{\max }\}},\\ {\mathbb {P}}(\eta \notin B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))&=1,\quad \text {if }s\ge n\bigg (\tfrac{1}{p}-1\bigg )\text { or }\rho >-\tfrac{np}{\min \{p,p_{\max }\}}. \end{aligned}$$
  3. (c)

    General non-Gaussian case: Suppose that \(\nu \ne 0\) and that \(p\le 2\) or \(p\in 2{\mathbb {N}}\). Then it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))&=1,\quad \text {if }s<n\bigg (\tfrac{1}{\max \{p,\beta _{\infty }\}}-1\bigg )\text { and }\rho <-\tfrac{np}{\min \{p,p_{\max }\}},\\ {\mathbb {P}}(\eta \notin B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))&=1,\quad \text {if }s>n\bigg (\tfrac{1}{\max \{p,\underline{\beta }{}_{\infty }\}}-1\bigg )\text { or }\rho >-\tfrac{np}{\min \{p,p_{\max }\}}. \end{aligned}$$

Proof

This is a collection of Propositions 6, 9 and 12 from [4]. Note that the authors of [4] use a different convention concerning the notation of weighted Besov spaces, so that the weight parameters in our formulation are multiplied by p compared to the formulation in [4]. \(\square \)

Remark 2.31

  1. (a)

    In Theorem 2.30 (c) one can weaken the restriction on p using Theorem 2.17 as follows: If there is \(N\in 2{\mathbb {N}}\) such that \(p_{\max }\in (N,N+2)\) and if \(p\in (1,\infty ){\setminus }(N,N+2)\), then it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))=1,\quad \text {if }s<\tfrac{n}{\max \{p,\beta _{\infty }\}}-n\text { and }\rho <-\tfrac{np}{\min \{p,p_{\max }\}}. \end{aligned}$$

    If \(p\in (N,N+2)\), then let \(\theta \in (0,1)\) such that \(1/p=(1-\theta )/N+\theta /(N+2)\). In this case, it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))=1,\quad \text {if }s<\tfrac{n}{\max \{p,\beta _{\infty }\}}-n\text { and }\rho <-n\bigg (\tfrac{(1-\theta )p_{\max }+\theta p}{p_{\max }}\bigg ). \end{aligned}$$

    If there is no such N, i.e. if \(p_{\max }\in 2{\mathbb {N}}\), then

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))=1,\quad \text {if }s<\tfrac{n}{\max \{p,\beta _{\infty }\}}-n\text { and }\rho <-\tfrac{np}{\min \{p,p_{\max }\}}. \end{aligned}$$

    without restriction on p.

  2. (b)

    If one restricts the white noise \(\eta \) to a bounded set, for example \([0,T]^n\) for some \(T>0\), then one can also drop the conditions on \(\rho \). More precisely, we have the following: In the Gaussian case, it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}([0,T]^n))&=1,\quad \text {if }s<-\tfrac{n}{2},\\ {\mathbb {P}}(\eta \notin B^{s}_{p,p}([0,T]^n))&=1,\quad \text {if }s\ge -\tfrac{n}{2}. \end{aligned}$$

    In the compound Poisson case it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}([0,T]^n))&=1,\quad \text {if }s<n\bigg (\tfrac{1}{p}-1\bigg ),\\ {\mathbb {P}}(\eta \notin B^{s}_{p,p}([0,T]^n))&=1,\quad \text {if }s\ge n\bigg (\tfrac{1}{p}-1\bigg ). \end{aligned}$$

    In the general non-Gaussian case with \(p\in (1,\infty )\) it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in B^{s}_{p,p}([0,T]^n))&=1,\quad \text {if }s<n\bigg (\tfrac{1}{\max \{p,\beta _{\infty }\}}-1\bigg ),\\ {\mathbb {P}}(\eta \notin B^{s}_{p,p}([0,T]^n))&=1,\quad \text {if }s\ge n\bigg (\tfrac{1}{\max \{p,\underline{\beta }{}_{\infty }\}}-1\bigg ). \end{aligned}$$

2.4 Lévy processes with values in a Banach space

We briefly derive some results on the regularity of sample paths of Lévy processes with values in Banach spaces. While they are most probably far from optimal, they allow us to apply our methods not only to Gaussian white noise but also to general Lévy white noise. Although our regularity results for Lévy white noise will not be sharp, we develop our methods in such a way that the results can be improved directly once properties like the ones in [6, Section 5.5] have been derived for Lévy processes in Banach spaces.

Definition 2.32

Let \(T>0\). As in the scalar-valued case, a stochastic process \((L_t)_{t\in [0,T]}\) with values in a Banach space E is called a Lévy process if the following holds:

  1. (i)

    \(L_0=0\),

  2. (ii)

    \((L_t)_{t\in [0,T]}\) has independent increments, i.e. for all \(N\in {\mathbb {N}}\) and all \(0\le t_1<\cdots <t_N\le T\) it holds that \(L_{t_2}-L_{t_1},\ldots , L_{t_N}-L_{t_{N-1}}\) are independent.

  3. (iii)

    \((L_t)_{t\in [0,T]}\) has stationary increments, i.e. the law of \(L_t-L_s\) only depends on \(t-s\).

  4. (iv)

    \((L_t)_{t\in [0,T]}\) is continuous in probability.

Proposition 2.33

Let \(\varepsilon >0\), \(p\in (1,\infty )\) and let \((L_t)_{t\in [0,T]}\) be a Lévy process with values in a Banach space E. Then \((L_t)_{t\in [0,T]}\) has a modification with sample paths in \(B^{0}_{p,p}([0,T];\;E)\) if \(p\ge 2\) and in \(B^{-\varepsilon }_{p,p}([0,T];\;E)\) if \(p<2\).

Proof

As a Lévy process, \((L_t)_{t\in [0,T]}\) has a modification such that the sample paths are càdlàg, see [26, Theorem 4.3]. In particular, the sample paths are jump continuous, i.e. they are contained in the closure of simple functions from [0, T] to E with respect to the \(\Vert \cdot \Vert _{L_{\infty }([0,T])}\)-norm. Therefore, the sample paths are elements of

$$\begin{aligned} L_{\infty }([0,T])\hookrightarrow L_p([0,T])\hookrightarrow B^0_{p,p}([0,T]) \end{aligned}$$

where the latter embedding only holds for \(p\ge 2\) and can for example be found in [5, Theorem 6.4.4]. If \(p\in (1,2)\), then the embedding \(L_p([0,T])\hookrightarrow B^{-\varepsilon }_{p,p}([0,T])\) holds. \(\square \)

Proposition 2.33 is surely not sharp, but simple and good enough for our purposes. Nonetheless, there are already much sharper results for E-valued Brownian motions.

Theorem 2.34

Let \(p,q\in [1,\infty )\) and let \((W_t)_{t\in [0,T]}\) be a Brownian motion with values in the Banach space E, i.e. \((W_t)_{t\in [0,T]}\) is an E-valued Lévy process such that \((\lambda (W_t))_{t\ge 0}\) is a Brownian motion for all \(\lambda \in E'\). Then the sample paths of \((W_t)_{t\in [0,T]}\) are contained in \(B^{\frac{1}{2}}_{p,\infty }([0,T];\;E)\) almost surely. Moreover, almost surely they are not contained in \(B^{\frac{1}{2}}_{p,q}([0,T];\;E)\).

Proof

This is one of the statements of [21, Theorem 4.1]. \(\square \)
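A scalar numerical illustration (with \(E={\mathbb {R}}\) and the difference characterization of Besov spaces standing in for Definition 2.6): for a simulated Brownian path on [0, 1], the ratios \(h^{-1/2}\Vert W(\cdot +h)-W\Vert _{L_p([0,1-h])}\) stay of order one along dyadic h, which reflects the \(B^{\frac{1}{2}}_{p,\infty }\)-regularity of the sample paths; the fact that these ratios do not tend to zero is in line with the second half of the theorem.

```python
import numpy as np

# Scalar sanity check (illustration only, E = R) related to Theorem 2.34: along dyadic
# h the ratio h^{-1/2} * ||W(.+h) - W||_{L_p([0,1-h])} of a simulated Brownian path
# stays of order one.

rng = np.random.default_rng(2)
N = 2 ** 18                                                  # grid points on [0, 1]
dt = 1.0 / N
W = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(N - 1))))

p = 4.0
for j in range(2, 15):
    h = 2.0 ** (-j)
    lag = int(h * N)                                          # h is a multiple of dt here
    incr = W[lag:] - W[:-lag]
    lp_norm = np.mean(np.abs(incr) ** p) ** (1.0 / p)         # ~ L_p-norm of the increments
    print(f"h = 2^-{j:2d}:  h^(-1/2) * ||Delta_h W||_p ~ {lp_norm / np.sqrt(h):.3f}")
```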

3 Regularity properties in spaces of mixed smoothness

Lemma 3.1

Let \(n=n_1+n_2\) with \(n_1,n_2\in {\mathbb {N}}\) and let \(s,t\in {\mathbb {R}}^{n_1}\), \(s\le t\). Let \(\eta _n\) be a Lévy white noise in \({\mathbb {R}}^n\) with Lévy triplet \((\gamma ,\sigma ^2,\nu )\) and let \(\eta _{n_2}\) be a Lévy white noise in \({\mathbb {R}}^{n_2}\) with the same Lévy triplet. Then, the mapping

$$\begin{aligned} L_0(\eta _{n_2})\rightarrow L_0(\eta _{n}),\,\varphi \mapsto \mathbb {1}_{(s,t]}\otimes \varphi \end{aligned}$$

is well-defined and continuous.

Proof

Note that for all \(\lambda >0\) and all \(\varphi \in L_0(\eta _{n_2})\), we have

$$\begin{aligned} \int _{{\mathbb {R}}^{n_1}\times {\mathbb {R}}^{n_2}}\Psi _0\bigg (\frac{\mathbb {1}_{(s,t]}(r_1) \varphi (r_2)}{\lambda }\bigg )\,{\mathrm{d}}(r_1,r_2)={\text {Leb}}_{n_1}((s,t]) \int _{{\mathbb {R}}^{n_2}}\Psi _0\bigg (\frac{\varphi (r_2)}{\lambda }\bigg )\,{\mathrm{d}}r_2. \end{aligned}$$

Thus, \( \varphi \in L_0(\eta _{n_2})\) implies \(\mathbb {1}_{(s,t]}\otimes \varphi \in L_0(\eta _{n})\). Moreover, if \((\varphi _k)_{k\in {\mathbb {N}}}\subset L_0(\eta _{n_2})\) converges to \(\varphi \), then for all \(\varepsilon >0\) we have

$$\begin{aligned} \int _{{\mathbb {R}}^{n_2}}\Psi _0\bigg (\frac{\varphi (r_2)-\varphi _k(r_2)}{\varepsilon }\bigg )\,{\mathrm{d}}r_2<\varepsilon \end{aligned}$$

for \(k\in {\mathbb {N}}\) large enough. If \({\text {Leb}}_{n_1}((s,t])\le 1\) this implies

$$\begin{aligned} \int _{{\mathbb {R}}^{n_1}\times {\mathbb {R}}^{n_2}}\Psi _0\bigg (\frac{\mathbb {1}_{(s,t]}(r_1)(\varphi (r_2)-\varphi _k(r_2))}{\varepsilon }\bigg )\,{\mathrm{d}}(r_1,r_2)<\varepsilon \end{aligned}$$

so that the continuity follows. If \({\text {Leb}}_{n_1}((s,t])>1\), then we write \({\widetilde{\varepsilon }}={\text {Leb}}_{n_1}((s,t])\varepsilon \) and obtain

$$\begin{aligned}&\int _{{\mathbb {R}}^{n_1}\times {\mathbb {R}}^{n_2}}\Psi _0\bigg (\frac{\mathbb {1}_{(s,t]}(r_1)(\varphi (r_2)-\varphi _k(r_2))}{{\widetilde{\varepsilon }}}\bigg )\,{\mathrm{d}}(r_1,r_2)\\&\quad \le \int _{{\mathbb {R}}^{n_1}\times {\mathbb {R}}^{n_2}}\Psi _0\bigg (\frac{\mathbb {1}_{(s,t]}(r_1)(\varphi (r_2)-\varphi _k(r_2))}{\varepsilon }\bigg )\,{\mathrm{d}}(r_1,r_2)\\&\quad <{\text {Leb}}_{n_1}((s,t])\varepsilon ={\widetilde{\varepsilon }} \end{aligned}$$

for k large enough. Again, the continuity follows. \(\square \)

Lemma 3.2

Let \(s,t\in {\mathbb {R}}\) with \(s< t\). Let \(\eta \) be a Lévy white noise in \({\mathbb {R}}^n\) with Lévy triplet \((\gamma ,\sigma ^2,\nu )\). Then the mapping

$$\begin{aligned} \varphi \mapsto \langle \eta (\omega ), \mathbb {1}_{(s,t]}\otimes \varphi \rangle \end{aligned}$$

is an element of \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1})\) for almost all \(\omega \in \Omega \).

Proof

First, we note that \(\mathbb {1}_{(s,t]}\in B^{r}_{p,p}({\mathbb {R}})\) for all \(p\in (1,\infty )\), \(r\in (-\infty ,\frac{1}{p})\). One way to see this is to use the equivalent norm

$$\begin{aligned} \Vert \mathbb {1}_{(s,t]}\Vert _{B^{r}_{p,p}({\mathbb {R}})}\simeq \Vert \mathbb {1}_{(s,t]}\Vert _{L_p({\mathbb {R}})}+\left( \int _{|h|\le 1}\int _{{\mathbb {R}}} |h|^{-1-rp} |\mathbb {1}_{(s,t]}(x+h)-\mathbb {1}_{(s,t]}(x)|^p\,{\mathrm{d}}x\,{\mathrm{d}}h\right) ^{1/p}, \end{aligned}$$

where \(r\in (0,1)\); for \(r\le 0\), the claim then follows from the elementary embedding \(B^{r'}_{p,p}({\mathbb {R}})\hookrightarrow B^{r}_{p,p}({\mathbb {R}})\) for \(r'\ge r\). For this equivalence, we refer to [40, Section 2.2.2 and Section 2.5.12]. Since

$$\begin{aligned} \int _{{\mathbb {R}}} |\mathbb {1}_{(s,t]}(x+h)-\mathbb {1}_{(s,t]}(x)|^p\,{\mathrm{d}}x=\int _{{\mathbb {R}}} |\mathbb {1}_{(s,t]}(x+h)-\mathbb {1}_{(s,t]}(x)|\,{\mathrm{d}}x=2|h|, \end{aligned}$$

for \(|h|\le t-s\), we only have to check for which parameters we have

$$\begin{aligned} \int _{|h|\le 1} |h|^{-rp}\,{\mathrm{d}}h<\infty . \end{aligned}$$

This is the case if and only if \(r<\frac{1}{p}\). If we now take \(p=1+\varepsilon \) for \(\varepsilon >0\) small, then we have that

$$\begin{aligned}&B^{r}_{p,p}({\mathbb {R}})\otimes _{\alpha _p} S^{(r,\ldots ,r)}_{p}H_p({\mathbb {R}}^{n-1}) {\mathop {\hookrightarrow }\limits ^{(2.4)}} H^r_p({\mathbb {R}}) \otimes _{\alpha _p} S^{(r,\ldots ,r)}_{p}H_p({\mathbb {R}}^{n-1})\\&\quad {\mathop {\cong }\limits ^{(2.6)}}S^{(r,\ldots ,r)}_pH({\mathbb {R}}^{n}){\mathop {\hookrightarrow }\limits ^{(2.3)}} H^r_p({\mathbb {R}}^{n}){\mathop {\hookrightarrow }\limits ^{(2.5)}} B^{r-\varepsilon }_{p,p}({\mathbb {R}}^n), \end{aligned}$$

so that the mapping

$$\begin{aligned} {\mathscr {S}}({\mathbb {R}}^{n-1})\rightarrow B^{r-\varepsilon }_{1+\varepsilon ,1+\varepsilon }({\mathbb {R}}^n),\,\varphi \mapsto (\mathbb {1}_{(s,t]}\otimes \varphi )\langle \,\cdot \,\rangle ^{N/p} \end{aligned}$$
(7)

is continuous for arbitrary \(N>0\), where \(\langle \xi \rangle ^{N/p}\) is taken for \(\xi \in {\mathbb {R}}^n\). We take the detour via the Bessel potential scale since the embedding (3) seems not to be available in the literature for Besov spaces. For the Bessel potential scale this holds as \(S^{0,\ldots ,0}_pH({\mathbb {R}}^n)=L_p({\mathbb {R}}^n)\) so that Fourier multiplier techniques are directly available, see the references given for (3).

Combining (7) with Proposition 2.16 shows that also

$$\begin{aligned} {\mathscr {S}}({\mathbb {R}}^{n-1})\rightarrow B^{r-\varepsilon }_{1+\varepsilon ,1+\varepsilon }({\mathbb {R}}^n,\langle \cdot \rangle ^{N}),\,\varphi \mapsto \mathbb {1}_{(s,t]}\otimes \varphi \end{aligned}$$
(8)

is continuous. Now, we combine (8) with Theorem 2.30. Since the Blumenthal–Getoor indices are not larger than 2, it holds that

$$\begin{aligned} {\mathbb {P}}(\eta \in B^{-3/4}_{1+\varepsilon ,1+\varepsilon }({\mathbb {R}}^n,\langle \cdot \rangle ^{\rho }))=1 \end{aligned}$$

for \(\varepsilon \in (0,1)\) and \(\rho \) negative enough. Therefore, taking \(r-\varepsilon >3/4\) and N large enough in (8) shows that

$$\begin{aligned} \lim _{k\rightarrow \infty } \langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _k\rangle =0 \end{aligned}$$

for almost all \(\omega \in \Omega \) and all sequences \((\varphi _{k})_{k\in {\mathbb {N}}}\subset {\mathscr {S}}({\mathbb {R}}^{n-1})\) with \(\lim _{k\rightarrow \infty }\varphi _k=0\) in \({\mathscr {S}}({\mathbb {R}}^{n-1})\). Hence, the mapping \(\varphi \mapsto \langle \eta (\omega ), \mathbb {1}_{(s,t]}\otimes \varphi \rangle \) is indeed a tempered distribution. \(\square \)
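The key elementary computation in the proof of Lemma 3.2, namely that \(\mathbb {1}_{(s,t]}\in B^{r}_{p,p}({\mathbb {R}})\) exactly when \(r<\frac{1}{p}\), can be checked numerically. The following Python sketch verifies \(\int _{{\mathbb {R}}}|\mathbb {1}_{(s,t]}(x+h)-\mathbb {1}_{(s,t]}(x)|^p\,{\mathrm{d}}x=2|h|\) on a grid and then evaluates the truncated seminorm integral \(2\int _{\delta }^{1}h^{-rp}\,{\mathrm{d}}h\), which stabilizes for \(r<\frac{1}{p}\) and blows up otherwise; all numerical parameters are illustrative.

import numpy as np

s, t, p = 0.0, 1.0, 1.25                    # arbitrary interval and integrability exponent

def indicator(x):
    return ((x > s) & (x <= t)).astype(float)

# check int_R |1_(s,t](x + h) - 1_(s,t](x)|^p dx = 2|h| (the integrand is 0/1-valued)
x = np.linspace(-1.0, 3.0, 400001)
dx = x[1] - x[0]
for h in (0.01, 0.1, 0.3):
    val = np.sum(np.abs(indicator(x + h) - indicator(x))**p) * dx
    print(f"h = {h}: grid integral = {val:.4f}, 2|h| = {2 * h:.4f}")

# truncated seminorm integral 2 * int_delta^1 h^{-rp} dh: bounded iff r < 1/p (= 0.8 here)
for r in (0.4, 0.9):
    for delta in (1e-2, 1e-4, 1e-6):
        h = np.geomspace(delta, 1.0, 200001)
        val = 2 * np.sum(h[:-1]**(-r * p) * np.diff(h))
        print(f"r = {r}, delta = {delta:.0e}: truncated integral = {val:.2f}")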

Proposition 3.3

Let \(n=n_1+n_2\) with \(n_1,n_2\in {\mathbb {N}}\) and \(s,t\in {\mathbb {R}}^{n_1}\), \(s\le t\). Let further \(\eta \) be a Lévy white noise on \({\mathbb {R}}^n\) with Lévy triplet \((\gamma ,\sigma ^2,\nu )\) on the complete probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\). Then the mapping

$$\begin{aligned} \eta _{(s,t]}:(\Omega ,{\mathcal {F}},{\mathbb {P}})\rightarrow ({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n_2}),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n_2}))),\,\omega \mapsto [\varphi \mapsto \langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi \rangle ] \end{aligned}$$

is a modification of a Lévy white noise with Lévy triplet \({\text {Leb}}_{n_1}((s,t])(\gamma ,\sigma ^2,\nu )\).

Proof

It suffices to show the assertion for \(n_1=1\). The general assertion then follows by iteration. Therefore, let \(n_1=1\). Then, Lemma 3.2 shows that \(\eta _{(s,t]}(\omega )\) is indeed a tempered distribution almost surely. Thus, after changing it on a set of measure 0, we can assume that it is an \({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1})\)-valued mapping. For the measurability, it suffices to show that the preimages of cylindrical sets of the form

$$\begin{aligned} C:=\{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}):(\langle u,\varphi _1\rangle ,\ldots , \langle u,\varphi _N\rangle )\in B\} \end{aligned}$$

for some \(N\in {\mathbb {N}}\), some \(\varphi _1,\ldots ,\varphi _N\in {\mathscr {S}}({\mathbb {R}}^{n-1})\) and some open set \(B\subset {\mathbb {R}}^N\) under \(\eta _{(s,t]}\) are elements of \({\mathcal {F}}\). So let C be such a set. By Proposition 2.27 and Lemma 3.1, we can take sequences \((\psi _{j,k})_{k\in {\mathbb {N}}}\subset {\mathscr {D}}({\mathbb {R}}^n)\), \(j=1,\ldots ,N\) such that \(\psi _{j,k}\rightarrow \mathbb {1}_{(s,t]}\otimes \varphi _j\) in \(L_0(\eta )\) as \(k\rightarrow \infty \). Hence, we have that

$$\begin{aligned} \langle \eta ,\psi _{j,k}\rangle \rightarrow \langle \eta ,\mathbb {1}_{(s,t]}\otimes \varphi _j\rangle \quad \text {in probability as }k\rightarrow \infty . \end{aligned}$$

By taking subsequences, we may assume without loss of generality that the convergence also holds almost surely. Let \({\widetilde{K}}\in {\mathcal {F}}\) be the set on which there is no pointwise convergence and

$$\begin{aligned} K:={\widetilde{K}}\cap \eta _{(s,t]}^{-1}(C). \end{aligned}$$

Since the probability space is complete, it follows that \(K\in {\mathcal {F}}\). Now, we define

$$\begin{aligned} B_l&:=\bigg \{x\in B:{\text {dist}}(x,{\mathbb {R}}^N{\setminus } B)>\tfrac{1}{l}\bigg \},\\ C_l&:=\{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}):(\langle u,\varphi _1\rangle ,\ldots , \langle u,\varphi _N\rangle )\in B_l\},\\ {\widetilde{C}}_l&:=\{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}):(\langle u,\varphi _1\rangle ,\ldots , \langle u,\varphi _N\rangle )\in \overline{B_l}\} \end{aligned}$$

for \(l\in {\mathbb {N}}\). Note that we have

$$\begin{aligned} K\subset \eta _{(s,t]}^{-1}(C)=\bigcup _{l\in {\mathbb {N}}}\eta _{(s,t]}^{-1}(C_l)=\bigcup _{l\in {\mathbb {N}}}\eta _{(s,t]}^{-1}({\widetilde{C}}_l). \end{aligned}$$
(9)

Let further

$$\begin{aligned} A_{k,l}:=(\langle \eta ,\psi _{1,k}\rangle ,\ldots ,\langle \eta ,\psi _{N,k}\rangle )^{-1}(B_l){\setminus } {\widetilde{K}} \subset \Omega . \end{aligned}$$

Note that \(A_{k,l}\in {\mathcal {F}}\) for all \(k,l\in {\mathbb {N}}\) since \(\eta \) is a generalized random process. By construction, we have that \(\liminf _{k\rightarrow \infty } A_{k,l}\in {\mathcal {F}}\) and that it consists of all \(\omega \in \Omega \) such that

$$\begin{aligned} (\langle \eta (\omega ),\psi _{1,k}\rangle ,\ldots ,\langle \eta (\omega ),\psi _{N,k}\rangle )\rightarrow (\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _1\rangle ,\ldots ,\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _N\rangle ) \end{aligned}$$

as \(k\rightarrow \infty \) and such that \( (\langle \eta (\omega ),\psi _{1,k}\rangle ,\ldots ,\langle \eta (\omega ),\psi _{N,k}\rangle )\in B_l\) for \(k\in {\mathbb {N}}\) large enough. In particular, for \(\omega \in \liminf _{k\rightarrow \infty } A_{k,l}\) it holds that \((\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _1\rangle ,\ldots ,\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _N\rangle )\in \overline{B}_l\) and thus

$$\begin{aligned} \liminf _{k\rightarrow \infty } A_{k,l}\subset (\langle \eta _{(s,t]},\varphi _1\rangle ,\ldots ,\langle \eta _{(s,t]},\varphi _N\rangle )^{-1}(\overline{B}_l)=\eta _{(s,t]}^{-1}({\widetilde{C}}_l). \end{aligned}$$

Together with (9), this yields

$$\begin{aligned} \bigcup _{l\in {\mathbb {N}}}\liminf _{k\rightarrow \infty } A_{k,l}\cup K\subset \eta _{(s,t]}^{-1}(C). \end{aligned}$$
(10)

For the converse inclusion, let \(\omega \in \eta _{(s,t]}^{-1}(C)\) so that

$$\begin{aligned} (\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _1\rangle ,\ldots ,\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _N\rangle )\in B. \end{aligned}$$

Since B is open, there is an \(l\in {\mathbb {N}}\) such that

$$\begin{aligned} (\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _1\rangle ,\ldots ,\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _N\rangle )\in B_l. \end{aligned}$$

If \(\omega \notin {\widetilde{K}}\), then

$$\begin{aligned} (\langle \eta (\omega ),\psi _{1,k}\rangle ,\ldots ,\langle \eta (\omega ),\psi _{N,k}\rangle )\rightarrow (\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _1\rangle ,\ldots ,\langle \eta (\omega ),\mathbb {1}_{(s,t]}\otimes \varphi _N\rangle ) \end{aligned}$$

as \(k\rightarrow \infty \) and since \(B_l\) is open, it holds that

$$\begin{aligned} (\langle \eta (\omega ),\psi _{1,k}\rangle ,\ldots ,\langle \eta (\omega ),\psi _{N,k}\rangle )\in B_l \end{aligned}$$

for k large enough. Hence, it follows that \(\omega \in A_{k,l}\) for k large enough and therefore \(\omega \in \liminf _{k\rightarrow \infty } A_{k,l}\). If in turn \(\omega \in {\widetilde{K}}\), then \(\omega \in {\widetilde{K}}\cap \eta _{(s,t]}^{-1}(C)=K\). Hence, it follows that

$$\begin{aligned} \eta _{(s,t]}^{-1}(C)\subset \bigcup _{l\in {\mathbb {N}}}\liminf _{k\rightarrow \infty } A_{k,l}\cup K \end{aligned}$$
(11)

Together with (10) it now follows that

$$\begin{aligned} \eta _{(s,t]}^{-1}(C)= \bigcup _{l\in {\mathbb {N}}}\liminf _{k\rightarrow \infty } A_{k,l}\cup K\in {\mathcal {F}} \end{aligned}$$

so that \(\eta _{(s,t]}\) is indeed a generalized random process. Finally, we show that \(\eta _{(s,t]}\) is even a Lévy white noise with Lévy triplet \({\text {Leb}}_{n_1}((s,t])(\gamma ,\sigma ^2,\nu )\) by simply computing its characteristic functional: Let \(\Psi \) be the Lévy exponent of \(\eta \). Then, we obtain

$$\begin{aligned}&\widehat{{\mathbb {P}}}_{\langle \eta ,\mathbb {1}_{(s,t]}\otimes (\,\cdot \,)\rangle }(\varphi )\\&\quad ={\mathbb {E}}[e^{i\langle \eta ,\mathbb {1}_{(s,t]}\otimes \varphi \rangle }]=\exp \bigg (\int _{{\mathbb {R}}^n}\Psi ([\mathbb {1}_{(s,t]}\otimes \varphi ](r))\,{\mathrm{d}}r\bigg )\\&\quad =\exp \bigg (\int _{{\mathbb {R}}^n} i\gamma [\mathbb {1}_{(s,t]}\otimes \varphi ](r)-\frac{\sigma ^2[\mathbb {1}_{(s,t]}^2\otimes \varphi ^2](r)}{2}\\&\qquad +\int _{{\mathbb {R}}{\setminus }\{0\}} \Big (e^{ix[\mathbb {1}_{(s,t]}\otimes \varphi ](r)}-1-ix[\mathbb {1}_{(s,t]}\otimes \varphi ](r) \mathbb {1}_{|x|\le 1}\Big )\,{\mathrm{d}}\nu (x)\,{\mathrm{d}}r\bigg )\\&\quad =\exp \bigg ({\text {Leb}}_{1}((s,t])\int _{{\mathbb {R}}^{n-1}}i\gamma \varphi (r)-\frac{\sigma ^2\varphi ^2(r)}{2}\\&\qquad +\int _{{\mathbb {R}}{\setminus }\{0\}} \Big (e^{ix \varphi (r)}-1-ix\varphi (r) \mathbb {1}_{|x|\le 1}\Big )\,{\mathrm{d}}\nu (x)\,{\mathrm{d}}r\bigg )\\&\quad =\exp \bigg ({\text {Leb}}_{1}((s,t])\int _{{\mathbb {R}}^{n-1}}\Psi (\varphi (r))\,{\mathrm{d}}r\bigg ). \end{aligned}$$

Altogether, we obtain the assertion. \(\square \)
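In the Gaussian case the statement of Proposition 3.3 can be checked by a small Monte Carlo experiment: for the triplet (0, 1, 0) the pairing \(\langle \eta ,\mathbb {1}_{(s,t]}\otimes \varphi \rangle \) should be centered Gaussian with variance \({\text {Leb}}_1((s,t])\Vert \varphi \Vert _{L_2}^2\), in accordance with the rescaled triplet. The Python sketch below uses a crude cell-wise discretization of the white noise on \([0,1]^2\); the grid size, the test function and the sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)
m = 100                                     # grid points per direction on [0, 1]^2
dx = 1.0 / m
grid = (np.arange(m) + 0.5) * dx

s, t = 0.2, 0.7                             # arbitrary interval in the first variable
phi = np.exp(-10.0 * (grid - 0.5)**2)       # arbitrary test function in the second variable
f = np.outer(((grid > s) & (grid <= t)).astype(float), phi)   # f = 1_{(s,t]} tensor phi

n_samples = 2000
# <eta, f> is approximated by sum over cells of f * xi * dx with xi i.i.d. N(0, 1)
pairings = np.array([np.sum(rng.normal(size=(m, m)) * f) * dx for _ in range(n_samples)])

print("empirical variance        :", pairings.var())
print("Leb((s,t]) * ||phi||_L2^2 :", (t - s) * np.sum(phi**2) * dx)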

Lemma 3.4

Let \(T>0\), \(\overline{s}=(s_1,\ldots ,s_n)\in {\mathbb {R}}^n\), \(s,\rho \in {\mathbb {R}}\), \(p,q\in (1,\infty )\) and let \(p',q'\in (1,\infty )\) be the conjugate Hölder indices.

  1. (a)

    There is a sequence \((\psi _k)_{k\in {\mathbb {N}}}\subset {\mathscr {S}}({\mathbb {R}}^n)\) with \(\Vert \psi _k\Vert _{B^{-s}_{p',q'}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho (1-p')})}=1\) such that

    $$\begin{aligned} \Vert u\Vert _{B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })}=\sup _{k\in {\mathbb {N}}}|\langle u,\psi _k\rangle | \end{aligned}$$

    for all \(u\in B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\).

  2. (b)

    There is a sequence \((\psi _k)_{k\in {\mathbb {N}}}\subset {\mathscr {S}}_0([0,T]^n)\) with \(\Vert \psi _k\Vert _{S^{-\overline{s}}_{p',p'}B([0,T]^n)}=1\) such that

    $$\begin{aligned} \Vert u\Vert _{S^{\overline{s}}_{p,p}B([0,T]^n)}=\sup _{k\in {\mathbb {N}}}|\langle u,\psi _k\rangle | \end{aligned}$$

    for all \(u\in S^{\overline{s}}_{p,p}B([0,T]^n)\).

  3. (c)

    The Borel \(\sigma \)-field of \(B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) is contained in \({\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n))\).

  4. (d)

    The Borel \(\sigma \)-field of \(S^{\overline{s}}_{p,p}B([0,T]^n)\) is contained in \({\mathcal {B}}_c({\mathscr {S\,}}^{\prime}([0,T]^n))\).

Proof

  1. (a)

    Since \(B^{s}_{p,q}({\mathbb {R}}^n)\) is separable (see for example [40, Section 2.5.5, Remark 1]) and since the spaces \(B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) and \(B^{s}_{p,q}({\mathbb {R}}^n)\) are isomorphic by Proposition 2.16, it follows that also \(B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) is separable. Hence, it follows from [22, Proposition B.1.10] together with Proposition 2.12 that there is a sequence \((\varphi _k)_{k\in {\mathbb {N}}}\subset B^{-s}_{p',q'}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho (1-p')})\) with \(\Vert \varphi _k\Vert _{B^{-s}_{p',q'}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho (1-p')})}=1\) such that

    $$\begin{aligned} \Vert u \Vert _{B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })}=\sup _{k\in {\mathbb {N}}}|\langle u,\varphi _k\rangle |. \end{aligned}$$

    Since \(f\mapsto \langle \,\cdot \,\rangle ^{-\rho /p}f\) leaves \({\mathscr {S}}({\mathbb {R}}^n)\) invariant and since it is an isomorphism between \(B^{-s}_{p',q'}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho (1-p')})\) and \(B^{-s}_{p',q'}({\mathbb {R}}^n)\), it follows from the density of \({\mathscr {S}}({\mathbb {R}}^n)\) in \(B^{-s}_{p',q'}({\mathbb {R}}^n)\) (see for example [40, Section 2.3.3]) that we have the dense embedding

    $$\begin{aligned} {\mathscr {S}}({\mathbb {R}}^n){\mathop {\hookrightarrow }\limits ^{d}}B^{-s}_{p',q'}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho (1-p')}). \end{aligned}$$

    Therefore, there are sequences \((\varphi _{k,l})_{l\in {\mathbb {N}}}\subset {\mathscr {S}}({\mathbb {R}}^n)\) with \(\Vert \varphi _{k,l}\Vert _{B^{-s}_{p',q'}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho (1-p')})}=1\) such that \(\varphi _{k,l}\rightarrow \varphi _k\) as \(l\rightarrow \infty \). Thus, we obtain

    $$\begin{aligned} \Vert u \Vert _{B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })}=\sup _{k,l\in {\mathbb {N}}}|\langle u,\varphi _{k,l}\rangle |. \end{aligned}$$

    Since \({\mathbb {N}}^2\) is countable, we can rename the functions and obtain the asserted sequence \((\psi _k)_{k\in {\mathbb {N}}}\).

  2. (b)

    The proof is almost the same as the one of Part (a). One just has to use Proposition 2.15 instead of Proposition 2.12.

  3. (c)

    Since \(B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) is separable, its Borel \(\sigma \)-field is generated by the open balls. Hence, it suffices to show that for all \(f\in B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) and all \(r>0\), we have \(B(f,r)\in {\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n))\). Now, we use part (a). Then, we obtain that

    $$\begin{aligned} B(f,r)&=\bigcup _{m\in {\mathbb {N}}}\overline{B(f,r-\tfrac{1}{m})} = \bigcup _{m\in {\mathbb {N}}}\bigcap _{k\in {\mathbb {N}}}\bigg \{u\in B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho }): |\langle f-u, \psi _k\rangle |\le r-\tfrac{1}{m} \bigg \}\\&= B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\cap \bigcup _{m\in {\mathbb {N}}}\bigcap _{k\in {\mathbb {N}}}\bigg \{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n) : |\langle f-u, \psi _k\rangle |\le r-\tfrac{1}{m} \bigg \}\\&=\bigcup _{m\in {\mathbb {N}}}\bigcap _{k\in {\mathbb {N}}}\bigg \{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n) : |\langle f-u, \psi _k\rangle |\le r-\tfrac{1}{m} \bigg \} \in \mathcal B_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^n)) \end{aligned}$$

    In the last step, we used that a tempered distribution belongs to the weighted Besov space \(B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) as soon as the supremum of its pairings with the sequence \((\psi _k)_{k\in {\mathbb {N}}}\) from part (a) is finite. Since \(f\in B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\), a tempered distribution \(u_0\) therefore belongs to \(B^{s}_{p,q}({\mathbb {R}}^n,\langle \,\cdot \,\rangle ^{\rho })\) if

    $$\begin{aligned} u_0\in \bigcap _{k\in {\mathbb {N}}}\bigg \{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n) : |\langle f-u, \psi _k\rangle |\le r-\tfrac{1}{m} \bigg \} . \end{aligned}$$

    This yields the assertion.

  4. (d)

    The proof is almost the same as the one of part (c).

\(\square \)
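The norming-sequence argument in part (a) of Lemma 3.4 has a simple finite-dimensional analogue which may help the intuition: in \(({\mathbb {R}}^d,\Vert \cdot \Vert _p)\) the norm is the supremum of countably many pairings with elements of the dual unit sphere. The following Python toy example (with arbitrary dimension and exponent) approximates this supremum by random dual elements and compares it with the exact extremal element from Hölder's inequality; it only illustrates the mechanism, not the Besov-space statement itself.

import numpy as np

rng = np.random.default_rng(3)
d, p = 5, 1.5                                  # arbitrary dimension and exponent
pp = p / (p - 1)                               # conjugate Hoelder index p'

x = rng.normal(size=d)
norm_x = np.sum(np.abs(x)**p)**(1 / p)

# random elements of the dual unit sphere give lower bounds for the norm ...
ys = rng.normal(size=(20000, d))
ys /= np.sum(np.abs(ys)**pp, axis=1, keepdims=True)**(1 / pp)
print("||x||_p                      :", norm_x)
print("sup over random dual elements:", np.max(ys @ x))

# ... and the extremal element from the equality case of Hoelder's inequality attains it
y_star = np.sign(x) * np.abs(x)**(p - 1)
y_star /= np.sum(np.abs(y_star)**pp)**(1 / pp)
print("pairing with extremal element:", float(x @ y_star))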

Lemma 3.5

Let \(\eta \) be a Lévy white noise. For \(t_2\ge t_1\ge 0\) let again

$$\begin{aligned} \eta _{(t_1,t_2]}:\Omega \rightarrow {\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}), \omega \mapsto \langle \eta (\omega ),\mathbb {1}_{(t_1,t_2]}\otimes (\,\cdot \,)\rangle . \end{aligned}$$

Suppose that for all \(t\ge 0\) the mapping \(\eta _{(0,t]}\) takes values in \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) for fixed parameters \(s,\rho \in {\mathbb {R}}\) and \(1<p,q<\infty \). Let \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) be endowed with its Borel \(\sigma \)-field. Then \((\eta _{(0,t]})_{t\ge 0}\) is a \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\)-valued stochastic process with stationary and independent increments. The same assertion holds if \(\eta \) is restricted to \([0,T]^n\) and if \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) is replaced by \(S^{\overline{s}'}_{p,p}B([0,T]^{n-1})\) for some \(\overline{s}'=(s_2,\ldots ,s_n)\in {\mathbb {R}}^{n-1}\).

Proof

We show the assertion for \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\). The proof for \(S^{\overline{s}}_{p,p}B([0,T]^{n-1})\) can be carried out the same way.

It follows from Proposition 3.3 that the mappings

$$\begin{aligned} \eta _{(0,t]}:(\Omega ,{\mathcal {F}})\rightarrow ({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}),{\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}))) \end{aligned}$$

are measurable. Thus, Lemma 3.4 shows that the mappings \(\eta _{(0,t]}\) for \(t\ge 0\) are \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\)-valued random variables, where \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) is endowed with the Borel \(\sigma \)-field. Proposition 3.3 also shows that the increments are stationary. Hence, it only remains to prove that the increments are independent.

Since the sets \(\prod _{j=1}^N(-\infty ,\alpha _j]\) with \(\alpha _j\in {\mathbb {R}}\) generate the Borel \(\sigma \)-field in \({\mathbb {R}}^N\), we also have that sets of the form

$$\begin{aligned} \{u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1})\vert \forall j\in \{1,\ldots ,N\}:\langle u,\varphi _j\rangle \le \alpha _j\} \end{aligned}$$

for some \(N\in {\mathbb {N}}\), \(\varphi _1,\ldots ,\varphi _N\in {\mathscr {S}}({\mathbb {R}}^{n-1})\) and \(\alpha _1,\ldots ,\alpha _N\in {\mathbb {R}}\) generate \({\mathcal {B}}_c({\mathscr {S\,}}^{\prime}({\mathbb {R}}^{n-1}))\). But together with Lemma 3.4 this implies that the Borel \(\sigma \)-field of \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) is generated by sets of the form

$$\begin{aligned} \{u\in B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\vert \forall j\in \{1,\ldots ,N\}:\langle u,\varphi _j\rangle \le \alpha _j\}. \end{aligned}$$

This collection of sets is stable under finite intersections and thus it suffices to verify the independence for preimages of such sets. As in Remark 2.24, one can use the characteristic function from Proposition 2.27 to show that the random variables \(\langle \eta _{(t_0,t_1]},\varphi _1\rangle ,\ldots ,\langle \eta _{(t_{N-1},t_N]},\varphi _N\rangle \) are independent for all choices of \(N\in {\mathbb {N}}\), \(0\le t_0<t_1<\cdots <t_N\), \(\varphi _1,\ldots ,\varphi _N\in {\mathscr {S}}({\mathbb {R}}^{n-1})\). Thus, for all choices of \(M, N_1,\ldots , N_M\in {\mathbb {N}}\), \(0\le t_0<t_1<\cdots <t_M\), \(\alpha _{j,k}\in {\mathbb {R}}\) and \(\varphi _{j,k}\in {\mathscr {S}}({\mathbb {R}}^{n-1})\) (\(1\le j\le M\), \(1\le k\le N_j\)), we have that

$$\begin{aligned}&{\mathbb {P}}\bigg (\bigcap _{j=1}^M \big \{\eta _{(t_{j-1},t_j]}\in \{u\in B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }): \forall k\in \{1,\ldots ,N_j\}:\langle u,\varphi _{j,k}\rangle \le \alpha _{j,k}\}\big \}\bigg )\\&\quad ={\mathbb {P}}\bigg (\bigcap _{j=1}^M\bigcap _{k=1}^{N_j} \big \{\eta _{(t_{j-1},t_j]}\in \{u\in B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }): \langle u,\varphi _{j,k}\rangle \le \alpha _{j,k}\}\big \}\bigg )\\&\quad = {\mathbb {P}}\bigg (\bigcap _{j=1}^M\bigcap _{k=1}^{N_j} \big \{\langle \eta _{(t_{j-1},t_j]},\varphi _{j,k}\rangle \le \alpha _{j,k}\big \}\bigg )\\&\quad =\prod _{j=1}^M{\mathbb {P}}\bigg (\bigcap _{k=1}^{N_j} \big \{\langle \eta _{(t_{j-1},t_j]},\varphi _{j,k}\rangle \le \alpha _{j,k}\big \}\bigg )\\&\quad =\prod _{j=1}^M{\mathbb {P}}\bigg (\big \{\eta _{(t_{j-1},t_j]}\in \{u\in B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }): \forall k\in \{1,\ldots ,N_j\}:\langle u,\varphi _{j,k}\rangle \le \alpha _{j,k}\}\big \}\bigg ). \end{aligned}$$

This shows that \((\eta _{(0,t]})_{t\ge 0}\) has independent increments. \(\square \)

Theorem 3.6

Consider the situation of Lemma 3.5 with \(s<0\). Let \(\alpha ,r\in {\mathbb {R}}\) and \(1\le p_1\le p'\le p_2<\infty \), where \(p'\) denotes the conjugate Hölder index of \(p\in (1,\infty )\).

  1. (a)

    Suppose that we have the embedding \(L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\hookrightarrow L_0(\eta )\) for all \(T>0\) and that \(\alpha <\min \{\rho ,p(n-1)(\tfrac{1}{p'}-\tfrac{1}{p_1})\}\). Then \((\eta _{(0,t]})_{t\ge 0}\) is a \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\)-valued Lévy process.

  2. (b)

    Suppose that we have the embedding \(L_{p_2}([0,T]\times {\mathbb {R}}^{n-1})\hookrightarrow L_0(\eta )\) for all \(T>0\) and \(r<\min \{s, (n-1)(\tfrac{1}{p_2}-\tfrac{1}{p'})\}\). Then \((\eta _{(0,t]})_{t\ge 0}\) is a \(B^{r}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\)-valued Lévy process.

  3. (c)

    Suppose that we have the embedding \(L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\cap L_{p_2}([0,T]\times {\mathbb {R}}^{n-1})\hookrightarrow L_0(\eta )\) for all \(T>0\) as well as the estimates \(\alpha <\min \{\rho ,p(n-1)(\tfrac{1}{p'}-\tfrac{1}{p_1})\}\) and \(r<\min \{s, (n-1)(\tfrac{1}{p_2}-\tfrac{1}{p'})\}\). Then \((\eta _{(0,t]})_{t\ge 0}\) is a \(B^{r}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\)-valued Lévy process.

Proof

  1. (a)

    Let \(\alpha \) be chosen as in the assertion. Then, we have the embedding

    $$\begin{aligned} B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }){\mathop {\hookrightarrow }\limits ^{d}} B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha }) \end{aligned}$$

    so that \(\eta _{(0,t]}\) takes values in \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\) for \(t\ge 0\). By Proposition 2.12, the dual space of \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\) is given by \(B^{-s}_{p',q'}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha (1-p')})\). It follows from Lemma 3.4 that there is a sequence \((\psi _k)_{k\in {\mathbb {N}}}\subset {\mathscr {S}}({\mathbb {R}}^{n-1})\) of Schwartz functions with \(\Vert u\Vert _{B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })}=\sup _{k\in {\mathbb {N}}}|\langle u, \psi _k \rangle |\) and \(\Vert \psi _k\Vert _{B^{-s}_{p',q'}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha (1-p')})}=1\). Using [14, Proposition 3] and the elementary embedding \(B^{{\widetilde{s}}}_{p_1,q}({\mathbb {R}}^{n-1})\hookrightarrow L_{p_1}({\mathbb {R}}^{n-1})\) for \({\widetilde{s}}>0\), we also obtain that

    $$\begin{aligned} B^{-s}_{p',q'}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha (1-p')})&\hookrightarrow L_{p_1}({\mathbb {R}}^{n-1})\quad \text {if } \alpha <p(n-1)\big (\tfrac{1}{p'}-\tfrac{1}{p_1}\big ). \end{aligned}$$

    In this case, we have that \((\psi _k)_{k\in {\mathbb {N}}}\) is bounded in \(L_{p_1}({\mathbb {R}}^{n-1})\). Therefore, if \(t,t_0\in [0,T]\) then \((\mathbb {1}_{(0,t_0]}-\mathbb {1}_{(0,t]})\otimes \psi _k\) goes uniformly in \(k\in {\mathbb {N}}\) to 0 in \(L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\) as \(t\rightarrow t_0\). But since we have the continuous embeddings

    $$\begin{aligned} L_{p_1}([0,T]\times {\mathbb {R}}^{n-1}){\mathop {\hookrightarrow }\limits ^{{\text {id}}}} L(\eta ) {\mathop {\hookrightarrow }\limits ^{\eta }} L_0(\Omega ,{\mathcal {F}},{\mathbb {P}}), \end{aligned}$$

    it follows that \(\langle \eta ,(\mathbb {1}_{(0,t_0]}-\mathbb {1}_{(0,t]})\otimes \psi _k\rangle \) goes uniformly in \(k\in {\mathbb {N}}\) to 0 in probability as \(t\rightarrow t_0\). Now Lemma 3.4 shows that \(\langle \eta ,(\mathbb {1}_{(0,t_0]}-\mathbb {1}_{(0,t]})\otimes \,\cdot \,\rangle \) goes to 0 in probability with respect to the space \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\) as \(t\rightarrow t_0\). Together with Lemma 3.5, this proves the assertion.

  2. (b)

    This can be shown with the same proof as part (a). One just has to replace \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\) by \(B^{r}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) and \(L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\) by \( L_{p_2}([0,T]\times {\mathbb {R}}^{n-1})\). In this case, we have

    $$\begin{aligned} B^{-r}_{p',q'}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho (1-p')})&\hookrightarrow L_{p_2}({\mathbb {R}}^{n-1})\quad \text {if } r< (n-1)\big (\tfrac{1}{p_2}-\tfrac{1}{p'}\big ). \end{aligned}$$

    Except for these changes, the proof can be carried out in the same way.

  3. (c)

    This case can again be carried out as part (a). This time, one has to replace \(B^{s}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\) by \(B^{r}_{p,q}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\alpha })\) and \(L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\) by the intersection \( L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\cap L_{p_2}([0,T]\times {\mathbb {R}}^{n-1})\). Of course, in this case, both estimates on r and \(\alpha \) have to be satisfied.

\(\square \)
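Since Theorem 3.6 is largely bookkeeping of parameter conditions, the following small Python helper (illustrative only, with hypothetical example values) collects the thresholds for the weight exponent \(\alpha \) in part (a) and the smoothness \(r\) in part (b); the analytic hypotheses, in particular the embeddings into \(L_0(\eta )\), of course still have to be verified separately.

def theorem_3_6_bounds(n, p, p1, p2, s, rho):
    # thresholds for the weight exponent alpha (part (a)) and the smoothness r (part (b))
    pp = p / (p - 1)                                   # conjugate Hoelder index p'
    assert 1 <= p1 <= pp <= p2, "need 1 <= p_1 <= p' <= p_2"
    alpha_bound = min(rho, p * (n - 1) * (1 / pp - 1 / p1))
    r_bound = min(s, (n - 1) * (1 / p2 - 1 / pp))
    print(f"(a) alpha < {alpha_bound:.3f}   (b) r < {r_bound:.3f}")
    return alpha_bound, r_bound

# hypothetical example values (here p' = 3)
theorem_3_6_bounds(n=3, p=1.5, p1=2.0, p2=4.0, s=-1.5, rho=-2.5)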

Theorem 3.7

Consider the situation of Lemma 3.5 with \(1<p<\infty \) and \(\overline{s}'=(s_2,\ldots ,s_{n})\in (-\infty ,0)^{n-1}\). Let \(p'\) be the conjugate Hölder index and \(1\le p_1<\infty \) such that \(L_{p_1}([0,T]^{n})\hookrightarrow L_0(\eta )\). If \(\max \{s_2,\ldots ,s_n\}<\frac{1}{p_1}-\frac{1}{p'}\), then the restriction of \((\eta _{(0,t]})_{t\ge 0}\) to \([0,T]^{n-1}\) is an \(S^{\overline{s}'}_{p,p}B([0,T]^{n-1})\)-valued Lévy process.

Proof

The proof is similar to the one of Theorem 3.6. This time we use that

$$\begin{aligned} (S^{\overline{s}}_{p,p}B([0,T]^{n-1}))'&=S^{-\overline{s}}_{p',p',0}B([0,T]^{n-1})\\&=B^{-s_2}_{p',p',0}([0,T])\otimes _{\alpha _p}\cdots \otimes _{\alpha _p} B^{-s_n}_{p',p',0}([0,T])\\&\hookrightarrow L_{p_1}([0,T])\otimes _{\alpha _p}\cdots \otimes _{\alpha _p} L_{p_1}([0,T])\\&\cong L_{p_1}([0,T]^{n-1}), \end{aligned}$$

where \(\otimes _{\alpha _p}\) denotes the tensor product with respect to the p-nuclear tensor norm and where we used that

$$\begin{aligned} B^{-\max \{s_2,\ldots ,s_n\}}_{p',p',0}([0,T])\hookrightarrow B^{-\max \{s_2,\ldots ,s_n\}}_{p',p'}([0,T]) \hookrightarrow B^{\varepsilon }_{p_1,p_1}([0,T])\hookrightarrow L_{p_1}([0,T]) \end{aligned}$$

if \(-\max \{s_2,\ldots ,s_n\}-\frac{1}{p'}>\varepsilon -\frac{1}{p_1}\) and \(\varepsilon >0\). Here, the first embedding follows directly from the definitions. For the second embedding, we refer to [40, Section 3.3.1]. The last embedding can for example be found in [41, Section 2.3.2, Remark 3]. With the embedding

$$\begin{aligned} (S^{\overline{s}}_{p,p}B([0,T]^{n-1}))'\hookrightarrow L_{p_1}([0,T]^{n-1}) \end{aligned}$$

at hand, the proof is analogous to the one of Theorem 3.6. \(\square \)

Remark 3.8

Embeddings of the form

$$\begin{aligned} L_{p_1}({\mathbb {R}}^n)\hookrightarrow L_0(\eta ),\quad L_{p_2}({\mathbb {R}}^n)\hookrightarrow L_0(\eta )\quad \text {or} \quad L_{p_1}({\mathbb {R}}^n)\cap L_{p_2}({\mathbb {R}}^n)\hookrightarrow L_0(\eta ) \end{aligned}$$

are satisfied for many different kinds of Lévy white noise, see [14, Table 1]. Accordingly, Theorems 3.6 and 3.7 can be applied to them. As an example, we carry out the Gaussian case:

Corollary 3.9

Consider the situation of Theorem 3.6 and suppose that the Lévy triplet is given by (0, 1, 0) so that we have the Gaussian case. Then \((\eta _{(0,t]})_{t\ge 0}\) has a modification that is a Brownian motion with values in \(B^{s}_{p,p}({\mathbb {R}}^{n-1},\langle \cdot \rangle ^{\rho })\) if

$$\begin{aligned} s<-\frac{n-1}{2},\quad \rho <-n+1. \end{aligned}$$

Proof

It follows from Proposition 3.3 and Theorem 2.30 that \(\eta _{(0,t]}\) almost surely takes values in the weighted Besov space \(B^{\widetilde{s}}_{p,p}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\widetilde{\rho }})\) if

$$\begin{aligned} \widetilde{s}<-\frac{n-1}{2},\quad \widetilde{\rho }<-n+1. \end{aligned}$$

By Remark 2.28 we know that \(L_0(\eta )=L_2({\mathbb {R}}^n)\). Hence, if \(1<p\le 2\) we can consider case (a) in Theorem 3.6 with \(p_1=2\). In this case, \(\rho \) has to satisfy

$$\begin{aligned} \rho<\min \bigg \{\widetilde{\rho },p(n-1)\big (\tfrac{1}{p'}-\tfrac{1}{p_1}\big )\bigg \}=\min \bigg \{\widetilde{\rho },(n-1)\big (\tfrac{p}{2}-1\big )\bigg \}<-n+1. \end{aligned}$$

If in turn \(2\le p<\infty \), then we can use Theorem 3.6 (b) with \(p_2=2\) so that we obtain the condition

$$\begin{aligned} s<\min \bigg \{\widetilde{s}, (n-1)\big (\tfrac{1}{p_2}-\tfrac{1}{p'}\big )\bigg \}<-\frac{n-1}{2}. \end{aligned}$$

Altogether, we obtain the assertion. \(\square \)
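A Monte Carlo illustration of Corollary 3.9, again with a crude discretization: for a two-dimensional Gaussian white noise and a fixed test function \(\varphi \), the real-valued process \(t\mapsto \langle \eta ,\mathbb {1}_{(0,t]}\otimes \varphi \rangle \) should behave like a Brownian motion with variance \(t\Vert \varphi \Vert _{L_2}^2\) and independent increments. Grid size, sample size and the test function in the Python sketch below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(4)
m, n_samples = 100, 4000                            # arbitrary grid and sample sizes
dx = 1.0 / m
grid = (np.arange(m) + 0.5) * dx
phi = np.cos(2 * np.pi * grid)                      # fixed test function, ||phi||_{L_2}^2 = 1/2

X = np.empty((n_samples, m))                        # X[:, i] ~ <eta, 1_{(0, t_i]} tensor phi>
for k in range(n_samples):
    dW = rng.normal(size=(m, m)) * dx               # cell-wise white-noise increments
    X[k] = np.cumsum(dW @ phi)                      # cumulative sums in the first variable

print("Var X_t / (t ||phi||^2) at t = 0.5 and t = 1:",
      X[:, m // 2 - 1].var() / (0.5 * 0.5), X[:, -1].var() / (1.0 * 0.5))
# increments over two disjoint time windows should be (empirically) uncorrelated
inc1 = X[:, m // 4] - X[:, 0]
inc2 = X[:, -1] - X[:, m // 2]
print("correlation of disjoint increments (should be ~ 0):", np.corrcoef(inc1, inc2)[0, 1])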

Proposition 3.10

Let \(n_1,n_2\in {\mathbb {N}}\) with \(n_1+n_2=n\), \(T>0\) and \({\mathcal {O}}\in \{[0,T]^{n_2},{\mathbb {R}}^{n_2}\}\). Suppose that there are \(p,p_1,p_2\in [1,\infty )\) such that \(L_{p_1}([0,T]^{n_1}\times {\mathcal {O}})\cap L_{p_2}([0,T]^{n_1}\times {\mathcal {O}})\hookrightarrow L_p(\eta )\) (for example \(p=2\) in the symmetric case, see Remark 2.28). Then the mapping

$$\begin{aligned} \partial _t \eta _{(0,t]}:{\mathscr {S}}_0([0,T]^{n_1})\otimes {\mathscr {S}}_0({\mathcal {O}})\rightarrow L_p(\Omega ,\mathcal F,\mathbb {P}),\;\psi \otimes \varphi \mapsto [\partial _1\cdots \partial _{n_1} \langle \eta _{(0,t]}, \varphi \rangle ](\psi ) \end{aligned}$$

extends again to the white noise \(\eta \). Here, \([\partial _1\cdots \partial _{n_1} \langle \eta _{(0,t]}, \varphi \rangle ](\psi )\) means that we apply the distributional derivatives of the trajectories of \((\langle \eta _{(0,t]}, \varphi \rangle )_{t\ge 0}\) to the test function \(\psi \).

Proof

By Proposition 3.3, it suffices to prove the result for \(n_1=1\). Higher dimensions then follow by iteration. Therefore, let \(\psi \in {\mathscr {S}}_0([0,T])\) and \(\varphi \in {\mathscr {S}}_0({\mathcal {O}})\). First, we define the function

$$\begin{aligned} K:[0,\infty )\rightarrow L_{p_1}([0,T])\cap L_{p_2}([0,T]), t\mapsto [s\mapsto \mathbb {1}_{[0,s)}(t) \psi '(t)]. \end{aligned}$$

This function is continuous and therefore Bochner integrable. Indeed, let \((t_k)_{k\in {\mathbb {N}}}\subset [0,\infty )\) such that \(t_0=\lim _{k\rightarrow \infty } t_k\). Then, for all \(s\ne t_0\), we have that

$$\begin{aligned} \mathbb {1}_{[0,s)}(t_k)\psi '(t_k)\rightarrow \mathbb {1}_{[0,s)}(t_0)\psi '(t_0)\quad (k\rightarrow \infty ) \end{aligned}$$

so that the continuity follows by dominated convergence. Moreover, we note that \(f\mapsto \langle \eta ,f\rangle \) defines a bounded linear operator from \(L_{p_1}([0,T]\times {\mathcal {O}})\cap L_{p_2}([0,T]\times {\mathcal {O}})\) to \(L_p(\Omega ,\mathcal F,\mathbb {P})\) by Proposition 2.27. Using these two facts, we may interchange the order of \(\eta \) and the integration in the following computation:

$$\begin{aligned}{}[\partial _t\langle \eta _{(0,t]},\varphi \rangle ](\psi )&=[\partial _t\langle \eta ,\mathbb {1}_{(0,t]}\otimes \varphi \rangle ](\psi )\\&= (\psi (T)-\psi (0))\langle \eta ,\mathbb {1}_{[0,T]}\otimes \varphi \rangle -\int _0^T\langle \eta ,\mathbb {1}_{(0,t]}\otimes \varphi \rangle \psi '(t)\,{\mathrm{d}}t\\&= \int _0^T\langle \eta ,(\mathbb {1}_{[0,T]}-\mathbb {1}_{(0,t]})\otimes \varphi \rangle \psi '(t)\,{\mathrm{d}}t = \int _0^T\langle \eta ,\mathbb {1}_{[0,\,\cdot \,)}(t)\otimes \varphi \rangle \psi '(t)\,{\mathrm{d}}t\\&= \int _0^T \langle \eta ,K(t)\otimes \varphi \rangle \,{\mathrm{d}}t=\left\langle \eta ,\int _0^T K(t)\,{\mathrm{d}}t\otimes \varphi \right\rangle \\&=\left\langle \eta ,\int _0^T \mathbb {1}_{[0,\,\cdot \,)}(t)\psi '(t)\,{\mathrm{d}}t\otimes \varphi \right\rangle =\left\langle \eta ,\int _0^{\cdot } \psi '(t)\,{\mathrm{d}}t\otimes \varphi \right\rangle \\&=\langle \eta ,\psi \otimes \varphi \rangle . \end{aligned}$$

As the tensor product \({\mathscr {S}}_0([0,T])\otimes {\mathscr {S}}_0({\mathcal {O}})\) is sequentially dense in \({\mathscr {S}}_0([0,T]\times {\mathcal {O}})\) (see for example [2, Theorem 1.8.1]), it follows from the continuity of \(\eta :{\mathscr {S}}_0([0,T]\times {\mathcal {O}})\rightarrow L_p(\Omega ,{\mathscr {F}},\mathbb {P})\) that \(\partial _t\eta _{(0,t]}\) extends to \(\eta \). \(\square \)
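The integration-by-parts identity at the heart of the proof of Proposition 3.10 can be tested numerically in the simplest setting \(n_1=1\), \({\mathcal {O}}=[0,1]\) with Gaussian noise: for \(\psi \) vanishing at 0 and T, the quantity \(-\int _0^T\langle \eta ,\mathbb {1}_{(0,t]}\otimes \varphi \rangle \psi '(t)\,{\mathrm{d}}t\) should agree with \(\langle \eta ,\psi \otimes \varphi \rangle \) up to a small discretization error. The Python sketch below uses a cell-wise discretization and arbitrary choices of \(\psi \) and \(\varphi \).

import numpy as np

rng = np.random.default_rng(5)
T, m = 1.0, 2000                                    # arbitrary horizon and grid size
dt = T / m
tgrid = (np.arange(m) + 0.5) * dt                   # time grid
ygrid = (np.arange(m) + 0.5) * dt                   # grid for O = [0, 1]

psi = np.sin(np.pi * tgrid / T)**2                  # psi(0) = psi(T) = 0
dpsi = (np.pi / T) * np.sin(2 * np.pi * tgrid / T)  # psi'
phi = np.exp(-5.0 * (ygrid - 0.3)**2)               # arbitrary test function on O

dW = rng.normal(size=(m, m)) * dt                   # cell-wise white-noise increments
G = dW @ phi                                        # pairing of each time slab with phi
F = np.cumsum(G)                                    # F(t_i) = <eta, 1_{(0, t_i]} tensor phi>

lhs = -np.sum(F * dpsi) * dt                        # - int_0^T F(t) psi'(t) dt
rhs = np.sum(psi[:, None] * phi[None, :] * dW)      # <eta, psi tensor phi>
print("lhs =", lhs, " rhs =", rhs, " relative error =", abs(lhs - rhs) / abs(rhs))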

Theorem 3.11

Let \(0<T<\infty \) and let \(\widetilde{\eta }\) be a Lévy white noise restricted to \([0,T]\times {\mathbb {R}}^{n-1}\) with Lévy triplet \((\gamma ,\sigma ^2,\nu )\), Blumenthal–Getoor indices \(0\le \underline{\beta }{}_{\infty }\le \beta _{\infty }\le 2\) and moment index \(0<p_{max}\le \infty \). Let further \(p\in (1,\infty )\) and \(\widetilde{p}\in (1,\infty )\) be fixed.

  1. (a)

    The Gaussian case: Suppose that \(\gamma =0\) and \(\nu =0\). If \(t\le -\frac{1}{2}\), \(s<-\frac{n-1}{2}\) and \(\rho <-n+1\), then \(\widetilde{\eta }\) has a modification \(\eta \) such that

    $$\begin{aligned} {\mathbb {P}}\big (\eta \in B^{t}_{\widetilde{p},\infty }([0,T],B^{s}_{p,p}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }))\big )=1. \end{aligned}$$

    If \(t> -\frac{1}{2}\), \(s\ge -\frac{n-1}{2}\) or \(\rho \ge -\frac{n-1}{p}\), then we have

    $$\begin{aligned} {\mathbb {P}}\big (\eta \notin B^{t}_{\widetilde{p},\infty }([0,T],B^{s}_{p,p}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }))\big )=1. \end{aligned}$$
  2. (b)

    The compound Poisson case: Let \(p\in (1,\infty )\) and \(1\le p_1<p'<p_2<\infty \) such that

    $$\begin{aligned} L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\cap L_{p_2}([0,T]\times {\mathbb {R}}^{n-1})\hookrightarrow L_1(\eta ). \end{aligned}$$

    Let further \(t\le -1\) (with \(t<-1\) if \({\widetilde{p}}<2\)), \(s<(n-1)(\frac{1}{p}-1)\) and \(\rho <-\frac{(n-1)p}{\min \{p,p_{max}\}}\). Then \(\widetilde{\eta }\) has a modification \(\eta \) such that

    $$\begin{aligned} {\mathbb {P}}\big (\eta \in B^{t}_{\widetilde{p},\widetilde{p}}([0,T],B^{s}_{p,p}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }))\big )=1. \end{aligned}$$
  3. (c)

    The general non-Gaussian case: Let \(p\in (1,2]\cup 2{\mathbb {N}}\) and \(1\le p_1<p'<p_2<\infty \) such that

    $$\begin{aligned} L_{p_1}([0,T]\times {\mathbb {R}}^{n-1})\cap L_{p_2}([0,T]\times {\mathbb {R}}^{n-1})\hookrightarrow L_1(\eta ). \end{aligned}$$

    Let further \(t\le -1\) (with \(t<-1\) if \({\widetilde{p}}< 2\)), \(s<(n-1)(\frac{1}{\max \{p,\beta _{\infty }\}}-1)\) and \(\rho <-\frac{(n-1)p}{\min \{p,p_{max}\}}\). Then \(\widetilde{\eta }\) has a modification \(\eta \) such that

    $$\begin{aligned} {\mathbb {P}}\big (\eta \in B^{t}_{\widetilde{p},\widetilde{p}}([0,T],B^{s}_{p,p}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho }))\big )=1. \end{aligned}$$

Proof

Proposition 3.3 yields that for all \(t_0\in [0,T]\) the pairing \(\langle \eta ,\mathbb {1}_{(0,t_0]}\otimes \,\cdot \,\rangle \) is a white noise with the Lévy triplet \((t_0\gamma ,t_0\sigma ^2,t_0\nu )\). Hence, for fixed \(t_0\in [0,T]\) we can use Theorem 2.30 in order to obtain that \(\langle \eta ,\mathbb {1}_{(0,t_0]}\otimes \,\cdot \,\rangle \) almost surely takes values in \(B^{s}_{p,p}({\mathbb {R}}^{n-1},\langle \,\cdot \,\rangle ^{\rho })\) with certain s and \(\rho \), depending on the respective case. Moreover, it follows from Proposition 3.10 that we can write \(\eta =\partial _t\langle \eta ,\mathbb {1}_{(0,t]}\otimes \,\cdot \,\rangle \). Hence, the Gaussian case follows from Corollary 3.9 together with the regularity results on Brownian motions, Theorem 2.34. The compound Poisson case and the general non-Gaussian case follow from Theorem 3.6 together with Proposition 2.33. In order to see this, we note that

$$\begin{aligned} (n-1)\bigg (\tfrac{1}{p}-1\bigg )\le (n-1)\bigg (\tfrac{1}{p_2}-\tfrac{1}{p'}\bigg ),\quad -\tfrac{(n-1)p}{\min \{p,p_{max}\}}\le p(n-1)\big (\tfrac{1}{p'}-\tfrac{1}{p_1}\big ). \end{aligned}$$

Hence, the estimates from Theorem 3.6 do not give additional restrictions. \(\square \)
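The two elementary inequalities displayed at the end of the proof of Theorem 3.11 can be confirmed by a quick numerical sweep over admissible parameters; the following Python snippet is only such a sanity check and does not replace the short analytic verification.

import numpy as np

rng = np.random.default_rng(6)
for _ in range(10000):
    n = int(rng.integers(2, 6))
    p = rng.uniform(1.01, 10.0)
    pp = p / (p - 1)                                  # conjugate index p'
    p1 = rng.uniform(1.0, pp)                         # 1 <= p_1 <= p'
    p2 = rng.uniform(pp, 20.0)                        # p' <= p_2
    pmax = rng.uniform(0.01, 10.0)                    # moment index (finite case)
    assert (n - 1) * (1 / p - 1) <= (n - 1) * (1 / p2 - 1 / pp) + 1e-12
    assert -(n - 1) * p / min(p, pmax) <= p * (n - 1) * (1 / pp - 1 / p1) + 1e-12
print("both inequalities held for all sampled parameter tuples")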

Remark 3.12

As in Remark 2.31, one can weaken the conditions on p in the non-Gaussian case of Theorem 3.11. More precisely, the assertion of the non-Gaussian case of Theorem 3.11 also holds if \(p_{\max }\in 2{\mathbb {N}}\) and \(p\in (1,\infty )\) or if \(p_{\max }\in (N,N+2)\) and \(p\in (1,\infty ){\setminus }(N,N+2)\) for some \(N\in 2{\mathbb {N}}\).

Theorem 3.13

Let \(\varepsilon ,T>0\) and let \({\widetilde{\eta }}\) be a Lévy white noise restricted to \([0,T]^n\) and let \(p\in (1,\infty )\). Let further \(l=n\), i.e. the smoothness parameters of spaces with dominating mixed smoothness are elements of \({\mathbb {R}}^n\).

  1. (a)

    The Gaussian case: There is a modification \(\eta \) of \({\widetilde{\eta }}\) such that

    $$\begin{aligned} {\mathbb {P}}(\eta \in S^{(-\tfrac{1}{2}-\varepsilon ,\ldots ,-\tfrac{1}{2}-\varepsilon )}_{p,p}B([0,T]^n))=1. \end{aligned}$$

    Moreover, it holds that

    $$\begin{aligned} {\mathbb {P}}(\eta \in S^{(-\tfrac{1}{2},\ldots ,-\tfrac{1}{2})}_{p,p}B([0,T]^n))=0. \end{aligned}$$
  2. (b)

    The compound Poisson case: Let \(p\in (1,\infty )\) and \(1\le p_1<p_2<\infty \) such that

    $$\begin{aligned} L_{p_1}([0,T]^n)\cap L_{p_2}([0,T]^n)\hookrightarrow L_1(\eta ). \end{aligned}$$

    Let further \(t\le -1\) (with \(t<-1\) if \(p<2\)) and \(s<\frac{1}{p}-1\). Then \(\widetilde{\eta }\) has a modification \(\eta \) such that

    $$\begin{aligned} {\mathbb {P}}\big (\eta \in S^{(t,\ldots ,t,s)}_{p,p}B([0,T]^n)\big )=1. \end{aligned}$$
  3. (c)

    The general non-Gaussian case: Let \(p\in (1,\infty )\) and \(1\le p_1<p_2<\infty \) such that

    $$\begin{aligned} L_{p_1}([0,T]^n)\cap L_{p_2}([0,T]^n)\hookrightarrow L_1(\eta ). \end{aligned}$$

    Let further \(t\le -1\) (with \(t<-1\) if \(p< 2\)) and \(s<\frac{1}{\max \{p,\beta _{\infty }\}}-1\). Then there is a modification \(\eta \) of \({\widetilde{\eta }}\) such that

    $$\begin{aligned} {\mathbb {P}}(\eta \in S^{(t,\ldots ,t,s)}_{p,p}B([0,T]^n))=1. \end{aligned}$$

Proof

First, we apply Proposition 3.3 with \(n_1=n-1\). Thus, for every fixed \(t\in [0,T]^{n-1}\), we have that \({\widetilde{\eta }}_{(0,t]}\) is a one-dimensional Lévy white noise on [0, T] which almost surely takes values in \(B^{-1/2-\varepsilon }_{p,p}([0,T])\) in the Gaussian case or if \(p<2\), and in \(B^{1/p-1-\varepsilon }_{p,p}([0,T])\) in the non-Gaussian case with \(p>2\), by Theorem 2.30. Now, it follows from Theorem 3.7 that for fixed \((t_1,\ldots ,t_{n-2})\in [0,T]^{n-2}\) the family \(({\widetilde{\eta }}_{(0,(t_1,\ldots ,t_{n-1})]})_{t_{n-1}\in [0,T]}\) is a \(B^{-1/2-\varepsilon }_{p,p}([0,T])\)-valued Brownian motion in the Gaussian case and a \(B^{s}_{p,p}([0,T])\)-valued Lévy process in the other two cases. By Theorem 2.34, it has a modification which almost surely has paths in \(B^{1/2}_{p,\infty }([0,T];\;B^{-1/2-\varepsilon }_{p,p}([0,T]))\) but not better in the Gaussian case and in \(B^{t}_{p,p}([0,T];\;B^{s}_{p,p}([0,T]))\) in the non-Gaussian cases. Together with

$$\begin{aligned} B^{\frac{1}{2}}_{p,\infty }([0,T];\;B^{-\frac{1}{2}-\varepsilon }_{p,p}([0,T]))\hookrightarrow B^{\frac{1}{2}-\varepsilon }_{p,p}([0,T];\;B^{-\frac{1}{2}-\varepsilon }_{p,p}([0,T])) \end{aligned}$$

and Proposition 2.11 we obtain the assertion for \(n=2\). For general \(n\in {\mathbb {N}}\), we iterate the same argument using Theorem 3.7. \(\square \)
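The threshold \(-\tfrac{1}{2}\) appearing in Theorem 3.13 (a) can be made plausible by a simple heuristic: expanding a two-dimensional Gaussian white noise in an \(L_2\)-normalized tensor (Haar-type) basis, the coefficients are i.i.d. standard Gaussians, and the level blocks of the mixed-smoothness quasi-norm scale in expectation like \(2^{(j_1+j_2)[(s+1/2-1/p)p+1]}\), which is summable precisely for \(s<-\tfrac{1}{2}\). The Python sketch below simulates these level blocks for two values of s; it deliberately ignores the restricted parameter range in which Haar-type characterizations of (mixed) Besov spaces are actually valid and is not a substitute for the proof.

import numpy as np

rng = np.random.default_rng(7)
p, J = 2.0, 9                                        # arbitrary integrability and depth

def level_contributions(s):
    contrib = []
    for j1 in range(J):
        block = 0.0
        for j2 in range(J):
            xi = rng.normal(size=(2**j1, 2**j2))     # i.i.d. N(0,1) coefficients of the noise
            block += 2.0**((j1 + j2) * (s + 0.5 - 1 / p) * p) * np.sum(np.abs(xi)**p)
        contrib.append(block)                        # total contribution of the levels (j1, .)
    return np.round(contrib, 2)

print("s = -0.75 (below -1/2): level contributions decay :", level_contributions(-0.75))
print("s = -0.25 (above -1/2): level contributions grow  :", level_contributions(-0.25))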

Remark 3.14

  1. (a)

    Since composition of a white noise with a Euclidean motion as in Remark 2.24 again gives a white noise, we can even further improve Theorem 3.13. Let for example \(B(x_0,r)\subset [0,T]^n\) be a ball in \([0,T]^n\) and consider the restriction of \({\widetilde{\eta }}\) to this ball. By Theorem 3.13 there is a modification \(\eta \) which takes values in \(S^{(-1/2-\varepsilon ,\ldots ,-1/2-\varepsilon )}_{p,p}B(B(x_0,r))\) in the Gaussian case. Now we take a rotation A around \(x_0\), which is a bijection on \(B(x_0,r)\) and a Euclidean motion on \({\mathbb {R}}^n\). Thus, \({\widetilde{\eta }}\circ A\) is again a white noise so that there is a modification \(\eta _1\) which also takes values in \(S^{(-1/2-\varepsilon ,\ldots ,-1/2-\varepsilon )}_{p,p}B(B(x_0,r))\). Therefore, for any countable family \((A_k)_{k\in {\mathbb {N}}}\) of such rotations, there is a modification \(\eta \) such that for all \(k\in {\mathbb {N}}\) the rotated noise \(\eta \circ A_k\) also takes values in \(S^{(-1/2-\varepsilon ,\ldots ,-1/2-\varepsilon )}_{p,p}B(B(x_0,r))\) almost surely. The same argument can also be applied in the non-Gaussian cases.

  2. (b)

    Theorems 3.11 and 3.13 are probably not optimal for the general Lévy case. Looking for example at Theorem 3.13, it seems natural to guess that actually

    $$\begin{aligned} {\mathbb {P}}(\eta \in S^{(s,\ldots ,s)}_{p,p}B([0,T]^n))=1 \end{aligned}$$

    holds, i.e. that the smoothness parameter \(s\) can be taken in every component.

4 Equations with boundary noise

Now we combine our considerations with the ones on elliptic and parabolic boundary value problems with rough boundary data in [20]. While our results might look quite involved, we would like to point out that there is a simple idea behind them: the solutions of the boundary value problems we consider here are arbitrarily smooth in the interior. However, as white noise is very rough, there will be singularities at the boundary if one looks at the solution in spaces with higher regularity. The higher the smoothness in time, in the tangential directions and in the normal direction is, the stronger the singularity will be. One can avoid stronger singularities by trading smoothness in the different directions against each other to some extent. The question of how far one can push this will be answered by some technical conditions on the parameters involved. These conditions make our results look more complicated than they actually are.

We should note that Proposition 4.1 on the Poisson equation already follows from the known results, Theorem 2.30 and [20, Theorem 6.1]. Proposition 4.3 on the heat equation in turn uses our new result, Theorem 3.11, and [20, Theorem 6.4]. For this section, it is important to keep Example 2.1 in mind. Since \(\langle \,\cdot \,\rangle ^{\rho }\) is an admissible weight for any \(\rho \in {\mathbb {R}}\), the Besov scale and its dual scale coincide, cf. Proposition 2.12. Thus, we may apply the results from [20].

There are already several papers in which the singularities at the boundary of solutions of the Poisson and heat equations with Dirichlet boundary noise are studied; we refer the reader to [1, 7, 10]. This is mainly done by introducing power weights which measure the distance to the boundary of the domain, i.e. weights of the form \({\mathrm{dist}}(x,\partial {\mathcal {O}})^r\) for some \(r\in {\mathbb {R}}\). Such weights are also useful in our approach. But in contrast to [1, 7, 10], we work in spaces with mixed smoothness. This allows us to trade smoothness in the normal direction for smoothness in the tangential directions. Thus, we can interpret the boundary conditions in a classical sense without having to rely on a mild solution concept.

Since we work in \({\mathbb {R}}^{n}_+\) in this section, the power weight is given by

$$\begin{aligned} |{\text {pr}}_n|^r:{\mathbb {R}}^n_+\rightarrow {\mathbb {R}}_+, (x_1,\ldots , x_n)\mapsto |x_n|^r. \end{aligned}$$

In this section, we sometimes add subscripts to the domains of function spaces to indicate with respect to which variables the spaces should be understood. For example, we write \(B^{s_1}_{p_1,q_1}({\mathbb {R}}_t;\;B^{s_2}_{p_2,q_2}({\mathbb {R}}_{+,x_n};\;B^{s_3}_{p_3,q_3}({\mathbb {R}}_{x'}^{n-1})))\) where \({\mathbb {R}}_t\) corresponds to the time direction, \({\mathbb {R}}_{+,x_n}\) to the normal direction and \({\mathbb {R}}_{x'}^{n-1}\) to the tangential directions. \({\mathbb {R}}^n_{+,x}\) will refer to the space directions.

Proposition 4.1

Let \(\eta \) be a Lévy white noise on \({\mathbb {R}}^{n-1}\) with values in \(B^{s}_{p,p}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho })\) for some parameters \(p\in (1,\infty )\), \(s,\rho \in {\mathbb {R}}\), see Theorem 2.30. Let \(j\in \{0,1\}\). Then for all \(\lambda \in {\mathbb {C}}{\setminus }(-\infty ,0]\), there is almost surely a unique solution \(u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}^n_{+,x})\) of the equation

$$\begin{aligned} \lambda u-\Delta u&=0\quad \text {in }{\mathbb {R}}^n_{+,x},\\ \partial _n^j u&=\eta \quad \text {on }{\mathbb {R}}^{n-1}_{x'}, \end{aligned}$$

which satisfies

$$\begin{aligned} u\in \bigcap _{\begin{array}{c} r,t\in {\mathbb {R}},k\in {\mathbb {N}}_0,q\in [1,\infty ),\\ r-q[t+k-j-s]_+>-1 \end{array}} W^k_q({\mathbb {R}}_{+,x_n},|{\text {pr}}_n|^r;\;B^{t}_{p,p}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho })). \end{aligned}$$

Moreover, for all \(\sigma >0\), \(r,t\in {\mathbb {R}}\), \(k\in {\mathbb {N}}_0\) and \(q\in [1,\infty )\) such that \(r-q[t+k-j-s]_+>-1\) there is a constant \(C>0\) such that for all \(\lambda \in \{z\in {\mathbb {C}}: |z|>\sigma , |{\text {arg}} z|<\pi -\sigma \}\) it holds almost surely that

$$\begin{aligned} \Vert u\Vert _{W^k_q({\mathbb {R}}_{+,x_n},|{\text {pr}}_n|^r;\;B^{t}_{p,p}({\mathbb {R}}^{n-1}_{x'};\langle \cdot \rangle ^{\rho }))}\le C |\lambda |^{\frac{-1-r+q(k-j)+q[t-s]_+}{2q}} \Vert \eta \Vert _{B^{s}_{p,p}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho })}. \end{aligned}$$

Proof

This follows directly from combining Theorem 2.30 and [20, Theorem 6.1]. \(\square \)

Remark 4.2

Note that Proposition 4.1 yields that \(u\in C^{\infty }({\mathbb {R}}^n_+)\) with certain singularities which are measured by the weight \(|{\text {pr}}_n|^r\) at the boundary. It is instructive to give up some generality to see how strong these singularities are in classical function spaces such as \(L_2\). Note that the Lévy noises \(\eta \) on \({\mathbb {R}}^{n-1}\) which we consider here always satisfy \(\eta \in B^{1+\varepsilon -n}_{2,2}({\mathbb {R}}^{n-1},\langle \cdot \rangle ^{\rho })\) for some \(\varepsilon >0\). Consider for example the Dirichlet case, i.e. \(j=0\). If we take \(p=q=2\) and \(k=t=0\), then the restriction \(r-q[t+k-j-s]_+>-1\) shows that we have to take \(r>2n-3-2\varepsilon \), so that our solution satisfies \(u\in L_{2,loc}(\overline{{\mathbb {R}}^n_+},|{\text {pr}}_n|^{2n-3})\).
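The arithmetic of Remark 4.2 is easy to automate; the following Python helper (illustrative only) computes the threshold \(q[t+k-j-s]_+-1\) above which the weight exponent r may be chosen and reproduces the value \(2n-3-2\varepsilon \) of the Dirichlet example.

def elliptic_weight_threshold(q, k, t, j, s):
    # admissible weights in Proposition 4.1: r > q * [t + k - j - s]_+ - 1
    return q * max(t + k - j - s, 0.0) - 1.0

# Dirichlet example of Remark 4.2: s = 1 + eps - n, p = q = 2, k = t = 0, j = 0
n, eps = 3, 0.01
print(elliptic_weight_threshold(q=2, k=0, t=0, j=0, s=1 + eps - n))
# prints 2n - 3 - 2*eps = 2.98, so e.g. r = 2n - 3 = 3 is admissible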

Proposition 4.3

Let \(\eta \) be a Lévy white noise on \({\mathbb {R}}_t\times {\mathbb {R}}^{n-1}_{x'}\) with values in the space \(B^{s_2}_{p_2,\infty ,loc}({\mathbb {R}}_t;\;B^{s_1}_{p_1,p_1}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho }))\) for some parameters \(p_1,p_2\in (1,\infty )\), \(s_1,s_2,\rho \in {\mathbb {R}}\), see Theorem 3.11. Let \({\widetilde{\varphi }}\in {\mathscr {D}}({\mathbb {R}})\), \(\varphi ={\widetilde{\varphi }}\otimes \mathbb {1}_{{\mathbb {R}}^{n-1}}\in C^{\infty }({\mathbb {R}}_t\times {\mathbb {R}}^{n-1}_{x'})\), \(j\in \{0,1\}\) and

$$\begin{aligned}&P:=\bigg \{(r,t_0,l,k,q):t_0,l\in {\mathbb {R}},\, r\in (-1,\infty ),\,k\in {\mathbb {N}}_0,\,q\in [1,\infty ),\\&\quad r-q[t_0+k-j-s_1]_+>-1,\\&\quad r-2q(l-s_2)-q(k-j)-q[t_0-s_1]_+>-1\bigg \}. \end{aligned}$$

Then, there is almost surely a unique solution \(u\in {\mathscr {S\,}}^{\prime}({\mathbb {R}}_t\times {\mathbb {R}}^n_{+,x})\) of the equation

$$\begin{aligned} \partial _tu+u-\Delta u&=0\quad \;\text {in }{\mathbb {R}}_t\times {\mathbb {R}}^n_{+,x},\\ \partial _n^j u&=\varphi \cdot \eta \quad \text {on }{\mathbb {R}}_t\times {\mathbb {R}}^{n-1}_{x'}, \end{aligned}$$

which satisfies

$$\begin{aligned} u\in \bigcap _{(r,t_0,l,k,q)\in P} B^{l}_{p_2,\infty }({\mathbb {R}}_t;W^{k}_q({\mathbb {R}}_{+,x_n},|{\text {pr}}_n|^r;\;B^{t_0}_{p_1,p_1}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho }))). \end{aligned}$$

Moreover, for all \((r,t_0,l,k,q)\in P\) there is a constant \(C>0\) such that almost surely we have the estimate

$$\begin{aligned} \Vert u\Vert _{B^{l}_{p_2,\infty }({\mathbb {R}}_t;W^{k}_q({\mathbb {R}}_{+,x_n},|{\text {pr}}_n|^r;\;B^{t_0}_{p_1,p_1}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho })))}\le C \Vert \varphi \cdot \eta \Vert _{B^{s_2}_{p_2,\infty }({\mathbb {R}}_t;\;B^{s_1}_{p_1,p_1}({\mathbb {R}}^{n-1}_{x'},\langle \cdot \rangle ^{\rho }))}. \end{aligned}$$

Proof

This follows directly from combining Theorem 3.11 and [20, Theorem 6.4]. \(\square \)

Remark 4.4

  1. (a)

    The reason why we have to multiply \(\eta \) by a cutoff function in time is that we only have local results for the regularity in time of a space-time white noise. If there were global results with some weight in time, then we would be able to remove the cutoff function.

  2. (b)

    As in the elliptic case, we have \(u\in C^{\infty }({\mathbb {R}}\times {\mathbb {R}}^n_+)\) with certain singularities at the boundary. This time we have \(s_2\ge -1\) and \(s_1\ge 1-n\). Thus, if we want to determine a possible weight for the solution of the Dirichlet problem (i.e. \(j=0\)) to be in a weighted \(L_2\)-space, we can take \(k=t_0=0\), \(l>0\) and \(p_2=q=p_1=2\). The restriction \((r,t_0,l,k,q)\in P\) yields that if we take \(r>2n+1\), then \(u\in L_{2,{\mathrm{loc}}}({\mathbb {R}}\times \overline{{\mathbb {R}}^n_+},|{\text {pr}}_n|^r)\).
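Analogously to the elliptic helper above, the following Python snippet (illustrative only) evaluates the two thresholds defining the set P of Proposition 4.3 and reproduces the bound \(r>2n+1\) from Remark 4.4 (b).

def parabolic_weight_threshold(q, t0, l, k, j, s1, s2):
    # a tuple (r, t0, l, k, q) lies in P iff r > -1 and r exceeds both thresholds below
    thr1 = q * max(t0 + k - j - s1, 0.0) - 1.0
    thr2 = 2 * q * (l - s2) + q * (k - j) + q * max(t0 - s1, 0.0) - 1.0
    return max(thr1, thr2)

# Dirichlet heat equation as in Remark 4.4 (b): k = t0 = 0, l -> 0+, q = 2, s2 = -1, s1 = 1 - n
n = 3
print(parabolic_weight_threshold(q=2, t0=0, l=1e-9, k=0, j=0, s1=1 - n, s2=-1.0))
# approximately 2n + 1 = 7 for n = 3, matching the bound r > 2n + 1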