Sample paths of white noise in spaces with dominating mixed smoothness

The sample paths of white noise are proved to be elements of certain Besov spaces with dominating mixed smoothness. Unlike in isotropic spaces, here the regularity does not deteriorate with increasing space dimension. Consequently, white noise is actually much smoother than the known sharp regularity results in isotropic spaces suggest. An application of our techniques yields new results for the regularity of solutions of the Poisson and heat equations on the half space with boundary noise. The main novelty is the flexible treatment of the interplay between the singularity at the boundary and the smoothness in the tangential, normal and time directions.


Introduction
There are many works studying the regularity of different kinds of stochastic noise. Oftentimes, regularity results are formulated in terms of Besov spaces. Classical results on the Hölder regularity of sample paths of a Brownian motion have been improved by using Besov spaces and Besov-Orlicz spaces in [8,9]. Similar results have been obtained for Feller processes in [31,32,33], for a summary see [6, Section 5.5], and for Brownian motions with values in Banach spaces in [22]. Closely related to these works are characterizations of the Besov regularity of white noise. For a Gaussian white noise on the torus such characterizations are given in [45]. Lévy white noise on the torus was studied in [15]. Global regularity results for Gaussian and Lévy white noise are given in [4] and [13]. Most of these works have in common that the regularity results are shown to be sharp, up to some minor improvements in some of the references. For an n-dimensional Gaussian white noise it is shown, for example, that it has a smoothness of exactly or almost −n/2, but not more than −n/2, depending on the scale of isotropic function spaces. In particular, regularity seems to get worse with increasing dimension. The aim of this paper is to show that these results can be improved for Gaussian as well as Lévy white noise if one works with spaces of dominating mixed smoothness. Roughly speaking, the following result states that an n-dimensional Gaussian white noise has local smoothness −1/2 − ε separately in each direction, while previous results state that it has regularity −n/2 simultaneously in all directions.
In this theorem, the tensor product is defined as the closure of the algebraic tensor product with respect to the so-called p-nuclear tensor norm. We will explain these identifications later in this paper.
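The gain from mixed smoothness can be made plausible by a back-of-the-envelope computation on the Fourier side (our illustration, not part of the paper): the Fourier coefficients of a Gaussian white noise on the torus are i.i.d. standard Gaussians, so the expected squared norm in a weighted ℓ²-scale is the sum of the weights. With the isotropic weight (1 + |k|²)^s this sum is finite only for s < −n/2, while with the mixed weight Π_j (1 + k_j²)^{s_j} it is finite as soon as every s_j < −1/2. The sketch below compares truncated sums in dimension n = 2 at the exponent s = −1, which is below the per-direction threshold −1/2 but exactly at the isotropic borderline −n/2:

```python
import numpy as np

def isotropic_sum(N, s):
    # expected squared weighted norm in the isotropic scale, truncated:
    # sum over k in {-N,...,N}^2 of (1 + |k|^2)^s
    k = np.arange(-N, N + 1, dtype=float)
    k1, k2 = np.meshgrid(k, k)
    return np.sum((1.0 + k1**2 + k2**2) ** s)

def mixed_sum(N, s):
    # mixed-smoothness scale: sum of prod_j (1 + k_j^2)^s; the product
    # structure factorizes the double sum into the square of a 1-d sum
    k = np.arange(-N, N + 1, dtype=float)
    return np.sum((1.0 + k**2) ** s) ** 2

# per-direction exponent s = -1 < -1/2: the mixed sum stabilizes quickly
print(mixed_sum(200, -1.0), mixed_sum(400, -1.0))
# the same exponent is the isotropic borderline s = -n/2 = -1:
# the isotropic sum keeps growing (logarithmically) with N
print(isotropic_sum(200, -1.0), isotropic_sum(400, -1.0))
```

Doubling the truncation barely changes the mixed sum, while the isotropic sum visibly grows, mirroring the dichotomy −1/2 per direction versus −n/2 overall.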
If one component is viewed as time, then a white noise is also sometimes called space-time white noise. In this case, it can also be insightful to split space and time in the description of the smoothness. This way, we obtain that a Gaussian space-time white noise has smoothness −1/2 in time and −(n−1)/2 − ε in (the (n−1)-dimensional) space. More precisely, we have the following result: Theorem 1.2. Let 1 < p < ∞ and ε > 0. Then an n-dimensional Gaussian white noise on R^n has a modification η̃ such that Here, ⟨ξ⟩^{1−n−ε} := (1 + |ξ|²)^{(1−n−ε)/2} is a weight function. The interval [0, T] corresponds to the time direction, while R^{n−1} corresponds to the space direction.
Note that, compared to Theorem 1.1, we can include growth bounds in space this time. Theorem 1.2 can be useful if one studies parabolic partial differential equations driven by noise. We will illustrate this by deriving regularity results for the heat equation with Dirichlet and Neumann boundary noise. The main tool in previous works such as [1,7,10,36] for analyzing solutions of equations with boundary noise was power weights. These weights measure the distance to the boundary and are well suited to describe the singularities of solutions at the boundary. Our approach, however, adds more flexibility to the description of these singularities, as it allows one to treat regularity in time, tangential and normal directions separately. It will also enable us to analyze the behavior of solutions at the boundary in spaces of higher regularity. This paper is structured as follows: • In Section 2 we introduce weighted Besov spaces with dominating mixed smoothness, Lévy white noise and vector-valued Lévy processes and cite the most important results we need throughout the paper. While most of the results are well-known, it seems that the description of the dual spaces of Besov spaces with dominating mixed smoothness on the domain [0, T]^n given in Proposition 2.16 has not been available in the literature before. • Section 3 is the main part of this paper. Therein, we derive regularity results for Lévy white noise in spaces with dominating mixed smoothness. • As an application of some of our results, we derive new regularity properties of the solutions of the Poisson and heat equations with Dirichlet and Neumann boundary noise in Section 4.
Given a Banach space E we will write E′ for its topological dual. By D(R^n; E), S(R^n; E) and S′(R^n; E) we denote the spaces of E-valued test functions, E-valued Schwartz functions and E-valued tempered distributions, respectively. If E ∈ {R, C} then we will omit it in the notation. On S(R^n; E) we define the Fourier transform in the usual way. As usual, we extend it to S′(R^n; E) by [Fu](f) := u(Ff) for u ∈ S′(R^n; E) and f ∈ S(R^n). Given two topological spaces X, Y, we write X ↪ Y if there is a canonical continuous embedding. We write X ↪_d Y if the range of this embedding is dense in Y. If E₀ and E₁ are two locally convex spaces, then the space of continuous linear operators from E₀ to E₁ will be denoted by B(E₀, E₁). If E₀ = E₁, then we also write B(E₀). Throughout the paper, we will assume that (Ω, F, P) is a complete probability space.

Preliminaries
2.1. Weights. A weight w on R n is a function w : R n → [0, ∞] which takes values in (0, ∞) almost everywhere with respect to the Lebesgue measure. There are several interesting classes of weights one can consider.
(a) We say that w is an admissible weight if w ∈ C ∞ (R n ; (0, ∞)) with the following properties: (ii) There are two constants C > 0 and s ≥ 0 such that We write W(R n ) for the set of all admissible weights on R n .
The set of all A_p weights on R^n will be denoted by A_p(R^n). Moreover, we write A_∞(R^n) := ⋃_{1<p<∞} A_p(R^n). Such weights are also called Muckenhoupt weights.
The set of all A^loc_p weights on R^n will be denoted by A^loc_p(R^n). Moreover, we write A^loc_∞(R^n) := ⋃_{1<p<∞} A^loc_p(R^n). Such weights are also called local Muckenhoupt weights.
Remark 2.2. The class of local Muckenhoupt weights A^loc_∞(R^n) was introduced in [30] with the aim of unifying the Littlewood-Paley theories for function spaces with admissible weights and with Muckenhoupt weights. Accordingly, both classes above are contained in A^loc_∞(R^n). In this paper, we mainly work with weights of the form ⟨·⟩^ρ for some ρ ∈ R. It will be important for us to know to which class of weights this function belongs for different choices of ρ ∈ R.
(a) For all ρ ∈ R we have that ⟨·⟩^ρ ∈ W(R^n), i.e. ⟨·⟩^ρ is an admissible weight. This can either be computed directly, or one can use the following abstract arguments, which in turn are based on simple direct computations: For (2-1) one can recall that ⟨·⟩^ρ is the standard example of a so-called Hörmander symbol of order ρ, see for example [23, Chapter 2, §1, Example 2°]. Thus, we even have |D^α_ξ ⟨ξ⟩^ρ| ≤ C_{α,ρ} ⟨ξ⟩^{ρ−|α|}, which trivially implies (2-1). In (2-2) one can take C = 2^{|ρ|} and s = |ρ| by Peetre's inequality, see for example [29, Proposition 3.3.31]. (b) It holds that ⟨·⟩^ρ ∈ A_p(R^n) if and only if −n < ρ < (p − 1)n. Again, one can verify this directly, for example by a computation similar to [17, Example 9.1.7]. We also refer to [18, Example 1.3], where this has been observed for an equivalent weight. (c) It follows directly from part (b) that ⟨·⟩^ρ ∈ A_∞(R^n) if and only if ρ > −n. Definition 2.4. Let E be a Banach space, w: R^n → [0, ∞] a weight and 1 ≤ p < ∞. Then the weighted Lebesgue-Bochner space L_p(R^n, w; E) is defined as the space of all strongly measurable functions f: R^n → E such that the norm ‖w^{1/p} f‖_{L_p(R^n;E)} is finite, with the usual modification for p = ∞. As usual, functions which coincide on sets of measure 0 are considered equal.
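The admissibility estimate from part (a) with C = 2^{|ρ|} and s = |ρ| can be checked numerically. The sketch below (our illustration; the helper names `bracket` and `peetre_holds` are ours) samples points and verifies Peetre's inequality ⟨x⟩^ρ ≤ 2^{|ρ|} ⟨y⟩^ρ ⟨x − y⟩^{|ρ|}:

```python
import numpy as np

def bracket(x, rho):
    # Japanese bracket weight <x>^rho = (1 + |x|^2)^(rho/2), applied row-wise
    x = np.atleast_2d(x)
    return (1.0 + np.sum(x * x, axis=1)) ** (rho / 2.0)

def peetre_holds(x, y, rho):
    # Peetre's inequality: <x>^rho <= 2^|rho| <y>^rho <x - y>^|rho|,
    # i.e. admissibility of <.>^rho with C = 2^|rho| and s = |rho|
    lhs = bracket(x, rho)
    rhs = 2.0 ** abs(rho) * bracket(y, rho) * bracket(x - y, abs(rho))
    return bool(np.all(lhs <= rhs * (1.0 + 1e-12)))

rng = np.random.default_rng(0)
x = rng.normal(scale=10.0, size=(1000, 3))
y = rng.normal(scale=10.0, size=(1000, 3))
for rho in (-2.5, -1.0, 0.5, 3.0):
    # holds for positive and negative exponents alike
    assert peetre_holds(x, y, rho)
```

The optimal constant is in fact 2^{|ρ|/2}; the larger constant 2^{|ρ|} used in the text is therefore also verified by the check.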
Remark 2.5. For this work it is important to note that there are different conventions in the literature concerning the definition of weighted Lebesgue-Bochner spaces. Oftentimes, the expression ‖f‖_{L_p(R^n,w;E)} is defined by ‖wf‖_{L_p(R^n;E)}, whereas in our case it is defined by ‖w^{1/p}f‖_{L_p(R^n;E)}. Unfortunately, we will have to refer to some articles which use the one convention and to other articles which use the other. Thus, we will explicitly mention if a certain reference does not use the convention of Definition 2.4.
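The two conventions are related by a simple reparametrization of the weight: our norm with weight w equals the other convention with weight w^{1/p}. A tiny discrete sketch (our illustration):

```python
import numpy as np

def norm_our(f, w, p):
    # convention of Definition 2.4, discretized: ||w^(1/p) f||_{L^p}
    return np.sum(w * np.abs(f) ** p) ** (1.0 / p)

def norm_other(f, w, p):
    # the other common convention, discretized: ||w f||_{L^p}
    return np.sum(np.abs(w * f) ** p) ** (1.0 / p)

f = np.array([1.0, -2.0, 0.5])
w = np.array([1.0, 4.0, 9.0])
p = 2
# our norm with weight w equals the other convention with weight w^(1/p)
assert np.isclose(norm_our(f, w, p), norm_other(f, w ** (1.0 / p), p))
```

This is exactly why weight parameters must be multiplied by p when citing references with the other convention, as done repeatedly below.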

Weighted Function Spaces with Dominating Mixed Smoothness.
As general references for the theory of spaces with dominating mixed smoothness we would like to mention [34,43,46]. These spaces are mainly used in approximation theory. They can also be used to study boundary value problems with rough boundary data, see [20]. Our aim here is to derive sharper regularity results for the sample paths of white noise. In this section, let l ∈ N and d = (d₁, …, d_l) ∈ N^l with d₁ + ⋯ + d_l = n. Definition 2.6. (a) Let φ₀ ∈ D(R^n) be a smooth function with compact support such that 0 ≤ φ₀ ≤ 1 and φ₀ = 1 in a neighborhood of the origin. For ξ ∈ R^n and k ∈ N let further φ_k(ξ) := φ₀(2^{−k}ξ) − φ₀(2^{−k+1}ξ). We call such a sequence (φ_k)_{k∈N₀} a smooth dyadic resolution of unity and write Φ(R^n) for the space of all such sequences. (b) Let E be a Banach space. To a smooth dyadic resolution of unity (φ_k)_{k∈N₀} ∈ Φ(R^n) we associate the sequence of operators (S_k)_{k∈N₀} on the space of tempered distributions S′(R^n; E) by means of S_k f := F^{−1}(φ_k F f). The sequence (S_k f)_{k∈N₀} is called the dyadic decomposition of f. (c) For j ∈ {1, …, l} let (φ^{(j)}_{k_j})_{k_j∈N₀} ∈ Φ(R^{d_j}) be a smooth dyadic resolution of unity on R^{d_j}. Then we define the corresponding spaces with dominating mixed smoothness, where the tensor product is the closure of the unique tensor product on tempered distributions in the sense of [38, Lemma B.3] with respect to the p-nuclear tensor norm α_p, see [38, Appendix B]. For two Banach spaces E₁, E₂ the p-nuclear tensor norm is defined via the conjugated Hölder index p′, where the infimum is taken over all representations h = Σ_{j=1}^N x_j ⊗ y_j for N ∈ N, x₁, …, x_N ∈ E₁ and y₁, …, y_N ∈ E₂. (d) In a certain parameter range one can also view a Besov space with dominating mixed smoothness as a Besov space with values in another Besov space. Since it seems that this has not been formulated in the literature so far, we make this more precise in the following.
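A concrete smooth dyadic resolution of unity in the sense of Definition 2.6 (a) (our construction; the definition allows any bump with the stated properties) takes φ₀ = 1 on {|ξ| ≤ 1} with supp φ₀ ⊂ {|ξ| ≤ 2} and φ_k(ξ) = φ₀(2^{−k}ξ) − φ₀(2^{−k+1}ξ) for k ≥ 1; the sum then telescopes to Σ_{k=0}^K φ_k(ξ) = φ₀(2^{−K}ξ), which equals 1 for |ξ| ≤ 2^K. A one-dimensional numerical check:

```python
import numpy as np

def phi0(xi):
    # smooth cutoff: 1 on |xi| <= 1, 0 on |xi| >= 2, smooth transition between
    r = np.abs(np.asarray(xi, dtype=float))
    out = np.zeros_like(r)
    out[r <= 1.0] = 1.0
    mid = (r > 1.0) & (r < 2.0)
    t = r[mid] - 1.0  # lies strictly in (0, 1)
    g = np.exp(-1.0 / t) / (np.exp(-1.0 / t) + np.exp(-1.0 / (1.0 - t)))
    out[mid] = 1.0 - g
    return out

def phik(xi, k):
    # dyadic pieces: phi_k(xi) = phi0(2^{-k} xi) - phi0(2^{-k+1} xi), k >= 1
    return phi0(np.asarray(xi) / 2.0**k) - phi0(np.asarray(xi) / 2.0 ** (k - 1))

xi = np.linspace(-30.0, 30.0, 2001)
total = phi0(xi) + sum(phik(xi, k) for k in range(1, 8))
# partition of unity: the telescoping sum equals 1 on |xi| <= 2^7
assert np.allclose(total, 1.0)
```

Each φ_k is supported in the dyadic annulus {2^{k−1} ≤ |ξ| ≤ 2^{k+1}}, which is what makes the operators S_k a Littlewood-Paley decomposition.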
Theorem 2.9. Let E be a reflexive Banach space and l = 2. Then there are unique isomorphisms Proof. This is one of the kernel theorems from [3, Appendix, Theorem 1.8.9]. Proposition 2.10. Let E be a Banach space, s = (s₁, s₂) ∈ R² and let w_j: R^{d_j} → [0, ∞] (j = 1, 2) be weights. Suppose that w = w₁ ⊗ w₂ and that 1 < p < ∞. The mappings I₁, I₂ from Theorem 2.9 yield the following isomorphisms: . Remark 2.11. (a) In Theorem 2.9 and Proposition 2.10 we took l = 2 only for notational convenience. The same arguments also work for l ∈ {3, …, n}. (b) In this work, we frequently use the representation of Besov spaces with dominating mixed smoothness from Proposition 2.10. In the following, we omit the isomorphisms I₁ and I₂ in the notation and consider the spaces in Proposition 2.10 as equal.
Corollary 2.12. Let T > 0, l = n, s = (s 1 , . . . , s n ) ∈ R n and p ∈ [1, ∞). Then we have the isomorphisms . Proof. For [0, T ] being replaced by R these are the statements of Proposition 2.10 and Remark 2.8 (c). Thus, the assertion follows by composing the isomorphisms with a suitable extension operator and the restriction to [0, T ] n . Proposition 2.13. Let 1 < p, q < ∞, s ∈ R and let w : R n → (0, ∞) be an admissible weight. Let further p ′ , q ′ ∈ (1, ∞) be the conjugated Hölder indices of p and q, respectively. Then we have Proof. This result is taken from [35, Chapter 5.1.2]. Note however that therein, a different convention concerning the notation of weighted spaces is used. The space B s p,q (R n , w) in the notation of [35] corresponds to B s p,q (R n , w p ) in our notation. Note also that the weights being considered in [35] are even much more general than the admissible weights we consider here.
Lemma 2.14. Let 1 < p < ∞, l = n and s = (s 1 , . . . , s n ) ∈ R n . Then we have that Proof. Let E be a Banach space. It holds that the algebraic tensor product Repeating the same argument for S 0 ([0, T ] n−1 ; B s n p,p,0 ([0, T ]; E)) instead of S 0 ([0, T ] n ; E) and iterating it, we obtain the assertion.
Corollary 2.15. Let 1 < p < ∞, l = n and s = (s 1 , . . . , s n ) ∈ R n . Then we have that where the isomorphism is the same as in Corollary 2.12.
Proof. By iteration we define R n := S 0 ([0, T ]) and Then we have S 0 ([0, T ] n ) ⊂ R 1 so that it follows together with Lemma 2.14 that , where the closures are taken with respect to the topology of the iterated Besov space and thus, the assertion follows.
Proposition 2.16. Let 1 < p < ∞, p′ the conjugated Hölder index, l = n and s = (s₁, …, s_n) ∈ R^n. Then the duality stated above holds. Proof. It follows from Corollary 2.12 together with Corollary 2.15 that we can show the assertion on the level of iterated Besov spaces. Since these are defined by iteration, it suffices to show the assertion for the usual isotropic but vector-valued Besov spaces on [0, T], i.e. it suffices to show the corresponding duality on [0, T], where E is a reflexive Banach space. For these relations, we refer to [3, Chapter VII, Theorem 2.8.4] or [25, Theorem 11]. Even though the former reference considers different domains and the latter treats the scalar-valued situation, their extension-restriction methods also work in our setting.
Proof. Recall that ⟨·⟩^{ρ/p} is an admissible weight. This proposition actually holds for all admissible weights, see for example [42, Theorem 6.5]. Note that in this reference a different convention concerning the notation of weighted spaces is used.
Let further θ ∈ (0, 1). Then the displayed interpolation identity holds, where [·, ·]_θ denotes the complex interpolation functor. In particular, the stated special case follows. Proof. This is part of the statement of [37, Theorem 4.5].
In one proof, we also need Bessel potential spaces as a technical tool.
and endow it with the norm .
If l = 1 then we obtain the standard isotropic Bessel potential spaces, and we write H^s_p(R^n) instead.
2.3. Lévy White Noise. Now we briefly introduce Lévy white noise as a generalized random process and collect some of its known properties. In the following, ν will be a Lévy measure, i.e. a measure on R \ {0} such that ∫_{R\{0}} min{1, x²} dν(x) < ∞. Moreover, we take γ ∈ R and σ² ≥ 0. We call the triplet (γ, σ², ν) a Lévy triplet, and the corresponding function is called the Lévy exponent of the Lévy triplet (γ, σ², ν). Functions of the form exp ∘ Ψ for some Lévy exponent Ψ are exactly the characteristic functions of infinitely divisible random variables.
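For the reader's convenience, we recall the classical Lévy-Khintchine form of the exponent associated with a triplet (γ, σ², ν); this is the standard formula with truncation at |x| ≤ 1 (other truncation conventions merely shift γ):

```latex
\Psi(\xi) \;=\; i\gamma\xi \;-\; \frac{\sigma^2 \xi^2}{2}
\;+\; \int_{\mathbb{R}\setminus\{0\}}
\bigl( e^{i\xi x} - 1 - i\xi x\,\mathbf{1}_{\{|x|\le 1\}} \bigr)\,\mathrm{d}\nu(x),
\qquad \xi \in \mathbb{R}.
```

The Gaussian case corresponds to ν = 0, and the compound Poisson case to σ² = 0 with ν finite.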
We endow the space of tempered distributions S ′ (R n ) with the cylindrical σ-field B c (S ′ (R n )) generated by the cylindrical sets, i.e. sets of the form for some N ∈ N, ϕ 1 , . . . , ϕ N ∈ S (R n ) and some Borel set B ∈ B(R N ). We will also consider (S ′ (O), B c (S ′ (O))) for certain domains O ⊂ R n . We define this by restriction. More precisely, we write S 0 (O) for the closed subspace of S (R n ) which consists of functions with support in O.
and B_c(S′(O)) is the σ-field generated by sets of the analogous cylindrical form. This way, the mapping u ↦ u|_O is a measurable mapping. Definition 2.20. Let (Ω, F, P) be a probability space. A generalized random process s is a measurable function s: (Ω, F) → (S′(R^n), B_c(S′(R^n))). The pushforward measure P_s defined by P_s(B) := P(s^{−1}(B)) is called the probability law of s. Moreover, the characteristic functional P̂_s of s is defined by P̂_s(φ) := E[exp(i⟨s, φ⟩)]. We will write s(ω) for the tempered distribution at ω ∈ Ω and ⟨s, φ⟩ for the random variable which one obtains by testing s against the Schwartz function φ ∈ S(R^n).
In certain situations we also speak of a generalized random process if there only is a null set N ⊂ Ω such that the range of s|_{Ω\N} is a subset of S′(R^n). But since we assume our probability space to be complete, we may change every measurable mapping f: (Ω, F) → (M, A) into a measurable space (M, A) on arbitrary null sets without affecting measurability. Thus, for our purposes we can neglect the difference between a generalized random process and a mapping which is a generalized random process only after some change on a null set. This also applies to the following definition: Definition 2.21. Let s₁, s₂ be two generalized random processes. We say that s₂ is a modification of s₁ if P(⟨s₁, φ⟩ = ⟨s₂, φ⟩) = 1 for all φ ∈ S(R^n). Similar to Bochner's theorem for random variables, the Bochner-Minlos theorem gives a necessary and sufficient condition for a mapping C: S(R^n) → C to be the characteristic functional of a generalized random process. Theorem 2.22 (Bochner-Minlos). A mapping C: S(R^n) → C is the characteristic functional of a generalized random process if and only if C is continuous, C(0) = 1 and C is positive definite, i.e. for all N ∈ N, all z₁, …, z_N ∈ C and all φ₁, …, φ_N ∈ S(R^n) it holds that Σ_{j,k=1}^N z_j z̄_k C(φ_j − φ_k) ≥ 0. Remark 2.23. (a) The Bochner-Minlos theorem also holds if S(R^n) is replaced by another nuclear space, for example the space of test functions D(R^n). It seems that the Bochner-Minlos theorem was first formulated and proved in [24]. (b) An important example of a characteristic functional is given by C(φ) = exp(∫_{R^n} Ψ(φ(x)) dx) for a Lévy exponent Ψ. This is always a characteristic functional on the space of test functions D(R^n), see for example [16, Chapter III, Theorem 5]. However, this is not always true for the Schwartz space S(R^n). In fact, C is a characteristic functional on S(R^n) if and only if the corresponding noise has positive absolute moments, i.e. if there is an ε > 0 such that E[|X|^ε] < ∞, where X is an infinitely divisible random variable corresponding to the Lévy exponent Ψ.
We refer the reader to [12,Theorem 3] for the sufficiency and to [11] for the necessity.
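The positive-definiteness condition of Theorem 2.22 can be illustrated for the Gaussian characteristic functional C(φ) = exp(−½‖φ‖²_{L²}): after discretizing test functions to vectors, the matrix (C(φ_j − φ_k))_{j,k} is a Gaussian RBF kernel matrix and hence positive semidefinite. A numerical sketch (our illustration; the dimensions are arbitrary):

```python
import numpy as np

def C(phi):
    # Gaussian characteristic functional, discretized: exp(-||phi||^2 / 2)
    return np.exp(-0.5 * np.dot(phi, phi))

rng = np.random.default_rng(1)
phis = rng.normal(size=(6, 20))  # 6 "test functions" as vectors in R^20

# positive definiteness: M_{jk} = C(phi_j - phi_k) must be PSD,
# so that sum_{j,k} z_j conj(z_k) C(phi_j - phi_k) >= 0 for all z
M = np.array([[C(p - q) for q in phis] for p in phis])
eigs = np.linalg.eigvalsh(M)
assert eigs.min() > -1e-10
```

For real symmetric M, positive semidefiniteness over complex coefficients z is equivalent to nonnegativity of its eigenvalues, which is what the check verifies.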
Definition 2.24. Let (γ, σ², ν) be a Lévy triplet such that the corresponding infinitely divisible random variable has positive absolute moments. A Lévy white noise η: Ω → S′(R^n) with Lévy triplet (γ, σ², ν) is the generalized random process with characteristic functional C(φ) = exp(∫_{R^n} Ψ(φ(x)) dx), φ ∈ S(R^n), where Ψ is the Lévy exponent of (γ, σ², ν). If we speak of a Lévy white noise on a domain O ⊂ R^n, then we mean that it is given by η|_O for a Lévy white noise η on R^n.
Remark 2.25. From a modeling point of view, there are some minimum requirements on a random process for it to be called a white noise. Our white noise from Definition 2.24 does indeed satisfy the following: (a) A white noise is invariant under Euclidean motions in the sense that for f ∈ D(R^n) and a Euclidean motion A the random variables η(f) and η(f ∘ A) have the same distribution. This can for example be seen by comparing their characteristic functions; for the representation of the characteristic function, see for example [27, Theorem 2.7 (iv)]. (b) The random variables η(f) and η(g) are independent if f, g ∈ D(R^n) have disjoint supports. Indeed, if f, g have disjoint supports then Ψ(f + g) = Ψ(f) + Ψ(g) and therefore the characteristic functional factorizes. (c) If second moments exist, then we have the relation cov(η(f), η(g)) = ⟨f, g⟩_{L²(R^n)} (f, g ∈ D(R^n)).
It seems like this has not been stated in this form for Lévy white noise in the literature before. We therefore refer the reader to the author's Ph.D. thesis, [19,Proposition 3.33].
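Property (c) can be sanity-checked on a discretization (our illustration; the grid and sample sizes are arbitrary): on a grid with spacing h, model Gaussian white noise tested against f as η(f) ≈ √h Σ_i f(x_i) g_i with i.i.d. standard normals g_i, so that cov(η(f), η(g)) = h Σ_i f(x_i) g(x_i) ≈ ⟨f, g⟩_{L²}:

```python
import numpy as np

h = 0.01
x = np.arange(0.0, 1.0, h)              # grid on [0, 1)
f = np.sin(2 * np.pi * x)
g = np.sin(2 * np.pi * x) + np.cos(2 * np.pi * x)

rng = np.random.default_rng(2)
n_samples = 50_000
# rows: independent realizations of the discretized white noise
noise = rng.standard_normal((n_samples, x.size))
eta_f = np.sqrt(h) * noise @ f
eta_g = np.sqrt(h) * noise @ g

emp_cov = np.mean(eta_f * eta_g) - np.mean(eta_f) * np.mean(eta_g)
exact = h * np.dot(f, g)                # discrete <f, g>_{L^2}; here 1/2
assert abs(emp_cov - exact) < 0.03
```

The Monte Carlo error scales like n_samples^{−1/2}, so the empirical covariance matches the L² inner product up to a few thousandths here.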
Remark 2.26. By an approximation procedure it is possible to plug many more functions into a white noise than just test functions or Schwartz functions. For example, it is always possible to apply a Lévy white noise to elements of L²(R^n) with compact support. In particular, this includes indicator functions 1_A for bounded Borel sets A ∈ B(R^n), which is useful for the construction of a stochastic integral. The idea for the construction of such an integral goes back to [44] and was further refined in [27]. We also refer the reader to [14], in which the extension of the domain of definition is carried out in full detail. We will now briefly summarize the results we need in this work.
(a) The p-th order Rajput-Rosiński exponent Ψ_p of η is defined by and we endow it with the metric The elements of L^0(η) will be called η-integrable.
Proposition 2.28. Let η be a Lévy white noise with triplet (γ, σ², ν) and p ≥ 0. (d) Let f ∈ L^0(η). Then the characteristic function of ⟨η, f⟩ is again given by Proof. This is a collection of the statements given in [14]. Remark 2.29. (a) Elements of L²(R^n) with compact support are always contained in L^0(η). Moreover, S(R^n) is contained in L^0(η) if the white noise η admits positive absolute moments, see Remark 2.23. We also refer the reader to [14, Table 1], which contains a list of examples. For instance, in the Gaussian case we have L^0(η) = L²(R^n). The same holds if the Lévy triplet is given by (0, σ², ν) with ν being symmetric and having finite variance, see [14, Proposition 5.10]. If the Lévy triplet is given by (γ, 0, 0) (γ ≠ 0), then we have L^0(η) = L¹(R^n), and for (γ, σ², 0) (γ ≠ 0, σ² > 0) it is given by L¹(R^n) ∩ L²(R^n). (b) If one wants to work with paths of a Lévy white noise, then a characterization of the Besov regularity of these paths might be more useful than Proposition 2.28 in certain situations. Fortunately, a lot of work has already been done in this direction. For example, local regularity of Gaussian white noise has been studied in [45]. In [15] similar results have been obtained for Lévy white noise. Global smoothness properties of Lévy white noise in weighted spaces have been established in [4] and [13]. Results as in the latter two references will be important for the derivation of mixed smoothness properties. But before we can formulate them, we first need to introduce the Blumenthal-Getoor indices and the moment index of a Lévy white noise. In addition, the moment index is defined by In general it holds that 0 ≤ β_∞ ≤ β̄_∞ ≤ 2.

(b) Compound Poisson case:
Suppose that ν is a finite measure on B(R \ {0}) and that σ² = 0. Then it holds that P(η ∈ B^s_{p,p}(R^n, ⟨·⟩^ρ)) = 1 if s < n(1/p − 1) and ρ < −np/min{p, p_max}, and P(η ∉ B^s_{p,p}(R^n, ⟨·⟩^ρ)) = 1 if s ≥ n(1/p − 1) or ρ > −np/min{p, p_max}. (c) General non-Gaussian case: Suppose that ν ≠ 0 and that p ≤ 2 or p ∈ 2N. Then an analogous dichotomy holds. Proof. This is a collection of Propositions 6, 9 and 12 from [4]. Note that the authors of [4] use a different convention concerning the notation of weighted Besov spaces, so that the weight parameters in our formulation are multiplied by p compared to the formulation in [4].
(b) If one restricts the white noise η to a bounded set, for example [0, T]^n for some T > 0, then one can also drop the conditions on ρ. More precisely, in the Gaussian case, in the compound Poisson case, and in the general non-Gaussian case with p ∈ (1, ∞), the corresponding membership statements hold without the restrictions on ρ.

Lévy processes with values in a Banach space.
We briefly derive some results on the regularity of sample paths of Lévy processes with values in Banach spaces. While these results are most probably far from optimal, they allow us to apply our methods to Lévy white noise instead of just Gaussian white noise. Although our regularity results for Lévy white noise will not be sharp, we develop our methods in such a way that the results can be improved directly once properties like the ones in [6, Section 5.5] have been derived for Lévy processes in Banach spaces.

Regularity Properties in Spaces of Mixed Smoothness
Lemma 3.1. Let n = n₁ + n₂ with n₁, n₂ ∈ N and let s, t ∈ R^{n₁}, s ≤ t. Let η_n be a Lévy white noise on R^n with Lévy triplet (γ, σ², ν) and let η_{n₂} be a Lévy white noise on R^{n₂} with the same Lévy triplet. Then the mapping L^0(η_{n₂}) → L^0(η_n), φ ↦ 1_{(s,t]} ⊗ φ, is well-defined and continuous.
Proof. It suffices to show the assertion for n₁ = 1; the general assertion then follows by iteration. So let n₁ = 1. Then Lemma 3.2 shows that η_{(s,t]}(ω) is indeed a tempered distribution almost surely. Thus, after changing it on a set of measure 0, we can assume that it is an S′(R^{n−1})-valued mapping. For the measurability, it suffices to show that the preimages under η_{(s,t]} of cylindrical sets of the form C := {u ∈ S′(R^{n−1}) : (⟨u, φ₁⟩, …, ⟨u, φ_N⟩) ∈ B} for some N ∈ N and some open set B ⊂ R^N are elements of F. So let C be such a set. By Proposition 2.28 and Lemma 3.1, we can take sequences (ψ_{j,k})_{k∈N} ⊂ D(R^n), j = 1, …, N, such that ψ_{j,k} → 1_{(s,t]} ⊗ φ_j in L^0(η) as k → ∞. Hence, we have that ⟨η, ψ_{j,k}⟩ → ⟨η, 1_{(s,t]} ⊗ φ_j⟩ in probability as k → ∞. By taking subsequences, we may assume without loss of generality that the convergence also holds almost surely. Let K̃ ∈ F be the set on which there is no pointwise convergence and K := K̃ ∩ η_{(s,t]}^{−1}(C). Since the probability space is complete, it follows that K ∈ F. Now we define the sets B_l and C_l. Let further A_{k,l} := (⟨η, ψ_{1,k}⟩, …, ⟨η, ψ_{N,k}⟩)^{−1}(B_l) \ K ⊂ Ω. Note that A_{k,l} ∈ F for all k, l ∈ N since η is a generalized random process. By construction, we have that lim inf_{k→∞} A_{k,l} ∈ F and that it consists of all ω ∈ Ω such that (⟨η(ω), ψ_{1,k}⟩, …, ⟨η(ω), ψ_{N,k}⟩) → (⟨η(ω), 1_{(s,t]} ⊗ φ₁⟩, …, ⟨η(ω), 1_{(s,t]} ⊗ φ_N⟩) as k → ∞ and such that (⟨η(ω), ψ_{1,k}⟩, …, ⟨η(ω), ψ_{N,k}⟩) ∈ B_l for k ∈ N large enough. In particular, for ω ∈ lim inf_{k→∞} A_{k,l} it holds that (⟨η(ω), 1_{(s,t]} ⊗ φ₁⟩, …, ⟨η(ω), 1_{(s,t]} ⊗ φ_N⟩) ∈ B_l and thus lim inf_{k→∞} A_{k,l} ⊂ (⟨η_{(s,t]}, φ₁⟩, …, ⟨η_{(s,t]}, φ_N⟩)^{−1}(B_l) = η_{(s,t]}^{−1}(C_l). Together with (3-3) this yields one inclusion. For the converse inclusion, let ω ∈ η_{(s,t]}^{−1}(C), so that (⟨η(ω), ψ_{1,k}⟩, …, ⟨η(ω), ψ_{N,k}⟩) ∈ B_l for k large enough. Hence, it follows that ω ∈ A_{k,l} for k large enough and therefore ω ∈ lim inf_{k→∞} A_{k,l}. If in turn ω ∈ K̃, then ω ∈ K̃ ∩ η_{(s,t]}^{−1}(C) = K.
Hence, together with (3-4) it now follows that η_{(s,t]} is indeed a generalized random process. Finally, we show that η_{(s,t]} is even a Lévy white noise with Lévy triplet Leb_{n₁}((s, t]) · (γ, σ², ν) by simply computing its characteristic functional: Let Ψ be the Lévy exponent of η. Then we obtain the asserted form. Since f ↦ ⟨·⟩^{−ρ/p} f leaves S(R^n) invariant and since it is an isomorphism between B^{−s}_{p′,q′}(R^n, ⟨·⟩^{ρ(1−p′)}) and B^{−s}_{p′,q′}(R^n), it follows from the density of S(R^n) in B^{−s}_{p′,q′}(R^n) (see for example [40, Section 2.3.3]) that we have the dense embedding of S(R^n) into the weighted space. Therefore, there are sequences (φ_{k,l})_{l∈N} ⊂ S(R^n) with ‖φ_{k,l}‖_{B^{−s}_{p′,q′}(R^n, ⟨·⟩^{ρ(1−p′)})} = 1 such that φ_{k,l} → φ_k as l → ∞. Since N² is countable, we can rename the functions and obtain the asserted sequence (ψ_k)_{k∈N}. (b) The proof is almost the same as the one of part (a); one just has to use Proposition 2.16 instead of Proposition 2.13. (c) Since B^s_{p,q}(R^n, ⟨·⟩^ρ) is separable, its Borel σ-field is generated by the open balls. Hence, it suffices to show that for all f ∈ B^s_{p,q}(R^n, ⟨·⟩^ρ) and all r > 0 we have B(f, r) ∈ B_c(S′(R^n)). Now we use part (a), which yields a representation of B(f, r) by countably many cylindrical conditions. In the last step we used that a tempered distribution u₀ is an element of B^s_{p,q}(R^n, ⟨·⟩^ρ) if ‖u₀‖_{B^s_{p,q}(R^n, ⟨·⟩^ρ)} < ∞; by part (a) this is satisfied if the supremum over the functionals ψ_k is finite. This yields the assertion. (d) The proof is almost the same as the one of part (c). Lemma 3.5. Let η be a Lévy white noise. For t₂ ≥ t₁ ≥ 0 let again η_{(t₁,t₂]} := η(1_{(t₁,t₂]} ⊗ ·). Suppose that for all t ≥ 0 the mapping η_{(0,t]} takes values in the Besov space B^s_{p,q}(R^{n−1}, ⟨·⟩^ρ) for fixed parameters s, ρ ∈ R and 1 < p, q < ∞. Let B^s_{p,q}(R^{n−1}, ⟨·⟩^ρ) be endowed with its Borel σ-field. Then (η_{(0,t]})_{t≥0} is a B^s_{p,q}(R^{n−1}, ⟨·⟩^ρ)-valued stochastic process with stationary and independent increments.
The same assertion holds if η is restricted to [0, T]^n and if B^s_{p,q}(R^{n−1}, ⟨·⟩^ρ) is replaced by S^{s′}_{p,p}B([0, T]^{n−1}) for some s′ = (s₂, …, s_n) ∈ R^{n−1}.
Proof. (a) Let α be chosen as in the assertion. Then we have the stated embedding, so that η_{(0,t]} takes values in B^s_{p,q}(R^{n−1}, ⟨·⟩^α) for t ≥ 0. By Proposition 2.13, the dual space of B^s_{p,q}(R^{n−1}, ⟨·⟩^α) is given by B^{−s}_{p′,q′}(R^{n−1}, ⟨·⟩^{α(1−p′)}). It follows from Lemma 3.4 that there is a sequence (ψ_k)_{k∈N} ⊂ S(R^{n−1}) with ‖u‖_{B^s_{p,q}(R^{n−1}, ⟨·⟩^α)} = sup_{k∈N} |ψ_k(u)| and ‖ψ_k‖_{B^{−s}_{p′,q′}(R^{n−1}, ⟨·⟩^{α(1−p′)})} = 1. Using [13, Proposition 3] and the elementary embedding B^s_{p₁,q}(R^{n−1}) ↪ L_{p₁}(R^{n−1}) for s > 0, we also obtain that (ψ_k)_{k∈N} is bounded in L_{p₁}(R^{n−1}) with norms not larger than 1. Therefore, if t, t₀ ∈ [0, T], then (1_{(0,t₀]} − 1_{(0,t]}) ⊗ ψ_k goes to 0 uniformly in k ∈ N as t → t₀. But since we have the stated continuous embeddings, it follows that η((1_{(0,t₀]} − 1_{(0,t]}) ⊗ ψ_k) goes to 0 in probability uniformly in k ∈ N as t → t₀. Now Lemma 3.4 shows that η((1_{(0,t₀]} − 1_{(0,t]}) ⊗ ·) goes to 0 in probability with respect to the space B^s_{p,q}(R^{n−1}, ⟨·⟩^α) as t → t₀. Together with Lemma 3.5, this proves the assertion. (b) This can be shown with the same proof as part (a). One just has to replace B^s_{p,q}(R^{n−1}, ⟨·⟩^α) by B^r_{p,q}(R^{n−1}, ⟨·⟩^ρ) and L_{p₁}([0, T] × R^{n−1}) by L_{p₂}([0, T] × R^{n−1}). Except for these changes, the proof can be carried out in the same way. (c) Also this case can be carried out as in part (a). One just has to replace B^s_{p,q}(R^{n−1}, ⟨·⟩^α) by B^r_{p,q}(R^{n−1}, ⟨·⟩^α) and L_{p₁}([0, T] × R^{n−1}) by L_{p₁}([0, T] × R^{n−1}) ∩ L_{p₂}([0, T] × R^{n−1}). Of course, in this case both estimates on r and α have to be satisfied.
Proof. The proof is similar to that of Theorem 3.6. This time we use the stated identification, where ⊗_{α_p} denotes the tensor product with respect to the p-nuclear tensor norm, and where we used the stated embeddings if −max{s₂, …, s_n} − 1/p′ > ε − 1/p₁ and ε > 0. Here, the first embedding follows directly from the definitions. For the second embedding, we refer to [40]. These conditions are satisfied for many different kinds of Lévy white noise, see [13, Table 1]. Accordingly, Theorem 3.6 and Theorem 3.7 can be applied to them. As an example, we carry out the Gaussian case: Corollary 3.9. Consider the situation of Theorem 3.6 and suppose that the Lévy triplet is given by (0, 1, 0), so that we are in the Gaussian case. Then the process (η_{(0,t]})_{t≥0} has a modification that is a Brownian motion with values in B^s_{p,·}. By Remark 2.29 we know that L^0(η) = L²(R^n). Hence, if 1 < p ≤ 2 we can consider case (a) of Theorem 3.6 with p₁ = 2, and in this case ρ has to satisfy the first condition. If in turn 2 ≤ p < ∞, then we can use Theorem 3.6 (b) with p₂ = 2, so that we obtain the second condition. Altogether, we obtain the assertion. This extends again to the white noise η. Here, [∂₁ ⋯ ∂_{n₁} ⟨η_{(0,t]}, φ⟩](ψ) means that we apply the distributional derivatives of the trajectories of (⟨η_{(0,t]}, φ⟩)_{t≥0} to the test function ψ.
Remark 3.12. As in Remark 2.32 one can weaken the conditions on p in the non-Gaussian case of Theorem 3.11. More precisely, the assertion of the non-Gaussian case of Theorem 3.11 also holds if p_max ∈ 2N and p ∈ (1, ∞), or if p_max ∈ (N, N+2) and p ∈ (1, ∞) \ (N, N+2) for some N ∈ 2N.
Theorem 3.13. Let ε, T > 0, let η be a Lévy white noise restricted to [0, T]^n and let p ∈ (1, ∞). Let further l = n, i.e. the smoothness parameters of the spaces with dominating mixed smoothness are elements of R^n.
(a) The Gaussian case: There is a modification η̃ of η such that for any

Moreover, it holds that
The compound Poisson case: Let p ∈ (1, ∞) and 1 ≤ p₁ < p₂ < ∞ be such that the integrability conditions above hold. Let further t ≤ −1, with t < −1 if p < 2, and s < 1/p − 1. Then η has a modification η̃ such that P(η̃ ∈ S^{(t,…,t,s)}_{p,p}B) = 1, and with Corollary 2.12 we obtain the assertion for n = 2. For general n ∈ N we iterate the same argument using Theorem 3.7.
In this section, we sometimes add subscripts to the domains of function spaces in order to indicate with respect to which variables the spaces should be understood. For example, we write B^{s₁}_{p₁,q₁}(R_t; B^{s₂}_{p₂,q₂}(R_{+,x_n}; B^{s₃}_{p₃,q₃}(R^{n−1}_{x′}))), where R_t corresponds to the time direction, R_{+,x_n} to the normal direction and R^{n−1}_{x′} to the tangential directions. R^n_{+,x} will refer to the space directions.
Remark 4.4. (a) The reason why we have to multiply η with a cutoff function in time is that we only have local results for the regularity in time of a space-time white noise. If there were global results with some weight in time, then we would be able to remove the cutoff function. (b) As in the elliptic case, we have u ∈ C^∞(R × R^n_+) with certain singularities at the boundary. This time we have s₂ ≥ −1 and s₁ ≥ 1 − n. Thus, if we want to determine a possible weight for the solution of the Dirichlet problem (i.e. j = 0) to be in a weighted L²-space, we can take k = t₀ = 0, l > 0 and p₂ = q = p₁ = 2. The restriction (r, t₀, l, k, q) ∈ P yields that if we take r > 2n + 1, then u ∈ L_{2,loc}(R × R^n_+, |pr_n|^r).