Subordinated Gaussian Random Fields

Motivated by the subordinated Brownian motion, we define a new class of (in general discontinuous) random fields on higher-dimensional parameter domains: the subordinated Gaussian random field. We investigate the pointwise marginal distribution of the constructed random fields, derive a Lévy-Khinchin-type formula and semi-explicit formulas for the covariance function. Further, we study the pointwise stochastic regularity and validate our theoretical findings in various numerical examples.

Fig. 1: Some exemplary fields.

Preliminaries.
In this section we give a short introduction to Lévy processes and Gaussian random fields as a basis for the construction of subordinated Gaussian random fields. Throughout the paper, let (Ω, F, P) be a complete probability space.

Lévy processes.
Let T ⊆ R_+ := [0, +∞) be an arbitrary time domain. A stochastic process X = (X(t), t ∈ T) on T is a family of random variables on the probability space (Ω, F, P).
Theorem 2.2 (Lévy-Khinchin formula, see [3, Th. 1.3.3]). Let l be a real-valued Lévy process on T ⊂ R_+ := [0, +∞). There exist a drift parameter b ∈ R, a noise parameter σ_N^2 ∈ R_+ and a Lévy measure ν on (R, B(R)) such that the characteristic function φ_{l(t)}, for t ∈ T, admits the representation
\[
\varphi_{l(t)}(\xi) := \mathbb{E}\big(\exp(i\xi l(t))\big) = \exp\Big(t\Big(ib\xi - \frac{\sigma_N^2\xi^2}{2} + \int_{\mathbb{R}\setminus\{0\}} \big(e^{i\xi y} - 1 - i\xi y\,\mathbf{1}_{\{|y|\le 1\}}(y)\big)\,\nu(dy)\Big)\Big), \quad \xi \in \mathbb{R}.
\]
It follows from this theorem that every Lévy process is fully characterized by the so-called Lévy triplet (b, σ_N^2, ν). Hence, when extending the concept of a Lévy process to a higher-dimensional parameter domain, one important goal is to obtain an analogous characterization property.
Within the class of Lévy processes there exists a subclass given by the so-called subordinators: a Lévy subordinator on T ⊂ R_+ is a Lévy process that is non-decreasing P-almost surely. The characteristic function of a Lévy subordinator l admits the form
\[
\varphi_{l(t)}(\xi) = \mathbb{E}\big(\exp(i\xi l(t))\big) = \exp\Big(t\Big(i\gamma\xi + \int_0^\infty \big(e^{i\xi y} - 1\big)\,\nu(dy)\Big)\Big), \quad \xi \in \mathbb{R}, \tag{2.1}
\]
for t ∈ T (see [3, Theorem 1.3.15]). Here, ν is the Lévy measure of l and the constant γ does not, in general, coincide with the constant b in the Lévy-Khinchin formula.
In the following, we always mean the triplet (γ, 0, ν) corresponding to representation (2.1) if we refer to the characteristic triplet of a Lévy subordinator.
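For intuition, a Lévy subordinator such as the Gamma process can be simulated directly from its independent, stationary increments. The following sketch is purely illustrative and not part of the construction above; the parameter names a_g, b_g follow the Gamma(a_G, b_G) convention used later in Section 7:

```python
import numpy as np

def gamma_subordinator(a_g, b_g, T=1.0, n=1000, rng=None):
    """Sample a Gamma(a_g, b_g) subordinator on [0, T] at n+1 grid points.

    Increments over a step dt are Gamma(a_g * dt, b_g)-distributed
    (shape a_g * dt, rate b_g), hence non-negative, so the path is
    non-decreasing, as required for a subordinator.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    increments = rng.gamma(shape=a_g * dt, scale=1.0 / b_g, size=n)
    return np.concatenate(([0.0], np.cumsum(increments)))

path = gamma_subordinator(4.0, 12.0, rng=np.random.default_rng(0))
assert np.all(np.diff(path) >= 0.0)  # subordinator paths are non-decreasing
```

Since l(t) ~ Gamma(a_g t, b_g), the mean of the endpoint l(T) is a_g T / b_g, which gives an easy sanity check on the sampler.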
Note that every Gaussian random field is determined uniquely by its mean and covariance function. We denote by Q : L^2(D) → L^2(D) the covariance operator of W, which is defined by
\[
(Q\psi)(x) := \int_D q(x, y)\,\psi(y)\,dy, \quad x \in D,
\]
for ψ ∈ L^2(D). Here, L^2(D) denotes the set of all square integrable functions over D. Further, if D is compact, there exists a decreasing sequence (λ_i, i ∈ N) of real eigenvalues of Q with corresponding eigenfunctions (e_i, i ∈ N) ⊂ L^2(D) which form an orthonormal basis of L^2(D) (see [1, Section 3.2] and [27, Theorem VI.3.2 and Chapter II.3]). The GRF W is called stationary if the mean function μ is constant and the covariance function q(x^{(1)}, x^{(2)}) only depends on the difference x^{(1)} − x^{(2)} of the values x^{(1)}, x^{(2)} ∈ D (see [1], p. 102).
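The spectral decomposition of Q can be illustrated numerically: discretizing a covariance kernel on a grid yields a symmetric positive semi-definite matrix whose eigenvalues approximate the sequence (λ_i, i ∈ N). This sketch uses the exponential kernel q(x, y) = exp(−|x − y|) on D = [0, 1] as an illustrative choice, not a kernel from this paper:

```python
import numpy as np

# Discrete analogue of the covariance operator Q on D = [0, 1]:
# (Q psi)(x) = int_D q(x, y) psi(y) dy, approximated by a quadrature-
# weighted kernel matrix for q(x, y) = exp(-|x - y|).
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
Q = np.exp(-np.abs(x[:, None] - x[None, :])) * h

# eigvalsh returns ascending eigenvalues; reverse for a decreasing sequence
eigvals = np.linalg.eigvalsh(Q)[::-1]
assert eigvals[-1] > 0.0                 # strictly positive definite kernel
assert np.all(np.diff(eigvals) <= 0.0)   # decreasing sequence (lambda_i)
```

The sum of the discrete eigenvalues equals the trace of the weighted kernel matrix, mirroring the trace-class property of Q.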
3. The subordinated Gaussian random field. Throughout the rest of this paper, let d ∈ N be a natural number with d ≥ 2 and T_1, …, T_d > 0 be positive values. We define the horizon vector T := (T_1, …, T_d) and consider the spatial domain [0, T]^d := [0, T_1] × ⋯ × [0, T_d] ⊂ R^d. After a short motivation we define the subordinated field and show that it is indeed measurable.
3.1. Motivation: the subordinated Brownian motion. In order to motivate the novel subordination approach for the extension of Lévy processes, we briefly recall the main ideas of the subordinated Brownian motion, which is defined as a Lévy-time-changed Brownian motion: let B = (B(t), t ∈ R_+) be a Brownian motion and l = (l(t), t ∈ R_+) be a subordinator. The subordinated Brownian motion is then defined to be the process L(t) := B(l(t)), t ∈ R_+.
It follows from [3, Theorem 1.3.25] that the process L is again a Lévy process. Note that the class of subordinated Brownian motions is a rich class of processes with great distributional flexibility. For example, the well-known Generalized Hyperbolic Lévy process can be represented as a NIG-subordinated Brownian motion (see [4] and especially Lemma 4.1 therein).
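The time change is easy to simulate: conditionally on a subordinator increment Δl, the increment of L = B(l(⋅)) is N(0, Δl)-distributed. A minimal sketch with a Gamma subordinator (the parameters are illustrative, not from the paper):

```python
import numpy as np

def subordinated_bm(n=1000, T=1.0, a_g=4.0, b_g=12.0, rng=None):
    """Sample L(t) = B(l(t)) on a grid, with l a Gamma(a_g, b_g) subordinator.

    Both B and l have independent increments, so B(l(t_k)) is built from
    increments B(l(t_k)) - B(l(t_{k-1})) ~ N(0, l(t_k) - l(t_{k-1})).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    dl = rng.gamma(shape=a_g * dt, scale=1.0 / b_g, size=n)  # subordinator increments
    dB = rng.normal(0.0, np.sqrt(dl))                        # conditionally Gaussian
    return np.concatenate(([0.0], np.cumsum(dB)))

path = subordinated_bm(rng=np.random.default_rng(0))
```

As a sanity check, L(T) is centered with variance E(l(T)) = a_g T / b_g, by conditioning on the subordinator.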
3.2. The definition of the subordinated Gaussian random field. Let W = (W(x), x = (x_1, …, x_d) ∈ R_+^d) be a GRF such that W is F ⊗ B(R_+^d) − B(R)-measurable. We denote by μ : R_+^d → R the mean function and by q : R_+^d × R_+^d → R the covariance function of W. Let l_k = (l_k(x), x ∈ [0, T_k]) be independent Lévy subordinators with triplets (γ_k, 0, ν_k), for k ∈ {1, …, d}, corresponding to representation (2.1). Further, we assume that the Lévy subordinators are stochastically independent of the GRF W. We consider the random field
\[
L(x) := W(l_1(x_1), \dots, l_d(x_d)), \quad x = (x_1, \dots, x_d) \in [0, T]^d,
\]
and call it subordinated Gaussian random field (subordinated GRF).
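To make the construction concrete, the following toy sketch composes a Brownian sheet (a jointly measurable, continuous GRF, used here purely for illustration; it is not the restricted setting of Section 4) with two Gamma subordinator paths on a grid:

```python
import numpy as np

def brownian_sheet(grid, rng):
    """Sample a Brownian sheet W on grid x grid (Var W(s, t) = s * t)."""
    d = np.diff(grid)
    incs = rng.normal(size=(d.size, d.size)) * np.sqrt(np.outer(d, d))
    W = np.zeros((grid.size, grid.size))
    W[1:, 1:] = incs.cumsum(axis=0).cumsum(axis=1)
    return W

rng = np.random.default_rng(0)
n, dt = 200, 1.0 / 200
# Gamma(4, 12) subordinator paths l1, l2 on [0, 1]
l1 = np.concatenate(([0.0], rng.gamma(4.0 * dt, 1.0 / 12.0, n).cumsum()))
l2 = np.concatenate(([0.0], rng.gamma(4.0 * dt, 1.0 / 12.0, n).cumsum()))

# evaluate W on a grid covering the subordinator ranges (grid lookup)
grid = np.linspace(0.0, max(l1[-1], l2[-1]) + 1e-9, 400)
W = brownian_sheet(grid, rng)
i = np.clip(np.searchsorted(grid, l1), 0, grid.size - 1)
j = np.clip(np.searchsorted(grid, l2), 0, grid.size - 1)
L = W[np.ix_(i, j)]  # L(x, y) = W(l1(x), l2(y)) on the (x, y) grid
```

The jumps of l1 and l2 produce the discontinuities of L along coordinate directions, which is exactly the qualitative behaviour described above.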
Remark 3.1. Note that assuming that W has P-almost surely continuous paths is sufficient to ensure that W is a jointly measurable function, since W is a Carathéodory function in this case (see [2, Lemma 4.51]). A sufficient condition for the pathwise continuity of GRFs is given, for example, by [1, Theorem 1.4.1] (see also the discussion in [1, Section 1.3, p. 13]). A specific example of a class of GRFs with at least continuous samples is given by the Matérn GRFs: for a given smoothness parameter ν > 1/2, a correlation parameter r > 0 and a variance parameter σ^2 > 0, the Matérn-ν covariance function on R^d is given by
\[
q_M(x, y) = \sigma^2\,\frac{2^{1-\nu}}{\Gamma(\nu)}\Big(\frac{\sqrt{2\nu}\,\|x - y\|_2}{r}\Big)^{\nu} K_\nu\Big(\frac{\sqrt{2\nu}\,\|x - y\|_2}{r}\Big),
\]
where Γ(⋅) is the Gamma function and K_ν(⋅) is the modified Bessel function of the second kind (see [14, Section 2.2 and Proposition 1]). Here, ‖⋅‖_2 denotes the Euclidean norm on R^d. A Matérn-ν GRF is a centered GRF with covariance function q_M.
3.3. Measurability. In Subsection 3.2 we introduced the subordinated GRF L as a random field. Strictly speaking, we therefore have to verify that point evaluations of the field L are random variables, meaning that we have to ensure measurability of these objects. Note that this is not trivial since, due to the construction of L, the Lévy subordinators induce an additional ω-dependence in the spatial direction of the GRF W. The following lemmas prove measurability of point evaluations of L and spatial measurability of the field if we consider it as a (stochastically parametrized) space-dependent function.
Lemma 3.2. For every fixed x = (x_1, …, x_d) ∈ [0, T]^d, the point evaluation L(x) : Ω → R is an F − B(R)-measurable random variable.

Proof. The mapping ω ↦ l_k(x_k, ω) is F − B(R_+)-measurable for every k = 1, …, d and the identity mapping id_Ω : Ω → Ω is F − F-measurable. Hence, we obtain by [2, Lemma 4.49] that the mapping ω ↦ (l_1(x_1, ω), …, l_d(x_d, ω), ω) is measurable and, therefore, so is its composition with the jointly measurable field W, which is the point evaluation L(x).

If we further assume continuity of the GRF W in the spatial variable, we obtain measurability of the subordinated GRF L in x ∈ [0, T]^d, which allows us to take spatial integrals of the field.

Lemma 3.3. We consider the subordinated GRF L = W(l_1(⋅), …, l_d(⋅)) and assume that the underlying GRF W has P-almost surely continuous paths. For almost all ω ∈ Ω, the mapping x ↦ L(x, ω) is B([0, T]^d) − B(R)-measurable.

Proof. For any k ∈ {1, …, d}, the Lévy process l_k has P-almost surely càdlàg paths and, hence, the mapping x_k ↦ l_k(x_k, ω) is measurable (see [24, Chapter 1, Theorem 30]). Next, we consider domain-extended versions of the processes for a fixed ω ∈ Ω: for any k ∈ {1, …, d} and any x = (x_1, …, x_d) ∈ [0, T]^d, we define the function l̄_k(x, ω) := l_k(x_k, ω), which is measurable in x by [2, Lemma 4.51]. An application of [2, Lemma 4.49] then yields the measurability of the mapping x ↦ (l̄_1(x, ω), …, l̄_d(x, ω)). By assumption, W(ω) has continuous paths and, hence, it is B(R_+^d) − B(R)-measurable. Therefore, the mapping x ↦ L(x, ω) is measurable as a composition of measurable mappings.

4. The pointwise distribution of the subordinated GRF and the Lévy-Khinchin formula. In this section we prove a Lévy-Khinchin-type formula for the subordinated GRF in order to have access to the pointwise distribution. This is important, for example, in view of statistical fitting and other applications (see Section 7). In order to be able to do so, we need the following technical lemma about the expectation of the composition of independent random variables. The assertion and its proof are a generalization of the corresponding assertion given in the proof of [25, Theorem 30.1].
Lemma 4.1. Let W = (W(x), x ∈ R_+^d) be a P-a.s. continuous random field and let Z : Ω → R_+^d be an R_+^d-valued random variable which is independent of the random field W. Further, let g : R → R be a deterministic, continuous function. Whenever the expectations exist, it holds that
\[
\mathbb{E}\big(g(W(Z))\big) = \mathbb{E}\big(m(Z)\big),
\]
where m(z) := E(g(W(z))) for deterministic z ∈ R_+^d.

Proof.
Step 1: Assume that Z takes only a countable number of values in R_+^d and that g is globally bounded. In this case there exists a family of vectors (z_i, i ∈ N) ⊂ R_+^d such that ∑_{i∈N} P(Z = z_i) = 1. Splitting the expectation over the events {Z = z_i} and using the boundedness of g, we obtain a summable bound on the partial sums for all n ∈ N; by the monotone convergence theorem, we may pass to the limit. Therefore, we can apply the dominated convergence theorem together with the independence of Z and W to calculate
\[
\mathbb{E}\big(g(W(Z))\big) = \sum_{i\in\mathbb{N}} \mathbb{E}\big(g(W(z_i))\,\mathbf{1}_{\{Z = z_i\}}\big) = \sum_{i\in\mathbb{N}} m(z_i)\,\mathbb{P}(Z = z_i) = \mathbb{E}\big(m(Z)\big).
\]
Step 2: In this step, we do not impose any assumptions on the range of the random vector Z, but we still assume that the function g is globally bounded on R and, therefore, the function m is also globally bounded. We write Z = (Z^{(1)}, …, Z^{(d)}) and define discrete approximations Z_n^{(i)} of the components, for i = 1, …, d. By construction we obtain the pointwise convergence Z_n^{(i)} → Z^{(i)} P-a.s. for n → ∞ for every component i = 1, …, d. Further, since g is continuous and W has P-a.s. continuous paths, we obtain the P-a.s. convergence of g(W(Z_n)) to g(W(Z)) and of m(Z_n) to m(Z). Here, the continuity of m follows from the boundedness and the continuity of g. Using the boundedness of the functions g and m and the dominated convergence theorem together with the first step, we obtain the assertion for bounded g.

Step 3: In this step we assume that g(x) ≥ 0 on R, but g does not necessarily have to be bounded. It follows that m is also non-negative on R_+^d. For a fixed threshold A > 0, we define the cut function χ_A : R → R by χ_A(x) := min{x, A}. Since g and m are non-negative, we obtain the P-a.s. monotone convergence χ_A(g(W(Z))) → g(W(Z)) for A → +∞. We define m_A(z) := E(χ_A(g(W(z)))), for z ∈ R_+^d, and obtain by the monotone convergence theorem m_A(Z) → m(Z) P-a.s. for A → +∞.
Step 4: Finally, we consider an arbitrary continuous function g : R → R. We write g_+ = max{0, g}, g_− = −min{0, g} as well as m̄_+(z) = E(g_+(W(z))), m̄_−(z) = E(g_−(W(z))) for z ∈ R_+^d and obtain the additive decompositions g(x) = g_+(x) − g_−(x) for x ∈ R and m(z) = m̄_+(z) − m̄_−(z) for z ∈ R_+^d by the additivity of the integral with respect to the integration domain. We apply Step 3 to g_+ and g_−, which proves the assertion.
Remark 4.2. Note that, following Steps 1 and 2 of the proof of Lemma 4.1, one obtains that the assertion of the lemma also holds for complex-valued, bounded, continuous and deterministic functions g : R → C.
An application of Lemma 4.1 gives the following semi-explicit formula for the characteristic function of a subordinated GRF.
Corollary 4.3. For ξ ∈ R and any fixed point x = (x_1, …, x_d) ∈ [0, T]^d, the characteristic function of the subordinated GRF L admits the representation
\[
\varphi_{L(x)}(\xi) = \mathbb{E}\Big(\exp\Big(i\xi\,\mu\big(l_1(x_1), \dots, l_d(x_d)\big) - \frac{\xi^2}{2}\,\sigma_W^2\big(l_1(x_1), \dots, l_d(x_d)\big)\Big)\Big),
\]
where σ_W^2(x) := q(x, x) denotes the pointwise variance of W.

Proof. The GRF W is pointwise normally distributed with parameters specified by the functions μ and σ_W^2. More precisely, for a fixed point x ∈ R_+^d, the characteristic function of W admits the form φ_{W(x)}(ξ) = exp(iξμ(x) − ξ²σ_W²(x)/2) for ξ ∈ R (see [17, Satz 15.12]). The assertion then follows by an application of Lemma 4.1 together with Remark 4.2.
In the one-dimensional case, the Lévy-Khinchin formula gives an explicit representation of the characteristic function of a Lévy process. This representation also applies to the subordinated Brownian motion, since it is itself a Lévy process (see Subsection 3.1). Note that in the construction of the subordinated Brownian motion one cannot replace the Brownian motion by a general one-parameter GRF on R_+ without losing the validity of the Lévy-Khinchin formula. Hence, in the case of the subordinated GRF on a higher-dimensional parameter space, it is natural that we have to restrict the class of admissible GRFs in order to obtain a Lévy-Khinchin-type formula which is the d-dimensional analogue of Theorem 2.2. We recall that the characteristic function of a standard Brownian motion B is given by
\[
\varphi_{B(t)}(\xi) = \exp\Big(-\frac{t\xi^2}{2}\Big), \quad \xi \in \mathbb{R},
\]
for t ≥ 0. Note that the Brownian motion is not characterized by this property, i.e. not every zero-mean GRF on R_+ with the above characteristic function is a Brownian motion, since this specific characteristic function can be attained by different covariance functions, whereas the covariance function of the Brownian motion is given uniquely by q_BM(s, t) = Cov(B(s), B(t)) = min{s, t} for s, t ≥ 0 (see for example [26, Section 3.2.2]). Motivated by this, we impose the following assumptions on the GRF on R_+^d.
Assumption 4.4. Let W = (W(x), x ∈ R_+^d) be a zero-mean continuous GRF. We assume that there exists a constant σ > 0 such that the characteristic function of W is given by
\[
\varphi_{W(x)}(\xi) = \exp\Big(-(x_1 + \dots + x_d)\,\frac{\sigma^2\xi^2}{2}\Big),
\]
for ξ ∈ R and every x = (x_1, …, x_d) ∈ R_+^d.

Remark 4.5. Note that for a zero-mean, continuous and stationary GRF W̃ on R_+^d with pointwise variance σ^2 > 0, the GRF defined by W(x) := (x_1 + ⋯ + x_d)^{1/2} W̃(x), for x = (x_1, …, x_d) ∈ R_+^d, satisfies Assumption 4.4.

We are now able to derive the Lévy-Khinchin formula for the subordinated GRF.
Theorem 4.6 (Lévy-Khinchin formula). Let Assumption 4.4 hold. We assume independent Lévy subordinators l_k = (l_k(x), x ∈ [0, T_k]), with Lévy triplets (γ_k, 0, ν_k), for k = 1, …, d, are given corresponding to representation (2.1). Further, we assume that these processes are independent of the GRF W. We consider the subordinated GRF defined by L(x) := W(l_1(x_1), …, l_d(x_d)), for x = (x_1, …, x_d) ∈ [0, T]^d. The characteristic function of the random field L is then given by
\[
\varphi_{L(x)}(\xi) = \exp\Big(-\Big(\sum_{k=1}^d \gamma_k x_k\Big)\frac{\sigma^2\xi^2}{2} + \int_{\mathbb{R}\setminus\{0\}}\big(e^{i\xi y} - 1\big)\,\nu_{ext}(dy)\Big), \quad \xi \in \mathbb{R}.
\]
Here, the jump measure ν_ext is defined through
\[
\nu_{ext}([a, b]) := \sum_{k=1}^d x_k\,\nu_k^{\#}([a, b]),
\]
for a, b ∈ R, where the measure ν_k^{\#}, for k = 1, …, d and a, b ∈ R, is given by
\[
\nu_k^{\#}([a, b]) := \int_0^\infty \int_a^b \frac{1}{\sqrt{2\pi\sigma^2 s}}\,\exp\Big(-\frac{y^2}{2\sigma^2 s}\Big)\,dy\,\nu_k(ds).
\]
Proof. For notational simplicity we prove the assertion for d = 2. For general d ∈ N the assertion follows by the same arguments.
Claim 1: For a Lévy measure ν on (R_+, B(R_+)) it holds for every ξ ∈ R that
\[
\int_0^\infty \big(e^{-\sigma^2\xi^2 s/2} - 1\big)\,\nu(ds) = \int_{\mathbb{R}\setminus\{0\}}\big(e^{i\xi y} - 1\big)\,\nu^{\#}(dy).
\]
To see this, denote by φ_{0,σ²s}(x) := (2πσ²s)^{−1/2} exp(−x²/(2σ²s)), for s > 0 and x ∈ R, the density of a N(0, σ²s)-distributed random variable, and derive this equation by a direct calculation using the definition of the measure ν^{\#}. In the last step we used that the characteristic function of a N(0, s)-distributed random variable is given by ξ ↦ exp(−sξ²/2), for s ≥ 0 and ξ ∈ R.
With these two assertions at hand we can now prove the Lévy-Khinchin formula. The case ξ = 0 is trivial since both sides equal 1 in this case. Let (x, y) ∈ [0, T]^2 and 0 ≠ ξ ∈ R be fixed. Using Lemma 4.1 and Remark 4.2 with g(⋅) = exp(iξ⋅) and Z = (l_1(x), l_2(y)), we calculate the characteristic function of L(x, y), where we use Assumption 4.4. Therefore, using the independence of the processes l_1 and l_2 together with Claim 2, we obtain the asserted representation, where the measures ν_k^{\#} for k = 1, 2 are given as above for a, b ∈ R. This finishes the proof.
Corollary 4.7. Let the assumptions of Theorem 4.6 hold. Further, we assume that independent Lévy processes l̃_k on [0, T_k] are given with triplets (0, σ²γ_k, ν_k^{\#}), for k = 1, …, d, in the sense of the one-dimensional Lévy-Khinchin formula, see Theorem 2.2. Here, the Lévy measure ν_k^{\#} is defined as in Theorem 4.6, for k = 1, …, d and a, b ∈ R. The pointwise marginal distribution of the subordinated GRF satisfies
\[
L(x) \stackrel{\mathcal{D}}{=} \tilde{l}_1(x_1) + \dots + \tilde{l}_d(x_d), \quad x = (x_1, \dots, x_d) \in [0, T]^d.
\]
Proof. By Theorem 4.6, the characteristic function of L at a fixed point factorizes into the product of the characteristic functions of the independent random variables l̃_1(x_1), …, l̃_d(x_d). The assertion then follows by the convolution theorem.
We point out that the case of stationary GRFs is excluded by Assumption 4.4. Therefore, we consider this situation in the following remark, where we again assume d = 2 for notational simplicity.
Remark 4.8. Let W̃ be a zero-mean, continuous, stationary GRF on R_+^2 with covariance function q̃ and pointwise variance σ² := q̃((0, 0)) > 0. Let l_1 and l_2 be independent Lévy subordinators, which are also independent of W̃. We obtain by Lemma 4.1 the representation φ_{L(x,y)}(ξ) = E(m(l_1(x), l_2(y))) for the characteristic function of the subordinated random field L(x, y) := W̃(l_1(x), l_2(y)), where m(x', y') := E(exp(iξ W̃(x', y'))) = exp(−σ²ξ²/2) is a constant function in (x', y'). Therefore we obtain
\[
\varphi_{L(x,y)}(\xi) = \exp\Big(-\frac{\sigma^2\xi^2}{2}\Big), \quad \xi \in \mathbb{R},
\]
for (x, y) ∈ [0, T]^2. Hence, in the case of a stationary GRF, the subordinated GRF is pointwise normally distributed with variance σ².
We conclude this subsection with a remark on the given Lévy-Khinchin formula and its implications.
Remark 4.9. With the approach of subordinating GRFs on a higher-dimensional domain, we obtain a discontinuous Lévy-type random field and a Lévy-Khinchin formula which allows access to the pointwise distribution of the random field. Further, we obtain a parametrization of the class of subordinated random fields similar to the one for Lévy processes on a one-dimensional parameter space: under the assumptions of Theorem 4.6, every subordinated GRF can be characterized by the tuple (γ_1, …, γ_d, q, ν_1, …, ν_d), where γ_k and ν_k are the drift parameters and Lévy measures of the subordinators and q is the covariance function of the GRF. Further, the class of subordinated GRFs is linear in the sense that for the sum of two independent subordinated GRFs one can construct a single subordinated GRF with the same pointwise characteristic function.
5. Covariance function. One advantage of the subordinated GRF is that the correlation between spatial points is accessible. The correlation structure is hereby determined by the covariance function of the underlying GRF and the specific choice of the subordinators. For statistical applications it is often important to match or enforce a specific correlation structure in view of fitting random fields to physical phenomena. In this context the question arises whether one can find analytically explicit formulas for the covariance function of a subordinated Gaussian random field. This is explored in the following subsections.
For notational simplicity we restrict the dimension to be d = 2 in this section but we point out that analogous results apply for dimensions d ≥ 3. A direct application of Lemma 4.1 yields the following corollary.
Corollary 5.1. Let W be a continuous, zero-mean GRF on R_+^2 with covariance function q_W. Further, let l_1 and l_2 be two independent Lévy subordinators which are independent of W. Then the subordinated GRF L defined by L(x, y) := W(l_1(x), l_2(y)), for (x, y) ∈ R_+^2, is zero-mean with covariance function
\[
q_L\big((x, y), (x', y')\big) = \mathbb{E}\big(m(l_1(x), l_2(y), l_1(x'), l_2(y'))\big),
\]
where the function m is given by m(x_1, y_1, x_2, y_2) := q_W((x_1, y_1), (x_2, y_2)) for x_1, y_1, x_2, y_2 ∈ R_+.
5.1. The stationary case. We use Corollary 5.1 to derive a semi-explicit formula for the covariance function of the subordinated GRF, where the underlying GRF is stationary.
Lemma 5.2. Let W be a zero-mean, continuous and stationary GRF on R_+^2 with covariance function q_W. Further, suppose that l_1 and l_2 are independent Lévy subordinators on [0, T_1] (resp. [0, T_2]) with marginal densities f_1^x (resp. f_2^y), i.e. f_1^x(⋅) (resp. f_2^y(⋅)) is the density function of l_1(x) (resp. l_2(y)) for (x, y) ∈ [0, T]^2. The covariance function of the subordinated GRF L with L(x, y) := W(l_1(x), l_2(y)), for (x, y) ∈ [0, T]^2, admits the representation
\[
q_L\big((x, y), (x', y')\big) = \int_0^\infty \int_0^\infty q_W\big((s, t), (0, 0)\big)\, f_1^{|x - x'|}(s)\, f_2^{|y - y'|}(t)\,ds\,dt,
\]
for x ≠ x' and y ≠ y', and for (x, y) = (x', y') the pointwise variance is given by q_L((x, y), (x, y)) = q_W((0, 0), (0, 0)).

Proof. The assertion follows immediately by Corollary 5.1 together with the independence of the processes l_1 and l_2 and the fact that l_k(x) − l_k(x') has the same distribution as l_k(|x − x'|) for x, x' ∈ [0, T_k] and k = 1, 2, by the definition of a Lévy process.

5.2. The non-stationary case. In this subsection, we derive a formula for the covariance function of the subordinated GRF for the case that the underlying GRF is non-stationary. The following lemma will be useful in the proof of the covariance representation.
Lemma 5.3. Let l be a Lévy subordinator on [0, T] and x, x' ∈ [0, T] with x < x'. Then the increment l(x') − l(x) is stochastically independent of the random variable l(x) and has the same distribution as l(x' − x).

Proof. For x < x', the increment l(x') − l(x) is stochastically independent of the random variable l(x) by the definition of a Lévy process, which yields the assertion. For the case that x' < x the same argument applies with interchanged roles, which finishes the proof.
Remark 5.4. Note that Lemma 5.3 immediately implies that the joint density f_Z(s, t) of the two-dimensional random vector Z = (l(x), l(x')) for a Lévy subordinator l on [0, T] and x ≠ x' ∈ [0, T] with x < x' is given by f_Z(s, t) = f^x(s) f^{x'−x}(t − s), for s, t ∈ R_+, where f^x denotes the density of l(x). With this lemma at hand we are able to derive a formula for the covariance function of the subordinated (non-stationary) GRF.
Lemma 5.5. Let W : R_+^2 → R be a zero-mean, continuous and non-stationary GRF with covariance function q_W. Further, suppose that l_1 and l_2 are independent Lévy subordinators on [0, T_1] (resp. [0, T_2]) with density functions f_1 and f_2, i.e. f_1^x(⋅) (resp. f_2^y(⋅)) is the density function of l_1(x) (resp. l_2(y)) for (x, y) ∈ [0, T]^2. The covariance function of the subordinated GRF L with L(x, y) := W(l_1(x), l_2(y)), for (x, y) ∈ [0, T]^2, admits a semi-explicit integral representation for (x, y), (x', y') ∈ [0, T]^2 with x ≠ x' and y ≠ y'; corresponding formulas hold for x = x' and y ≠ y', for x ≠ x' and y = y', and for the pointwise variance in the case (x, y) = (x', y').

Proof. Using Corollary 5.1 and the independence of the processes l_1 and l_2 together with Lemma 5.3 and Remark 5.4, we calculate the representation for (x, y), (x', y') ∈ [0, T]^2 with x ≠ x' and y ≠ y'. The remaining cases follow by the same argument.

For the calibration of the covariance structure, one may minimize the misfit ‖q̃_L − q_L‖_*, where we use the notation q̃_L := {q̃_L((x_i, y_i), (x_j, y_j)), i, j = 1, …, N} and ‖⋅‖_* is an appropriate norm, e.g. the Euclidean norm. In order to solve this problem, the formulas for the covariance function given by Lemma 5.2 and Lemma 5.5 can be used, but still the solution cannot be found easily due to the complexity of the set of admissible parameters.
6. Stochastic regularity: pointwise moments. In this section we consider pointwise moments of a subordinated GRF L. In particular, we derive conditions which ensure the existence of pointwise p-th moments of the subordinated GRF L defined by L(x) := W(l_1(x_1), …, l_d(x_d)), for x = (x_1, …, x_d) ∈ [0, T]^d. Obviously, in order to guarantee the existence of moments of the random variable L(x), we have to impose conditions on the GRF W and the subordinators l_1, …, l_d. The following theorem gives a better insight into the interaction between the underlying GRF and the stochastic regularity of the subordinators and presents coupled regularity conditions on the tail behaviour of both components of the random field.
Theorem 6.1. We assume that W is a centered and continuous GRF on R_+^d whose covariance function q_W : R_+^d × R_+^d → R satisfies, for some N ∈ N, constants C_j > 0 and exponents α_i^{(j)} ≥ 0,
\[
\sigma_W^2(x) := q_W(x, x) \le \sum_{j=1}^N C_j\, x_1^{\alpha_1^{(j)}} \cdots x_d^{\alpha_d^{(j)}}, \quad \text{for } x_1, \dots, x_d \ge 0. \tag{6.1}
\]
Here, we use the notation σ_W(x) := (q_W(x, x))^{1/2} for the pointwise standard deviation. We consider a fixed point x ∈ [0, T]^d and assume that the densities f_1^{x_1}, …, f_d^{x_d} of the evaluated processes l_1(x_1), …, l_d(x_d) fulfill
\[
f_i^{x_i}(z) \le K z^{-\eta_i}, \quad \text{for } z \ge K, \tag{6.2}
\]
with decay rates {η_i, i = 1, …, d} and a finite constant K > 0. We define the number
\[
a := \min\Big\{\frac{2(\eta_i - 1)}{\alpha_i^{(j)}}\ \Big|\ i = 1, \dots, d,\ j = 1, \dots, N,\ \alpha_i^{(j)} > 0\Big\}.
\]
Then, the random variable L(x) admits a p-th moment for p ∈ [1, a), i.e. L(x) ∈ L^p(Ω; R) for p ∈ [1, a).
Proof. Let Z ∼ N(0, σ²) be a real-valued, centered, normally distributed random variable with variance σ² > 0. It follows by Equation (18) in [28] that the p-th absolute moment of Z admits the formula
\[
\mathbb{E}(|Z|^p) = \sigma^p\,\frac{2^{p/2}\,\Gamma\big(\frac{p+1}{2}\big)}{\sqrt{\pi}} =: C_p\,\sigma^p, \tag{6.3}
\]
for all p > −1. Let p ≥ 1 be a fixed number. We use Lemma 4.1 to calculate for the p-th moment of L(x):
\[
\mathbb{E}(|L(x)|^p) = \mathbb{E}\big(m(l_1(x_1), \dots, l_d(x_d))\big), \quad \text{with } m(x') := \mathbb{E}(|W(x')|^p) = C_p\,\sigma_W^p(x'),
\]
for x' = (x'_1, …, x'_d) ∈ R_+^d, where we used Equation (6.3). Hence, we obtain E(|L(x)|^p) = C_p E(σ_W^p(l_1(x_1), …, l_d(x_d))).
Next, we use the tail estimates (6.1) and (6.2), Hölder's inequality and the independence of the subordinators to bound this expectation by a finite sum of integrals I_i^j. It remains to show that all the integrals I_i^j are finite. For i ∈ {1, …, d} and j ∈ {1, …, N} with α_i^{(j)} > 0, the corresponding integral in the last step is finite since p α_i^{(j)}/2 − η_i < −1 for p < a. We close this section with three remarks on the assumptions and possible extensions of Theorem 6.1.
Remark 6.3. We point out that the statement of Theorem 6.1 remains valid if we consider Lévy subordinators with discrete marginal distributions which satisfy a discrete version of (6.2): if the GRF W satisfies (6.1) and the evaluated (discrete) subordinators l_1(x_1), …, l_d(x_d) satisfy the corresponding polynomial tail bound on their probability mass functions, then we obtain that E(|L(x_1, …, x_d)|^p) < ∞ for p ∈ [1, a) with the real number a defined in Theorem 6.1.
Remark 6.4. For the pointwise existence of moments given by Theorem 6.1, it is not necessary to restrict the subordinating processes to the class of Lévy subordinators. More generally, one could consider a GRF W satisfying (6.1) and general Lévy processes l_1, …, l_d satisfying (6.2) for z ≥ K. In this case, Theorem 6.1 still holds for the random field L defined by L(x) := W(|l_1(x_1)|, …, |l_d(x_d)|), for x = (x_1, …, x_d) ∈ [0, T]^d.

7. Numerical Examples.
In this section we present numerical experiments illustrating the theoretical results of this paper. We verify the Lévy-Khinchin formula, which allows access to the pointwise distribution of a subordinated GRF, and validate the formulas for the covariance functions given in Section 5. Further, we introduce an approach for marginal-distribution fitting of the subordinated GRF and present numerical experiments addressing the pointwise stochastic regularity of these fields (see Section 6).

7.1. Verification of the Lévy-Khinchin formula.
In this subsection we investigate the pointwise distribution of subordinated GRFs in order to verify the Lévy-Khinchin formula given by Theorem 4.6. To be more precise, we use Corollary 4.7 to obtain a pointwise distributional representation of a subordinated GRF as the sum of one-dimensional Lévy processes with transformed Lévy triplets.
Assume L = (W(l_1(x), l_2(y)), (x, y) ∈ [0, 1]^2) is a subordinated GRF where the GRF W satisfies Assumption 4.4 and the two subordinators l_1 and l_2 are characterized by the Lévy triplets (γ_k, 0, ν_k) for k = 1, 2. It follows by Corollary 4.7 that L admits the distributional representation
\[
L(x, y) \stackrel{\mathcal{D}}{=} \tilde{l}_1(x) + \tilde{l}_2(y), \tag{7.1}
\]
for (x, y) ∈ [0, 1]^2. Here, the processes l̃_k on [0, 1] are independent Lévy processes with triplets (0, σ²γ_k, ν_k^{\#}) for k = 1, 2 in the sense of the one-dimensional Lévy-Khinchin formula (see Theorem 2.2), and the Lévy measure ν_k^{\#} is defined as in Corollary 4.7, for a, b ∈ R and k = 1, 2. In order to validate Equation (7.1) we choose specific spatial points and sample the subordinated GRF L to compare it with the distribution given by the right hand side of (7.1). We use two different methods to approximate the distribution of the Lévy processes on the right hand side of (7.1): the compound Poisson approximation (CPA) (see [26, Section 8.2.1]) and the Fourier inversion method for Lévy processes (see [13] and [4]), which allows for a direct approximation of the density of the right hand side of (7.1).
7.1.1. First approach: CPA. We recall that a Gamma(a_G, b_G) process l_G has independent Gamma-distributed increments and l_G(t) follows a Gamma(a_G ⋅ t, b_G) distribution. In our first example we use Gamma(4, 12) processes to subordinate the GRF W which is defined by W(x, y) = (x + y)^{1/2} W̃(x, y), for (x, y) ∈ [0, 1]^2, where W̃ is a Matérn-1.5 GRF with pointwise standard deviation σ = 2 (see Remark 4.5). We fix the evaluation point (x, y) = (1, 1) and use the CPA method to obtain samples of the Lévy process on the right hand side of (7.1), which can then be compared with samples of the subordinated GRF. Figure 2 (left and middle) shows the corresponding histograms for 10,000 samples of each distribution. Visually, we obtain a relatively accurate fit of the distributions since the histograms have similar characteristics. However, in contrast to the left histogram, the distribution corresponding to the middle histogram in Figure 2 is not symmetric. This is not due to the specific choice of the CPA parameters but due to the method itself, since in CPA a Lévy process is essentially approximated by a sum of compensated Poisson processes, which is never symmetric, as one can see in the right histogram of Figure 2.
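For reference, the CPA idea can be sketched generically: jumps below a truncation level ε are discarded, the remaining Lévy measure is normalized into a jump-size distribution, and a Poisson number of jumps is summed. The Gamma(a_G, b_G) process has Lévy density ν(dy) = a_G e^{−b_G y}/y dy; the truncation and grid parameters below are illustrative choices, not the ones used in the experiments:

```python
import numpy as np

def cpa_gamma(t, a_g, b_g, eps=1e-3, cut=5.0, m=4000, n_samples=5000, rng=None):
    """Compound Poisson approximation of a Gamma(a_g, b_g) subordinator at time t.

    Jumps below eps are discarded; the Gamma Levy density
    nu(dy) = a_g * exp(-b_g * y) / y dy is discretized on [eps, cut].
    """
    rng = np.random.default_rng() if rng is None else rng
    y = np.linspace(eps, cut, m)
    dens = a_g * np.exp(-b_g * y) / y
    lam = dens.sum() * (y[1] - y[0])   # total intensity of jumps larger than eps
    probs = dens / dens.sum()          # discretized jump-size distribution
    counts = rng.poisson(lam * t, size=n_samples)
    return np.array([rng.choice(y, size=k, p=probs).sum() for k in counts])

samples = cpa_gamma(1.0, 4.0, 12.0, rng=np.random.default_rng(0))
```

The sample mean should be close to E(l_G(1)) = a_G/b_G, up to the small bias introduced by discarding jumps below ε.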
We conclude that, even if the histograms indicate that the underlying distributions match, the CPA is not perfectly suitable to approximate the pointwise marginal distribution of the Lévy processes given on the right hand side of Equation (7.1), since the CPA cannot capture the symmetry properties of this distribution.

7.1.2. Second approach: Fourier inversion method.
The second approach is to approximate the density function of the right hand side of (7.1) by the Fourier inversion (FI) method (see [13] and [4]) and compare it with samples of the subordinated GRF. Figure 3 illustrates the results for this approach, where we used the evaluation point (x, y) = (1, 1), the same GRF as in Subsection 7.1.1, Gamma(4, 12) subordinators and 100,000 samples of the subordinated GRF.
As one can see in Figure 3, the pointwise distribution of the subordinated GRF matches the approximated density of the right hand side of (7.1) very well. We confirm this by a Kolmogorov-Smirnov test (see for example [23, Section VII.4]). Figure 4 illustrates how the empirical CDF, obtained by sampling the subordinated GRF, converges to the target CDF, which is computed by the Fourier inversion method using Equation (7.1). A Kolmogorov-Smirnov test with 10,000 samples and a level of significance of 5% is passed.
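The Fourier inversion step can be sketched generically: given a characteristic function φ, the density is recovered from a truncated, discretized inversion integral. This is a minimal sketch of the principle, checked against the standard normal rather than the Lévy target of (7.1); the truncation parameters are illustrative:

```python
import numpy as np

def density_from_cf(cf, x, xi_max=40.0, m=8001):
    """Approximate a density from its characteristic function phi via
    f(x) = (1 / (2 pi)) * int exp(-i xi x) phi(xi) d xi  (truncated)."""
    xi = np.linspace(-xi_max, xi_max, m)
    dxi = xi[1] - xi[0]
    vals = cf(xi)[None, :] * np.exp(-1j * np.outer(x, xi))
    return vals.sum(axis=1).real * dxi / (2.0 * np.pi)

# sanity check against the standard normal, cf(xi) = exp(-xi^2 / 2)
x = np.linspace(-3.0, 3.0, 61)
f = density_from_cf(lambda xi: np.exp(-xi**2 / 2.0), x)
f_exact = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
assert np.max(np.abs(f - f_exact)) < 1e-6
```

For the experiments above, the same inversion would be applied to the explicit characteristic function provided by Theorem 4.6.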
In the next experiment we use a modified subordinator, which results in a less smooth pointwise density of the subordinated GRF. We repeat the above experiment with Gamma(0.5, 10) subordinators, where the GRF, the evaluation point and the sample size remain unchanged. Figure 5 shows 100,000 samples of the subordinated GRF and the density of the process given by the right hand side of Equation (7.1) computed via the Fourier inversion method.
As in the first experiment, the results given by Figure 5 indicate that the pointwise distribution of the subordinated GRF matches the approximated density of the right hand side of (7.1). Figure 6 illustrates how the empirical CDF, obtained by sampling the subordinated GRF, converges to the target CDF of the right hand side of Equation (7.1), which is computed by the Fourier inversion method. A Kolmogorov-Smirnov test with a level of significance of 5% is passed.

7.2. Verification of covariance formulas.
In Section 5 we derived semi-explicit formulas for the covariance function of centered subordinated GRFs. In the following subsection we validate these formulas by comparing them with numerically estimated covariances using specific GRFs and subordinators. We use both stationary and non-stationary underlying GRFs.
In our numerical experiments, we evaluate the integrals appearing in the derived formulas for the covariance function via the trapezoidal rule. Further, we estimate the empirical covariance of the subordinated GRF by a standard Monte Carlo (MC) estimation: for spatial points (x, y), (x', y') ∈ [0, T]^2 and a sample number M ∈ N we estimate
\[
q_L\big((x, y), (x', y')\big) \approx \frac{1}{M} \sum_{i=1}^M \big(W(l_1(x), l_2(y))\big)^{(i)}\, \big(W(l_1(x'), l_2(y'))\big)^{(i)},
\]
where (W(l_1(x), l_2(y))^{(i)}, i = 1, …, M) (resp. (W(l_1(x'), l_2(y'))^{(i)}, i = 1, …, M)) are i.i.d. copies of the subordinated GRF evaluated at (x, y) (resp. (x', y')). We consider the covariance of the field at different points to cover different situations: points with small distance, points which are equal in one coordinate, and points with large distance.
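The MC estimator can be sanity-checked in a case where the pointwise variance is known in closed form. Assuming a Brownian sheet as underlying GRF (so q_W((s, t), (s, t)) = s·t; an illustrative choice, not the Matérn fields of the experiments), conditioning on the subordinators gives W(l_1(x), l_2(y)) ~ N(0, l_1(x) l_2(y)), so exact draws and the exact variance E(l_1(x)) E(l_2(y)) are available:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20000
a_g, b_g = 4.0, 12.0
x, y = 1.0, 1.0

# Gamma(a_g * x, b_g) marginals of the subordinators at x and y
l1 = rng.gamma(a_g * x, 1.0 / b_g, M)
l2 = rng.gamma(a_g * y, 1.0 / b_g, M)
# exact draws of L(x, y): conditionally N(0, l1 * l2) given the subordinators
L = rng.normal(0.0, np.sqrt(l1 * l2))

var_mc = np.mean(L**2)                        # MC estimator (zero-mean field)
var_true = (a_g * x / b_g) * (a_g * y / b_g)  # E[l1(x)] * E[l2(y)] = 1/9 here
assert abs(var_mc - var_true) < 0.01
```

The deviation of `var_mc` from `var_true` shrinks at the usual M^{-1/2} Monte Carlo rate, which is what the RMSE plots in this subsection quantify.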
As in the first example, we use 500 independent MC runs to estimate the RMSE (see Subsection 7.2.1). Figure 8 shows the results.

7.3. Stochastic fitting: pointwise distribution. In this section, we present an approach to fit the parameters of a subordinated GRF in order to obtain a specific pointwise distribution. We work under Assumption 4.4 and assume that a subordinated GRF with an unknown set of parameters (γ_1, γ_2, σ, ν_1, ν_2) is given (see Remark 4.9). We further assume that the distribution of the random field at n ∈ N fixed spatial points is known and we want to estimate the parameters of the field in order to obtain a fitted subordinated GRF which admits approximately the same pointwise distribution at the given n points.
In order to fit the parameters of the subordinated GRF we follow two different approaches: fitting based on pointwise densities and fitting based on pointwise characteristic functions. The procedure is as follows: Theorem 4.6 allows access to the pointwise characteristic functions of a subordinated GRF with a given set of parameters. The pointwise characteristic functions in turn allow us to calculate the pointwise density functions (see Subsection 7.1.2 and [13]). Hence, Theorem 4.6 gives access to the pointwise characteristic functions and the pointwise density functions of a subordinated GRF. Having either the pointwise densities or the pointwise characteristic functions, a fitting error can be computed by comparing those with the desired density functions (resp. characteristic functions) at the specific points. Therefore, starting with an initial set of parameters (initial guess), we compute the L²-fitting error of the densities (resp. characteristic functions) for all n points and optimize the parameters in order to minimize the fitting error using the Levenberg-Marquardt (LM) algorithm (see for example [21]). We call these two approaches the density approach and the characteristic-function approach to fit the parameters of the subordinated GRF. Figure 9 illustrates the workflow of the two approaches.

Fig. 9: Schematic illustration of the density approach (blue) and the characteristic-function approach for parameter optimization of a subordinated GRF.
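The optimization loop can be sketched in miniature with `scipy.optimize.least_squares(method="lm")`, a standard Levenberg-Marquardt implementation: here a single parameter σ of a centered Gaussian characteristic function is recovered from target CF values on a ξ-grid. The grid, starting value and one-parameter model are illustrative simplifications of the actual fitting problem:

```python
import numpy as np
from scipy.optimize import least_squares

# Characteristic-function fitting in miniature: recover sigma of a centered
# normal from values of its CF on a xi-grid, using Levenberg-Marquardt.
xi = np.linspace(-5.0, 5.0, 101)
sigma_true = 2.0
target = np.exp(-sigma_true**2 * xi**2 / 2.0)  # "desired" pointwise CF (real part)

def residuals(theta):
    # L2-type fitting error between model CF and target CF on the grid
    return np.exp(-theta[0]**2 * xi**2 / 2.0) - target

fit = least_squares(residuals, x0=[1.0], method="lm")
assert abs(abs(fit.x[0]) - sigma_true) < 1e-4
```

In the real procedure, the residual vector stacks the CF (or density) misfits over all n spatial points, and θ collects the tuple of subordinator and field parameters.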
7.3.1. Density fitting. We consider a Gamma-subordinated GRF with parameters ã_G = 10, σ = 2. In the following, we perform two experiments, varying the initial guess for the parameter optimization. Figure 11 shows the results after 5 iterations of the LM-algorithm using a first set of initial guess parameters. As we see in Figure 11, the density functions at the 4 considered points match the desired densities corresponding to the true parameter set. Since this might also be caused by the fact that the initial guess parameters are quite close to the true parameters, in the second experiment we choose the initial guess far away from the true values. Figure 12 shows that the set of initial guess parameters leading to a good fit of the densities corresponding to the optimized parameters is limited: if we consider the initial parameters a_G^{(1)} = 1, σ = 1, we see that the densities do not match the desired ones anymore. This is also true for a higher number of iterations of the LM-algorithm.
The results show that the density-approach suffers from two problems. First, it requires evaluations of the approximated pointwise density functions, which are very expensive since iterated integrals have to be computed; this is also the reason why we only perform 5 iterations of the LM-algorithm. Second, the method is quite sensitive to the parameters used in the approximation of the integrals entering the error computation. This is also a reason why the method fails if we choose an initial guess far away from the true parameters.
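The density evaluations this approach needs can be obtained from a characteristic function by one-dimensional Fourier inversion. The following sketch (an illustrative quadrature, not necessarily the paper's exact scheme) uses a truncated trapezoidal rule for a distribution with real-valued characteristic function and checks it on the standard normal case:

```python
import numpy as np

def density_from_cf(cf, x, xi_max=40.0, n=4000):
    # Inversion formula for a symmetric distribution with real cf:
    # f(x) = (1/pi) * int_0^inf Re[cf(xi) * exp(-i*xi*x)] d(xi),
    # truncated at xi_max and approximated by the trapezoidal rule.
    xi = np.linspace(0.0, xi_max, n)
    integrand = (cf(xi) * np.exp(-1j * xi * x)).real
    dxi = xi[1] - xi[0]
    weights = np.full(n, dxi)
    weights[0] = weights[-1] = dxi / 2
    return (integrand * weights).sum() / np.pi

# Sanity check on the standard normal: cf(xi) = exp(-xi^2/2),
# so f(0) should be 1/sqrt(2*pi) ~ 0.39894.
f0 = density_from_cf(lambda xi: np.exp(-xi ** 2 / 2), 0.0)
```

Already this one-dimensional inversion requires a full quadrature per density evaluation; when the characteristic function itself is only available through a further (iterated) integral, the cost of the density-approach grows accordingly.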

Characteristic function fitting.
In the next examples we use pointwise characteristic functions to optimize the parameters in order to obtain the desired pointwise distributions.
We again consider a Gamma-subordinated GRF with ã_G = 10, σ̃ = 2 and perform 3 experiments, varying the initial guess. Note that for the characteristic-function-approach we always compare the real parts of the characteristic functions, since the imaginary parts are numerically zero in all considered examples. Figure 13 shows the results after 50 iterations of the LM-algorithm using the initial guess parameters a_G^(1), σ^(1). We see that the pointwise characteristic functions corresponding to the optimized parameters match the desired characteristic functions in this example.
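That the imaginary parts vanish is expected for symmetric pointwise distributions. As a one-dimensional illustration (an assumption for exposition; the paper's construction lives on a two-dimensional domain), conditioning on a Gamma(a_G t, b_G) subordinator value gives W(l(t)) | l(t) ~ N(0, l(t)), so the pointwise characteristic function is the Laplace transform of the Gamma distribution evaluated at ξ²/2, which is real. A short Monte Carlo run checks both the formula and the vanishing imaginary part:

```python
import numpy as np

rng = np.random.default_rng(0)
a_G, b_G, t = 4.0, 10.0, 1.0   # illustrative parameter choices

def cf_subordinated(xi):
    # E[exp(i*xi*W(l(t)))] = E[exp(-xi^2 * l(t) / 2)]: the Gamma(a_G*t, b_G)
    # Laplace transform at xi^2/2, hence (1 + xi^2/(2*b_G))^(-a_G*t).
    return (1 + xi ** 2 / (2 * b_G)) ** (-a_G * t)

# Monte Carlo check: sample l(t), then W(l(t)) = sqrt(l(t)) * Z conditionally.
M = 200_000
l = rng.gamma(a_G * t, 1.0 / b_G, size=M)
samples = np.sqrt(l) * rng.standard_normal(M)
xi0 = 1.5
cf_emp = np.exp(1j * xi0 * samples).mean()
```

The empirical characteristic function `cf_emp` should match `cf_subordinated(1.5)` up to Monte Carlo error, with a numerically negligible imaginary part.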
As described in Subsection 7.3.1, moving the initial parameters further away from the true values uncovers the limitations of the density-approach. Therefore, the results for the initial parameters a_G^(1) = 1, σ = 1 (see Figure 14) are important for the comparison of the two approaches. We see that the characteristic-function-approach still works for this set of initial parameters: the characteristic functions of the subordinated GRF corresponding to the optimized parameters match the desired characteristic functions at the four specified points. A further remarkable result is shown in Figure 15: even if we set the initial parameters to a_G = 0, σ = 1, which corresponds to an initial guess distribution that is deterministic and zero, the optimized parameters lead to matching characteristic functions at the four points. After these experiments with both approaches introduced at the beginning of Subsection 7.3, we conclude that the characteristic-function-approach performs far better than the density-approach: it is less computationally expensive, less sensitive to the parameter choice and yields better results, especially in the borderline cases.

Pointwise Moments.
In this subsection we numerically validate Theorem 6.1, which guarantees the existence of moments of the subordinated GRF if the GRF and the corresponding subordinators satisfy certain conditions. In our numerical examples we set d = 2 and assume W to be a Brownian sheet on R^2_+. We use Lévy processes with different stochastic regularity (in terms of the existence of moments) to subordinate the GRF W. In order to validate Theorem 6.1 we use different statistical methods to verify or disprove the existence of moments of a specific order.
7.4.1. Statistical methods to test the existence of moments of a random variable. The existence of moments of a specific distribution is one of the most frequently formulated assumptions in statistical applications. For example, already the strong law of large numbers assumes finiteness of the first moment of the corresponding random variable. Nevertheless, only a few statistical methods exist in the literature to verify or disprove the existence of moments given a specific sample (see e.g. [20,15,22,10,12,11]). One of the earliest methods was proposed in 1963 by Mandelbrot (see [20] and [8]). It is based on the simple observation that the estimated (sample) moments converge to a certain value for an increasing sample size if the theoretical moment exists; if the theoretical moment does not exist, the estimated moment diverges or behaves unstably as the sample size increases. However, this quite intuitive method is rather heuristic and depends highly on the experience of the researcher (see also [10]). Another popular direct way to investigate the existence of moments of a distribution is the sample-based estimation of a decay rate α of the corresponding density function, proposed by Hill in [15]. However, the Hill estimator requires a parameter k > 0 which specifies the sample values considered to form the tail of the distribution, and it turns out to be very sensitive to the choice of this parameter. Further, the method makes the quite restrictive assumption that the underlying distribution is of Pareto type (see [22,10,12] and [11]). In 2013, Fedotenkov proposed a bootstrap test for the existence of moments of a given distribution (see [10]). The test performs well for specific distributions; however, its accuracy deteriorates quickly when moments of higher order are considered (see also [12]).
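For concreteness, the Hill estimator mentioned above can be sketched in a few lines. The sample below is an assumed exact Pareto(3) sample, the setting in which the estimator is well justified; the choice of k is exactly the sensitive tuning parameter discussed in the text.

```python
import numpy as np

def hill_estimator(sample, k):
    # Hill's tail-index estimate from the k largest order statistics:
    # alpha_hat = k / sum_{i=1}^{k} log( X_(n-i+1) / X_(n-k) ).
    x = np.sort(np.abs(sample))
    tail = x[-k:]            # the k largest values
    threshold = x[-k - 1]    # the (k+1)-th largest value X_(n-k)
    return k / np.log(tail / threshold).sum()

rng = np.random.default_rng(42)
# Classical Pareto(alpha=3, x_m=1) sample: 1 + Lomax(3) draws.
sample = 1 + rng.pareto(3.0, size=200_000)
est = hill_estimator(sample, k=2000)
```

For this exact Pareto sample the estimate should be close to the true tail index 3; for distributions that are not of Pareto type, the estimate can depend strongly on k.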
Recently, Ng and Yau proposed another sample-based bootstrap test for the existence of moments which outperforms the previously mentioned methods for many distributions (see [22]). The test is based on a result from bootstrap asymptotic theory which states that the m out of n bootstrap sample mean (see [7]) converges weakly to a normal distribution. For details we refer to [22].
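We do not reproduce the Ng-Yau test statistic here, but the resampling idea it builds on can be sketched: in the finite-variance case, standardized m out of n bootstrap sample means (with m = o(n)) are approximately standard normal, whereas this normal limit breaks down when the relevant moment is infinite. A minimal sketch, with an assumed exponential toy sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
data = rng.exponential(size=n)   # finite-variance toy sample
m = int(n ** (2 / 3))            # resample size m = o(n)
B = 500                          # number of bootstrap resamples

# m out of n bootstrap: draw m points with replacement, B times,
# and standardize each resample mean.
means = np.array([rng.choice(data, size=m).mean() for _ in range(B)])
z = (means - data.mean()) / (data.std() / np.sqrt(m))
# For finite variance, z should look approximately standard normal.
```

The actual test of [22] builds a statistic on top of this resampling scheme to decide between the normal and non-normal regimes; for details we refer to [22].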
Based on these observations, we conduct a combination of direct moment estimation via Monte Carlo (see also Subsection 7.2) and the bootstrap test proposed by Ng and Yau to investigate the existence of (pointwise) moments of the subordinated GRF.
For our numerical examples we choose three different Lévy distributions to subordinate the Brownian sheet W: a Poisson distribution, a Gamma distribution and a Student-t distribution. That is, we use a discrete and a continuous distribution for which all moments are finite, and a continuous distribution which only admits a limited number of moments. Hence, we consider three fundamentally different situations. In all three experiments, we consider the evaluation point x = (x_1, x_2) = (1, 1) ∈ R^2_+ for the subordinated GRF L. Note that the two-dimensional Brownian sheet satisfies Equation (6.1) in Theorem 6.1 with N = 1, c_1 = 1 and α^(1) = (1/2, 1/2).

7.4.2. Poisson-subordinated Brownian sheet. In this example, we use Poisson(3) processes to subordinate the two-dimensional Brownian sheet. We recall that a Poisson(λ)-distributed random variable X admits the probability mass function P(X = k) = e^(-λ) λ^k / k! for k ∈ N_0. Hence, condition (6.4) is satisfied for any η_i > 0, i = 1, 2, since point evaluations of a Poisson process are Poisson distributed. Theorem 6.1 implies the existence of the p-th moment of the evaluated field L(1, 1) for any p < ∞ (see Remark 6.3). We estimate the p-th moment for p ∈ {4, 6, 8} by a Monte Carlo estimation using M samples of the evaluated GRF L(1, 1) with M ∈ N. Figure 16 shows the development of the MC-estimator for the p-th moment as a function of the number of samples. For every moment, we take 5 independent MC-runs to validate that they converge to the same value. As expected, Figure 16 shows a stable convergence of the MC estimator for a growing number of samples for every considered moment. Further, the different independent MC-runs converge to the same value: the theoretical p-th moment for p ∈ {4, 6, 8}.
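A hedged Monte Carlo sketch of this experiment: we assume, for illustration, that the evaluated field can be sampled conditionally as L(1,1) | l_1, l_2 ~ N(0, l_1 l_2) with two independent Poisson(3) subordinator evaluations l_1 = l_1(1), l_2 = l_2(1). Under this assumption the fourth moment has the closed form E[L(1,1)^4] = 3 (E[l^2])^2 = 3 (λ + λ^2)^2, which the MC estimator should reproduce.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, M = 3.0, 500_000

# Conditional sampling assumption: L(1,1) | l1, l2 ~ N(0, l1*l2) with two
# independent Poisson(3) subordinator evaluations.
l1 = rng.poisson(lam, size=M)
l2 = rng.poisson(lam, size=M)
L = np.sqrt(l1 * l2) * rng.standard_normal(M)

# MC estimate of the 4th moment against the closed form
# E[L^4] = 3 * (E[l^2])^2 = 3 * (lam + lam^2)^2 = 432.
m4_mc = np.mean(L ** 4)
m4_exact = 3 * (lam + lam ** 2) ** 2
```

Plotting `m4_mc` as a function of a growing sample count reproduces the stable convergence behaviour shown in Figure 16.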
In order to further confirm this result, we perform the bootstrap test (see [22]). We test the existence of the p-th moment for p ∈ {1, 2, 3, 4, 5, 6, 7, 8} using M = 10^7 samples of the evaluated subordinated GRF L(1, 1). Hence, the null and alternative hypotheses are given by H_0: E(|L(1, 1)|^p) < ∞ and H_1: E(|L(1, 1)|^p) = ∞ for the different values of p. We choose the significance level α_s = 1% and perform 10 independent test runs. Figure 17 shows the proportion of acceptance of the null hypothesis in the 10 test runs as a function of the considered moment p ∈ {1, 2, 3, 4, 5, 6, 7, 8}. As we see in Figure 17, the bootstrap test accepts the null hypothesis H_0 in every test run for every considered moment p ∈ {1, 2, 3, 4, 5, 6, 7, 8}. Therefore, the test results further support the expected observation that all of the considered moments exist.

7.4.3. Gamma-subordinated Brownian sheet. In our second numerical example we consider Gamma processes to subordinate the Brownian sheet. We recall that, for a_G, b_G > 0, a Gamma(a_G, b_G)-distributed random variable admits the density function f(x) = b_G^(a_G) / Γ(a_G) ⋅ x^(a_G - 1) e^(-b_G x) for x > 0, where Γ(⋅) denotes the Gamma function. A Gamma process (l(t))_{t≥0} has independent Gamma-distributed increments and l(t) follows a Gamma(a_G ⋅ t, b_G)-distribution for t > 0. Therefore, condition (6.2) holds for any η_i > 0, for i = 1, 2, and, hence, Theorem 6.1 again implies the existence of every moment, i.e. E(|L(1, 1)|^p) < ∞ for any p ≥ 1. We choose a_G = 4, b_G = 10 and estimate the p-th moment of L(1, 1) with p ∈ {4, 6, 8} by a Monte Carlo estimation using a growing number of samples M ∈ N. Figure 18 shows the development of the MC-estimator for the p-th moment as a function of the number of samples. As in the first experiment, we take 5 independent MC-runs to validate the convergence to a unique value. In line with our expectations, the results show a stable convergence of the MC estimations for the different moments.
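Under the same conditional-sampling assumption as in the Poisson sketch above (L(1,1) | l_1, l_2 ~ N(0, l_1 l_2), an illustration rather than the paper's implementation), the Gamma experiment admits the closed-form check E[L(1,1)^2] = (a_G/b_G)^2 = 0.16, which a short MC run should reproduce:

```python
import numpy as np

rng = np.random.default_rng(11)
a_G, b_G, M = 4.0, 10.0, 400_000

# l(1) ~ Gamma(a_G, b_G) for each of two independent subordinators;
# conditionally L(1,1) | l1, l2 ~ N(0, l1*l2).
l1 = rng.gamma(a_G, 1.0 / b_G, size=M)
l2 = rng.gamma(a_G, 1.0 / b_G, size=M)
L = np.sqrt(l1 * l2) * rng.standard_normal(M)

# E[L^2] = E[l1]*E[l2] = (a_G/b_G)^2 = 0.16.
m2_mc = np.mean(L ** 2)
m2_exact = (a_G / b_G) ** 2
```

Such closed-form moments provide a cheap sanity check for the MC convergence plots of Figure 18.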
In this experiment we again perform the bootstrap test for the existence of the p-th moment for p ∈ {1, 2, 3, 4, 5, 6, 7, 8} using M = 10^7 samples of the evaluated subordinated GRF L(1, 1). The null and alternative hypotheses are again given by H_0: E(|L(1, 1)|^p) < ∞ and H_1: E(|L(1, 1)|^p) = ∞ for the different values of p. We choose the significance level α_s = 1% and perform 10 independent test runs. Figure 19 shows the proportion of acceptance of the null hypothesis in the 10 test runs as a function of the considered moment p ∈ {1, 2, 3, 4, 5, 6, 7, 8}.

Fig. 19: Results for ten independent runs of the bootstrap test for the existence of the p-th moment using Gamma(4,10) processes to subordinate the Brownian sheet.
The test results again confirm our theoretical findings, since every test run accepts the null hypothesis for any moment p ∈ {1, 2, 3, 4, 5, 6, 7, 8}, indicating that the moments exist.

7.4.4. Student-t-subordinated Brownian sheet. In our third experiment we use Student-t processes to subordinate the Brownian sheet, so that only a limited number of pointwise moments exist. The MC-estimation behaves differently in this case. For p = 5 the estimation stabilizes with growing sample size and all 5 independent MC-estimations seem to converge to a unique value. However, for the higher moments p = 6 and p = 8, we see upward breakouts and unstable behaviour of the corresponding MC-estimator for increasing sample sizes. Further, the 5 independent MC-runs do not indicate convergence to a unique value. Therefore, these experiments match our expectations, since the evaluated subordinated GRF L(1, 1) admits a p-th moment precisely for p < 6 and this boundary is sharp (see Theorem 6.1).
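The qualitative behaviour (stable MC estimates below the moment boundary, erratic ones above) can be reproduced with plain Student-t samples. We assume ν = 6 degrees of freedom to match the p < 6 boundary (an illustrative assumption about the subordinator's tail, not taken from the paper), and use p = 2 for the stable case so that the moment estimator itself has finite variance:

```python
import numpy as np

nu, M = 6.0, 200_000   # t(6): E|X|^p < infinity exactly for p < 6

def moment_runs(p, runs=5):
    # Independent MC runs of the p-th absolute moment estimator.
    ests = []
    for seed in range(runs):
        x = np.random.default_rng(seed).standard_t(nu, size=M)
        ests.append(np.mean(np.abs(x) ** p))
    return np.array(ests)

stable = moment_runs(2.0)    # E[X^2] = nu/(nu-2) = 1.5: runs agree
unstable = moment_runs(8.0)  # E|X|^8 = infinity: runs scatter wildly

def rel_spread(e):
    # Relative spread of the run-wise estimates.
    return (e.max() - e.min()) / e.mean()
```

Comparing `rel_spread(stable)` with `rel_spread(unstable)` mirrors the behaviour of the 5 independent MC-runs described above: below the boundary the runs agree on a unique value, above it they do not.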
We perform the bootstrap test for the existence of the p-th moment for p ∈ {1, 2, 3, 4, 4.5, 5, 5.2, 5.4, 5.6, 5.8, 6, 6.5, 7, 8} using M = 10^7 samples of the evaluated subordinated GRF L(1, 1). The null and alternative hypotheses are again given by H_0: E(|L(1, 1)|^p) < ∞ and H_1: E(|L(1, 1)|^p) = ∞ for the different values of p. We choose the significance level α_s = 1% and perform 10 independent test runs. Figure 21 shows the proportion of acceptance of the null hypothesis in the 10 test runs as a function of the considered moment p, together with the test statistic values for the ten test runs. The results further confirm our theoretical finding: in all ten test runs the null hypothesis is accepted for p ∈ {1, 2, 3, 4, 4.5, 5}. Further, in all ten test runs H_0 is rejected for p ∈ {6, 6.5, 7, 8}, which is correct since the corresponding theoretical moments do not exist. Only the results for p ∈ (5, 6) are not perfectly accurate, since the corresponding theoretical moments exist and the test nevertheless rejects the null hypothesis in some cases. Still, the results are remarkable: even for p = 5.8 the test accepts H_0 in 5 of the 10 test runs. This again indicates that the power and accuracy of the test are impressively high in our numerical examples.