1 Introduction

In many applications of stochastic modeling it is meaningful to consider random fields which are discontinuous in space, for example in the modeling of fractured porous media. For one-dimensional parameter spaces, as in financial modeling, Lévy processes have turned out to be a very powerful class of (in general) discontinuous stochastic processes with useful analytical properties, see for example Schoutens (2003), Applebaum (2009), Sato (2013).

Whereas the extension of \(\mathbb {R}\)-valued Lévy processes with one-dimensional parameter space to Hilbert space \(\mathscr{H}\)-valued Lévy processes is straightforward (see for example Barth and Stein (2018b)), the extension of Lévy processes to higher-dimensional parameter spaces is more challenging. The reason lies at the very starting point of the definition of Lévy processes, where time increments are considered: the definition explicitly uses the totally ordered structure underlying the considered time interval. The absence of such a structure on a higher-dimensional parameter space makes it difficult to extend the definition of a standard Lévy process to this setting.

Subordinated fields have received little attention in the recent literature. In some classical papers on generalized random fields, of which Dobrushin (1979) is an important representative (see also the references therein), subordinated fields are defined in terms of iterated Itô integrals. In the recent article Makogin and Spodarev (2021), the authors investigate deterministic transformations of Gaussian random fields, so-called Gaussian subordinated fields, and study excursion sets. The Rosenblatt distributions and long-range dependence of (subordinated) fields are studied in Leonenko et al. (2017). The article Barndorff-Nielsen et al. (2001) presents an extension of the concept of subordination to multivariate Lévy processes and investigates self-decomposability of the resulting processes defined on a one-dimensional parameter domain. In the recent paper Buchmann et al. (2019), the authors define the so-called weak subordination of multivariate Lévy processes as a generalization of classical subordination. The resulting multivariate processes depend on a one-dimensional time parameter and the authors prove that weak subordination extends the classical (strong) subordination. In Buchmann et al. (2017) and Buchmann et al. (2020), the authors consider multivariate Brownian motions (weakly) subordinated by multivariate Thorin subordinators, investigate self-decomposability as well as the existence of moments of the resulting distributions and present some applications in mathematical finance. The subordinated multivariate processes considered there are again defined on a one-dimensional parameter domain.

In contrast, the main contribution of our work is to prove properties of the (discontinuous) subordinated random fields on higher-dimensional parameter domains and of their pointwise distributions, which are important in applications (see for example Zhang and Kang (2004), Bastian (2014) and Barth and Stein (2018a)).

We present an approach for an extension of a subclass of Lévy processes to more general parameter spaces: Motivated by the subordinated Brownian motion, we employ a higher-dimensional subordination approach using a Gaussian random field together with Lévy subordinators.

Figure 1 illustrates the approach with samples of a Gaussian random field (GRF) on \([0,1]^2\) with Matérn-1.5 covariance function and the corresponding subordinated field, where we used Poisson and Gamma processes on [0, 1] to subordinate the GRF.

Fig. 1: Sample of Matérn-1.5-GRF (left), Poisson-subordinated GRF (middle) and Gamma-subordinated GRF (right)

These examples illustrate how the jumps of the Lévy subordinators produce jumps in the two-dimensional subordinated GRF. The flexibility of the resulting random fields makes them attractive for a variety of applications. In a recent article by Barth and Merkle (2020), the authors consider a randomized elliptic partial differential equation in which a subordinated GRF occurs in the diffusion coefficient, to name just one possible application.

The question arises whether it is possible to transfer some theoretical results on one-dimensional Lévy processes to these random fields on higher-dimensional parameter spaces. In particular, a Lévy-Khinchin-type formula to access the pointwise distribution of the constructed random field is of great interest (see Sect. 4). In Sect. 5 we investigate the covariance structure of the subordinated fields and show how it is influenced by the choice of subordinators. The stochastic regularity of the subordinated fields is studied in Sect. 6, where we derive conditions which ensure the existence of pointwise moments. In the last section we present numerical experiments illustrating the theoretical results of this paper, intended to help fit random fields to data.

2 Preliminaries

In this section we give a short introduction to Lévy processes and Gaussian random fields as basis for the construction of subordinated Gaussian random fields. Throughout the paper, let \((\Omega ,\mathscr {F},\mathbb {P})\) be a complete probability space.

2.1 Lévy Processes

Let \(\mathscr {T}\subseteq \mathbb {R}_+:=[0,+\infty )\) be an arbitrary time domain. A stochastic process \(X=(X(t),~t\in \mathscr {T})\) on \(\mathscr {T}\) is a family of random variables on the probability space \((\Omega ,\mathscr {F},\mathbb {P})\). A stochastic process l on \(\mathscr {T}=[0,+\infty )\) is said to be a Lévy process if \(l(0)=0~\mathbb {P}-a.s.\), l has independent and stationary increments and l is stochastically continuous. A very important characterization of Lévy processes is given by the so-called Lévy-Khinchin formula.

Theorem 2.1

(Lévy-Khinchin formula, see (Applebaum 2009, Th. 1.3.3 and p. 29)) Let l be a real-valued Lévy process on \(\mathscr {T}=\mathbb {R}_+:=[0,+\infty )\). There exist parameters \(b\in \mathbb {R}, ~\sigma _N^2\in \mathbb {R}_+\) and a measure \(\nu\) on \((\mathbb {R},\mathscr {B}(\mathbb {R}))\) such that the pointwise characteristic function \(\phi _{l(t)}\), for \(t\in \mathscr {T}\), admits the representation

$$\begin{aligned} \phi _{l(t)}(\xi ):= & \ \mathbb {E}(\exp (i\xi l(t))) \\= & \ \exp \Big (t\big (ib\xi - \frac{\sigma _N^2}{2}\xi ^2 + \int _{\mathbb {R}\setminus \{0\}} e^{i\xi y} - 1 - i\xi y\mathbbm {1}_{\{|y|\le 1\}}(y)\,\nu (dy)\big )\Big ), \end{aligned}$$

for \(\xi \in \mathbb {R}\). The measure \(\nu\) on \((\mathbb {R},\mathscr {B}(\mathbb {R}))\) satisfies

$$\begin{aligned} \int _\mathbb {R} \min (y^2,1) \,\nu (dy)<\infty , \end{aligned}$$

and a measure with this property is called Lévy measure.

It follows from the Lévy-Khinchin formula that every Lévy process is fully characterized by the so-called Lévy triplet \((b,\sigma _N^2,\nu )\).

Within the class of Lévy processes there exists a subclass given by the so-called subordinators: A Lévy subordinator on \(\mathscr {T}\) is a Lévy process that is non-decreasing \(\mathbb {P}\)-almost surely. The pointwise characteristic function of a Lévy subordinator l(t), for \(t\in \mathscr {T}\), admits the form

$$\begin{aligned} \phi _{l(t)}(\xi ) = \mathbb {E}(\exp (i\xi l(t))) = \exp \Big (t\big (i\gamma \xi + \int _0^\infty e^{i\xi y} - 1 \,\nu (dy)\big )\Big ),~\text { for }\xi \in \mathbb {R}, \end{aligned}$$
(1)

(see (Applebaum 2009, Theorem 1.3.15)). Here, \(\nu\) is the Lévy measure and \(\gamma\) is called drift parameter of l. The Lévy measure \(\nu\) on \((\mathbb {R},\mathscr {B}(\mathbb {R}))\) of a Lévy subordinator satisfies

$$\begin{aligned} \nu (-\infty ,0)=0 \text { and } \int _0^\infty \min (y,1) \,\nu (dy)<\infty . \end{aligned}$$

Since any Lévy subordinator l is a Lévy process, the Lévy-Khinchin formula holds and we obtain \(\sigma _N^2=0\) and \(b=\gamma + \int _0^1 y\,\nu (dy)\) in Theorem 2.1 for the subordinator l. In the following, we always mean the triplet \((\gamma ,0,\nu )\) corresponding to representation (1) if we refer to the characteristic triplet of a Lévy subordinator.
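To make representation (1) concrete, the following sketch compares its right-hand side, evaluated numerically, with the known closed-form characteristic function of a Gamma subordinator. The Gamma process with drift \(\gamma =0\) and Lévy density \(\alpha e^{-\beta y}/y\) is a standard example that is not introduced in the text; it serves here only as an assumed test case.

```python
import numpy as np

# Gamma(alpha, beta) subordinator: drift gamma = 0 and Lévy density
# nu(dy) = alpha * exp(-beta * y) / y dy (assumed example, not from the text)
alpha, beta, t, xi = 2.0, 3.0, 1.5, 0.7

# numerically evaluate the integral in representation (1) on a truncated grid
y = np.linspace(1e-8, 60.0, 200_000)
integrand = (np.exp(1j * xi * y) - 1.0) * alpha * np.exp(-beta * y) / y
integral = np.sum((integrand[:-1] + integrand[1:]) / 2.0 * np.diff(y))
phi_numeric = np.exp(t * integral)

# known closed form of the Gamma subordinator's characteristic function
phi_closed = (1.0 - 1j * xi / beta) ** (-alpha * t)

print(abs(phi_numeric - phi_closed))  # small: agreement up to discretization error
```

Truncating the integral at \(y=60\) and starting at \(y=10^{-8}\) is harmless here since the integrand is bounded near zero and decays exponentially.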

2.2 Gaussian Random Fields

Let \(\mathscr {D}\subset \mathbb {R}^d\) be a spatial domain. A random field \(R=(R(\underline{x}),~\underline{x}\in \mathscr {D})\) is a family of random variables on the probability space \((\Omega ,\mathscr {F},\mathbb {P})\). In our approach to extend Lévy processes on higher-dimensional parameter domains, one important component is given by the Gaussian random field.

Definition 2.2

(see (Adler and Taylor 2007, Sc. 1.2)) A random field \(W:\Omega \times \mathscr {D} \rightarrow \mathbb {R}\) on a d-dimensional domain \(\mathscr {D}\subset \mathbb {R}^d\) is said to be a Gaussian random field (GRF) if, for any \(\underline{x}^{(1)},\dots ,\underline{x}^{(n)} \in \mathscr {D}\) with \(n\in \mathbb {N}\), the n-dimensional random variable \((W(\underline{x}^{(1)}),\dots ,W(\underline{x}^{(n)}))\) is multivariate Gaussian distributed. For a GRF W and arbitrary points \(\underline{x}^{(1)},~\underline{x}^{(2)}\in \mathscr {D}\), we define the mean function by \(\mu _W(\underline{x}^{(1)}):=\mathbb {E}(W(\underline{x}^{(1)}))\) and the covariance function by

$$\begin{aligned} q_W(\underline{x}^{(1)},\underline{x}^{(2)}):=\mathbb {E}\big ((W(\underline{x}^{(1)})-\mu _W(\underline{x}^{(1)}))(W(\underline{x}^{(2)})-\mu _W(\underline{x}^{(2)}))\big ). \end{aligned}$$

The GRF W is called centered if \(\mu _W(\underline{x}^{(1)})=0\) for all \(\underline{x}^{(1)}\in \mathscr {D}\).

Note that every Gaussian random field is determined uniquely by its mean and covariance function. We denote by \(Q:\mathscr {L}^2(\mathscr {D})\rightarrow \mathscr {L}^2(\mathscr {D})\) the covariance operator of W which is, for \(\psi \in \mathscr {L}^2(\mathscr {D})\), defined by

$$\begin{aligned} Q(\psi )(\underline{x})=\int _{\mathscr {D}}q_W(\underline{x},\underline{y})\psi (\underline{y})d\underline{y} \text {, for } \underline{x}\in \mathscr {D}. \end{aligned}$$

Here, \(\mathscr {L}^2(\mathscr {D})\) denotes the set of all square integrable functions over \(\mathscr {D}\). Further, if \(\mathscr {D}\) is compact, there exists a decreasing sequence \((\lambda _i,~i\in \mathbb {N})\) of real eigenvalues of Q with corresponding eigenfunctions \((e_i,~i\in \mathbb {N})\subset \mathscr {L}^2(\mathscr {D})\) which form an orthonormal basis of \(\mathscr {L}^2(\mathscr {D})\) (see (Adler and Taylor 2007, Section 3.2) and (Werner 2011, Theorem VI.3.2 and Chapter II.3)). The GRF W is called stationary if the mean function \(\mu _W\) is constant and the covariance function \(q_W(\underline{x}^{(1)},\underline{x}^{(2)})\) only depends on the difference \(\underline{x}^{(1)}-\underline{x}^{(2)}\) of the values \(\underline{x}^{(1)},~\underline{x}^{(2)}\in \mathscr {D}\). Further, the stationary GRF W is called isotropic if the covariance function \(q_W(\underline{x}^{(1)},\underline{x}^{(2)})\) only depends on the Euclidean length \(\Vert \underline{x}^{(1)}-\underline{x}^{(2)}\Vert _2\) of the difference of the values \(\underline{x}^{(1)},~\underline{x}^{(2)}\in \mathscr {D}\) (see Adler and Taylor (2007), p. 102 and p. 115).

3 The Subordinated Gaussian Random Field

Throughout the rest of this paper, let \(d\in \mathbb {N}\) be a natural number with \(d\ge 2\) and \(T_1,\dots ,T_d>0\) be positive values. We define the horizon vector \(\mathbb {T}:=(T_1,\dots ,T_d)\) and consider the spatial domain \([0,\mathbb {T}]_d:=[0,T_1]\times \dots \times [0,T_d]\subset \mathbb {R}^d\). In the following, we will also use the notation \((0,\mathbb {T}]_d:=(0,T_1]\times \dots \times (0,T_d]\). After a short motivation we define next the subordinated field and show that it is indeed measurable.

3.1 Motivation: The Subordinated Brownian Motion

In order to motivate the novel subordination approach for the extension of Lévy processes, we shortly repeat the main ideas of the subordinated Brownian motion which is defined as a Lévy-time-changed Brownian motion: Let \(B=(B(t),~t\in \mathbb {R}_+)\) be a Brownian motion and \(l=(l(t),~t\in \mathbb {R}_+)\) be a subordinator. The subordinated Brownian motion is then defined to be the process

$$\begin{aligned} L(t):=B(l(t)), ~t\in \mathbb {R}_+. \end{aligned}$$

It follows from (Applebaum 2009, Theorem 1.3.25) that the process L is again a Lévy process. Note that the class of subordinated Brownian motions is a rich class of processes with great distributional flexibility. For example, the well-known Generalized Hyperbolic Lévy process can be represented as a NIG-subordinated Brownian motion (see Barth and Stein (2018b) and especially Lemma 4.1 therein).
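As a quick numerical illustration (assuming a Gamma subordinator, a standard example not fixed in the text): conditioning on \(l(t)\) gives \(B(l(t))\,|\,l(t)\sim \mathscr {N}(0,l(t))\), so \(\mathbb {E}(L(t))=0\) and \(\mathrm {Var}(L(t))=\mathbb {E}(l(t))\).

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, beta, t, n = 4.0, 4.0, 1.0, 200_000

# subordinated Brownian motion L(t) = B(l(t)) with a Gamma subordinator:
# l(t) ~ Gamma(alpha * t, beta), and B(l(t)) | l(t) ~ N(0, l(t))
lt = rng.gamma(shape=alpha * t, scale=1.0 / beta, size=n)
L = rng.standard_normal(n) * np.sqrt(lt)

# conditioning gives E[L(t)] = 0 and Var(L(t)) = E[l(t)] = alpha * t / beta
print(L.mean(), L.var())
```

The empirical mean and variance match the conditional computation up to Monte Carlo error.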

3.2 The Definition of the Subordinated Gaussian Random Field

Let \(W=(W(\underline{x}),~\underline{x}=(x_1,\dots ,x_d)\in \mathbb {R}_+^d)\) be a GRF such that W is \(\mathscr {F}\otimes \mathscr {B}(\mathbb {R}_+^d)-\mathscr {B}(\mathbb {R})\)-measurable. We denote by \(\mu _W:\mathbb {R}_+^d\rightarrow \mathbb {R}\) the mean function and by \(q_W:\mathbb {R}_+^d\times \mathbb {R}_+^d\rightarrow \mathbb {R}\) the covariance function of W. Let \(l_k=(l_k(x),~x\in [0,T_k])\) be independent Lévy subordinators with triplets \((\gamma _k,0,\nu _k)\), for \(k\in \{1, \dots , d\}\), corresponding to representation (1). Further, we assume that the Lévy subordinators are stochastically independent of the GRF W. We consider the random field

$$\begin{aligned} L:\Omega \times [0,\mathbb {T}]_d\rightarrow \mathbb {R} \text { with } L(x_1,\dots ,x_d):=W(l_1(x_1),\dots ,l_d(x_d)) \text {, for } (x_1,\dots ,x_d)\in [0,\mathbb {T}]_d, \end{aligned}$$

and call it subordinated Gaussian random field (subordinated GRF).

Remark 3.1

Note that assuming that W has continuous paths is sufficient to ensure that W is a jointly measurable function since W is a Carathéodory function in this case (see (Aliprantis and Border 2006, Lemma 4.51)). A sufficient condition for the pathwise continuity of GRFs is given, for example, by (Adler and Taylor 2007, Theorem 1.4.1) (see also the discussion in (Adler and Taylor 2007, Section 1.3, p. 13)). A specific example for a class of GRFs with at least continuous samples is given by the Matérn GRFs: for a given smoothness parameter \(\nu > \frac{1}{2}\), a correlation parameter \(r>0\) and a variance parameter \(\sigma ^2>0\) the Matérn-\(\nu\) covariance function on \(\mathbb {R}_+^d\times \mathbb {R}_+^d\) is given by \(q_W^M(\underline{x},\underline{y})=\rho _M(\Vert \underline{x}-\underline{y}\Vert _2)\) with

$$\begin{aligned} \rho _M(s) = \sigma ^2 \frac{2^{1-\nu }}{\Gamma (\nu )}\Big (\frac{2s\sqrt{\nu }}{r}\Big )^\nu K_\nu \Big (\frac{2s\sqrt{\nu }}{r}\Big ), \text { for }s\ge 0, \end{aligned}$$

where \(\Gamma (\cdot )\) is the Gamma function and \(K_\nu (\cdot )\) is the modified Bessel function of the second kind (see (Graham et al. 2015, Section 2.2 and Proposition 1)). Here, \(\Vert \cdot \Vert _2\) denotes the Euclidean norm on \(\mathbb {R}^d\). A Matérn-\(\nu\) GRF is a centered GRF with covariance function \(q_W^M\).
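The Matérn covariance above can be evaluated directly. The following sketch implements \(\rho _M\) in the text's parametrization and checks the special case \(\nu =\frac{1}{2}\), where \(K_{1/2}(z)=\sqrt{\pi /(2z)}\,e^{-z}\) reduces \(\rho _M\) to the exponential covariance \(\sigma ^2 e^{-\sqrt{2}s/r}\); the function name is illustrative.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(s, nu=1.5, r=0.5, sigma2=1.0):
    """Matérn-nu covariance rho_M(s) in the text's parametrization, s >= 0."""
    s = np.atleast_1d(np.asarray(s, dtype=float))
    out = np.full(s.shape, sigma2)          # rho_M(0) = sigma^2 by continuity
    pos = s > 0
    z = 2.0 * s[pos] * np.sqrt(nu) / r
    out[pos] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)
    return out

s = np.array([0.0, 0.1, 0.3, 1.0])
# for nu = 1/2 the formula collapses to the exponential covariance
exact = np.exp(-np.sqrt(2.0) * s / 0.5)
print(np.max(np.abs(matern_cov(s, nu=0.5) - exact)))  # close to machine precision
```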

The subordinated GRF constructed above is one possible way to extend the concept of the subordinated Brownian motion to higher-dimensional parameter domains. However, the construction by subordination in each spatial variable is not confined to this specific setting: it is not limited to the case that W is a GRF and the \(l_k\) are Lévy subordinators. One could consider more general random fields \((R(\underline{x}),~\underline{x}\in \mathbb {R}_+^d)\) subordinated by d positive-valued stochastic processes. In general, however, it might be difficult or impossible to investigate theoretical properties of the resulting random field. In contrast, the subordinated GRF inherits several properties from the GRF and the Lévy subordinators, as investigated in the following sections.
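To illustrate the construction, the following sketch draws one sample of a Gamma-subordinated Matérn-1.5 GRF on a grid of \([0,1]^2\), in the spirit of Figure 1. The Gamma subordinator parameters and the Cholesky-based sampling of the GRF on the (random) subordinated grid are illustrative choices, not prescribed by the text; for \(\nu =1.5\) the Matérn covariance has the closed form \(\sigma ^2(1+z)e^{-z}\) with \(z=2\sqrt{1.5}\,s/r\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15                                # grid points per axis
x = np.linspace(0.0, 1.0, n)

def matern15_cov(P, sigma2=1.0, r=0.5):
    # Matérn-1.5 covariance matrix for a point set P (m x 2), using the
    # closed form of K_{3/2}: rho(s) = sigma2 * (1 + z) * exp(-z)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    z = 2.0 * np.sqrt(1.5) * d / r
    return sigma2 * (1.0 + z) * np.exp(-z)

def gamma_subordinator_path(x, alpha=5.0, beta=5.0):
    # Gamma subordinator sampled on the grid x via independent Gamma increments
    inc = rng.gamma(shape=alpha * np.diff(x), scale=1.0 / beta)
    return np.concatenate([[0.0], np.cumsum(inc)])

l1, l2 = gamma_subordinator_path(x), gamma_subordinator_path(x)

# evaluate the GRF at the subordinated grid points (l1(x_i), l2(x_j))
P = np.array([[u, v] for u in l1 for v in l2])
C = matern15_cov(P) + 1e-6 * np.eye(n * n)    # jitter for numerical stability
field = (np.linalg.cholesky(C) @ rng.standard_normal(n * n)).reshape(n, n)
```

Large subordinator increments translate into large spatial jumps of `field` across the corresponding grid lines, which is exactly the mechanism visible in Figure 1.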

3.3 Measurability

In Subsection 3.2 we introduced the subordinated GRF L as a random field. Strictly speaking, we therefore have to verify that point evaluations of the field L are random variables, meaning that we have to ensure measurability of these objects. Note that this is not trivial, since, due to the construction of L, the Lévy subordinators induce an additional \(\omega\)-dependence in the spatial direction of the GRF W. The following lemma proves joint measurability of L.

Lemma 3.2

Let L be a subordinated GRF on the spatial domain \([0,\mathbb {T}]_d\) as constructed in Subsection 3.2, where we use the notation \(\underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\). The mapping

$$\begin{aligned} L:\Omega \times [0,\mathbb {T}]_d\rightarrow \mathbb {R},\quad (\omega ,\underline{x})\mapsto W(\omega ,l_1(\omega ,x_1)\dots ,l_d(\omega ,x_d)), \end{aligned}$$

is \(\mathscr {F}\otimes \mathscr {B}([0,\mathbb {T}]_d)-\mathscr {B}(\mathbb {R})\)-measurable.

Proof

For any \(k\in \{1,\dots ,d\}\), the Lévy process \(l_k\) has càdlàg paths and, hence, the mapping \(l_k:\Omega \times [0,T_k]\rightarrow \mathbb {R}_+,\) is \(\mathscr {F}\otimes \mathscr {B}([0,T_k])-\mathscr {B}(\mathbb {R}_+)\)-measurable (see (Protter 2004, Chapter 1, Theorem 30) and (Sato 2013, Section 30)). We consider domain-extended versions of the processes: for any \(k\in \{1,\dots ,d\}\), we define the mapping \(\tilde{l}_k(\omega ,\underline{x}):= l_k(\omega ,x_k),\text { for }(\omega ,\underline{x})\in \Omega \times [0,\mathbb {T}]_d\), which is \(\mathscr {F}\otimes \mathscr {B}([0,\mathbb {T}]_d)-\mathscr {B}(\mathbb {R}_+)\)-measurable by (Aliprantis and Border 2006, Lemma 4.51). An application of (Aliprantis and Border 2006, Lemma 4.49) yields the \(\mathscr {F}\otimes \mathscr {B}([0,\mathbb {T}]_d)-\mathscr {B}(\mathbb {R}_+^d)\)-measurability of the mapping

$$\begin{aligned} \Omega \times [0,\mathbb {T}]_d \rightarrow \mathbb {R}_+^d,\quad (\omega ,\underline{x})&\mapsto (\tilde{l}_1(\omega ,\underline{x}),\dots ,\tilde{l}_d(\omega ,\underline{x}))= (l_1(\omega ,x_1),\dots ,l_d(\omega ,x_d)). \end{aligned}$$

Further, the mapping \((\omega ,\underline{x})\mapsto \omega\) is \(\mathscr {F}\otimes \mathscr {B}([0,\mathbb {T}]_d)-\mathscr {F}\)-measurable and, hence, (Aliprantis and Border 2006, Lemma 4.49) yields that the mapping

$$\begin{aligned} \Omega \times [0,\mathbb {T}]_d&\rightarrow \Omega \times \mathbb {R}_+^d,\\ (\omega ,\underline{x})&\mapsto (\omega ,(\tilde{l}_1(\omega ,\underline{x}),\dots ,\tilde{l}_d(\omega ,\underline{x})))= (\omega , (l_1(\omega ,x_1),\dots ,l_d(\omega ,x_d))), \end{aligned}$$

is \(\mathscr {F}\otimes \mathscr {B}([0,\mathbb {T}]_d)-\mathscr {F}\otimes \mathscr {B}(\mathbb {R}_+^d)\)-measurable. By assumption, the GRF W is \(\mathscr {F}\otimes \mathscr {B}(\mathbb {R}_+^d)-\mathscr {B}(\mathbb {R})\)-measurable and, therefore, the mapping

$$\begin{aligned} L:\Omega \times [0,\mathbb {T}]_d\rightarrow \mathbb {R},\quad (\omega ,\underline{x})\mapsto W(\omega ,(l_1(\omega ,x_1)\dots ,l_d(\omega ,x_d))), \end{aligned}$$

is \(\mathscr {F}\otimes \mathscr {B}([0,\mathbb {T}]_d)-\mathscr {B}(\mathbb {R})\)-measurable as a composition of measurable functions. \(\square\)

4 The Pointwise Distribution of the Subordinated GRF and the Lévy-Khinchin-formula

In this section we prove a Lévy-Khinchin-type formula for the subordinated GRF in order to have access to the pointwise distribution. This is important, for example, in view of statistical fitting and other applications. In order to be able to do so we need the following technical lemma about the expectation of the composition of independent random variables, which is a generalization of the corresponding assertion given in the proof of (Sato 2013, Theorem 30.1).

Lemma 4.1

Let \(W:\Omega \times \mathbb {R}_+^d\rightarrow \mathbb {R}\) be a \(\mathbb {P}-a.s.\) continuous random field and let \(Z:\Omega \rightarrow \mathbb {R}_+^d\) be a \(\mathbb {R}_+^d\)-valued random variable which is independent of the random field W. Further, let \(g:\mathbb {R}\rightarrow \mathbb {R}\) be a deterministic, continuous function. It holds

$$\begin{aligned} \mathbb {E}(g(W(Z)))=\mathbb {E}(m(Z)), \end{aligned}$$

where \(m(z):=\mathbb {E}(g(W(z)))\) for deterministic \(z\in \mathbb {R}_+^d\).

Proof

Step 1: Assume that g is globally bounded. We denote by \(C_b(\mathbb {R}_+^d)\) the space of continuous, bounded functions on \(\mathbb {R}_+^d\) equipped with the usual supremum norm. We define the function

$$\begin{aligned} F:C_b(\mathbb {R}_+^d) \times \mathbb {R}_+^d \rightarrow \mathbb {R},~(f,\underline{x})\mapsto g(f(\underline{x})), \end{aligned}$$

which is continuous and, hence, Borel-measurable. For a fixed threshold \(A>0\), we define the cut function \(\chi _A(x):=\min (x,A)\), for \(x\in \mathbb {R}\), and consider the random field \(W_A(\omega ,\underline{x}):=W(\omega ,\chi _A(x_1),\dots ,\chi _A(x_d))\), for \(\omega \in \Omega\) and \(\underline{x}=(x_1,\dots ,x_d)\in \mathbb {R}_+^d\). Since W has continuous paths and \([0,A]^d\) is compact, \(W_A\) has paths in \(C_b(\mathbb {R}_+^d)\) and we have the pathwise identity \(g(W_A(Z))=F(W_A,Z)\). The independence of W and Z together with (Da Prato and Zabczyk 2014, Proposition 1.12) yields

$$\begin{aligned} \mathbb {E}(g(W_A(Z))) = \mathbb {E}(F(W_A,Z)) = \mathbb {E}(\mathbb {E}(F(W_A,Z)\,|\, \sigma (Z))) = \mathbb {E}(m_A(Z)), \end{aligned}$$

where \(m_A(z) := \mathbb {E}(g(W_A(z)))\) for \(z\in \mathbb {R}_+^d\). Further, since g is continuous and bounded and W has continuous paths, we obtain the pathwise convergence \(g(W_A(Z))\rightarrow g(W(Z))\) and \(m_A(Z)\rightarrow m(Z)\), for \(A\rightarrow \infty\). Using again the boundedness of g and the dominated convergence theorem, we obtain \(\mathbb {E}(g(W(Z))) = \mathbb {E}(m(Z))\).

Step 2: In this step we assume that \(g(x)\ge 0\) on \(\mathbb {R}\) but g does not necessarily have to be bounded. It follows that m is also non-negative on \(\mathbb {R}_+^d\). Since g and m are non-negative we obtain the \(\mathbb {P}-a.s.\) monotone convergence \(\chi _A(g(W(Z)))\rightarrow g(W(Z))\) for \(A\rightarrow +\infty\). We define \(m_A(z):=\mathbb {E}(\chi _A(g(W(z))))\), for \(z\in \mathbb {R}_+^d\), and obtain by the monotone convergence theorem

$$\begin{aligned} m_A(Z)\rightarrow m(Z) ~\mathbb {P}-a.s. \text { for } A\rightarrow +\infty . \end{aligned}$$

Using Step 1 and the monotone convergence theorem we obtain:

$$\begin{aligned} \mathbb {E}(g(W(Z))) = \underset{A\rightarrow +\infty }{\lim } \mathbb {E}(\chi _A(g(W(Z)))) = \underset{A\rightarrow +\infty }{\lim } \mathbb {E}(m_A(Z)) = \mathbb {E}(m(Z)). \end{aligned}$$

Step 3: Finally, we consider an arbitrary continuous function \(g:\mathbb {R}\rightarrow \mathbb {R}\). We write \(g^+=\max \{0,g\},~g^-=-\min \{0,g\}\) as well as \(\tilde{m}^+(z)=\mathbb {E}(g^+(W(z))),~\tilde{m}^-(z)=\mathbb {E}(g^-(W(z)))\) for \(z\in \mathbb {R}_+^d\) and obtain the additive decomposition \(g(x)=g^+(x) - g^-(x)\) for \(x\in \mathbb {R}\) and \(m(z)=\tilde{m}^+(z) - \tilde{m}^-(z)\) for \(z\in \mathbb {R}_+^d\) by the additivity of the integral with respect to the integration domain. We apply Step 2 to obtain

$$\begin{aligned} \mathbb {E}(g(W(Z))) = \mathbb {E}(g^+(W(Z))) - \mathbb {E}(g^-(W(Z))) = \mathbb {E}(\tilde{m}^+(Z)) - \mathbb {E}(\tilde{m}^-(Z)) = \mathbb {E}(m(Z)), \end{aligned}$$

which proves the assertion. \(\square\)

Remark 4.2

We emphasize that the assumptions on the random field W and the random variable Z in Lemma 4.1 are very mild. In particular, we do not assume the existence of continuous densities of the random field W or the random vector Z. Further, we mention that the assertion obviously extends to deterministic, bounded and continuous complex-valued functions \(g:\mathbb {R}\rightarrow \mathbb {C}\).

A GRF W is pointwise normally distributed with parameters specified by the mean \(\mu _W\) and covariance function \(q_W\). Using this together with Lemma 4.1 and Remark 4.2 we obtain the following semi-explicit formula for the pointwise characteristic function of a subordinated GRF.

Corollary 4.3

Let W be a \(\mathbb {P}-a.s.\) continuous GRF on \(\mathbb {R}_+^d\) with mean function \(\mu _W:\mathbb {R}_+^d\rightarrow \mathbb {R}\) and covariance function \(q_W:\mathbb {R}_+^d\times \mathbb {R}_+^d\rightarrow \mathbb {R}\). Further, let \(l_k=(l_k(t),~t\in [0,T_k])\), for \(k=1,\dots ,d\), be independent Lévy subordinators which are independent of W. The pointwise characteristic function of the subordinated GRF defined by \(L(\underline{x}):=W(l_1(x_1),\dots ,l_d(x_d))\), for \(\underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\), admits the formula

$$\begin{aligned} \mathbb {E}(\exp (i\xi L(\underline{x})))&=\mathbb {E}(\exp (i\xi W(l_1(x_1),\dots ,l_d(x_d)))) \\&= \mathbb {E}\Big (\exp \big (i\xi \mu _W(l_1(x_1),\dots ,l_d(x_d)) - \frac{1}{2}\xi ^2 \sigma _W^2(l_1(x_1),\dots ,l_d(x_d))\big )\Big ), \end{aligned}$$

for \(\xi \in \mathbb {R}\) and any fixed point \(\underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\). Here, the variance function \(\sigma _W^2:\mathbb {R}_+^d\rightarrow \mathbb {R}_+\) is given by \(\sigma _W^2(\underline{x}):=q_W(\underline{x},\underline{x})\) for \(\underline{x}\in \mathbb {R}_+^d\).
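For a centered GRF the formula reduces to \(\mathbb {E}(\exp (-\frac{1}{2}\xi ^2\sigma _W^2(l_1(x_1),\dots ,l_d(x_d))))\), which only involves the subordinators. The following Monte Carlo sketch (with \(d=2\), Gamma subordinators and a hypothetical variance function \(\sigma _W^2(\underline{x})=x_1+x_2\), all illustrative choices) compares this with a direct simulation of \(\exp (i\xi L(\underline{x}))\).

```python
import numpy as np

rng = np.random.default_rng(1)
xi, n = 1.3, 400_000
alpha, beta = 3.0, 3.0        # Gamma subordinators (illustrative)
x1, x2 = 0.6, 0.9             # fixed evaluation point

# subordinator values l_1(x1), l_2(x2) (independent Gamma marginals)
l1 = rng.gamma(alpha * x1, 1.0 / beta, size=n)
l2 = rng.gamma(alpha * x2, 1.0 / beta, size=n)

# direct simulation: centered GRF with sigma_W^2(x) = x_1 + x_2 pointwise
W = rng.standard_normal(n) * np.sqrt(l1 + l2)
lhs = np.mean(np.exp(1j * xi * W))

# semi-explicit formula: Gaussian expectation computed in closed form
rhs = np.mean(np.exp(-0.5 * xi ** 2 * (l1 + l2)))

print(abs(lhs - rhs))   # small: both estimate the same characteristic function
```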

In the one-dimensional case, the Lévy-Khinchin formula gives an explicit representation of the pointwise characteristic function of a Lévy process. This representation also applies to the subordinated Brownian motion, since it is itself a Lévy process (see Subsection 3.1). Note that in the construction of the subordinated Brownian motion one cannot replace the Brownian motion by a general one-parameter GRF on \(\mathbb {R}_+\) without losing the validity of the Lévy-Khinchin formula. Hence, in the case of a subordinated GRF on a higher-dimensional parameter space, it is natural that we have to restrict the class of admissible GRFs in order to obtain a Lévy-Khinchin-type formula which is the d-dimensional analogue of Theorem 2.1. We recap that the pointwise characteristic function of a standard Brownian motion B is given by

$$\begin{aligned} \phi _{B(t)}(\xi )=\mathbb {E}(\exp (i\xi B(t)))=\exp \big (-\frac{1}{2}t\xi ^2\big ) \text {, for }\xi \in \mathbb {R}, \end{aligned}$$

for \(t\ge 0\). Note that the Brownian motion is not characterized by this property, i.e. not every zero-mean GRF on \(\mathbb {R}_+\) with the above pointwise characteristic function is a Brownian motion, since this specific characteristic function can be attained by different covariance functions, whereas the covariance function of the Brownian motion is given uniquely by \(q_{BM}(s,t)=\text {Cov}(B(s),B(t))=\min \{s,t\}\) for \(s,t\ge 0\) (see for example (Schoutens 2003, Section 3.2.2)). Motivated by this, we impose the following assumptions on the GRF on \(\mathbb {R}_+^d\).

Assumption 4.4

Let \(W=(W(\underline{x}),~\underline{x}\in \mathbb {R}_+^d)\) be a zero-mean continuous GRF. We assume that there exists a constant \(\sigma >0\) such that the pointwise characteristic function of W is given by

$$\begin{aligned} \phi _{W(\underline{x})}(\xi )=\mathbb {E}(\exp (i\xi W(\underline{x}))) = \exp \big (-\frac{1}{2}\sigma ^2\xi ^2(x_1+\dots +x_d)\big ), \end{aligned}$$

for \(\xi \in \mathbb {R}\) and every \(\underline{x}=(x_1,\dots ,x_d)\in \mathbb {R}_+^d\).

Remark 4.5

Note that for a zero-mean, continuous and stationary GRF \(\tilde{W}=(\tilde{W}(\underline{x}),~\underline{x}\in \mathbb {R}_+^d)\), the GRF W defined by

$$\begin{aligned} W(\underline{x}):=\sqrt{x_1+\dots +x_d}\tilde{W}(\underline{x}),\text { for }\underline{x}=(x_1,\dots ,x_d)\in \mathbb {R}_+^d, \end{aligned}$$

satisfies Assumption 4.4: every point evaluation \(\tilde{W}(\underline{x})\) is \(\mathscr {N}(0,\sigma ^2)\)-distributed for some fixed \(\sigma ^2>0\) by stationarity, so \(W(\underline{x})\sim \mathscr {N}(0,\sigma ^2(x_1+\dots +x_d))\), which yields precisely the asserted pointwise characteristic function.

We are now able to derive the Lévy-Khinchin-type formula for the subordinated GRF.

Theorem 4.6 (Lévy-Khinchin-type formula)

Let Assumption 4.4 hold. We assume that independent Lévy subordinators \(l_k=(l_k(x),~x\in [0,T_k])\) with Lévy triplets \((\gamma _k,0,\nu _k)\), for \(k=1,\dots ,d\), corresponding to representation (1) are given. Further, we assume that these processes are independent of the GRF W. We consider the subordinated GRF defined by \(L:\Omega \times [0,\mathbb {T}]_d\rightarrow \mathbb {R} \text { with } L(\underline{x}):=W(l_1(x_1),\dots ,l_d(x_d)) \text { for } \underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\). The pointwise characteristic function of the random field L is, for any \(\underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\), given by

$$\begin{aligned}\phi _{L(\underline{x})}(\xi )&=\mathbb {E}(\exp (i\xi W(l_1(x_1),\dots , l_d(x_d))))\\&= \exp \Big (-(x_1,\dots ,x_d) \cdot \big (\frac{\sigma ^2\xi ^2}{2}(\gamma _1,\dots , \gamma _d)^t + \int _{\mathbb {R}\setminus \{0\}} 1 - e^{i\xi z} + i\xi z \mathbbm {1}_{\{|z|\le 1\}}(z)\,\nu _{ext}(dz)\big )\Big ), \end{aligned}$$

for \(\xi \in \mathbb {R}\). Here, the jump measure \(\nu _{ext}\) is defined through

$$\begin{aligned} \nu _{ext}([a,b]):=\left( \begin{array}{c} \nu _1^\#([a,b]) \\ \vdots \\ \nu _d^\#([a,b]) \end{array}\right) , \end{aligned}$$

for \(a,b\in \mathbb {R}\) where the Lévy measure \(\nu _k^\#\), for \(k =1,\dots ,d\) and \(a,b\in \mathbb {R}\), is given by

$$\begin{aligned} \nu _k^\#([a,b]):=\int _0^\infty \int _a^b\frac{1}{\sqrt{2\pi \sigma ^2t}}\exp \left( -\frac{x^2}{2\sigma ^2t}\right) dx\,\nu _k(dt). \end{aligned}$$
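To make the measure \(\nu _k^\#\) tangible: for a Gamma subordinator with Lévy density \(\alpha e^{-\beta t}/t\) and \(\sigma ^2=1\), the double integral above can be carried out explicitly and yields the variance-gamma Lévy density \(\alpha e^{-\sqrt{2\beta }|x|}/|x|\). The following sketch (with illustrative parameters; the Gamma example is not part of the text) checks this numerically on an interval \([a,b]\) away from the origin.

```python
import numpy as np

alpha, beta = 2.0, 3.0     # Gamma subordinator, Lévy density alpha*exp(-beta*t)/t
a, b = 0.5, 1.5            # interval [a, b] away from the origin

t = np.linspace(1e-6, 40.0, 20_000)
x = np.linspace(a, b, 401)

# inner Gaussian integral int_a^b (2*pi*t)^(-1/2) * exp(-x^2/(2t)) dx for each t
dens = np.exp(-x[None, :] ** 2 / (2.0 * t[:, None])) / np.sqrt(2.0 * np.pi * t[:, None])
inner = np.sum((dens[:, :-1] + dens[:, 1:]) / 2.0, axis=1) * (x[1] - x[0])

# outer integral against nu(dt) = alpha * exp(-beta * t) / t dt
w = inner * alpha * np.exp(-beta * t) / t
nu_sharp = np.sum((w[:-1] + w[1:]) / 2.0 * np.diff(t))

# closed form: variance-gamma Lévy density alpha * exp(-sqrt(2*beta)*|x|) / |x|
fx = alpha * np.exp(-np.sqrt(2.0 * beta) * x) / x
nu_vg = np.sum((fx[:-1] + fx[1:]) / 2.0) * (x[1] - x[0])

print(nu_sharp, nu_vg)     # the two values agree up to discretization error
```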

Proof

It follows by (Sato 2013, Theorem 30.1 and Lemma 30.3) that the measures \(\nu _k^\#\) are Lévy measures for \(k=1,\dots ,d\). For notational simplicity we prove the assertion for \(d=2\); for general \(d\in \mathbb {N}\) the assertion follows by the same arguments.

Claim 1: For a Lévy measure \(\nu\) on \((\mathbb {R}_+,\mathscr {B}(\mathbb {R}_+))\) it holds for every \(\xi \in \mathbb {R}\):

$$\begin{aligned} \int _0^\infty \big (\exp (-\frac{\xi ^2}{2}y)-1\big )\,\nu (dy) = \int _{\mathbb {R}\setminus \{0\}} \exp (i\xi x) - 1 - i\xi x \mathbbm {1}_{\{|x|\le 1\}}(x)\,\nu ^\sharp (dx), \end{aligned}$$

where the measure \(\nu ^\sharp\) is defined by \(\nu ^\sharp (\mathscr {I})=\int _0^\infty \int _a^b \frac{1}{\sqrt{2\pi t}} \exp (-\frac{x^2}{2t})dx \nu (dt)\), for \(\mathscr {I}=[a,b]\) with \(a,b\in \mathbb {R}\). We use the notation \(f_s(x):= \frac{1}{\sqrt{2\pi s}}\exp (-\frac{x^2}{2s})\) for \(s>0\) and \(x\in \mathbb {R}\) and derive this equation by a direct calculation using the definition of the measure \(\nu ^\sharp\):

$$\begin{aligned} \int _{\mathbb {R}\setminus \{0\}} \exp (i\xi x) - 1&- i\xi x \mathbbm {1}_{\{|x|\le 1\}}(x)\nu ^\sharp (dx)\\ {}&= \int _{\mathbb {R}\setminus \{0\}}(\exp (i\xi x) - 1 - i\xi x \mathbbm {1}_{\{|x|\le 1\}}(x)) \int _0^\infty f_s(x)\nu (ds)dx\\&=\int _0^\infty \int _{\mathbb {R}\setminus \{0\}}\exp (i\xi x)f_s(x)dx - 1 - i\xi \int _{-1}^1 xf_s(x)dx \nu (ds)\\&=\int _0^\infty \exp (-\frac{s\xi ^2}{2}) - 1\nu (ds). \end{aligned}$$

In the last step we used that the characteristic function of a \(\mathscr {N}(0,s)\)-distributed random variable is given by \(\phi (\xi )=\exp (-\frac{s\xi ^2}{2})\) for \(\xi \in \mathbb {R}\) and \(s>0\). Further, we used the fact that \(f_s'(x)=-\frac{x}{s}f_s(x)\) to see that

$$\begin{aligned} \int _{-1}^1 xf_s(x)\,dx=-s(f_s(1)-f_s(-1))=0. \end{aligned}$$

Claim 2: (see (Applebaum 2009, p. 53)) For a Lévy subordinator l with triplet \((\gamma , 0, \nu )\) it holds

$$\begin{aligned} \mathbb {E}(\exp (-\xi l(t))) = \exp (-t(\gamma \xi + \int _0^\infty (1-\exp (-\xi y))\nu (dy))), \end{aligned}$$

for \(t\ge 0\) and \(\xi >0\).

With these two assertions at hand we can now prove the Lévy-Khinchin-type formula. The case \(\xi =0\) is trivial since both sides equal 1 in this case. Let \((x,y)\in [0,\mathbb {T}]_2\) and \(0\ne \xi \in \mathbb {R}\) be fixed. Using Lemma 4.1 and Remark 4.2 with \(g(\cdot )=\exp (i\xi \cdot )\) and \(Z=(l_1(x), l_2(y))\) we calculate

$$\begin{aligned} \mathbb {E}(\exp (i\xi W(l_1(x),l_2(y)))) = \mathbb {E}(m(l_1(x),l_2(y))) \end{aligned}$$

where

$$\begin{aligned} m(x',y'):=\mathbb {E}(\exp (i\xi W(x',y'))) = \exp (-\frac{1}{2}\sigma ^2\xi ^2(x'+y')) \text { for }(x',y')\in \mathbb {R}_+^2, \end{aligned}$$

where we used Assumption 4.4. Therefore, using the independence of the processes \(l_1\) and \(l_2\) together with Claim 2 we obtain

$$\begin{aligned} \phi _{L(x,y)}(\xi )&= \mathbb {E}(\exp (-\frac{1}{2}\sigma ^2\xi ^2l_1(x)))\, \mathbb {E}(\exp (-\frac{1}{2}\sigma ^2\xi ^2l_2(y)))\\&=\exp \Big (-x\Big (\gamma _1 \frac{\sigma ^2\xi ^2}{2} + \int _0^\infty (1-\exp (-\frac{\xi ^2}{2}z))\hat{\nu }_1(dz)\Big )\Big ) \\&~~~\cdot \exp \Big (-y\Big (\gamma _2 \frac{\sigma ^2\xi ^2}{2} + \int _0^\infty (1-\exp (-\frac{\xi ^2}{2}z))\hat{\nu }_2(dz)\Big )\Big ), \end{aligned}$$

where we define the (Lévy-)measures \(\hat{\nu }_1\) and \(\hat{\nu }_2\) by \(\hat{\nu }_k([a,b])=\nu _k([a/\sigma ^2,b/\sigma ^2])\) for \(a,~b\in \mathbb {R}_+\) and \(k=1,2\). Now, using Claim 1 we calculate

$$\begin{aligned} \phi _{L(x,y)}(\xi )=\exp \Big (&-x\Big (\gamma _1 \frac{\sigma ^2\xi ^2}{2} - \int _{\mathbb {R}\setminus \{0\}} \big (\exp (i\xi z) - 1 - i\xi z \mathbbm {1}_{\{|z|\le 1\}}(z)\big )\hat{\nu }_1^\sharp (dz)\Big ) \\&-y\Big (\gamma _2 \frac{\sigma ^2\xi ^2}{2} - \int _{\mathbb {R}\setminus \{0\}} \big (\exp (i\xi z) - 1 - i\xi z \mathbbm {1}_{\{|z|\le 1\}}(z)\big )\hat{\nu }_2^\sharp (dz)\Big )\Big ), \end{aligned}$$

where the measures \(\hat{\nu }_k^\sharp\) for \(k=1,2\) are given by:

$$\begin{aligned} \hat{\nu }_k^\sharp ([a,b])&=\int _0^\infty \int _a^b\frac{1}{\sqrt{2\pi t}}\exp (-\frac{x^2}{2t})dx\hat{\nu }_k(dt)\\&=\int _0^\infty \int _a^b \frac{1}{\sqrt{2\pi \sigma ^2 t}} \exp (-\frac{x^2}{2\sigma ^2 t})dx \nu _k(dt), \end{aligned}$$

for \(a,b\in \mathbb {R}\). This finishes the proof. \(\square\)

Using Theorem 4.6 together with the convolution theorem (see for example (Klenke 2013, Lemma 15.11 (iv))) we immediately obtain the following corollary.

Corollary 4.7

Let Assumption 4.4 hold. We assume d independent Lévy subordinators \(l_k=(l_k(x),~x\in [0,T_k])\) are given for \(k=1,\dots ,d\), which are independent of W and the corresponding Lévy triplets are given by \((\gamma _k,0,\nu _k)\) for \(k=1,\dots ,d\). We consider the subordinated GRF \(L:\Omega \times [0,\mathbb {T}]_d\rightarrow \mathbb {R}\) defined by \(L(\underline{x}):=W(l_1(x_1),\dots ,l_d(x_d)) \text {, for } \underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\). Further, we assume that independent Lévy processes \(\tilde{l}_k\) on \([0,T_k]\) are given with triplets \((0, \sigma ^2\gamma _k, \nu _k^\#)\) for \(k=1,\dots ,d\) in the sense of the one-dimensional Lévy-Khinchin formula, see Theorem 2.1. Here, the Lévy measure \(\nu _k^\#\) is defined by

$$\begin{aligned} \nu _k^\#([a,b]):=\int _0^\infty \int _a^b\frac{1}{\sqrt{2\pi \sigma ^2t}}\exp \left( -\frac{x^2}{2\sigma ^2t}\right) dx\,\nu _k(dt), \end{aligned}$$

for \(k =1,\dots , d\) and \(a,b\in \mathbb {R}\). The pointwise marginal distribution of the subordinated GRF satisfies

$$\begin{aligned} L(\underline{x}){\mathop {=}\limits ^{\mathscr {D}}}\tilde{l}_1(x_1) + \dots + \tilde{l}_d(x_d), \end{aligned}$$

for every \(\underline{x}=(x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\).

We point out that the case of stationary GRFs is excluded by Assumption 4.4. Therefore, we consider this situation in the following remark where we again assume \(d=2\) for notational simplicity.

Remark 4.8

Let W be a stationary, centered GRF with covariance function \(q_W((x,y),(x',y'))=\tilde{q}_W((x-x',y-y'))\), for \((x,y),~(x',y')\in \mathbb {R}_+^2\), and pointwise variance \(\sigma ^2:=\tilde{q}_W((0,0))>0\). Let \(l_1\) and \(l_2\) be independent Lévy subordinators, which are also independent of W. We obtain by Lemma 4.1 the following representation for the pointwise characteristic function of the subordinated random field defined by \(L(x,y):=W(l_1(x),l_2(y))\), for \((x,y)\in [0,\mathbb {T}]_2\):

$$\begin{aligned} \phi _{L(x,y)}(\xi )=\mathbb {E}(\exp (i\xi W(l_1(x),l_2(y))))=\mathbb {E}(m(l_1(x),l_2(y))), \end{aligned}$$

where

$$\begin{aligned} m(x',y')=\mathbb {E}(\exp (i\xi W(x',y')))=\exp \big (-\frac{1}{2}\sigma ^2\xi ^2\big ), \end{aligned}$$

which is a constant function in \((x',y')\). Therefore we obtain

$$\begin{aligned} \phi _{L(x,y)}(\xi )=\exp \big (-\frac{1}{2}\sigma ^2\xi ^2\big ), \end{aligned}$$

for \((x,y)\in [0,\mathbb {T}]_2\). Hence, in case of a stationary GRF, the subordinated GRF is pointwise normally distributed with variance \(\sigma ^2\).

We conclude this subsection with a remark on the given Lévy-Khinchin formula.

Remark 4.9

With the approach of subordinating GRFs on a higher-dimensional domain, we obtain a discontinuous Lévy-type random field and a Lévy-Khinchin formula which allows access to the pointwise distribution of the random field. Further, we obtain a parametrization of the class of subordinated random fields similar to the one available for Lévy processes on a one-dimensional parameter space: under the assumptions of Theorem 4.6, every subordinated GRF can be characterized by the tuple \((\sigma ^2,\gamma _1,\dots ,\gamma _d,\nu _{ext},q_W)\), where \(q_W:\mathbb {R}_+^d\times \mathbb {R}_+^d\rightarrow \mathbb {R}\) is the covariance function of the GRF. Moreover, the class of subordinated GRFs is linear in the sense that for the sum of two independent subordinated GRFs one can construct a single subordinated GRF with the same pointwise characteristic function.

5 Covariance Function

One advantage of the subordinated GRF is that the correlation between spatial points is accessible. The correlation structure is hereby determined by the covariance function of the underlying GRF and the specific choice of the subordinators. For statistical applications it is often important to reproduce or enforce a specific correlation structure when fitting random fields to physical phenomena. In this context the question arises whether one can find explicit analytical formulas for the covariance function of a subordinated Gaussian random field. This is explored in the following.

For notational simplicity we restrict the dimension to be \(d=2\) in this section but we point out that analogous results apply for dimensions \(d\ge 3\). A direct application of Lemma 4.1 yields the following corollary.

Corollary 5.1

Let W be a continuous, zero-mean GRF on \(\mathbb {R}_+^2\). Further, let \(l_1\) and \(l_2\) be two independent Lévy subordinators which are independent of W. Then the subordinated GRF L defined by \(L(x,y):=W(l_1(x),l_2(y))\), for \((x,y)\in \mathbb {R}_+^2\), is zero-mean with covariance function

$$\begin{aligned} q_L((x,y),(x',y')):=\mathbb {E}(L(x,y)L(x',y'))=\mathbb {E}\big (q_W((l_1(x),l_2(y)),(l_1(x'),l_2(y')))\big ), \end{aligned}$$

for \((x,y),~(x',y')\in \mathbb {R}_+^2\), where \(q_W:\mathbb {R}_+^2\times \mathbb {R}_+^2\rightarrow \mathbb {R}\) denotes the covariance function of the GRF W.

Proof

For \((x,y)\in [0,\mathbb {T}]_2\), we use Lemma 4.1 and the fact that the GRF W is centered to deduce \(\mathbb {E}(L(x,y))=\mathbb {E}(W(l_1(x),l_2(y)))=0\). Let \((x,y),~(x',y')\in [0,\mathbb {T}]_2\) be fixed. Another application of Lemma 4.1 with \(\tilde{W}(x_1,y_1,x_2,y_2):=W(x_1,y_1)\cdot W(x_2,y_2)\), \(g=\text {id}_\mathbb {R}\) and \(Z:=(l_1(x),l_2(y),l_1(x'),l_2(y'))\) yields the desired formula. \(\square\)
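As a sketch, Corollary 5.1 can be verified by Monte Carlo for an assumed concrete configuration: W a Brownian sheet, so that \(q_W((a,b),(c,d))=\min (a,c)\min (b,c\vee d)\big |_{\text{i.e. }\min (a,c)\min (b,d)}\), subordinated by Gamma processes. Since subordinators are nondecreasing, for \(x\le x'\) and \(y\le y'\) the expectation collapses to \(\mathbb {E}(l_1(x))\,\mathbb {E}(l_2(y))\); all parameter values below are illustrative:

```python
import numpy as np

# MC check of Corollary 5.1 under assumed choices: W a Brownian sheet
# with q_W((a,b),(c,d)) = min(a,c) * min(b,d), Gamma(a_k, b_k)
# subordinators; for x <= x', y <= y' monotonicity gives
# q_L((x,y),(x',y')) = E[l1(x)] * E[l2(y)] = (a1*x/b1) * (a2*y/b2).

rng = np.random.default_rng(1)
a1, b1, a2, b2 = 4.0, 12.0, 2.0, 5.0
x, xp, y, yp = 0.5, 1.0, 0.3, 0.8
M = 1_000_000

# simulate (l(x), l(x')) from independent Gamma increments
l1_x = rng.gamma(a1 * x, 1 / b1, M)
l1_xp = l1_x + rng.gamma(a1 * (xp - x), 1 / b1, M)
l2_y = rng.gamma(a2 * y, 1 / b2, M)
l2_yp = l2_y + rng.gamma(a2 * (yp - y), 1 / b2, M)

mc = (np.minimum(l1_x, l1_xp) * np.minimum(l2_y, l2_yp)).mean()
exact = (a1 * x / b1) * (a2 * y / b2)
assert abs(mc - exact) < 5e-4
```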

5.1 The Isotropic Case

We use Corollary 5.1 to derive a semi-explicit formula for the covariance function of the subordinated GRF, where the underlying GRF is isotropic.

Lemma 5.2

Let \(W:\Omega \times \mathbb {R}_+^2\rightarrow \mathbb {R}\) be a zero-mean, continuous and isotropic GRF with covariance function \(q_W((x,y),(x',y'))=\tilde{q}_W(|x-x'|,|y-y'|)\). Further, suppose that \(l_1\) and \(l_2\) are independent Lévy subordinators on \([0,T_1]\) (resp. \([0,T_2]\)) with density functions \(f_1\) and \(f_2\), i.e. \(f_1^x(\cdot )\) (resp. \(f_2^y(\cdot )\)) is the density function of \(l_1(x)\) (resp. \(l_2(y)\)) for \((x,y)\in (0,\mathbb {T}]_2\). The covariance function of the subordinated GRF L with \(L(x,y):=W(l_1(x),l_2(y))\), for \((x,y)\in [0,\mathbb {T}]_2\), admits the representation

$$\begin{aligned} q_L((x,y),(x',y'))=\int _{\mathbb {R}_+}\int _{\mathbb {R}_+} \tilde{q}_W(s,t)f_1^{|x-x'|}(s)f_2^{|y-y'|}(t)dsdt, \end{aligned}$$

for \((x,y),~(x',y')\in [0,\mathbb {T}]_2\) with \(x\ne x'\) and \(y\ne y'\).

For \(x=x'\) and \(y\ne y'\) it holds

$$\begin{aligned} q_L((x,y),(x,y'))=\int _{\mathbb {R}_+}\tilde{q}_W(0,t)f_2^{|y-y'|}(t)dt, \end{aligned}$$

for \(x\ne x'\) and \(y=y'\) one obtains

$$\begin{aligned} q_L((x,y),(x',y))=\int _{\mathbb {R}_+}\tilde{q}_W(s,0)f_1^{|x-x'|}(s)ds, \end{aligned}$$

and for \((x,y)=(x',y')\) the pointwise variance is given by

$$\begin{aligned} \mathrm{Var}(L(x,y))=q_L((x,y),(x,y))=\tilde{q}_W(0,0). \end{aligned}$$

Proof

The assertion follows immediately by Corollary 5.1 together with the independence of the processes \(l_1\) and \(l_2\) and the fact that \(|l_k(x)-l_k(x')|{\mathop {=}\limits ^{\mathcal {D}}}l_k(|x-x'|)\) for \(x,~x'\in [0,T_k]\) and \(k=1,2\) by the definition of a Lévy process. \(\square\)
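For an assumed product-form isotropic covariance \(\tilde{q}_W(s,t)=e^{-s}e^{-t}\) and Gamma subordinators, the double integral of Lemma 5.2 factorizes into two Laplace transforms of Gamma densities, which gives a closed form against which a quadrature evaluation of the representation can be tested (all choices below are illustrative):

```python
import numpy as np
from math import gamma as gamma_fn

# Quadrature check of Lemma 5.2 for the assumed separable covariance
# q_W_tilde(s, t) = exp(-s) * exp(-t) and Gamma(a_k, b_k) subordinators:
#   q_L((x,y),(x',y')) = (1 + 1/b1)^(-a1*|x-x'|) * (1 + 1/b2)^(-a2*|y-y'|).

def gamma_pdf(z, shape, rate):
    return rate**shape * z**(shape - 1) * np.exp(-rate * z) / gamma_fn(shape)

def trapezoid(vals, h):
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

a1, b1, a2, b2 = 4.0, 12.0, 2.0, 5.0
h1, h2 = abs(1.0 - 0.4), abs(0.9 - 0.2)      # |x - x'| and |y - y'|

z = np.linspace(1e-9, 15.0, 400_001)
dz = z[1] - z[0]
inner_s = trapezoid(np.exp(-z) * gamma_pdf(z, a1 * h1, b1), dz)
inner_t = trapezoid(np.exp(-z) * gamma_pdf(z, a2 * h2, b2), dz)

quad = inner_s * inner_t
exact = (1 + 1 / b1) ** (-a1 * h1) * (1 + 1 / b2) ** (-a2 * h2)
assert abs(quad - exact) < 1e-4
```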

5.2 The Non-isotropic Case

In this subsection, we derive a formula for the covariance function of the subordinated GRF for the case that the underlying GRF is not isotropic. In the following, we use the notation \(x\wedge y:=\min (x,y)\) and \(x\vee y:= \max (x,y)\) for real numbers \(x,y\in \mathbb {R}\). The next lemma will be useful in the proof of the covariance representation.

Lemma 5.3

Let \(l=(l(x),~x\in [0,T])\) be a general Lévy process with density function \(f:(0,T]\times \mathbb {R}\rightarrow \mathbb {R}\), i.e. the probability density function of the random variable l(x) is given by \(f^x(\cdot )\), for \(x\in (0,T]\). In this case, the joint probability density function of the random vector \(Z:=(l(x\wedge x'),l(x\vee x'))\), with \(x\ne x'\in (0,T]\), is given by \(f_Z(s,t)=f^{\min (x,x')}(s)\cdot f^{|x'-x|}(t-s)\) for \(t,s\in \mathbb {R}\).

Proof

Let \(x,~x'\in (0,T]\) with \(x< x'\) and \(x_1,x_2\in \mathbb {R}\) be fixed. The increment \(l(x')-l(x)\) is stochastically independent of the random variable l(x), which yields

$$\begin{aligned} \mathbb {P}(l(x)\le x_1 \wedge l(x')\le x_2)&= \mathbb {E}(\mathbbm {1}_{\{l(x)\le x_1\}}\mathbbm {1}_{\{l(x')\le x_2\}})\\&=\mathbb {E}(\mathbbm {1}_{\{l(x)\le x_1\}}\mathbbm {1}_{\{l(x')-l(x)\le x_2-l(x)\}})\\&=\int _\mathbb {R}\int _\mathbb {R}\mathbbm {1}_{\{s\le x_1\}}\mathbbm {1}_{\{t\le x_2-s\}}f^x(s)f^{x'-x}(t)dtds\\&=\int _{-\infty }^{x_1}\int _{-\infty }^{x_2-s}f^x(s)f^{x'-x}(t)dtds\\&=\int _{-\infty }^{x_1}\int _{-\infty }^{x_2}f^x(s)f^{x'-x}(t-s)dtds. \end{aligned}$$

For the case that \(x'< x\) the same argument yields

$$\begin{aligned} \mathbb {P}(l(x)\le x_1 \wedge l(x')\le x_2)=\int _{-\infty }^{x_1}\int _{-\infty }^{x_2}f^{x'}(s)f^{x-x'}(t-s)dtds, \end{aligned}$$

which finishes the proof. \(\square\)

Remark 5.4

Note that Lemma 5.3 immediately implies that the joint density \(f_Z(s,t)\) of the two-dimensional random vector \(Z=(l(x\wedge x'),l(x\vee x'))\) for a Lévy subordinator l with \(x\ne x'\in (0,T]\) is given by

$$\begin{aligned} f_Z(s,t)= f^{\min (x,x')}(s)\cdot f^{|x'-x|}(t-s), \text { for } s,t\in \mathbb {R}_+. \end{aligned}$$
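For a Gamma subordinator, an assumed concrete example, the joint density of Remark 5.4 can be checked by integrating out s: the result must be the marginal density of \(l(x\vee x')\), since Gamma densities with a common rate are closed under convolution. All parameters below are illustrative:

```python
import numpy as np
from math import gamma as gamma_fn

# Convolution check of Remark 5.4 for an assumed Gamma(a, b)
# subordinator: integrating f_Z(s, t) = f^x(s) * f^(x'-x)(t - s) over s
# must return the Gamma density of l(x'), i.e. shape a*x', rate b.

def gamma_pdf(z, shape, rate):
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    pos = z > 0
    out[pos] = rate**shape * z[pos]**(shape - 1) * np.exp(-rate * z[pos]) / gamma_fn(shape)
    return out

a, b, x, xp = 4.0, 12.0, 0.5, 1.2
s = np.linspace(1e-9, 10.0, 400_001)
ds = s[1] - s[0]

for t in [0.2, 0.4, 0.8]:
    joint_slice = gamma_pdf(s, a * x, b) * gamma_pdf(t - s, a * (xp - x), b)
    marginal = ds * (joint_slice.sum() - 0.5 * (joint_slice[0] + joint_slice[-1]))
    assert abs(marginal - gamma_pdf(np.array([t]), a * xp, b)[0]) < 1e-4
```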

With this lemma at hand we are able to derive a formula for the covariance function of the subordinated (non-isotropic) GRF. Without loss of generality we consider points \((x,y),~(x',y')\) with \(x\le x'\) and \(y\le y'\) in the following lemma. Formulas for the other cases follow by the same arguments with Lemma 5.3 and Remark 5.4.

Lemma 5.5

Let \(W:\Omega \times \mathbb {R}_+^2\rightarrow \mathbb {R}\) be a zero-mean, continuous and non-isotropic GRF with covariance function \(q_W\). Further, suppose that \(l_1\) and \(l_2\) are independent Lévy subordinators on \([0,T_1]\) (resp. \([0,T_2]\)) with density functions \(f_1\) and \(f_2\), i.e. \(f_1^x(\cdot )\) (resp. \(f_2^y(\cdot )\)) is the density function of \(l_1(x)\) (resp. \(l_2(y)\)) for \((x,y)\in (0,\mathbb {T}]_2\). The covariance function of the subordinated GRF L with \(L(x,y):=W(l_1(x),l_2(y))\), for \((x,y)\in [0,\mathbb {T}]_2\), admits the representation

$$\begin{aligned} q_L((x,y),(x',y'))= & \ \int _{\mathbb {R}_+}\int _{\mathbb {R}_+}\int _{\mathbb {R}_+} \int _{\mathbb {R}_+} q_W((x_1,x_2),(x_3,x_4))f_1^{x}(x_1)f_2^{y}(x_2)\\& \times f_1^{x'-x}(x_3-x_1)f_2^{y'-y}(x_4-x_2)dx_1\,dx_2\,dx_3\,dx_4, \end{aligned}$$

for \((x,y),~(x',y')\in (0,\mathbb {T}]_2\) with \(x<x'\) and \(y< y'\).

For \(x=x'\) and \(y<y'\), it holds

$$\begin{aligned} q_L((x,y),(x,y'))=& \ \int _{\mathbb {R}_+}\int _{\mathbb {R}_+}\int _{\mathbb {R}_+} q_W((x_1,x_2),(x_1,x_4))f_1^{x}(x_1)f_2^{y}(x_2)\\&\times f_2^{y'-y}(x_4-x_2)dx_1\,dx_2\,dx_4, \end{aligned}$$

and for \(x< x'\) and \(y=y'\) it holds

$$\begin{aligned} q_L((x,y),(x',y))=&\ \int _{\mathbb {R}_+}\int _{\mathbb {R}_+}\int _{\mathbb {R}_+} q_W((x_1,x_2),(x_3,x_2))f_1^{x}(x_1)f_2^{y}(x_2)\\&\times f_1^{x'-x}(x_3-x_1)dx_1\,dx_2\,dx_3. \end{aligned}$$

For \((x,y)=(x',y')\) one obtains for the pointwise variance of the field

$$\begin{aligned} \mathrm{Var}(L(x,y))=q_L((x,y),(x,y))=\int _{\mathbb {R}_+} \int _{\mathbb {R}_+} q_W((x_1,x_2),(x_1,x_2))f_1^x(x_1)f_2^y(x_2)dx_1dx_2. \end{aligned}$$

Proof

Using Corollary 5.1, the independence of the processes \(l_1\) and \(l_2\), Lemma 5.3 and Remark 5.4 we calculate for \((x,y),~(x',y')\in (0,\mathbb {T}]_2\) with \(x< x'\) and \(y< y'\):

$$\begin{aligned} q_L((x,y),(x',y'))= & \ \int _{\mathbb {R}_+^4}q_W((x_1,x_2),(x_3,x_4))d\mathbb {P}_{(l_1(x),l_2(y),l_1(x'),l_2(y'))}(x_1,x_2,x_3,x_4)\\= &\ \int _{\mathbb {R}_+}\int _{\mathbb {R}_+}\int _{\mathbb {R}_+} \int _{\mathbb {R}_+} q_W((x_1,x_2),(x_3,x_4))f_1^{x}(x_1)f_2^{y}(x_2)\\&\times f_1^{x'-x}(x_3-x_1)f_2^{y'-y} (x_4-x_2)dx_1\,dx_2\,dx_3\,dx_4. \end{aligned}$$

The remaining cases follow by the same argument. \(\square\)
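The variance case of Lemma 5.5 can be checked numerically under assumed choices: for a Brownian sheet, \(q_W((x_1,x_2),(x_1,x_2))=x_1x_2\), so the double integral reduces to \(\mathbb {E}(l_1(x))\,\mathbb {E}(l_2(y))\), which is explicit for Gamma subordinators. All numerical values are illustrative:

```python
import numpy as np
from math import gamma as gamma_fn

# Variance case of Lemma 5.5 under assumed choices: Brownian sheet,
# Gamma(a_k, b_k) subordinators, so Var(L(x,y)) = (a1*x/b1) * (a2*y/b2).

def gamma_pdf(z, shape, rate):
    return rate**shape * z**(shape - 1) * np.exp(-rate * z) / gamma_fn(shape)

def trapezoid(vals, h):
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

a1, b1, a2, b2 = 4.0, 12.0, 2.0, 5.0
x, y = 1.0, 1.0

z = np.linspace(1e-9, 20.0, 400_001)
h = z[1] - z[0]
mean1 = trapezoid(z * gamma_pdf(z, a1 * x, b1), h)   # E[l1(x)]
mean2 = trapezoid(z * gamma_pdf(z, a2 * y, b2), h)   # E[l2(y)]

var_quad = mean1 * mean2
assert abs(var_quad - (a1 * x / b1) * (a2 * y / b2)) < 1e-6
```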

5.3 Statistical Fitting of the Covariance Function

The parametrization property of the subordinated GRF (see Remark 4.9) motivates a direct approach to covariance fitting: For a natural number \(N\in \mathbb {N}\), we assume that discrete points \(\{(x_i,y_i),~i=1,\dots ,N\}\) are given with corresponding empirical covariance function data \(C^{emp}=\{C_{i,j}^{emp},~i,j=1,\dots ,N\}\), where \(C_{i,j}^{emp}\) represents the empirical covariance of the field evaluated at the points \((x_i,y_i)\) and \((x_j,y_j)\). We search for the solution to the problem

$$\begin{aligned} \mathop {\mathrm {arg\,min}}\Big \{\Vert \tilde{q}_L - C^{emp}\Vert _*~\Big |~ \text { admissible tuples } (\sigma ^2,\gamma _1,\gamma _2,\nu _{ext},q_W)\Big \}, \end{aligned}$$

where we use the notation \(\tilde{q}_L:=\{q_L((x_i,y_i),(x_j,y_j)),~i,j=1,\dots ,N\}\) and \(\Vert \cdot \Vert _*\) is an appropriate norm on \(\mathbb {R}^{N\times N}\), e.g. the Frobenius norm. In order to solve this type of problem, the formulas for the covariance function given by Lemma 5.2 and Lemma 5.5 can be used, but accessing the solution remains challenging due to the complexity of the set of admissible parameters.
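The following sketch illustrates the fitting problem in a deliberately small assumed family, not an algorithm from the text: restricting to \(q_W((u,v),(u',v'))=\sigma ^2e^{-|u-u'|-|v-v'|}\) with two identical Gamma(a, b) subordinators makes \(q_L\) explicit (cf. Lemma 5.2), and a one-parameter grid search then recovers a from synthetic, noise-free covariance data:

```python
import numpy as np

# Illustrative covariance fitting in an assumed closed-form family:
# q_L((x,y),(x',y')) = sigma^2 * (1 + 1/b)^(-a*(|x-x'| + |y-y'|)).

def q_L(h_sum, sigma2, a, b):
    return sigma2 * (1 + 1 / b) ** (-a * h_sum)

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(8, 2))
h_sum = (np.abs(pts[:, None, 0] - pts[None, :, 0])
         + np.abs(pts[:, None, 1] - pts[None, :, 1]))

sigma2_true, a_true, b_true = 2.0, 4.0, 12.0
C_emp = q_L(h_sum, sigma2_true, a_true, b_true)   # noise-free "empirical" data

# grid search over a with sigma^2 and b held fixed
grid = np.linspace(0.5, 10.0, 2000)
errs = [np.linalg.norm(q_L(h_sum, sigma2_true, a, b_true) - C_emp) for a in grid]
a_hat = grid[int(np.argmin(errs))]
assert abs(a_hat - a_true) < 0.01
```

With real, noisy \(C^{emp}\) and several free parameters the same structure applies, but the optimization over the admissible set becomes the hard part, as noted above.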

6 Stochastic Regularity - Pointwise Moments

In this section we consider pointwise moments of a subordinated GRF L. In particular, we derive conditions which ensure the existence of pointwise p-th moments of the subordinated GRF L defined by \(L(\underline{x}):=W(l_1(x_1),\dots ,l_d(x_d))\), for \(\underline{x} = (x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\).

Obviously, in order to guarantee the existence of moments of the random variable \(L(\underline{x})\), we have to impose conditions on the GRF W and the subordinators \(l_1,\dots ,l_d\). The following theorem gives a better insight into the interaction between the underlying GRF and the stochastic regularity of the subordinators and presents coupled regularity conditions on the tail behaviour of both components of the random field.

Theorem 6.1

We assume that W is a centered and continuous GRF on \(\mathbb {R}_+^d\) with covariance function \(q_W:\mathbb {R}_+^d\times \mathbb {R}_+^d\rightarrow \mathbb {R}\). Further, we assume that there exist a positive number \(N\in \mathbb {N}\), coefficients \(\{c_j,~j=1,\dots ,N\}\subset [0,+\infty )\) and d-dimensional exponents \(\{\underline{\alpha }^{(j)},~j=1,\dots ,N\}\subset \mathbb {R}_+^d\) such that the pointwise standard deviation \(\sigma _W\) of W satisfies

$$\begin{aligned} \sigma _W(\underline{z}) =q_W(\underline{z},\underline{z})^{1/2}\le \sum _{j=1}^N c_j \underline{z}^{\underline{\alpha }^{(j)}}, \text { for } z_1,\dots ,z_d\ge 0. \end{aligned}$$
(2)

Here, we use the notation \(\underline{z}^{\underline{\alpha }} = z_1^{\alpha _1}\cdot \dots \cdot z_d^{\alpha _d}\) for \(\underline{z}=(z_1,\dots ,z_d)\in \mathbb {R}_+^d\) and \(\underline{\alpha }=(\alpha _1,\dots ,\alpha _d)\in \mathbb {R}_+^d\). We consider a fixed point \(\underline{x}\in [0,\mathbb {T}]_d\) and assume that the densities \(f_1^{x_1},\dots ,f_d^{x_d}\) of the evaluated processes \(l_1(x_1),\dots ,l_d(x_d)\) fulfill

$$\begin{aligned} f_i^{x_i}(z) \le C |z|^{-\eta _i},\text { for } z\ge K \text { and } i=1,\dots ,d, \end{aligned}$$
(3)

with positive decay rates \(\{\eta _i,~i=1,\dots ,d\}\). Here, the constants C and K are independent of z but may depend on the evaluation point \(\underline{x}=(x_1,\dots ,x_d)\) and \(\eta _i\) may depend on \(x_i\), for \(i=1,\dots ,d\). We define the number

$$\begin{aligned} a := \min \{ {(\eta _i-1)}/{\alpha _i^{(j)}}~\big |~ i=1,\dots ,d,~j=1,\dots ,N, ~\alpha _i^{(j)}\ne 0\}. \end{aligned}$$

Then, the random variable \(L(\underline{x})\) admits a p-th moment for \(p\in [1,a)\), i.e. \(L(\underline{x})\in \mathcal {L}^p(\Omega ;\mathbb {R})\) for \(p\in [1,a)\).

Proof

Let \(Z\sim \mathcal {N}(0,\sigma ^2)\) be a real-valued, centered, normally distributed random variable with variance \(\sigma ^2>0\). It follows by Eq. (18) in Winkelbauer (2012) that the p-th absolute moment of Z admits the form \(\mathbb {E}(|Z|^p) =C_p\sigma ^p,\) for all \(p>-1\), with a constant \(C_p\) depending on p. Let \(p\ge 1\) be a fixed number. We use Lemma 4.1 to calculate

$$\begin{aligned} \mathbb {E}(|L(\underline{x})|^p)&= \mathbb {E}(|W(l_1(x_1),\dots ,l_d(x_d))|^p) =\mathbb {E}(m(l_1(x_1),\dots ,l_d(x_d))), \end{aligned}$$

with

$$\begin{aligned} m(x_1',\dots ,x_d'):=\mathbb {E}(|W(x_1',\dots ,x_d')|^p) = C_p\sigma _W^p(x_1',\dots ,x_d'), \end{aligned}$$

for \((x_1',\dots ,x_d') \in \mathbb {R}_+^d\). Hence, we obtain

$$\begin{aligned} \mathbb {E}(|L(\underline{x})|^p) = C_p\mathbb {E}\big (\sigma _W^p(l_1(x_1),\dots ,l_d(x_d))\big ). \end{aligned}$$

Next, we use the tail estimates (2) and (3), Hölder’s inequality and the independence of the subordinators to calculate

$$\begin{aligned} \mathbb {E}(|L(\underline{x})|^p)&= C_p\mathbb {E}\big (\sigma _W^p(l_1(x_1),\dots ,l_d(x_d)\big )\\&\le C_p \int _{\mathbb {R}_+^d}\Big (\sum _{j=1}^N c_j \underline{z}^{{\underline{\alpha }}^{(j)}}\Big )^p f_1^{x_1}(z_1) \dots f_d^{x_d}(z_d)d(z_1,\dots ,z_d)\\&\le C(N,p) \sum _{j=1}^N c_j^p \prod _{i=1}^d \underbrace{\int _0^{+\infty } z_i^{p\alpha _i^{(j)}} f_i^{x_i}(z_i)dz_i}_{=:I_i^j}. \end{aligned}$$

It remains to show that all the integrals \(I_i^j\) are finite. For \(i\in \{1,\dots ,d\}\) and \(j\in \{1,\dots ,N\}\) with \(\alpha _i^{(j)}=0\) we have \(I_i^j=1\). If \(\alpha _i^{(j)}\ne 0\) it holds

$$\begin{aligned} I_i^j&= \Big (\int _0^{K} + \int _K^{+\infty }\Big ) z_i^{p\alpha _i^{(j)}} f_i^{x_i}(z_i)dz_i\\&\le K^{p\alpha _i^{(j)}} + C\int _K^{+\infty } z_i^{p\alpha _i^{(j)} - \eta _i}dz_i<+\infty , \end{aligned}$$

where the integral in the last step is finite since \(p\alpha _i^{(j)}-\eta _i < -1\) for all \(i\in \{1,\dots ,d\}\) and \(j\in \{1,\dots ,N\}\) with \(\alpha _i^{(j)}\ne 0\). \(\square\)
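The moment bound a of Theorem 6.1 is elementary to compute once the decay rates \(\eta _i\) and exponents \(\underline{\alpha }^{(j)}\) are known. The helper below uses illustrative numbers together with the Brownian-sheet exponents \(\underline{\alpha }^{(1)}=(1/2,1/2)\):

```python
import numpy as np

# Compute the moment bound a of Theorem 6.1; eta values are assumed
# illustrative decay rates, not taken from the text.

def moment_bound(eta, alpha):
    """eta: shape (d,); alpha: shape (N, d). Returns
    a = min over (i, j) with alpha[j, i] != 0 of (eta[i] - 1) / alpha[j, i]."""
    eta = np.asarray(eta, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    ratios = [(eta[i] - 1) / alpha[j, i]
              for j in range(alpha.shape[0])
              for i in range(alpha.shape[1])
              if alpha[j, i] != 0]
    return min(ratios)

# d = 2 Brownian sheet with assumed densities decaying like |z|^(-eta_i)
a = moment_bound(eta=[4.0, 3.0], alpha=[[0.5, 0.5]])
assert a == 4.0  # min((4-1)/0.5, (3-1)/0.5) = min(6, 4)
```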

We close this section with three remarks on the assumptions and possible extensions of Theorem 6.1.

Remark 6.2

The assumption given by Eq. (2) is, for example, fulfilled for the d-dimensional Brownian sheet with \(N=1\), \(c_1=1\) and \(\alpha ^{(1)}=(1/2,\dots ,1/2)\in \mathbb {R}_+^d\). Condition (2) also accommodates the GRFs we considered in the Lévy-Khinchin formula (see Theorem 4.6 and Assumption 4.4) with \(N=d\), \(c_1=\dots =c_d=1\) and \(\underline{\alpha }^{(j)}=1/2\cdot \hat{e}_j\) for \(j=1,\dots ,d\), where \(\hat{e}_j\) is the j-th unit vector in \(\mathbb {R}^d\). Further, this assumption is fulfilled for any stationary GRF W. Indeed, in case of a stationary GRF the assumption is satisfied for \(\alpha ^{(1)}=(\varepsilon ,0,\dots ,0)\) for any \(\varepsilon >0\) and, hence, Theorem 6.1 yields that every moment of the corresponding evaluated subordinated GRF exists, independently of the specific choice of the subordinators. This is consistent with Remark 4.8. The assumption on the Lévy subordinators in Eq. (3) is natural and can be verified easily in many cases, see also (Barth and Stein 2018b, Assumption 3.7 and Remark 3.8). For example, if for some non-negative integer \(n\in \mathbb {N}\), the n-th derivative of the characteristic function \(\phi _{l(x_i)}(\cdot )\) is integrable over \(\mathbb {R}\), then Eq. (3) holds with \(\eta _i=n\), \(K=0\) and \(C=\frac{1}{2\pi } \int _{-\infty }^{+\infty } |\frac{d^n}{dt^n}\, \phi _{l(x_i)}(t)|\, dt\) (cf. (Hughett 1998, Lemma 12)).

Remark 6.3

We point out that the statement of Theorem 6.1 remains valid if we consider Lévy subordinators with discrete marginal distributions which satisfy a discrete version of (3): If the GRF W satisfies (2) and the evaluated discrete subordinators \(l_1(x_1),\dots ,l_d(x_d)\) satisfy

$$\begin{aligned} f_i^{x_i}(k) = \mathbb {P}(l_i(x_i)=k)\le C|k|^{-\eta _i}, \text { for } k\ge K \text { and } i\in \{1,\dots ,d\}, \end{aligned}$$
(4)

then we obtain that \(\mathbb {E}(|L(x_1,\dots ,x_d)|^p)<\infty\) for \(p\in [1,a)\) with the real number a defined in Theorem 6.1.

Remark 6.4

For the pointwise existence of moments given by Theorem 6.1, it is not necessary to restrict the subordinating processes to the class of Lévy subordinators. More generally, one could consider a GRF W satisfying (2) and general Lévy processes \(l_1,\dots ,l_d\) satisfying (3) for \(|z|\ge K\). In this case, Theorem 6.1 still holds for the random field L defined by \(L(\underline{x}) := W(|l_1(x_1)|,\dots ,|l_d(x_d)|)\), for \(\underline{x} = (x_1,\dots ,x_d)\in [0,\mathbb {T}]_d\).

7 Numerical Examples

In the following, we present numerical experiments on the theoretical results given in this paper. The goal of this section is to use the knowledge on theoretical properties of the subordinated GRF to investigate existing numerical methods for the approximation of pointwise distributions (Subsection 7.1) as well as methods to verify or disprove the existence of moments of a random variable (Subsection 7.2). The numerical methods may also be useful for a fitting of random fields to existing data in applications. All our numerical experiments have been performed with MATLAB.

7.1 Experiments on the Lévy-Khinchin Formula

The Lévy-Khinchin-type formula (Theorem 4.6) allows access to the pointwise distribution of a subordinated GRF which motivates the investigation of numerical methods to approximate the pointwise distribution. To be more precise, we use Corollary 4.7 to obtain a pointwise distributional representation of a subordinated GRF as the sum of one-dimensional Lévy processes with transformed Lévy triplets. We use this representation to investigate the performance of different methods to approximate the distribution of Lévy processes.

Assume \(L=(W(l_1(x),l_2(y)),~(x,y)\in [0,1]^2)\) is a subordinated GRF where the GRF W satisfies Assumption 4.4 and the two subordinators \(l_1\) and \(l_2\) are characterized by the Lévy triplets \((\gamma _k,0,\nu _k)\) for \(k=1,2\). It follows by Corollary 4.7 that L admits the pointwise distributional representation

$$\begin{aligned} L(x,y){\mathop {=}\limits ^{\mathcal {D}}}\tilde{l}_1(x) +\tilde{l}_2(y), \end{aligned}$$
(5)

for \((x,y)\in [0,1]^2\). Here, the processes \(\tilde{l}_k\) on [0, 1] are independent Lévy processes with triplets \((0, \sigma ^2\gamma _k, \nu _k^\#)\), for \(k=1,2\), in the sense of the one-dimensional Lévy-Khinchin formula (see Theorem 2.1) and the Lévy measure \(\nu _k^\#\) is defined by

$$\begin{aligned} \nu _k^\#([a,b]):=\int _0^\infty \int _a^b\frac{1}{\sqrt{2\pi \sigma ^2t}}\exp \left( -\frac{x^2}{2\sigma ^2t}\right) dx\,\nu _k(dt), \end{aligned}$$

\(a,b\in \mathbb {R}\) and \(k=1,2\). We choose specific spatial points and use two different methods to approximate the distribution of the Lévy processes on the right hand side of (5): the compound Poisson approximation (CPA) (see (Schoutens 2003, Section 8.2.1)) and the Fourier inversion method for Lévy processes (see Gil-Pelaez (1951) and Barth and Stein (2018b)) which allows for a direct approximation of the density of the right hand side of (5). In order to investigate the performance of these two approaches, the corresponding results are then compared with samples of the subordinated GRF on the left hand side of Eq. (5).

7.1.1 Compound Poisson Approximation

We recall that a \(Gamma(a_G,b_G)\) process \(l_G\) has independent Gamma-distributed increments and \(l_G(t)\) follows a \(Gamma(a_G\cdot t,b_G)\) distribution. In our first example we choose Gamma(4, 12) processes to subordinate the GRF W defined by \(W(x,y)=\sqrt{x+y}\,\tilde{W}(x,y)\), for \((x,y)\in \mathbb {R}_+^2\), where \(\tilde{W}\) is a Matérn-1.5-GRF with pointwise standard deviation \(\sigma =2\) (see Remark 4.5). We fix the evaluation point \((x,y)=(1,1)\) and use the CPA method to obtain samples of the Lévy process on the right hand side of (5) which can then be compared with samples of the subordinated GRF. Figure 2 (left and middle) shows the corresponding histograms for 10.000 samples of each distribution.

Fig. 2

Samples of the subordinated GRF \(W(l_1(1),l_2(1))\) (left), the sum of the corresponding transformed Lévy processes \(\tilde{l}_1(1) + \tilde{l}_2(1)\) generated by the CPA method (middle) and both histograms in one plot (right)

We observe an accurate fit of the samples generated by the different approaches: the first histogram, corresponding to the exact sampling of the subordinated GRF, displays the same characteristics as the histogram generated by CPA, which shows that the CPA method is appropriate to simulate the distribution of the right hand side of Eq. (5).
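A minimal sketch of the CPA sampling step for a Gamma subordinator follows; it is an illustration under assumed parameters, not the implementation behind Fig. 2. Jumps below a cutoff \(\varepsilon\) are discarded; the remaining jumps arrive at rate \(\lambda =\nu ([\varepsilon ,\infty ))\) with sizes drawn from the normalized restriction of \(\nu (dy)=ay^{-1}e^{-by}dy\):

```python
import numpy as np

# Compound Poisson approximation (CPA) sketch for a Gamma(a, b)
# subordinator: truncate small jumps at eps, sample jump counts from a
# Poisson distribution and jump sizes from the discretized, normalized
# truncated Levy measure.

rng = np.random.default_rng(3)
a, b, t, eps, M = 4.0, 12.0, 1.0, 1e-4, 50_000

# discretize the truncated Levy measure on a grid
y = np.linspace(eps, 5.0, 200_001)
w = (a / y) * np.exp(-b * y) * (y[1] - y[0])
lam = w.sum()                                # ~ nu([eps, inf))
cdf = np.cumsum(w / lam)

# total number of jumps over M independent unit-time paths
n_total = rng.poisson(lam * t * M)
u = rng.random(n_total)
jumps = y[np.searchsorted(cdf, u).clip(max=len(y) - 1)]

# the CPA mean approximates E[l(t)] = a*t/b up to the truncation bias
cpa_mean = jumps.sum() / M
assert abs(cpa_mean - a * t / b) < 5e-3
```

Decreasing \(\varepsilon\) reduces the truncation bias at the price of a larger jump intensity \(\lambda\).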

7.1.2 Fourier Inversion Method

The second approach is to approximate the density function of the right hand side of (5) by the Fourier inversion (FI) method (see Gil-Pelaez (1951) and Barth and Stein (2018b)) and compare it with samples of the subordinated GRF. Figure 3 illustrates the results for this approach where we used the evaluation point \((x,y)=(1,1)\), the same GRF as in Subsection 7.1.1, Gamma(4, 12) subordinators and 100.000 samples of the subordinated GRF.
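The core of the FI method is the Gil-Pelaez inversion \(F(x)=\frac{1}{2}-\frac{1}{\pi }\int _0^\infty \mathrm {Im}(e^{-itx}\phi (t))/t\,dt\). The sketch below inverts the standard normal characteristic function as an assumed test case (not the subordinated field itself), for which the CDF is known:

```python
import numpy as np

# Gil-Pelaez inversion sketch: recover a CDF from a characteristic
# function phi via F(x) = 1/2 - (1/pi) * int_0^inf Im(e^{-itx} phi(t))/t dt,
# evaluated with a simple trapezoidal rule on a truncated grid.

def gil_pelaez_cdf(x, phi, t_max=40.0, n=400_001):
    t = np.linspace(1e-9, t_max, n)
    integrand = np.imag(np.exp(-1j * t * x) * phi(t)) / t
    h = t[1] - t[0]
    integral = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return 0.5 - integral / np.pi

phi_normal = lambda t: np.exp(-t**2 / 2)   # N(0,1) characteristic function
assert abs(gil_pelaez_cdf(0.0, phi_normal) - 0.5) < 1e-6
assert abs(gil_pelaez_cdf(1.0, phi_normal) - 0.8413447) < 1e-4
```

For the subordinated field, \(\phi\) would be the pointwise characteristic function from Theorem 4.6; the truncation parameters must then be adapted to its decay.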

Fig. 3

Samples of Gamma(4, 12)-subordinated GRF and approximated density (FI)

As one can see in Fig. 3, the approximated density of the right hand side of (5) closely matches the pointwise distribution of the subordinated GRF. We confirm this observation by a Kolmogorov-Smirnov test (see for example (Pestman 2009, Section VII.4)). Figure 4 illustrates how the empirical CDF, obtained by sampling the subordinated GRF, converges to the target CDF which is approximated by the Fourier inversion method using Eq. (5). A Kolmogorov-Smirnov test with 10.000 samples and a level of significance of \(5\%\) is passed.

Fig. 4

Approximated target CDF (FI) vs. empirical CDF using 100 (left), 1.000 (middle) and 10.000 (right) samples of the subordinated GRF with Gamma(4, 12) subordinators

In the next experiment we use a modified subordinator, which results in a less smooth pointwise density of the subordinated GRF. We repeat the experiment with Gamma(0.5, 10) subordinators where the GRF, the evaluation point and the sample size remain unchanged. Figure 5 shows 100.000 samples of the subordinated GRF and the density of the process given by the right hand side of Eq. (5) approximated via the Fourier Inversion method.

Fig. 5

Samples of Gamma(0.5, 10)-subordinated GRF and approximated density (FI)

As in the first experiment, the results given by Fig. 5 indicate that the approximated density of the right hand side of (5) matches the pointwise distribution of the subordinated GRF. Figure 6 illustrates how the empirical CDF, obtained by sampling of the subordinated GRF, converges to the approximated target CDF of the right hand side of Eq. (5), which is computed by the Fourier inversion method. A Kolmogorov-Smirnov test with a level of significance of \(5\%\) is passed.

Fig. 6

Approximated target CDF (FI) vs. empirical CDF using 100 (left), 1.000 (middle) and 10.000 (right) samples of the subordinated GRF with Gamma(0.5, 10) subordinators

7.2 Pointwise Moments

Theorem 6.1 guarantees the existence of pointwise moments of the subordinated GRF if the GRF and the corresponding subordinators satisfy certain conditions. In the following numerical experiments, we apply different statistical methods to investigate numerically the existence of pointwise moments of a certain order in the specific situation of the subordinated Gaussian random field. We set \(d=2\) and assume W to be a Brownian sheet on \(\mathbb {R}_+^2\). Further, we use Lévy processes with different stochastic regularity - in terms of the existence of moments - to subordinate the GRF W.

7.2.1 Statistical Methods to Test the Existence of Moments of a Random Variable

The existence of moments of a specific distribution is one of the most frequently formulated assumptions in statistical applications. For example, already the strong law of large numbers assumes finiteness of the first moment of the corresponding random variable. Nevertheless, only a few statistical methods exist in the literature to verify or disprove the existence of moments, given a specific sample of random variables (see e.g. Mandelbrot (2012); Hill (1975); Ng and Yau (2018); Fedotenkov (2013a, b, 2014)). One of the earliest methods to verify the existence of moments of a distribution was proposed in 1963 by Mandelbrot (see Mandelbrot (2012) and Cont (2001)). It is based on the simple observation that the estimated (sample) moments converge to a certain value for an increasing sample size if the theoretical moment exists. On the other hand, if the theoretical moment does not exist, the estimated moment diverges or behaves unstably when the sample size increases. However, this quite intuitive method is rather heuristic and depends highly on the experience of the researcher (see also Fedotenkov (2013b)). Another popular direct way to investigate the existence of moments of a certain distribution is the sample-based estimation of a decay rate \(\alpha\) for the corresponding density function, proposed by Hill in Hill (1975). However, the Hill estimator requires a parameter \(k>0\) which specifies the sample values that are considered as the tail of the distribution, and it turned out that the Hill estimator is very sensitive to the choice of this parameter k. Further, the method makes the quite restrictive assumption that the underlying distribution is of Pareto type (see Ng and Yau (2018); Fedotenkov (2013a, b, 2014)). In 2013, Fedotenkov proposed a bootstrap test for the existence of moments of a given distribution (see Fedotenkov (2013b)). 
The test performs well for specific distributions; however, its accuracy deteriorates quickly when moments of higher order are considered (see also Fedotenkov (2014)). Recently, Ng and Yau proposed another sample-based bootstrap test for the existence of moments which outperforms the previously mentioned methods for many distributions (see Ng and Yau (2018)). The test is based on a result from bootstrap asymptotic theory which states that the m out of n bootstrap sample mean (see Bickel et al. (1997)) converges weakly to a normal distribution. For a detailed description of the test statistic and further theoretical investigations we refer the interested reader to Ng and Yau (2018).
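A minimal sketch of the m out of n bootstrap idea underlying the Ng-Yau test follows; it is not their actual test statistic, and all numbers are illustrative. For finite-variance data the bootstrap sample means fluctuate on the scale \(\hat{\sigma }/\sqrt{m}\), consistent with the normal limit the test exploits:

```python
import numpy as np

# m out of n bootstrap sketch: resample m << n points with replacement
# and inspect the spread of the bootstrap sample means; for finite
# variance it should match sample_std / sqrt(m).

rng = np.random.default_rng(4)
n, m, B = 100_000, 1_000, 2_000

data = rng.standard_normal(n)               # finite-variance toy data
boot_means = np.array([rng.choice(data, size=m, replace=True).mean()
                       for _ in range(B)])

predicted = data.std() / np.sqrt(m)
assert abs(boot_means.std() / predicted - 1.0) < 0.1
```

For a heavy-tailed sample without a finite variance, this normal scaling breaks down, which is the behaviour the test statistic detects.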

Based on these observations, we use direct moment estimation via Monte Carlo (MC) and the bootstrap test proposed by Ng and Yau to analyze the existence of (pointwise) moments of the subordinated GRF.

For our numerical examples we choose three different Lévy distributions to subordinate the Brownian sheet W: a Poisson distribution, a Gamma distribution and a Student-t distribution. That is, we consider a discrete and a continuous distribution for which all moments are finite, as well as a continuous distribution which only admits a limited number of moments. Hence, we cover three fundamentally different situations. In all three experiments, we consider the evaluation point \(\underline{x}=(x_1,x_2)=(1,1)\in \mathbb {R}_+^2\) for the subordinated GRF L. Note that the two-dimensional Brownian sheet satisfies Eq. (2) in Theorem 6.1 with \(N=1\), \(c_1=1\) and \(\underline{\alpha }^{(1)}=(1/2,1/2)\).

7.2.2 Poisson-subordinated Brownian sheet

In this example, we use Poisson(3) processes to subordinate the two-dimensional Brownian sheet. It is easy to verify that condition (4) is satisfied for any \(\eta _i>0\), \(i=1,2\), since point evaluations of a Poisson process are Poisson distributed. Theorem 6.1 implies the existence of the p-th moment of the evaluated field L(1, 1) for any \(p<\infty\) (see Remark 6.3). We estimate the p-th moment for \(p\in \{4,6,8\}\) by MC-estimation using M samples of the evaluated GRF L(1, 1) for different values of \(M\in \mathbb {N}\), i.e.

$$\begin{aligned} \mathbb {E}(|L(1,1)|^p)\approx E_M(|L(1,1)|^p)=\frac{1}{M}\sum _{i=1}^M |L^{(i)}(1,1)|^p, \end{aligned}$$

where \((L^{(i)}(1,1),~i\ge 1)\) are i.i.d. samples of the evaluated field L(1, 1). As explained in Subsection 7.2.1, the MC-estimator \(E_M(|L(1,1)|^p)\) is expected to converge for \(M\rightarrow \infty\) if the p-th moment exists and one expects unstable behaviour if this is not the case. Figure 7 shows the development of the MC-estimator \(E_M(|L(1,1)|^p)\) for the p-th moment as a function of the number of samples M. For every moment, we take 5 independent MC-runs to validate that they converge to the same value.
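This MC-estimation can be implemented exactly: the point evaluation of the Brownian sheet satisfies \(W(t_1,t_2)\sim \mathcal {N}(0,t_1t_2)\), so that, conditionally on the subordinators, L(1, 1) is centered Gaussian with variance \(l_1(1)l_2(1)\). A minimal Python sketch (function names, sample size and seed are illustrative choices):

```python
import numpy as np

def sample_L(M, rng):
    # Subordinators evaluated at x_i = 1: l_i(1) ~ Poisson(3)
    l1 = rng.poisson(3.0, size=M)
    l2 = rng.poisson(3.0, size=M)
    # Conditionally on l1, l2, the point evaluation of the Brownian sheet
    # satisfies L(1,1) = W(l1, l2) ~ N(0, l1 * l2)
    return rng.standard_normal(M) * np.sqrt(l1 * l2)

def mc_moment(samples, p):
    # MC-estimator E_M(|L(1,1)|^p)
    return np.mean(np.abs(samples) ** p)

rng = np.random.default_rng(0)
samples = sample_L(10**6, rng)
for p in (4, 6, 8):
    print(p, mc_moment(samples, p))
```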

Fig. 7
figure 7

Five independent realizations of the MC-estimator \(E_M(|L(1,1)|^p)\approx \mathbb {E}(|L(1,1)|^p)\) as a function of the sample numbers M with a Poisson(3)-subordinated Brownian sheet; \(p=4\) (left), \(p=6\) (middle), \(p=8\) (right)

As expected, Fig. 7 shows a stable convergence of the MC-estimator for a growing number of samples for every considered moment. Further, the different independent MC-runs converge to the same value, namely the theoretical p-th moment for \(p\in \{4,6,8\}\).

In the next step, we perform the bootstrap test (see Subsection 7.2.1 and Ng and Yau (2018)). We test the existence of the p-th moment for \(p\in \{1,2,3,4,5,6,7,8\}\) using \(M = 10^7\) samples of the subordinated evaluated GRF L(1, 1). Hence, the null and alternative hypothesis are given by

$$\begin{aligned} H_0: ~\mathbb {E}(|L(1,1)|^p)<+\infty \text { vs. } H_1: ~\mathbb {E}(|L(1,1)|^p)=+\infty , \end{aligned}$$

for the different values of p. We choose the significance level \(\alpha _s = 1\%\) and perform 100 independent test runs. Figure 8 shows the proportion of acceptance of the null hypothesis in the 100 test runs as a function of the considered moment \(p\in \{1,2,3,4,5,6,7,8\}\).

Fig. 8
figure 8

Results for 100 independent runs of the bootstrap test for the existence of the p-th moment using Poisson(3) processes to subordinate the Brownian sheet

As we see in Fig. 8, the bootstrap test accepts the null hypothesis \(H_0\) in almost every test run for every considered moment \(p\in \{1,2,3,4,5,6,7,8\}\), which is in line with our expectations since all these moments exist. We conclude that both approaches, the MC moment estimation and the bootstrap test, perform as expected in this experiment.

7.2.3 Gamma-subordinated Brownian sheet

In our second numerical example we consider Gamma processes to subordinate the Brownian sheet. We recall that, for \(a_G,b_G>0\), a \(Gamma(a_G,b_G)\)-distributed random variable admits the density function

$$\begin{aligned} x\mapsto \frac{b_G^{a_G}}{\Gamma (a_G)}x^{a_G-1}\exp (-xb_G),~\text { for }x>0, \end{aligned}$$

where \(\Gamma (\cdot )\) denotes the Gamma function. A Gamma process \((l(t))_{t\ge 0}\) has independent Gamma distributed increments and l(t) follows a \(Gamma(a_G\cdot t,b_G)\)-distribution for \(t>0\). Therefore, condition (3) holds for any \(\eta _i>0\), \(i=1,2\), and, hence, Theorem 6.1 again implies the existence of every moment, i.e. \(\mathbb {E}(|L(1,1)|^p)<\infty\) for any \(p\ge 1\). We choose \(a_G=4\), \(b_G=10\) and estimate the p-th moment of L(1, 1) with \(p\in \{4,6,8\}\) by MC-estimation using a growing number of samples \(M\in \mathbb {N}\). Figure 9 shows the development of the MC-estimator \(E_M(|L(1,1)|^p)\) for the p-th moment as a function of the number of samples M. As in the first experiment, we take 5 independent MC-runs to validate the convergence to a unique value. In line with our expectations, the results show a stable convergence of the MC-estimations for the different moments of this subordinated GRF.
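As in the Poisson case, the samples can be drawn exactly: given the subordinators, L(1, 1) is centered Gaussian with variance \(l_1(1)l_2(1)\), where now \(l_i(1)\sim Gamma(a_G,b_G)\). A sketch under the same assumptions as before (note that numpy parametrizes the Gamma distribution by shape \(a_G\) and scale \(1/b_G\)):

```python
import numpy as np

def sample_L_gamma(M, rng, a_G=4.0, b_G=10.0):
    # l_i(1) ~ Gamma(a_G * 1, b_G); numpy uses shape and scale = 1 / rate
    l1 = rng.gamma(a_G, 1.0 / b_G, size=M)
    l2 = rng.gamma(a_G, 1.0 / b_G, size=M)
    # Given l1, l2: L(1,1) = W(l1, l2) ~ N(0, l1 * l2)
    return rng.standard_normal(M) * np.sqrt(l1 * l2)

rng = np.random.default_rng(0)
samples = sample_L_gamma(10**6, rng)
for p in (4, 6, 8):
    print(p, np.mean(np.abs(samples) ** p))
```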

Fig. 9
figure 9

Five independent realizations of the MC-estimator \(E_M(|L(1,1)|^p)\approx \mathbb {E}(|L(1,1)|^p)\) as a function of the sample numbers M with a Gamma(4, 10)-subordinated Brownian sheet; \(p=4\) (left), \(p=6\) (middle), \(p=8\) (right)

In this experiment we again perform the bootstrap test for the existence of the p-th moment for \(p\in \{1,2,3,4,5,6,7,8\}\) using \(M = 10^7\) samples of the subordinated evaluated GRF L(1, 1). Hence, the null and alternative hypothesis are given by

$$\begin{aligned} H_0: ~\mathbb {E}(|L(1,1)|^p)<+\infty \text { vs. } H_1: ~\mathbb {E}(|L(1,1)|^p)=+\infty , \end{aligned}$$

for the different values of p. We choose the significance level \(\alpha _s = 1\%\) and perform 100 independent test runs. Figure 10 shows the proportion of acceptance of the null hypothesis in the 100 test runs as a function of the considered moment \(p\in \{1,2,3,4,5,6,7,8\}\).

Fig. 10
figure 10

Results for 100 independent runs of the bootstrap test for the existence of the p-th moment using Gamma(4,10) processes to subordinate the Brownian sheet

As in the first experiment, the test results meet our expectations, since almost every test run accepts the null hypothesis for any moment \(p\in \{1,2,3,4,5,6,7,8\}\).

7.2.4 Student-t-subordinated Brownian sheet

In our last experiment we consider a Lévy process where the pointwise distribution only admits a finite number of moments. The Student’s t-distribution with three degrees of freedom admits the density function

$$\begin{aligned} f_t(x) = \frac{\Gamma (2)}{\sqrt{3\pi } \Gamma (3/2)}\Big (1+\frac{x^2}{3}\Big )^{-2}, \text { for }x\in \mathbb {R}. \end{aligned}$$
(6)

It follows by (Kelker (1971), Theorem 3) that a Student-t distributed random variable with three degrees of freedom is infinitely divisible. Hence, we can define Lévy processes \(l_j\), for \(j=1,2\), such that \(l_j(1)\) follows a Student-t distribution with three degrees of freedom (see Sato (2013, Theorem 7.10)). Using these processes and the Brownian sheet W, we consider the subordinated GRF \(L(x_1,x_2):=W(|l_1(x_1)|,|l_2(x_2)|)\) for \((x_1,x_2)\in [0,T_1]\times [0,T_2]\) (see Remark 6.4). For our numerical experiment we again evaluate the field at \((x_1,x_2)=(1,1)\). Using (6) we obtain

$$\begin{aligned} f_t(x) \le C|x|^{-4} \text {, for } x\in \mathbb {R}. \end{aligned}$$
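This bound follows directly from (6): since \(1+x^2/3\ge x^2/3\), we have for \(x\ne 0\)

$$\begin{aligned} f_t(x) = \frac{\Gamma (2)}{\sqrt{3\pi } \Gamma (3/2)}\Big (1+\frac{x^2}{3}\Big )^{-2}\le \frac{\Gamma (2)}{\sqrt{3\pi } \Gamma (3/2)}\Big (\frac{x^2}{3}\Big )^{-2} = \frac{9\,\Gamma (2)}{\sqrt{3\pi } \Gamma (3/2)}\,|x|^{-4}, \end{aligned}$$

so that one may choose \(C = 9\,\Gamma (2)/(\sqrt{3\pi }\,\Gamma (3/2))\).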

Therefore, condition (3) is satisfied for \(\eta _i = 4\), for \(i=1,2\), and it is violated for any \(\eta _i >4\) (see also Remark 6.4). Since the Brownian sheet satisfies condition (2) with \(N=1\), \(c_1=1\) and \(\underline{\alpha }^{(1)}=(1/2,1/2)\), Theorem 6.1 yields that \(\mathbb {E}(|L(1,1)|^p)<\infty\) for \(p< 6\) and we expect that this boundary is sharp, i.e. we expect that \(\mathbb {E}(|L(1,1)|^p)=\infty\) for \(p\ge 6\).

We estimate the p-th moment for \(p\in \{5,6,8\}\) with the MC-estimator \(E_M(|L(1,1)|^p)\) with growing sample number \(M\in \mathbb {N}\). In Fig. 11 we see the development of the MC-estimator for the p-th moment as a function of the number of samples. For every moment, we take 5 independent MC-runs. The results indicate a convergence of the MC-estimations of the p-th moment for \(p=5\): in this case the estimation stabilizes with growing sample size and all 5 independent MC-estimations seem to converge to a unique value. However, for the higher moments \(p=6\) and \(p=8\), we see upward breakouts and unstable behaviour of the corresponding MC-estimator for increasing sample sizes. Further, the 5 independent MC-runs do not indicate a convergence to a unique value. For all the considered moments \(p\in \{5,6,8\}\), these results are in line with our expectations, since the evaluated subordinated GRF L(1, 1) admits a p-th moment for \(p<6\) and this boundary is sharp (see Theorem 6.1).
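The samples for this experiment can again be drawn exactly (conditionally Gaussian given the subordinators), now using the absolute values \(|l_j(1)|\) of Student-t distributed subordinator evaluations; a Python sketch with illustrative sample size and seed:

```python
import numpy as np

def sample_L_t(M, rng, df=3):
    # l_j(1) follows a Student-t distribution with 3 degrees of freedom;
    # the field uses the absolute values of the subordinators (Remark 6.4)
    l1 = np.abs(rng.standard_t(df, size=M))
    l2 = np.abs(rng.standard_t(df, size=M))
    # Given l1, l2: L(1,1) = W(l1, l2) ~ N(0, l1 * l2)
    return rng.standard_normal(M) * np.sqrt(l1 * l2)

rng = np.random.default_rng(0)
samples = sample_L_t(10**6, rng)
# E(|L(1,1)|^p) is finite only for p < 6: running estimates for p >= 6
# jump with extreme subordinator values instead of stabilizing
for p in (5, 6, 8):
    print(p, np.mean(np.abs(samples) ** p))
```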

Fig. 11
figure 11

Five independent realizations of the MC-estimator \(E_M(|L(1,1)|^p)\approx \mathbb {E}(|L(1,1)|^p)\) as a function of the sample numbers M with a Student-t-subordinated Brownian sheet; \(p=5\) (left), \(p=6\) (middle), \(p=8\) (right)

We perform the bootstrap test for the existence of the p-th moment for \(p\in \{1,2,3,4,4.5,5,5.2,5.4,5.6,5.8,6,6.5,7,8\}\) using \(M = 10^7\) samples of the subordinated GRF L(1, 1). Hence, the null and alternative hypothesis are again given by

$$\begin{aligned} H_0: ~\mathbb {E}(|L(1,1)|^p)<+\infty \text { vs. } H_1: ~\mathbb {E}(|L(1,1)|^p)=+\infty , \end{aligned}$$

for the different values of p. We choose the significance level \(\alpha _s = 1\%\) and perform 100 independent test runs. Figure 12 shows the proportion of acceptance of the null hypothesis in the 100 test runs as a function of the considered moment p and the test statistic values for the 100 test runs.

Fig. 12
figure 12

Results for 100 independent runs of the bootstrap test for the existence of the p-th moment using Student-t-distributed random variables as subordinators: share of acceptance of \(H_0\) (left), test statistic values for the test runs (right)

In all of the 100 test runs the null hypothesis is accepted for \(p\in \{1,2,3,4,4.5,5\}\). Further, in almost all of the 100 test runs \(H_0\) is rejected for the cases \(p\in \{6,6.5,7,8\}\), which is in line with the theoretical results for this specific choice of the subordinated GRF. Only for \(p\in (5,6)\) does the test reject the null hypothesis in some of the test runs, although the theoretical moment exists. Overall, the test results for the existence of moments of the Student-t-subordinated GRF match our expectations based on Theorem 6.1.