Abstract
In this work we present different results concerning mixing properties of multivariate infinitely divisible (ID) stationary random fields. First, we derive some necessary and sufficient conditions for mixing of stationary ID multivariate random fields in terms of their spectral representation. Second, we prove that (linear combinations of independent) mixed moving average fields are mixing. Further, using a simple modification of the proofs of our results, we are able to obtain weak mixing versions of our results. Finally, we prove the equivalence of ergodicity and weak mixing for multivariate ID stationary random fields.
1 Introduction
In 1970, in his fundamental work [11], Maruyama provided pivotal results for infinitely divisible (ID) processes. Among them, he proved that under certain conditions, known afterwards as the Maruyama conditions, these processes are mixing (see Theorem 6 of [11]). After him, various authors contributed to this line of research, see for example Gross [7] and Kokoszka and Taqqu [9]. In 1996, Rosinski and Zak [13] extended Maruyama's results, proving that a stationary ID process \((X_{t})_{t\in {\mathbb {R}}}\) is mixing if and only if \(\lim \nolimits _{t\rightarrow \infty }{\mathbb {E}}\left[ e^{i(X_{t}-X_{0})} \right] ={\mathbb {E}}\left[ e^{iX_{0}} \right] {\mathbb {E}}\left[ e^{-iX_{0}} \right] \), provided the Lévy measure of \(X_{0}\) has no atoms in \(2\pi {\mathbb {Z}}\). More recently, Fuchs and Stelzer [6] extended some of the main results of Rosinski and Zak to the multivariate case. Parallel to this line of research, new developments have been obtained for the ergodic and weak mixing properties of infinitely divisible random fields; in particular, see Roy [15] and [16] for Poissonian ID random fields and Roy [17], Roy and Samorodnitsky [18] and [22] for \(\alpha \)-stable univariate random fields.
In the present work we fill an important gap by extending the results of Maruyama [11], Rosinski and Zak [13, 14], and Fuchs and Stelzer [6] to the multivariate random field case. First, this is crucial for applications, since many of them consider a multidimensional domain composed of both spatial and temporal components (and not just temporal ones). This is typically the case for many physical systems, such as turbulence (e.g. [1, 2]), and in econometrics (e.g. models based on panel data). Second, with the present work we also close the gap between the two lines of research presented above by focusing on the more general case of multivariate stationary ID random fields.
On the modelling/application level, we prove that multivariate mixed moving average fields are mixing. This is a relevant result, since Lévy-driven moving average fields are extensively used in many applications throughout different disciplines, such as brain imaging [8], tumour growth [3] and turbulence [2, 3], among many others.
Moreover, we discuss conditions which ensure that a multivariate random field is weakly mixing. In particular, we show that the proofs of the results obtained for the mixing case can be slightly modified to obtain similar results for the weak mixing case. Finally, we prove that a multivariate stationary ID random field is weakly mixing if and only if it is ergodic.
The present work is structured as follows. In Sect. 2, we discuss some preliminaries on mixing and derive the mixing conditions for multivariate ID stationary random fields. In addition, we study some extensions and other related results. In Sect. 3, we prove that (sums of independent) mixed moving averages (MMA) are mixing, including MMA with an extended subordinated basis. In Sect. 4 we obtain weak mixing versions of the results obtained in Sect. 2 and we prove the equivalence between ergodicity and weak mixing for stationary ID random fields.
In order to simplify the exposition, we decided to put long proofs in the appendices.
2 Preliminaries and Results on Mixing Conditions
In this section we analyse mixing conditions for stationary infinitely divisible random fields. We work with the probability space \((\Omega ,{\mathscr {F}},{\mathbb {P}})\) and the measurable space \(({\mathbb {R}}^{d},{\mathscr {B}}({\mathbb {R}}^{d}))\), where \({\mathscr {B}}({\mathbb {R}}^{d})\) is the Borel \(\sigma \)-algebra on \({\mathbb {R}}^{d}\). We write \({\mathscr {L}}(X_{t})\) for the distribution, or law, of the random variable \(X_{t}\). Now, let \((\theta _{t})_{t\in {\mathbb {R}}^{l}}\) be a measure preserving \({\mathbb {R}}^{l}\)-action on \((\Omega ,{\mathscr {F}},{\mathbb {P}})\). Consider the random field \(X_{t}(\omega )=X_{0}\circ \theta _{t}(\omega )\), \(t\in {\mathbb {R}}^{l}\). The random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) defined in this way is stationary and, conversely, any stationary measurable random field can be expressed in this form. Further, with a slight abuse of notation, we have \(\theta _{v}(B):=\{\theta _{v}(\omega )\in \Omega :\omega \in B\}=\{\omega '\in \Omega :X_{0}(\omega ')=X_{v}(\omega ) ~\text{ for } ~\omega \in B\}\). We denote by \(\Vert \cdot \Vert _{\infty }\) the supremum norm.
Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if (see equation (4.4) of Wang, Roy and Stoev [22])
$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {P}}(A\cap \theta _{t_{n}}(B))={\mathbb {P}}(A){\mathbb {P}}(B) \end{aligned}$$
for all \(A,B\in \sigma _{X}\) and all \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), where \(\sigma _{X}:=\sigma (\{X_{t}:t\in {\mathbb {R}}^{l}\})\) is the \(\sigma \)-algebra generated by \((X_{t})_{t\in {\mathbb {R}}^{l}}\) and \({\mathscr {T}}:=\left\{ (t_{n})_{n\in {\mathbb {N}}}\subset {\mathbb {R}}^{l}:\lim \nolimits _{n\rightarrow \infty }\Vert t_{n}\Vert _{\infty }=\infty \right\} \).
The following equivalent definition is based on the characteristic function of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) [see equation (A.6) of [22]]: \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if
$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i\left( \sum _{j=1}^{r}\beta _{j}X_{p_{j}}+\sum _{k=1}^{q}\gamma _{k}X_{s_{k}+t_{n}}\right) }\right] ={\mathbb {E}}\left[ e^{i\sum _{j=1}^{r}\beta _{j}X_{p_{j}}}\right] {\mathbb {E}}\left[ e^{i\sum _{k=1}^{q}\gamma _{k}X_{s_{k}}}\right] \end{aligned}$$
for all \(r,q\in {\mathbb {N}},\beta _{j},\gamma _{k}\in {\mathbb {R}},p_{j},s_{k}\in {\mathbb {R}}^{l}\) and \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\). Further, for a multivariate (or \({\mathbb {R}}^{d}\)-valued) random field, we have the following definition based on the characteristic function of \((X_{t})_{t\in {\mathbb {R}}^{l}}\).
Definition 2.1
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is said to be mixing if for all \(m\in {\mathbb {N}}\), \(\lambda =(s_{1},\ldots ,s_{m})',\mu =(p_{1},\ldots ,p_{m})'\in {\mathbb {R}}^{ml}\) and \(\theta _{1},\theta _{2}\in {\mathbb {R}}^{md}\)
$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i\left( \langle \theta _{1},X_{\lambda }\rangle +\langle \theta _{2},X_{{\tilde{\mu }}_{n}}\rangle \right) }\right] ={\mathbb {E}}\left[ e^{i\langle \theta _{1},X_{\lambda }\rangle }\right] {\mathbb {E}}\left[ e^{i\langle \theta _{2},X_{\mu }\rangle }\right] , \end{aligned}$$
where \(X_{\lambda }:=(X_{s_{1}}',\ldots ,X_{s_{m}}')'\in {\mathbb {R}}^{md}\), \(X_{\mu }\) and \(X_{{\tilde{\mu }}_{n}}\) are defined analogously, \({\tilde{\mu }}_{n}=(p_{1}+t_{n},\ldots ,p_{m}+t_{n})'\), and \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}\).
Further, we recall the definition of an infinitely divisible random field.
Definition 2.2
An \({\mathbb {R}}^{d}\)-valued random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) (or its distribution) is said to be infinitely divisible if for every \((X_{t_{1}},\ldots ,X_{t_{k}})\), where \(k\in {\mathbb {N}}\), and for every \(n\in {\mathbb {N}}\) there exist i.i.d. random vectors \(Y^{(n,k)}_{i},i=1,\ldots ,n\), in \({\mathbb {R}}^{d\times k}\) (possibly on a different probability space) such that \((X_{t_{1}},\ldots ,X_{t_{k}}){\mathop {=}\limits ^{d}}Y^{(n,k)}_{1}+\cdots +Y^{(n,k)}_{n}\).
It is straightforward to see that the above definition is equivalent to the following one. An \({\mathbb {R}}^{d}\)-valued random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) (or its distribution) is said to be infinitely divisible if for every finite dimensional distribution of \((X_{t})_{t\in {\mathbb {R}}^{l}}\), namely \(F_{t_{1},\ldots ,t_{k}}(x_{1},\ldots ,x_{k}):={\mathbb {P}}\left( X_{t_{1}}<x_{1},\ldots ,X_{t_{k}}<x_{k}\right) \) where \(k\in {\mathbb {N}}\), and for every \(n\in {\mathbb {N}}\) there exists a probability measure \(\mu _{n,k}\) on \({\mathbb {R}}^{d\times k}\) such that \(F_{t_{1},\ldots ,t_{k}}\) is the distribution function of the n-fold convolution \(\mu _{n,k}*\cdots *\mu _{n,k}=\mu _{n,k}^{*n}\).
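As a purely illustrative numerical sketch (not part of the paper's argument), infinite divisibility can be checked in closed form for \(d=1\) and a single time point: a Gamma law with shape \(\alpha\) and rate \(\beta\) has characteristic function \(\varphi(u)=(1-iu/\beta)^{-\alpha}\), which is the n-th power of the characteristic function of a Gamma law with shape \(\alpha/n\); the parameter values below are arbitrary.

```python
import numpy as np

# Illustration of Definition 2.2 in d = 1: the Gamma(alpha, beta) law is ID,
# since its characteristic function factors into n identical pieces,
# corresponding to n i.i.d. Gamma(alpha/n, beta) summands.

def gamma_cf(u, alpha, beta):
    """Characteristic function of a Gamma(alpha, beta) law (rate parametrisation)."""
    return (1 - 1j * u / beta) ** (-alpha)

alpha, beta, n = 2.5, 1.3, 7
u = np.linspace(-5, 5, 101)

phi_whole = gamma_cf(u, alpha, beta)
phi_parts = gamma_cf(u, alpha / n, beta) ** n   # n i.i.d. summands

assert np.allclose(phi_whole, phi_parts)
```

The same factorisation argument applies coordinatewise to any finite dimensional distribution, which is exactly what the definition requires.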
We are now ready to state our first result.
Theorem 2.3
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if
$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i\left( X_{t_{n}}^{(j)}-X_{0}^{(k)}\right) }\right] ={\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] {\mathbb {E}}\left[ e^{-iX_{0}^{(k)}}\right] \end{aligned}$$
for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
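As a hedged sanity check (an illustration we add here, with \(d=l=1\) and an assumed covariance function, not a case treated in the paper), the characteristic-function condition of Theorem 2.3 can be verified in closed form for a centred stationary Gaussian field, whose Lévy measure is zero so the atom condition holds trivially: if \(r(t)\rightarrow 0\) then \({\mathbb {E}}[e^{i(X_{t}-X_{0})}]=e^{-(r(0)-r(t))}\rightarrow e^{-r(0)}={\mathbb {E}}[e^{iX_{0}}]{\mathbb {E}}[e^{-iX_{0}}]\).

```python
import numpy as np

# Closed-form check of the mixing condition for a centred stationary Gaussian
# process with (assumed) covariance r(t) = exp(-|t|), so r(0) = 1.
# Var(X_t - X_0) = 2(r(0) - r(t)), hence E[e^{i(X_t - X_0)}] = exp(-(r(0) - r(t))).

r = lambda t: np.exp(-np.abs(t))

def lhs(t):
    """E[exp(i(X_t - X_0))] via the Gaussian characteristic function."""
    return np.exp(-(r(0) - r(t)))

rhs = np.exp(-r(0))   # E[e^{i X_0}] E[e^{-i X_0}] = exp(-r(0)/2)^2

for t in [1.0, 5.0, 20.0, 80.0]:
    print(t, lhs(t) - rhs)

assert abs(lhs(80.0) - rhs) < 1e-12   # the gap vanishes as t -> infinity
```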
The above theorem relies on the following result, which is the multivariate random field extension of the Maruyama conditions (see Theorem 6 of [11]).
Theorem 2.4
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if
- (MM1): the covariance matrix function \(\Sigma (t_{n})\) of the Gaussian part of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) tends to 0 as \(n\rightarrow \infty \),
- \((MM2')\): \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\),
where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}\).
Notice that the above conditions are fewer than the original Maruyama conditions. This is because we use the following lemma, which is a multivariate random field extension of Lemma 1 of [10] and Lemma 2.2 of [6].
Lemma 2.5
Assume that \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) holds for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\) and \((t_{n})_{n\in {\mathbb {N}}}\subset {\mathbb {R}}^{l}\). Then one has
2.1 Related Results and Extensions
In this section, we present different results which follow from, are related to or extend the theorems presented in the previous section.
The first result is a corollary which follows immediately from Theorem 2.3, and states that a multivariate random field is mixing if and only if its components are pairwise mixing.
Corollary 2.6
An \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field \(X=(X_{t})_{t\in {\mathbb {R}}^{l}}\) with \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\) is mixing if and only if the bivariate random fields \((X^{(j)},X^{(k)})\), \(j,k\in \{1,\ldots ,d\}\), \(j<k\), are all mixing.
Proof
It follows immediately from Theorem 2.3. \(\square \)
The following corollary is a generalisation of Corollary 2.5 of [6].
Corollary 2.7
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field. Then, with the previous notation, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if the covariance matrix function \(\Sigma (t_{n})\) of the Gaussian part tends to 0 and
$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{{\mathbb {R}}^{2}}\min (1,|xy|)\,Q_{0t_{n}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)=0,\quad j,k=1,\ldots ,d, \end{aligned}$$
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
Proof
We can follow the argument by [6]. To this end, note that if we assume that (5) holds, then conditions (MM1) and \((MM2')\) hold and, thus, Theorem 2.4 implies that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.
For the other direction, assume that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing. Then, by Theorem 2.4, condition (MM1) holds. Furthermore, for every \(\delta >0\) with \(Q_{jk}(\partial K_{\delta })=0\) and any \(j,k=1,\ldots ,d\) (cf. (21)),
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), where the symbol “\(\rightharpoonup \)” means convergence in the weak topology. In addition, we know that the Lévy measures \(Q_{jk}\) are concentrated on the axes of \({\mathbb {R}}^{2}\). Now consider a \(\delta >0\) such that conditions (22) and (6) hold. Then we have
Letting \(\epsilon \searrow 0\), we obtain that \(\limsup \nolimits _{n\rightarrow \infty }\int _{{\mathbb {R}}^{2}}\min (1,|xy|)Q_{0t_{n}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)=0\) for any \(j,k=1,\ldots ,d\). Finally,
Therefore,
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), hence we obtain that condition (5) is satisfied. \(\square \)
The next two results are reformulations of Theorem 2.3. However, the first requires a short preliminary introduction, which will also be useful for Sect. 4. Recall that the codifference \(\tau (X_{1},X_{2})\) of an ID real bivariate random vector \((X_{1},X_{2})\) is defined as
$$\begin{aligned} \tau (X_{1},X_{2}):=\log {\mathbb {E}}\left[ e^{i(X_{1}-X_{2})}\right] -\log {\mathbb {E}}\left[ e^{iX_{1}}\right] -\log {\mathbb {E}}\left[ e^{-iX_{2}}\right] , \end{aligned}$$
where \(\log \) is the distinguished logarithm as defined on p. 33 of [19]. Following [6], we recall that the autocodifference function of an \({\mathbb {R}}^{d}\)-valued strictly stationary ID process \((X_{t})_{t\in {\mathbb {R}}}\) is defined as \(\tau (t)=\left( \tau ^{(jk)}(t)\right) _{j,k=1,\ldots ,d}\) with \(\tau ^{(jk)}(t):=\tau \left( X_{0}^{(k)},X_{t}^{(j)}\right) \). For an \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) the autocodifference field \(\tau (t)\), \(t\in {\mathbb {R}}^{l}\), is defined in the same way.
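As a small worked example we add here (not from the paper): for a centred Gaussian vector \((X_{1},X_{2})\) the Lévy measure vanishes, the distinguished logarithm is the ordinary one, and \(\log {\mathbb {E}}[e^{iY}]=-\tfrac{1}{2}\mathrm {Var}(Y)\) for centred Gaussian Y, so the codifference collapses to the covariance.

```python
import numpy as np

# Codifference of a centred Gaussian pair (X1, X2):
#   tau = log E[e^{i(X1-X2)}] - log E[e^{i X1}] - log E[e^{-i X2}]
#       = -Var(X1 - X2)/2 + Var(X1)/2 + Var(X2)/2 = Cov(X1, X2).

def gaussian_codifference(var1, var2, cov):
    log_joint = -0.5 * (var1 + var2 - 2 * cov)   # log E[e^{i(X1 - X2)}]
    log_m1 = -0.5 * var1                          # log E[e^{i X1}]
    log_m2 = -0.5 * var2                          # log E[e^{-i X2}]
    return log_joint - log_m1 - log_m2

assert np.isclose(gaussian_codifference(2.0, 3.0, 0.7), 0.7)
assert gaussian_codifference(1.0, 1.0, 0.0) == 0.0   # independence gives tau = 0
```

In this Gaussian special case, Corollary 2.8 below reduces to the familiar statement that the autocovariance of the field tends to zero.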
Corollary 2.8
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if \(\tau (t_{n})\rightarrow 0\) as \(n\rightarrow \infty \) for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
Proof
It follows immediately from Theorem 2.3. \(\square \)
Corollary 2.9
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if
$$\begin{aligned} \lim _{\Vert t\Vert \rightarrow \infty }{\mathbb {E}}\left[ e^{i\left( X_{t}^{(j)}-X_{0}^{(k)}\right) }\right] ={\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] {\mathbb {E}}\left[ e^{-iX_{0}^{(k)}}\right] \end{aligned}$$
for any \(j,k=1,\ldots ,d\), where \(\Vert \cdot \Vert \) is any norm on \({\mathbb {R}}^{l}\) (e.g. the sup or the Euclidean norm) and \(t\in {\mathbb {R}}^{l}\).
Proof
“\(\Rightarrow \)”: Assume that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing. Then by Theorem 2.3 we know that
holds for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\). Now consider the following simple result.
Let \(M_{1}=(A_{1},d_{1})\) and \(M_{2}=(A_{2},d_{2})\) be two metric spaces. Let \(S\subseteq A_{1}\) be an open set of \(M_{1}\) and let f be a mapping defined on S. Then \(\lim \nolimits _{x\rightarrow c}f(x)=l\) if and only if for any sequence \((x_{n})_{n\in {\mathbb {N}}}\) of points in S such that \(x_{n}\ne c\) for all \(n\in {\mathbb {N}}\) and \(\lim \nolimits _{n\rightarrow \infty }x_{n}=c\), we have \(\lim \nolimits _{n\rightarrow \infty }f(x_{n})=l\).
From this result and from the fact that we are considering any sequence such that \(\lim \nolimits _{n\rightarrow \infty }\Vert t_{n}\Vert _{\infty }=\infty \), we obtain Eq. (7).
“\(\Leftarrow \)”: Assume that (7) holds. Then (8) holds by the result stated above, and by Theorem 2.3 we obtain that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.
Now consider the set \({\mathscr {E}}:=\left\{ (t_{n})_{n\in {\mathbb {N}}}\subset {\mathbb {R}}^{l}:\lim \nolimits _{n\rightarrow \infty }\Vert t_{n}\Vert =\infty , \text{ where } \Vert \cdot \Vert \text{ is } \text{ any } \text{ norm } \text{ on } {\mathbb {R}}^{l} \right\} \). Notice that any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\) belongs to \({\mathscr {E}}\) and vice versa, because on the finite dimensional vector space \({\mathbb {R}}^{l}\) any norm \(\Vert \cdot \Vert _{a}\) is equivalent to any other norm \(\Vert \cdot \Vert _{b}\). Hence, we obtain that Eq. (8) holds for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {E}}\), and by applying the argument above we obtain our result. \(\square \)
Remark 2.10
The extension carried out in the above corollary can be applied to all our results that hold for “any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\)”.
The next result is a multivariate and random field extension of Theorem 2 of Rosinski and Zak [13] and it will help us to generalise Theorem 2.3.
Theorem 2.11
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field such that \(Q_{0}\), the Lévy measure of \(X_{0}\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})\ne 0\). In other words, \(Q_{0}\) has atoms in this set. Let
Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if for some \(a=(a_{1},\ldots ,a_{d})\in {\mathbb {R}}^{d}{\setminus } Z\), with \(a_{p}\ne 0\) for \(p=1,\ldots ,d\),
$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i\left( a_{j}X_{t_{n}}^{(j)}-a_{k}X_{0}^{(k)}\right) }\right] ={\mathbb {E}}\left[ e^{ia_{j}X_{0}^{(j)}}\right] {\mathbb {E}}\left[ e^{-ia_{k}X_{0}^{(k)}}\right] \end{aligned}$$
for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
Proof
Consider an element \(a\in {\mathbb {R}}^{d}{\setminus } Z\) with \(a_{p}\ne 0\) for \(p=1,\ldots ,d\). We know that the set of atoms of any \(\sigma \)-finite measure is countable (the proof is straightforward) and that any Lévy measure is \(\sigma \)-finite. Hence, the set of atoms of \(Q_{0}\) is countable, which implies that Z is countable, and so such an a exists. Now, let \(M_{a}:={\text {diag}}(a_{1},\ldots ,a_{d})\) be the diagonal matrix with diagonal entries \(a_{1},\ldots ,a_{d}\).
Notice that \(M_{a}\) is an invertible \(d\times d\) matrix and \(X_{t}\) is a d-dimensional column vector. We have that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if \((M_{a}X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing. This is because, by Definition 2.1, it is enough to show that for every \(m\in {\mathbb {N}}\), \(\lambda =(s_{1},\ldots ,s_{m})'\in {\mathbb {R}}^{ml}\) and \(\theta =(\theta _{1},\ldots ,\theta _{m})'\in {\mathbb {R}}^{md}\) we have \(\langle \theta ,M_{a}\star X_{\lambda }\rangle =\langle {\tilde{\theta }},X_{\lambda }\rangle \), where \(M_{a}\star X_{\lambda }:=(M_{a}X_{s_{1}},\ldots ,M_{a}X_{s_{m}})'\) and \({\tilde{\theta }}\in {\mathbb {R}}^{md}\). Notice that for \(m=1\) we have \(M_{a}\star X_{t}:=M_{a}X_{t}\), \(t\in {\mathbb {R}}^{l}\). Indeed, we have \(\langle \theta ,M_{a}\star X_{\lambda }\rangle =\sum _{j=1}^{d}\sum _{k=1}^{m}a_{j}X_{s_{k}}^{(j)}\theta _{jk}=\langle M_{a}\star \theta ,X_{\lambda }\rangle =\langle {\tilde{\theta }},X_{\lambda }\rangle \), where \(M_{a}\star \theta :=(M_{a}\theta _{1},\ldots ,M_{a}\theta _{m})'\in {\mathbb {R}}^{md}\).
Now, the Lévy measure \(Q^{a}_{0}\) of \(M_{a}X_{0}\) is given by \(Q^{a}_{0}(\cdot )=Q_{0}(M_{a}^{-1}(\cdot ))\) (see Proposition 11.10 of [19]). Since \(a\notin Z\), \(Q^{a}_{0}\) has no atoms in the set \(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\}\). Indeed, since \(a\notin Z\), there exists \(j\in \{1,\ldots ,d\}\) such that \(a_{j}\ne 2\pi k/y_{j}\) for any \(k\in {\mathbb {Z}}\) and any atom y of \(Q_{0}\); hence the \(Q^{a}_{0}\)-measure of the above set is zero, because its preimage under \(M_{a}\) has no intersection with the set of atoms of the measure \(Q_{0}\). Finally, by using Theorem 2.3 the proof is complete. \(\square \)
From this result we have the following generalisation of Theorem 2.3.
Corollary 2.12
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if \({\mathscr {L}}(X_{t_{n}}-X_{0}){\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}{\mathscr {L}}(X_{0}-X_{0}')\) for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), where \(X_{0}'\) is an independent copy of \(X_{0}\).
Proof
This result is an immediate consequence of Theorem 2.3, Corollary 2.6 and Theorem 2.11. \(\square \)
We end this section with a simple general result which will also be useful for the next section.
Proposition 2.13
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be a linear combination of independent, stationary, ID and mixing random fields. In other words, let \(r\in {\mathbb {N}}\) and let \((X_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\sum _{k=1}^{r}Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), where \(( Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), \(k=1,\ldots ,r\), are independent \({\mathbb {R}}^{q}\)-valued stationary, ID and mixing random fields. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is stationary, ID and mixing.
3 Mixed Moving Average Field
In this section we will focus on a specific random field: the mixed moving average (MMA) random field. Before introducing this random field we need to recall the definition of an \({\mathbb {R}}^{d}\)-valued Lévy basis and the related integration theory. Lévy bases are also called infinitely divisible independently scattered random measures in the literature. In the following let S be a non-empty topological space, \({\mathscr {B}}(S)\) be the Borel-\(\sigma \)-field on S and \(\pi \) be some probability measure on \((S,{\mathscr {B}}(S))\). We denote by \({\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) the collection of all Borel sets in \(S\times {\mathbb {R}}^{l}\) with finite \(\pi \otimes \lambda ^{l}\)-measure, where \(\lambda ^{l}\) denotes the l-dimensional Lebesgue measure.
Definition 3.1
A d-dimensional Lévy basis on \(S\times {\mathbb {R}}^{l}\) is an \({\mathbb {R}}^{d}\)-valued random measure \(\Lambda =\{\Lambda (B):B\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\}\) satisfying:
-
(i)
the distribution of \(\Lambda (B)\) is infinitely divisible for all \(B\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\),
-
(ii)
for an arbitrary \(n\in {\mathbb {N}}\) and pairwise disjoint sets \(B_{1},\ldots ,B_{n}\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) the random variables \(\Lambda (B_{1}),\ldots ,\Lambda (B_{n})\) are independent,
-
(iii)
for any pairwise disjoint sets \(B_{1},B_{2},\ldots \in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) with \(\bigcup _{n\in {\mathbb {N}}}B_{n}\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) we have, almost surely, \(\Lambda (\bigcup _{n\in {\mathbb {N}}}B_{n})=\sum _{n\in {\mathbb {N}}}\Lambda (B_{n})\).
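Properties (i)–(iii) can be made concrete with a toy compound-Poisson construction (our own assumption-laden sketch, with S trivial and \(l=1\), not the general construction of the paper): marks attached to a Poisson point process define a random measure that is independently scattered and \(\sigma\)-additive by construction.

```python
import numpy as np

# Toy compound-Poisson Levy basis on [0,1]^2: Lambda(B) = sum of the i.i.d.
# marks of the Poisson points falling in B.  Disjoint sets collect disjoint
# groups of points, giving independence (ii) and additivity (iii).

rng = np.random.default_rng(0)
n = rng.poisson(200)                          # Poisson number of points
points = rng.uniform(0, 1, size=(n, 2))       # locations in [0,1)^2
marks = rng.normal(size=n)                    # i.i.d. jump marks

def basis(B):
    """Evaluate Lambda on a half-open box B = (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = B
    inside = ((points[:, 0] >= x0) & (points[:, 0] < x1)
              & (points[:, 1] >= y0) & (points[:, 1] < y1))
    return marks[inside].sum()

# sigma-additivity on a disjoint partition of the whole box:
whole = basis((0.0, 1.0, 0.0, 1.0))
halves = basis((0.0, 0.5, 0.0, 1.0)) + basis((0.5, 1.0, 0.0, 1.0))
assert np.isclose(whole, halves)
```

Since the marks are i.i.d. and the point counts are Poisson, \(\Lambda (B)\) is compound Poisson and hence ID, which is property (i) in this toy case.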
Throughout this section, we shall restrict ourselves to time-homogeneous and factorisable Lévy bases, i.e. Lévy bases with characteristic function given by
for all \(\theta \in {\mathbb {R}}^{d}\) and \(B\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\), where \(\Pi =\pi \otimes \lambda ^{l}\) is the product measure of the probability measure \(\pi \) on S and the Lebesgue measure \(\lambda ^{l}\) on \({\mathbb {R}}^{l}\) and
is the cumulant transform of an ID distribution with characteristic triplet \((\gamma ,\Sigma ,Q)\). We note that the quadruple \((\gamma ,\Sigma ,Q,\pi )\) determines the distribution of the Lévy basis completely and therefore it is called the generating quadruple. Now, we provide an extension of Theorem 3.2 of [6], which does not need a proof since it is a combination of Theorem 3.2 of [6] and Theorem 2.7 of [12]. It concerns the existence of integrals with respect to a Lévy basis.
Remark 3.2
In this section we consider \({\mathbb {R}}^{q}\)-valued random fields, since d is used for the \({\mathbb {R}}^{d}\)-valued Lévy basis, and we denote by \(M_{q\times d}({\mathbb {R}})\) the collection of \(q\times d\) matrices over the field \({\mathbb {R}}\).
Theorem 3.3
Let \(\Lambda \) be an \({\mathbb {R}}^{d}\)-valued Lévy basis with characteristic function of the form (9) and let \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) be a measurable function. Then f is \(\Lambda \)-integrable as a limit in probability in the sense of Rajput and Rosinski [12] if and only if
If f is \(\Lambda \)-integrable, the distribution of \(\int _{S}\int _{{\mathbb {R}}^{l}}f(A,s)\Lambda (\mathrm{d}A,\mathrm{d}s)\) is infinitely divisible with characteristic triplet \((\gamma _\mathrm{int},\Sigma _\mathrm{int},v_\mathrm{int})\) given by
for all Borel sets \(B\subseteq {\mathbb {R}}^{q}{\setminus }\{0\}.\)
Proof
This theorem is a direct combination of Theorem 3.2 of [6] and Theorem 2.7 of [12]. \(\square \)
Let us now introduce the main object of interest of this section: the mixed moving average random field.
Definition 3.4
(Mixed Moving Average Random Field) Let \(\Lambda \) be an \({\mathbb {R}}^{d}\)-valued Lévy basis on \(S\times {\mathbb {R}}^{l}\) and let \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) be a measurable function. If the random field
$$\begin{aligned} X_{t}=\int _{S}\int _{{\mathbb {R}}^{l}}f(A,t-s)\Lambda (\mathrm{d}A,\mathrm{d}s),\quad t\in {\mathbb {R}}^{l}, \end{aligned}$$
exists in the sense of Theorem 3.3 for all \(t\in {\mathbb {R}}^{l}\), it is called a q-dimensional mixed moving average random field (MMA random field for short). The function f is said to be its kernel function.
MMA random fields have been discussed in Surgailis et al. [20] and Veraart [21]. Note that an MMA random field is an ID and strictly stationary random field.
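A rough simulation sketch of the defining integral (our own illustration, with \(l=1\), trivial S, \(q=d=1\), an assumed causal exponential kernel and a compound-Poisson basis; none of these choices come from the paper): discretise \({\mathbb {R}}\) into cells, draw independent basis increments per cell, and sum against the shifted kernel.

```python
import numpy as np

# Grid approximation of X_t = int f(t - s) Lambda(ds) with f(u) = exp(-u) 1_{u>=0}
# and Lambda compound Poisson: each grid cell of length ds receives an
# independent increment (Poisson count of N(0,1) jumps, intensity 2 per unit).

rng = np.random.default_rng(1)
ds = 0.01
s_grid = np.arange(-50, 50, ds)
increments = np.array([rng.normal(size=k).sum()
                       for k in rng.poisson(2.0 * ds, s_grid.size)])

f = lambda u: np.exp(-u) * (u >= 0)            # causal exponential kernel

def X(t):
    """Riemann-type approximation of the MMA integral at time t."""
    return (f(t - s_grid) * increments).sum()

vals = np.array([X(t) for t in np.linspace(0, 10, 21)])
assert np.all(np.isfinite(vals))
```

Because the same increments are shifted through the kernel, sample paths at nearby times are dependent, while Theorem 3.6 below asserts that this dependence dies out in the mixing sense.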
The following lemma is a direct application of Corollary 2.7 to our MMA random field case.
Lemma 3.5
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\int _{S}\int _{{\mathbb {R}}^{l}}f(A,t-s)\Lambda (\mathrm{d}A,\mathrm{d}s))_{t\in {\mathbb {R}}^{l}}\) be an MMA random field where \(\Lambda \) is an \({\mathbb {R}}^{d}\)-valued Lévy basis on \(S\times {\mathbb {R}}^{l}\) with generating quadruple \((\gamma ,\Sigma , Q,\pi )\) and \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) is a measurable function. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
The following theorem is the main result of this section, while the next corollary is an extension of it.
Theorem 3.6
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\int _{S}\int _{{\mathbb {R}}^{l}}f(A,t-s)\Lambda (\mathrm{d}A,\mathrm{d}s))_{t\in {\mathbb {R}}^{l}}\) be an MMA random field where \(\Lambda \) is an \({\mathbb {R}}^{d}\)-valued Lévy basis on \(S\times {\mathbb {R}}^{l}\) with generating quadruple \((\gamma ,\Sigma , Q,\pi )\) and \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) is a measurable function. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.
From the above result and Proposition 2.13, we have this corollary.
Corollary 3.7
Sums of independent MMA random fields are stationary, ID and mixing random fields.
Proof
This corollary is an immediate consequence of Theorem 3.6 and Proposition 2.13. \(\square \)
Remark 3.8
The above corollary holds for any linear combination of independent MMA random fields, including MMA with different Lévy bases and different parameter spaces S.
3.1 Meta-Times and Subordination
In this section we give a brief introduction of the concepts of meta-times and subordination, and present a result which is a corollary of Theorem 3.6.
First, we recall the definition of a homogeneous Lévy sheet (see Definition 2.1 of [4]). Let \(\triangle ^{b}_{a}F\) denote the increment of a function F over an interval \((a,b]\subset {\mathbb {R}}^{k}_{+}\) and let \(a\le b\) mean \(a^{i}\le b^{i}\) for \(i=1,\ldots ,k\) (see [4]), where \({\mathbb {R}}^{m}_{+}=\{x\in {\mathbb {R}}^{m}:x^{i}\ge 0, i=1,\ldots ,m \}\) and \(m\in {\mathbb {N}}\). Let \(k,l\in {\mathbb {N}}\).
Definition 3.9
Let \(X=\{X_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) be a family of random vectors in \({\mathbb {R}}^{d}\). We say that X is a homogeneous Lévy sheet on \({\mathbb {R}}^{k}_{+}\) if \(X_{t}=0\) for all \(t\in \{t\in {\mathbb {R}}^{k}_{+}:t^{j}=0\,\,\text { for some }j\in \{1,\ldots ,k\} \}\) a.s., \(\triangle _{a_{1}}^{b_{1}}X,\ldots ,\triangle _{a_{n}}^{b_{n}}X\) are independent whenever \(n\ge 2\) and \((a_{1},b_{1}],\ldots ,(a_{n},b_{n}]\subset {\mathbb {R}}^{k}_{+}\) are disjoint, X is continuous in probability, \(\triangle _{a+t}^{b+t}X{\mathop {=}\limits ^{d}}\triangle _{a}^{b}X\) for all \(a,b,t\in {\mathbb {R}}^{k}\) with \(a\le b\), and all sample paths of X are lamp.
The concept of lamp (i.e. limits along monotone paths) is the analogue of càdlàg, but in the multiparameter setting. If \(X=\{X_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) is a homogeneous Lévy sheet then \({\mathscr {L}}(\triangle _{a}^{b}X)\in ID({\mathbb {R}}^{d})\).
Now let \(X=\{X_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) be an \({\mathbb {R}}^{d}\)-valued homogeneous Lévy sheet on \({\mathbb {R}}^{k}_{+}\) and \(\Lambda _{X}=\{\Lambda _{X}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}_{+}) \}\) be the homogeneous Lévy basis induced by X, namely \(\Lambda _{X}([0,t])=X_{t}\) a.s. for all \(t\in {\mathbb {R}}^{k}_{+}\). Let \(T=\{T_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) be an \({\mathbb {R}}_{+}\)-valued homogeneous Lévy sheet and \(\Lambda _{T}=\{\Lambda _{T}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}_{+}) \}\) be the non-negative homogeneous Lévy basis induced by T. Define \({\mathscr {F}}^{T}=\sigma (\Lambda _{T}(A):A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+}))\) to be the \(\sigma \)-field generated by \(\Lambda _{T}\). Then there exists a \(({\mathscr {F}}^{T},{\mathscr {B}}({\mathbb {R}}^{k}_{+}),{\mathscr {B}}({\mathbb {R}}^{k}))\)-measurable mapping \(\phi _{T}:\Omega \times {\mathbb {R}}^{k}_{+}\rightarrow {\mathbb {R}}^{k}\) such that for all \(\omega \in \Omega \) and \(A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+})\), the set \(\mathbf{T }(A)(\omega ),\) given by \(\mathbf{T }(A)(\omega )=\{x\in {\mathbb {R}}^{k}_{+}:\phi _{T}(\omega ,x)\in A \}\) is bounded and
For each \(\omega \), \(\mathbf{T }(\cdot )(\omega )\) is called a meta-time associated with \(\Lambda _{T}(\cdot )(\omega )\). Let \(M=\{M(A):A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+}) \}\) be defined as
$$\begin{aligned} M(A)(\omega )=\Lambda _{X}(\mathbf{T }(A)(\omega ))(\omega ) \end{aligned}$$
for all \(A\in {\mathscr {B}}({\mathbb {R}}^{k}_{+})\). We say that M appears by extended subordination of \(\Lambda _{X}\) by \(\Lambda _{T}\) (or of X by T). Then by Theorem 5.1 of [4] we have that M is a homogeneous Lévy basis.
Therefore, we have the following corollary of Theorem 3.6.
Corollary 3.10
Let \(X=\{X_{t}:t\in {\mathbb {R}}^{k} \}\) be an \({\mathbb {R}}^{d}\)-valued homogeneous Lévy sheet on \({\mathbb {R}}^{k}\) and \(\Lambda _{X}=\{\Lambda _{X}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}) \}\) be the homogeneous Lévy basis induced by X. Let \(T=\{T_{t}:t\in {\mathbb {R}}^{k} \}\) be an \({\mathbb {R}}_{+}\)-valued homogeneous Lévy sheet and \(\Lambda _{T}=\{\Lambda _{T}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}) \}\) be the non-negative homogeneous Lévy basis induced by T. Let \(M=\{M(A):A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+}) \}\) be an extended subordination of \(\Lambda _{X}\) by \(\Lambda _{T}\). Let \((Y_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\int _{{\mathbb {R}}^{k-l}}\int _{{\mathbb {R}}^{l}}f(B,t-s)M(\mathrm{d}B,\mathrm{d}s))_{t\in {\mathbb {R}}^{l}}\), where \(f:{\mathbb {R}}^{k-l}\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) is a measurable function. Then \((Y_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.
Proof
It is sufficient to notice that the framework introduced above holds for the case \({\mathbb {R}}^{k}\) and not just for \({\mathbb {R}}^{k}_{+}\) (see [4]) and that M is an \({\mathbb {R}}^{d}\)-valued homogeneous Lévy basis on \({\mathbb {R}}^{k}\). Then by using Theorem 3.6 we obtain the result. \(\square \)
4 Weak Mixing and Ergodicity
In this section, we first show how to modify our results to obtain weak mixing versions of the results presented before, and then prove that for stationary ID random fields ergodicity and weak mixing are equivalent. We start with the definitions of a density one set and of weak mixing for stationary random fields.
Definition 4.1
A set \(E\subset {\mathbb {R}}^{l}\) is said to have density zero in \({\mathbb {R}}^{l}\) with respect to the Lebesgue measure \(\lambda \) if
A set \(D\subset {\mathbb {R}}^{l}\) is said to have density one in \({\mathbb {R}}^{l}\) if \({\mathbb {R}}^{l}{\setminus } D\) has density zero in \({\mathbb {R}}^{l}\).
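As a concrete illustration (ours, not from the paper): reading the displayed condition above in its standard form, E has density zero when \(\lambda (E\cap [-T,T]^{l})/(2T)^{l}\rightarrow 0\) as \(T\rightarrow \infty \). With \(l=1\), the set \(E=\bigcup _{n\ge 0}[2^{n},2^{n}+1]\) has density zero, since \(\lambda (E\cap [-T,T])\) grows only logarithmically in T:

```python
def density_ratio(T):
    """lambda(E ∩ [-T, T]) / lambda([-T, T]) for E = union of [2^n, 2^n + 1], n >= 0."""
    covered = 0.0
    n = 0
    while 2 ** n <= T:
        covered += min(T, 2 ** n + 1) - 2 ** n  # overlap of [2^n, 2^n + 1] with [0, T]
        n += 1
    return covered / (2 * T)

# The ratio decays like log2(T) / (2T), so E has density zero in R.
print([density_ratio(10.0 ** k) for k in (2, 4, 6)])
```

The printed ratios decay like \(\log _{2}(T)/(2T)\), consistent with E having density zero.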
The class of all sequences in D that converge to infinity will be denoted by
Definition 4.2
Consider the random field \(X_{t}(\omega )=X_{0}\circ \theta _{t}(\omega )\), \(t\in {\mathbb {R}}^{l}\), where \(\{\theta _{t} \}_{t\in {\mathbb {R}}^{l}}\) is a measure preserving \({\mathbb {R}}^{l}\)-action. Let \(\sigma _{X}\) be the \(\sigma \)-algebra generated by the field \((X_{t})_{t\in {\mathbb {R}}^{l}}\). We say that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if there exists a density one set D such that
for all \(A,B\in \sigma _{X}\) and all \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\).
We are now ready to state the weak mixing version of Theorem 2.3.
Theorem 4.3
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if and only if there exists a density one set \(D\subset {\mathbb {R}}^{l}\) such that
for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\).
Proof
The argument used in the first part of the proof of Theorem 2.3 carries over to the case \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\). The proof is then completed by Theorem 4.4. \(\square \)
Theorem 4.4
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if and only if there exists a density one set \(D\subset {\mathbb {R}}^{l}\) such that
(MM1) the covariance matrix function \(\Sigma (t_{n})\) of the Gaussian part of \((X_{t_{n}})_{t_{n}\in {\mathbb {R}}^{l}}\) tends to 0, as \(n\rightarrow \infty \),
\((MM2')\) \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\),
where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}_{D}\).
Proof
The arguments used in the proof of Theorem 2.4 go through unchanged for the case \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\). \(\square \)
Remark 4.5
Weak mixing versions can also be obtained for the following previously stated results: Corollaries 2.6, 2.7, 2.8, 2.9, Theorem 2.11, Corollary 2.12 and Proposition 2.13. The proofs are omitted because they follow exactly the same arguments as their respective mixing counterparts. The only difference is that \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\) instead of \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), but this does not require any change in the arguments.
Among these results, we have the following corollary, which is the weak mixing version of Corollary 2.7 and will be useful for our next result: the equivalence between weak mixing and ergodicity for stationary ID random fields.
Corollary 4.6
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field. Then, with the previous notation, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if and only if there exists a density one set \(D\subset {\mathbb {R}}^{l}\) such that
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\).
Proof
See discussion in Remark 4.5. \(\square \)
In order to prove the equivalence between ergodicity and weak mixing for ID stationary random fields we will need various preliminary results, some of which have already been proven in the literature. We start with two known results.
Lemma 4.7
(Lemma 4.3 of the latest arXiv version (v3) of [22]) Let \(f:{\mathbb {R}}^{l}\rightarrow {\mathbb {R}}\) be non-negative and bounded. A necessary and sufficient condition for
is that there exists a subset D of density one in \({\mathbb {R}}^{l}\) such that
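The displayed conditions are omitted above; in the standard (Koopman–von Neumann type) formulation, which we assume here, the lemma asserts that \(\lim _{T\rightarrow \infty }(2T)^{-l}\int _{[-T,T]^{l}}f(t)\,\mathrm{d}t=0\) if and only if \(f(t)\rightarrow 0\) as \(t\rightarrow \infty \) along a density one set D. A toy numerical illustration with \(l=1\) (averaging over \([0,T]\) for simplicity):

```python
import math

def f(t):
    """Bounded f >= 0: equals 1 on the density-zero set E = U_{n>=1} [n^2, n^2 + 1], else 0."""
    n = int(math.floor(math.sqrt(t)))
    return 1.0 if n >= 1 and t - n * n <= 1.0 else 0.0

def cesaro_average(T):
    """(1/T) * integral_0^T f(t) dt, computed exactly for this piecewise constant f."""
    total, n = 0.0, 1
    while n * n <= T:
        total += max(0.0, min(T, n * n + 1.0) - n * n)  # overlap of [n^2, n^2 + 1] with [0, T]
        n += 1
    return total / T

print(cesaro_average(10.0 ** 2), cesaro_average(10.0 ** 6))
```

Here f equals 1 on the density-zero set \(E=\bigcup _{n\ge 1}[n^{2},n^{2}+1]\), so f does not converge to 0 along all of \({\mathbb {R}}_{+}\); nevertheless the Cesàro averages vanish (like \(T^{-1/2}\)) and \(f(t)\rightarrow 0\) along the density one set \(D=E^{c}\).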
Lemma 4.8
(Theorem 2.3.2 of [5]) Let \(\mu \) be a finite measure on \({\mathbb {R}}^{l}\). Then
where \({\hat{\mu }}\) denotes the Fourier transform of \(\mu \).
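The displayed formula is omitted above; in its standard form, this is the recovery of atom masses from averages of the Fourier transform, \(\mu (\{a\})=\lim _{T\rightarrow \infty }(2T)^{-l}\int _{[-T,T]^{l}}e^{-i\langle t,a\rangle }{\hat{\mu }}(t)\,\mathrm{d}t\). A numerical sketch for a hypothetical two-atom measure on \({\mathbb {R}}\) (the atoms and weights below are invented for the example):

```python
import cmath

# Hypothetical purely atomic measure mu = 0.3*delta_a + 0.7*delta_b on R.
a, b = 1.0, 2.5

def mu_hat(t):
    """Fourier transform of mu: integral of e^{i t x} mu(dx)."""
    return 0.3 * cmath.exp(1j * t * a) + 0.7 * cmath.exp(1j * t * b)

def atom_mass(point, T, steps=60000):
    """Midpoint-rule approximation of (1/(2T)) * integral_{-T}^{T} e^{-i t point} mu_hat(t) dt."""
    h = 2.0 * T / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        t = -T + (k + 0.5) * h
        total += cmath.exp(-1j * t * point) * mu_hat(t)
    return (total * h / (2.0 * T)).real

# As T grows, the average converges to mu({point}): ~0.3 at a, ~0.7 at b, ~0 elsewhere.
```

The cross terms average out as \(\sin (T(b-a))/(T(b-a))\), leaving exactly the mass of the atom at the chosen point.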
The following lemma is an adaptation to our framework of Lemma 3 of [14].
Lemma 4.9
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field and \(Q_{0t}^{jk}\) be the Lévy measure of \({\mathscr {L}}(X_{0}^{(j)},X_{t}^{(k)})\). Then for every \(\delta >0\) and \(j,k=1,\ldots ,d\), the family of finite measures \((Q_{0t}^{jk}|_{K_{\delta }^{c}})_{t\in {\mathbb {R}}^{l}}\) is weakly relatively compact and
where \(K_{\delta }=\{(x,y):x^{2}+y^{2}\le \delta ^{2} \}\).
Proof
This result comes directly from the proof of Lemma 3 of [14]. \(\square \)
Now we will investigate the auto-codifference matrix of the \({\mathbb {R}}^{d}\)-valued stationary ID random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\), which was already introduced in Section 2.1. Consider
with
where \(\Gamma ^{jk}_{t_{n}}\) is the covariance function of the Gaussian part of \((X_{0}^{(k)},X_{t_{n}}^{(j)})\) and it is given by
Proposition 4.10
Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued ID random field (not necessarily stationary). Then for any \(j,k=1,\ldots ,d\) the function
is non-negative definite.
Proof
We argue as in Proposition 2 of [14]. Without loss of generality let \(t\ge s\) with \(t,s\in {\mathbb {R}}^{l}\). As seen above, we have
Since \((s,t)\mapsto \sigma _{t-s}^{jk}\) is non-negative definite, being a covariance function, it remains only to show that the second term on the right-hand side of (12) is non-negative definite. This is a consequence of Lemma 4 in [14]. \(\square \)
We can now state and later prove (see “Appendix C”) the second main theorem of this section, which states the equivalence between ergodicity and weak mixing for ID stationary random fields.
Theorem 4.11
Let \(l,d\in {\mathbb {N}}\). Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is ergodic if and only if it is weakly mixing.
5 Conclusion
In this work we derived different results concerning ergodicity and mixing properties of multivariate stationary infinitely divisible random fields. A possible future direction is the investigation of the statistical implications of the results presented in this paper. For example, for multivariate stochastic processes, showing that mixed moving average (MMA) processes are mixing implies that the corresponding moment-based estimators (such as the generalised method of moments (GMM)) are consistent (see [6]). However, it is not clear whether a similar result holds in the random field case. Other possible directions would be to extend the present results to random fields on manifolds or on infinite-dimensional vector spaces. However, the literature there is not as developed as in the \({\mathbb {R}}^{l}\)-case, and further work is required.
References
Barndorff-Nielsen, O.E., Benth, F.E., Veraart, A.E.D.: Recent Advances in Ambit Stochastics with a View Towards Tempo-Spatial Stochastic Volatility/Intermittency, vol. 104, pp. 25–60. Banach Center Publications, Warszawa (2014)
Barndorff-Nielsen, O.E., Schmiegel, J.: Lévy-based tempo-spatial modelling; with applications to turbulence. Uspekhi Mat. Nauk 159, 63–90 (2004)
Barndorff-Nielsen, O.E., Schmiegel, J.: Ambit processes: with applications to turbulence and tumour growth. In: Stochastic Analysis and Applications: The Abel Symposium 2005, pp. 93–124. Springer (2007)
Barndorff-Nielsen, O.E., Pedersen, J.: Meta times and extended subordination. Theory Probab. Appl. 56(2), 319–327 (2012)
Cuppens, R.: Decomposition of Multivariate Distributions. Academic Press, New York (1975)
Fuchs, F., Stelzer, R.: Mixing conditions for multivariate infinitely divisible processes with an application to mixed moving averages and the supOU stochastic volatility model. ESAIM Probab. Stat. 17, 455–471 (2013)
Gross, A.: Some mixing conditions for stationary symmetric stable stochastic processes. Stoch. Process. Appl. 51, 277–295 (1994)
Jónsdóttir, K., Rønn-Nielsen, A., Mouridsen, K., Vedel Jensen, E.: Lévy-based modelling in brain imaging. Scand. J. Stat. 40(3), 511–529 (2013)
Kokoszka, P., Taqqu, M.S.: A characterization of mixing processes of type G. J. Theor. Probab. 9, 3–17 (1996)
Magdziarz, M.: A note on Maruyama’s mixing theorem. Theory Probab. Appl. 54, 322–324 (2010)
Maruyama, G.: Infinitely divisible processes. Theory Probab. Appl. 15, 1–22 (1970)
Rajput, B.S., Rosinski, J.: Spectral representations of infinitely divisible processes. Probab. Theory Relat. Fields 82, 451–487 (1989)
Rosinski, J., Zak, T.: Simple conditions for mixing of infinitely divisible processes. Stoch. Process. Appl. 61, 277–288 (1996)
Rosinski, J., Zak, T.: The equivalence of ergodicity and weak mixing for infinitely divisible processes. J. Theor. Probab. 10(1), 73–86 (1997)
Roy, E.: Ergodic properties of Poissonian ID processes. Ann. Probab. 35, 551–576 (2007)
Roy, E.: Poisson suspensions and infinite ergodic theory. Ergod. Theory Dyn. Syst. 29, 667–683 (2009)
Roy, P.: Nonsingular group actions and stationary S\(\alpha \)S random fields. Proc. Am. Math. Soc. 138, 2195–2202 (2010)
Roy, P., Samorodnitsky, G.: Stationary symmetric \(\alpha \)-stable discrete parameter random fields. J. Theor. Probab. 21, 212–233 (2008)
Sato, K.: Lévy Processes and Infinitely Divisible Distributions, Volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1999)
Surgailis, D., Rosinski, J., Mandrekar, V., Cambanis, S.: Stable mixed moving averages. Probab. Theory Relat. Fields 97(4), 543–558 (1993)
Veraart, A.E.D.: Stationary and multi-self-similar random fields with stochastic volatility. Stochastics 87(5), 848–870 (2015)
Wang, Y., Roy, P., Stoev, S.A.: Ergodic properties of sum- and max-stable stationary random fields via null and positive group actions. Ann. Probab. 41(1), 206–228 (2013)
Acknowledgements
RP would like to thank the Centre for Doctoral Training in Mathematics of Planet Earth and the Grantham Institute for providing funding for this research. Funding was provided by Engineering and Physical Sciences Research Council (Grant No. 1643696).
Author information
Authors and Affiliations
Corresponding author
Appendices
Appendix A: Proofs of Sect. 2
1.1 Proof of Theorem 2.3
This proof is an extension to the random field case of the proof of Theorem 2.1 of [6].
“\(\Rightarrow \)”: Assume \((X_{t})_{t\in {\mathbb {R}}^{l}}\) to be mixing. This implies
for any \(\theta _{1},\theta _{2}\in {\mathbb {R}}^{d}\). Now, setting \((\theta _{1},\theta _{2})=(-e_{k},e_{j})\), \(j,k=1,\ldots ,d\), with \(e_{j}\) the j-th unit vector in \({\mathbb {R}}^{d}\), Eq. (4) is satisfied.
“\(\Leftarrow \)”: Assume that Eq. (4) holds for every \(j,k=1,\ldots ,d\); then we have
for every \(j,k=1,\ldots ,d\). To see this, we extend the proof of Theorem 1 of Rosinski and Zak [13] to the multivariate random field case. We first prove the following: for every complex-valued \(Y\in L^{2}(\Omega ,{\mathscr {F}},{\mathbb {P}})\)
for \(j=1,\ldots ,d\). It is possible to see that Eq. (14) holds for \(Y\in H_{0}:=lin\{1,e^{iX_{t}^{(j)}}, t\in {\mathbb {R}}^{l}\}:=\{Z\in L^{2}(\Omega ,{\mathscr {F}},{\mathbb {P}}):Z=a_{0}1+\sum _{i=1}^{n}a_{i}e^{iX_{t_{i}}^{(j)}},t_{i}\in {\mathbb {R}}^{l},n\in {\mathbb {N}}\}\). Consider now the \(L^{2}\)-closure of \(H_{0}\) and call it H. Then by standard density argument Eq. (14) is true for any \(Y\in H\). Now, consider any \(Y\in L^{2}(\Omega ,{\mathscr {F}},{\mathbb {P}})\) (complex valued). We can write \(Y=Y_{1}+Y_{2}\), where \(Y_{1}\in H\) and \(Y_{2}\in H^{\bot }:=\{W\in L^{2}(\Omega ,{\mathscr {F}},{\mathbb {P}}):{\mathbb {E}}[Z{\bar{W}}]=0 ~\forall Z\in H \}\), where \({\mathbb {E}}[\cdot \,\,{\bar{\cdot }}]\) denotes the inner product (usually written as \(<\cdot ,\cdot>\)) from \(L^{2}(\Omega ,{\mathscr {F}},{\mathbb {P}})\times L^{2}(\Omega ,{\mathscr {F}},{\mathbb {P}})\) to \({\mathbb {R}}\). Notice that we are using the conjugate (i.e. in symbol “ \({\bar{~}}\) ”) because this is how the inner product space over a complex space is defined. Further, notice that we can write \(Y=Y_{1}+Y_{2}\) since the \(L^{2}\)-space endowed with that inner product is a Hilbert space.
Since \({\mathbb {E}}[e^{iX_{t_{n}}}{\bar{Y}}_{2}]=0\) for every \(t_{n}\in {\mathbb {R}}^{l}\) and \({\mathbb {E}}[{\bar{Y}}_{2}]=0\) by definition of \(H^{\bot }\), we get that
Hence we have equation (14). Putting now \(Y=e^{-iX_{0}^{(k)}}\) in (14), for \(k=1,\ldots ,d\), we obtain
which is Eq. (13). We now prove that equations (4) and (13) imply the multidimensional Maruyama conditions:
(MM1) the covariance matrix function \(\Sigma (t_{n})\) of the Gaussian part of \((X_{t_{n}})_{t_{n}\in {\mathbb {R}}^{l}}\) tends to 0, as \(n\rightarrow \infty \), where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}\).
(MM2) \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) and \(\lim \nolimits _{n\rightarrow \infty }\int _{0<\Vert x\Vert ^{2}+\Vert y\Vert ^{2}\le 1}\Vert x\Vert \cdot \Vert y\Vert Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y)=0\) for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\).
Actually, we will not prove (MM2) directly; instead, we will prove the following condition:
\((MM2')\) \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\).
This is because in Lemma 2.5 we will prove that \((MM2')\) implies (MM2).
Regarding (MM1), we have the following. Since \((X_{0},X_{t_{n}})\) has a 2d-dimensional ID distribution, its characteristic function can be written, for every \(\theta =(\theta _{1},\theta _{2})'\in {\mathbb {R}}^{d}\times {\mathbb {R}}^{d}\), using the Lévy–Khintchine formula as (see Theorem 8.1 of [19] and the proof of Theorem 2.1 of [6])
By substituting \((-e_{k},e_{j})\), \((0,e_{j})\) and \((-e_{k},0)\), \(j,k=1,\ldots ,d\) for \((\theta _{1},\theta _{2})\) in (16) we get the description of equation (4) in terms of the covariance matrix function of the Gaussian part of \((X^{(j)}_{t_{n}},X^{(k)}_{0})\) and the Lévy measure \(Q_{0t_{n}}\), namely
for arbitrary \(j,k=1,\ldots ,d\), where \(\sigma _{jk}(t_{n})\) is the (k, j)-th element of \(\Sigma (t_{n})\). By using the identity \(\mathrm{Re}(e^{i(-x+y)}-e^{-ix}-e^{iy}+1)=(\cos x-1)(\cos y-1)+\sin x\sin y\) and taking the logarithm on both sides, we get
for any \(j,k=1,\ldots ,d.\) Applying the same argument to \({\mathbb {E}}\left[ e^{i\left( X_{t_{n}}^{(j)}+X_{0}^{(k)}\right) }\right] \left( {\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] {\mathbb {E}}\left[ e^{iX_{0}^{(k)}}\right] \right) ^{-1}\) we obtain
for any \(j,k=1,\ldots ,d\). Putting together equations (17) and (18), due to the consistency of the Lévy measures (see Proposition 11.10 of [19] and the proof of Theorem 2.1 of [6]),
for any \(j,k=1,\ldots ,d\), where \(Q_{0t_{n}}^{(jk)}\) denotes the Lévy measure of \({\mathscr {L}}(X_{0}^{(k)},X_{t_{n}}^{(j)})\) on \(({\mathbb {R}}^{2},{\mathscr {B}}({\mathbb {R}}^{2}))\).
Now, for fixed \(j,k\in \{1,\ldots ,d\}\) the family of measures \(\{{\mathscr {L}}(X_{0}^{(k)},X_{t_{n}}^{(j)})\}_{n\in {\mathbb {N}}}\) is tight. To see this, note that by letting \(K_{r}=\{(x,y)\in {\mathbb {R}}^{2}:x^{2}+y^{2}\le r^{2}\}\) and using the stationarity of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) we have
hence \(\lim \nolimits _{r\rightarrow \infty }\sup \nolimits _{t_{n}\in {\mathbb {R}}^{l}}{\mathbb {P}}((X_{0},X_{t_{n}})\notin K_{r})=0\). Since \(\{{\mathscr {L}}(X_{0},X_{t_{n}})\}_{n\in {\mathbb {N}}}\) is tight, so is \(\{{\mathscr {L}}(X_{0}^{(k)},X_{t_{n}}^{(j)})\}_{n\in {\mathbb {N}}}\). Notice that we prove more than necessary, since it suffices to establish the above limit with \(\sup \nolimits _{n\in {\mathbb {N}}}\). Thus, by Prokhorov's theorem the family \(\{{\mathscr {L}}(X_{0}^{(k)},X_{t_{n}}^{(j)})\}_{n\in {\mathbb {N}}}\) is sequentially compact (in the topology of weak convergence). Choose any sequence \((\tau _{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\) and let \(F_{jk}\) be a cluster (or accumulation) point of the family \(\{{\mathscr {L}}(X_{0}^{(k)},X_{\tau _{n}}^{(j)})\}_{n\in {\mathbb {N}}}\). Then, using Lemma 7.8 on page 34 of [19], we have that \(F_{jk}\) is an ID distribution on \({\mathbb {R}}^{2}\) with some Lévy measure \(Q_{jk}\). Moreover, let \((t_{m})_{m\in {\mathbb {N}}}\) be a subsequence of \((\tau _{n})_{n\in {\mathbb {N}}}\) such that
Notice that \((t_{m})_{m\in {\mathbb {N}}}\in {\mathscr {T}}\) as well. We know that \(F_{jk}\) exists by Prokhorov's theorem on Euclidean spaces. Then, for every \(\delta >0\) with \(Q_{jk}(\partial K_{\delta })=0\),
Since \((\cos x-1)(\cos y-1)\ge 0\) and using equations (19) and (21), we deduce that
Every Lévy measure \(Q_{jk}\) is concentrated on the set of straight lines \(\{(x,y)\in {\mathbb {R}}^{2}:x\in 2\pi {\mathbb {Z}}~\text{ or }~y\in 2\pi {\mathbb {Z}}\}\). This is because the integrand \((\cos x-1)(\cos y-1)\) vanishes only on these lines and is positive elsewhere, and because \(\delta \) can be taken arbitrarily small.
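Both elementary facts used here — the identity \(\mathrm{Re}(e^{i(-x+y)}-e^{-ix}-e^{iy}+1)=(\cos x-1)(\cos y-1)+\sin x\sin y\) and the observation that the non-negative factor \((\cos x-1)(\cos y-1)\) vanishes exactly when x or y lies in \(2\pi {\mathbb {Z}}\) — can be checked numerically (an illustration, of course, not a proof):

```python
import cmath, math

def lhs(x, y):
    """Re( e^{i(-x+y)} - e^{-ix} - e^{iy} + 1 )."""
    return (cmath.exp(1j * (-x + y)) - cmath.exp(-1j * x) - cmath.exp(1j * y) + 1).real

def rhs(x, y):
    return (math.cos(x) - 1) * (math.cos(y) - 1) + math.sin(x) * math.sin(y)

pts = [(0.3, -1.2), (2 * math.pi, 0.7), (-5.0, 4.4), (math.pi, 0.0)]
print(all(abs(lhs(x, y) - rhs(x, y)) < 1e-12 for x, y in pts))  # True

# The first summand vanishes when x in 2*pi*Z (here x = 2*pi), positive otherwise.
print(abs((math.cos(2 * math.pi) - 1) * (math.cos(0.7) - 1)) < 1e-12)  # True
```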
By the stationarity of the process and (20), the projection of \(Q_{jk}\) onto the first and second axis coincides with \(Q_{0}^{(k)}\) and \(Q_{0}^{(j)}\), respectively, on the complement of every neighbourhood of zero (because (21) holds on any complement of every neighbourhood of zero). Recall the assumption on \(Q_{0}\), i.e. \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Hence, we have for every \(a\in {\mathbb {Z}}\), \(a\ne 0\),
and similarly \(Q_{jk}({\mathbb {R}}\times \{2\pi a\})=0.\) Therefore, \(Q_{jk}\), \(j,k=1,\ldots ,d\), is concentrated on the axes of \({\mathbb {R}}^{2}\) and on each of them coincides with \(Q_{0}^{(k)}\) and \(Q_{0}^{(j)}\). It is important to stress the main ideas of the above argument. First, we showed that \(Q_{jk}\) is concentrated on \(\{(x,y):x\in 2\pi {\mathbb {Z}}~\text{ or }~y\in 2\pi {\mathbb {Z}}\}\); then, using the assumption of our theorem, we showed that \(Q_{jk}\) can only be nonzero when \(x=0\) or \(y=0\). Further, the stationarity of the process ensures that the projections of \(Q_{jk}\) on the axes coincide with \(Q_{0}^{(k)}\) and \(Q_{0}^{(j)}\), avoiding the case of having \(Q_{0}^{(k)}\) on one axis and \(Q_{t_{m}}^{(j)}\) on the other.
Now observe that, for every \(t\in {\mathbb {R}}^{l}\), using the consistency of the Lévy measure
for any positive \(\epsilon \) and any \(j,k=1,\ldots ,d\), provided \(\delta \) is small enough. This implies, by (22), that for every \(j,k=1,\ldots ,d\) and all \(m\in {\mathbb {N}}\)
for sufficiently small \(\delta >0\). Since \(Q_{jk}\) is concentrated on the axes of \({\mathbb {R}}^{2}\) then \(\int _{K_{\delta }^{c}}\sin x \sin yQ_{jk}(\mathrm{d}x,\mathrm{d}y)=0\) and using (21) we get \(\lim \nolimits _{m\rightarrow \infty }\int _{K_{\delta }^{c}}\sin x \sin yQ_{0t_{m}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)=0\). Thus,
for every \(j,k=1,\ldots ,d\). Combining equations (17), (19) and (23) we obtain that \(\sigma _{jk}(t_{m})\rightarrow 0\) as \(m\rightarrow \infty \) for all \(j,k=1,\ldots ,d\). Since \((t_{m})_{m\in {\mathbb {N}}}\) is a subsequence of an arbitrary sequence \((\tau _{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), it follows that \(\sigma _{jk}(t_{n})\rightarrow 0\) and thus \(\Sigma (t_{n})\rightarrow 0\) as \(n\rightarrow \infty \), for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\). Here we use the fact that if every subsequence of a sequence has a further subsequence converging to the same limit, then the sequence itself converges to that limit. Hence, condition (MM1) follows.
To prove \((MM2')\), observe that, for any \(m\in {\mathbb {N}}\),
In view of (21) we also get
for any \(\delta >0\) and \(j,k=1,\ldots ,d\). Hence, \(\lim \nolimits _{m\rightarrow \infty }Q_{0t_{m}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) for any \(\delta >0\), and together with the fact that \((t_{m})_{m\in {\mathbb {N}}}\) is a subsequence of an arbitrary sequence \((\tau _{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\) we obtain condition \((MM2')\).
Now, by Theorem 2.4 the proof is complete.
1.2 Proof of Theorem 2.4
This proof is an extension to the random field case of the proof of Theorem 2.3 of [6].
“\(\Rightarrow \)”: We have shown in the proof of Theorem 2.3 that mixing implies conditions (MM1) and \((MM2')\). In particular, mixing implies formula (4), which implies (MM1) and \((MM2')\).
“\(\Leftarrow \)”: For the other direction we have the following. Assume that (MM1) and \((MM2')\) hold. Then by Lemma 2.5 condition (MM2) holds. We need to prove that the process is mixing, i.e. for all \(m\in {\mathbb {N}},\lambda =(s_{1},\ldots ,s_{m})',\mu =(p_{1},\ldots ,p_{m})'\in {\mathbb {R}}^{ml}\) and \(\theta _{1},\theta _{2}\in {\mathbb {R}}^{md}\)
where \(X_{\lambda }:=(X_{s_{1}}',\ldots ,X_{s_{m}}')'\in {\mathbb {R}}^{md}\) and \({\tilde{\mu }}_{n}=(p_{1}+t_{n},\ldots ,p_{m}+t_{n})'\), where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}\).
The family of \({\mathbb {R}}^{2md}\)-valued distributions of the ID random fields \((X_{\lambda },X_{{\tilde{\mu }}_{n}})\), denoted \(\{{\mathscr {L}}(X_{\lambda },X_{{\tilde{\mu }}_{n}})\}_{n\in {\mathbb {N}}}\), is tight. Indeed, let K be the 2md-dimensional ball around the origin with radius \(\sqrt{2m}a\); then, by stationarity of the process \((X_{t})_{t\in {\mathbb {R}}^{l}}\), we have
hence \(\lim \nolimits _{a\rightarrow \infty }\sup \nolimits _{\lambda ,\mu ,t_{n}}{\mathbb {P}}((X_{\lambda },X_{{\tilde{\mu }}_{n}})\notin K)=0\). Again, notice that we prove more than necessary, since it suffices to establish the above limit with \(\sup \nolimits _{\lambda ,\mu ,n}\). Let \((\alpha ^{1},\Sigma ^{1},Q^{1})\) and \((\alpha ^{2},\Sigma ^{2},Q^{2})\) be the characteristic triplets of \({\mathscr {L}}(X_{\lambda })\) and \({\mathscr {L}}(X_{\mu })\), respectively.
Suppose F with characteristic triplet \((\alpha ,R,Q)\) is a cluster point of the distributions of \((X_{\lambda },X_{{\tilde{\mu }}_{n}})\). As in the proof of the previous theorem, F is the limit as \(r\rightarrow \infty \) of the distributions of \((X_{\lambda },X_{{\tilde{\mu }}_{r}})\), where \({\tilde{\mu }}_{r}=(p_{1}+t_{r},\ldots ,p_{m}+t_{r})'\) and \((t_{r})\) is a subsequence of \((t_{n})\). Let \((\alpha _{r},\Sigma _{r},Q_{r})\) be the characteristic triplets of \((X_{\lambda },X_{{\tilde{\mu }}_{r}})\). By Lemma 7.8 of [19], F is an ID distribution on \({\mathbb {R}}^{2md}\). We denote by \(\Phi _{r}(\theta _{1},\theta _{2})\) the characteristic function of \({\mathscr {L}}(X_{\lambda },X_{{\tilde{\mu }}_{r}})\) at the point \((\theta _{1},\theta _{2})\in {\mathbb {R}}^{md}\times {\mathbb {R}}^{md}\). The logarithm of \(\Phi _{r}(\theta _{1},\theta _{2})\) can be written (see the proof of Theorem 2.3 above and Theorem 2.3 of [6]) as
We need to prove that \(\log \Phi _{r}(\theta _{1},\theta _{2})\rightarrow \log \Phi _{1}(\theta _{1})+\log \Phi _{2}(\theta _{2})\) as \(r\rightarrow \infty \) for all \(\theta _{1},\theta _{2}\in {\mathbb {R}}^{md}\) where \(\Phi _{1}\) and \(\Phi _{2}\) are the characteristic functions of \(X_{\lambda }\) and \(X_{\mu }\), respectively.
It is immediate that \(I_{1}=i\langle \alpha ^{1},\theta _{1}\rangle +i\langle \alpha ^{2},\theta _{2}\rangle \), just by setting \(\alpha _{r}=(\alpha ^{1}_{r},\alpha ^{2}_{r})'=(\alpha ^{1},\alpha ^{2})'\). Further, by condition (MM1), \(I_{2}\) converges to \(-\frac{1}{2}\langle \Sigma ^{1}\theta _{1},\theta _{1}\rangle -\frac{1}{2}\langle \Sigma ^{2}\theta _{2},\theta _{2}\rangle \) as \(r\rightarrow \infty \), where \(\Sigma ^{1}\) and \(\Sigma ^{2}\) are the md-dimensional covariance matrices of the Gaussian parts of \(X_{\lambda }\) and \(X_{\mu }\), respectively.
For \(I_{4}\), we have
This is because of the identity
which is valid when \(a_{i}b_{j}=0\) for any \(1\le i,j\le m\), and because, letting \(x=\left( x^{(1)'},\ldots ,x^{(m)'}\right) \in ({\mathbb {R}}^{d})^{m}\) and \(y=\left( y^{(1)'},\ldots ,y^{(m)'}\right) \in ({\mathbb {R}}^{d})^{m}\), we have
for any \(\delta >0\), which shows in particular that \(Q(\Vert x\Vert \cdot \Vert y\Vert >0)=0\).
Analogously to x and y we denote by \(\theta _{1}^{(j)}\) and \(\theta _{2}^{(j)}\) the j-th \({\mathbb {R}}^{d}\)-component of \(\theta _{1}\) and \(\theta _{2}\), respectively. Concerning \(I_{3}\), consider the multivariate Taylor expansion of
with respect to the variable \((x,y)'\) at the point \((x_{0},y_{0})'\equiv 0\). For any \(\delta >0\) small enough, we obtain
where R is the remainder, which in integral form is given by
and so
and thus \(6|R|<\epsilon \) for any positive \(\epsilon \), provided \(\delta \) is sufficiently small. Notice that our estimates are sharper than those of [6] because we work with the explicit integral form of the remainder. Moreover, we obtain for every \(j,k=1,\ldots ,m\) and any \(\delta \) small enough that
by using condition (MM2). Finally, we have
For \(J_{1}\), we have
For \(J_{2}\), we have
and, by using the multivariate Taylor expansion and noticing that in this case the expansion is only for the variable x, we obtain
Similar arguments apply to the second addend of the first term of \(I_{3}\).
Combining all the different results, we get
and consequently we obtain the desired result in (24), which concludes the proof.
1.3 Proof of Lemma 2.5
Fix \(\epsilon >0\), put \(B_{\delta }=\{(x,y)\in {\mathbb {R}}^{d}\times {\mathbb {R}}^{d}:\Vert x\Vert ^{2}+\Vert y\Vert ^{2}\le \delta ^{2}\}\) and \(R_{\delta }=\{(x,y)\in {\mathbb {R}}^{d}\times {\mathbb {R}}^{d}:\delta ^{2}<\Vert x\Vert ^{2}+\Vert y\Vert ^{2}\le 1\}\). Then, we get
We will estimate the terms \(P_{1}\) and \(P_{2}\) separately. Using the stationarity of \(Q_{0t_{n}}\) (due to the stationarity of \((X_{t_{n}})_{t_{n}\in {\mathbb {R}}^{l}}\)) and the consistency of the Lévy measure, we get
Here \(Q_{0}\) is the Lévy measure of \(X_{0}\). Thus, for some appropriately small \(\delta _{0}\) we have
For the second term, set \(c_{0}=\min \left\{ \delta _{0},\frac{\epsilon }{8q}\right\} \) with \(q=Q_{0}\left( \Vert x\Vert ^{2}>\frac{\delta _{0}^{2}}{2}\right) <\infty \). Then, for \(C=R_{\delta _{0}}\cap \{\Vert x\Vert \cdot \Vert y\Vert >c_{0}\}\) we obtain
For n large enough we have \(Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >c_{0})<\frac{\epsilon }{2}\) and therefore
Finally, combining (25) and (26), and letting \(\epsilon \rightarrow 0\), we obtain the result of the lemma.
1.4 Proof of Proposition 2.13
By independence of the random fields \(( Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), \(k=1,\ldots ,r\), we have that the Lévy–Khintchine representation of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) can be written as the product of the Lévy–Khintchine representation of the \((Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), \(k=1,\ldots ,r\). In other words, for any \(n\in {\mathbb {N}}\) and \(\theta _{j}\in {\mathbb {R}}^{d}\), \(j=1,\ldots ,n\), we have
Now \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is stationary since for any \(h\in {\mathbb {R}}^{l}\)
because \(( Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\) is stationary, for any \(k=1,\ldots ,r\). Moreover, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is ID since it is a sum of independent ID random fields.
To show that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing we proceed as follows. Consider any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\) and consider the joint random field \((X_{0},X_{t_{n}})\). First, notice that the covariance function of the Gaussian part of \((X_{t_{n}})_{t_{n}\in {\mathbb {R}}^{l}}\), call it \(\Sigma _{X}(t_{n})\), is given by the sum of the covariance functions of the \((Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), call them \(\Sigma _{Y^{k}}(t_{n})\), \(k=1,\ldots ,r\). Moreover, notice also that the Lévy measure of the Lévy–Khintchine formula of the law \({\mathscr {L}}(X_{t_{n}},X_{0})\), call it \(Q_{X,0t_{n}}\), is given by the sum of the Lévy measures of the Lévy–Khintchine formula of the laws \({\mathscr {L}}(Y^{k}_{t_{n}},Y^{k}_{0})\), call them \(Q_{Y^{k},0t_{n}}\), \(k=1,\ldots ,r\). In formulae:
where
In order to prove that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing we need to show that conditions (MM1) and \((MM2')\) hold. These conditions hold for each \(\Sigma _{Y^{k}}(t_{n})\) and \(Q_{Y^{k},0t_{n}}\), \(k=1,\ldots ,r\). Since both the covariance matrix function \(\Sigma _{X}(t_{n})\) of the Gaussian part of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) and its Lévy measure \(Q_{X,0t_{n}}\) are sums of the \(\Sigma _{Y^{k}}(t_{n})\) and \(Q_{Y^{k},0t_{n}}\), respectively, for \(k=1,\ldots ,r\), these conditions hold for them as well. Hence, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.
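The mechanism of this proof — Gaussian covariances and Lévy measures of independent ID summands simply add — can be illustrated in the simplest scalar case with two hypothetical compound Poisson components (all rates and jump laws below are invented for the example):

```python
import cmath

def cp_exponent(lam, phi, theta):
    """Characteristic exponent of a compound Poisson law with rate lam and jump char. function phi."""
    return lam * (phi(theta) - 1)

# Two independent components (stand-ins for Y^1 and Y^2): Levy measures lam_k * F_k.
phi1 = lambda th: cmath.exp(1j * th * 1.0)                          # F1: unit jumps
phi2 = lambda th: 0.5 * (cmath.exp(2j * th) + cmath.exp(-1j * th))  # F2: jumps 2 or -1

theta = 0.73
exp_sum = cp_exponent(3.0, phi1, theta) + cp_exponent(1.5, phi2, theta)

# The independent sum is again compound Poisson with rate 4.5 and jump law
# (3/4.5)*F1 + (1.5/4.5)*F2, i.e. Levy measure 3*F1 + 1.5*F2 — the measures add.
phi_mix = lambda th: (3.0 * phi1(th) + 1.5 * phi2(th)) / 4.5
exp_mix = cp_exponent(4.5, phi_mix, theta)
print(abs(exp_sum - exp_mix) < 1e-12)  # True
```

This additivity of characteristic triplets is what makes conditions (MM1) and \((MM2')\) stable under sums of independent fields.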
Appendix B: Proofs of Sect. 3
1.1 Proof of Lemma 3.5
The argument of the proof of Lemma 3.4 of [6] can be extended to our setting. To this end, notice that we can write
From this and from Theorem 3.3 it is possible to compute the covariance matrix function of the Gaussian part of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) by
where
The Lévy measure \(Q_{0t_{n}}\) of \({\mathscr {L}}(X_{0},X_{t_{n}})\) is given by
for all Borel sets \(B\subseteq {\mathbb {R}}^{2q}{\setminus }\{0\}\), using again Theorem 3.3. Therefore, given this explicit representation of \(Q_{0t_{n}}\) we have that
Now by using Corollary 2.7 we complete the proof.
1.2 Proof of Theorem 3.6
We generalise the arguments given in the proof of Theorem 3.5 of [6]. Following Lemma 3.5, in order to prove this theorem it is sufficient to show that
and
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
First, we concentrate on proving that \(\Vert \Sigma (t_{n})\Vert {\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}0.\) Consider that by the existence of the MMA field we have (see Theorem 3.3)
for any \(t\in {\mathbb {R}}^{l}\), where \(\Sigma ^{\frac{1}{2}}\) denotes the unique positive semidefinite square root of \(\Sigma \). Therefore, for any \(t\in {\mathbb {R}}^{l}\), the function \(g_{t}:S\times {\mathbb {R}}^{l}\rightarrow {\mathbb {R}}, (A,s)\mapsto \Vert f(A,t-s)\Sigma ^{\frac{1}{2}}\Vert \) is an element of \(L^{2}(S\times {\mathbb {R}}^{l},{\mathscr {B}}(S\times {\mathbb {R}}^{l}),\pi \otimes \lambda ^{l};{\mathbb {R}})\). The fact that the measure \(\pi \otimes \lambda ^{l}\) is \(\sigma \)-finite implies that every \(L^{2}\)-function can be approximated (in the \(L^{2}\)-norm) by an elementary function in
Let us now fix an arbitrary \(\epsilon >0\) and choose an elementary function \({\tilde{g}}\in {\mathscr {E}}\) such that
Now we have that for any \(t_{n}\in {\mathbb {R}}^{l}\)
By the Cauchy–Schwarz inequality we obtain
Now notice that
and that \(\Vert g_{t_{n}}\Vert _{L^{2}}=\Vert g_{0}\Vert _{L^{2}}\), by a simple change of variables. In addition, \(\Vert {\tilde{g}}\Vert _{L^{2}}\le \Vert g_{0}\Vert _{L^{2}}+\Vert {\tilde{g}}-g_{0}\Vert _{L^{2}}<\Vert g_{0}\Vert _{L^{2}}+\epsilon \).
Finally, it is possible to see that
for sufficiently large \(t_{n}\) (equivalently, for sufficiently large \(n\), where \(t_{n}\) is an element of \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\)). This is because \({\tilde{g}}\in {\mathscr {E}}\): writing \({\tilde{g}}(A,s)=\sum _{i=1}^{k}c_{i} \mathbf{1 }_{D_{i}\times R_{i}}(A,s)\) with \(k\in {\mathbb {N}}\), each rectangle \(R_{i}\), \(i=1,\ldots ,k\), has finite Lebesgue measure (otherwise \(\int _{S}\int _{{\mathbb {R}}^{l}}|{\tilde{g}}(A,s)|^{2}\,\mathrm{d}s\,\pi (\mathrm{d}A)<\infty \) would fail) and so cannot cover the whole of \({\mathbb {R}}^{l}\); hence \({\tilde{g}}(A,s-t_{n})=0\) on the support of \({\tilde{g}}\) once \(\Vert t_{n}\Vert \) is sufficiently large. Therefore, we have
for sufficiently large n. This yields \(\Vert \Sigma (t_{n})\Vert \rightarrow 0\) as \(n\rightarrow \infty \), for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
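The chain of estimates behind this conclusion can be sketched as follows (a hedged reconstruction consistent with the preceding steps, not necessarily the exact original display). Writing \({\tilde{g}}_{t_{n}}(A,s):={\tilde{g}}(A,s-t_{n})\) and using a submultiplicative matrix norm,

```latex
\Vert \Sigma(t_n)\Vert
  \le \langle g_{t_n},\, g_0 \rangle_{L^2}
  = \langle g_{t_n}-\tilde g_{t_n},\, g_0 \rangle_{L^2}
    + \langle \tilde g_{t_n},\, g_0-\tilde g \rangle_{L^2}
    + \langle \tilde g_{t_n},\, \tilde g \rangle_{L^2}
  \le \epsilon \Vert g_0\Vert_{L^2}
    + \big(\Vert g_0\Vert_{L^2}+\epsilon\big)\,\epsilon
    + 0
```

for \(n\) large, since \(\Vert g_{t_{n}}-{\tilde{g}}_{t_{n}}\Vert _{L^{2}}=\Vert g_{0}-{\tilde{g}}\Vert _{L^{2}}<\epsilon \) by translation invariance, \(\Vert {\tilde{g}}_{t_{n}}\Vert _{L^{2}}=\Vert {\tilde{g}}\Vert _{L^{2}}<\Vert g_{0}\Vert _{L^{2}}+\epsilon \), and the supports of \({\tilde{g}}_{t_{n}}\) and \({\tilde{g}}\) are eventually disjoint. As \(\epsilon >0\) was arbitrary, \(\Vert \Sigma (t_{n})\Vert \rightarrow 0\).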
We now move to the second objective of the proof, namely proving that
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
Consider an arbitrary \(\epsilon >0\) and set \(B_{r}:=\{(x,y)\in {\mathbb {R}}^{q}\times {\mathbb {R}}^{q}:\Vert x\Vert ^{2}+\Vert y\Vert ^{2}\le r^{2}\}\). Recall now the argument used to prove (21). In that argument we did not assume that the random field was mixing, but only that it was stationary and ID. Thus, for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\) the following holds
for some \(R>1\) and some \(n_{0}>0\). Therefore, for all \(n\ge n_{0}\), we obtain that
Now notice that when \(\max \{\Vert u\Vert ,\Vert v\Vert \}\le R\) we have
Hence, we have
By the existence of the MMA random field, for any \(t_{n}\in {\mathbb {R}}^{l}\) the function \(h_{t_{n}}:S\times {\mathbb {R}}^{l}\times {\mathbb {R}}^{d}\rightarrow {\mathbb {R}}\), \(h_{t_{n}}(A,s,x):=\min (1,\Vert f(A,t_{n}-s)x\Vert )\), is an element of \(L^{2}(S\times {\mathbb {R}}^{l}\times {\mathbb {R}}^{d},{\mathscr {B}}(S\times {\mathbb {R}}^{l}\times {\mathbb {R}}^{d}),\pi \otimes \lambda ^{l}\otimes Q;{\mathbb {R}})\). Moreover, since every Lévy measure is \(\sigma \)-finite, the product measure \(\pi \otimes \lambda ^{l}\otimes Q\) is \(\sigma \)-finite as well, and therefore the same approximation argument used in the first part of this proof shows that
for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), which completes the proof.
Appendix C: Proofs of Sect. 4
1.1 Proof of Theorem 4.11
This proof extends the proof of Theorem 1 of [14] to the multivariate random field setting.
“\(\Rightarrow \)”: It is well known that any weakly mixing random field is ergodic.
“\(\Leftarrow \)”: For the other direction we argue as follows. Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an ergodic \({\mathbb {R}}^{d}\)-valued stationary ID random field. Throughout this proof the indices \(j,k\) range over \(1,\ldots ,d\); we will not repeat this every time. We showed before that
where
is the autocodifference field of \((X_{t})_{t\in {\mathbb {R}}^{l}}\). Since \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is an \({\mathbb {R}}^{d}\)-valued ID and stationary random field, we have that
Hence, as shown before in Proposition 4.10, the function
is non-negative definite and \(\tau ^{(jk)}(0)=-\log \left( {\mathbb {E}}[e^{iX_{0}^{(j)}}] \cdot {\mathbb {E}}[e^{iX_{0}^{(k)}}] \right) \) which is a constant. Hence, we can use Bochner’s theorem, which implies that there exists a finite Borel measure v on \({\mathbb {R}}^{l}\) such that
Thus,
where \(\exp (v)=\sum _{n=0}^{\infty }(v^{*n}/n!)\), \(v^{*0}=\delta _{0}\), and the symbol “ \(\hat{}\) ” denotes the Fourier transform. The last equality follows from the convolution theorem. Hence, \(\tau ^{(jk)}={\hat{v}}\). Moreover, since both terms on the RHS of (27) are non-negative definite thanks to Proposition 4.10, again by Bochner’s theorem there exist finite Borel measures \(v_{G}\) and \(v_{P}\) on \({\mathbb {R}}^{l}\) such that
Thus \(v=v_{G}+v_{P}\). The ergodicity of the process implies that
Hence, combining this result with Lemma 4.8, we obtain
which implies that
and so \(v_{G}*v_{G}(\{0\})=0\). Therefore, we have that
Since \(\sigma _{t}^{jk}\) is real, we deduce from Lemma 4.7 that there exists a set D of density one in \({\mathbb {R}}^{l}\), such that
Now we would like to prove a similar result for the Lévy measure of \({\mathscr {L}}(X_{0}^{(j)},X_{t_{n}}^{(k)})\), so that we can then apply Corollary 4.6, which will give us weak mixing of our random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\). By equations (27), (28), (29) and Lemma 4.8 we have that
When taking the real part of \(\frac{1}{(2T)^{l}}\int _{(-T,T]^{l}}\int _{{\mathbb {R}}^{2}}(e^{ix}-1)\overline{(e^{iy}-1)}Q_{0t}^{jk}(\mathrm{d}x,\mathrm{d}y)dt\) we get that
Now consider the two integrands of Eq. (31). By stationarity and Proposition 4.10, the functions
are non-negative definite. By Bochner’s theorem, this implies that there exist finite Borel measures \(\lambda _{1}\) and \(\lambda _{2}\) such that equation (31) can be written as
and using Lemma 4.8 we obtain \(\lambda _{1}(\{0\})=\lambda _{2}(\{0\})=0\). Focusing on the first one, we conclude that
Define \(R_{T}(\mathrm{d}x,\mathrm{d}y)=\frac{1}{(2T)^{l}}\int _{(-T,T]^{l}}Q_{0t}^{jk}(\mathrm{d}x,\mathrm{d}y)dt\). Then we have
Notice that the family of finite measures \((R_{T}|_{K^{c}_{\delta }})_{T>0}\) is weakly relatively compact for every \(\delta >0\). This is because, by Lemma 4.9, \((Q_{0t}^{jk}|_{K^{c}_{\delta }})_{t\in {\mathbb {R}}^{l}}\) is weakly relatively compact for every \(\delta >0\). The goal is now to show that
So, let \(T_{n}\rightarrow \infty \), \(T_{n}\in {\mathbb {R}}\). Using a diagonalization procedure we can find a subsequence \((T'_{n})\) of \((T_{n})\) and a measure \(R\) on \({\mathbb {R}}^{2}{\setminus }\{0\}\) such that \(R_{T'_{n}}|_{K_{\delta }^{c}}\Rightarrow R|_{K_{\delta }^{c}}\) as \(n\rightarrow \infty \) for every \(\delta >0\). Now, notice that \((\cos x-1)(\cos y -1)\ge 0\), with equality if and only if \(x\in 2\pi {\mathbb {Z}}\) or \(y\in 2\pi {\mathbb {Z}}\). Moreover, by Eq. (33) we have that
for every \(\delta >0\). Therefore, the measure R is concentrated on the set of lines \(\{(x,y):x\in 2\pi {\mathbb {Z}}\,\, \text{ or } \,\,y\in 2\pi {\mathbb {Z}} \}\). Since the random field is stationary, the projections of \(Q_{0t}\) (and of \(Q_{0t}^{jk}\)) onto the first and second axis (excluding zero) are equal to \(Q_{0}\) (and to \(Q_{0}^{j}\) for the first axis and to \(Q_{0}^{k}\) for the second). The same holds for \(R_{T}\) and for R.
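In symbols, this projection consistency (a restatement of the preceding sentence) reads, for every Borel set \(B\subseteq {\mathbb {R}}{\setminus }\{0\}\),

```latex
Q_{0t}^{jk}\big(B \times \mathbb{R}\big) = Q_0^{j}(B),
\qquad
Q_{0t}^{jk}\big(\mathbb{R} \times B\big) = Q_0^{k}(B),
\qquad t \in \mathbb{R}^l,
```

and the same identities hold with \(Q_{0t}^{jk}\) replaced by \(R_{T}\) or \(R\), with \(Q_{0}^{j}\) and \(Q_{0}^{k}\) on the right-hand side unchanged, since each \(R_{T}\) is an average of the measures \(Q_{0t}^{jk}\).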
Suppose for now that \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\), which implies that for \(\alpha \in {\mathbb {Z}}{\setminus }\{0\}\) we have \(Q_{0}^{jk}(\{2\pi \alpha \}\times {\mathbb {R}})=Q_{0}^{jk}({\mathbb {R}}\times \{2\pi \alpha \})=0\). Then \(R\) must be concentrated on the axes of \({\mathbb {R}}^{2}\). Hence, for every \(\delta >0\) such that \(R(\{(x,y):x^{2}+y^{2}=\delta ^{2} \})=0\),
Equation (10) implies that the last quantity can be made arbitrarily small. Hence, (34) follows. From Eq. (34) and Lemma 4.7 we obtain that there exists a set \(D'\) of density one in \({\mathbb {R}}^{l}\) such that
If the set D of Eq. (30) and the set \(D'\) of Eq. (35) differ, this is not a problem: the intersection of two (or finitely many) density one sets is again a density one set.
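The fact that the intersection of two density one sets has density one can be verified directly: the complement densities are subadditive, so

```latex
\frac{\lambda^l\big((D\cap D')^{c} \cap (-T,T]^l\big)}{(2T)^l}
  \le \frac{\lambda^l\big(D^{c} \cap (-T,T]^l\big)}{(2T)^l}
    + \frac{\lambda^l\big((D')^{c} \cap (-T,T]^l\big)}{(2T)^l}
  \xrightarrow[T\to\infty]{} 0 ,
```

since both terms on the right-hand side vanish as \(T\rightarrow \infty \) by the density one property of \(D\) and \(D'\).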
Hence, following the proof of Corollary 2.7 (and using its weak mixing version, namely Corollary 4.6) we obtain that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing under the additional assumption that \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). However, this assumption can be eliminated by first using the fact that if \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is ergodic then \((M_{a}X_{t})_{t\in {\mathbb {R}}^{l}}\) is ergodic too, and then by following the arguments of Theorem 2.11. In particular, let \(Z=\{z=(z_{1},\ldots ,z_{d})\in {\mathbb {R}}^{d}: z_{j}=2\pi k/y_{j}~\forall j\in \{1,\ldots ,d\}, \text{ where } k\in {\mathbb {Z}} \text{ and } y=(y_{1},\ldots ,y_{d}) \text{ is } \text{ an } \text{ atom } \text{ of } Q_{0}\}\). The set \(Z\) is countable, hence there exists a nonzero \(a\in {\mathbb {R}}^{d}{\setminus } Z\). Consider the random field \((M_{a}X_{t})_{t\in {\mathbb {R}}^{l}}\) and let \(Q^{a}_{0}\) be the Lévy measure of \(M_{a}X_{0}\). Then \(Q^{a}_{0}\) has no atoms in the set \(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\}\); since \((M_{a}X_{t})_{t\in {\mathbb {R}}^{l}}\) is also ergodic, it is weakly mixing by the arguments of this proof. Therefore, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing and the proof is complete.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Passeggeri, R., Veraart, A.E.D. Mixing Properties of Multivariate Infinitely Divisible Random Fields. J Theor Probab 32, 1845–1879 (2019). https://doi.org/10.1007/s10959-018-0864-7
Keywords
- Multivariate random field
- Infinitely divisible
- Mixed moving average
- Lévy process
- Mixing
- Weak mixing
- Ergodicity