1 Introduction

In 1970, in his fundamental work [11], Maruyama provided pivotal results for infinitely divisible (ID) processes. Among them, he proved that under certain conditions, known afterwards as the Maruyama conditions, these processes are mixing (see Theorem 6 of [11]). After him, various authors contributed to this line of research; see, for example, Gross [7] and Kokoszka and Taqqu [9]. In 1996, Rosinski and Zak [13] extended Maruyama's results by proving that a stationary ID process \((X_{t})_{t\in {\mathbb {R}}}\) is mixing if and only if \(\lim \nolimits _{t\rightarrow \infty }{\mathbb {E}}\left[ e^{i(X_{t}-X_{0})} \right] ={\mathbb {E}}\left[ e^{iX_{0}} \right] {\mathbb {E}}\left[ e^{-iX_{0}} \right] \), provided the Lévy measure of \(X_{0}\) has no atoms in \(2\pi {\mathbb {Z}}\). More recently, Fuchs and Stelzer [6] extended some of the main results of Rosinski and Zak to the multivariate case. Parallel to this line of research, new developments have been obtained for the ergodic and weak mixing properties of infinitely divisible random fields. In particular, see Roy [15] and [16] for Poissonian ID random fields, and Roy [17], Roy and Samorodnitsky [18] and [22] for \(\alpha \)-stable univariate random fields.

In the present work we fill an important gap by extending the results of Maruyama [11], Rosinski and Zak [13, 14], and Fuchs and Stelzer [6] to the multivariate random field case. First, this is crucial for applications, since many of them consider a multidimensional domain composed of both spatial and temporal components (and not just temporal ones). This is typically the case for many physical systems, such as turbulence (e.g. [1, 2]), and in econometrics (e.g. models based on panel data). Second, with the present work, we also close the gap between the two lines of research presented above by focusing on the more general case of multivariate stationary ID random fields.

On the modelling/application level, we prove that multivariate mixed moving average fields are mixing. This is a relevant result since Lévy driven moving average fields are extensively used in many applications throughout different disciplines, like brain imaging [8], tumour growth [3] and turbulence [2, 3], among many others.

Moreover, we discuss conditions which ensure that a multivariate random field is weakly mixing. In particular, we show that the proofs of the results obtained for the mixing case can be slightly modified to obtain similar results for the weak mixing case. Finally, we prove that a multivariate stationary ID random field is weakly mixing if and only if it is ergodic.

The present work is structured as follows. In Sect. 2, we discuss some preliminaries on mixing and derive the mixing conditions for multivariate ID stationary random fields. In addition, we study some extensions and other related results. In Sect. 3, we prove that (sums of independent) mixed moving averages (MMA) are mixing, including MMA with an extended subordinated basis. In Sect. 4 we obtain weak mixing versions of the results obtained in Sect. 2 and we prove the equivalence between ergodicity and weak mixing for stationary ID random fields.

In order to simplify the exposition, we decided to put long proofs in the appendices.

2 Preliminaries and Results on Mixing Conditions

In this section we analyse mixing conditions for stationary infinitely divisible random fields. We work with the probability space \((\Omega ,{\mathscr {F}},{\mathbb {P}})\) and the measurable space \(({\mathbb {R}}^{d},{\mathscr {B}}({\mathbb {R}}^{d}))\), where \({\mathscr {B}}({\mathbb {R}}^{d})\) is the Borel \(\sigma \)-algebra on the vector space \({\mathbb {R}}^{d}\). We write \({\mathscr {L}}(X_{t})\) for the distribution, or law, of the random variable \(X_{t}\). Now, let \((\theta _{t})_{t\in {\mathbb {R}}^{l}}\) be a measure preserving \({\mathbb {R}}^{l}\) action on \((\Omega ,{\mathscr {F}},{\mathbb {P}})\). Consider the random field \(X_{t}(\omega )=X_{0}\circ \theta _{t}(\omega )\), \(t\in {\mathbb {R}}^{l}\). The random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) defined in this way is stationary and, conversely, any stationary measurable random field can be expressed in this form. Further, with a slight abuse of notation, we have \(\theta _{v}(B):=\{\theta _{v}(\omega )\in \Omega :\omega \in B\}=\{\omega '\in \Omega :X_{0}(\omega ')=X_{v}(\omega ) ~\text{ for } ~\omega \in B\}\). We denote by \(\Vert \cdot \Vert _{\infty }\) the supremum norm.

Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if (see Wang, Roy and Stoev [22] equation (4.4)):

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {P}}(A\cap \theta _{t_{n}}(B))={\mathbb {P}}(A){\mathbb {P}}(B), \end{aligned}$$
(1)

for all \(A,B\in \sigma _{X}\) and all \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), where \(\sigma _{X}:=\sigma (\{X_{t}:t\in {\mathbb {R}}^{l}\})\) is the \(\sigma \)-algebra generated by \((X_{t})_{t\in {\mathbb {R}}^{l}}\) and \({\mathscr {T}}:=\left\{ (t_{n})_{n\in {\mathbb {N}}}\subset {\mathbb {R}}^{l}:\lim \nolimits _{n\rightarrow \infty }\Vert t_{n}\Vert _{\infty }=\infty \right\} \).
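To make condition (1) concrete, here is a minimal numerical sketch (illustrative only, not part of the paper) with the classical doubling map in place of the shift action \(\theta _{t}\): it preserves Lebesgue measure on \([0,1)\) and is mixing, so the empirical joint probabilities approach the product \({\mathbb {P}}(A){\mathbb {P}}(B)\).

```python
import numpy as np

rng = np.random.default_rng(2)

# The doubling map T(x) = 2x mod 1 on [0,1) preserves Lebesgue measure and is
# mixing.  We estimate P(A ∩ T^{-n} B) for A = B = [0, 1/2) by Monte Carlo and
# compare it with P(A) P(B) = 1/4.
x = rng.random(500_000)
in_A = x < 0.5

def iterate_T(x, n):
    for _ in range(n):            # apply the doubling map n times
        x = (2.0 * x) % 1.0
    return x

joint = {n: np.mean(in_A & (iterate_T(x, n) < 0.5)) for n in (1, 5, 20)}
assert all(abs(p - 0.25) < 0.01 for p in joint.values())
```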

The following characterisation is based on the characteristic function of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) (see [22], equation (A.6)):

$$\begin{aligned}&\lim \limits _{n\rightarrow \infty }{\mathbb {E}}\left[ \exp \left( i\sum _{j=1}^{r}\beta _{j}X_{s_{j}}\right) \exp \left( i\sum _{k=1}^{q}\gamma _{k}X_{p_{k}+t_{n}}\right) \right] \nonumber \\&\quad ={\mathbb {E}}\left[ \exp \left( i\sum _{j=1}^{r}\beta _{j}X_{s_{j}}\right) \right] {\mathbb {E}}\left[ \exp \left( i\sum _{k=1}^{q}\gamma _{k}X_{p_{k}}\right) \right] , \end{aligned}$$
(2)

for all \(r,q\in {\mathbb {N}}\), \(\beta _{j},\gamma _{k}\in {\mathbb {R}}\), \(s_{j},p_{k}\in {\mathbb {R}}^{l}\) and \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\). Further, for a multivariate (or \({\mathbb {R}}^{d}\)-valued) random field, we have the following definition based on the characteristic function of \((X_{t})_{t\in {\mathbb {R}}^{l}}\).

Definition 2.1

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is said to be mixing if for all \(\lambda =(s_{1},\ldots ,s_{m})',\mu =(p_{1},\ldots ,p_{m})'\in {\mathbb {R}}^{ml}\) and \(\theta _{1},\theta _{2}\in {\mathbb {R}}^{md}\)

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {E}}\left[ \exp \left( i\langle \theta _{1},X_{\lambda }\rangle +i\langle \theta _{2},X_{{\tilde{\mu }}_{n}}\rangle \right) \right] ={\mathbb {E}}\left[ \exp \left( i\langle \theta _{1},X_{\lambda }\rangle \right) \right] {\mathbb {E}}\left[ \exp \left( i\langle \theta _{2},X_{\mu }\rangle \right) \right] \nonumber \\ \end{aligned}$$
(3)

where \(X_{\lambda }:=(X_{s_{1}}',\ldots ,X_{s_{m}}')'\in {\mathbb {R}}^{md}\) and \({\tilde{\mu }}_{n}=(p_{1}+t_{n},\ldots ,p_{m}+t_{n})'\), where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}\).

Further, we recall the definition of an infinitely divisible random field.

Definition 2.2

An \({\mathbb {R}}^{d}\)-valued random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) (or its distribution) is said to be infinitely divisible if for every \((X_{t_{1}},\ldots ,X_{t_{k}})\), where \(k\in {\mathbb {N}}\), and for every \(n\in {\mathbb {N}}\) there exist i.i.d. random vectors \(Y^{(n,k)}_{i},i=1,\ldots ,n\), in \({\mathbb {R}}^{d\times k}\) (possibly on a different probability space) such that \((X_{t_{1}},\ldots ,X_{t_{k}}){\mathop {=}\limits ^{d}}Y^{(n,k)}_{1}+\cdots +Y^{(n,k)}_{n}\).

It is straightforward to see that the above definition is equivalent to the following one. An \({\mathbb {R}}^{d}\)-valued random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) (or its distribution) is said to be infinitely divisible if for every finite dimensional distribution of \((X_{t})_{t\in {\mathbb {R}}^{l}}\), namely \(F_{t_{1},\ldots ,t_{k}}(x_{1},\ldots ,x_{k}):={\mathbb {P}}\left( X_{t_{1}}<x_{1},\ldots ,X_{t_{k}}<x_{k}\right) \) where \(k\in {\mathbb {N}}\), and for every \(n\in {\mathbb {N}}\), there exists a probability measure \(\mu _{n,k}\) on \({\mathbb {R}}^{d\times k}\) such that the distribution determined by \(F_{t_{1},\ldots ,t_{k}}\) equals the n-fold convolution \(\mu _{n,k}^{*n}=\mu _{n,k}*\cdots *\mu _{n,k}\) (\(n\) times).
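As a simple one-dimensional sanity check of Definition 2.2 (illustrative, not from the paper), the Poisson distribution is ID and its n-th convolution root is again Poisson; this can be verified at the level of characteristic functions, since convolution corresponds to taking products.

```python
import numpy as np

def poisson_cf(theta, lam):
    # characteristic function of a Poisson(lam) random variable
    return np.exp(lam * (np.exp(1j * theta) - 1.0))

lam, n = 3.0, 7
theta = np.linspace(-5.0, 5.0, 101)

# Infinite divisibility: Poisson(lam) is the law of a sum of n i.i.d.
# Poisson(lam/n) variables, so its characteristic function is an exact n-th power.
assert np.allclose(poisson_cf(theta, lam), poisson_cf(theta, lam / n) ** n)
```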

We are now ready to state our first result.

Theorem 2.3

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i(X_{t_{n}}^{(j)}-X_{0}^{(k)})}\right] ={\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] \cdot {\mathbb {E}}\left[ e^{-iX_{0}^{(k)}}\right] , \end{aligned}$$
(4)

for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
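Condition (4) can be checked in closed form in the scalar Gaussian case (a sketch under illustrative assumptions: variance \(\sigma ^{2}=1.5\) and correlation \(\rho (t)=e^{-t}\), neither taken from the paper): since \(X_{t}-X_{0}\) is centred Gaussian with variance \(2\sigma ^{2}(1-\rho (t))\), the left-hand side converges to \(e^{-\sigma ^{2}}\), which equals the right-hand side.

```python
import numpy as np

sigma2 = 1.5                       # illustrative variance of X_0
rho = lambda t: np.exp(-t)         # illustrative correlation, rho(t) -> 0

# For a centred stationary Gaussian process, X_t - X_0 ~ N(0, 2*sigma2*(1 - rho(t))),
# hence E[exp(i(X_t - X_0))] = exp(-sigma2 * (1 - rho(t))).
lhs = lambda t: np.exp(-sigma2 * (1.0 - rho(t)))

# Right-hand side of (4): E[e^{iX_0}] * E[e^{-iX_0}] = exp(-sigma2/2)^2.
rhs = np.exp(-sigma2)

# The gap closes as t grows, i.e. this field satisfies the mixing criterion.
gaps = [abs(lhs(t) - rhs) for t in (1.0, 5.0, 50.0)]
assert gaps[0] > gaps[1] > gaps[2] and gaps[2] < 1e-12
```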

The above theorem relies on the following result, which is the multivariate random field extension of the Maruyama conditions (see Theorem 6 of [11]).

Theorem 2.4

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if

(MM1):

the covariance matrix function \(\Sigma (t_{n})\) of the Gaussian part of \((X_{t})_{t\in {\mathbb {R}}^{l}}\) tends to 0 as \(n\rightarrow \infty \),

\((MM2')\) :

\(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\),

where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}\).

Notice that the above conditions are fewer than the original Maruyama conditions. This is because we use the following lemma, which is a multivariate random field extension of Lemma 1 of [10] and Lemma 2.2 of [6].

Lemma 2.5

Assume that \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) holds for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\) and \((t_{n})_{n\in {\mathbb {N}}}\) is a sequence in \({\mathbb {R}}^{l}\). Then one has

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\int _{0<\Vert x\Vert ^{2}+\Vert y\Vert ^{2}\le 1}\Vert x\Vert \cdot \Vert y\Vert Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y)=0. \end{aligned}$$

2.1 Related Results and Extensions

In this section, we present different results which follow from, are related to or extend the theorems presented in the previous section.

The first result is a corollary which follows immediately from Theorem 2.3, and states that a multivariate random field is mixing if and only if its components are pairwise mixing.

Corollary 2.6

An \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field \(X=(X_{t})_{t\in {\mathbb {R}}^{l}}\) with \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\) is mixing if and only if the bivariate random fields \((X^{(j)},X^{(k)})\), \(j,k\in \{1,\ldots ,d\}\), \(j<k\), are all mixing.

Proof

It follows immediately from Theorem 2.3. \(\square \)

The following corollary is a generalisation of Corollary 2.5 of [6].

Corollary 2.7

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field. Then with the previous notation, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Big \{\Vert \Sigma (t_{n})\Vert +\int _{{\mathbb {R}}^{2d}}\min (1,\Vert x\Vert \cdot \Vert y\Vert )Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y)\Big \}=0, \end{aligned}$$
(5)

for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).

Proof

We follow the argument of [6]. To this end, note that if (5) holds, then conditions (MM1) and \((MM2')\) hold and, thus, Theorem 2.4 implies that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.

For the other direction, assume that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing. Then, by Theorem 2.4, condition (MM1) holds. Furthermore, for every \(\delta >0\) with \(Q_{jk}(\partial K_{\delta })=0\) and any \(j,k=1,\ldots ,d\) (cf. (21)),

$$\begin{aligned} Q_{0t_{n}}^{(jk)}\Big |_{K_{\delta }^{c}}\rightharpoonup Q_{jk}\Big |_{K_{\delta }^{c}} \quad \text{ as } n\rightarrow \infty , \end{aligned}$$
(6)

for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), where the symbol “\(\rightharpoonup \)” means convergence in the weak topology. In addition, we know that the Lévy measures \(Q_{jk}\) are concentrated on the axes of \({\mathbb {R}}^{2}\). Now consider a \(\delta >0\) such that conditions (22) and (6) hold. Then we have

$$\begin{aligned}&\lim \sup \limits _{n\rightarrow \infty }\int _{{\mathbb {R}}^{2}}\min (1,|xy|)Q_{0t_{n}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)\le \epsilon \\ {}&+\lim \sup \limits _{n\rightarrow \infty }\int _{B^{c}_{\delta }}\min (1,|xy|)Q_{0t_{n}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)=\epsilon . \end{aligned}$$

Letting \(\epsilon \searrow 0\) we obtain that \(\lim \sup \nolimits _{n\rightarrow \infty }\int _{{\mathbb {R}}^{2}}\min (1,|xy|)Q_{0t_{n}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)=0\) for any \(j,k=1,\ldots ,d\). Finally,

$$\begin{aligned}&\int _{{\mathbb {R}}^{2d}}\min \left( 1,\sum _{k=1}^{d}|x_{k}|\cdot \sum _{j=1}^{d}|y_{j}|\right) Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y)\\&\quad \le \sum _{j,k=1}^{d}\int _{{\mathbb {R}}^{2d}}\min (1,|x_{k}y_{j}|)Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y) \\&\quad =\sum _{j,k=1}^{d}\int _{{\mathbb {R}}^{2}}\min (1,|xy|)Q_{0t_{n}}^{(jk)}(\mathrm{d}x,\mathrm{d}y)\rightarrow 0, \quad \text{ as } \quad n\rightarrow \infty . \end{aligned}$$

Therefore, this implies that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\int _{{\mathbb {R}}^{2d}}\min (1,\Vert x\Vert \cdot \Vert y\Vert )Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y)=0, \end{aligned}$$

for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), hence we obtain that condition (5) is satisfied. \(\square \)

The next two results are reformulations of Theorem 2.3. However, the first requires a short preliminary introduction, which will be useful for Sect. 4 as well. Recall that the codifference \(\tau (X_{1},X_{2})\) of an ID real bivariate random vector \((X_{1},X_{2})\) is defined as follows:

$$\begin{aligned} \tau (X_{1},X_{2}):=\log {\mathbb {E}}\Big [e^{i(X_{1}-X_{2})}\Big ] -\log {\mathbb {E}}\Big [e^{iX_{1}}\Big ]-\log {\mathbb {E}}\Big [e^{-iX_{2}}\Big ], \end{aligned}$$

where \(\log \) is the distinguished logarithm as defined in [19], p. 33. Following [6], we recall that the autocodifference function of an \({\mathbb {R}}^{d}\)-valued strictly stationary ID process \((X_{t})_{t\in {\mathbb {R}}}\) is defined as \(\tau (t)=\left( \tau ^{(jk)}(t)\right) _{j,k=1,\ldots ,d}\) with \(\tau ^{(jk)}(t):=\tau \left( X_{0}^{(k)},X_{t}^{(j)}\right) \). For an \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\) the autocodifference field \(\tau (t)\) is defined analogously as \(\tau (t)=\left( \tau ^{(jk)}(t)\right) _{j,k=1,\ldots ,d}\) with \(\tau ^{(jk)}(t):=\tau \left( X_{0}^{(k)},X_{t}^{(j)}\right) \), where \(t\in {\mathbb {R}}^{l}\).
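In the centred Gaussian case the codifference reduces to the covariance, which gives a quick consistency check of the definition above; the following sketch (illustrative, not part of the paper) evaluates the three distinguished logarithms in closed form for a centred bivariate Gaussian vector.

```python
import numpy as np

def gaussian_codifference(s1, s2, c):
    """tau(X1, X2) for a centred bivariate Gaussian with Var(X1)=s1,
    Var(X2)=s2 and Cov(X1, X2)=c, evaluated from the definition."""
    log_cf_diff = -0.5 * (s1 + s2 - 2.0 * c)   # log E[e^{i(X1-X2)}] = -Var(X1-X2)/2
    log_cf_x1 = -0.5 * s1                      # log E[e^{i X1}]
    log_cf_x2 = -0.5 * s2                      # log E[e^{-i X2}]
    return log_cf_diff - log_cf_x1 - log_cf_x2

# In the Gaussian case the codifference collapses to the covariance.
assert np.isclose(gaussian_codifference(2.0, 3.0, 0.7), 0.7)
assert np.isclose(gaussian_codifference(1.0, 1.0, 0.0), 0.0)
```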

Corollary 2.8

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if \(\tau (t_{n})\rightarrow 0\) as \(n\rightarrow \infty \) for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).

Proof

It follows immediately from Theorem 2.3. \(\square \)

Corollary 2.9

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if

$$\begin{aligned} \lim \limits _{\Vert t\Vert \rightarrow \infty }{\mathbb {E}} \left[ e^{i(X_{t}^{(j)}-X_{0}^{(k)})}\right] ={\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] \cdot {\mathbb {E}}\left[ e^{-iX_{0}^{(k)}}\right] , \end{aligned}$$
(7)

for any \(j,k=1,\ldots ,d\), where \(\Vert \cdot \Vert \) is any norm on \({\mathbb {R}}^{l}\) (e.g. the sup or the Euclidean norm) and \(t\in {\mathbb {R}}^{l}\).

Proof

“\(\Rightarrow \)”: Assume that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing. Then by Theorem 2.3 we know that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i(X_{t_{n}}^{(j)}-X_{0}^{(k)})}\right] ={\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] \cdot {\mathbb {E}}\left[ e^{-iX_{0}^{(k)}}\right] \end{aligned}$$
(8)

holds for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\). Now consider the following simple result.

Let \(M_{1}=(A_{1},d_{1})\) and \(M_{2}=(A_{2},d_{2})\) be two metric spaces, let \(S\subseteq A_{1}\) be an open set of \(M_{1}\), and let f be a mapping defined on S. Then \(\lim \nolimits _{x\rightarrow c}f(x)=l\) if and only if for any sequence \((x_{n})_{n\in {\mathbb {N}}}\) of points in S such that \(x_{n}\ne c\) for all \(n\in {\mathbb {N}}\) and \(\lim \nolimits _{n\rightarrow \infty }x_{n}=c\), we have \(\lim \nolimits _{n\rightarrow \infty }f(x_{n})=l\).

From this result and from the fact that we are considering any sequence such that \(\lim \nolimits _{n\rightarrow \infty }\Vert t_{n}\Vert _{\infty }=\infty \), we obtain Eq. (7).

“\(\Leftarrow \)”: Assume that (7) holds. Then (8) holds by the result stated above. Then by Theorem 2.3 we obtain that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.

Now consider the set \({\mathscr {E}}:=\left\{ (t_{n})_{n\in {\mathbb {N}}}\subset {\mathbb {R}}^{l}:\lim \nolimits _{n\rightarrow \infty }\Vert t_{n}\Vert =\infty , \text{ where } \Vert \cdot \Vert \text{ is } \text{ any } \text{ norm } \text{ on } {\mathbb {R}}^{l} \right\} \). Notice that any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\) belongs to \({\mathscr {E}}\) and vice versa, because on the finite dimensional vector space \({\mathbb {R}}^{l}\) any norm \(\Vert \cdot \Vert _{a}\) is equivalent to any other norm \(\Vert \cdot \Vert _{b}\). Hence, we obtain that Eq. (8) holds for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {E}}\), and by applying the argument above we obtain our result. \(\square \)

Remark 2.10

The extension carried out in the above corollary can be applied to all our results that hold for “any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\)”.

The next result is a multivariate and random field extension of Theorem 2 of Rosinski and Zak [13] and it will help us to generalise Theorem 2.3.

Theorem 2.11

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field such that \(Q_{0}\), the Lévy measure of \(X_{0}\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})\ne 0\). In other words, \(Q_{0}\) has atoms in this set. Let

$$\begin{aligned}&Z=\{z=(z_{1},\ldots ,z_{d})\in {\mathbb {R}}^{d}: z_{j}\\&\quad =2\pi k/y_{j}~\forall j\in \{1,\ldots ,d\}, \text{ where } k\in {\mathbb {Z}} \text{ and } y=(y_{1},\ldots ,y_{d}) \text{ is } \text{ an } \text{ atom } \text{ of } Q_{0}\}. \end{aligned}$$

Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if for some \(a=(a_{1},\ldots ,a_{d})\in {\mathbb {R}}^{d}{\setminus } Z\), with \(a_{p}\ne 0\) for \(p=1,\ldots ,d\),

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i(a_{j}X_{t_{n}}^{(j)}-a_{k}X_{0}^{(k)})}\right] ={\mathbb {E}}\left[ e^{ia_{j}X_{0}^{(j)}}\right] \cdot {\mathbb {E}}\left[ e^{-ia_{k}X_{0}^{(k)}}\right] , \end{aligned}$$

for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).

Proof

Consider an element \(a\in {\mathbb {R}}^{d}{\setminus } Z\) with \(a_{p}\ne 0\) for \(p=1,\ldots ,d\). We know that the set of atoms of any \(\sigma \)-finite measure is countable (the proof is straightforward) and that any Lévy measure is \(\sigma \)-finite. Hence, the set of atoms of \(Q_{0}\) is countable, which implies that Z is countable and therefore that such an a exists. Now, let

$$\begin{aligned}M_{a}:= \begin{pmatrix} a_{1} &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad a_{2} &{}\quad \cdots &{}\quad 0\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0&{}\quad \cdots &{}\quad a_{d} \end{pmatrix}. \end{aligned}$$

Notice that \(M_{a}\) is an invertible \(d\times d\) matrix and \(X_{t}\) is a d-dimensional column vector. We have that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if \((M_{a}X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing. This is because, by looking at Definition 2.1, it is enough to show that for every \(m\in {\mathbb {N}}\), \(\lambda =(s_{1},\ldots ,s_{m})'\in {\mathbb {R}}^{ml}\) and \(\theta =(\theta _{1},\ldots ,\theta _{m})'\in {\mathbb {R}}^{md}\) we have \(\langle \theta ,M_{a}\star X_{\lambda }\rangle =\langle {\tilde{\theta }},X_{\lambda }\rangle \), where \(M_{a}\star X_{\lambda }:=(M_{a}X_{s_{1}},\ldots ,M_{a}X_{s_{m}})'\) and \({\tilde{\theta }}\in {\mathbb {R}}^{md}\). Notice that for \(m=1\) we have \(M_{a}\star X_{t}:=M_{a} X_{t}\), \(t\in {\mathbb {R}}^{l}\). Indeed, we have \(\langle \theta ,M_{a}\star X_{\lambda }\rangle =\sum _{j=1}^{d}\sum _{k=1}^{m}a_{j}X_{s_{k}}^{(j)}\theta _{jk}=\langle M_{a}\star \theta , X_{\lambda }\rangle =\langle {\tilde{\theta }},X_{\lambda }\rangle \), where \( M_{a}\star \theta :=(M_{a}\theta _{1},\ldots ,M_{a}\theta _{m})'\in {\mathbb {R}}^{md}\).

Now, the Lévy measure \(Q^{a}_{0}\) of \(M_{a}X_{0}\) is given by \(Q^{a}_{0}(\cdot )=Q_{0}(M_{a}^{-1}(\cdot ))\) (see Proposition 11.10 of [19]). Since \(a\notin Z\), \(Q^{a}_{0}\) has no atoms in the set \(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\}\). This is because

$$\begin{aligned}&Q^{a}_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\}) \\&\quad =Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}/a_{j}\}), \end{aligned}$$

since \(a\notin Z\), there exists \(j\in \{1,\ldots ,d\}\) such that \(a_{j}\ne 2\pi k/y_{j}\) for any \(k\in {\mathbb {Z}}\) and any atom y of \(Q_{0}\); hence

$$\begin{aligned}&=Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\}\\&\text{ such } \text{ that } x_{j}\ne y_{j}, \text{ where } \text{ y } \text{ is } \text{ any } \text{ atom } \text{ of } Q_{0}\})=0. \end{aligned}$$

The equality to zero comes from the fact that the set considered has no intersection with the set of atoms of the measure \(Q_{0}\). Finally, by using Theorem 2.3 the proof is complete. \(\square \)

From this result we have the following generalisation of Theorem 2.3.

Corollary 2.12

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if \({\mathscr {L}}(X_{t_{n}}-X_{0}){\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}{\mathscr {L}}(X_{0}-X_{0}')\) for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), where \(X_{0}'\) is an independent copy of \(X_{0}\).

Proof

This result is an immediate consequence of Theorem 2.3, Corollary 2.6 and Theorem 2.11. \(\square \)

We end this section with a simple general result which will also be useful for the next section.

Proposition 2.13

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be a linear combination of independent, stationary, ID and mixing random fields. In other words, let \(r\in {\mathbb {N}}\) and let \((X_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\sum _{k=1}^{r}Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), where \(( Y^{k}_{t})_{t\in {\mathbb {R}}^{l}}\), \(k=1,\ldots ,r\), are independent \({\mathbb {R}}^{q}\)-valued stationary, ID and mixing random fields. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is stationary, ID and mixing.

3 Mixed Moving Average Field

In this section we focus on a specific random field: the mixed moving average (MMA) random field. Before introducing it we need to recall the definition of an \({\mathbb {R}}^{d}\)-valued Lévy basis and the related integration theory. Lévy bases are also called infinitely divisible independently scattered random measures in the literature. In the following, let S be a non-empty topological space, \({\mathscr {B}}(S)\) the Borel \(\sigma \)-field on S and \(\pi \) some probability measure on \((S,{\mathscr {B}}(S))\). We denote by \({\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) the collection of all Borel sets in \(S\times {\mathbb {R}}^{l}\) with finite \(\pi \otimes \lambda ^{l}\)-measure, where \(\lambda ^{l}\) denotes the l-dimensional Lebesgue measure.

Definition 3.1

A d-dimensional Lévy basis on \(S\times {\mathbb {R}}^{l}\) is an \({\mathbb {R}}^{d}\)-valued random measure \(\Lambda =\{\Lambda (B):B\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\}\) satisfying:

  1. (i)

    the distribution of \(\Lambda (B)\) is infinitely divisible for all \(B\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\),

  2. (ii)

    for an arbitrary \(n\in {\mathbb {N}}\) and pairwise disjoint sets \(B_{1},\ldots ,B_{n}\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) the random variables \(\Lambda (B_{1}),\ldots ,\Lambda (B_{n})\) are independent,

  3. (iii)

    for any pairwise disjoint sets \(B_{1},B_{2},\ldots \in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) with \(\bigcup _{n\in {\mathbb {N}}}B_{n}\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\) we have, almost surely, \(\Lambda (\bigcup _{n\in {\mathbb {N}}}B_{n})=\sum _{n\in {\mathbb {N}}}\Lambda (B_{n})\).

Throughout this section, we shall restrict ourselves to time-homogeneous and factorisable Lévy bases, i.e. Lévy bases with characteristic function given by

$$\begin{aligned} {\mathbb {E}}\bigg [e^{i\langle \theta ,\Lambda (B)\rangle }\bigg ]=e^{\psi (\theta )\Pi (B)}, \end{aligned}$$
(9)

for all \(\theta \in {\mathbb {R}}^{d}\) and \(B\in {\mathscr {B}}_{0}(S\times {\mathbb {R}}^{l})\), where \(\Pi =\pi \otimes \lambda ^{l}\) is the product measure of the probability measure \(\pi \) on S and the Lebesgue measure \(\lambda ^{l}\) on \({\mathbb {R}}^{l}\) and

$$\begin{aligned} \psi (\theta )=i\langle \gamma ,\theta \rangle -\frac{1}{2}\langle \theta ,\Sigma \theta \rangle +\int _{{\mathbb {R}}^{d}}\left( e^{i\langle \theta ,x\rangle }-1-i\langle \theta ,x\rangle \mathbf{1 }_{[0,1]}(\Vert x\Vert ) \right) Q(\mathrm{d}x) \end{aligned}$$

is the cumulant transform of an ID distribution with characteristic triplet \((\gamma ,\Sigma ,Q)\). We note that the quadruple \((\gamma ,\Sigma ,Q,\pi )\) completely determines the distribution of the Lévy basis and is therefore called the generating quadruple. Now, we provide an extension of Theorem 3.2 of [6], which does not need a proof since it is a combination of Theorem 3.2 of [6] and Theorem 2.7 of [12]. It concerns the existence of integrals with respect to a Lévy basis.
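Equation (9) can be illustrated numerically in the simplest scalar case (an assumption-laden sketch, not part of the paper): take \(d=l=1\), no Gaussian part, standard normal jumps arriving with intensity \(\lambda _{c}\), and the drift chosen so that \(\psi (\theta )=\lambda _{c}({\mathbb {E}}[e^{i\theta J}]-1)\); then \(\Lambda (B)\) is a compound Poisson sum and its empirical characteristic function matches \(e^{\psi (\theta )\Pi (B)}\).

```python
import numpy as np

rng = np.random.default_rng(0)

lam_c, Pi_B, theta = 2.0, 1.5, 0.7   # illustrative intensity, Pi(B) and argument

# For a compound Poisson basis with N(0,1) jumps J, the cumulant transform is
# psi(theta) = lam_c * (E[e^{i theta J}] - 1), with E[e^{i theta J}] = exp(-theta^2/2).
psi = lam_c * (np.exp(-theta**2 / 2.0) - 1.0)
target = np.exp(psi * Pi_B)          # right-hand side of (9)

# Monte Carlo draw of Lambda(B): a Poisson(lam_c * Pi(B)) number of N(0,1) jumps.
n_mc = 200_000
counts = rng.poisson(lam_c * Pi_B, size=n_mc)
samples = np.array([rng.standard_normal(k).sum() for k in counts])
empirical = np.mean(np.exp(1j * theta * samples))

assert abs(empirical - target) < 0.02
```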

Remark 3.2

In this section we consider \({\mathbb {R}}^{q}\)-valued random fields, since d is used for the \({\mathbb {R}}^{d}\)-valued Lévy basis, and we denote by \(M_{q\times d}({\mathbb {R}})\) the collection of \(q\times d\) matrices over the field \({\mathbb {R}}\).

Theorem 3.3

Let \(\Lambda \) be an \({\mathbb {R}}^{d}\)-valued Lévy basis with characteristic function of the form (9) and let \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) be a measurable function. Then f is \(\Lambda \)-integrable as a limit in probability in the sense of Rajput and Rosinski [12], if and only if

$$\begin{aligned}&\int _{S}\int _{{\mathbb {R}}^{l}}\Big \Vert f(A,s) \gamma +\int _{{\mathbb {R}}^{d}}f(A,s)x( \mathbf{1 }_{[0,1]}(\Vert f(A,s)x\Vert )\\&\qquad - \mathbf{1 }_{[0,1]}(\Vert x\Vert ))Q(\mathrm{d}x)\Big \Vert \mathrm{d}s\pi (\mathrm{d}A)<\infty , \\&\quad \int _{S}\int _{{\mathbb {R}}^{l}}\Vert f(A,s)\Sigma f(A,s)'\Vert \mathrm{d}s\pi (\mathrm{d}A)<\infty ,\quad \hbox {and} \\&\quad \int _{S}\int _{{\mathbb {R}}^{l}}\int _{{\mathbb {R}}^{d}}\min (1,\Vert f(A,s)x\Vert ^{2})Q(\mathrm{d}x)\mathrm{d}s\pi (\mathrm{d}A)<\infty . \end{aligned}$$

If f is \(\Lambda \)-integrable, the distribution of \(\int _{S}\int _{{\mathbb {R}}^{l}}f(A,s)\Lambda (\mathrm{d}A,\mathrm{d}s)\) is infinitely divisible with characteristic triplet \((\gamma _\mathrm{int},\Sigma _\mathrm{int},v_\mathrm{int})\) given by

$$\begin{aligned} \gamma _\mathrm{int}= & {} \int _{S}\int _{{\mathbb {R}}^{l}}f(A,s)\gamma +\int _{{\mathbb {R}}^{d}}f(A,s)x( \mathbf{1 }_{[0,1]}(\Vert f(A,s)x\Vert )\\&- \mathbf{1 }_{[0,1]}(\Vert x\Vert ))Q(\mathrm{d}x) \mathrm{d}s\pi (\mathrm{d}A), \\ \Sigma _\mathrm{int}= & {} \int _{S}\int _{{\mathbb {R}}^{l}}f(A,s)\Sigma f(A,s)'\mathrm{d}s\pi (\mathrm{d}A),\quad \hbox {and} \\ v_\mathrm{int}(B)= & {} \int _{S}\int _{{\mathbb {R}}^{l}}\int _{{\mathbb {R}}^{d}} \mathbf{1 }_{B}(f(A,s)x)Q(\mathrm{d}x)\mathrm{d}s\pi (\mathrm{d}A) \end{aligned}$$

for all Borel sets \(B\subseteq {\mathbb {R}}^{q}{\setminus }\{0\}.\)

Proof

This theorem is a specific representation of Theorem 3.2 of [6] and Theorem 2.7 of [12]. \(\square \)

Let us now introduce the main object of interest of this section: the mixed moving average random field.

Definition 3.4

(Mixed Moving Average Random Field) Let \(\Lambda \) be an \({\mathbb {R}}^{d}\)-valued Lévy basis on \(S\times {\mathbb {R}}^{l}\) and let \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) be a measurable function. If the random field

$$\begin{aligned} X_{t}:= \int _{S}\int _{{\mathbb {R}}^{l}}f(A,t-s)\Lambda (\mathrm{d}A,\mathrm{d}s) \end{aligned}$$

exists in the sense of Theorem 3.3 for all \(t\in {\mathbb {R}}^{l}\), it is called a q-dimensional mixed moving average random field (MMA random field for short). The function f is said to be its kernel function.

MMA random fields have been discussed in Surgailis et al. [20] and Veraart [21]. Note that an MMA random field is an ID and strictly stationary random field.
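A discretised analogue of Definition 3.4 (illustrative assumptions throughout: \(l=1\), a trivial space S, a truncated exponential kernel, and i.i.d. Gaussian noise increments standing in for the Lévy basis) already displays the characteristic-function factorisation of criterion (4) at lags beyond the kernel support:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretised analogue of an MMA field with l = 1 and trivial S:
# X_t = sum_s f(t - s) L_s, where i.i.d. N(0,1) increments L_s stand in
# for the Lévy basis and f is a compactly supported (truncated) kernel.
kernel = np.exp(-np.arange(10.0))                 # f(0), ..., f(9); illustrative
noise = rng.standard_normal(50_000)
X = np.convolve(noise, kernel, mode="valid")      # one X_t per grid point

# Empirical version of criterion (4) at a lag beyond the kernel support:
lag = 100
lhs = np.mean(np.exp(1j * (X[lag:] - X[:-lag])))
rhs = np.mean(np.exp(1j * X)) * np.mean(np.exp(-1j * X))

assert abs(lhs - rhs) < 0.05
```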

The following lemma is a direct application of Corollary 2.7 to our MMA random field case.

Lemma 3.5

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\int _{S}\int _{{\mathbb {R}}^{l}}f(A,t-s)\Lambda (\mathrm{d}A,\mathrm{d}s))_{t\in {\mathbb {R}}^{l}}\) be an MMA random field where \(\Lambda \) is an \({\mathbb {R}}^{d}\)-valued Lévy basis on \(S\times {\mathbb {R}}^{l}\) with generating quadruple \((\gamma ,\Sigma , Q,\pi )\) and \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) is a measurable function. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing if and only if

$$\begin{aligned}&\lim \limits _{n\rightarrow \infty }\bigg \{\bigg \Vert \int _{S}\int _{{\mathbb {R}}^{l}}f(A,-s)\Sigma f(A,t_{n}-s)'\mathrm{d}s\pi (\mathrm{d}A)\bigg \Vert \\&\quad + \int _{S}\int _{{\mathbb {R}}^{l}}\int _{{\mathbb {R}}^{d}}\min (1,\Vert f(A,-s)x\Vert \cdot \Vert f(A,t_{n}-s)x\Vert )Q(\mathrm{d}x)\mathrm{d}s\pi (\mathrm{d}A)\bigg \}=0, \end{aligned}$$

for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\).
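To get a concrete feel for the criterion of Lemma 3.5, consider the scalar case q = d = 1 with \(\Sigma =1\), a trivial parameter space S, and the hypothetical kernel \(f(s)=e^{-\lambda s}{\mathbf {1}}_{s\ge 0}\) (our choice). For this kernel the Gaussian-part term equals \(e^{-\lambda t}/(2\lambda )\), which vanishes as \(t\rightarrow \infty \); the following sketch checks this numerically.

```python
import numpy as np

lam = 1.0
def f(s):
    return np.where(s >= 0, np.exp(-lam * s), 0.0)

# Riemann-sum approximation of the Gaussian-part term of the mixing
# criterion: the integral of f(-s) * f(t - s) ds over the real line.
s = np.linspace(-200.0, 200.0, 400001)
ds = s[1] - s[0]
def gaussian_term(t):
    return np.sum(f(-s) * f(t - s)) * ds

# Closed form for this kernel: exp(-lam * t) / (2 * lam) -> 0 as t grows.
for t in [0.0, 2.0, 8.0]:
    print(t, gaussian_term(t), np.exp(-lam * t) / (2 * lam))
```

The decay of this term (together with the analogous decay of the jump term) is exactly what Theorem 3.6 below establishes in full generality.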

The following theorem is the main result of this section, while the next corollary is an extension of it.

Theorem 3.6

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\int _{S}\int _{{\mathbb {R}}^{l}}f(A,t-s)\Lambda (\mathrm{d}A,\mathrm{d}s))_{t\in {\mathbb {R}}^{l}}\) be an MMA random field where \(\Lambda \) is an \({\mathbb {R}}^{d}\)-valued Lévy basis on \(S\times {\mathbb {R}}^{l}\) with generating quadruple \((\gamma ,\Sigma , Q,\pi )\) and \(f:S\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) is a measurable function. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.

From the above result and Proposition 2.13, we have this corollary.

Corollary 3.7

Sums of independent MMA random fields are stationary, ID and mixing random fields.

Proof

This corollary is an immediate consequence of Theorem 3.6 and Proposition 2.13. \(\square \)

Remark 3.8

The above corollary holds for any linear combination of independent MMA random fields, including MMAs driven by different Lévy bases and defined on different parameter spaces S.

3.1 Meta-Times and Subordination

In this section we give a brief introduction to the concepts of meta-times and subordination, and present a corollary of Theorem 3.6.

First, we recall the definition of a homogeneous Lévy sheet (see Definition 2.1 of [4]). Let \(\triangle ^{b}_{a}F\) denote the increment of a function F over an interval \((a,b]\subset {\mathbb {R}}^{k}_{+}\), and let \(a\le b\) mean \(a^{i}\le b^{i}\) for \(i=1,\ldots ,k\) (see [4]), where \({\mathbb {R}}^{m}_{+}=\{x\in {\mathbb {R}}^{m}:x^{i}\ge 0, i=1,\ldots ,m \}\) and \(m\in {\mathbb {N}}\). Let \(k,l\in {\mathbb {N}}\).

Definition 3.9

Let \(X=\{X_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) be a family of random vectors in \({\mathbb {R}}^{d}\). We say that X is a homogeneous Lévy sheet on \({\mathbb {R}}^{k}_{+}\) if: (i) \(X_{t}=0\) a.s. for all \(t\in \{t\in {\mathbb {R}}^{k}_{+}:t^{j}=0\,\,\text { for some }j\in \{1,\ldots ,k\} \}\); (ii) \(\triangle _{a_{1}}^{b_{1}}X,\ldots ,\triangle _{a_{n}}^{b_{n}}X\) are independent whenever \(n\ge 2\) and \((a_{1},b_{1}],\ldots ,(a_{n},b_{n}]\subset {\mathbb {R}}^{k}_{+}\) are disjoint; (iii) X is continuous in probability; (iv) \(\triangle _{a+t}^{b+t}X{\mathop {=}\limits ^{d}}\triangle _{a}^{b}X\) for all \(a,b,t\in {\mathbb {R}}^{k}\) with \(a\le b\); and (v) all sample paths of X are lamp.

The concept of lamp (i.e. limits along monotone paths) is the multiparameter analogue of càdlàg. If \(X=\{X_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) is a homogeneous Lévy sheet, then \({\mathscr {L}}(\triangle _{a}^{b}X)\in ID({\mathbb {R}}^{d})\).

Now let \(X=\{X_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) be an \({\mathbb {R}}^{d}\)-valued homogeneous Lévy sheet on \({\mathbb {R}}^{k}_{+}\) and \(\Lambda _{X}=\{\Lambda _{X}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}_{+}) \}\) be the homogeneous Lévy basis induced by X, namely \(\Lambda _{X}([0,t])=X_{t}\) a.s. for all \(t\in {\mathbb {R}}^{k}_{+}\). Let \(T=\{T_{t}:t\in {\mathbb {R}}^{k}_{+} \}\) be an \({\mathbb {R}}_{+}\)-valued homogeneous Lévy sheet and \(\Lambda _{T}=\{\Lambda _{T}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}_{+}) \}\) be the non-negative homogeneous Lévy basis induced by T. Define \({\mathscr {F}}^{T}=\sigma (\Lambda _{T}(A):A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+}))\) to be the \(\sigma \)-field generated by \(\Lambda _{T}\). Then there exists an \(({\mathscr {F}}^{T},{\mathscr {B}}({\mathbb {R}}^{k}_{+}),{\mathscr {B}}({\mathbb {R}}^{k}))\)-measurable mapping \(\phi _{T}:\Omega \times {\mathbb {R}}^{k}_{+}\rightarrow {\mathbb {R}}^{k}\) such that for all \(\omega \in \Omega \) and \(A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+})\) the set \(\mathbf{T }(A)(\omega )=\{x\in {\mathbb {R}}^{k}_{+}:\phi _{T}(\omega ,x)\in A \}\) is bounded and

$$\begin{aligned} \Lambda _{T}(A)(\omega )=Leb(\mathbf{T }(A)(\omega )). \end{aligned}$$

For each \(\omega \), \(\mathbf{T }(\cdot )(\omega )\) is called a meta-time associated with \(\Lambda _{T}(\cdot )(\omega )\). Let \(M=\{M(A):A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}_{+}) \}\) be defined as

$$\begin{aligned} M(A)(\omega )=\Lambda _{X}(\mathbf{T }(A))(\omega ) \end{aligned}$$

for all \(A\in {\mathscr {B}}({\mathbb {R}}^{k}_{+})\). We say that M arises by extended subordination of \(\Lambda _{X}\) by \(\Lambda _{T}\) (or of X by T). Then, by Theorem 5.1 of [4], M is a homogeneous Lévy basis.
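In the one-parameter case (k = 1), extended subordination reduces to classical subordination of a Lévy process. As a quick illustration (our example, not from the paper), subordinating a Brownian motion X by a Gamma subordinator T yields, at a fixed time, a variance-gamma law whose first two moments are easy to check by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

# k = 1 illustration: M([0, t]) = X_{T_t}, with X a Brownian motion and
# T a Gamma subordinator with E[T_t] = t (shape/scale are our choices).
n, t = 1_000_000, 1.0
T_t = rng.gamma(shape=t / 0.5, scale=0.5, size=n)  # subordinator at time t
M_t = rng.normal(0.0, np.sqrt(T_t))                # X_{T_t} | T_t ~ N(0, T_t)

# Conditioning on T_t gives E[M_t] = 0 and Var(M_t) = E[T_t] = t.
print(M_t.mean(), M_t.var())
```

The same conditioning argument is what makes M inherit the independent and stationary increments of X and T, i.e. the homogeneous Lévy basis property cited above.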

Therefore, we have the following corollary of Theorem 3.6.

Corollary 3.10

Let \(X=\{X_{t}:t\in {\mathbb {R}}^{k} \}\) be an \({\mathbb {R}}^{d}\)-valued homogeneous Lévy sheet on \({\mathbb {R}}^{k}\) and \(\Lambda _{X}=\{\Lambda _{X}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}) \}\) be the homogeneous Lévy basis induced by X. Let \(T=\{T_{t}:t\in {\mathbb {R}}^{k} \}\) be an \({\mathbb {R}}_{+}\)-valued homogeneous Lévy sheet and \(\Lambda _{T}=\{\Lambda _{T}(A):A\in {\mathscr {B}}({\mathbb {R}}^{k}) \}\) be the non-negative homogeneous Lévy basis induced by T. Let \(M=\{M(A):A\in {\mathscr {B}}_{b}({\mathbb {R}}^{k}) \}\) arise by extended subordination of \(\Lambda _{X}\) by \(\Lambda _{T}\). Let \((Y_{t})_{t\in {\mathbb {R}}^{l}}{\mathop {=}\limits ^{d}}(\int _{{\mathbb {R}}^{k-l}}\int _{{\mathbb {R}}^{l}}f(B,t-s)M(\mathrm{d}B,\mathrm{d}s))_{t\in {\mathbb {R}}^{l}}\), where \(f:{\mathbb {R}}^{k-l}\times {\mathbb {R}}^{l}\rightarrow M_{q\times d}({\mathbb {R}})\) is a measurable function. Then \((Y_{t})_{t\in {\mathbb {R}}^{l}}\) is mixing.

Proof

It suffices to note that the framework introduced above holds for \({\mathbb {R}}^{k}\) and not just for \({\mathbb {R}}^{k}_{+}\) (see [4]), and that M is an \({\mathbb {R}}^{d}\)-valued homogeneous Lévy basis on \({\mathbb {R}}^{k}\). The result then follows from Theorem 3.6. \(\square \)

4 Weak Mixing and Ergodicity

In this section, we first show how to modify our results to obtain weak mixing versions of the results presented above, and then prove that ergodicity and weak mixing are equivalent for stationary ID random fields. We start with the definitions of a density one set and of weak mixing for stationary random fields.

Definition 4.1

A set \(E\subset {\mathbb {R}}^{l}\) is said to have density zero in \({\mathbb {R}}^{l}\) with respect to the Lebesgue measure \(\lambda \) if

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\frac{1}{(2T)^{l}}\int _{(-T,T]^{l}}{\mathbf {1}}_{E}(x)\lambda (\mathrm{d}x)=0. \end{aligned}$$

A set \(D\subset {\mathbb {R}}^{l}\) is said to have density one in \({\mathbb {R}}^{l}\) if \({\mathbb {R}}^{l}{\setminus } D\) has density zero in \({\mathbb {R}}^{l}\).
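As a concrete example (ours, in the case l = 1), the unbounded set \(E=\bigcup _{k\ge 0}[2^{k},2^{k}+1)\) has density zero: its Lebesgue mass in \((-T,T]\) grows only like \(\log _{2}T\). A quick numerical check:

```python
import numpy as np

# E = union over k >= 0 of [2^k, 2^k + 1): unbounded, yet of density zero.
def in_E(x):
    x = np.asarray(x, dtype=float)
    k = np.floor(np.log2(np.where(x >= 1.0, x, 1.0)))
    return (x >= 1.0) & (x - 2.0 ** k < 1.0)

# (1/(2T)) * Leb(E intersect (-T, T]), approximated on a fine grid.
def density_in_window(T, n=1_000_000):
    x = np.linspace(-T, T, n)
    return in_E(x).mean()

for T in [10.0, 100.0, 1000.0]:
    print(T, density_in_window(T))
```

The complement of E is then a density one set, and a sequence staying in it can still diverge to infinity, which is exactly the situation handled by \({\mathscr {T}}_{D}\) below.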

The class of all sequences in D that diverge to infinity will be denoted by

$$\begin{aligned} {\mathscr {T}}_{D}:=\left\{ (t_{n})_{n\in {\mathbb {N}}}\subset {\mathbb {R}}^{l}\cap D:\lim \limits _{n\rightarrow \infty }\Vert t_{n}\Vert _{\infty }=\infty \right\} . \end{aligned}$$

Definition 4.2

Consider the random field \(X_{t}(\omega )=X_{0}\circ \theta _{t}(\omega )\), \(t\in {\mathbb {R}}^{l}\), where \(\{\theta _{t} \}_{t\in {\mathbb {R}}^{l}}\) is a measure preserving \({\mathbb {R}}^{l}\)-action. Let \(\sigma _{X}\) be the \(\sigma \)-algebra generated by the field \((X_{t})_{t\in {\mathbb {R}}^{l}}\). We say that \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if there exists a density one set D such that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {P}}(A\cap \theta _{t_{n}}(B))={\mathbb {P}}(A){\mathbb {P}}(B), \end{aligned}$$

for all \(A,B\in \sigma _{X}\) and all \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\).

We are now ready to state the weak mixing version of Theorem 2.3.

Theorem 4.3

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\), with \(l\in {\mathbb {N}}\), be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field such that \(Q_{0}\), the Lévy measure of \({\mathscr {L}}(X_{0})\), satisfies \(Q_{0}(\{x=(x_{1},\ldots ,x_{d})'\in {\mathbb {R}}^{d}:\exists j\in \{1,\ldots ,d\},x_{j}\in 2\pi {\mathbb {Z}}\})=0\). Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if and only if there exists a density one set \(D\subset {\mathbb {R}}^{l}\) such that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }{\mathbb {E}}\left[ e^{i(X_{t_{n}}^{(j)}-X_{0}^{(k)})}\right] ={\mathbb {E}}\left[ e^{iX_{0}^{(j)}}\right] \cdot {\mathbb {E}}\left[ e^{-iX_{0}^{(k)}}\right] , \end{aligned}$$

for any \(j,k=1,\ldots ,d\) and for any sequence \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\).

Proof

The argument used in the first part of the proof of Theorem 2.3 also applies to the case \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\). Combining it with Theorem 4.4 completes the proof. \(\square \)

Theorem 4.4

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary infinitely divisible random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if and only if there exists a density one set \(D\subset {\mathbb {R}}^{l}\) such that

(MM1) the covariance matrix function \(\Sigma (t_{n})\) of the Gaussian part of \((X_{t_{n}})_{t_{n}\in {\mathbb {R}}^{l}}\) tends to 0, as \(n\rightarrow \infty \),

\((MM2')\) \(\lim \nolimits _{n\rightarrow \infty }Q_{0t_{n}}(\Vert x\Vert \cdot \Vert y\Vert >\delta )=0\) for any \(\delta >0\), where \(Q_{0t_{n}}\) is the Lévy measure of \({\mathscr {L}}(X_{0},X_{t_{n}})\) on \(({\mathbb {R}}^{2d},{\mathscr {B}}({\mathbb {R}}^{2d}))\),

where \((t_{n})_{n\in {\mathbb {N}}}\) is any sequence in \({\mathscr {T}}_{D}\).

Proof

The arguments used in the proof of Theorem 2.4 go through for the case \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\). \(\square \)

Remark 4.5

It is possible to obtain weak mixing versions also of the following results stated previously: Corollaries 2.6, 2.7, 2.8 and 2.9, Theorem 2.11, Corollary 2.12 and Proposition 2.13. The proofs are omitted because they follow exactly the same arguments as the proofs of the respective results; the only difference is that we have \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\) instead of \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}\), which does not require any change in the arguments.

Among these results, we have the following corollary, which is the weak mixing version of Corollary 2.7 and will be useful for our next result: the equivalence between weak mixing and ergodicity for stationary ID random fields.

Corollary 4.6

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued strictly stationary ID random field. Then, with the previous notation, \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is weakly mixing if and only if there exists a density one set \(D\subset {\mathbb {R}}^{l}\) such that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Big \{\Vert \Sigma (t_{n})\Vert +\int _{{\mathbb {R}}^{2d}}\min (1,\Vert x\Vert \cdot \Vert y\Vert )Q_{0t_{n}}(\mathrm{d}x,\mathrm{d}y)\Big \}=0 \end{aligned}$$

for any \((t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}\).

Proof

See discussion in Remark 4.5. \(\square \)

In order to prove the equivalence between ergodicity and weak mixing for ID stationary random fields we will need various preliminary results, some of which have already been proven in the literature. We start with two known results.

Lemma 4.7

(Lemma 4.3 of the latest arXiv version (v3) of [22]) Let \(f:{\mathbb {R}}^{l}\rightarrow {\mathbb {R}}\) be non-negative and bounded. A necessary and sufficient condition for

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\frac{1}{(2T)^{l}}\int _{(-T,T]^{l}}f(t)\mathrm{d}t=0 \end{aligned}$$

is that there exists a subset D of density one in \({\mathbb {R}}^{l}\) such that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }f(t_{n})=0,\quad \text {for any}\quad (t_{n})_{n\in {\mathbb {N}}}\in {\mathscr {T}}_{D}. \end{aligned}$$

Lemma 4.8

(Theorem 2.3.2 of [5]) Let \(\mu \) be a finite measure on \({\mathbb {R}}^{l}\). Then

$$\begin{aligned} \lim \limits _{T\rightarrow \infty }\frac{1}{(2T)^{l}}\int _{(-T,T]^{l}}{\hat{\mu }}(t)\mathrm{d}t=\mu (\{0\}), \end{aligned}$$

where \({\hat{\mu }}\) denotes the Fourier transform of \(\mu \).
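A toy check of Lemma 4.8 (our example, in dimension l = 1): for \(\mu =0.3\,\delta _{0}+0.7\,N(0,1)\) one has \({\hat{\mu }}(t)=0.3+0.7e^{-t^{2}/2}\), and the windowed average indeed converges to \(\mu (\{0\})=0.3\).

```python
import numpy as np

a = 0.3  # mass of mu at the origin
def mu_hat(t):
    # Fourier transform of mu = a*delta_0 + (1 - a)*N(0, 1).
    return a + (1.0 - a) * np.exp(-t**2 / 2.0)

def windowed_average(T, n=200001):
    t = np.linspace(-T, T, n)
    return mu_hat(t).mean()  # ~ (1/(2T)) * integral of mu_hat over (-T, T]

for T in [10.0, 100.0, 1000.0]:
    print(T, windowed_average(T))
```

Only the atom at the origin survives the averaging: the absolutely continuous part contributes a term of order 1/T.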

The following lemma is an adaptation to our framework of Lemma 3 of [14].

Lemma 4.9

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field and \(Q_{0t}^{jk}\) be the Lévy measure of \({\mathscr {L}}(X_{0}^{(j)},X_{t}^{(k)})\). Then for every \(\delta >0\) and \(j,k=1,\ldots ,d\), the family of finite measures \((Q_{0t}^{jk}|_{K_{\delta }^{c}})_{t\in {\mathbb {R}}^{l}}\) is weakly relatively compact and

$$\begin{aligned} \lim \limits _{\delta \rightarrow 0}\sup \limits _{t\in {\mathbb {R}}^{l}}\int _{K_{\delta }}|xy|Q_{0t}^{jk}(\mathrm{d}x,\mathrm{d}y)=0, \end{aligned}$$
(10)

where \(K_{\delta }=\{(x,y):x^{2}+y^{2}\le \delta ^{2} \}\).

Proof

This result comes directly from the proof of Lemma 3 of [14]. \(\square \)

Now we will investigate the auto-codifference matrix of the \({\mathbb {R}}^{d}\)-valued stationary ID random field \((X_{t})_{t\in {\mathbb {R}}^{l}}\), which was already introduced in Section 2.1. Consider

$$\begin{aligned} \tau (t_{n})=\left( \tau ^{(jk)}(t_{n})\right) _{j,k=1,\ldots ,d}, \end{aligned}$$

with

$$\begin{aligned} \tau ^{(jk)}(t_{n}):= & {} \tau \left( X_{0}^{(k)},X_{t_{n}}^{(j)}\right) =\log {\mathbb {E}}\Big [e^{i(X_{0}^{(k)}-X_{t_{n}}^{(j)})}\Big ]-\log {\mathbb {E}}\Big [e^{iX_{0}^{(k)}}\Big ]-\log {\mathbb {E}}\Big [e^{-iX_{t_{n}}^{(j)}}\Big ]\nonumber \\= & {} \sigma _{t_{n}}^{jk}+\int _{{\mathbb {R}}^{2}}(e^{ix}-1)\overline{(e^{iy}-1)}Q_{0t_{n}}^{jk}(\mathrm{d}x,\mathrm{d}y), \end{aligned}$$
(11)

where \(\sigma _{t_{n}}^{jk}\) is the off-diagonal entry of \(\Gamma ^{jk}_{t_{n}}\), the covariance matrix of the Gaussian part of \((X_{0}^{(k)},X_{t_{n}}^{(j)})\), which is given by

$$\begin{aligned} \Gamma ^{jk}_{t_{n}}= \begin{pmatrix} \sigma _{0}^{jk} &{}\quad \sigma _{t_{n}}^{jk} \\ \sigma _{t_{n}}^{jk} &{}\quad \sigma _{0}^{jk} \end{pmatrix}. \end{aligned}$$

Proposition 4.10

Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued ID random field (not necessarily stationary). Then for any \(j,k=1,\ldots ,d\) the function

$$\begin{aligned} {\mathbb {R}}^{l}\times {\mathbb {R}}^{l}\ni (s,t)\rightarrow \tau \left( X^{(j)}_{s},X^{(k)}_{t}\right) \in {\mathbb {C}} \end{aligned}$$

is non-negative definite.

Proof

We argue as in Proposition 2 of [14]. Without loss of generality let \(t\ge s\) with \(t,s\in {\mathbb {R}}^{l}\). As seen above, we have

$$\begin{aligned} \tau \left( X_{s}^{(k)},X_{t}^{(j)}\right) =\sigma _{t-s}^{jk}+\int _{{\mathbb {R}}^{2}}(e^{ix}-1)\overline{(e^{iy}-1)}Q_{st}^{jk}(\mathrm{d}x,\mathrm{d}y). \end{aligned}$$
(12)

Since \((s,t)\rightarrow \sigma _{t-s}^{jk}\) is non-negative definite, being a covariance function, it remains only to show that the second term on the right-hand side of (12) is non-negative definite. This follows from Lemma 4 in [14]. \(\square \)

We can now state and later prove (see “Appendix C”) the second main theorem of this section, which states the equivalence between ergodicity and weak mixing for ID stationary random fields.

Theorem 4.11

Let \(l,d\in {\mathbb {N}}\). Let \((X_{t})_{t\in {\mathbb {R}}^{l}}\) be an \({\mathbb {R}}^{d}\)-valued stationary ID random field. Then \((X_{t})_{t\in {\mathbb {R}}^{l}}\) is ergodic if and only if it is weakly mixing.

5 Conclusion

In this work we derived various results concerning the ergodicity and mixing properties of multivariate stationary infinitely divisible random fields. A possible future direction is the investigation of the statistical implications of the results presented in this paper. For example, for multivariate stochastic processes, showing that mixed moving average (MMA) processes are mixing implies that the corresponding moment-based estimators (such as the generalised method of moments (GMM)) are consistent (see [6]). However, it is not clear whether a similar result holds in the random field case. Other possible directions would be to extend the present results to random fields on manifolds or on infinite-dimensional vector spaces. However, the literature there is not as developed as in the \({\mathbb {R}}^{l}\) case and requires further work.