1 Introduction

Asymptotic results for extreme values of random fields have attracted much attention recently; see Wu and Samorodnitsky (2020), Basrak and Planinic (2021), and Jakubowski and Soja-Kukieła (2019), to name a few. Extending the basic results of Basrak and Segers (2009) from the time series context, the newly developed approaches focus on stationary regularly varying \(\mathbb {R}^{d}\)-valued random fields \(\textbf{X}=( \textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\): the random vectors \((\textbf{X}_{\textbf{t}_{1}},\ldots ,\textbf{X}_{\textbf{t}_{n}})\) are regularly varying in \(\mathbb {R}^{nd}\) for all \(\textbf{t}_{1},\ldots ,\textbf{t}_{n}\in \mathbb {Z}^{k}\). The spectral tail random field \(\mathbf \Theta :=(\mathbf {\Theta }_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\), whose existence this implies, characterizes the limit behaviour of the extremes around the origin \(\{\textbf{0}\}\), given that \(\textbf{X}_{\textbf{0}}\) is extreme and after normalization by \(|\textbf{X}_\textbf{0}|\). The random field \(\mathbf \Theta\) thus characterizes the extremes of \(\textbf{X}\) and hence any asymptotic extreme value quantity. One important phenomenon is the clustering of extremes, i.e., the tendency of extremes to occur locally. To formalize this phenomenon, the approach is to extend the basic result of Davis and Hsing (1995) via the convergence of the point process of exceedances. More precisely, we define \(N_n\) as a simple point process of exceedances on the hyperrectangle \(C_n=[1,n]^k\), \(n\ge 1,\)

$$\begin{aligned} N_{n}:=\sum _{\textbf{t}\in C_n}\varepsilon _{a_{n^k}^{-1}\textbf{X}_{\textbf{t}}}\,. \end{aligned}$$

The level of exceedances is set to \(a_{n^k}\), where \((a_n)\) satisfies \(\lim _{n\rightarrow \infty }n\mathbb P(|\textbf{X}_{\textbf{0}}| > a_n)=1\), as if the observations were independent. Wu and Samorodnitsky (2020) show that \(N_n\) converges to a cluster point process N on \(\mathbb R^d\setminus \{0\}\), and an explicit representation of the limit is given in Basrak and Planinic (2021). More precisely, there is a spectral cluster field \(\textbf{Q}:=(\textbf{Q}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^k}\) whose distribution is derived from that of \((\mathbf {\Theta }_{\textbf{t}})\) and for which

$$\begin{aligned} N=\sum _{i=1}^{\infty } \sum _{\textbf{t}\in \mathbb {Z}^k}\varepsilon _{\Gamma _{i}^{-1/\alpha } \textbf{Q}_{ i,\textbf{t}}}\,, \end{aligned}$$

where \((\sum _{\textbf{t}\in \mathbb {Z}^k}\varepsilon _{ \textbf{Q}_{ i,\textbf{t}}})_{i\ge 1}\) are independent and identically distributed (iid) copies of \(\sum _{\textbf{t}\in \mathbb {Z}^k}\varepsilon _{ \textbf{Q}_{ \textbf{t}}}\), independent of the points \((\Gamma _{i})\) of a standard Poisson process. The random field \(\textbf{Q}\) is crucial since it accurately describes the asymptotic clustering phenomenon. This paper aims to introduce and analyze a new setting adapted to index sets other than the hyperrectangle \(C_n\).
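The series representation above can be probed numerically. The following Python sketch (ours, not from the paper) uses the arrival times \(\Gamma_i\) of a unit-rate Poisson process and a fixed, purely illustrative deterministic cluster \(q=(q_t)\); for such a cluster the largest point of N equals \(\Gamma _{1}^{-1/\alpha }\max _t|q_t|\), whose distribution is Fréchet(\(\alpha\)) up to scale, and we check this by simulation.

```python
# Illustrative sketch: largest point of the cluster series representation.
# The cluster q below is a hypothetical deterministic stand-in for Q.
import math, random

random.seed(2)
alpha = 2.0
q = [0.5, 1.0, 0.25]                  # hypothetical deterministic cluster

def largest_point():
    gamma_1 = random.expovariate(1.0)  # first arrival of a unit-rate Poisson process
    # Gamma_i^{-1/alpha} is decreasing in i, so with identical deterministic
    # clusters the overall maximum comes from the i = 1 cluster.
    return gamma_1 ** (-1.0 / alpha) * max(q)

n_sim, x = 50000, 2.0
emp = sum(largest_point() <= x for _ in range(n_sim)) / n_sim
# P(max point <= x) = P(Gamma_1 >= (max|q|/x)^alpha) = exp(-(max|q|/x)^alpha)
theo = math.exp(-((max(q) / x) ** alpha))
assert abs(emp - theo) < 0.01
```

With a random cluster field \(\textbf{Q}\) the same simulation scheme applies; only the deterministic \(q\) would be replaced by iid draws of the cluster.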

Let \(\Lambda _n\) be an arbitrary finite subset of \(\mathbb Z^k\). Such an extension beyond rectangular index sets is not straightforward, as Stehr and Rønn-Nielsen (2021) showed in the asymptotically independent case (\(\mathbf \Theta _{\textbf{t}}=0\) for all \({\textbf{t}}\ne \textbf{0}\)): a geometric condition on the index sets \((\Lambda _n)\) must hold. To motivate the study of non-rectangular index sets, let us describe the most common index sets in the literature. Spatio-temporal sets \(\Lambda _n\) are typically of the form \(\mathcal C\times \{T,\ldots ,mT\}\), where \(\mathcal C\) is a fixed finite grid in \(\mathbb {Z}^{2}\), \(T\ge 1\) is the observation period expressed in space-time units, and \(m\) is the number of observation periods. As usual, we consider a stationary, regularly varying random field \((\textbf{X}_{\textbf{t}})\). Note that the assumption of stationarity in time and space on \(\textbf{X}\) is standard in environmental statistics, even when the spatial grid \(\mathcal C\) is large but finite; see the review paper by Davison et al. (2012) and references therein. We first obtain the existence of a limiting point process \(N^ \Lambda\) as follows:

$$\begin{aligned} N^{\Lambda }_{n}:=\sum _{\textbf{t}\in \Lambda _{n}}\varepsilon _{{a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}}}{\mathop {\rightarrow }\limits ^{d}}\,N^{\Lambda }\,,\qquad n\rightarrow \infty \,, \end{aligned}$$

where \(a_{n}^\Lambda\) comes from \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|\mathbb {P}(|\textbf{X}_{\textbf{0}}| > a_{n}^\Lambda )=1\).
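The normalization \(a_{n}^\Lambda\) is explicit in simple cases. As a minimal Python sketch (ours, not from the paper), assume a standard Pareto modulus, \(\mathbb {P}(|\textbf{X}_{\textbf{0}}|>x)=x^{-\alpha }\) for \(x\ge 1\); then solving \(|\Lambda _n|\,\mathbb {P}(|\textbf{X}_{\textbf{0}}|>a)=1\) gives \(a_{n}^\Lambda =|\Lambda _n|^{1/\alpha }\).

```python
# Sketch: normalizing level for a standard Pareto(alpha) modulus.
def tail(x, alpha):
    """Survival function P(|X_0| > x) of a standard Pareto(alpha) modulus."""
    return x ** (-alpha) if x >= 1.0 else 1.0

def a_n_lambda(size_lambda_n, alpha):
    """Level a_n^Lambda solving |Lambda_n| * P(|X_0| > a) = 1."""
    return size_lambda_n ** (1.0 / alpha)

alpha = 1.5
for m in (10, 1000, 10 ** 6):          # |Lambda_n| for growing index sets
    a = a_n_lambda(m, alpha)
    assert abs(m * tail(a, alpha) - 1.0) < 1e-9
```

For general regularly varying tails, \(a_{n}^\Lambda\) is obtained from the quantile function in the same way, up to a slowly varying correction.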

In contrast to the hyperrectangle case, the distribution of the point process \(N^ \Lambda\) depends on the asymptotic lattice properties of the general index sets \(\Lambda _n\). Not surprisingly, the asymptotic shape of the index set \(\Lambda _n\) constrains the clustering effect. We derive the limiting distribution of the point process under a sufficient condition ensuring that \(\Lambda _n\) asymptotically consists of translated versions of countably many fixed sets \(\mathcal D_j\). It turns out that the limiting cluster point process distribution is a mixture of expressions of the spectral tail random field over the different \(\mathcal D_j\). However, a representation of the clusters in terms of the original spectral tail random field \((\mathbf {\Theta }_{\textbf{t}})\) is limited to specific \(\Lambda _n\): we derive a representation of the limiting points similar to that in Basrak and Planinic (2021) only when all the \(\mathcal D_j\)’s are lattices. This condition is satisfied for \(C_n\), and we recover the characterization of the clusters first provided in Basrak and Planinic (2021).

For irregular index sets \(\Lambda _n\), the representation of the (asymptotic) clusters does not naturally involve the original spectral tail random field \((\mathbf {\Theta }_{\textbf{t}})\). Instead, one has to introduce the \(\Upsilon\)-tail field, which characterizes the limiting behaviour of the extremes around the region \(\Upsilon\), given that \((\textbf{X}_{\textbf{t}})\) is extreme over \(\Upsilon\) and normalized by a modulus of \((\textbf{X}_{\textbf{t}})_{{\textbf{t}}\in \Upsilon }\). This framework has already been developed for iid sequences by Ferreira and de Haan (2014) and for time series by Segers et al. (2017), but not for random fields.

Our main contribution is to introduce a very general setting for index sets, namely Condition \((\mathcal {D}^{\Lambda })\), which does not involve any topological properties. This condition allows for countably many different shapes, and it is only an asymptotic condition. In particular, it implies that some shapes repeat, approximately, a number of times proportional to \(|\Lambda _n|\). Surprisingly, the shape of the asymptotic local region \(\Upsilon\) is arbitrary: in contrast with existing results such as Stehr and Rønn-Nielsen (2021), we show that convexity is not required when dealing with extremes. Under the anti-clustering condition, we deal with index sets that are local regions such as \(\Upsilon\) reproduced over a lattice.

The rest of the paper is organized as follows. Section 2 is devoted to preliminaries, notation and the main assumptions. The theoretical properties implied by the crucial Condition \((\mathcal {D}^{\Lambda })\) are discussed in Section 3. The asymptotic clusters for any index set \(\Lambda _n\) satisfying Condition \((\mathcal {D}^{\Lambda })\) are studied in Section 4, while in Section 5 we present their characterization based on the \(\Upsilon\)-spectral tail field. Two applications of this new approach are developed in Section 6: determining the extremal index and providing sufficient conditions for max-stable random fields. Section 7 contains the proofs of the results of Section 3, and the remaining proofs are collected in Sections 8 and 9.

2 Preliminaries, notation and main assumptions

Let \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) be an \(\mathbb {R}^{d}\)-valued regularly varying stationary random field.

2.1 Spectral tail fields

Let us recall two fundamental results of Wu and Samorodnitsky (2020): the existence of the tail field and the time change formula for the tail and spectral tail fields.

Theorem 1

(Theorem 2.1 in Wu and Samorodnitsky (2020)) An \(\mathbb {R}^{d}\)-valued stationary random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) is jointly regularly varying with index \(\alpha\) if and only if there exists a random field \((\textbf{Y}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) such that

$$\begin{aligned} \mathcal {L}\big (x^{-1}\textbf{X}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k}\big ||\textbf{X}_{\textbf{0}}|>x\big ){\mathop {\rightarrow }\limits ^{fdd}}\mathcal {L}(\textbf{Y}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k}) \end{aligned}$$

as \(x\rightarrow \infty\), and \(\mathbb {P}(|\textbf{Y}_{\textbf{0}}|>y)=y^{-\alpha }\) for \(y\ge 1\). We call \((\textbf{Y}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) the tail field of \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\).

Theorem 2

(Theorem 3.2 in Wu and Samorodnitsky (2020)) Let \((\textbf{Y}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) be the tail field corresponding to an \(\mathbb {R}^{d}\)-valued stationary random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) that is jointly regularly varying with index \(\alpha\) and define \(\mathbf {\Theta }_{\textbf{t}}=\textbf{Y}_{\textbf{t}}/|\textbf{Y}_{\textbf{0}}|\), \(\textbf{t}\in \mathbb {Z}^{k}\). Let \(g:(\mathbb {R}^{d})^{\mathbb {Z}^{k}}\rightarrow \mathbb {R}\) be a bounded measurable function. Take any \(\textbf{s}\in \mathbb {Z}^{k}\). Then the following identities hold:

$$\begin{aligned} \mathbb {E}[g((\textbf{Y}_{\textbf{t} - \textbf{s}})_{\textbf{t}\in \mathbb {Z}^k})\textbf{1}(\textbf{Y}_{-\textbf{s}}\ne \textbf{0})]=\int _{0}^{\infty }\mathbb {E}[g((r\mathbf {\Theta }_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^k})\textbf{1}(r|\mathbf {\Theta }_{\textbf{s}}|>1)]d(-r^{-\alpha }), \end{aligned}$$
(1)
$$\begin{aligned} \mathbb {E}[g((\mathbf {\Theta }_{\textbf{t} - \textbf{s}})_{\textbf{t}\in \mathbb {Z}^k})\textbf{1}(\mathbf {\Theta }_{-\textbf{s}}\ne \textbf{0})]=\mathbb {E}\bigg [g\bigg (\frac{(\mathbf {\Theta }_{\textbf{t} })_{\textbf{t}\in \mathbb {Z}^k}}{|\mathbf {\Theta }_{\textbf{s}}|}\bigg )|\mathbf {\Theta }_{\textbf{s}}|^{\alpha } \bigg ]. \end{aligned}$$
(2)

We call \((\mathbf {\Theta }_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) the spectral tail field of \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\).

Denote by \(\le\) the component-wise order on \(\mathbb {Z}^{k}\): for \(\textbf{i} = (i_{1},\ldots , i_{k})\), \(\textbf{j} = (j_{1},\ldots , j_{k})\) in \(\mathbb {Z}^{k}\), \(\textbf{i}\le \textbf{j}\) if \(i_{l}\le j_{l}\) for all \(l = 1,\ldots , k\).

We consider a linear order \(\prec\) on \(\mathbb {Z}^{k}\) that is translation invariant: \(\textbf{s} \prec \textbf{t}\) for \(\textbf{s}, \textbf{t} \in \mathbb {Z}^{k}\) implies \(\textbf{s} + \textbf{i} \prec \textbf{t} + \textbf{i}\) for any \(\textbf{i}\in \mathbb {Z}^{k}\). An example of an invariant order is the lexicographic (or dictionary) order: for \(\textbf{s}, \textbf{t} \in \mathbb {Z}^{k}\), we say that \(\textbf{s} \prec \textbf{t}\) if either (1) \(s_{1}< t_{1}\), or (2) there exists \(2 \le j \le k\) such that \(s_{i} = t_{i}\) for all \(i = 1,\ldots , j - 1\), and \(s_{j} < t_{j}\).
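The lexicographic order and its translation invariance can be checked mechanically. In the Python sketch below (ours, not from the paper), \(\prec\) is simply Python's tuple comparison, which is lexicographic, and invariance is verified on random triples.

```python
# Sketch: lexicographic order on Z^k and its translation invariance.
import random

def prec(s, t):
    """Lexicographic order on Z^k: tuple comparison compares the first
    differing coordinate, exactly as in the definition above."""
    return s < t

random.seed(0)
k = 3
for _ in range(1000):
    s = tuple(random.randint(-5, 5) for _ in range(k))
    t = tuple(random.randint(-5, 5) for _ in range(k))
    i = tuple(random.randint(-5, 5) for _ in range(k))
    shift = lambda u: tuple(a + b for a, b in zip(u, i))
    # invariance: s < t  iff  s + i < t + i
    assert prec(s, t) == prec(shift(s), shift(t))
```

Any translation-invariant linear order would do; the lexicographic one is just the most convenient to implement.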

2.2 Condition (\(\mathcal {D}^{\Lambda }\)) on the index set

Consider the following simple point process:

$$\begin{aligned} N^{\Lambda }_{n}:=\sum _{\textbf{t}\in \Lambda _{n}}\varepsilon _{{a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}}}, \end{aligned}$$

where the sequence \((a_{n}^\Lambda )\) satisfies \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|\mathbb {P}(|\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda )=1\) and \(\Lambda _{n}\) is any finite subset of \(\mathbb {Z}^{k}\) such that \(|\Lambda _{n}| \rightarrow \infty\) as \(n\rightarrow \infty\). For any set \(\Upsilon \subset \mathbb {Z}^{k}\), \(c>0\), and \(\textbf{t}\in \mathbb {Z}^{k}\), let \((\Upsilon )^+:=\{\textbf{u}\in \Upsilon :\textbf{u}\succ \textbf{0}\}\), \((\Upsilon )_{-\textbf{t}}:=\{\textbf{u}\in \mathbb {Z}^{k}:\textbf{u}=\textbf{s}-\textbf{t},\textbf{s}\in \Upsilon \}\) and \(\Upsilon ^{(\textbf{t},c)}:=((\Upsilon )_{-\textbf{t}}\cap K_{c})^+\), where the hypercube \(K_c\) is defined as \(K_c=[-c,c]^k\cap \mathbb {Z}^{k}\), \(c\ge 0\). Throughout the paper we assume that \(\Lambda _n\) satisfies the following condition.
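To make this notation concrete, here is a minimal Python sketch (ours, not from the paper) of \(K_c\), \((\cdot )^+\) and \(\Lambda ^{(\textbf{t},c)}\) for \(k=2\) with the lexicographic order, applied to the hyperrectangle \(C_n=\{1,\ldots ,n\}^2\); as one would expect, a single local shape dominates as \(n\) grows.

```python
# Sketch of the index-set notation on Z^2 with the lexicographic order.
from collections import Counter
from itertools import product

def K(c):
    """Hypercube K_c = [-c, c]^2 intersected with Z^2."""
    return {(u, v) for u in range(-c, c + 1) for v in range(-c, c + 1)}

def positive_part(S):
    """(S)^+ : points strictly succeeding 0 (tuple comparison = lex order)."""
    return frozenset(u for u in S if u > (0, 0))

def local_shape(Lam, t, p):
    """Lambda^{(t,p)} = ((Lambda - t) ∩ K_p)^+."""
    shifted = {(s[0] - t[0], s[1] - t[1]) for s in Lam}
    return positive_part(shifted & K(p))

n, p = 30, 2
C_n = set(product(range(1, n + 1), repeat=2))
shapes = Counter(local_shape(C_n, t, p) for t in C_n)
top_freq = shapes.most_common(1)[0][1] / len(C_n)
assert top_freq > 0.7   # interior points dominate; the fraction tends to 1
```

For the rectangle, the dominant local shape is \(\mathcal D\cap K_p\) with \(\mathcal D=\{\textbf{t}\succ \textbf{0}\}\), which is why a single \(\mathcal D\) with weight one suffices in that case.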

Condition (\(\mathcal {D}^{\Lambda }\))

There exist (possibly countably many) different subsets of \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succ \textbf{0}\}\), which we denote by \(\mathcal {D}_{1},\mathcal {D}_{2},...\), s.t. 

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\dfrac{|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{i}\cap K_{p}\}|}{|\Lambda _n|}=:\lambda _{i,p}\rightarrow \lambda _{i}\,, \qquad p\rightarrow \infty \,, \end{aligned}$$

with \(\lambda _{i}>0\) and \(\sum _{i=1}^{q}\lambda _{i}=1\), where \(q\in \mathbb {N}\cup \{\infty \}\) is the number of these \(\mathcal {D}\)s.

Condition (\(\mathcal {D}^{\Lambda }\)) says the following. Consider a point \(\textbf{t}\) in \(\Lambda _{n}\). Translate the set \(\Lambda _{n}\) by \(-\textbf{t}\) so that \(\textbf{t}\) is now at \(\textbf{0}\). Take a hypercube around \(\textbf{0}\) of side \(2p\), for \(p\) large enough, and intersect it with the translated set and with the points that are positive according to \(\succ\). This yields \(\Lambda _{n}^{(\textbf{t},p)}\). Now, \(\Lambda _{n}^{(\textbf{t},p)}\) may coincide with the corresponding set for other points in \(\Lambda _{n}\), and also for points in \(\Lambda _{m}\) with \(m>n\). Condition (\(\mathcal {D}^{\Lambda }\)) imposes that there are distinct sets (denoted \(\mathcal {D}_{i}\cap K_{p}\), \(i\in \{1,...,q\}\)) such that the number of points \(\textbf{t}\in \Lambda _{n}\) for which \(\Lambda _{n}^{(\textbf{t},p)}\) equals one of these sets, divided by \(|\Lambda _n|\), has a limit, and these limits sum to one.

This condition is a minimal requirement for having (at least asymptotically) a structure for studying the long-run clustering behaviour of extremes.

Example 1

Imagine observing precipitation over a specific geographical area \(\mathcal C\). Stations spread throughout the region measure the precipitation. Let \(\Lambda _n\) lie in \(\mathbb {Z}^d\), where \(d\) is the sum of the time dimension and the space dimension (thus \(d=3\) or \(d=4\) depending on whether we consider the geographical region \(\mathcal C\) to lie in \(\mathbb {Z}^2\) or \(\mathbb {Z}^3\), respectively). In the time direction, each point corresponds to the amount of rain fallen in a certain amount of time, while the space directions indicate the location where this is measured. Thus, \(\textbf{X}_\textbf{t}\) with \(\textbf{t}\in \Lambda _n\) is the amount of rain measured over a certain period at a specific location. Assume that the measurements over \(\mathcal C\) repeat at a constant frequency (e.g., every week). This assumption corresponds to condition (\(\mathcal {D}^\Lambda\)), where we take the order \(\succ\) to be increasing with successive (in time) observations. In particular, imagine measuring over \(\mathcal C\) infinitely many times and denote the resulting index set by \(\mathcal{C}_\infty\). Then each \(\mathcal {D}\) is \(\mathcal{C}_\infty\) centered at \(\textbf{0}\) (that is, \(\mathcal{C}_\infty\) translated by minus one of its points) restricted to the points succeeding \(\textbf{0}\). Notice that there are only \(q=|\mathcal C|\) distinct \(\mathcal D\)s and their weights are all equal to \(1/|\mathcal C|\). Buhl and Klüppelberg (2019) already considered similar index sets.
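Example 1 can be checked numerically. The Python sketch below (ours, not from the paper) takes a hypothetical two-station grid \(\mathcal C=\{0,1\}\) observed over times \(1,\ldots ,n\), i.e. \(\Lambda _n=\mathcal C\times \{1,\ldots ,n\}\subset \mathbb {Z}^2\), with the order increasing first in time and then in space, and counts the distinct local shapes: two shapes dominate, each with weight close to \(1/|\mathcal C|=1/2\).

```python
# Sketch of Example 1 with C = {0, 1}: points are (station, time).
from collections import Counter

def succ(u):
    """u succeeds 0 under the time-major order (time first, then space)."""
    return (u[1], u[0]) > (0, 0)

def local_shape(Lam, t, p):
    """Lambda^{(t,p)} = ((Lambda - t) ∩ K_p)^+ for this order."""
    shifted = {(s[0] - t[0], s[1] - t[1]) for s in Lam}
    window = {u for u in shifted if max(abs(u[0]), abs(u[1])) <= p}
    return frozenset(u for u in window if succ(u))

n, p = 100, 2
C = (0, 1)
Lam = {(s, t) for s in C for t in range(1, n + 1)}
shapes = Counter(local_shape(Lam, t, p) for t in Lam)
weights = sorted(cnt / len(Lam) for _, cnt in shapes.most_common(2))
assert len(shapes) >= 2                        # q = |C| = 2 dominant shapes
assert all(abs(w - 0.5) < 0.05 for w in weights)
```

The few remaining shapes come from points near the final observation times; their total weight vanishes as \(n\rightarrow \infty\), in line with the condition.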

Example 2

The framework of Stehr and Rønn-Nielsen (2021, 2022) is a particular specification of ours. Indeed, consider Assumption 1 in Stehr and Rønn-Nielsen (2021) (which is Assumption 3 in Stehr and Rønn-Nielsen (2022)): the sequence \((C_n)_{n\in \mathbb {N}}\) consists of p-convex bodies (i.e. connected sets which are unions of p convex sets), where \(C_n=\cup _{i=1}^p C_{n,i}\) and \(|C_n|\rightarrow \infty\) as \(n\rightarrow \infty\), and \(\frac{\sum _{i=1}^p V_j(C_{n,i})}{|C_n|^{j/d}}\) is bounded for each \(j=1,...,d-1\), where \(V_j(C_{n,i})\) denotes the \(j\)-th intrinsic volume of the convex body \(C_{n,i}\). Consider the two-dimensional case, so \(d=2\); similar arguments apply to other dimensions. For any convex body \(C\), we have that \(V_0(C)=1\) and \(V_1(C)\) equals the perimeter of \(C\) divided by 2. Then, Assumption 1 in Stehr and Rønn-Nielsen (2021) states that the sum of the perimeters of the \(C_{n,i}\)s must not grow faster than the square root of the volume of \(C_n\). There are cases where this fails, for instance when one of the \(C_{n,i}\)s is a rectangle whose edges increase at different speeds. In general, this assumption ensures that the \(C_{n,i}\)s grow in all directions, implying that the proportion of points in \(C_n\) away from the boundary tends to 1 as \(n\rightarrow \infty\). Formally it implies that, for any \(r\in \mathbb {N}\), \(\frac{|\{\textbf{t}\in C_{n}:C_{n}^{(\textbf{t},r)}=\mathcal {D}\cap K_{r}\}|}{|C_n|}\rightarrow 1\) as \(n\rightarrow \infty\), where \(\mathcal {D}\) is simply given by \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succ \textbf{0}\}\). Therefore, Assumption 1 in Stehr and Rønn-Nielsen (2021) is strictly stronger than condition (\(\mathcal {D}^{\Lambda }\)).
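The boundary-growth contrast in Example 2 is elementary arithmetic, which the following Python sketch (ours, not from the paper) makes explicit for \(d=2\): a square \([0,n]^2\) keeps the ratio perimeter/\(\sqrt{\text{volume}}\) constant, while for a skewed rectangle \([0,n]\times [0,n^3]\) it grows without bound, so the latter violates the assumption of Stehr and Rønn-Nielsen (2021) even though it is convex.

```python
# Sketch: perimeter vs. square root of the volume for rectangles.
import math

def ratio(a, b):
    """perimeter / sqrt(area) for an a x b rectangle."""
    return 2 * (a + b) / math.sqrt(a * b)

squares = [ratio(n, n) for n in (10, 100, 1000)]       # 4n / n = 4, bounded
skewed  = [ratio(n, n ** 3) for n in (10, 100, 1000)]  # ~ 2n, unbounded
assert squares == [4.0, 4.0, 4.0]
assert skewed[0] < skewed[1] < skewed[2]
```

Up to the constant in \(V_1\), the same computation shows why the intrinsic-volume bound forces growth in all directions.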

It is important to look explicitly at the differences between our framework and that of Stehr and Rønn-Nielsen (2021, 2022). First, Condition \((\mathcal {D}^{\Lambda })\) is only an asymptotic condition; thus the set \(\Lambda _n\) (or \(C_n\)) does not need to satisfy any constraint for finite \(n\). The lack of a non-asymptotic structure for \(\Lambda _n\) is a challenge, in particular for the proof of Theorem 17; we overcome it by imposing structures that are satisfied asymptotically. Another feature of our setting exacerbates this issue: the possibility of having countably many different asymptotic sets (the \(\mathcal {D}\)s). Indeed, with countably many sets one cannot distinguish the points in \(\Lambda _n\) that will eventually form a given asymptotic set from the other points in \(\Lambda _n\), because this distinction appears only asymptotically. We overcome this by using the fact that, for any \(\varepsilon >0\), only finitely many of these sets have weights (the \(\lambda\)s) greater than \(\varepsilon\). The third difference is the structure of the asymptotic sets. While in Stehr and Rønn-Nielsen (2021, 2022) the only allowed asymptotic set is \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succ \textbf{0}\}\), as just shown, in our framework any subset of \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succ \textbf{0}\}\) is allowed. For example, \(\Lambda _n\) may be a rectangle where only one side increases.

We conclude this section by pointing out that condition (\(\mathcal {D}^{\Lambda }\)) comes from the proof of the main asymptotic results of the paper, and it is the most refined (i.e. weakest) condition we could attain. This condition is satisfied in all the previous settings (see Davis and Hsing 1995; Wu and Samorodnitsky 2020; Stehr and Rønn-Nielsen 2021, 2022; Buhl and Klüppelberg 2019).

2.3 Mixing and anti-clustering conditions

Following the seminal work of Davis and Hsing (1995) on stationary time series, we assume two complementary conditions. The anti-clustering condition rules out overly strong clustering effects. The mixing condition approximates the Laplace functional of the point process \(N^{\Lambda }\) over \(\Lambda _n\) by products of Laplace functionals of copies of the point process over a smaller index set. Such conditions were extended to random fields by Wu and Samorodnitsky (2020) for the specific index set \(C_n=[1,n]^k\). Some care is required when considering general index sets \(\Lambda _n\).

Take a sequence of positive integers \((r_{n})\) such that \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|/|\Lambda _{r_{n}}|=\infty\) and let \(k_{n}= \lfloor |\Lambda _n|/|\Lambda _{r_{n}}|\rfloor\). Let \(R_{l,\Lambda _{n}}:=\big (\bigcup _{\textbf{t}\in \Lambda _{n}}(\Lambda _{n})_{-\textbf{t}}\big )^+\setminus K_{l}\) and let \(\hat{M}^{\Lambda ,|\textbf{X}|}_{l,n}:=\max _{\textbf{i}\in R_{l,\Lambda _{n}}}|\textbf{X}_{\textbf{i}}|\) and consider the following anti-clustering condition.

Condition (AC\(^{\Lambda }_{\succeq }\))

The \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k})\) satisfies the (AC\(^{\Lambda }_{\succeq }\)) condition if there exists an integer sequence \(r_{n}\rightarrow \infty\) and \(k_{n}=|\Lambda _n|/|\Lambda _{r_n}|\rightarrow \infty\) such that

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{\Lambda ,|\textbf{X}|}_{l,r_{n}}>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

Let \(d_{n}:=\max _{x,y\in \Lambda _{r_{n}}}\max _{j=1,..,k}|x^{(j)}-y^{(j)}|\); that is, \(d_{n}\) is the maximum distance between the points of \(\Lambda _{r_{n}}\). Observe that

$$\begin{aligned} &\lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{|\textbf{X}|}_{l,d_{n};\succeq }>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big )=0\\& \Rightarrow \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{\Lambda ,|\textbf{X}|}_{l,r_{n}}>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big )=0 \end{aligned}$$

where \(\hat{M}^{|\textbf{X}|}_{a,b;\succeq }=\max _{a\le |\textbf{i}|\le b,\, \textbf{i}\succeq \textbf{0} }|\textbf{X}_{\textbf{i}}|\) for \(a,b\in \mathbb {Z}\) and \(|\textbf{i}|:=\max (|i_{1}|,..., |i_{k}|)\). This sufficient condition is often easier to check in practice and is implied by the anti-clustering condition considered in Wu and Samorodnitsky (2020), which requires a stronger control on the maxima over indices \(a\le |\textbf{i}|\le b\) in all directions.

For the mixing condition, we require extra classical notation, namely

$$\begin{aligned} \tilde{N}^{\Lambda }_{r_{n}}:=\sum _{\textbf{t}\in \Lambda _{r_{n}}}\varepsilon _{{a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}}}\,, \end{aligned}$$

and, for any \(E\subset \mathbb {R}^{d}\), denote by \(\mathbb {C}^{+}_{K}(E)\) the class of continuous non-negative functions \(g\) on \(E\) with compact support. Further, the Laplace functional of a point process \(\xi\) with points \((\textbf{Y}_{\textbf{i}})\) in the space \(E\subset \mathbb {R}^{d}\) is denoted by

$$\begin{aligned} \Psi _{\xi }(g)&:=\mathbb {E}\bigg [\exp \bigg (-\int _{E}gd\xi \bigg ) \bigg ]\\&=\mathbb {E}\bigg [\exp \bigg (-\sum _{\textbf{i}}g(\textbf{Y}_{\textbf{i}}) \bigg ) \bigg ],\quad g\in \mathbb {C}^{+}_{K}(E). \end{aligned}$$

We adopt the notation \(\mathbb {C}^{+}_{K}:=\mathbb {C}^{+}_{K}(\mathbb {R}^{d}\setminus \{\textbf{0}\})\).
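As a sanity check on the definition, the Laplace functional can be computed by Monte Carlo. The Python sketch below (ours, not from the paper) uses a homogeneous Poisson process on \([0,1]\) with rate \(\lambda\), for which \(\Psi _\xi (g)=\exp (-\lambda \int _0^1(1-e^{-g(x)})dx)\); for the constant function \(g\equiv c\) this is \(\exp (-\lambda (1-e^{-c}))\), and only the number of points matters.

```python
# Sketch: Monte Carlo check of the Poisson Laplace functional for g = c.
import math, random

random.seed(1)

def sample_poisson(lam):
    """Knuth's method for a Poisson(lam) count."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam, c, n_sim = 3.0, 1.0, 40000
acc = 0.0
for _ in range(n_sim):
    n_pts = sample_poisson(lam)       # number of points of xi in [0, 1]
    acc += math.exp(-c * n_pts)       # exp(-sum_i g(Y_i)) with g = c
mc = acc / n_sim
exact = math.exp(-lam * (1.0 - math.exp(-c)))
assert abs(mc - exact) < 0.01
```

For a non-constant \(g\) one would additionally sample the point locations uniformly on \([0,1]\); the constant case suffices to illustrate the functional.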

Condition \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\)

Choose the integer sequences \(r_{n}\rightarrow \infty\) and \(k_{n}= |\Lambda _n|/|\Lambda _{r_n}|\rightarrow \infty\) from condition (AC\(^{\Lambda }_{\succeq }\)). The \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k})\) satisfies the condition \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\) if

$$\begin{aligned} \Psi _{N^{\Lambda }_{n}}(g)-(\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g))^{k_{n}}\rightarrow 0,\quad n\rightarrow \infty ,\quad g\in \mathbb {C}^{+}_{K}. \end{aligned}$$

In Lemma 18 we show that the anti-clustering and mixing conditions are satisfied for any m-dependent stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\).

3 Lattice properties

Before giving the main results, we need to investigate further the lattice properties of the index sets \(\mathcal {D}_{j}\) appearing in Condition (\(\mathcal {D}^{\Lambda }\)) on \(\Lambda _n\). We distinguish two settings: lattice properties on the upper orthant and on the whole index set. Condition (\(\mathcal {D}^{\Lambda }\)) implicitly involves the upper orthant, and it would have been possible to focus on the whole index set by adapting Condition (\(\mathcal {D}^{\Lambda }\)) accordingly; this approach would have been entirely equivalent to ours. Note, however, that both settings are crucial for our main results, and one cannot dispense with either of them.

3.1 Lattice properties on the upper orthant

As usual, we define a lattice in \(\mathbb {Z}^k\) as a set \(\{\sum _{i=1}^{k}a_iv_i:a_i\in \mathbb {Z}\}\) for some vectors \(v_1,...,v_k\in \mathbb {Z}^k\). The rank of the matrix \((v_1,\ldots ,v_k)\) is called the rank of the lattice. Recall that \(\mathcal {D}_{1},\mathcal {D}_{2},...\) are the subsets of the upper orthant \(\big (\mathbb {Z}^{k}\big )^+\) that appear in Condition (\(\mathcal {D}^{\Lambda }\)).

Proposition 3

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)).

  (I)

    For every \(\mathcal {D}_{j}\) and \(\mathcal {D}_{i}\) with \(j\ne i\) there exists a p large enough s.t. \(\mathcal {D}_{j}\cap K_{p}\ne \mathcal {D}_{i}\cap K_{p}\). Further, for every \(\mathcal {D}_{j}\) and every \(p\in \mathbb {N}\) we have the identity

    $$\begin{aligned} \lim \limits _{n\rightarrow \infty }\dfrac{|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}|}{|\Lambda _n|}=\lambda _{j,p}= \sum _{i\in I_{p}^{(j)}}\lambda _{i}, \end{aligned}$$

    where \(I^{(j)}_{p}:=\{i\in \{1,...,q\}:\mathcal {D}_{i}\cap K_{p}=\mathcal {D}_{j}\cap K_{p}\}\).

  (II)

    The empty set is a possible \(\mathcal {D}\).

  (III)

    For every \(\mathcal {D}_{j}\), there exist \(b_{j}\) different \(\mathcal {D}\)s, with \(b_{j}\in \mathbb {N}\) and \(b_{j}\le \lfloor 1/\lambda _{j}\rfloor -1\), which we denote by \(\mathcal {D}_{l_{1}},...,\mathcal {D}_{l_{b_{j}}}\), such that, for every \(\textbf{z}\in \mathcal D_j\), \(\mathcal D_{l_i}=((\mathcal D_j)_{-\textbf{z}})^+\) for some \(i=1,...,b_{j}\); moreover, \(\lambda _{l_i}\ge \lambda _{j}\).

Point (III) of Proposition 3 shows that \(\mathcal D_j\) contains shifted versions of potentially different \(\mathcal{D}\)s. In order to exhibit the lattice property of \(\mathcal D_j\), we define \(\mathcal {G}_j\) as the set of shifts that yield the same \(\mathcal D_j\), namely

$$\begin{aligned} \mathcal {G}_j :=\{\textbf{z}\in \mathcal {D}_j\cup \{\textbf{0}\}: ((\mathcal {D}_{j})_{-\textbf{z}})^+ = \mathcal {D}_j \}\quad \text {and}\quad \mathcal {L}_j:=\mathcal {G}_j\cup -\mathcal {G}_j\,,\qquad 1\le j\le q\,. \end{aligned}$$
(3)

For every \(1\le j\le q\), one can partition the set \(\mathcal D_j\) using the lattice sets \(\mathcal {L}_{l_i}\), \(i=1,...,b_{j}\):

Proposition 4

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)). Fix \(1\le j\le q\), then the set \(\mathcal {L}_{j}\) is a lattice on \(\mathbb {Z}^{k}\). For \(i=1,...,b_j\), denoting \(\textbf{z}_{l_i}\) any point in \(\mathcal {D}_j\) such that \(\mathcal {D}_{l_i}=((\mathcal {D}_j)_{- \textbf{z}_{l_i}})^+\) we have the partition

$$\begin{aligned} \mathcal {D}_{j} =(\mathcal {L}_j)^+\cup \bigcup _{i=1}^{b_j}((\mathcal L_{l_i})_{\textbf{z}_{l_i}})^+. \end{aligned}$$
(4)

Further, for every \(\mathcal {D}_{j}\), we have that \(\mathcal {L}_{l_{i}}\supseteq \mathcal {L}_{j}\), and \(\mathcal {L}_{l_{i}}\) and \(\mathcal {L}_{j}\) have the same rank for \(i=1,...,b_{j}\). In particular, \(\mathcal {D}_{j}\) is bounded if and only if \(\mathcal {L}_{j}=\{\textbf{0}\}\) and in this case \(\mathcal {D}_{j} =\bigcup _{i=1}^{b_j}\{\textbf{z}_{l_i}\}\).

Building on partition (4), we want to exhibit some translation invariance properties of \(\mathcal {D}_j\). Fix any \(j=1,\ldots ,q\) and denote \(l_0=j\) for convenience; we say that an index \(i\in \{0,1,...,b_j\}\) satisfies the Translation Invariance Property (TIP\(_j\)) if it has the following property:

Translation Invariance Property (TIP\(_j\))

The index \(i\in \{0,1,...,b_j\}\) satisfies (TIP\(_j\)) if there is a point \(\textbf{x}\in ((\mathcal {L}_{l_i})_{\textbf{z}_{l_i}})^+\) such that \(\textbf{x}\prec \textbf{y}\) for some \(\textbf{y}\in \mathcal {G}_j\).

Further, we let \(W_j\) denote the subset of \(\{0,...,b_j\}\) satisfying (TIP\(_j\)) and let \(\hat{\mathcal {D}}_{j}:=\bigcup _{h\in W_j}\mathcal {D}_{l_{h}}\).

Proposition 5

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)). Fix any \(j=1,\ldots ,q\). If \(i\in \{0,1,...,b_j\}\) satisfies (TIP\(_j\)), then \(\mathcal {L}_{l_{i}}=\mathcal {L}_{j}\) and \(\lambda _{l_{i}}=\lambda _{j}\). In particular, when \(\mathcal L_j\) is a full-rank lattice, (TIP\(_j\)) is satisfied for all \(i=0,...,b_j\), and when \(\mathcal {D}_j\) is bounded, (TIP\(_j\)) is never satisfied.

Further, for every \(i\in W_j\) we have \(\hat{\mathcal {D}}_{l_{i}}=\hat{\mathcal {D}}_{j}\) and \(\hat{\mathcal {D}}_j\cup \{\textbf{0}\}\cup -\hat{\mathcal {D}}_j\) is translation invariant for every point in \(\mathcal {L}_{j}\).

Remark 1

The case of (TIP\(_j\)) not holding for some \(l=l_1,...,l_{b_j}\) is equivalent to \(\mathcal {G}_j\) lying on the hyperplane determined by the order \(\succ\). For example, this is the case when we are in \(\mathbb {R}^2\), the order goes along the horizontal lines (informally \((0,0)\prec (1,0)\prec (2,0)\prec ...\prec (\infty ,0)\prec (-\infty ,1)\prec ...\prec (0,1)\prec ...\)), and \(\Lambda _n\) draws two lines parallel to the horizontal axis; see Fig. 1 for an illustration. In this case one \(\hat{\mathcal {D}}\) (say \(\hat{\mathcal {D}}_1\)) is simply given by the x-axis, while the other (\(\hat{\mathcal {D}}_2\)) is the set provided in Fig. 2. These sets are translation invariant with respect to the points in the respective \(\mathcal {L}_i\), and in this example \(\mathcal {L}_1\) and \(\mathcal {L}_2\) are both equal to the x-axis.

Fig. 1
figure 1

We consider an order that increases along the horizontal axis and then upward. The red chopped half-line starting at (1, 0) is \(\mathcal{D}_1\); \(\mathcal{D}_1\) plus the red line above is \(\mathcal{D}_2\), also in red. Both \(\mathcal{D}\)s have the same \(\mathcal{G}=\mathcal{D}_1\cup \{(0,0)\}\), and thus the same \(\mathcal{L}\), which coincides with the x-axis. We check that \(\mathcal{D}_2\) is partitioned into \((\mathcal{L})^+\) and \(\mathcal{L}_{\textbf{z}}\), where \(\textbf{z}\) is any point with 1 as the second coordinate. However, none of the points of \(\mathcal{L}_{\textbf{z}}\) precedes any point in \(\mathcal{G}\), and the TIP property fails. Notice that \(\mathcal{E}_1\) is any couple of points \(\{(0,0),{\textbf{z}}\}\), in blue

Example 3

In Remark 1 we implicitly consider the case of \(\Lambda _n\) drawing two parallel lines at the same speed, in which case \(\lambda _{1}=\lambda _2=1/2\); an example of such a \(\Lambda _n\) is \(\Lambda _n=\bigcup _{i=1}^{\lfloor n/2\rfloor }\{0,i\}\cup \{1,i\}\). When \(\Lambda _n\) draws the two lines at different speeds, the weights \(\lambda _{1}\) and \(\lambda _2\) take different values. For example, if \(\Lambda _n\) draws the line above faster than the line below, e.g. \(\Lambda _n=\bigcup _{i=1}^{\lfloor rn\rfloor }\{0,i\}\cup \bigcup _{j=1}^{\lfloor (1-r)n\rfloor }\{1,j\}\) for some \(r\in (0,1/2)\), then \(\lambda _{1}=1-r\) and \(\lambda _{2}=r\). Observe that if \(\Lambda _n\) draws the line below faster, e.g. \(\Lambda _n=\bigcup _{i=1}^{\lfloor rn\rfloor }\{0,i\}\cup \bigcup _{j=1}^{\lfloor (1-r)n\rfloor }\{1,j\}\) for some \(r\in (1/2,1)\), then we would still have \(\lambda _{1}>\lambda _{2}\), in particular \(\lambda _{1}=r\) and \(\lambda _{2}=1-r\). This is because the line below is faster, so for many points in this line (asymptotically \((2r-1)n\) of them) there is no line above them at a finite distance, and hence the asymptotic structure for these points is \(\mathcal {D}_1\) and not \(\mathcal {D}_2\).
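The weights in Example 3 can be recovered empirically. The following Python sketch (ours, not from the paper) builds the two parallel lines with \(r=0.3\), runs the order along a line and then to the line above, and counts local shapes: two shapes dominate, with weights close to \(1-r\) and \(r\), and the smaller-weight shape (corresponding to \(\mathcal {D}_2\)) contains the other one.

```python
# Sketch of Example 3: two horizontal lines drawn at speeds r and 1 - r.
from collections import Counter

def succ(u):
    """u succeeds 0: order runs along a line, then to the line above."""
    return (u[1], u[0]) > (0, 0)

def local_shape(Lam, t, p):
    """Lambda^{(t,p)} = ((Lambda - t) ∩ K_p)^+ for this order."""
    shifted = {(s[0] - t[0], s[1] - t[1]) for s in Lam}
    window = {u for u in shifted if max(abs(u[0]), abs(u[1])) <= p}
    return frozenset(u for u in window if succ(u))

n, p, r = 200, 2, 0.3
bottom = {(i, 0) for i in range(1, round(r * n) + 1)}        # 60 points
top    = {(j, 1) for j in range(1, round((1 - r) * n) + 1)}  # 140 points
Lam = bottom | top
shapes = Counter(local_shape(Lam, t, p) for t in Lam)
(s1, c1), (s2, c2) = shapes.most_common(2)
w1, w2 = c1 / len(Lam), c2 / len(Lam)
assert abs(w1 - 0.7) < 0.05 and abs(w2 - 0.3) < 0.05   # lambda_1, lambda_2
assert s1 < s2        # D_1 ∩ K_p is strictly contained in D_2 ∩ K_p
```

Here the top line is the faster one, so its points see only their own line ahead (shape \(\mathcal {D}_1\cap K_p\)), while the bottom points additionally see the line above (shape \(\mathcal {D}_2\cap K_p\)), matching the weights \(1-r\) and \(r\) of the text.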

Fig. 2

Representation of \({\hat{\mathcal{D}}}_2\cup \{\textbf{0}\}\cup -{\hat{\mathcal{D}}}_2\) with \({\hat{\mathcal{D}}}_2=\mathcal{D}_2\) because \(W_2=\{0\}\) in Remark 1 (recall \(l_0=2\) by convention). It is possible to see that \({\hat{\mathcal{D}}}_2\cup \{\textbf{0}\}\cup -{\hat{\mathcal{D}}}_2\) is translation invariant along \(\mathcal {L}_2\), which in this case is given by the x-axis

3.2 Lattice properties on the whole index set

In this subsection we consider subsets \(\Xi _j\) of the whole index set \(\mathbb {Z}^k\) that are the analogues on \(\mathbb {Z}^k\) of the subsets \(\mathcal {D}_{j}\) of the upper orthant. Since Condition (\(\mathcal {D}^{\Lambda }\)) defines only the \(\mathcal {D}_{j}\), the existence of the \(\Xi _j\), shown in the next result, has to be deduced from it.

Proposition 6

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)). For any \(p\in \mathbb {N}\) and any subset \(\Xi\) of \(\mathbb {Z}^k\) with \(\textbf{0}\in \Xi\), the limits

$$\begin{aligned}& \lim \limits _{n\rightarrow \infty }\dfrac{|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|}{|\Lambda _n|},\quad \text {and}\\& \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\dfrac{|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|}{|\Lambda _n|} \end{aligned}$$

exist. Moreover, any \(\Xi\) for which

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\dfrac{|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|}{|\Lambda _n|}>0 \end{aligned}$$
(5)

satisfies \(\Xi ^+=\mathcal {D}_j\) for some \(j=1,...,q\).

Denote by \(\Xi _1,...,\Xi _{q'}\), with \(q'\in \mathbb {N}\cup \{\infty \}\), the sets satisfying (5). For each \(m=1,...,q'\) and \(p\in \mathbb {N}\) define

$$\begin{aligned} \gamma _m&:=\lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi _m\cap K_{p}\}|/|\Lambda _n|,\\ \gamma _{p,m}&:=\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi _m\cap K_{p}\}|/|\Lambda _n|,\\ F^{(m)}_p&:=\{j\in \{1,...,q'\}:\Xi _j\cap K_p=\Xi _m\cap K_p\}. \end{aligned}$$
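The quantities \(\gamma _{p,m}\) can be tabulated empirically. The sketch below is our own toy illustration: for the index set given by the pattern \(\{(0,0),(0,1)\}\) repeated every third column, it enumerates the distinct local configurations \((\Lambda _{n})_{-\textbf{t}}\cap K_{p}\) together with their frequencies; up to boundary effects, two configurations of frequency \(1/2\) emerge, and the frequencies sum to one.

```python
from collections import Counter

def window_frequencies(Lam, p):
    """Frequency of each local configuration (Lam)_{-t} ∩ K_p, for t in Lam."""
    S = set(Lam)
    counts = Counter()
    for (tx, ty) in Lam:
        cfg = frozenset((dx, dy)
                        for dx in range(-p, p + 1)
                        for dy in range(-p, p + 1)
                        if (tx + dx, ty + dy) in S)
        counts[cfg] += 1
    return {cfg: c / len(Lam) for cfg, c in counts.items()}

# the pattern {(0, 0), (0, 1)} repeated every third column (our toy choice)
Lam = [(3 * i, y) for i in range(5_000) for y in (0, 1)]
freqs = window_frequencies(Lam, p=4)
```

The two dominant frequencies are close to \(1/2\); the remaining configurations come from the boundary of \(\Lambda_n\) and have frequency of order \(1/|\Lambda_n|\).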

Let \(l_0:=j\) and \(\textbf{z}_{j}:=\textbf{0}\). From Proposition 3 recall that \(\mathcal {D}_{j} =(\mathcal {L}_j)^+\cup \bigcup _{i=1}^{b_j}((\mathcal L_{l_i})_{\textbf{z}_{l_i}})^+\), which we can rewrite as \(\mathcal {D}_{j} =\bigcup _{i=0}^{b_j}((\mathcal L_{l_i})_{\textbf{z}_{l_i}})^+\).

Proposition 7

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)). Every \(\Xi _m\), \(m=1,...,q'\), is a translation of \(\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\) for some \(j=1,\ldots , q\). Moreover, \(\sum _{m=1}^{q'}\gamma _m=1\), and \(\gamma _{p,m}=\sum _{m'\in F^{(m)}_p}\gamma _{m'}\), for every \(m=1,...,q'\).

It is important to notice that the translations of \({\mathcal L}\) in \(\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\) can coincide. A careful analysis is needed in order to describe the distinct translations. Let \(\textbf{x}_1^{(j)},...,\textbf{x}_{n_j}^{(j)}\), for some \(n_j\in \mathbb {N},\) be the points in \(\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\) such that \(\textbf{x}_k^{(j)}\succeq \textbf{0}\), \(k=1,...,n_j\), that \((\mathcal L_{j})_{\textbf{x}_{v}^{(j)} }\ne (\mathcal L_{j})_{\textbf{x}_{r}^{(j)} }\) for \(r,v=1,...,n_j\) with \(r\ne v\), and that \(\bigcup _{h=1}^{n_j}(\mathcal L_{j})_{\textbf{x}_{h}^{(j)} }=\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\). Finally, let \(\mathcal {E}_j=\{\textbf{x}_1^{(j)},...,\textbf{x}_{n_j}^{(j)}\}\) where, for the sake of clarity, we include by convention \(\{\textbf{0}\}\) in \(\mathcal {E}_j\) so that \(\textbf{0}\) is always the lowest (according to \(\succ\)) point in \(\mathcal {E}_j\). Notice that a certain arbitrary choice is still possible when choosing \(\mathcal {E}_j\); see Fig. 1 for an example.

Any \(\Xi _m\) contains \(\{\textbf{0}\}\) by definition. Thus, the different \(\Xi _m\)'s correspond to the different translated versions of \(\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\) containing \(\{\textbf{0}\}\). Having in mind the identity \(\bigcup _{\textbf{s} \in \mathcal {E}_j}(\mathcal L_{j})_{\textbf{s}}=\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\), the number of different translations is \(n_j\) and the shifts are the elements of \(\mathcal {E}_j\) (thus \(n_j=|\mathcal {E}_j|\)). Denote by \(I^*\) the set of indices \(j=1,\ldots ,q\) satisfying

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\Big |\Big \{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\bigcup _{\textbf{s} \in \mathcal {E}_j}(\mathcal L_{j})_{\textbf{s}}\cap K_{p}\Big \}\Big |/|\Lambda _n|>0. \end{aligned}$$
(6)

For every \(j\in I^*\), let \(\Xi ^{*}_j:=\bigcup _{\textbf{s} \in \mathcal {E}_j}(\mathcal L_{j})_{\textbf{s}}\) and let \(\gamma ^{*}_j\) be the positive limit in (5) associated with \(\Xi ^{*}_j\). For every \(j\in I^*\) we have \((\Xi ^{*}_j)^+=\mathcal D_j\), there exists an \(m=1,\ldots ,q'\) such that \(\Xi ^{*}_j=\Xi _m\), and every \(\Xi _m\), \(m=1,\ldots ,q'\), is a translated version of some \(\Xi ^{*}_j\), \(j\in I^*\). This essentially means that the \(\Xi ^{*}_j\), \(j\in I^*\), are the only relevant structures in the asymptotics of \((\Lambda _n)\), as the other ones are translated versions of them:

Proposition 8

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)). We have the identity \(\sum _{j\in I^*}\gamma ^*_jn_j=\sum _{j\in I^*}\gamma ^*_j|\mathcal {E}_j|=1\).

By definition we have

$$\begin{aligned} \Xi ^{*}_j=\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}=\bigcup _{\textbf{s}\in \mathcal {E}_j}(\mathcal {L}_j)_{\textbf{s}}=\bigcup _{\textbf{s}\in \mathcal {L}_j}(\mathcal {E}_j)_{\textbf{s}} \end{aligned}$$

for any \(j\in I^*\).
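The interchange of translations in the last display is elementary but worth seeing concretely. In the toy check below (our own choice of a one-dimensional lattice \(L\) in \(\mathbb Z^2\) and a two-point pattern \(E\)), translating \(L\) by the points of \(E\) and translating \(E\) by the points of \(L\) produce the same set.

```python
def translate(S, s):
    """Shift every point of S by the vector s."""
    return {(x + s[0], y + s[1]) for (x, y) in S}

L = {(3 * i, 0) for i in range(-50, 51)}   # a one-dimensional lattice in Z^2
E = {(0, 0), (0, 1)}                       # a finite pattern containing 0

union_L_by_E = set().union(*(translate(L, s) for s in E))  # ∪_{s∈E} (L)_s
union_E_by_L = set().union(*(translate(E, s) for s in L))  # ∪_{s∈L} (E)_s
```

The equality holds because both unions consist of all sums of a point of \(L\) and a point of \(E\), and addition commutes.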

In the following statement, we link the asymptotic behaviour of \((\Lambda _n)\) with specific non-asymptotic properties of some of its subsets. In particular, we extract from \((\Lambda _n)\) specific disjoint subsets, which have helpful non-asymptotic properties (for the proof of Theorem 17) and show that these subsets asymptotically describe the whole \((\Lambda _n)\) satisfying Condition \((\mathcal D^\Lambda )\).

We introduce the following notation. Let \(l\in \mathbb {N}\). Consider the maximum of the \(m\in \mathbb {N}\cup \{0\}\) such that \(\mathcal {D}_i\cap K_l\ne \mathcal {D}_j\cap K_l\) for every \(i,j\in I^*\) with \(i,j<m\) and \(i\ne j\). Denote this maximum by \(\tilde{m}_{l,1}\). Consider the maximum of the \(m\in \mathbb {N}\cup \{0\}\) such that \(\mathcal {D}_w\cap K_{\lfloor l/2\rfloor }\ne \mathcal {D}_{l_h}\cap K_{\lfloor l/2\rfloor }\) for every \(w,v\in I^*\) with \(w,v<m\), where \(\mathcal {D}_w\) is bounded, \(\mathcal {D}_v\) is unbounded, and \(l_h\) is any of the indices \(l_{1},...,l_{b_v}\). Denote this maximum by \(\tilde{m}_{l,2}\). Consider the maximum of the \(m\in \mathbb {N}\cup \{0\}\) such that \(\mathcal {E}_s\subset K_{\lfloor l/4\rfloor }\) for every \(s\in I^*\) with \(s<m\). Denote this maximum by \(\tilde{m}_{l,3}\). Then we define \(m_l:=\min (\tilde{m}_{l,1},\tilde{m}_{l,2},\tilde{m}_{l,3})\). Notice that such an \(m_l\) exists because \(l\) is finite and \(b_j\) is finite for every unbounded \(\mathcal {D}_j\).

Further, let \(S_{i,l}:=\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{l}=\Xi ^{*}_i\cap K_{l}\}\) for every \(i\in I^*\) and \(l,n\in \mathbb {N}\). Notice that \(S_{i,l}\) depends on n, but we omit the dependency to lighten the notation.

Proposition 9

Let \(\Lambda _{n}\) satisfy \(|\Lambda _n|\rightarrow \infty\) as \(n\rightarrow \infty\) together with Condition (\(\mathcal {D}^{\Lambda }\)). Then for every \(l,n\in \mathbb {N}\), \(j,i\in I^*\) with \(j,i<m_{4l}\) and \(i\ne j\), \(\textbf{t}\in S_{j,4l}\) and \(\textbf{s}\in S_{i,4l}\), we have that \((\mathcal {D}_j\cap K_{2l})_{\textbf{t}}\cap (\mathcal {D}_i\cap K_{2l})_{\textbf{s}}=\emptyset\). Further, there exists a set \(S'_{j,4l}\), with \(S'_{j,4l}\subset S_{j,4l}\), such that for every \(\textbf{t}\in S'_{j,4l}\)

$$\begin{aligned} (\mathcal {D}_{j}\cap K_{2l })\setminus \bigcup _{\textbf{s}\in (S_{j,4l})_{-\textbf{t}},\textbf{s}\prec \textbf{0}}(\mathcal {E}_{j})_{\textbf{s}}=(\mathcal {D}_{j}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in -\mathcal {G}_j\setminus \{\textbf{0}\}}(\mathcal {E}_{j})_{\textbf{s}} \end{aligned}$$
(7)

and that \(\lim \limits _{n\rightarrow \infty }|S'_{j,4l}|/|\Lambda _{n}|=\gamma ^*_j\) for every \(j\in I^*\) with \(j<m_{4l}\). Finally, \(\lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{\sum _{i\in I^*,i<m_{4l}}|S'_{i,4l}||\mathcal {E}_i|}{|\Lambda _{n}|}=1\).

We remark that even though Condition \((\mathcal D^\Lambda )\) is asymptotic, the sets \(S'_{j,4l}\) and \(S_{j,4l}\) have both asymptotic and non-asymptotic properties.

Example 4

(Continuing Example 1) Recall that in this example the observations formed a pattern \(\mathcal C\) that repeated itself with a certain frequency in order to constitute \(\mathcal{C}_\infty\). Using the notation of this section, we see that this frequency is represented by \(\mathcal {G}_0\). The number of \(\mathcal {D}\)'s is \(b_0+1=|\mathcal D|\). The Translation Invariance Property (TIP\(_j\)) is satisfied for every \(i=0,...,b_j\) and \(j=1,...,|\mathcal D|\). The infinite union of translated patterns is what we denote by \(\Xi\) in this section. Notice that any different centering of \(\mathcal{C}_\infty\) at \(\textbf{0}\) corresponds to a different \(\Xi _m\); in particular, there are \(|\mathcal {C}|\) different \(\Xi _m\)'s. However, there is only one \(\Xi ^*\) (which we denote \(\Xi ^*_1\)), say \(\Xi _h=\Xi ^*_1\) for some \(h\in \{1,...,|\mathcal {C}|\}\). In this example, it is possible to see that the choice of \(\Xi ^*_1\) is arbitrary. So we have \(I^*=\{1\}\), \(\mathcal{C}=\mathcal {E}_{h}\) and \(\gamma ^*_1=1/|\mathcal{C}|\). A non-trivial result, even in this simple example, is the last statement in Proposition 5: if we take the union of all the \(\mathcal {D}\)'s, \(\{\textbf{0}\}\), and their negative counterparts, then this union is translation invariant along \(\mathcal {L}_0\), namely along the frequencies with which the observations repeat their pattern. Moreover, Proposition 9 states that what matters is that the observations \((\Lambda _n)\) repeat the pattern \(\mathcal C\) for a sufficiently long time; a finite number of irregular observations (for instance, when the station works intermittently, as is typical of non-automatic weather stations) does not matter.
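The translation-invariance claim discussed in this example can be checked directly. The sketch below uses our own truncated toy version of \(\mathcal C_\infty\), with pattern \(\mathcal C=\{(0,0),(0,1)\}\) and shifts \(\mathcal L_0 = 3\mathbb Z\times \{0\}\): inside a window away from the truncation boundary, the union of translated patterns agrees with its shift by a point of \(\mathcal L_0\).

```python
L0 = [(3 * i, 0) for i in range(-100, 101)]   # truncated lattice of shifts
C = [(0, 0), (0, 1)]                          # the repeated pattern
Xi = {(x + s, y) for (s, _) in L0 for (x, y) in C}

def window(S, p):
    """Restrict a set of points to the box K_p = [-p, p]^2."""
    return {t for t in S if max(abs(t[0]), abs(t[1])) <= p}

shifted = {(x - 3, y) for (x, y) in Xi}       # shift by t = (3, 0) in L_0
```

Shifting by any other point of \(\mathcal L_0\) behaves the same way; shifting by a point outside \(\mathcal L_0\) breaks the equality of the windows.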

Finally, we refer to the lattice case when every \(\Xi\) is a lattice, meaning that \(\mathcal {E}_j=\{\textbf{0}\}\) and \(\Xi _j^*=\mathcal {L}_j\) for every \(j=1,\ldots , q\).

4 Main results expressed using the spectral tail field

4.1 Laplace functional of the limiting point process

The first result states the convergence of the Laplace functionals to some Laplace functional, without an explicit description of the point process. The proof of the result is based on a telescoping sum argument, developed initially in the time series setting by Jakubowski and co-authors (1993) and Bartkiewicz et al. (2011), together with lattice property (I) from Proposition 3.

Theorem 10

Let \(k,d\in \mathbb {N}\). Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k})\) with index \(\alpha > 0\). We assume conditions (\(\mathcal {D}^{\Lambda }\)), (AC\(^{\Lambda }_{\succeq }\)) and \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\). Then \(N_{n}^{\Lambda }{\mathop {\rightarrow }\limits ^{d}}N^{\Lambda }\) on the state space \(\mathbb {R}^{d}\setminus \{\textbf{0}\}\) and the limit random measure has Laplace functional for \(g\in \mathbb {C}^{+}_{K}\), given by

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\int _{0}^{\infty }\sum _{i=1}^{\infty }\lambda _{i}\mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \mathcal {D}_{i}}g(y\mathbf {\Theta }_{\textbf{t}})}\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \bigg ]d(-y^{-\alpha })\bigg ). \end{aligned}$$
(8)

Remark 2

Notice that, by Tonelli's theorem and the monotone convergence theorem, the Laplace transform is that of a mixture distribution:

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\prod _{i=1}^{\infty }\exp \bigg (-\int _{0}^{\infty }\mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \mathcal {D}_{i}}g(y\mathbf {\Theta }_{\textbf{t}})}\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \bigg ]d(-y^{-\alpha })\bigg )^{\lambda _{i}}. \end{aligned}$$

Remark 3

In the asymptotically independent case, we have \(|\mathbf {\Theta }_{\textbf{t}}|=0\) for every \(\textbf{t}\ne \textbf{0}\), and so the limit random measure has Laplace functional

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\int _{0}^{\infty }\mathbb {E}\Big [1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ]d(-y^{-\alpha })\bigg ). \end{aligned}$$

Thus, it coincides with the case of a single \(\mathcal {D}\), namely \(\mathcal {D}=\emptyset\).

4.2 The spectral cluster random field in the lattice case

Define for any set \(A\subset \mathbb {Z}^{k}\), any sequence \(\textbf{x}=(\textbf{x}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) and any \(\alpha >0\),

$$\begin{aligned} \Vert \textbf{x}\Vert _{A,\alpha }:=\bigg (\sum _{\textbf{t}\in A}|\textbf{x}_{\textbf{t}}|^{\alpha }\bigg )^{1/\alpha }. \end{aligned}$$

For the spectral tail random field \((\mathbf {\Theta }_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) of a regularly varying stationary random field we use

$$\begin{aligned} \Vert \mathbf {\Theta }\Vert _{ A,\alpha }=\bigg (\sum _{\textbf{t}\in A}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }\bigg )^{1/\alpha } \end{aligned}$$

as the normalisation constant. When \(\Vert \mathbf {\Theta }\Vert _{ A,\alpha }<\infty\) a.s., we define the spectral cluster random field by

$$\begin{aligned} \textbf{Q}_{ A}:=\frac{\mathbf {\Theta }}{\Vert \mathbf {\Theta }\Vert _{ A,\alpha }}. \end{aligned}$$

\(\textbf{Q}_A\) is well-defined when the denominator is nonzero; hence, when \(\textbf{0}\in A\), \(\textbf{Q}_A\) is well-defined almost surely, since \(|\mathbf {\Theta }_{\textbf{0}}|=1\) a.s. Using the lattice properties investigated in Proposition 3, we show the existence of the spectral cluster random field over some lattice index sets.
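A toy numerical illustration of the normalisation (the geometric choice of \(|\mathbf\Theta_{\textbf t}|\) below, with \(|\mathbf\Theta_{\textbf 0}|=1\), is our own): dividing by \(\Vert \mathbf\Theta\Vert_{A,\alpha}\) always yields a field with \(\alpha\)-norm one over \(A\).

```python
alpha = 1.5
A = range(-20, 21)                          # a finite window of Z (k = d = 1)
theta = {t: 0.5 ** abs(t) for t in A}       # |Theta_t|, with |Theta_0| = 1

# the normalisation constant ||Theta||_{A, alpha}
norm_A = sum(v ** alpha for v in theta.values()) ** (1 / alpha)
Q = {t: theta[t] / norm_A for t in A}       # the spectral cluster field
norm_Q = sum(abs(q) ** alpha for q in Q.values()) ** (1 / alpha)
```

Since the term \(t=\textbf 0\) alone contributes \(1\), the denominator is at least \(1\), so the division is always legitimate here.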

Proposition 11

Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha >0\). Assume conditions (\(\mathcal {D}^{\Lambda }\)) and (AC\(^{\Lambda }_{\succeq }\)). Then \(\mathbf {\Theta }_{\textbf{t}}\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) with \(\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}\), and so we have \(\Vert \mathbf {\Theta }\Vert _{\hat{\mathcal {D}}_{j}\cup -\hat{\mathcal {D}}_{j},\alpha }<\infty\) a.s. for every \(j\in \mathbb {N}\).

4.3 Cluster point process expressed using the spectral cluster field in the lattice case

Now, we present an explicit formulation of the asymptotic Laplace functional as a mixture of cluster random fields when the \(\mathcal {D}\)s are lattices (on the positive points).

Theorem 12

Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha >0\). We assume conditions (\(\mathcal {D}^{\Lambda }\)), (AC\(^{\Lambda }_{\succeq }\)) and \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\). Assume also that we are in the lattice case. Then \(N^{\Lambda }_{n}{\mathop {\rightarrow }\limits ^{d}}N^{\Lambda }\) on \(\mathbb {R}_{\textbf{0}}^{d}\) and the limit admits the cluster point process representation

$$\begin{aligned} N^{\Lambda }=\sum _{j=1}^{\infty }\sum _{i\in \mathbb {N}} \sum _{\textbf{t}\in \Xi _j^*}\varepsilon _{\Gamma _{j,i}^{-1/\alpha }\lambda _{j}^{1/\alpha } \textbf{Q}_{\Xi _j^*,i,\textbf{t}}} \end{aligned}$$

where \(\Big (\sum _{\textbf{t}\in \Xi _j^*}\varepsilon _{ \textbf{Q}_{\Xi _j^*,i,\textbf{t}}}\Big )_{i\in \mathbb {N}}\), is an iid sequence of point processes with state space \(\mathbb {R}^{d}\), and where \((\Gamma _{j,i})_{i\in \mathbb {N}}\) are the points of a unit rate homogeneous Poisson process on \((0, \infty )\) independent of \((\textbf{Q}_{\Xi _j^*,i,\textbf{t}})_{\textbf{t}\in \mathcal {D}_{j}}\), for every \(j\ge 1\). Moreover, \(\Big (\sum _{i\in \mathbb {N}} \sum _{\textbf{t}\in \Xi _j^*}\varepsilon _{\Gamma _{j,i}^{-1/\alpha }\lambda _{j}^{1/\alpha } \textbf{Q}_{\Xi _j^*,i,\textbf{t}}}\Big )_{j\ge 1}\) is a sequence of independent point processes with state space \(\mathbb {R}^{d}\).

We extend the characterization of the clusters first provided in Basrak and Planinic (2021) for the whole index set (\(\Xi _1^*=\mathbb {Z}^k\), \(q=1\)) to potential mixtures of lattices with \(q>1\), for instance when the observations accumulate along the coordinate axes. In this case we have \(q=k\), \(\Xi ^*_j=\{{\textbf{0}}\}^{j-1}\times \mathbb {Z}\times \{{\textbf{0}}\}^{k-j}\), and \(\gamma _j^*=\lambda _j\), for every \(j=1,...,k\). The value of the weights depends on how fast the observations grow along one axis relative to the others; e.g. if axis \(j\) carries (as \(n\rightarrow \infty\)) twice as many observations as axis \(i\), then \(\gamma _j^*=2\gamma _i^*\).

We remark that the proof of Theorem 12 relies on a telescoping sum argument which breaks down in the general case. In the next section, using the \(\Upsilon\)-spectral tail field, we are able to extend this result to the general case.

5 Point random field convergence using \(\Upsilon -\)spectral tail field

For general index sets \(\Lambda _n\) satisfying Condition (\(\mathcal {D}^{\Lambda }\)) that are not necessarily lattices, we introduce new spectral tail fields.

5.1 The \(\Upsilon -\)spectral tail field

Following Segers et al. (2017), given a separable Banach space \((S,\Vert \cdot \Vert _{S})\) we say that a function \(f:S\rightarrow [0,\infty )\) is a modulus if it is continuous, homogeneous, and such that for every \(\varepsilon >0\) it satisfies \(\inf \{f(x):\Vert x\Vert _S>\varepsilon \}>0\). We remark that, for the sake of clarity, in this paper we focus on separable Banach spaces, while in Segers et al. (2017) a modulus is defined on complete separable metric spaces. Note also that in Euclidean spaces the choice of the norm is arbitrary and thus will be omitted.

Let \(\rho\) be a modulus on \((\mathbb {R}^d)^{\mathbb {Z}^{k}}\) and, for any finite \(\Upsilon \subset \mathbb {Z}^{k}\), let \(\rho _{\Upsilon }\) be the truncation of \(\rho\) to \(\mathbb {R}^{d\Upsilon }\). In the following, we extend some of the results of Basrak and Segers (2009) to random fields. In the time series case, the following result is contained in Theorem 5.1 of Segers et al. (2017).

Proposition 13

Let \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) be a regularly varying random field in \(\mathbb {R}^{d}\) with index \(\alpha \in (0,\infty )\). Let \(\Upsilon\) be a finite subset of \(\mathbb {Z}^{k}\). Then there exists a random field \((\textbf{Y}_{\Upsilon ,\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) in \(\mathbb {R}^{d}\) with \(\mathbb {P}(\rho _{\Upsilon }(\textbf{Y}_{\Upsilon }) > y) = y^{-\alpha }\) for \(y \ge 1\) such that, as \(x\rightarrow \infty\),

$$\begin{aligned} \mathcal {L}(x^{-1} \textbf{X}_{\textbf{s}} , . . . , x^{-1} \textbf{X}_{\textbf{t}} | \rho _{\Upsilon }(\textbf{X}) > x){\mathop {\rightarrow }\limits ^{f.d.d.}} \mathcal {L}(\textbf{Y}_{\Upsilon ,\textbf{s}} , . . . , \textbf{Y}_{\Upsilon ,\textbf{t}}). \end{aligned}$$

Moreover, there exists a random field \((\mathbf {\Theta }_{\Upsilon ,\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) in \(\mathbb {R}^{d}\) such that as \(x\rightarrow \infty\)

$$\begin{aligned} \mathcal {L}\bigg ( \frac{x^{-1}\textbf{X}_{\textbf{s}}}{\rho _{\Upsilon }(\textbf{X})} , . . . , \frac{x^{-1}\textbf{X}_{\textbf{t}}}{\rho _{\Upsilon }(\textbf{X})}\, \Big |\, \rho _{\Upsilon }(\textbf{X}) > x\bigg ){\mathop {\rightarrow }\limits ^{f.d.d.}} \mathcal {L}(\mathbf {\Theta }_{\Upsilon ,\textbf{s}} , . . . , \mathbf {\Theta }_{\Upsilon ,\textbf{t}}). \end{aligned}$$

It is possible to see that \(\mathbf {\Theta }_{\Upsilon }\) is given in distribution by \(\textbf{Y}_{\Upsilon }/\rho _{\Upsilon }(\textbf{Y})\). For stationary regularly varying random fields it is possible to extend the time change formula of Theorem 3.2 in Wu and Samorodnitsky (2020) to the \(\Upsilon\)-spectral tail field:

Proposition 14

Let \((\textbf{Y}_{\Upsilon ,\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) be the tail random field in Proposition 13 and consider \(\mathbf {\Theta }_{\Upsilon ,\textbf{t}}=\textbf{Y}_{\Upsilon ,\textbf{t}}/\rho _{\Upsilon }(\textbf{Y})\), \(\textbf{t}\in \mathbb {Z}^{k}\). Let \(g:(\mathbb {R}^{d})^{\mathbb {Z}^{k}}\rightarrow \mathbb {R}\) be a bounded measurable function. Then,

  1. (i)

    \(\rho _{\Upsilon }(\textbf{Y})\) is independent of \((\mathbf {\Theta }_{\Upsilon ,\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\).

  2. (ii)

    for any \(\textbf{s}\in \mathbb {Z}^{k}\),

    $$\begin{aligned} &\mathbb {E}[g((\textbf{Y}_{\Upsilon ,\textbf{t} - \textbf{s}})_{\textbf{t}\in \mathbb {Z}^k})\textbf{1}(\rho _{(\Upsilon )_{-\textbf{s}}}(\textbf{Y})\ne \textbf{0})]\\&=\int _{0}^{\infty }\mathbb {E}[g((r\mathbf {\Theta }_{\Upsilon ,\textbf{t}})_{\textbf{t}\in \mathbb {Z}^k})\textbf{1}(r\rho _{(\Upsilon )_{\textbf{s}}}(\mathbf {\Theta })>1)]d(-r^{-\alpha }), \end{aligned}$$
    (9)
  3. (iii)

    for any \(\textbf{s}\in \mathbb {Z}^{k}\),

    $$\begin{aligned} \mathbb {E}[g((\mathbf {\Theta }_{\Upsilon ,\textbf{t}-\textbf{s}})_{\textbf{t}\in \mathbb {Z}^k})\textbf{1}(\rho _{(\Upsilon )_{-\textbf{s}}}(\mathbf {\Theta })\ne \textbf{0})]=\mathbb {E}\bigg [g\bigg (\frac{(\mathbf {\Theta }_{\Upsilon ,\textbf{t}})_{\textbf{t}\in \mathbb {Z}^k}}{\rho _{(\Upsilon )_{\textbf{s}}}(\mathbf {\Theta })}\bigg )\rho _{(\Upsilon )_{\textbf{s}}}(\mathbf {\Theta })^{\alpha } \bigg ]. \end{aligned}$$
    (10)

Remark 4

It is possible to see that by definition \(\rho _{\Upsilon }(\mathbf {\Theta _\Upsilon })= 1\) a.s.

5.2 Asymptotic Laplace functional expressed using the \(\Upsilon -\)spectral tail field

We start with a simple result on the relation between the uniform norm and other moduli.

Lemma 15

Let \(\Upsilon\) be a finite subset of \(\mathbb {Z}^{k}\). There exist two positive constants \(C\) and \(D\) with \(C\le D\) such that, for every \(\epsilon >0\), \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|<\epsilon\) implies \(\rho _{\Upsilon }(\textbf{x})<\frac{\epsilon }{C}\), and \(\rho _{\Upsilon }(\textbf{x})<\epsilon\) implies \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|<D\epsilon\).

Corollary 16

Consider the notation of Lemma 15. Then \(\rho _{\Upsilon }(\textbf{x})=1\) implies that \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|\le D\).

Proof

From Lemma 15 we have that \(\rho _{\Upsilon }(\textbf{x})<1+\delta\) implies \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|< (1+\delta )D\) for every \(\delta >0\); letting \(\delta \downarrow 0\) gives the claim. \(\square\)
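For the concrete modulus \(\rho_\alpha(\textbf x)=\Vert \textbf x\Vert_\alpha\) on a finite \(\Upsilon\), the constants can be made explicit, namely \(C=|\Upsilon|^{-1/\alpha}\) and \(D=1\) (our own computation, not stated in the text). The sketch below exercises the two implications of Lemma 15 in their equivalent forms \(\rho \le \max/C\) and \(\max \le D\rho\) on random inputs.

```python
import random

alpha, m = 2.0, 5                      # modulus rho_alpha on |Upsilon| = m points
C, D = m ** (-1 / alpha), 1.0          # claimed constants, with C <= D

rng = random.Random(0)
ok = True
for _ in range(1_000):
    x = [rng.uniform(-1, 1) for _ in range(m)]
    rho = sum(abs(v) ** alpha for v in x) ** (1 / alpha)
    mx = max(abs(v) for v in x)
    # rho <= mx / C  is equivalent to:  max < eps  implies  rho < eps / C
    # mx <= D * rho  is equivalent to:  rho < eps  implies  max < D * eps
    ok = ok and rho <= mx / C + 1e-12 and mx <= D * rho + 1e-12
```

The first inequality is the bound \(\Vert \textbf x\Vert_\alpha \le |\Upsilon|^{1/\alpha}\max_{\textbf t}|\textbf x_{\textbf t}|\); the second is \(\max_{\textbf t}|\textbf x_{\textbf t}|\le \Vert \textbf x\Vert_\alpha\).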

We let \(C_{m}\) and \(D_{m}\) denote the constants of Lemma 15 for \(\mathcal {E}_{m}\), for every \(m\in I^*\). Notice that \(C_{m}\) and \(D_{m}\) depend on the chosen \(\rho\), but we do not write this dependency explicitly because it does not create confusion and it lightens the notation.

Remark 5

Our setting, and in particular the following Theorem 17, is general enough to allow countably many different moduli to be used at the same time, one \(\rho _j\) for each \(\mathcal {E}_{j}\), as in Lemma 29. With some abuse of notation we denote \(\rho _{j,\mathcal {E}_{j}}\) by \(\rho _{ \mathcal {E}_{j}}\).

Now, consider the following assumption on the modulus.

Condition (\(A^{\Lambda }_{\rho }\))

We have \(\sum _{j\in I^*}\gamma ^*_{j}c_{j}D_{j}^{\alpha }<\infty\), where \(c_{j}=\lim \limits _{n\rightarrow \infty }\frac{\mathbb {P}(\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n})}{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>a_{n})}\).

This condition is satisfied in many cases. For example, if the modulus is unique and coincides with the uniform norm, then \(D_{j}=1\) and \(c_{j}\le |\mathcal {E}_{j}|\), and so \(\sum _{j\in I^*}\gamma ^*_{j}c_{j}D_{j}^{\alpha }\le 1\). Moreover, for \(\rho _{\alpha }(\cdot ):=\Vert \cdot \Vert _\alpha\) we have \(D_{j}=1\) and \(c_{j}=|\mathcal {E}_{j}|\), and so \(\sum _{j\in I^*}\gamma ^*_{j}c_{j}D_{j}^{\alpha }= 1\). We remark that this condition is needed to implement a dominated convergence argument in the proof of Theorem 17; as happens in most cases where dominated convergence is used, it might be possible to obtain the result for a specific \(\rho\) even if Condition (\(A^{\Lambda }_{\rho }\)) is not satisfied.
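The bound \(c_j\le |\mathcal E_j|\) for the uniform norm can be made concrete in the independent case. Assuming iid Pareto(\(\alpha\)) coordinates (our own illustrative model), \(\mathbb P(\max_{\textbf t\in \mathcal E_j}|\textbf X_{\textbf t}|>a)=1-(1-a^{-\alpha})^{|\mathcal E_j|}\), so the ratio defining \(c_j\) tends exactly to \(|\mathcal E_j|\); in general, the union bound gives \(c_j\le |\mathcal E_j|\) whatever the dependence.

```python
alpha, size_E = 1.5, 4                       # tail index and |E_j| (toy values)

def ratio(a):
    """P(max over E_j > a) / P(|X_0| > a) for iid Pareto(alpha) coordinates."""
    p0 = a ** (-alpha)                       # P(|X_0| > a), Pareto tail
    p_max = 1 - (1 - p0) ** size_E           # P(max over E_j > a), iid case
    return p_max / p0

r = ratio(1e6)                               # close to size_E for large a
```

The ratio increases toward \(|\mathcal E_j|\) as the level \(a\) grows, since the events \(\{|\textbf X_{\textbf t}|>a\}\) become asymptotically disjoint.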

We are now ready to state an anti-clustering condition tailored for conditioning on the modulus of \(\textbf{X}\) being large over a local subset, and not necessarily on \(\textbf{X}_{\textbf{0}}\). For every \(j\in I^*\), let \(R^{(j)}_{l,\Lambda _{n}}:=\bigcup _{\textbf{t}\in \{\textbf{s}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{s}}\supset \mathcal {E}_j\}}\big ((\Lambda _{n})_{-\textbf{t}}\big )^+\setminus K_{l}\) and let \(\hat{M}^{\Lambda ,|\textbf{X}|,(j)}_{l,n}:=\max _{\textbf{i}\in R^{(j)}_{l,\Lambda _{n}}}|\textbf{X}_{\textbf{i}}|\).

Condition (AC\(^{\Lambda }_{\succeq ,I^*}\))

The \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) satisfies condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) if there exists an integer sequence \(r_{n}\rightarrow \infty\) such that \(k_{n}= |\Lambda _n|/|\Lambda _{r_n}|\rightarrow \infty\) and, for every \(j\in I^*\),

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{\Lambda ,|\textbf{X}|,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j} |\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

Remark 6

We remark that condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) is weaker than assuming that for every \(j\in I^*\)

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{\Lambda ,|\textbf{X}|}_{2l,r_{n}}>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j} |\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

Remark 7

If \((\textbf{X}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k})\) is m-dependent then the anti-clustering conditions considered in this paper, namely (AC\(^{\Lambda }_{\succeq }\)) and (AC\(^{\Lambda }_{\succeq ,I^*}\)), are satisfied.

Moreover, it is possible to see that in some cases condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) is strictly weaker than condition (AC\(^{\Lambda }_{\succeq }\)), as the following example shows.

Example 5

(Continuing Example 1) Recall that in the setting of Example 1 we have \(I^*=\{1\}\) and that we denote \(\mathcal {E}_h\) by \(\mathcal C\). Thus, condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) in this setting reads:

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{\Lambda ,|\textbf{X}|,(1)}_{2l,r_{n}}>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal C} |\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

If we know that the pattern of observations is always the same, namely \(\Lambda _n=\bigcup _{\textbf{t}\in \mathcal {L}_0}(\mathcal{C})_\textbf{t}\cap K_{n}\), which in practice means that the weather stations perform regularly, then \(R_{l,\Lambda _n}^{(1)}=\Lambda _n\setminus K_l\) and condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) becomes

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\max \limits _{\textbf{i}\in \Lambda _{r_n}\setminus K_{2l}}|\textbf{X}_\textbf{i}|>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal C} |\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

On the other hand, \(R_{l,\Lambda _{n}}\) is given by \(\bigcup _{\textbf{t}\in \Lambda _n}((\Lambda _n)_{-\textbf{t}})^{+}\setminus K_l\supset R_{l,\Lambda _n}^{(1)}\). It is then possible to see that (AC\(^{\Lambda }_{\succeq ,I^*}\)) is strictly weaker than condition (AC\(^{\Lambda }_{\succeq }\)).

Let \(\tilde{\mathcal {D}}_{j}=\bigcup _{\textbf{s}\in \mathcal {G}_{j}\setminus \{\textbf{0}\}}(\mathcal {E}_{j})_{\textbf{s}}\). We are now ready to present one of the main results of this paper.

Theorem 17

Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha > 0\). We assume conditions (\(\mathcal {D}^{\Lambda }\)), (AC\(^{\Lambda }_{\succeq ,I^*}\)), \(\mathcal {A}^{\Lambda }(a_{n})\) and \((A^{\Lambda }_{\rho })\). Then \(N_{n}^{\Lambda }{\mathop {\rightarrow }\limits ^{d}}N^{\Lambda }\) on the state space \(\mathbb {R}^{d}\setminus \{\textbf{0}\}\) and the limit random measure has Laplace functional for \(g\in \mathbb {C}^{+}_{K}\), given by

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\int _{0}^{\infty }\sum _{j\in I^*}\gamma ^*_{j}c_{j}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( y\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )e^{-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}}g( y\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})}\Big ]d(-y^{-\alpha })\bigg ) \end{aligned}$$
(11)

where \(c_{j}=\lim \limits _{n\rightarrow \infty }\frac{\mathbb {P}(\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n})}{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>a_{n})}\).

We check that the mixing condition \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\) and the anti-clustering condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) are satisfied for any m-dependent stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\). As usual (see for example Jakubowski and Soja-Kukieła (2019)), we say that a random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) is m-dependent, for some \(m\in \mathbb {N}\), if the families \(\{\textbf{X}_{\textbf{t}};\textbf{t}\in A\}\) and \(\{\textbf{X}_{\textbf{t}};\textbf{t}\in B\}\) are independent for every pair of finite sets \(A, B \subset \mathbb {Z}^k\) satisfying \(\min _{\textbf{t}\in A,\textbf{s}\in B}\max _{i=1,...,k}|t_i-s_i|>m\).
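The separation requirement in this definition is a Chebyshev-distance condition; a direct transcription (the helper below is our own, for illustration only):

```python
def chebyshev_separated(A, B, m):
    """True iff min over (t, s) in A x B of max_i |t_i - s_i| exceeds m."""
    return min(max(abs(ti - si) for ti, si in zip(t, s))
               for t in A for s in B) > m

# two index sets in Z^2 at Chebyshev distance 4
A, B = {(0, 0), (1, 1)}, {(5, 5)}
```

For these sets the families would have to be independent under 3-dependence (distance 4 exceeds 3) but not necessarily under 4-dependence.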

Lemma 18

(m-dependent case) Consider an \(\mathbb {R}^{d}\)-valued m-dependent stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) and assume conditions (\(\mathcal {D}^{\Lambda }\)) and \((A^{\Lambda }_{\rho })\). Then the mixing condition \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\) and the anti-clustering condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) are satisfied.

5.3 The \(\Upsilon -\)spectral cluster field

Recall that \(\mathcal {G}_{j}\) is the lattice associated with \(\mathcal {L}_{j}\) intersected with the non-negative points, and that the extension of \(\mathcal {G}_{j}\) to the whole \(\mathbb {Z}^{k}\) is just given by \(\mathcal {L}_{j}=\mathcal {G}_{j}\cup -\mathcal {G}_{j}\). For every \(\mathcal {E}_{j}\), define \(\mathcal {H}_{j}:=\bigcup _{\textbf{s}\in \mathcal {L}_{j}}(\mathcal {E}_{j})_{\textbf{s}}\). Notice that \(\mathcal {H}_{j}\) coincides with \(\Xi ^*_j\) for \(j\in I^*\).

Proposition 19

Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha >0\) and assume condition (AC\(^{\Lambda }_{\succeq ,I^*}\)). Then \(|\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) with \(\textbf{t}\in \mathcal {D}_{j}\), and so \(\sum _{\textbf{t}\in \mathcal {L}_{j}}\rho _{(\mathcal {E}_{j})_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }<\infty\) a.s. and \(\sum _{\textbf{t}\in \mathcal {H}_{j}}|\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}|^{\alpha }<\infty\) a.s. for every \(j=1,...,q\).

Let \(\Upsilon\) be a finite subset of \(\mathbb {Z}^{k}\), let A be a subset of \(\mathbb {Z}^{k}\), and let \(\rho\) be a modulus. Define

$$\begin{aligned} \Vert \mathbf {\Theta }_{\Upsilon }\Vert _{\rho , A,\alpha }=\bigg (\sum _{\textbf{t}\in A}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }\bigg )^{1/\alpha } \end{aligned}$$

as the normalisation constant. We define the spectral cluster random field by

$$\begin{aligned} \textbf{Q}_{\Upsilon , A}:=\frac{\mathbf {\Theta }_{\Upsilon }}{\Vert \mathbf {\Theta }_{\Upsilon }\Vert _{\rho , A,\alpha }}, \end{aligned}$$

where the dependence on \(\rho\) is implicit. Recall that \(\rho _{\alpha }(\cdot ):=\Vert \cdot \Vert _\alpha\) and notice that when the modulus is \(\rho _{\alpha }\), \(\Upsilon\) is \(\mathcal {E}_{j}\), and A is \(\mathcal {L}_{j}\) then

$$\begin{aligned}& \Vert \mathbf {\Theta }_{\mathcal {E}_{j}}\Vert _{\rho _\alpha ,\mathcal {L}_{j},\alpha }=\bigg (\sum _{\textbf{t}\in \mathcal {H}_{j}}|\mathbf {\Theta }_{\mathcal {E}_{j}}(\textbf{t})|^{\alpha }\bigg )^{1/\alpha }=\Vert \mathbf {\Theta }_{\mathcal {E}_{j}}\Vert _{\mathcal {H}_{j},\alpha }\quad \text{and}\quad \\&\Vert \textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j}}\Vert _{\rho _\alpha ,\mathcal {L}_{j},\alpha }=\Vert \textbf{Q}_{\mathcal {E}_{j}}\Vert _{\mathcal {H}_{j},\alpha }=1. \end{aligned}$$

Observe that for bounded \(\mathcal {D}_{j}\) we have that \(\mathcal {E}_{j}=\{\textbf{0}\}\cup \mathcal {D}_{j}=\mathcal {H}_{j}\) and that \(\mathcal {G}_{j}=\mathcal {L}_{j}=\{\textbf{0}\}\). We remark that when \(\Upsilon =\{\textbf{0}\}\) we have that \(\mathbf {\Theta }_{\{\textbf{0}\}}\), \(\Vert \mathbf {\Theta }_{\{\textbf{ 0}\}}\Vert _{\rho _\alpha , A,\alpha }\), and \(\textbf{Q}_{\{\textbf{ 0}\}, A}\) are simply given by \(\mathbf {\Theta }\), \(\Vert \mathbf {\Theta }\Vert _{ A,\alpha }\), and \(\textbf{Q}_{ A}\) (see Section 4.2).
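On a finite truncation, the construction above is straightforward to reproduce numerically: with the modulus \(\rho _{\alpha }\) the normalisation constant is an \(\ell ^{\alpha }\)-sum over the index set, and the resulting cluster field has unit norm by construction. A sketch with synthetic values standing in for \(\mathbf {\Theta }\) (the array layout and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5

# Finite index window standing in for H_j, with synthetic values standing in
# for |Theta_{E_j, t}| (in the paper these come from the spectral tail field).
H = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]
theta = {t: rng.pareto(3.0) for t in H}

# With the modulus rho_alpha, the normalisation constant is the l^alpha norm
# over the index set: ||Theta||_{rho_alpha, L, alpha} = ||Theta||_{H, alpha}.
norm = sum(v ** alpha for v in theta.values()) ** (1.0 / alpha)

# The spectral cluster field Q = Theta / ||Theta||.
Q = {t: v / norm for t, v in theta.items()}

# By construction, ||Q||_{H, alpha} = 1.
print(sum(v ** alpha for v in Q.values()) ** (1.0 / alpha))  # 1.0 (up to rounding)
```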

5.4 Cluster point process expressed using the \(\Upsilon -\)spectral cluster field

Theorem 20

Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha >0\). We assume conditions (\(\mathcal {D}^{\Lambda }\)), (AC\(^{\Lambda }_{\succeq ,I^*}\)), \(\mathcal {A}^{\Lambda }(a_{n})\) and \((A^{\Lambda }_{\rho })\). Then \(N^{\Lambda }_{n}{\mathop {\rightarrow }\limits ^{d}}N^{\Lambda }\) on \(\mathbb {R}_{\textbf{0}}^{d}\), and the limit has, for \(g\in \mathbb {C}^{+}_{K}\), the Laplace functional:

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\sum _{j\in I^*}\gamma ^*_{j}c_{j}\int _{0}^{\infty }\mathbb {E}\Big [1 -e^{-\sum _{\textbf{t}\in \Xi _{j}^*}g( y\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}})}\Big ]d(-y^{-\alpha })\bigg ). \end{aligned}$$

Proof

It follows from Theorem 17, using the same arguments as in the proof of Theorem 12, the time-change formula (10), and the fact that, by Proposition 19, \(\sum _{\textbf{t}\in \mathcal {L}_j}\rho _{(\mathcal {E}_{j})_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }<\infty\) a.s. for every \(j\in I^*\). \(\square\)

In the following corollary we consider Theorems 17 and 20 when the modulus is \(\rho _{\alpha }\).

Corollary 21

Let the modulus be \(\rho _{\alpha }\). Consider an \(\mathbb {R}^{d}\)-valued stationary regularly varying random field \((\textbf{X}_{\textbf{t}})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha >0\). We assume conditions (\(\mathcal {D}^{\Lambda }\)), (AC\(^{\Lambda }_{\succeq ,I^*}\)) and \(\mathcal {A}^{\Lambda }(a_{n})\). Then \(N^{\Lambda }_{n}{\mathop {\rightarrow }\limits ^{d}}N^{\Lambda }\) on \(\mathbb {R}_{\textbf{0}}^{d}\), admitting, for \(g\in \mathbb {C}^{+}_{K}\), the Laplace functional:

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\sum _{j\in I^*}\gamma ^*_{j}|\mathcal {E}_{j}|\int _{0}^{\infty }\mathbb {E}\Big [1 -e^{-\sum _{\textbf{t}\in \Xi ^*_{j}}g( y\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}})}\Big ]d(-y^{-\alpha })\bigg ), \end{aligned}$$

where \(\sum _{j\in I^*}\gamma ^*_{j}|\mathcal {E}_{j}|=1\).

Proof

The result follows from Theorems 17 and 20 and from the fact that when the modulus is \(\rho _{\alpha }\) we have that:

$$\begin{aligned} c_{j}=\lim \limits _{n\rightarrow \infty }\frac{\mathbb {P}((\sum _{\textbf{t}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{t}}|^{\alpha })^{1/\alpha }>a_{n})}{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>a_{n})}=|\mathcal {E}_{j}|. \end{aligned}$$

\(\square\)

Moreover, we have the following result on the representation of \(N^{\Lambda }\).

Proposition 22

Consider \(N^{\Lambda }\) as given in Theorems 17 and 20. Then

$$\begin{aligned} N^{\Lambda }=\sum _{j\in I^*}\sum _{i\in \mathbb {N}}\sum _{\textbf{t}\in \Xi ^*_{j}}\varepsilon _{\Gamma _{j,i}^{-1/\alpha }(\gamma ^*_{j}c_{j})^{1/\alpha }\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},i,\textbf{t}}} \end{aligned}$$
(12)

where \(\Big (\sum _{\textbf{t}\in \Xi ^*_{j}}\varepsilon _{\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},i,\textbf{t}}}\Big )_{i\in \mathbb {N}}\) is an i.i.d. sequence of point processes with state space \(\mathbb {R}^{d}\), and where \((\Gamma _{j,i})_{i\in \mathbb {N}}\) are the points of a unit-rate homogeneous Poisson process on \((0, \infty )\) independent of \((\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},i,\textbf{t}})_{\textbf{t}\in \Xi ^*_{j}}\), for every \(j\in I^*\). Moreover, \(\Big (\sum _{i\in \mathbb {N}}\sum _{\textbf{t}\in \Xi ^*_{j}}\varepsilon _{\Gamma _{j,i}^{-1/\alpha }(\gamma ^*_{j}c_{j})^{1/\alpha }\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},i,\textbf{t}}}\Big )_{j\in I^*}\) is a sequence of independent point processes with state space \(\mathbb {R}^{d}\).

Finally, in the setting of Corollary 21 we have \(N^{\Lambda }=\sum _{i\in \mathbb {N}}\sum _{l\in \mathbb {N}}\varepsilon _{\Gamma _{i}^{-1/\alpha }\hat{\textbf{Q}}_{i,l}}\), where \(\Big (\sum _{l\in \mathbb {N}}\varepsilon _{\hat{\textbf{Q}}_{i,l}}\Big )_{i\in \mathbb {N}}\) is an i.i.d. sequence of point processes with state space \(\mathbb {R}^{d}\) with mixing distribution \(\mathcal {L}(\sum _{l\in \mathbb {N}}\varepsilon _{\hat{\textbf{Q}}_{i,l}})= \sum _{j\in I^*}\gamma ^*_{j}|\mathcal {E}_{j}|\mathcal {L}(\sum _{\textbf{t}\in \Xi ^*_{j}}\varepsilon _{\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}}})\) for every \(i\in \mathbb {N}\), and where \((\Gamma _{i})_{i\in \mathbb {N}}\) are the points of a unit-rate homogeneous Poisson process on \((0, \infty )\) independent of \((\hat{\textbf{Q}}_{l} )_{l\in \mathbb {N}}\).
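Representation (12) also suggests a direct simulation recipe for \(N^{\Lambda }\): cumulative sums of standard exponentials give the points \(\Gamma _{i}\) of a unit-rate Poisson process, and each \(\Gamma _{i}^{-1/\alpha }\) scales an independent copy of the cluster. A sketch with a toy cluster law standing in for the \(\textbf{Q}\)'s (the cluster distribution below is ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 2.0
n_points = 200

# Points of a unit-rate homogeneous Poisson process on (0, infty):
# Gamma_i = E_1 + ... + E_i with E_i i.i.d. standard exponential.
gammas = np.cumsum(rng.exponential(1.0, size=n_points))

def sample_cluster():
    """Toy stand-in for one cluster (Q_t): a geometric number of points with
    random signs and geometrically decaying magnitudes."""
    size = rng.geometric(0.5)
    signs = rng.choice([-1.0, 1.0], size=size)
    return signs * 0.5 ** np.arange(size)

# Atoms of the cluster point process: Gamma_i^{-1/alpha} * Q_{i, t}.
atoms = np.concatenate([g ** (-1.0 / alpha) * sample_cluster() for g in gammas])

# The largest atoms come from the smallest Gamma_i, as expected.
print(np.sort(np.abs(atoms))[-3:])
```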

Proof

The result follows from Theorems 17 and 20 by identifying the limiting Laplace functionals with those of cluster Poisson random fields. \(\square\)

Remark 8

Notice that the chosen order does not affect the spectral tail random field. This fact is a clear advantage of the \(\Upsilon -\)spectral cluster field approach compared to the approach of Section 4 because it allows the asymptotic representation (12) in full generality, not just in the lattice case.

Example 6

(Continuing Example 1) Armed with the results of this and the previous section, we can describe the asymptotic behaviour of the extremes of \(N_n^\Lambda\), where \(\Lambda _n\) is described in Example 1. Then,

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\int _{0}^{\infty }\mathbb {E}\Big [1 -e^{-\sum _{\textbf{t}\in \Xi _1^*}g( y\textbf{Q}_{\mathcal{C},\mathcal {L}_{h},\textbf{t}})}\Big ]d(-y^{-\alpha })\bigg ), \end{aligned}$$

where we consider \(\rho _\alpha\) as the modulus. We remark that such a clean result is not achievable when the anti-clustering condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) does not hold. Since in this case (AC\(^{\Lambda }_{\succeq ,I^*}\)) is strictly weaker than (AC\(^{\Lambda }_{\succeq }\)), this representation is equivalent to

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\int _{0}^{\infty }\sum _{i=1}^{|\mathcal{C}|}\frac{1}{|\mathcal{C}|}\mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \mathcal {D}_{i}}g(y\mathbf {\Theta }_{\textbf{t}})}\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \bigg ]d(-y^{-\alpha })\bigg ). \end{aligned}$$

Moreover, we can see that it is not important which reference point \(h\in \mathcal C\) we consider, since our results enjoy certain translation properties. In particular, this representation is equivalent to

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\int _{0}^{\infty }\mathbb {E}\Big [1 -e^{-\sum _{\textbf{t}\in \Xi _{s}}g( y\textbf{Q}_{\mathcal{E}_s,\mathcal {L}_{s},\textbf{t}})}\Big ]d(-y^{-\alpha })\bigg ), \end{aligned}$$

where s is any element of \(\{1,...,|\mathcal C|\}\) and \(\Xi _{s}\) is any different centering of \(\mathcal C_\infty\) at \(\textbf{0}\) (in our example \(\mathcal {L}_{h}=\mathcal {L}_{s}\) for every \(h,s=1,...,|\mathcal C|\)).

6 Applications

6.1 The extremal index

In this section we investigate properties of the extremal index for random fields; see the work of Hashorva (2021) for max-stable random fields. First, let us define it.

Definition 1

(\(\Lambda\)-extremal index) Consider an \(\mathbb {R}^d\)-valued stationary random field \((\textbf{X}_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\). Assume that for each positive \(\tau\) there exists a sequence \((u_{n}(\tau ))\) such that \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|\mathbb {P}(|\textbf{X}_\textbf{0}|>u_{n}(\tau ))=\tau \in [0,\infty ]\) holds and the limit \(\lim _{n\rightarrow \infty } \mathbb {P}(\max _{\textbf{t}\in \Lambda _{n} }|\textbf{X}_\textbf{t}|\le u_{n}(\tau )) = e^{-\theta ^{\Lambda }_{X}\tau }\) exists for some \(\theta _{X}^{\Lambda }\in [0, 1]\). Then \(\theta _{X}^{\Lambda }\) is the \(\Lambda\)-extremal index of \((\textbf{X}_\textbf{t})\).

As shown by Wu and Samorodnitsky (2020) when \(\Lambda _{n}=[1,n]^k\), the extremal index is connected with the so-called block extremal index. In particular, let

$$\begin{aligned} \theta ^{\Lambda }_{n}:=\frac{\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }|\textbf{X}_\textbf{t}|>u_{n}(\tau ))}{|\Lambda _{r_{n}}|\mathbb {P}(|\textbf{X}_\textbf{0}|>u_{n}(\tau ))} \text { and }\theta ^{\Lambda }_{b}:=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n} \end{aligned}$$

where \(u_{n}(\tau )\) is such that \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|\mathbb {P}(|\textbf{X}_\textbf{0}|>u_{n}(\tau ))=\tau \in [0,\infty ]\).

For the sake of simplicity, in this section we provide our results for random fields with respect to the modulus \(\rho _{\alpha }\). Thus, \(D_{j}=1\) and \(\sum _{j\in I^*}\gamma ^*_{j}|\mathcal{E}_{j}|=1\). For every \(j\in I^*\), generalizing the approach of Janssen (2019) for processes to random fields, let \(\textbf{T}^{*}_{j}\) be defined as follows: for \(\textbf{t}\in \mathcal {L}_{j}\) define

$$\begin{aligned} \{\omega :\textbf{T}_{j}^{*}(\omega )=\textbf{t}\} &=\{\omega :\max _{\textbf{z}\in (\mathcal {E}_{j})_{\textbf{t}}}|\mathbf \Theta _{\mathcal {E}_{j},\textbf{z}}(\omega )|-\sup _{\textbf{s}\in \mathcal {L}_{j},\textbf{s}\prec \textbf{t}}\max _{\textbf{v}\in (\mathcal {E}_{j})_{\textbf{s}}}|\mathbf \Theta _{\mathcal {E}_{j},\textbf{v}}(\omega )|>0\}\\ &\cap \{\omega :\max _{\textbf{z}\in (\mathcal {E}_{j})_{\textbf{t}}}|\mathbf \Theta _{\mathcal {E}_{j},\textbf{z}}(\omega )|-\sup _{\textbf{s}\in \mathcal {L}_{j},\textbf{s}\succeq \textbf{t}}\max _{\textbf{v}\in (\mathcal {E}_{j})_{\textbf{s}}}|\mathbf \Theta _{\mathcal {E}_{j},\textbf{v}}(\omega )|=0\} \end{aligned}$$

and for \(\textbf{t}\in \mathbb {Z}^{k}\setminus \mathcal {L}_{j}\) let \(\{\omega :\textbf{T}_j^{*}(\omega )=\textbf{t}\}=\emptyset\). If (AC\(^\Lambda _{\succ ,I^*}\)) is satisfied, then \(\textbf{T}_{j}^{*}\) is well defined thanks to the summability proved in Proposition 19 (see also the end of the proof of Lemma 30 for the connection between the summability of \(\rho\) and that of the max norm). If (AC\(^\Lambda _{\succ }\)) is satisfied and all the \(\mathcal {H}_j\)’s are lattices (namely all the \(\mathcal {D}\cup \{{\textbf{ 0}}\}\cup -\mathcal {D}\)’s and all the \(\Xi\)’s are lattices), then \(\textbf{T}_{j}^{*}\) is also well defined thanks to the summability proved in Proposition 11. Observe that when \(\mathcal {H}_j\) is a lattice then \(\mathcal {E}_j=\{\textbf{0}\}\) and \(\mathcal {H}_j=\mathcal {L}_j\).
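In words, \(\textbf{T}^{*}_{j}\) selects the \(\prec \)-first index of \(\mathcal {L}_{j}\) whose block attains the overall maximum of the tail field; on a finite truncation this is an argmax with first-hit tie-breaking. A one-dimensional sketch with the natural order (names and values are ours):

```python
def first_argmax(blocks):
    """Given blocks as (t, values) pairs listed in increasing order of t,
    return the first t whose block maximum strictly exceeds every earlier
    block maximum and is not exceeded later -- mimicking T*_j."""
    best_t, best_val = None, float("-inf")
    for t, values in blocks:
        m = max(abs(v) for v in values)
        if m > best_val:  # strict improvement over all earlier blocks
            best_t, best_val = t, m
    return best_t

# Blocks (E_j)_t listed along the order of L_j; the maximum 0.9 is attained
# at t = 0 and again at t = 1, and T* picks the earliest such index.
blocks = [(-1, [0.2, 0.1]), (0, [0.9, 0.4]), (1, [0.9]), (2, [0.3])]
print(first_argmax(blocks))  # 0
```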

Theorem 23

Consider an \(\mathbb {R}^d\)-valued stationary regularly varying random field \((\textbf{X}_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha\) and a sequence \((\Lambda _n)\) satisfying the condition (\(\mathcal {D}^{\Lambda }\)).

  1.

    If the anti-clustering condition (AC\(^{\Lambda }_{\succ }\)) holds, then the limit \(\theta ^{\Lambda }_{b}:=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists, is positive and has the representations

    $$\begin{aligned} \theta ^{\Lambda }_{b}=\sum _{j=1}^{\infty }\lambda _{j}\mathbb {P}(Y\sup _{\textbf{t}\in \mathcal {D}_{j} }|\mathbf \Theta _\textbf{t}|\le 1)=\sum _{j=1}^{\infty }\lambda _{j}\mathbb {E}\Big [\Big (\sup _{\textbf{t}\in \mathcal {D}_{j}\cup \{\textbf{0}\}}|\mathbf \Theta _\textbf{t}|^{\alpha }-\sup _{\textbf{t}\in \mathcal {D}_{j}}|\mathbf \Theta _\textbf{t}|^{\alpha }\Big ) \Big ]. \end{aligned}$$
    (13)
  2.

    If also all the \(\Xi _j^*\)’s are lattices then \(\theta ^{\Lambda }_{b}\) admits the representations

    $$\begin{aligned} \theta ^{\Lambda }_{b}&=\sum _{j=1}^\infty \lambda _{j}\mathbb {E}\bigg [\sup \limits _{\textbf{t}\in \Xi ^*_{j}}|\textbf{Q}_{\Xi ^*_{j},\textbf{t}}|^{\alpha }\bigg ]=\sum _{j=1}^\infty \lambda _{j}\mathbb {E}\bigg [\frac{\sup _{\textbf{t}\in \Xi ^*_{j}}|\mathbf \Theta _{\textbf{t}}|^{\alpha }}{\sum _{\textbf{s}\in \Xi ^*_{j}}|\mathbf \Theta _{\textbf{s}}|^{\alpha }} \bigg ]\nonumber \\&=\sum _{j=1}^\infty \lambda _{j}\mathbb {E}\bigg [|\mathbf \Theta _{\textbf{0}}|^{\alpha } \textbf{1}(\textbf{T}^{*}_{j}=\textbf{0})\bigg ]\\&=\sum _{j=1}^{\infty }\lambda _{j}\mathbb {E}\Big [\Big (\sup _{\textbf{t}\in \mathcal {D}_{j}\cup \{\textbf{0}\}}|\mathbf \Theta _\textbf{t}|^{\alpha }-\sup _{\textbf{t}\in \mathcal {D}_{j}}|\mathbf \Theta _\textbf{t}|^{\alpha }\Big ) \Big ]\,. \end{aligned}$$
    (14)
  3.

    In either case (1) or (2), if in addition the mixing condition

    $$\begin{aligned} \mathbb {P}(\max _{\textbf{t}\in \Lambda _{n} }|\textbf{X}_\textbf{t}|\le a^\Lambda _{n}x)-\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }|\textbf{X}_\textbf{t}|\le a^\Lambda _{n}x)^{k_{n}}\rightarrow 0,\quad n\rightarrow \infty , \end{aligned}$$
    (15)

    where \((k_{n})\) and \((r_{n})\) are as in the anti-clustering condition (AC\(^{\Lambda }_{\succ }\)), is satisfied, then \(\theta ^{\Lambda }_{X}\) exists and coincides with \(\theta ^{\Lambda }_{b}\).
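The representations in (13) can be sanity-checked on the classical one-dimensional moving-maximum example \(X_t=\max (Z_t,Z_{t-1})\) with i.i.d. standard Fréchet noise, whose extremal index is known to equal 1/2: the block ratio \(\theta ^{\Lambda }_{n}\) is explicit in this case and converges to 1/2. A sketch of this computation (the example is ours, not from the paper):

```python
import numpy as np

def theta_block(n, r, tau=1.0):
    """Exact block ratio theta_n for the moving maximum X_t = max(Z_t, Z_{t-1})
    with Z i.i.d. standard Frechet.  Here P(max_{1<=t<=r} X_t <= u) =
    exp(-(r + 1)/u) and P(X_0 <= u) = exp(-2/u), both exact by max-stability."""
    # u_n(tau) solves n * P(X_0 > u) = tau, i.e. u = -2 / log(1 - tau/n).
    u = -2.0 / np.log1p(-tau / n)
    num = 1.0 - np.exp(-(r + 1) / u)    # P(block maximum exceeds u_n)
    den = r * (1.0 - np.exp(-2.0 / u))  # r * P(|X_0| > u_n)
    return num / den

for n in (10**4, 10**6, 10**8):
    print(theta_block(n, r=int(n**0.5)))  # approaches 1/2 as n grows
```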

It is possible to see that Theorem 23 (2) only applies to \(\mathcal {D}\)’s that are lattices. It is natural to ask whether or not a similar result holds for any \(\mathcal {D}\). The answer is positive if a different anti-clustering condition is assumed. In particular, we obtain the analogous result when the anti-clustering condition (AC\(^{\Lambda }_{\succ ,I^*}\)) holds.

Theorem 24

Consider an \(\mathbb {R}^d\)-valued stationary regularly varying random field \((\textbf{X}_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) with index \(\alpha\) and a sequence \((\Lambda _n)\) satisfying the condition (\(\mathcal {D}^{\Lambda }\)).

  1. 1.

    If the anti-clustering condition (AC\(^{\Lambda }_{\succ ,I^*}\)) holds, then the limit \(\theta ^{\Lambda }_{b}:=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists, is positive and has the representations

    $$\begin{aligned} \theta _{b}^{\Lambda }&=\mathbb {E}\bigg [\sup \limits _{l\in \mathbb {N}}|\hat{\textbf{Q}}_{l}|^{\alpha } \bigg ]=\sum _{j\in I^*}\gamma ^*_{j}|\mathcal{E}_{j}|\mathbb {E}\bigg [\sup \limits _{\textbf{t}\in \Xi ^*_{j}}|\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}}|^{\alpha }\bigg ]\nonumber \\&=\sum _{j\in I^*}\gamma ^*_{j}|\mathcal{E}_{j}|\mathbb {E}\bigg [\max \limits _{\textbf{t}\in (\mathcal {E}_{j})_{\textbf{T}^{*}_{j}}}|\textbf{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}}|^{\alpha }\bigg ]\nonumber \\&=\sum _{j\in I^*}\gamma ^*_{j}|\mathcal{E}_{j}|\mathbb {E}\bigg [\max _{\textbf{z}\in \mathcal {E}_{j}}|\mathbf \Theta _{\mathcal {E}_{j},\textbf{z}}|^{\alpha } \textbf{1}(\textbf{T}^{*}_{j}=\textbf{0})\bigg ]\,. \end{aligned}$$
    (16)
  2. 2.

    If, in addition, the mixing condition (15), where \((k_{n})\) and \((r_{n})\) are as in the anti-clustering condition (AC\(^{\Lambda }_{\succ ,I^*}\)), is satisfied, then \(\theta ^{\Lambda }_{X}\) exists and coincides with \(\theta ^{\Lambda }_{b}\).

From the two previous results we obtain the following immediate corollary.

Corollary 25

Assume that \((\textbf{X}_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) is an \(\mathbb {R}^d\)-valued stationary regularly varying random field with index \(\alpha\), that the sequence \((\Lambda _n)\) satisfies the condition (\(\mathcal {D}^{\Lambda }\)), and that either (AC\(^{\Lambda }_{\succ }\)) or (AC\(^{\Lambda }_{\succ ,I^*}\)) holds together with (15). Then the extremal index \(\theta _{X}\) exists, is positive, and

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\mathbb {P}(\max _{\textbf{t}\in \Lambda _{n} }(a_{n}^\Lambda )^{-1}|\textbf{X}_{\textbf{t}}|\le x)=\Phi _{\alpha }^{\theta _{X}}(x),\quad x>0, \end{aligned}$$

where \(\Phi _{\alpha }(x) = e^{-x^{-\alpha }}\), \(x > 0\), is the standard Fréchet distribution function and \(\theta _{X}\) is given in either (13) or (16) depending on which anti-clustering and mixing conditions are satisfied, (14) being available only when the \(\Xi ^*_j\) are lattices.

6.2 Max-stable random fields

Consider a non-negative stationary random field \(X=(X_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) (with state space \(E=\mathbb R_+\) and \(d=1\)). A fundamental representation theorem by de Haan (1984) states that any stochastically continuous max-stable (real-valued) random field X can be represented (in finite-dimensional distributions) as

$$\begin{aligned} X_{{\textbf {t}}}=\max _{i\in \mathbb {N}}U_{i}V_{i,{\textbf {t}}},\quad {\textbf {t}}\in \mathbb {Z}^{k}, \end{aligned}$$
(17)

where \((U_{i})_{i\in \mathbb {N}}\) is a decreasing enumeration of the points of a Poisson point process on \((0,+\infty )\) with intensity measure \(u^{-2}du\), \((V_{i})_{i\in \mathbb {N}}\) are i.i.d. copies of a non-negative random field \((V_{{\textbf {t}}})_{{\textbf {t}}\in \mathbb {Z}^{k}}\) such that \(\mathbb {E}[V_{{\textbf {t}}}]<+\infty\) for all \({\textbf {t}}\in \mathbb {Z}^{k}\), and the sequences \((U_{i})_{i\in \mathbb {N}}\) and \((V_{i})_{i\in \mathbb {N}}\) are independent. Observe that the above definition implies that the marginal distributions of X are 1-Fréchet, that is, \(\mathbb {P}(X_{{\textbf {t}}}\le z)=e^{-\mathbb {E}[V_{{\textbf {t}}}]/z}\) for all \(z>0\), where \(\mathbb {E}[V_{{\textbf {t}}}]>0\) is a scale parameter.
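A truncated simulation of the representation (17) recovers the 1-Fréchet marginal numerically; here V is chosen Uniform(0, 2) at a fixed site purely for illustration, so that \(\mathbb {E}[V]=1\) (all choices below are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n_rep, n_atoms = 50_000, 50

# The points of the Poisson process with intensity u^{-2} du on (0, infty),
# in decreasing order, are U_i = 1 / Gamma_i with Gamma_i unit-rate Poisson points.
gam = np.cumsum(rng.exponential(1.0, size=(n_rep, n_atoms)), axis=1)
U = 1.0 / gam

# i.i.d. marks V_i at a fixed site t, Uniform(0, 2) so that E[V] = 1.
V = rng.uniform(0.0, 2.0, size=(n_rep, n_atoms))

# Truncated de Haan representation X_t = max_i U_i V_{i,t}; the truncation
# error is negligible because late atoms have very small U_i.
X = (U * V).max(axis=1)

z = 2.0
print(np.mean(X <= z), np.exp(-1.0 / z))  # empirical CDF vs e^{-E[V]/z}
```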

The aim of this section is to find a necessary and sufficient condition for the anti-clustering condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) to hold for stationary max-stable random fields. We recall some notation: \(\mathcal {H}_{j}=\bigcup _{\textbf{s}\in \mathcal {L}_{j}}(\mathcal {E}_{j})_{\textbf{s}}\) where, for every \(j\ge 1\), the \(\mathcal {E}_j\) are finite subsets of \(\mathbb Z^k\) containing \(\textbf{0}\) and the \(\mathcal {L}_{j}\) are lattices of \(\mathbb Z^k\) (possibly degenerate). The following result is an extension of results in Wu and Samorodnitsky (2020). Notice that the limit (12) motivates the introduction of a mixing distribution on \(V_{\textbf{ t}}\), as in the second assertion below.

Proposition 26

Let \((X_{{\textbf {t}}})_{{\textbf {t}}\in \mathbb {Z}^{k}}\) be a stationary max-stable random field with non-negative values, and consider a sequence \(\Lambda _n\) of subsets of translates of \(\bigcup _{j\ge 1}\mathcal {H}_{j}\) satisfying the condition (\(\mathcal {D}^{\Lambda }\)). Then \((X_{{\textbf {t}}})_{{\textbf {t}}\in \mathbb {Z}^{k}}\) satisfies the (AC\(^{\Lambda }_{\succeq ,I^*}\)) condition for any \(r_{n}\rightarrow \infty\) such that \(\lfloor n/r_{n}\rfloor \rightarrow \infty\) if, for any \(i\ge 1\) and \(j\in I^*\),

$$\begin{aligned} \lim \limits _{|{\textbf {t}}|\rightarrow \infty ,{\textbf {t}}\in (\mathcal {H}_{i})^+}V_{{\textbf {t}}}\,\textbf{1}\big (\max \limits _{\textbf{s}\in \mathcal {E}_j} V_{\textbf{s}}\ne 0\big )=0,\quad a.s. \end{aligned}$$
(18)

Consider \(\mathcal L(V_{\textbf{ t}})=\sum _{j\in I^*} \lambda _j \mathcal L(V_{\textbf{ t}}^{(j)})\), \(\lambda _j>0\), \(j\in I^*\), \(\sum _{j\in I^*} \lambda _j=1\), such that each component \(V_{\textbf{ t}}^{(j)}\) is supported by a subset of a translation of a unique \(\mathcal{H}_j\), \(j\ge 1\). Then the condition (18) simplifies to

$$\begin{aligned} \lim \limits _{|{\textbf {t}}|\rightarrow \infty ,{\textbf {t}}\in (\mathcal {H}_{j})^+}V_{{\textbf {t}}}\,\textbf{1}\big (\max \limits _{\textbf{s}\in \mathcal {E}_j} V_{\textbf{s}}\ne 0\big )=0,\quad a.s. \end{aligned}$$
(19)

for any \(j\in I^*\).

Notice that these specific max-stable random fields could be used to model any asymptotic clustering, in view of our result (12).

Remark 9

Under Condition (\(\mathcal {D}^{\Lambda }\)), the index set \(\Lambda _n\) fills up the translated asymptotic index set \(\bigcup _{j\ge 1}\mathcal {H}_{j}\). Checking the conditions (18) and (19) requires knowing the \(\mathcal {E}_j\), \(j\ge 1\), beforehand. Thus it requires some prior knowledge of the grid of the observations.

Example 7

(Continuing Example 1) Max-stable random fields have been introduced to model extremal phenomena such as storms, starting with the pioneering work of Smith (1990). The spatial model (2002), called spectrally stationary, is widely used to model spatial dependence because of its simplicity. It is defined as follows: consider an i.i.d. sequence of stationary random fields \(V_{i,\textbf{s}}\), \({\textbf{s}}\in \mathbb Z^2\), which are not identically null. Then \(X_{\textbf{s}}^{space}=\max _{i\ge 1}U_iV_{i,{\textbf{s}}}\) is a stationary max-stable random field. However, it satisfies neither condition (18) nor (19) in any direction of \(\mathbb Z^2\), since that would contradict the stationarity assumption on \(V_{i,\textbf{s}}\), \({\textbf{s}}\in \mathbb Z^2\). The M3 representation of de Haan and Pereira (2006) and Kabluchko et al. (2009) was introduced to bypass this issue. Consider now a space-time model with space defined over \(\mathcal H_0=\bigcup _{{\textbf{t}}\in \mathcal{L}_0}(\mathcal{C})_{\textbf{ t}}\) where \(\mathcal C\) is a finite subset of \(\mathbb Z^2\). It is sufficient to check (18) and (19) with the limit taken along the time direction only. Thus a stationary space-time process \(X_t\) whose spatial distribution is that of \(X_{\textbf{ t}}^{space}\) can satisfy conditions (18) and (19) when its extremes are sufficiently independent over time. A basic example is a process that is i.i.d. in time, for which the condition \(\max _{\textbf{t}\in \mathcal {E}_0} V_{\textbf{t}}=\max _{\textbf{t}\in \mathcal C\times \{0\}} V_{\textbf{t}}\ne 0\) forces \(V_{\textbf{t}}= 0\) for any other component \({\textbf{ t}}=({\textbf{s}},k)\), \(k\ne 0\). Such spectrally stationary models in space were not attainable in previous studies, see Remark 5 (i) of Buhl and Klüppelberg (2019), because they are not ergodic, as shown in Dombry and Kabluchko (2017).

7 Proofs in Section 3

7.1 Proof of Proposition 3

Assume that the first statement of point (I) fails for some \(\mathcal {D}_{j}\) and \(\mathcal {D}_{i}\) with \(j\ne i\). Then, for every \(p\in \mathbb {N}\), \(\mathcal {D}_{j}\cap K_{p}= \mathcal {D}_{i}\cap K_{p}\), which implies that \(\mathcal {D}_{j}=\mathcal {D}_{i}\), contradicting Condition (\(\mathcal {D}^{\Lambda }\)). By Condition (\(\mathcal {D}^{\Lambda }\)) we infer, on one hand, that \(\lambda _{j,p}\ge \sum _{i\in I_{p}^{(j)}}\lambda _{i}\). Indeed, \(\mathcal {D}_{j}\cap K_{p}\subseteq \mathcal {D}_{j}\cap K_{p'}\) for \(p'>p\); thus we have the inclusion

$$\begin{aligned} \{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p')}=\mathcal {D}_{j}\cap K_{p'}\}\subseteq \{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\} \end{aligned}$$

and \(\lambda _{j,p} \ge \lambda _j\), \(p\ge 1\), \(1\le j\le q\). Thus for any \(p'>p\) we have

$$\begin{aligned} \{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}&=\bigcup _{i\in I_p^{(j)}}\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{i}\cap K_{p}\}\\&\supseteq \bigcup _{i\in I_p^{(j)}}\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p')}=\mathcal {D}_{i}\cap K_{p'}\}\,. \end{aligned}$$

Fix \(\varepsilon >0\). As \(\sum _{i=1}^q\lambda _i<\infty\), there exists some m sufficiently large such that \(\sum _{i\ge m}\lambda _i< \varepsilon\). Moreover, there exists \(p'\) sufficiently large so that \(\mathcal {D}_{j}\cap K_{p'}\ne \mathcal {D}_{i}\cap K_{p'}\) for any \(i\ne j\) with \(i,j\le m\), from the reasoning above. Thus

$$\begin{aligned} |\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}|&\ge |\bigcup _{i\in I_p^{(j)}, i\le m}\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p')}=\mathcal {D}_{i}\cap K_{p'}\}|\\&\ge \sum _{i\in I_p^{(j)}, i\le m} |\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p')}=\mathcal {D}_{i}\cap K_{p'}\}| \end{aligned}$$

and dividing both sides by \(|\Lambda _n|\) and letting \(n\rightarrow \infty\) we obtain

$$\begin{aligned} \lambda _{j,p}\ge \sum _{i\in I_p^{(j)}, i\le m}\lambda _{i,p'}\ge \sum _{i\in I_p^{(j)}, i\le m}\lambda _{i}\ge \sum _{i\in I_p^{(j)}}\lambda _{i}-\varepsilon \,. \end{aligned}$$

As it holds for every \(\varepsilon >0\), it implies the desired relation \(\lambda _{j,p}\ge \sum _{i\in I_{p}^{(j)}}\lambda _{i}\). On the other hand, \(\lambda _{j,p}\) cannot be strictly greater than \(\sum _{i\in I_{p}^{(j)}}\lambda _{i}\). Indeed, defining the equivalence relation \(i\sim j \Leftrightarrow i\in I_p^{(j)}\), one considers the partition of \(\{1,\ldots ,q\}\) generated by the equivalence classes, \(\mathfrak {P}_p=\{1,\ldots ,q\}/\sim\). If

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}|/|\Lambda _n|> \sum _{i\in I_{p}^{(j)}}\lambda _{i} \end{aligned}$$

for some \(j\in \{1,\ldots ,q\}\) belonging to the class \(\ell \in \mathfrak {P}_p\) then one gets

$$\begin{aligned} &\lim \limits _{n\rightarrow \infty }\sum _{j\in \mathfrak {P}_p}|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}|/|\Lambda _n|\\&>\lim \limits _{n\rightarrow \infty }\sum _{j\in \mathfrak {P}_p\setminus \{\ell \}}|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}|/|\Lambda _n|\\&\quad + \sum _{j\in I_{p}^{(\ell )}}\lambda _{j} > \sum _{\ell '\in \mathfrak {P}_p} \sum _{j\in I_{p}^{(\ell ')}}\lambda _{j}=1 \end{aligned}$$

which yields a contradiction.

For point (II), consider the case of a sequence \((\Lambda _{r_{n}})_{n\in \mathbb {N}}\) whose points move apart from each other as n increases. This growing separation has the following consequence: for every fixed p, when we consider \(K_{p}\) around any one of the points, all the other points lie outside \(K_{p}\) for every n large enough. In this case there is only one \(\mathcal {D}\) for \((\Lambda _{r_{n}})_{n\in \mathbb {N}}\), and it is the empty set.

For point (III) we need the following Lemma.

Lemma 27

For any \(1\le j\le q\) and \(\textbf{z}\in \mathcal {D}_{j}\), there exists \(1\le i\le q\) such that \(\mathcal {D}_{i}=((\mathcal {D}_{j})_{-\textbf{z}})^+\) and \(\lambda _i\ge \lambda _j\).

Proof

Consider \(\textbf{z}\in \mathcal {D}_{j}\), \(1\le j\le q\), and let \(D:=((\mathcal {D}_{j})_{-\textbf{z}})^+\). Notice that for every \(q\in \mathbb {N}\) there exists a \(p\in \mathbb {N}\) such that \(K_{p}\supset ((K_{q})_{-\textbf{z}})^{+}\). Thus \(\mathcal {D}_j\cap ((K_{q})_{-\textbf{z}})^{+}\subset \mathcal {D}_j\cap K_p\), so that \(\textbf{z}+\textbf{s}\) belongs to \(\mathcal {D}_j\cap K_p\) for any \(\textbf{s}\in D\cap K_{q}\). Thus for every \(q\in \mathbb {N}\) we have

$$\begin{aligned} \{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},q)}=D\cap K_{q}\}\supset \{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_j\cap K_{p}\} \end{aligned}$$

so that

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},q)}=D\cap K_{q}\}|/|\Lambda _{n}|\ge \lambda _{j,p}\ge \lambda _{j}\,. \end{aligned}$$
(20)

Assume that D does not coincide with any \(\mathcal {D}_{i}\), \(1\le i\le q\). Fix \(\varepsilon >0\) small enough that \(\lambda _j>\varepsilon\). Let m satisfy \(\sum _{i>m}\lambda _{i}<\varepsilon\) as in the proof of point (I) above (thus \(j\le m\)), and let p be sufficiently large that \(D\cap K_{p}\ne \mathcal {D}_{i}\cap K_{p}\) and \(\mathcal {D}_{i}\cap K_{p}\ne \mathcal {D}_{k}\cap K_{p}\) for every \(i\ne k\le m\). Using the notation introduced in the proof of point (I), we have

$$\begin{aligned}& \limsup \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=D\cap K_{p}\}|/|\Lambda _{n}|\\&\le \limsup \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}\ne \mathcal {D}_{i}\cap K_{p},\forall i\le m\}|/|\Lambda _{n}|\\&\le \limsup \limits _{n\rightarrow \infty }(|\Lambda _{n}|-|\bigcup _{i\le m}\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{i}\cap K_{p}\}|)/|\Lambda _{n}|\\& \le 1-\sum _{i\le m}\lambda _{i,p}\le 1-\sum _{i\le m}\lambda _{i}\le \varepsilon \end{aligned}$$

which is in contradiction with (20). Therefore, \(D=\mathcal {D}_{i}\) for some \(1\le i\le q\) and we have

$$\begin{aligned} \lim \limits _{q\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},q)}=D\cap K_{q}\}|/|\Lambda _{n}|=\lambda _{i} \end{aligned}$$

and the relation \(\lambda _{i}\ge \lambda _{j}\) follows from (20). \(\square\)

To prove Point (III), observe first that for any \(\mathcal {D}_{j}\) there exist \(b_j\in \mathbb {N}\cup \{\infty \}\), with \(b_j\le |\mathcal {D}_j|+1\), distinct sets \((\mathcal {D}_{j})_{-\textbf{z}}^+\), \(\textbf{z}\in \mathcal {D}_{j}\). The sum of the corresponding weights \(\lambda _i\) being smaller than 1 and larger than \(b_j\lambda _j\) by Lemma 27, we get the constraint \(b_j\lambda _j\le 1\), and Point (III) follows.

7.2 Proof of Proposition 4

First, for any \(1\le j\le q\), let us show that \(\mathcal {G}_j\) is invariant under addition, in the sense that if \(\textbf{z}\in \mathcal {G}_j\) and \(\textbf{z}'\in \mathcal {G}_j\) then \(\textbf{z}+\textbf{z}'\in \mathcal {G}_j\setminus \{\textbf{0}\}\). Indeed, \(\textbf{z}'\in \mathcal {D}_{j}= ((\mathcal {D}_{j})_{-\textbf{z}})^+\), so that necessarily \(\textbf{z}+\textbf{z}'\in \mathcal {D}_{j}\). Moreover we have

$$\begin{aligned} ((\mathcal {D}_{j})_{-\textbf{z}-\textbf{z}'})^+&= ((((\mathcal {D}_{j})_{-\textbf{z}})^+)_{-\textbf{z}'})^+\\&= ((\mathcal {D}_{j})_{-\textbf{z}'})^+\\&=\mathcal {D}_{j}\,. \end{aligned}$$

This shows that \(\mathcal {G}_j\) is invariant by addition on \((\mathbb {Z}^{k})^+\). Thus \(\mathcal {G}_j\) is given by a lattice: there exist k distinct (not necessarily linearly independent) vectors \(\textbf{v}_{1},...,\textbf{v}_{k}\in \mathbb {Z}^{k}\) (i.e. a basis of \(\mathbb {Z}^{l}\) for some \(l\in \{1,...,k\}\), called the rank) such that \(\mathcal {G}_j=\{\sum _{l=1}^{k}a_{l}\textbf{v}_{l}:a_{l}\in \mathbb {Z}\}\cap \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succeq \textbf{0}\}\). We will refer to the degenerate case \(\mathcal {G}_j=\{\textbf{0}\}\) as the case of null rank \(l=0\). Thus, \(\mathcal {L}_j\) is a lattice on \(\mathbb {Z}^k\).
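As a toy numerical illustration of this closure property (not part of the proof; the generators, the window size, and the use of the lexicographic order for \(\succeq\) are hypothetical choices), one can check that a lattice in \(\mathbb {Z}^2\), intersected with the nonnegative cone, is closed under addition:

```python
# Toy check (illustrative only): a lattice in Z^2 generated by v1, v2,
# intersected with the lexicographically nonnegative cone, is closed
# under addition: z, z' in G implies z + z' in G.
from itertools import product

v1, v2 = (1, 1), (2, 0)  # hypothetical generators

def lex_nonneg(t):
    # t "succeq" 0 in the lexicographic order on Z^2 (tuple comparison)
    return t >= (0, 0)

# enumerate lattice points with bounded coefficients
lattice = {(a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])
           for a, b in product(range(-5, 6), repeat=2)}
G = {z for z in lattice if lex_nonneg(z)}

# check closure on a window small enough that all sums are still enumerated
window = {z for z in G if max(abs(z[0]), abs(z[1])) <= 2}
for z in window:
    for zp in window:
        s = (z[0] + zp[0], z[1] + zp[1])
        assert s in lattice and lex_nonneg(s)
print("closure holds on the sampled window")
```

The check is restricted to a small window only so that the sums remain inside the finitely enumerated portion of the (infinite) lattice.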

Let us now prove the existence of the partition (4). We have to show that for any \(\textbf{z}\in \mathcal {D}_j\) there exists a unique \(1\le i\le b_j\) such that \(\textbf{z}-\textbf{z}_{l_i}\in \mathcal L_{l_i}\). We know that there is a unique \(\mathcal {D}_{l_i}\) such that \(\mathcal {D}_{l_i}=((\mathcal {D}_j)_{- \textbf{z}})^+=((\mathcal {D}_j)_{- \textbf{z}_{l_i}})^+\) for some \(1\le i\le b_j\). Assume without loss of generality that \(\textbf{z}\succeq \textbf{z}_{l_i}\). Then either \(\textbf{z}=\textbf{z}_{l_i}\), in which case \(\textbf{z}-\textbf{z}_{l_i}=\textbf{0}\in \mathcal G_{l_i}\), or \(\textbf{z}-\textbf{z}_{l_i}\succ \textbf{0}\) and for any \(\textbf{s}\in \mathcal {D}_{l_i}\) we have \(\textbf{s}+\textbf{z}\in \mathcal {D}_{j}\) and thus \(\textbf{s}+\textbf{z}-\textbf{z}_{l_i}\in \mathcal {D}_{l_i}\); that \(\textbf{z}-\textbf{z}_{l_i}\in \mathcal G_{l_i}\) then follows by definition of \(\mathcal G_{l_i}\). Since \(\textbf{z}\) is an arbitrary point in \(\mathcal {D}_j\) and since \(l_i\) is unique, as is \(\mathcal {D}_{l_i}\), we obtain the desired partition.

Consider any \(\mathcal {D}_{l}\) so that there exists \(\textbf{z}\in \mathcal {D}_j\) satisfying \(\mathcal {D}_{l}=((\mathcal {D}_j)_{- \textbf{z}})^+\). Then for any \(\textbf{s}\in \mathcal {G}_j\setminus \{\textbf{0}\}\) we have

$$\begin{aligned} ((\mathcal {D}_{l})_{-\textbf{s}})^+&= ((\mathcal {D}_{j})_{-\textbf{z}-\textbf{s}})^+\\&= ((((\mathcal {D}_{j})_{-\textbf{s}})^+)_{-\textbf{z}})^+\\&= ((\mathcal {D}_{j})_{-\textbf{z}})^+\\&=\mathcal {D}_{l}\,. \end{aligned}$$

Since \(\textbf{z}\in \mathcal {D}_j=((\mathcal {D}_{j})_{-\textbf{s}})^+\), we get \(\textbf{z}+\textbf{s}\in \mathcal {D}_{j}\) and \(\textbf{s}\in \mathcal {D}_{l}\). Thus we have proved that

$$\begin{aligned} \textbf{s}\in \{\textbf{z}'\in \mathcal {D}_l\cup \{\textbf{0}\}: ( (\mathcal {D}_{l})_{-\textbf{z}'})^+= \mathcal {D}_l \}=: \mathcal {G}_l \end{aligned}$$

and that \(\mathcal {G}_j\subseteq \mathcal {G}_l\).

Further, we now show that \(\mathcal {G}_j\) and \(\mathcal {G}_l\), \(l=l_1,...,l_{b_j}\), have the same rank. Assume the contrary. Thus, let \(\mathcal {G}_j=\{\sum _{h=1}^{m}a_{h}\textbf{v}_{h}:a_{h}\in \mathbb {Z}\}^+\) where \(\textbf{v}_{1},...,\textbf{v}_{m}\in \mathbb {Z}^{k}\) are linearly independent and \(\mathcal {G}_l=\{\sum _{h=1}^{p}a_{h}\textbf{v}'_{h}:a_{h}\in \mathbb {Z}\}^+\) where \(\textbf{v}'_{1},...,\textbf{v}'_{p}\in \mathbb {Z}^{k}\) are linearly independent, with \(k\ge p>m\). Since \(\mathcal {G}_j\subseteq \mathcal {G}_l\) we know that \(\textbf{v}_{i}=c_i\textbf{v}'_{i}\) for some \(c_i\in \mathbb {Z}\), for every \(i=1,...,m\). Since \(a_h\textbf{v}'_h\in \mathcal {G}_l\setminus \mathcal {G}_j\) for any \(a_h\in \mathbb {Z}\) such that \(a_h\textbf{v}'_h\succeq \textbf{0}\), \(h=m+1,...,p\), and since \(\mathcal {G}_j\subseteq \mathcal {G}_l\) (and \(\mathcal {G}_l\) is a lattice), we have that \((\mathcal {G}_j)_{a_h\textbf{v}'_h}\subset \mathcal {G}_l\), and again by the lattice structure of \(\mathcal {G}_l\) we have \((\mathcal {L}_j)_{a_h\textbf{v}'_h}^+\subset \mathcal {G}_l\). By induction we obtain

$$\begin{aligned} \bigcup _{a_{m+1}\in \mathbb {Z}}\bigcup _{a_{m+2}\in \mathbb {Z}}\cdots \bigcup _{a_p\in \mathbb {Z}}(\mathcal {L}_j)_{a_{m+1}\textbf{v}'_{m+1}+\cdots +a_{p}\textbf{v}'_{p}}^+\subset \mathcal {G}_l. \end{aligned}$$
(21)

Now, consider \((\mathcal {D}_{j}\cap K_{q})\setminus \bigcup _{\textbf{i}\in \mathcal {G}_j\setminus \{\textbf{0}\}}(\mathcal {D}_{j}\cap K_{q})_{\textbf{i}}\), namely the points in \(\mathcal {D}_{j}\cap K_{q}\) without \(K_{q}^{+}(\textbf{i})\) for every \(\textbf{i}\in \mathcal {G}_j\setminus \{\textbf{0}\}\). By (21) we have

$$\begin{aligned} \Big (\bigcup _{a_{m+1}\in \mathbb {Z}}\bigcup _{a_{m+2}\in \mathbb {Z}}\cdots \bigcup _{a_p\in \mathbb {Z}}(\mathcal {L}_j)_{a_{m+1}\textbf{v}'_{m+1}+\cdots +a_{p}\textbf{v}'_{p}}^+\Big )_{\textbf{z}_l}\subset (\mathcal {G}_l)_{\textbf{z}_l}\subset \mathcal {D}_j, \end{aligned}$$

where \(\textbf{z}_{l}\) is defined in the statement of Point (V). Thus, \(|(\mathcal {D}_{j}\cap K_{q})\setminus \bigcup _{\textbf{i}\in \mathcal {G}_j\setminus \{\textbf{0}\}}(\mathcal {D}_{j}\cap K_{q})_{\textbf{i}}|\rightarrow \infty\) as \(q\rightarrow \infty\) monotonically. In particular, there is a \(q^{*}\) large enough such that \(|(\mathcal {D}_{j}\cap K_{q^*})\setminus \bigcup _{\textbf{i}\in \mathcal {G}_j\setminus \{\textbf{0}\}}(\mathcal {D}_{j}\cap K_{q^{*}})_{\textbf{i}}|>1/\lambda _j\). Since \(\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},q^*)}=\mathcal {D}_j\cap K_{q^*}\}|/|\Lambda _n|=\lambda _{j,q^*}\), there are asymptotically \(\lambda _{j,q^*}|\Lambda _{n}|\,|(\mathcal {D}_{j}\cap K_{q^*})\setminus \bigcup _{\textbf{i}\in \mathcal {G}_j\setminus \{\textbf{0}\}}(\mathcal {D}_{j}\cap K_{q^{*}})_{\textbf{i}}|\) such points in \(\Lambda _n\), but since \(\lambda _{j,q^*}\ge \lambda _j\) we have that \(\lambda _{j,q^*}|\Lambda _{n}|\,|(\mathcal {D}_{j}\cap K_{q^*})\setminus \bigcup _{\textbf{i}\in \mathcal {G}_j\setminus \{\textbf{0}\}}(\mathcal {D}_{j}\cap K_{q^{*}})_{\textbf{i}}|>|\Lambda _{n}|\), which leads to a contradiction. Thus, \(\mathcal {G}_j\) and \(\mathcal {G}_l\), \(l=l_1,...,l_{b_j}\), have the same rank.

Finally, if \(\mathcal {D}_j\) is bounded then by definition of \(\mathcal {G}_j\) we have that \(\mathcal {G}_j=\{\textbf{0}\}\). If \(\mathcal {G}_j=\{\textbf{0}\}\) then \(\mathcal {G}_{l_i}=\{\textbf{0}\}\) because \(\mathcal {G}_j\) and \(\mathcal {G}_{l_i}\) have the same rank, for every \(i=1,...,b_j\). Since \(b_j\) is finite, we conclude that \(\mathcal {D}_j\) is finite.

7.3 Proof of Proposition 5

Since \(\mathcal {G}_j\subseteq \mathcal {G}_{l_i}\), it remains to show that \(\mathcal {G}_j\supseteq \mathcal {G}_{l_i}\), where i satisfies (TIP\(_j\)). We notice that for \(\textbf{x}\in (\mathcal {L}_{l_i})_{\textbf{z}_{l_i}}^+\) we have \(((\mathcal D_j)_{-\textbf{x}})^+=\mathcal D_{l_i}\) as

$$\begin{aligned} ((\mathcal D_j)_{-\textbf{x}})^+&= \Big ((\mathcal {G}_j^+)_{-\textbf{x}}\cup \bigcup _{h=1}^{b_j}((\mathcal L_{l_h})_{\textbf{z}_{l_h}})^+_{-\textbf{x}}\Big )^+\\&=((\mathcal {G}_j)_{-\textbf{x}})^+\cup \bigcup _{h=1}^{b_j}((\mathcal L_{l_h})_{\textbf{z}_{l_h}-\textbf{x}})^+\\&=((\mathcal {G}_j)_{-\textbf{x}})^+\cup \mathcal {G}_i ^+ \cup \bigcup _{h=1,h\ne i}^{b_j}((\mathcal L_{l_h})_{\textbf{z}_{l_h}-\textbf{x}})^+. \end{aligned}$$

Hence \(((\mathcal D_j)_{-\textbf{x}})^+\) is the unique \(\mathcal D_l\), \(l=1,\ldots ,b_j\), associated to the lattice \(\mathcal G_l = \mathcal G_{l_i}\), and it coincides with \(\mathcal D_{l_i}\). Then \(\textbf{y}-\textbf{x}\in \mathcal D_{l_i}\), with \(\textbf{y}\in \mathcal {G}_j\), is such that \(((\mathcal D_{l_i})_{-(\textbf{y}-\textbf{x})})^+=\mathcal D_j\). We then obtain that \(\mathcal {G}_j\supseteq \mathcal {G}_{l_i}\) by exchanging the role of \(\mathcal {D}_{l_i}\) with that of \(\mathcal {D}_j\) in the proof of \(\mathcal {G}_j\subseteq \mathcal {G}_{l_i}\) in Point (V). Further, by applying Lemma 27 to \(\mathcal {D}_{l_i}\) we conclude that \(\lambda _j=\lambda _{l_i}\).

If \(\mathcal {G}_j\) is a full rank lattice then it is spanned by k linearly independent vectors and there always exists a point \(\textbf{s}\in \mathcal {G}_j\) such that \(\textbf{s}\succ \textbf{z}_{l}\) for every \(l=l_1,...,l_{b_j}\). This implies that the (LC\(_l\)) condition must be satisfied and \(\mathcal {G}_j=\mathcal {G}_l\) for every \(l=l_1,...,l_{b_j}\). This concludes the proof of the first statement.

Let us now prove the second statement. By (TIP\(_j\)), for every \(i=0,...,b_j\), and Points (V) and (VI), there exists \(\textbf{z}\in \mathcal {D}_{l_i}\) such that \(((\mathcal {D}_{l_i})_{-\textbf{z}})^+=\mathcal {D}_{j}\). Hence \(\mathcal {D}_{l_i}\) contains a translated copy of \(\mathcal {D}_{j}\), and hence a translated copy of any \(\mathcal {D}_{l_h}\), \(h=1,...,b_j\), already contained in \(\mathcal {D}_{j}\). Thus \((l_h; h=0,...,b_j)=(l_h; h=0,...,b_{l_i})\), so that \((l_h; h\in W_j)=(l_h; h\in W_{l_i})\), and then \(\hat{\mathcal {D}}_{l_i}\) is the union of the same sets as \(\hat{\mathcal {D}}_{j}\):

$$\begin{aligned} \hat{\mathcal {D}}_{l_i}=\bigcup _{h\in W_{l_i}}\mathcal D_{l_h}=\bigcup _{h\in W_j}\mathcal D_{l_h}=\hat{\mathcal {D}}_{j}\,. \end{aligned}$$

Concerning the translation invariance property, we need to check that \(\hat{\mathcal {D}}_j\cup \{\textbf{0}\}\cup -\hat{\mathcal {D}}_j\) is invariant under translation by every point in the lattice \(\mathcal {L}_j\), that is, \(\hat{\mathcal {D}}_j\cup \{\textbf{0}\}\cup -\hat{\mathcal {D}}_j=(\hat{\mathcal {D}}_j\cup \{\textbf{0}\}\cup -\hat{\mathcal {D}}_j)_{\textbf{s}}\) for every \(\textbf{s}\in \mathcal {L}_j\). With no loss of generality consider \(\textbf{s}\in \mathcal {G}_j^+\), so that for any \(h\in W_j\) we have \((\mathcal D_{l_h})_{-\textbf{s}}\cap \{\textbf{t}\in \mathbb {Z}^k:\textbf{t}\succeq \textbf{0}\}=\{\textbf{0}\}\cup \mathcal D_{l_h}\) since \(\mathcal G_{l_h}=\mathcal G_j\). Hence \((\hat{\mathcal {D}}_j)_{-\textbf{s}}\cap \{\textbf{t}\in \mathbb {Z}^k:\textbf{t}\succeq \textbf{0}\}=\{\textbf{0}\}\cup \hat{\mathcal {D}}_j\). Moreover, for similar reasons, for any \(\textbf{z}\in \hat{\mathcal {D}}_j\cup \{\textbf{0}\}\) we have \(\textbf{s}+\textbf{z}\in \hat{\mathcal {D}}_j\), so that \(-(\hat{\mathcal {D}}_j\cup \{\textbf{0}\})_{-\textbf{s}}\subset -\hat{\mathcal {D}}_j\). It remains to show that \(-\hat{\mathcal {D}}_j\setminus -(\hat{\mathcal {D}}_j)_{-\textbf{s}}=(\hat{\mathcal {D}}_j\cup \{\textbf{0}\})_{-\textbf{s}}\setminus (((\hat{\mathcal {D}}_j)_{-\textbf{s}})^+\cup \{\textbf{0}\})\).

7.4 Proof of Proposition 6

Let \(\Xi\) be a subset of \(\mathbb {Z}^k\) such that \(\textbf{0}\in \Xi\) and let \(p\in \mathbb {N}\). Denote by \(-\textbf{v}\) the lowest point (according to \(\succ\)) of \(\Xi \cap K_p\), and denote the points of \(K_p\setminus \{\textbf{t}\in \mathbb {Z}^k:\textbf{t}\succeq -\textbf{v}\}\) by \(-\textbf{w}_1,...,-\textbf{w}_{u}\), for some \(u\in \mathbb {N}\) which depends on p. Let \(\Phi _{1},...,\Phi _{v}\), for some \(v\in \mathbb {N}\), be the subsets of \(K_{2p}^+\) such that for each \(h=1,...,v\) we have \(\Phi _{h}\cap (K_{p})_{\textbf{v}}=((\Xi \cap K_{p})_{\textbf{v}})^+\). Similarly, for every \(i=1,...,u\), let \(\Psi _{i,1},...,\Psi _{i,v_i}\), for some \(v_i\in \mathbb {N}\), be the subsets of \(K_{2p}^+\) such that for each \(h=1,...,v_i\) we have \(\Psi _{i,h}\cap (K_{p})_{\textbf{w}_{i}}=(\Xi \cap K_{p})_{\textbf{w}_{i}}\). Moreover, for every \(i=1,...,u\), denote by \(\Pi _{i,1},...,\Pi _{i,s_i}\) the non-empty subsets of \(K_p\setminus \{\textbf{t}\in \mathbb {Z}^k:\textbf{t}\succeq \textbf{v}\}\) with highest point (according to \(\succ\)) given by \(-\textbf{w}_i\). Observe that \(((\Xi \cap K_{p})_{\textbf{v}})^+=(\Xi \cap K_{p})_{\textbf{v}}\setminus \{\textbf{0}\}\) and \(((\Xi \cap K_{p})_{\textbf{w}_i})^+=(\Xi \cap K_{p})_{\textbf{w}_i}\) for every \(i=1,...,u\). First, for every \(n\in \mathbb {N}\) we have that

$$\begin{aligned}& |\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}| \\&=|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Xi \cap K_{p})_{\textbf{v}}\}|. \end{aligned}$$

Second, we have the following identity

$$\begin{aligned}&\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Xi \cap K_{p})_{\textbf{v}}\} =\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{v}})^+\\&=((\Xi \cap K_{p})_{\textbf{v}})^+\}\setminus \bigcup _{i=1,...,u}\bigcup _{h=1,...,v_i}\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Pi _{i,h}\cup \Xi \cap K_{p})_{\textbf{v}}\}. \end{aligned}$$

Since

$$\begin{aligned} \bigcup _{i=1,...,u}\bigcup _{h=1,...,v_i}\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Pi _{i,h}\cup \Xi \cap K_{p})_{\textbf{v}}\} \end{aligned}$$

is a union of disjoint sets, since

$$\begin{aligned} &\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{v}})^+=((\Xi \cap K_{p})_{\textbf{v}})^+\}\\& \supset \bigcup _{i=1,...,u}\bigcup _{h=1,...,v_i}\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Pi _{i,h}\cup \Xi \cap K_{p})_{\textbf{v}}\} \end{aligned}$$

and since for every \(i=1,...,u\)

$$\begin{aligned} &|\bigcup _{h=1,...,v_i}\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Pi _{i,h}\cup \Xi \cap K_{p})_{\textbf{v}}\}|\\ &=|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{w}_i})^+=(\Xi \cap K_{p})_{\textbf{w}_i}\}| \end{aligned}$$

we have that

$$\begin{aligned}&|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{v}})^+=((\Xi \cap K_{p})_{\textbf{v}})^+\}\\& \setminus \bigcup _{i=1,...,u}\bigcup _{h=1,...,v_i}\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Pi _{i,h}\cup \Xi \cap K_{p})_{\textbf{v}}\}|\\&=|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{v}})^+=((\Xi \cap K_{p})_{\textbf{v}})^+\}|\\&\quad - \sum _{i=1,...,u}\sum _{h=1,...,v_i}|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap (K_{p})_{\textbf{v}}=(\Pi _{i,h}\cup \Xi \cap K_{p})_{\textbf{v}}\}|\\&=|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{v}})^+=((\Xi \cap K_{p})_{\textbf{v}})^+\}|\\&\quad - \sum _{i=1,...,u}|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap ((K_{p})_{\textbf{w}_i})^+=(\Xi \cap K_{p})_{\textbf{w}_i}\}|\\&=\sum _{l=1,...,v}|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},2p)}=\Phi _{l}\}|- \sum _{i=1,...,u}\sum _{h=1,...,v_i}|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},2p)}=\Psi _{i,h}\}|, \end{aligned}$$

where the last equality follows by the definition of the \(\Phi\)’s and the \(\Psi\)’s and the fact that \(\Phi _l\ne \Phi _m\) for every \(l,m=1,...,v\) with \(l\ne m\), and that \(\Psi _{i,h}\ne \Psi _{i,k}\) for every \(i=1,...,u\) and \(h,k=1,...,v_i\) with \(h\ne k\). Now, thanks to Point (I) in Proposition 3 we have that

$$\begin{aligned}& \lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},2p)}=\Phi _{l}\}|/|\Lambda _n|\\&={\left\{ \begin{array}{ll} \lambda _{2p,z}=\sum _{x\in I^{(z)}_{2p}}\lambda _x&{} \text{if }\ \mathcal {D}_z\cap K_{2p}=\Phi _{l}\ \text {for some}\ z\in \mathbb {N},\\ 0&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

for \(l=1,...,v\), and similarly for \(\Psi _{i,h}\) for \(i=1,...,u\) and \(h=1,...,v_i\). Then, we obtain that the following limit exists

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|/|\Lambda _n|. \end{aligned}$$

Further, observe that for \(p'>p\) we have the inclusion

$$\begin{aligned} \{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p'}=\Xi \cap K_{p'}\}\subseteq \{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}, \end{aligned}$$

thus, the following limit exists

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|/|\Lambda _n|. \end{aligned}$$

This concludes the first part of the statement.
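The monotonicity used in this step can be illustrated on a toy configuration (a sketch only; the random point set `Lam`, the pattern `Xi`, and the box sizes are hypothetical choices, not objects from the proof):

```python
# Toy check (illustrative only): if Lam recentred at t agrees with Xi on
# the larger box K_{p'}, it agrees with Xi on the smaller box K_p, so the
# counted sets are nested and their normalized counts decrease in p.
import random

random.seed(0)

def K(p):
    # the box K_p = [-p, p]^2 in Z^2
    return {(a, b) for a in range(-p, p + 1) for b in range(-p, p + 1)}

Lam = {(a, b) for a in range(30) for b in range(30)
       if random.random() < 0.5}        # hypothetical point configuration
Xi = {(0, 0), (1, 0), (0, 1)}           # hypothetical pattern containing 0

def matching(p):
    # points t of Lam around which (Lam)_{-t} agrees with Xi on K_p
    return {t for t in Lam
            if {(a - t[0], b - t[1]) for (a, b) in Lam} & K(p) == Xi & K(p)}

for p in range(1, 4):
    assert matching(p + 1) <= matching(p)  # inclusion as p grows
print([len(matching(p)) for p in range(1, 5)])
```

The inclusion holds for any configuration: agreement on \(K_{p'}\) restricted to \(K_p\subseteq K_{p'}\) gives agreement on \(K_p\), which is exactly why the inner limit is monotone in p and the double limit exists.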

Now, let \(\Xi ^+\ne \mathcal {D}_j\) for every \(1\le j\le q\) and fix \(\varepsilon >0\). As \(\sum _{i=1}^q\lambda _i=1\), there exists m sufficiently large such that \(\sum _{i\ge m}\lambda _i< \varepsilon\), and there exists p sufficiently large such that \(\Xi \cap K_{p}^+\ne \mathcal {D}_{i}\cap K_{p}\) for every \(i\le m\). Then,

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|/|\Lambda _n|<\varepsilon \end{aligned}$$

Thus,

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi \cap K_{p}\}|/|\Lambda _n|=0, \end{aligned}$$

which concludes the proof.

7.5 Proof of Proposition 7

Consider first the case of bounded \(\Xi _b\). In this case we have \(|\Xi _b|<1/\gamma _b+1\); otherwise we would reach a contradiction, because asymptotically we would end up with more points than there are in \(\Lambda _n\). Further, denoting by \(-\textbf{z}\) its lowest point according to \(\succ\), we have

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=(\Xi _b)_{\textbf{z}}\cap K_{p}^+\}|/|\Lambda _n|>0, \end{aligned}$$

which implies that \((\Xi _b)_{\textbf{z}}=\mathcal {D}_j\) for some \(j=1,...,q\). Further, since

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\{\textbf{0}\}\cup \mathcal {D}_j\cap K_{p}\}|/|\Lambda _n|>0, \end{aligned}$$

and since by (4) \(\mathcal {L}_j\cup \bigcup _{i=1}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}=\{\textbf{0}\}\cup \mathcal {D}_j\) we obtain that \(\Xi _b=(\{\textbf{0}\}\cup \mathcal {D}_j)_{-\textbf{z}}\) and the first statement follows.

Now, let \(\Xi _b\) be unbounded. We show that \(\Xi _b\) is a finite union of translated lattices. Consider any point \(\textbf{s}\) in \(\Xi _b\). Let \(g_p\in \mathbb {N}\) be such that \(\Xi _b\cap K_{g_p}\supset \Xi _b\cap (K_p^+)_{-\textbf{s}}\). Then, for every \(n\in \mathbb {N}\)

$$\begin{aligned} |\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=(\Xi _b)_{-\textbf{s}}\cap K_{p}^+\}|\ge |\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{g_p}=\Xi _b\cap K_{g_p}\}| \end{aligned}$$

and since this holds for every p large enough, we get

$$\begin{aligned} \lim \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=(\Xi _b)_{-\textbf{s}}\cap K_{p}^+\}|/|\Lambda _n|\ge \gamma _b. \end{aligned}$$

Then, we have that \(((\Xi _b)_{-\textbf{s}})^+=\mathcal {D}_k\) for some \(k=1,...,q\). By Proposition 4, we deduce that \(\Xi _b\) is a union of translated lattices. Further, this union is finite because \(\gamma _b\) is strictly positive.

Now, consider a point \(\textbf{r}\) on the most preceding lattice of \(\Xi _b\). Then, \(((\Xi _b)_{-\textbf{r}})^+=\mathcal {D}_j\), for some \(j=1,...,q\), and so \(((\Xi _b)_{-\textbf{r}})=\bigcup _{i=0}^{b_j}(\mathcal L_{l_i})_{\textbf{z}_{l_i}}\), which concludes the proof of the first statement.

Let us now prove the second statement. Let \(p\in \mathbb {N}\). Define the equivalence relation \(i\sim j \Leftrightarrow i\in F_p^{(j)}\), \(1\le j\le q'\), and consider the partition \(\mathfrak {P}'_p\) of \(\{1,\ldots ,q'\}\) generated by the equivalence classes of \(\sim\). Recall the definition of \(\mathfrak {P}_p\) from the proof of Point (I) in Proposition 3. For every \(l\in \mathfrak {P}_p\), let \(\mathfrak {P}'_{p,l}\subset \mathfrak {P}'_p\) be such that \(i\in \mathfrak {P}'_{p,l}\) if \(\Xi _i\cap K_p^+=\mathcal {D}_l\cap K_p^+\). Since

$$\begin{aligned} |\{\textbf{t}\in \Lambda _{n}:\Lambda _{n}^{(\textbf{t},p)}=\mathcal {D}_{j}\cap K_{p}\}|=\sum _{i=1,...,u}|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Pi _{i}\cup \{\textbf{0}\}\cup \mathcal {D}_{j}\cap K_{p}\}| \end{aligned}$$

where \(\Pi _1,...,\Pi _u\), \(u\in \mathbb {N}\), are the subsets of \(K_p^-\setminus \{\textbf{0}\}\), we obtain that \(\lambda _{p,j}=\sum _{i\in \mathfrak {P}'_{p,j}}\gamma _{p,i}\). Thus, we have that \(1=\sum _{j\in \mathfrak {P}_p}\lambda _{p,j}=\sum _{j\in \mathfrak {P}_p}\sum _{i\in \mathfrak {P}'_{p,j}}\gamma _{p,i}=\sum _{i\in \mathfrak {P}'_p}\gamma _{p,i}\). By applying Fatou’s lemma, we get that

$$\begin{aligned} 1&=\lim \limits _{p\rightarrow \infty }\sum _{i\in \mathfrak {P}'_p}\gamma _{p,i}=\liminf \limits _{p\rightarrow \infty }\sum _{i\in \mathfrak {P}'_p}\gamma _{p,i}\\&\ge \sum _{j=1}^{q'}\liminf \limits _{p\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi _j\cap K_{p}\}|}{|\Lambda _n|}=\sum _{j=1}^{q'}\gamma _j. \end{aligned}$$

Hence, \(1\ge \sum _{j=1}^{q'}\gamma _j\). By applying the same arguments as the ones used in the proof of point (I) in Proposition 3 we have that \(\sum _{i\in F_p^{(j)}}\gamma _i\ge \gamma _{j,p}\) for every \(j\in \mathfrak {P}'_p\), which implies that \(\sum _{i=1}^{q'}\gamma _i=\sum _{j\in \mathfrak {P}'_p}\sum _{i\in F_p^{(j)}}\gamma _i\ge \sum _{j\in \mathfrak {P}'_p}\gamma _{j,p}=1\). Therefore, combining the two results we have that \(\sum _{i=1}^{q'}\gamma _i=1\) and \(\sum _{i\in F_p^{(j)}}\gamma _i=\gamma _{j,p}\) for every \(j\in \mathfrak {P}'_p\).
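The Fatou step above is the series version of Fatou's lemma (counting measure on the index set): \(\liminf _{p}\sum _{i}a_{p,i}\ge \sum _{i}\liminf _{p}a_{p,i}\), with possibly strict inequality when mass escapes to infinity. A minimal numerical sketch (the array `a` is a hypothetical example, unrelated to the weights in the proof):

```python
# Fatou's lemma for series: liminf_p sum_i a[p][i] >= sum_i liminf_p a[p][i].
# Toy example where the inequality is strict because mass escapes:
# row p puts all of its mass on index p.
P, I = 40, 50
a = [[1.0 if i == p else 0.0 for i in range(I)] for p in range(P)]

row_sums = [sum(row) for row in a]            # every row sums to 1
tail_liminf_of_sums = min(row_sums[P // 2:])  # tail infimum as liminf proxy
sum_of_liminfs = sum(min(a[p][i] for p in range(P // 2, P))
                     for i in range(I))       # each column's tail min is 0

assert tail_liminf_of_sums >= sum_of_liminfs
print(tail_liminf_of_sums, sum_of_liminfs)
```

In the proof the inequality runs in this direction as well, which is why only \(1\ge \sum _{j}\gamma _j\) is obtained from Fatou and the reverse bound requires the separate counting argument that follows.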

7.6 Proof of Proposition 8

Let \(j\in I^*\). Consider \((\Xi ^{*}_j)_{-\textbf{x}}\) for every \(\textbf{x}\in \mathcal {E}_j\) and observe that these are the only possible \(\Xi\)s that can be formed by translations of \(\Xi ^{*}_j\). The weights of the sets \((\Xi ^{*}_j)_{-\textbf{x}}\), \(\textbf{x}\in \mathcal {E}_j\), are all equal to \(\gamma _j^*\), by the following argument. Consider a point \(\textbf{x}\in \mathcal {E}_j\setminus \{\textbf{0}\}\) and let \(\gamma _b\) be the weight of \((\Xi ^{*}_j)_{-\textbf{x}}\). For any \(p\in \mathbb {N}\), let \(g_p\in \mathbb {N}\) be such that \(\Xi ^{*}_j\cap K_{g_p}\supset \Xi ^{*}_j\cap (K_p)_{\textbf{x}}\). Then, for every \(n\in \mathbb {N}\)

$$\begin{aligned} |\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=(\Xi ^{*}_j)_{-\textbf{x}}\cap K_{p}\}|\ge |\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{g_p}=\Xi ^{*}_j\cap K_{g_p}\}| \end{aligned}$$

which implies that \(\gamma _b\ge \gamma _j^*\). Conversely, for any \(p\in \mathbb {N}\), let \(f_p\in \mathbb {N}\) be such that \((\Xi ^{*}_j)_{-\textbf{x}}\cap K_{f_p}\supset (\Xi ^{*}_j)_{-\textbf{x}}\cap (K_p)_{-\textbf{x}}\). Then, for every \(n\in \mathbb {N}\)

$$\begin{aligned} |\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi ^{*}_j\cap K_{p}\}|\ge |\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{f_p}=(\Xi ^{*}_j)_{-\textbf{x}}\cap K_{f_p}\}| \end{aligned}$$

which implies that \(\gamma _j^*\ge \gamma _b\), hence \(\gamma _j^*=\gamma _b\). Thus, for each \(j\in I^*\), the sum of the weights of the \(\Xi\)s composed by the translations of \(\Xi ^{*}_j\) is \(|\mathcal {E}_j|\gamma ^*_j\). Since each \(\Xi _b\), \(b=1,...,q'\), is the translation of a \(\Xi ^{*}_k\) for some \(k\in I^*\), we obtain that \(\sum _{j\in I^*}\gamma ^*_j|\mathcal {E}_j|=\sum _{i=1}^{q'}\gamma _i=1\), where the last equality comes from Proposition 7.

7.7 Proof of Proposition 9

First, we have that \(m_l\rightarrow \infty\) as \(l\rightarrow \infty\) and, since by Proposition 8 we know that \(\sum _{i\in I^*}\gamma _i^*|\mathcal {E}_i|=1\), we obtain that \(\sum _{i\in I^*,i>m_l}\gamma _i^*|\mathcal {E}_i|\rightarrow 0\) as \(l\rightarrow \infty\).

For every \(n\in \mathbb {N}\) and \(j\in I^*\) with \(j<m_{4l}\), consider the set \(S_{j,4l}\). We let the dependency on n be implicit. Let \(j,i\in I^*\) with \(j,i<m_{4l}\) and \(i\ne j\). In the following we show that for every \(\textbf{t}\in S_{j,4l}\) and \(\textbf{s}\in S_{i,4l}\) we have that \((\mathcal {D}_j\cap K_{2l})_{\textbf{t}}\cap (\mathcal {D}_i\cap K_{2l})_{\textbf{s}}=\emptyset\). The idea behind the following proof is that by taking points in \(\Lambda _{n}\) with a certain structure on \(K_{4l}\) around them (i.e. \(\Xi ^{*}_i\cap K_{4l}\) for \(i\in I^*\) with \(i<m_{4l}\)), where l is large enough (see above), we ensure that the sets \(K^+_{2l}\) around them do not intersect for different structures (i.e. \((\mathcal {D}_j\cap K_{2l})_{\textbf{t}}\cap (\mathcal {D}_i\cap K_{2l})_{\textbf{s}}=\emptyset\), for every \(\textbf{t}\in S_{j,4l}\) and \(\textbf{s}\in S_{i,4l}\) and every \(i,j\in I^*\) with \(i,j<m_{4l}\) and \(i\ne j\)).

First, consider the case of \(\mathcal {D}_i\) and \(\mathcal {D}_j\) bounded. Notice that \(\textbf{t}\ne \textbf{s}\) because \(\mathcal {D}_j\cap K_{4l}\ne \mathcal {D}_i\cap K_{4l}\) and so \(\Xi ^{*}_j\cap K_{4l}\ne \Xi ^{*}_i\cap K_{4l}\). Thus, if \((\mathcal {D}_j\cap K_{2l})_{\textbf{t}}\) and \((\mathcal {D}_i\cap K_{2l})_{\textbf{s}}\) had an intersection then one of the two \(\Xi ^*\cap K_{4l}\)’s would have at least one point in \(K_{4l}^-\setminus \{\textbf{0}\}\) (in particular at \(\textbf{s}-\textbf{t}\) if \(\textbf{t}\succ \textbf{s}\), or at \(\textbf{t}-\textbf{s}\) if \(\textbf{s}\succ \textbf{t}\)), which is impossible by definition of the bounded \(\Xi ^*\)’s, because their lowest point (according to \(\succ\)) is \(\textbf{0}\).

Second, consider the case of \(\mathcal {D}_i\) bounded and \(\mathcal {D}_j\) unbounded. Then, as before, \(\textbf{t}\ne \textbf{s}\). Moreover, if \((\mathcal {D}_j\cap K_{2l})_{\textbf{t}}\) and \((\mathcal {D}_i\cap K_{2l})_{\textbf{s}}\) had an intersection and \(\textbf{s}\succ \textbf{t}\), then \(\Xi ^{*}_i\cap K_{4l}\) would have at least one point in \(K_{4l}^-\setminus \{\textbf{0}\}\), which is impossible. If they had an intersection and \(\textbf{t}\succ \textbf{s}\), then we would have \(\Xi ^{*}_i\cap K_{2l}^+=\mathcal {D}_{l_h}\cap K_{2l}\) for some \(l_h=l_1,...,l_{b_j}\), because \((K_{4l})_{\textbf{t}}\supset (K_{2l})_{\textbf{s}}\) and so the structure of \((\Xi ^{*}_j\cap K_{4l})_{\textbf{t}}\) implies that \((\Lambda _n)_{-\textbf{s}}\cap K^+_{2l}=\mathcal {D}_{l_h}\cap K_{2l}\) for some \(l_h=l_1,...,l_{b_j}\). However, the equality \(\Xi ^{*}_i\cap K_{2l}^+=\mathcal {D}_{l_h}\cap K_{2l}\) is impossible by construction.

Third, consider the case of \(\mathcal {D}_i\) and \(\mathcal {D}_j\) unbounded. Then, as before, \(\textbf{t}\ne \textbf{s}\). Further, if \(\textbf{t}\succ \textbf{s}\), then we would have \(\Xi ^{*}_i\cap K_{2l}^+=\mathcal {D}_{l_h}\cap K_{2l}\) for some \(l_h=l_1,...,l_{b_j}\) as in the previous paragraph, which is impossible by construction. We conclude by observing that the case \(\textbf{s}\succ \textbf{t}\) is symmetric to the case \(\textbf{t}\succ \textbf{s}\).

Now, we bound the number of points in \(S_{i,4l}\) for every \(i<m_{4l}\) and \(i\in I^*\). For every \(\mathcal {D}_i\) bounded with \(i<m_{4l}\) and \(i\in I^*\), let \(S'_{i,4l}=S_{i,4l}\) if \(|S_{i,4l}|\le \gamma ^{*}_i|\Lambda _{n}|\) and let \(S'_{i,4l}\) be a subset of \(S_{i,4l}\) with \(|S'_{i,4l}|=\gamma ^{*}_i|\Lambda _{n}|\) if \(|S_{i,4l}|>\gamma ^{*}_i|\Lambda _{n}|\).

For the unbounded case we proceed as follows. Consider any \(\mathcal {D}_i\) unbounded with \(i<m_{4l}\) and \(i\in I^*\). Notice that \((\mathcal {E}_{i})_{\textbf{s}}\cap (\mathcal {E}_{i})_{\textbf{t}}=\emptyset\) for every \(\textbf{s},\textbf{t}\in S_{i,4l}\), because \(\mathcal {E}_{i}\subset K_l^+\cup \{\textbf{0}\}\) and because we are considering \(\Xi ^{*}_i\cap K_{4l}\), so that an intersection would violate the structure of \(\Xi ^{*}_i\cap K_{4l}\). For any \(\textbf{t}\in S_{i,4l}\), consider the set \((\mathcal {D}_{i}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in (S_{i,4l})_{-\textbf{t}},\textbf{s}\prec \textbf{0}}(\mathcal {E}_{i})_{\textbf{s}}\). Consider the set of points \(\textbf{t}\in S_{i,4l}\) such that

$$\begin{aligned} (\mathcal {D}_{i}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in (S_{i,4l})_{-\textbf{t}},\textbf{s}\prec \textbf{0}}(\mathcal {E}_{i})_{\textbf{s}}=(\mathcal {D}_{i}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in -\mathcal {G}_i\setminus \{\textbf{0}\}}(\mathcal {E}_{i})_{\textbf{s}} \end{aligned}$$

and denote it by \(\tilde{S}_{i,4l}\). We remark that

$$\begin{aligned} (\mathcal {D}_{i}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in (S_{i,4l})_{-\textbf{t}},\textbf{s}\prec \textbf{0}}(\mathcal {E}_{i})_{\textbf{s}}=(\mathcal {D}_{i}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in (\cup _{i\in I^*,i<m_{4l}}S_{i,4l})_{-\textbf{t}},\textbf{s}\prec \textbf{0}}(\mathcal {E}_{i})_{\textbf{s}} \end{aligned}$$

because by construction for every \(\textbf{t}\in S_{i,4l}\) and \(\textbf{s}\in S_{j,4l}\), where \(j\in I^*\) with \(j\ne i\) and \(j<m_{4l}\), we have that \((\mathcal {D}_i\cap K_{2l})_{\textbf{t}}\cap (\mathcal {D}_j\cap K_{2l})_{\textbf{s}}=\emptyset\) and so that \((\mathcal {D}_i\cap K_{2l})_{\textbf{t}}\cap (\mathcal {E}_j)_{\textbf{s}}=\emptyset\).

Now, if \(|\tilde{S}_{i,4l}|>\gamma ^{*}_i|\Lambda _{n}|\) and we arbitrarily take out a point \(\textbf{x}\in S_{i,4l}\), then we might end up taking out more than one point of \(\tilde{S}_{i,4l}\), because the points in \(\tilde{S}_{i,4l}\) require the existence of certain points of \(S_{i,4l}\) around them. Thus, we need to exhibit a procedure in which, by taking out a suitable point of \(S_{i,4l}\), we take out one and only one point of \(\tilde{S}_{i,4l}\).

Let \(v:=|\{\textbf{s}\in -\mathcal {G}_i\setminus \{\textbf{0}\}:(\mathcal {D}_{i}\cap K_{2l})\cap (\mathcal {E}_{i})_{\textbf{s}}\ne \emptyset \}|\). For each \(\textbf{t}\in \tilde{S}_{i,4l}\), let \(\textbf{s}_{\textbf{t},1}\prec ...\prec \textbf{s}_{\textbf{t},v}\prec \textbf{0}\) be the points in \((S_{i,4l})_{-\textbf{t}}\) such that \((\mathcal {D}_{i}\cap K_{2l})\cap (\mathcal {E}_{i})_{\textbf{s}_{\textbf{t},h}}\ne \emptyset\), \(h=1,...,v\). Consider the lowest point in \(\tilde{S}_{i,4l}\) according to \(\succ\) and denote it by \(\textbf{w}\). Then, by taking out \(\textbf{s}_{\textbf{w},1}\) from \(S_{i,4l}\) we only take out \(\textbf{w}\) from \(\tilde{S}_{i,4l}\) (but not from \(S_{i,4l}\)). This is because \(\textbf{s}_{\textbf{w},1}\) is the lowest among \(\textbf{s}_{\textbf{w},1},...,\textbf{s}_{\textbf{w},v}\) and, since \(\textbf{w}\) is the lowest point in \(\tilde{S}_{i,4l}\), this implies that \(\textbf{s}_{\textbf{w},1}\ne \textbf{s}_{\textbf{t},h}\) for every \(\textbf{t}\in \tilde{S}_{i,4l}\setminus \{\textbf{w}\}\) and every \(h=1,...,v\). Thus, by taking out \(\textbf{s}_{\textbf{w},1}\) from \(S_{i,4l}\) we are not taking out any other point of \(\tilde{S}_{i,4l}\) apart from \(\textbf{w}\).

Now, if \(|\tilde{S}_{i,4l}|\le \gamma ^{*}_i|\Lambda _{n}|\), let \(S'_{i,4l}=\tilde{S}_{i,4l}\); if \(|\tilde{S}_{i,4l}|>\gamma ^{*}_i|\Lambda _{n}|\), then, following the above procedure, reduce the points in \(S_{i,4l}\) to obtain a set, which we denote \(S^{(reduced)}_{i,4l}\), such that \(|\tilde{S}^{(reduced)}_{i,4l}|=\gamma ^{*}_i|\Lambda _{n}|\), and let \(S'_{i,4l}=\tilde{S}^{(reduced)}_{i,4l}\).

Concerning the asymptotic behaviour of \(S'_{i,4l}\): in the bounded case, since \(\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{4l}=\Xi ^{*}_i\cap K_{4l}\}|/|\Lambda _{n}|\ge \gamma ^*_i\), by continuity of the minimum function we obtain that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }|S'_{i,4l}|/|\Lambda _{n}|=\lim \limits _{n\rightarrow \infty }(|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{4l}=\Xi ^{*}_i\cap K_{4l}\}|\wedge \gamma ^*_i|\Lambda _{n}|)/|\Lambda _{n}|=\gamma ^*_i. \end{aligned}$$

In the unbounded case, notice that \(S_{i,4l}\supset \tilde{S}_{i,4l}\supset S_{i,p}\) for every \(p\ge 8l\) and every \(n\in \mathbb {N}\). Since \(\lim \limits _{n\rightarrow \infty }|S_{i,p}|/|\Lambda _{n}|=\lim \limits _{n\rightarrow \infty }|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{p}=\Xi ^{*}_i\cap K_{p}\}|/|\Lambda _{n}|\ge \gamma ^*_i\) for every \(p\in \mathbb {N}\), we have that

$$\begin{aligned} \gamma ^*_i&=\lim \limits _{n\rightarrow \infty }(|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{4l}=\Xi ^{*}_i\cap K_{4l}\}|\wedge \gamma ^*_i|\Lambda _{n}|)/|\Lambda _{n}|\\&\le \lim \limits _{n\rightarrow \infty }(|\tilde{S}_{i,4l}|\wedge \gamma ^*_i|\Lambda _{n}|)/|\Lambda _{n}| \\&\le \lim \limits _{n\rightarrow \infty }(|\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{8l}=\Xi ^{*}_i\cap K_{8l}\}|\wedge \gamma ^*_i|\Lambda _{n}|)/|\Lambda _{n}|=\gamma ^*_i.\end{aligned}$$

Since \(|\tilde{S}_{i,4l}|\wedge \gamma ^*_i|\Lambda _{n}|=|S'_{i,4l}|\) we obtain that \(|S'_{i,4l}|/|\Lambda _{n}|\rightarrow \gamma ^*_i\) as \(n\rightarrow \infty\).

Finally, since \(\sum _{i\in I^*,i<m_{4l}}|S'_{i,4l}||\mathcal {E}_i|\rightarrow \sum _{i\in I^*,i<m_{4l}}\gamma ^*_i|\mathcal {E}_i|\) as \(n\rightarrow \infty\) for every fixed l and since \(\sum _{j<m_{4l}}\gamma ^*_{j}|\mathcal {E}_{j}|\rightarrow \sum _{j\in I^*}\gamma ^*_{j}|\mathcal {E}_{j}|=1\) monotonically as \(l\rightarrow \infty\), we get \(\lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{|\Lambda _{n}|-\sum _{i\in I^*,i<m_{4l}}|S'_{i,4l}||\mathcal {E}_i|}{|\Lambda _{n}|}=1\).

8 Proofs in Sections 4 and 5

8.1 Proof of Theorem 10

First, by \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\) it suffices to show that for any \(g\in \mathbb {C}^{+}_{K}\), \((\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g))^{k_{n}}\) converges to (8) as \(n\rightarrow \infty\). Choose \(\delta > 0\) such that \(g(\textbf{x}) = 0\) for \(|\textbf{x}|\le \delta\). Then, by regular variation of \(|\textbf{X}|\) and the definition of \((a_{n})\),

$$\begin{aligned} 1-\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g) \le \mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\le \frac{|\Lambda _{r_{n}}|}{|\Lambda _n|}[|\Lambda _n|\mathbb {P}(|\textbf{X}|>\delta a_{n})]=O(1/k_{n}) \end{aligned}$$

as \(n\rightarrow \infty\). So by a Taylor expansion it suffices to prove that \(k_{n}(1-\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g))\) converges to the logarithm of (8) as \(n\rightarrow \infty\). Denote by \(t_{|\Lambda _{r_{n}}|}\) the highest element of \(\Lambda _{r_{n}}\) according to \(\prec\), by \(t_{|\Lambda _{r_{n}}|-1}\) the second highest one, ..., by \(t_{1}\) the lowest one. Let

$$\begin{aligned} \tilde{\Psi }_{m}(g)={\left\{ \begin{array}{ll} \mathbb {E}\Big [\exp \Big (-\sum _{j=m}^{|\Lambda _{r_{n}}|}g({a_{n}^\Lambda }^{-1}\textbf{X}_{t_{j}}) \Big ) \Big ],&{} 1\le m\le |\Lambda _{r_{n}}|,\\ 1, &{} m=|\Lambda _{r_{n}}|+1. \end{array}\right. } \end{aligned}$$
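The family \(\tilde{\Psi }_{m}\) is designed to telescope: \(\tilde{\Psi }_{|\Lambda _{r_{n}}|+1}(g)=1\) by definition, and \(\tilde{\Psi }_{1}(g)\) coincides with \(\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g)\), so that

```latex
1-\Psi_{\tilde{N}^{\Lambda}_{r_{n}}}(g)
  =\tilde{\Psi}_{|\Lambda_{r_{n}}|+1}(g)-\tilde{\Psi}_{1}(g)
  =\sum_{m=1}^{|\Lambda_{r_{n}}|}\big(\tilde{\Psi}_{m+1}(g)-\tilde{\Psi}_{m}(g)\big),
```

and it suffices to control each increment \(\tilde{\Psi }_{m+1}(g)-\tilde{\Psi }_{m}(g)\), which is done next.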

Recall that \(K_{l}=\{-l,...,l\}^{k}\subset \mathbb {Z}^{k}\). Using the stationarity of \(\textbf{X}\) and the fact that \(\prec\) is shift invariant, we have

$$\begin{aligned}&\tilde{\Psi }_{m+1}(g)-\tilde{\Psi }_{m}(g)\\&=\mathbb {E}\Big [e^{-\sum _{j=m+1}^{|\Lambda _{r_{n}}|}g({a_{n}^\Lambda }^{-1}\textbf{X}_{t_{j}-t_{m}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ]\\ &=\mathbb {E}\Big [e^{-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \textbf{1}(\hat{M}_{l+1,r_{n}}^{\Lambda ,|\textbf{X}|}\le \delta a_{n}) \Big ]\\&\quad+\ \mathbb {E}\Big [e^{-\sum _{j=m+1}^{|\Lambda _{r_{n}}|}g({a_{n}^\Lambda }^{-1}\textbf{X}_{t_{j}-t_{m}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \textbf{1}(\hat{M}_{l+1,r_{n}}^{\Lambda ,|\textbf{X}|}>\delta a_{n}) \Big ]\\ &=\mathbb {E}\Big [e^{-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ]\\&\quad-\ \mathbb {E}\Big [e^{-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \textbf{1}(\hat{M}_{l+1,r_{n}}^{\Lambda ,|\textbf{X}|}>\delta a_{n})\textbf{1}(|\textbf{X}_{\textbf{0}}|>\delta a_{n}) \Big ]\\&\quad+\ \mathbb {E}\Big [e^{-\sum _{j=m+1}^{|\Lambda _{r_{n}}|}g({a_{n}^\Lambda }^{-1}\textbf{X}_{t_{j}-t_{m}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \textbf{1}(\hat{M}_{l+1,r_{n}}^{\Lambda ,|\textbf{X}|}>\delta a_{n})\textbf{1}(|\textbf{X}_{\textbf{0}}|>\delta a_{n}) \Big ]\\ &=\mathbb {E}\Big [e^{-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ]+J^{(r_{n})}_{l,m} \end{aligned}$$

where \(J^{(r_{n})}_{l,m}\) is such that

$$\begin{aligned} &\lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }k_{n}\sum _{m=1}^{|\Lambda _{r_{n}}|}|J^{(r_{n})}_{l,m}|\\&\le 2 \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}(\hat{M}^{\Lambda ,|\textbf{X}|}_{l+1,r_{n}}>\delta a_{n}\,\big | |\textbf{X}_{\textbf{0}}|>\delta a_{n})|\Lambda _n|\mathbb {P}(|\textbf{X}|>\delta a_{n})=0. \end{aligned}$$

Now, for every \(m=1,...,|\Lambda _{r_{n}}|\) we have that

$$\begin{aligned}&\mathbb {E}\Big [e^{-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ]\\&=\mathbb {E}\Big [e^{-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}})}\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big )\Big ||\textbf{X}_{\textbf{0}}|>\delta a_{n} \Big ]\mathbb {P}(|\textbf{X}|>\delta a_{n})\\&\le \mathbb {P}(|\textbf{X}|>\delta a_{n}). \end{aligned}$$
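The first equality in the last display holds because \(g\) vanishes on a \(\delta\)-neighbourhood of the origin, so the factor \(1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})}\) is zero outside the event \(\{|\textbf{X}_{\textbf{0}}|>\delta a_{n}\}\). Writing \(F_{n}\) for the integrand (a label introduced here only for brevity), the step reads

```latex
\mathbb{E}\big[F_{n}\big]
 =\mathbb{E}\big[F_{n}\,\textbf{1}(|\textbf{X}_{\textbf{0}}|>\delta a_{n})\big]
 =\mathbb{E}\big[F_{n}\,\big|\,|\textbf{X}_{\textbf{0}}|>\delta a_{n}\big]\,
  \mathbb{P}(|\textbf{X}_{\textbf{0}}|>\delta a_{n}),
```

and the final bound follows from \(0\le F_{n}\le 1\).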

To lighten the notation, assume that q in Condition (\(\mathcal {D}^{\Lambda }\)) is \(\infty\), so that there are infinitely many sets \(\mathcal {D}_{i}\). By point (ii) in the construction of \(\Lambda _{r_{n}}\), only the points in \(\bigcup _{i=1}^{\infty }\{\textbf{t}\in \Lambda _{r_{n}}:\Lambda _{r_{n}}^{(\textbf{t},l)}=\mathcal {D}_{i}\cap K_{l}\}\) are asymptotically relevant, because by (i) and (ii) we have that \(|\Lambda _{r_{n}}\setminus \bigcup _{i=1}^{\infty }\{\textbf{t}\in \Lambda _{r_{n}}:\Lambda _{r_{n}}^{(\textbf{t},l)}=\mathcal {D}_{i}\cap K_{l}\}|/|\Lambda _{r_{n}}|\rightarrow 0\).

Observe that there are finitely many different subsets of \(K_{l}\). We denote their total number by \(\tau _{l}\) and the subsets themselves by \(\mathcal {U}_{l}^{(1)},...,\mathcal {U}_{l}^{(\tau _{l})}\). Thus, we have

$$\begin{aligned}& k_{n}\sum _{m=1}^{|\Lambda _{r_{n}}|}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}}) \Big )\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ]\\&=\sum _{j=1}^{\tau _{l}}k_{n}\mu _{r_{n}}^{(j)}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {U}_{l}^{(j)}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}}) \Big )\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ] \end{aligned}$$

where \(\mu _{r_{n}}^{(j)}:=|\{t_{m},m=1,...,|\Lambda _{r_{n}}|:\{t_{m+1}-t_{m},...,t_{|\Lambda _{r_{n}}|}-t_{m}\}\cap K_{l}=\mathcal {U}_{l}^{(j)}\}|\). Recall from the proof of point (I) in Proposition 3 that, by defining the equivalence relation \(i\sim j \Leftrightarrow i\in (I_p^{(j)})_{1\le j\le q}\), one considers the partition of \(\{1,\ldots ,q\}\) generated by the equivalence classes, \(\mathfrak {P}_p=\{1,\ldots ,q\}/\sim\). Then, by (i) and (ii), and in particular by point (I) in Proposition 3, we have

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{k_{n}\mu _{r_{n}}^{(j)}}{|\Lambda _{n}|}={\left\{ \begin{array}{ll} \lambda _{i,l}, &{} \text {if }\ \mathcal {U}_{l}^{(j)}= \mathcal {D}_{i}\cap K_{l}\ \text {for some}\ i\in \mathfrak {P}_l,\\ 0, &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Therefore, we have that

$$\begin{aligned}& \lim \limits _{n\rightarrow \infty }\sum _{j=1}^{\tau _{l}}k_{n}\mu _{r_{n}}^{(j)}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {U}_{l}^{(j)}}g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{t}}) \Big )\Big (1-e^{-g({a_{n}^\Lambda }^{-1}\textbf{X}_{\textbf{0}})} \Big ) \Big ]\\&=\delta ^{-\alpha }\sum _{i\in \mathfrak {P}_l}\lambda _{i,l}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {D}_{i}\cap K_{l}}g(\delta \textbf{Y}_{\textbf{t}}) \Big )\Big (1-e^{-g(\delta \textbf{Y}_{\textbf{0}})} \Big ) \Big ]\\&=\delta ^{-\alpha }\sum _{i=1}^{\infty }\lambda _{i}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {D}_{i}\cap K_{l}}g(\delta \textbf{Y}_{\textbf{t}}) \Big )\Big (1-e^{-g(\delta \textbf{Y}_{\textbf{0}})} \Big ) \Big ]\\&=\int _{0}^{\infty }\sum _{i=1}^{\infty }\lambda _{i}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {D}_{i}\cap K_{l}}g(y\mathbf {\Theta }_{\textbf{t}}) \Big )\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \Big ]d(-y^{-\alpha }). \end{aligned}$$

Notice that the above arguments hold for every l large enough. By the monotone convergence theorem we have that

$$\begin{aligned}& \lim \limits _{l\rightarrow \infty }\int _{0}^{\infty }\sum _{i=1}^{\infty }\lambda _{i}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {D}_{i}\cap K_{l}}g(y\mathbf {\Theta }_{\textbf{t}}) \Big )\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \Big ]d(-y^{-\alpha })\\&=\int _{0}^{\infty }\sum _{i=1}^{\infty }\lambda _{i}\mathbb {E}\Big [\exp \Big (-\sum _{\textbf{t}\in \mathcal {D}_{i}}g(y\mathbf {\Theta }_{\textbf{t}}) \Big )\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \Big ]d(-y^{-\alpha }). \end{aligned}$$

Finally, the existence of the limiting random measure \(N^{\Lambda }\) is ensured by Corollary 4.14 in Kallenberg (2017).

8.2 Proof of Proposition 11

In order to prove Proposition 11 we need the following lemma.

Lemma 28

Let \((\textbf{Y}_{\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k})\) be an \(\mathbb {R}^{d}\)-valued random field such that the time change formula (1) is satisfied. Let \(\mathbf {\Theta }_{\textbf{t}}=\textbf{Y}_{\textbf{t}}/|{\textbf{Y}}_{\textbf{0}}|\), \(\textbf{t}\in \mathbb {Z}^{k}\). Let \(\Upsilon\) be a subset of \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succeq \textbf{0}\}\) containing \(\textbf{0}\) and assume that \(\Upsilon \cup -\Upsilon\) is translation invariant along the points of a (not necessarily full rank) lattice. Then \(|\mathbf {\Theta }_{\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \Upsilon\) implies that \(\sum _{\textbf{t}\in \Upsilon \cup -\Upsilon }|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }<\infty\) a.s.

Proof

The proof is divided into two parts: first we show that \(|\mathbf {\Theta }_{\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \Upsilon \cup -\Upsilon\), and then that \(\sum _{\textbf{t}\in \Upsilon \cup -\Upsilon }|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }<\infty\) a.s.

Denote by \(\mathcal {L}\) the lattice and let \(\mathcal {G}:=\mathcal {L}\cap \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succeq \textbf{0}\}\). We stress that \(\textbf{0}\in \mathcal {G}\). Let \(\epsilon >0\) and suppose, for contradiction, that \(\mathbb {P}(\sum _{\textbf{h}\in -\Upsilon }\textbf{1}(|\textbf{Y}_{\textbf{h}}|>\epsilon )=\infty )>0\). Recall that \(|\textbf{Y}_{\textbf{0}}|\) follows a Pareto(\(\alpha\)) distribution, thus \(\mathbb {P}(|\textbf{Y}_{\textbf{0}}|\ge 1)=1\), and observe that the sets \(\Big \{|\textbf{Y}_{\textbf{t}}|\ge C>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|\Big \}\), \(\textbf{t}\in \mathcal {G}\), are disjoint for every \(C>0\). Then, we have that for every \(0<D\le 1\)

$$\begin{aligned} \mathbb {P}\bigg (\bigcup _{\textbf{t}\in \mathcal {G}}\Big \{|\textbf{Y}_{\textbf{t}}|\ge D>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|\Big \}\bigg )=\sum _{\textbf{t}\in \mathcal {G}}\mathbb {P}\Big (|\textbf{Y}_{\textbf{t}}|\ge D>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|\Big )= 1, \end{aligned}$$

and for every \(D'>1\)

$$\begin{aligned} \mathbb {P}\bigg (\bigcup _{\textbf{t}\in \mathcal {G}}\Big \{|\textbf{Y}_{\textbf{t}}|\ge D'>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|\Big \}\bigg )=\sum _{\textbf{t}\in \mathcal {G}}\mathbb {P}\Big (|\textbf{Y}_{\textbf{t}}|\ge D'>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|\Big )\le 1. \end{aligned}$$

Hence, we have that \(\mathbb {P}(\sum _{\textbf{h}\in -\Upsilon }\textbf{1}(|\textbf{Y}_{\textbf{h}}|>\epsilon )=\infty )=\sum _{\textbf{t}\in \mathcal {G}}\mathbb {P}(\sum _{\textbf{h}\in -\Upsilon }\textbf{1}(|\textbf{Y}_{\textbf{h}}|>\epsilon )=\infty ,|\textbf{Y}_{\textbf{t}}|\ge 1>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|)\). Consider any \(\textbf{t}\in \mathcal {G}\) such that \(\mathbb {P}(\sum _{\textbf{h}\in -\Upsilon }\textbf{1}(|\textbf{Y}_{\textbf{h}}|>\epsilon )=\infty ,|\textbf{Y}_{\textbf{t}}|\ge 1>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|)>0\). By the time change formula (1) we get

$$\begin{aligned} \infty&\ =\mathbb {E}\bigg [\sum _{\textbf{h}\in -\Upsilon }\textbf{1}(|\textbf{Y}_{\textbf{h}}|>\epsilon ,|\textbf{Y}_{\textbf{t}}|\ge 1>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|)\bigg ]\\&\ =\sum _{\textbf{h}\in -\Upsilon }\mathbb {E}\Big [\textbf{1}(|\textbf{Y}_{\textbf{h}}|>\epsilon ,|\textbf{Y}_{\textbf{t}}|\ge 1>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|)\Big ]\\&\ =\sum _{\textbf{h}\in -\Upsilon }\mathbb {P}\Big (|\textbf{Y}_{\textbf{h}}|>\epsilon ,|\textbf{Y}_{\textbf{t}}|\ge 1>\sup \limits _{\textbf{t}\prec \textbf{s},\textbf{s}\in \mathcal {G}}|\textbf{Y}_{\textbf{s}}|\Big )\\&\ =\sum _{\textbf{h}\in -\Upsilon }\int _{\epsilon }^{\infty }\mathbb {P}\Big (r|\mathbf {\Theta }_{-\textbf{h}}|>1,r|\mathbf {\Theta }_{\textbf{t}-\textbf{h}}|\ge 1>r\sup \limits _{\textbf{t}-\textbf{h}\prec \textbf{s},\textbf{s}\in (\mathcal {G})_{-\textbf{h}}}|\mathbf {\Theta }_{\textbf{s}}|\Big )d(-r^{-\alpha })\\&{\mathop {=}\limits ^{(r=q\epsilon )}}\epsilon ^{-\alpha }\sum _{\textbf{h}\in -\Upsilon }\int _{1}^{\infty }\mathbb {P}\Big (q\epsilon |\mathbf {\Theta }_{-\textbf{h}}|>1,q\epsilon |\mathbf {\Theta }_{\textbf{t}-\textbf{h}}|\ge 1>q\epsilon \sup \limits _{\textbf{t}-\textbf{h}\prec \textbf{s},\textbf{s}\in (\mathcal {G})_{-\textbf{h}}}|\mathbf {\Theta }_{\textbf{s}}|\Big )d(-q^{-\alpha }) \\&\ \le \epsilon ^{-\alpha }\int _{1}^{\infty }\sum _{\textbf{h}\in -\Upsilon }\mathbb {P}\Big (|\mathbf {\Theta }_{\textbf{t}-\textbf{h}}|\ge \frac{1}{q\epsilon }>\sup \limits _{\textbf{t}-\textbf{h}\prec \textbf{s},\textbf{s}\in (\mathcal {G})_{-\textbf{h}}}|\mathbf {\Theta }_{\textbf{s}}|\Big )d(-q^{-\alpha })\\&\ \le b\epsilon ^{-\alpha }\int _{1}^{\infty }d(-q^{-\alpha })=b\epsilon ^{-\alpha }<\infty , \end{aligned}$$

where b is the number of points of \(\Upsilon\) inside the fundamental parallelotope of \(\mathcal {L}\). Notice that we used that for every \(\textbf{t}\in \mathcal {G}\) and \(\textbf{h}\in -\Upsilon\) (i.e. \(-\textbf{h}\in \Upsilon\)) we have that \(\textbf{t}-\textbf{h}\in \Upsilon\), that is \(\textbf{t}-\textbf{h}\in (\mathcal {G})_{\textbf{x}}\) where \(\textbf{x}\) is one of the b different points in the fundamental parallelotope, which we denote by \(\hat{B}\) to be consistent with the notation of the proof of Proposition 3. Thus, we have a contradiction and so \(|\mathbf {\Theta }_{\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \Upsilon \cup -\Upsilon\).
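For completeness, the change of variables \(r=q\epsilon\) in the display above rests on the elementary identities

```latex
-r^{-\alpha}\Big|_{r=q\epsilon}=-\epsilon^{-\alpha}q^{-\alpha},
\qquad\text{hence}\qquad
d(-r^{-\alpha})=\epsilon^{-\alpha}\,d(-q^{-\alpha}),
\qquad
\int_{1}^{\infty}d(-q^{-\alpha})=\big[-q^{-\alpha}\big]_{q=1}^{\infty}=1,
```

with the domain \(r\in (\epsilon ,\infty )\) mapped to \(q\in (1,\infty )\); the integral term therefore contributes only a finite factor, which yields the contradiction.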

Now, suppose that the event \(\{|\mathbf {\Theta }_{\textbf{t}}|\rightarrow 0\ \text {as}\ |\textbf{t}|\rightarrow \infty ,\textbf{t}\in \Upsilon \cup -\Upsilon \}\) has probability 1. Denote this event by E. Observe that \(\sup _{\textbf{t}\in \Upsilon \cup -\Upsilon }|\mathbf {\Theta }_{\textbf{t}}|\) is a well defined random variable since it is the supremum of measurable functions over a countable set. Since \(|\mathbf {\Theta }_{\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \Upsilon \cup -\Upsilon\) and since we are in (a subset of) \(\mathbb {Z}^{k}\), for every \(\omega \in E\) there exist finitely many \(t_{1},...,t_{m}\in \Upsilon \cup -\Upsilon\) such that \(|\mathbf {\Theta }(t_{1})(\omega )|=...=|\mathbf {\Theta }(t_{m})(\omega )|=\sup _{\textbf{t}\in \Upsilon \cup -\Upsilon }|\mathbf {\Theta }_{\textbf{t}}(\omega )|\). For every \(\omega \in E\), let \(T^{*}(\omega )\) be such that \(|\mathbf {\Theta }(T^{*}(\omega ))(\omega )|=\sup _{\textbf{t}\in \Upsilon \cup -\Upsilon }|\mathbf {\Theta }_{\textbf{t}}(\omega )|\), with \(T^{*}(\omega )\) being the smallest of these finitely many points according to \(\succ\). That is, for every \(\textbf{t}\in \Upsilon \cup -\Upsilon\) we have

$$\begin{aligned}\{\omega :T^{*}(\omega )=\textbf{t}\}& =\{\omega :|\mathbf {\Theta }_{\textbf{t}}(\omega )|-\sup _{\textbf{s}\in \Upsilon \cup -\Upsilon ,\textbf{s}\prec \textbf{t}}|\mathbf {\Theta }_{\textbf{s}}(\omega )|>0\}\\&\cap \{\omega :|\mathbf {\Theta }_{\textbf{t}}(\omega )|-\sup _{\textbf{s}\in \Upsilon \cup -\Upsilon ,\textbf{s}\succeq \textbf{t}}|\mathbf {\Theta }_{\textbf{s}}(\omega )|=0\} \end{aligned}$$

and for \(\textbf{t}\in \mathbb {Z}^{k}\setminus (\Upsilon \cup -\Upsilon )\) we have \(\{\omega :T^{*}(\omega )=\textbf{t}\}=\emptyset\).

By construction \(|\mathbf {\Theta }_{T^{*}}|\) is a measurable function. Since the difference of two measurable functions is measurable and the intersection of two measurable sets is also measurable, we have that \(\{\omega :T^{*}(\omega )=\textbf{t}\}\) is a measurable set. Further, for any subset A of \(\mathbb {Z}^{k}\), since \((T^{*})^{-1}(A)=\cup _{\textbf{t}\in A} \{\omega :T^{*}(\omega )=\textbf{t}\}\) and since the union of measurable sets is measurable, we have that \((T^{*})^{-1}(A)\) is measurable. Thus, \(T^{*}\) is a well defined random variable. Using the same arguments we can construct \(T_{\mathcal {L}}^{*}\), where the supremum is taken over \(\mathcal {L}\) instead of \(\Upsilon \cup -\Upsilon\), and, in the same way, \(T_{(\mathcal {L})_{\textbf{x}}}^{*}\), where the supremum is taken over \((\mathcal {L})_{\textbf{x}}\) with \(\textbf{x}\in \hat{B}\).

Consider any \(\textbf{x}\in \hat{B}\). Assume that \(\mathbb {P}(\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }=\infty )>0\). We have that \(\mathbb {P}(\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }=\infty )=\sum _{\textbf{i}\in \mathcal {L}}\mathbb {P}(\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }=\infty ,T^{*}_{\mathcal {L}}=\textbf{i})=\sum _{\textbf{i}\in H}\mathbb {P}(\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }=\infty ,T^{*}_{\mathcal {L}}=\textbf{i})\), where H is the subset of \(\mathcal {L}\) s.t. \(\mathbb {P}(\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }=\infty ,T^{*}_{\mathcal {L}}=\textbf{i})>0\) for every \(\textbf{i}\in H\). Let \(\textbf{i}\in H\), then

$$\begin{aligned} \infty =\mathbb {E}\bigg [\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }\textbf{1}(T^{*}_{\mathcal {L}}=\textbf{i})\bigg ]=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }\textbf{1}(T_{\mathcal {L}}^{*}=\textbf{i})\bigg ]. \end{aligned}$$

Now, we generalise the arguments adopted in the proof of Lemma 3.3 in Wu and Samorodnitsky (2020). For each \(\textbf{i}\in \mathcal {L}\) define a function \(g_{\textbf{i}}:(\bar{\mathbb {R}}^{d})^{\mathbb {Z}^{k}}\rightarrow \mathbb {R}\) as follows. If \((\mathbf {\Theta }_{\textbf{s}}, \textbf{s}\in \mathbb {Z}^{k})\) is such that

$$\begin{aligned} |\mathbf {\Theta }_{\textbf{j}}|<|\mathbf {\Theta }_{\textbf{i}}|\quad \text {for }\ \textbf{j}\prec \textbf{i}\ \text { and }\ \textbf{j}\in \mathcal {L},\quad |\mathbf {\Theta }_{\textbf{j}}|\le |\mathbf {\Theta }_{\textbf{i}}|\quad \text {for }\ \textbf{j}\succeq \textbf{i}\ \text{ and }\ \textbf{j}\in \mathcal {L}, \end{aligned}$$

then set \(g_{\textbf{i}}(\mathbf {\Theta }_{\textbf{s}}, \textbf{s}\in \mathbb {Z}^{k})=1\); otherwise set \(g_{\textbf{i}}(\mathbf {\Theta }_{\textbf{s}}, \textbf{s}\in \mathbb {Z}^{k})=0\). Observe that \(g_{\textbf{i}}\) is a bounded measurable function and that, for any \(\textbf{t}\in \mathbb {Z}^{k}\), \(g_{\textbf{i}}(\cdot -\textbf{t})=1\) when

$$\begin{aligned} |\mathbf {\Theta }_{\textbf{j}-\textbf{t}}|<|\mathbf {\Theta }_{\textbf{i}-\textbf{t}}|\quad \text {for }\ \textbf{j}\prec \textbf{i}\ \text{ and }\ \textbf{j}\in \mathcal {L},\quad |\mathbf {\Theta }_{\textbf{j}-\textbf{t}}|\le |\mathbf {\Theta }_{\textbf{i}-\textbf{t}}|\quad \text {for }\ \textbf{j}\succeq \textbf{i}\ \text { and }\ \textbf{j}\in \mathcal {L}, \end{aligned}$$
$$\begin{aligned} \Leftrightarrow |\mathbf {\Theta }_{\textbf{j}}|<|\mathbf {\Theta }_{\textbf{i}-\textbf{t}}|\quad \text {for }\ \textbf{j}\prec \textbf{i}-\textbf{t}\ \text { and }\ \textbf{j}\in (\mathcal {L})_{-\textbf{t}},\quad |\mathbf {\Theta }_{\textbf{j}}|\le |\mathbf {\Theta }_{\textbf{i}-\textbf{t}}|\quad \text {for }\ \textbf{j}\succeq \textbf{i}-\textbf{t}\ \text { and }\ \textbf{j}\in (\mathcal {L})_{-\textbf{t}}, \end{aligned}$$

and zero otherwise. Then, by the time change formula we have

$$\begin{aligned} \infty&=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }\textbf{1}(T_{\mathcal {L}}^{*}=\textbf{i})\bigg ]=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }g_{\textbf{i}}(\mathbf {\Theta }_{\textbf{s}}, \textbf{s}\in \mathbb {Z}^{k})\bigg ]\\&=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [g_{\textbf{i}}(\mathbf {\Theta }(\textbf{s}-\textbf{t}), \textbf{s}\in \mathbb {Z}^{k})\textbf{1}(\mathbf {\Theta }(-\textbf{t})\ne \textbf{0})\bigg ]\\&\le \sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [g_{\textbf{i}}(\mathbf {\Theta }(\textbf{s}-\textbf{t}), \textbf{s}\in \mathbb {Z}^{k})\bigg ]\\&=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [\textbf{1}(T_{(\mathcal {L})_{-\textbf{t}}}^{*}=\textbf{i}-\textbf{t})\bigg ]=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}\mathbb {E}\bigg [\textbf{1}(T_{(\mathcal {L})_{-\textbf{x}}}^{*}=\textbf{i}-\textbf{t})\bigg ]\\&=\sum _{\textbf{t}\in (\mathcal {L})_{-\textbf{x}}}\mathbb {E}\bigg [\textbf{1}(T_{(\mathcal {L})_{-\textbf{x}}}^{*}=\textbf{i}+\textbf{t})\bigg ]=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{r}-\textbf{x}}}\mathbb {E}\bigg [\textbf{1}(T_{(\mathcal {L})_{\textbf{r}-\textbf{x}}}^{*}=\textbf{i}+\textbf{t})\bigg ]\\&=\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{r}-\textbf{x}}}\mathbb {E}\bigg [\textbf{1}(T_{(\mathcal {L})_{\textbf{r}-\textbf{x}}}^{*}=\textbf{t})\bigg ]=1, \end{aligned}$$

which is a contradiction. Notice that we used the fact that, by construction, for every \(\textbf{t}\in (\mathcal {L})_{\textbf{x}}\), we have \((\mathcal {L})_{-\textbf{t}}=(\mathcal {L})_{-\textbf{x}}\), \(-\textbf{t}\in (\mathcal {L})_{-\textbf{x}}\), and \((\mathcal {L})_{-\textbf{x}}=(\mathcal {L})_{\textbf{r}-\textbf{x}}\), where \(\textbf{r}\) is the highest point in the closure of the fundamental parallelotope (as defined in the proof of Proposition 3), and that \(\textbf{t}+\textbf{i}\in (\mathcal {L})_{\textbf{r}-\textbf{x}}\) for any \(\textbf{i}\in \mathcal {L}\).

Thus, we have \(\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }<\infty\) almost surely. The same arguments can be repeated for every \(\textbf{x}\in \hat{B}\), using the fact, proven in the proof of Proposition 3, that \(\textbf{r}-\textbf{x}\in \hat{B}\) for every \(\textbf{x}\in \hat{B}\). Therefore, since \(\hat{B}\) is finite, we conclude that \(\sum _{\textbf{t}\in \Upsilon \cup -\Upsilon }|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }=\sum _{\textbf{x}\in \hat{B}}\sum _{\textbf{t}\in (\mathcal {L})_{\textbf{x}}}|\mathbf {\Theta }_{\textbf{t}}|^{\alpha }<\infty\) a.s. \(\square\)
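As a toy illustration of the conclusion of Lemma 28 (not of its proof), consider a hypothetical spectral field on \(\mathbb {Z}\) with geometric decay, \(|\mathbf {\Theta }_{t}|=\rho ^{|t|}\) for some \(0<\rho <1\): then \(|\mathbf {\Theta }_{t}|\rightarrow 0\) as \(|t|\rightarrow \infty\) and \(\sum _{t\in \mathbb {Z}}|\mathbf {\Theta }_{t}|^{\alpha }=(1+\rho ^{\alpha })/(1-\rho ^{\alpha })<\infty\). A minimal numerical check of this closed form (all names below are illustrative, not from the paper):

```python
# Toy check: a hypothetical geometrically decaying spectral field on Z,
# |Theta_t| = rho**|t| with 0 < rho < 1, has a finite alpha-norm, and
# sum_t |Theta_t|**alpha is a geometric series with closed form
# (1 + rho**alpha) / (1 - rho**alpha).

def alpha_norm_sum(rho: float, alpha: float, trunc: int) -> float:
    """Truncated sum of |Theta_t|**alpha over t = -trunc, ..., trunc."""
    return sum(rho ** (alpha * abs(t)) for t in range(-trunc, trunc + 1))

rho, alpha = 0.6, 1.5
closed_form = (1 + rho ** alpha) / (1 - rho ** alpha)

# The truncated sums increase to the closed-form limit.
approx = alpha_norm_sum(rho, alpha, trunc=200)
assert abs(approx - closed_form) < 1e-12
```

The truncated \(\alpha\)-norm sums increase monotonically to the closed-form limit, mirroring the monotone convergence arguments used throughout this section.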

We first prove that for every \(\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}\) there exists an \(n_{\textbf{t}}\in \mathbb {N}\) such that \(\textbf{t}\in R_{0,\Lambda _{m}}\) for every \(m>n_{\textbf{t}}\). First, notice that if \(\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}\) then \(\textbf{t}\in \mathcal {D}_{i}\) for some \(i\in \mathbb {N}\). Then, by point (I) in Proposition 3, for every \(i\in \mathbb {N}\) and every \(p\in \mathbb {N}\) there exists an \(n_{i,p}^{*}\in \mathbb {N}\) such that for every \(m>n_{i,p}^{*}\) we have \(|\{\textbf{t}\in \Lambda _{m}:\Lambda _{m}^{(\textbf{t},p)}=\mathcal {D}_{i}\cap K_{p}\}|\ge 1\). Thus, for every \(i,p\in \mathbb {N}\) there exists an \(n^{*}_{i,p}\) such that \(\mathcal {D}_{i}\cap K_{p}\subset R_{0,\Lambda _{m}}\) for every \(m>n^{*}_{i,p}\). Therefore, for every \(\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}\) (notice that each such \(\textbf{t}\) satisfies \(\textbf{t}\in \mathcal {D}_{i}\cap K_{p}\) for some \(i,p\in \mathbb {N}\)) there exists an \(n_{\textbf{t}}\in \mathbb {N}\) (namely \(n^{*}_{i,p}\)) such that \(\textbf{t}\in R_{0,\Lambda _{m}}\) for every \(m>n_{\textbf{t}}\).

Now, choose \((d_{n})_{n\in \mathbb {N}}\) such that \(d_{n}\) is the largest integer satisfying \(\max _{|\textbf{t}|\le d_{n},\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}}n_{\textbf{t}}< r_{n}\). Notice that \(\{|\textbf{t}|\le d_{n},\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}\}\subset R_{0,\Lambda _{r_{n}}}\). It is possible to see that \(d_{n}\rightarrow \infty\) as \(n\rightarrow \infty\) and that for every \(l<r_{n}\)

$$\begin{aligned} &\mathbb {P}\Big (\max _{l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big ) \\&=\mathbb {P}\Big (\max _{l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j} \,|\,\textbf{t}\in R_{l,\Lambda _{r_{n}}} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big )\\&\le \mathbb {P}\Big (\max _{\textbf{t}\in R_{l,\Lambda _{r_{n}}} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big ). \end{aligned}$$

Therefore, condition (AC\(^{\Lambda }_{\succeq }\)) implies the following anti-clustering condition:

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\max _{l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\Big )=0. \end{aligned}$$
(22)

Now, for any \(z>0\), by the regular variation of \(|\textbf{X}_{\textbf{0}}|\) and by (22) we have that

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\bigg (\max _{l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>za_{n}^\Lambda x\,\big ||\textbf{X}_{\textbf{0}}|>a_{n}^\Lambda x\bigg )=0. \end{aligned}$$

In other words, for any \(\epsilon >0\) and \(z>0\), there exists \(l>0\) such that for all \(w>l\)

$$\begin{aligned} \mathbb {P}\bigg (\max _{l\le |\textbf{t}|\le w\,|\,\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j} }|\textbf{Y}_{\textbf{t}}|>z\bigg )\le \epsilon . \end{aligned}$$
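Letting \(w\rightarrow \infty\) (the events are increasing in w) and then \(l\rightarrow \infty\) in this bound gives, for every \(z>0\),

```latex
\mathbb{P}\Big(\limsup_{|\textbf{t}|\rightarrow\infty,\;\textbf{t}\in\bigcup_{j=1}^{\infty}\mathcal{D}_{j}}|\textbf{Y}_{\textbf{t}}|>z\Big)
 \le\lim_{l\rightarrow\infty}\mathbb{P}\Big(\sup_{|\textbf{t}|\ge l,\;\textbf{t}\in\bigcup_{j=1}^{\infty}\mathcal{D}_{j}}|\textbf{Y}_{\textbf{t}}|>z\Big)=0,
```

and a union over \(z=1/m\), \(m\in \mathbb {N}\), then gives \(|\textbf{Y}_{\textbf{t}}|\rightarrow 0\) a.s. along \(\bigcup _{j=1}^{\infty }\mathcal {D}_{j}\).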

This implies that \(\mathbb {P}(\lim \limits _{|\textbf{t}|\rightarrow \infty }|\textbf{Y}_{\textbf{t}}|=0)=1\) and so \(\mathbb {P}(\lim \limits _{|\textbf{t}|\rightarrow \infty }|\mathbf {\Theta }_{\textbf{t}}|=0)=1\), where the limits are taken along \(\textbf{t}\in \bigcup _{j=1}^{\infty }\mathcal {D}_{j}\). Then, from Lemmas 3 and 28 we obtain the statement.

8.3 Proof of Theorem 12

By a change of variables we have

$$\begin{aligned}&\int _{0}^{\infty }\mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \mathcal {D}_{j}}g(y\mathbf {\Theta }_{\textbf{t}})}\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}})} \Big ) \bigg ]d(-y^{-\alpha })\\&=\mathbb {E}\Big [\Vert \mathbf {\Theta }\Vert _{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\alpha }^{\alpha }\int _{0}^{\infty }e^{-\sum _{\textbf{t}\in \mathcal {D}_{j}}g(y\mathbf {\Theta }_{\textbf{t}}/\Vert \mathbf {\Theta }\Vert _{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\alpha })}\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}}/\Vert \mathbf {\Theta }\Vert _{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\alpha })} \Big ) d(-y^{-\alpha })\Big ]\\&=\sum _{\textbf{h}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}}\mathbb {E}\Big [|\mathbf {\Theta }_{\textbf{h}}|^{\alpha }\int _{0}^{\infty }e^{-\sum _{\textbf{t}\in \mathcal {D}_{j}}g(y\mathbf {\Theta }_{\textbf{t}}/\Vert \mathbf {\Theta }\Vert _{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\alpha })}\Big (1-e^{-g(y\mathbf {\Theta }_{\textbf{0}}/\Vert \mathbf {\Theta }\Vert _{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\alpha })} \Big ) d(-y^{-\alpha })\Big ] \end{aligned}$$

From the time-change formula, we obtain that for any \(\textbf{h}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}\),

$$\begin{aligned} &\mathbb {E}\Big [|\mathbf {\Theta }_{\textbf{h}}|^{\alpha }\int _{0}^{\infty }e^{-\sum _{\textbf{t}\in \mathcal {D}_{j}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}}) }- e^{-\sum _{\textbf{t}\in \mathcal {D}_{j}\cup \{\textbf{0}\}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})} d(-y^{-\alpha })\Big ]\\&=\mathbb {E}\Big [\int _{0}^{\infty }e^{-\sum _{\textbf{t}\in (\mathcal {D}_{j})_{-\textbf{h}}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})}- e^{-\sum _{\textbf{t}\in (\mathcal {D}_{j})_{-\textbf{h}}\cup \{-\textbf{h}\}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}}) }d(-y^{-\alpha })\Big ]. \end{aligned}$$

Since \(\textbf{h}\) is a lattice point, \((\mathcal {D}_{j})_{-\textbf{h}}=(\mathcal {D}_{j}\cup -\mathcal {D}_{j})\cap \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succ -\textbf{h}\}\), and \(-\textbf{h}\) is the first point of \((\mathcal {D}_{j}\cup -\mathcal {D}_{j})\cap \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succeq -\textbf{h}\}\). This leads to a telescoping sum structure: for any \(\textbf{k}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}\),

$$\begin{aligned}&\sum _{\{-\textbf{h}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}:-\textbf{k}\preceq \textbf{h}\preceq \textbf{k}\}}\mathbb {E}\Big [\int _{0}^{\infty }e^{-\sum _{\textbf{t}\in (\mathcal {D}_{j})_{-\textbf{h}}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})}- e^{-\sum _{\textbf{t}\in (\mathcal {D}_{j})_{-\textbf{h}}\cup \{-\textbf{h}\}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})} d(-y^{-\alpha })\Big ]\nonumber \\&=\mathbb {E}\Big [\int _{0}^{\infty }e^{-\sum _{\textbf{t}\in (\mathcal {D}_{j})_{\textbf{k}}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})}- e^{-\sum _{\textbf{t}\in (\mathcal {D}_{j})_{-\textbf{k}}\cup \{-\textbf{k}\}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})}d(-y^{-\alpha })\Big ]. \end{aligned}$$
(23)

Since any function \(g\in \mathbb {C}^{+}_{K}\) vanishes in some neighbourhood of the origin and \(\mathbf {\Theta }_{\textbf{t}}{\mathop {\rightarrow }\limits ^{a.s.}}0\) and \(\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}}{\mathop {\rightarrow }\limits ^{a.s.}}0\) as \(\textbf{t}\rightarrow \varvec{\infty }\) for \(\textbf{t}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}\), we have \(\sum _{\textbf{t}\in (\mathcal {D}_{j})_{\textbf{k}}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})\rightarrow 0\) and \(\sum _{\textbf{t}\in (\mathcal {D}_{j})_{-\textbf{k}}\cup \{-\textbf{k}\}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})\rightarrow \sum _{\textbf{t}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}})\) a.s. monotonically, as \(\textbf{k}\rightarrow \varvec{\infty }\). Thus, by the monotone convergence theorem, the right-hand side in (23) converges, as \(\textbf{k}\rightarrow \varvec{\infty }\), to

$$\begin{aligned} \mathbb {E}\Big [\int _{0}^{\infty }1- \exp \Big (-\sum _{\textbf{t}\in \mathcal {D}_{j}\cup -\mathcal {D}_{j}}g(y\textbf{Q}_{\mathcal {D}_{j}\cup -\mathcal {D}_{j},\textbf{t}}) \Big ) d(-y^{-\alpha })\Big ]. \end{aligned}$$
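Throughout, \(d(-y^{-\alpha })\) denotes integration against the measure on \((0,\infty )\) with density \(\alpha y^{-\alpha -1}\). As a sanity check of this reading (a sketch with an illustrative \(g\) and \(\alpha\), not taken from the text), for \(g(y)=\textbf{1}(y>1)\) one has \(\int _{0}^{\infty }(1-e^{-g(y)})\,d(-y^{-\alpha })=1-e^{-1}\):

```python
import math

# Midpoint-rule approximation of an integral against d(-y^{-alpha}),
# i.e. the measure with density alpha * y^(-alpha - 1) on (0, infinity).
alpha = 2.0  # illustrative tail index

def g(y):
    # illustrative choice: g vanishes in a neighbourhood of the origin,
    # as every function in C_K^+ does
    return 1.0 if y > 1.0 else 0.0

h = 1e-3
total = 0.0
y = h / 2.0
while y < 100.0:  # the truncated tail contributes only O(100^{-alpha})
    total += (1.0 - math.exp(-g(y))) * alpha * y ** (-alpha - 1.0) * h
    y += h

expected = 1.0 - math.exp(-1.0)  # closed form for this particular g
assert abs(total - expected) < 1e-2
```

The same reading of \(d(-y^{-\alpha })\) applies to every integral of this form in the remainder of the section.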

One deduces the following expression for the Laplace transform of \(N^{\Lambda }\):

$$\begin{aligned} \Psi _{N^{\Lambda }}(g)=\exp \bigg (-\sum _{j=1}^{\infty }\lambda _{j}\int _{0}^{\infty }\mathbb {E}\Big [1 -e^{-\sum _{\textbf{t}\in \hat{\mathcal {D}_{j}}}g( y\textbf{Q}_{\hat{\mathcal {D}_{j}},\textbf{t}})}\Big ]d(-y^{-\alpha })\bigg ). \end{aligned}$$

8.4 Proof of Proposition 13

The first statement follows from arguments similar to those used in the proof of Theorem 2.1 in Basrak and Segers (2009) and in Lemma 3.1 in Segers et al. (2017). In particular, it is easy to see that for all \(\textbf{s}, \textbf{t} \in \mathbb {Z}^{k}\) with \(\textbf{s}\le \textbf{t}\),

$$\begin{aligned} \frac{\mathbb {P}((x^{-1} \textbf{X}_{\textbf{s}} , . . . , x^{-1} \textbf{X}_{\textbf{t}})\in \cdot )}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}=\frac{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>x)}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}\frac{\mathbb {P}((x^{-1} \textbf{X}_{\textbf{s}}, . . . , x^{-1} \textbf{X}_{\textbf{t}})\in \cdot )}{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>x)}. \end{aligned}$$

Let \(\tilde{\mu }\) be the tail measure of \(\textbf{X}\) with auxiliary regularly varying function \(\mathbb {P}(|\textbf{X}_{\textbf{0}}|>x)\). By the definition of regular variation of \(\textbf{X}\), by homogeneity of the tail measure, and assuming w.l.o.g. that \(\{\textbf{0}\}\in \Upsilon\) we have (see also Lemma 3.1 in Segers et al. (2017))

$$\begin{aligned} \frac{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>x)}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}\rightarrow \frac{1}{\tilde{\mu }_{\Upsilon }(z\in \mathbb {R}^{|\Upsilon |}:\rho (z)>1)}\in (0,\infty ). \end{aligned}$$

Thus, we have

$$\begin{aligned} \frac{\mathbb {P}((x^{-1} \textbf{X}_{\textbf{s}} , . . . , x^{-1} \textbf{X}_{\textbf{t}})\in \cdot )}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}\rightarrow \frac{\tilde{\mu }_{\textbf{s},\textbf{t}}(\cdot )}{\tilde{\mu }_{\Upsilon }(z\in \mathbb {R}^{|\Upsilon |}:\rho (z)>1)}=:\mu _{\textbf{s},\textbf{t}}(\cdot ). \end{aligned}$$

Notice that the function \(x \mapsto \mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)\) is regularly varying of index \(-\alpha\) and that \(\mu _{\textbf{s},\textbf{t}}\) restricted to the set \(\{(\textbf{y}_{\textbf{s}},...,\textbf{y}_{\textbf{t}}):\rho (\textbf{y})>1\}\) is a probability measure, call it \(\nu _{\textbf{s},\textbf{t}}\). Here we have used that w.l.o.g. \(\Upsilon \subset \{\textbf{s},...,\textbf{t}\}\); indeed, if some \(\textbf{z}\in \Upsilon\) is not contained in \(\{\textbf{s},...,\textbf{t}\}\), then we can consider \(\tilde{\mu }_{\{\textbf{s},\textbf{t}\}\cup \{\textbf{z}\}}\) because, by consistency of the measure \(\tilde{\mu }\), we have \(\tilde{\mu }_{\{\textbf{s},\textbf{t}\}\cup \{\textbf{z}\}}(\cdot ,\mathbb {R})=\tilde{\mu }_{\textbf{s},\textbf{t}}(\cdot )\).

One can check that \((\nu _{\textbf{s},\textbf{t}})_{\textbf{s},\textbf{t}\in \mathbb {Z}^{k}}\) is a consistent family of probability measures, and by the Kolmogorov extension theorem we obtain the first statement. The second statement follows from the first and the continuous mapping theorem.

8.5 Proof of Proposition 14

This follows from arguments similar to those used in the proofs of Theorem 3.1 in Basrak and Segers (2009) and of Theorem 3.2 in Wu and Samorodnitsky (2020). Consider any \(\textbf{s}\in \mathbb {Z}^{k}\) and any \(\textbf{i},\textbf{j}\in \mathbb {Z}^{k}\) such that \(\Upsilon \subset \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{i}\le \textbf{t}\le \textbf{j}\}\). Then, following the proof of Theorem 3.1 in Basrak and Segers (2009), we define the spaces:

$$\begin{aligned} \mathbb {E}_{\textbf{i},\textbf{j}}=\{(\textbf{y}_{\textbf{i}},...,\textbf{y}_{\textbf{j}})\,\,|\,\,0<\rho _{\Upsilon }(\textbf{y})<\infty \},\qquad \mathbb {S}_{\textbf{i},\textbf{j}}=\{(\textbf{y}_{\textbf{i}},...,\textbf{y}_{\textbf{j}})\,\,|\,\,\rho _{\Upsilon }(\textbf{y})=1\}. \end{aligned}$$

Define the bijection \(T:\mathbb {E}_{\textbf{i},\textbf{j}}\rightarrow (0,\infty )\times \mathbb {S}_{\textbf{i},\textbf{j}}\) by

$$\begin{aligned} T(\textbf{y}_{\textbf{i}},...,\textbf{y}_{\textbf{j}})=\bigg (\rho _{\Upsilon }(\textbf{y}),\frac{\textbf{y}_{\textbf{i}}}{\rho _{\Upsilon }(\textbf{y})},...,\frac{\textbf{y}_{\textbf{j}}}{\rho _{\Upsilon }(\textbf{y})}\bigg ). \end{aligned}$$
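To make the polar map concrete (a sketch, not part of the proof: the modulus here is the illustrative choice \(\rho _{\Upsilon }(\textbf{y})=\max _{\textbf{t}\in \Upsilon }|\textbf{y}_{\textbf{t}}|\) with Euclidean block norms), the following checks that \(T\) separates a \(1\)-homogeneous radial part from an angular part lying on \(\mathbb {S}_{\textbf{i},\textbf{j}}\):

```python
import math

def rho(y):
    # illustrative modulus: max of the Euclidean norms of the blocks y_t
    return max(math.sqrt(sum(c * c for c in block)) for block in y)

def T(y):
    # polar map: radial part rho(y), angular part y / rho(y)
    r = rho(y)
    return r, tuple(tuple(c / r for c in block) for block in y)

y = ((3.0, 4.0), (1.0, 0.0))  # two blocks in R^2
r, theta = T(y)
assert abs(r - 5.0) < 1e-12            # rho(y) = max(5, 1) = 5
assert abs(rho(theta) - 1.0) < 1e-12   # angular part lies on the unit rho-sphere

# 1-homogeneity: scaling y by u scales the radius and fixes the angle
u = 2.5
yu = tuple(tuple(u * c for c in block) for block in y)
ru, thetau = T(yu)
assert abs(ru - u * r) < 1e-12
assert all(abs(a - b) < 1e-12
           for ba, bb in zip(theta, thetau) for a, b in zip(ba, bb))
```

This homogeneity is exactly what makes the identity \(T^{-1}((u,\infty )\times B)=uT^{-1}((1,\infty )\times B)\) below work.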

Let \(\mu _{\textbf{i},\textbf{j}}\) be the tail measure of \((\textbf{X}_{\textbf{i}},...,\textbf{X}_{\textbf{j}})\) with auxiliary regularly varying function \(\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)\) (as defined in the proof of Proposition 13). Define the measure \(\varPhi _{\textbf{i},\textbf{j}}\) on \(\mathbb {S}_{\textbf{i},\textbf{j}}\) by

$$\begin{aligned} \varPhi _{\textbf{i},\textbf{j}}(B)=\mu _{\textbf{i},\textbf{j}}\big (T^{-1}((1,\infty )\times B)\big ) \end{aligned}$$

for Borel-measurable \(B\subset \mathbb {S}_{\textbf{i},\textbf{j}}\). Since the law of \((\textbf{Y}_{\Upsilon ,\textbf{i}},..., \textbf{Y}_{\Upsilon ,\textbf{j}})\) is equal to the restriction of \(\mu _{\textbf{i},\textbf{j}}\) to \(T^{-1}((1,\infty )\times \mathbb {S}_{\textbf{i},\textbf{j}})\), the measure \(\varPhi _{\textbf{i},\textbf{j}}\) is in fact equal to the law of \((\mathbf {\Theta }_{\Upsilon ,\textbf{i}},..., \mathbf {\Theta }_{\Upsilon ,\textbf{j}})\). Furthermore, as \(\mu _{\textbf{i},\textbf{j}}\) is homogeneous of order \(-\alpha\), for \(u\in (0, \infty )\) and Borel sets \(B\subset \mathbb {S}_{\textbf{i},\textbf{j}}\)

$$\begin{aligned} \mu _{\textbf{i},\textbf{j}}\big (T^{-1}((u,\infty )\times B)\big )=\mu _{\textbf{i},\textbf{j}}\big (uT^{-1}((1,\infty )\times B)\big )=u^{-\alpha }\varPhi _{\textbf{i},\textbf{j}}(B) \end{aligned}$$
(24)

For \(u\ge 1\), the left-hand side is equal to \(\mathbb {P}(\rho _{\Upsilon }(\textbf{Y})>u,(\mathbf {\Theta }_{\Upsilon ,\textbf{i}},..., \mathbf {\Theta }_{\Upsilon ,\textbf{j}})\in B)\), while the right-hand side is equal to \(\mathbb {P}(\rho _{\Upsilon }(\textbf{Y})>u)\mathbb {P}((\mathbf {\Theta }_{\Upsilon ,\textbf{i}},..., \mathbf {\Theta }_{\Upsilon ,\textbf{j}})\in B)\). Thus, \(\rho _{\Upsilon }(\textbf{Y})\) and \((\mathbf {\Theta }_{\Upsilon ,\textbf{i}},..., \mathbf {\Theta }_{\Upsilon ,\textbf{j}})\) are independent, and so \(\rho _{\Upsilon }(\textbf{Y})\) and \((\mathbf {\Theta }_{\Upsilon ,\textbf{i}})_{\textbf{i}\in I}\) are independent, where \(I\) is any subset of \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{i}\le \textbf{t}\le \textbf{j}\}\). Since \(\textbf{i}\) and \(\textbf{j}\) were arbitrary, point (i) follows.

Concerning point (ii), consider any \(\textbf{i},\textbf{j}\in \mathbb {Z}^{k}\) and let \(g:(\mathbb {R}^{d})^{\mathbb {Z}^{k}}\rightarrow \mathbb {R}\) be a bounded and continuous function. By stationarity

$$\begin{aligned} \begin{aligned} &\mathbb {E}[g(\textbf{Y}_{\Upsilon ,\textbf{i}-\textbf{s}},...,\textbf{Y}_{\Upsilon ,\textbf{j}-\textbf{s}})\textbf{1}(\rho _{(\Upsilon )_{-\textbf{s}}}(\textbf{Y})>\varepsilon )]\\&=\lim \limits _{x\rightarrow \infty }\frac{\mathbb {E}[g(x^{-1} \textbf{X}_{\textbf{i}-\textbf{s}},..., x^{-1} \textbf{X}_{\textbf{j}-\textbf{s}})\textbf{1}(\rho _{(\Upsilon )_{-\textbf{s}}}(\textbf{X})>x\varepsilon )\textbf{1}(\rho _{\Upsilon }(\textbf{X})>x)]}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}\\&=\lim \limits _{x\rightarrow \infty }\frac{\mathbb {E}[g(x^{-1} \textbf{X}_{\textbf{i}},..., x^{-1} \textbf{X}_{\textbf{j}})\textbf{1}(\rho _{\Upsilon }(\textbf{X})>x\varepsilon )\textbf{1}(\rho _{(\Upsilon )_{\textbf{s}}}(\textbf{X})>x)]}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}\\&=\int g(\textbf{y}_{\textbf{i}},...,\textbf{y}_{\textbf{j}})\textbf{1}(\rho _{\Upsilon }(\textbf{y})>\varepsilon )\textbf{1}(\rho _{(\Upsilon )_{\textbf{s}}}(\textbf{y})>1)\mu _{\textbf{l},\textbf{k}}(d\textbf{y}) \end{aligned} \end{aligned}$$
(25)

where \(\textbf{l},\textbf{k}\in \mathbb {Z}^{k}\) are such that \(\{\textbf{t}\in \mathbb {Z}^{k}:\textbf{l}\le \textbf{t}\le \textbf{k}\}\supset \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{i}\le \textbf{t}\le \textbf{j}\}\cup \Upsilon \cup (\Upsilon )_{\textbf{s}}\). The last equality follows from the consistency of the measures \(\mu _{\textbf{l},\textbf{k}}\), \(\textbf{l},\textbf{k}\in \mathbb {Z}^{k}\), and the fact that \(\mu _{\textbf{l},\textbf{k}}\) restricted to the set \(\{(\textbf{y}_{\textbf{l}},...,\textbf{y}_{\textbf{k}}):\rho _{(\Upsilon )_{\textbf{s}}}(\textbf{y})>1\}\) is a probability measure, call it \(\tilde{\nu }_{\textbf{l},\textbf{k}}\); this holds for any \(\textbf{s}\in \mathbb {Z}^{k}\) because, by stationarity,

$$\begin{aligned} \frac{\mathbb {P}((x^{-1} \textbf{X}_{\textbf{l}} , . . . , x^{-1} \textbf{X}_{\textbf{k}})\in \cdot )}{\mathbb {P}(\rho _{(\Upsilon )_{\textbf{s}}}(\textbf{X})>x)}=\frac{\mathbb {P}((x^{-1} \textbf{X}_{\textbf{l}} , . . . , x^{-1} \textbf{X}_{\textbf{k}})\in \cdot )}{\mathbb {P}(\rho _{\Upsilon }(\textbf{X})>x)}\rightarrow \mu _{\textbf{l},\textbf{k}}(\cdot ). \end{aligned}$$

As a side note, observe that \(\textbf{Y}_{\Upsilon }\) is not necessarily stationary, because different restrictions (i.e. different \(\textbf{s}\)) of \(\mu\) correspond to potentially different probability measures. That is, \(\nu _{\textbf{l},\textbf{k}}\), the probability measure given by \(\mu _{\textbf{l},\textbf{k}}\) restricted to \(\{(\textbf{y}_{\textbf{l}},...,\textbf{y}_{\textbf{k}}):\rho _{\Upsilon }(\textbf{y})>1\}\) (as introduced in the proof of Proposition 13), is potentially different from \(\tilde{\nu }_{\textbf{l},\textbf{k}}\).

Now, by (24) applied to \(\mu _{\textbf{l},\textbf{k}}\) we obtain that (25) is equal to

$$\begin{aligned} \int _{\varepsilon }^{\infty }\mathbb {E}[g(r\mathbf {\Theta }_{\Upsilon ,\textbf{i}},...,r\mathbf {\Theta }_{\Upsilon ,\textbf{j}})\textbf{1}(r\rho _{(\Upsilon )_{\textbf{s}}}(\mathbf {\Theta })>1)]d(-r^{-\alpha }). \end{aligned}$$

Following the monotone convergence arguments of the proof of Theorem 3.2 in Wu and Samorodnitsky (2020), we send \(\varepsilon \rightarrow 0\) and drop the continuity assumption on g. Since \(\textbf{i}\) and \(\textbf{j}\) were arbitrary, we obtain the result.

Finally point (iii) follows from (9) applied to the function \(\tilde{g}(\textbf{y})=g(\textbf{y}/\rho _{(\Upsilon )_{\textbf{s}}}(\textbf{y}))\textbf{1}(\rho _{(\Upsilon )_{\textbf{s}}}(\textbf{y})\ne 0)\).

8.6 Proof of Lemma 15

Let \(c^{-}:=\inf _{\textbf{x}:\rho _{\Upsilon }(\textbf{x})\ge 1}|\textbf{x}|\) and \(c^{+}:=\sup _{\textbf{x}:\rho _{\Upsilon }(\textbf{x})<1}|\textbf{x}|\). By homogeneity we have that \(\inf _{\textbf{x}:\rho _{\Upsilon }(\textbf{x})\ge \epsilon }|\textbf{x}|=\epsilon c^{-}\) for every \(\epsilon >0\) (and the same holds for \(c^{+}\)). This implies that if \(|\textbf{x}|<\epsilon\) then \(\rho _{\Upsilon }(\textbf{x})<\epsilon /c^{-}\), and if \(\rho _{\Upsilon }(\textbf{x})<\epsilon\) then \(|\textbf{x}|<c^{+}\epsilon\). From the latter we deduce that for every \(b>0\) we have that \(|\textbf{x}|\ge bc^{+}\) implies \(\rho _{\Upsilon }(\textbf{x})\ge b\) (or equivalently by homogeneity that \(|\textbf{x}|\ge b\) implies \(\rho _{\Upsilon }(\textbf{x})\ge b/c^{+}\)) and from the former that \(\rho _{\Upsilon }(\textbf{x})\ge b\) implies that \(|\textbf{x}|\ge bc^{-}\).

Furthermore, it is easy to see that \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|\) is a norm on \(\mathbb {R}^{d|\Upsilon |}\) and, since all norms on a finite-dimensional vector space are equivalent, there exist two constants A and B such that \(A|\textbf{x}|\le \max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}| \le B|\textbf{x}|\). Therefore, for every \(\epsilon >0\), \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|<\epsilon\) implies \(A|\textbf{x}|<\epsilon\), which in turn implies \(\rho _{\Upsilon }(\textbf{x})<\frac{\epsilon }{Ac^{-}}\). Moreover, for every \(\epsilon >0\), \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|\ge \epsilon\) implies \(B|\textbf{x}|\ge \epsilon\), which in turn implies \(\rho _{\Upsilon }(\textbf{x})\ge \frac{\epsilon }{Bc^{+}}\).

Similarly for the other direction we have that, for every \(\epsilon >0\), \(\rho _{\Upsilon }(\textbf{x})<\epsilon\) implies that \(|\textbf{x}|<c^{+}\epsilon\) which implies that \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|<c^{+}B\epsilon\). Moreover, for every \(\epsilon >0\), \(\rho _{\Upsilon }(\textbf{x})\ge \epsilon\) implies that \(|\textbf{x}|\ge c^{-}\epsilon\) which implies that \(\max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|\ge c^{-}A\epsilon\). Thus by setting \(C=Ac^{-}\) and \(D=Bc^{+}\) we obtain the result.
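For the concrete (assumed, illustrative) case where both \(|\cdot |\) on the concatenated vector and each \(|\textbf{x}_{\textbf{t}}|\) are Euclidean norms, one may take \(A=1/\sqrt{|\Upsilon |}\) and \(B=1\); a quick numerical sketch of the two-sided bound \(A|\textbf{x}|\le \max _{\textbf{t}\in \Upsilon }|\textbf{x}_{\textbf{t}}|\le B|\textbf{x}|\):

```python
import math
import random

random.seed(0)
m, d = 4, 3                      # |Upsilon| = 4 blocks, each in R^3
A, B = 1.0 / math.sqrt(m), 1.0   # equivalence constants for this choice of norms

for _ in range(1000):
    blocks = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(m)]
    block_norms = [math.sqrt(sum(c * c for c in b)) for b in blocks]
    full = math.sqrt(sum(n * n for n in block_norms))  # Euclidean norm on R^{dm}
    mx = max(block_norms)
    # A|x| <= max_t |x_t| <= B|x|, with a small tolerance for rounding
    assert A * full - 1e-12 <= mx <= B * full + 1e-12
```

Other choices of the norms only change the values of A and B, not the argument.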

8.7 Proof of Theorem 17

Before proving Theorem 17 we present the following result on the connection between tail random fields for different sets \(\Upsilon _{1}\) and \(\Upsilon _{2}\) and different moduli \(\rho _{1}\) and \(\rho _{2}\).

Lemma 29

Let \(\Upsilon _{1}\) and \(\Upsilon _{2}\) be two finite subsets of \(\mathbb {Z}^{k}\) and let \(\rho _{1}\) and \(\rho _{2}\) be two moduli on \(\mathbb {R}^{d\mathbb {Z}^{k}}\). Let \(C_{1}\) and \(C_{2}\) be constants such that, for every \(\epsilon >0\), \(\rho _{1,\Upsilon _{1}}(x)>\epsilon\) implies \(\rho _{2,\Upsilon _{2}}(x)>\frac{\epsilon }{C_{2}}\), and \(\rho _{2,\Upsilon _{2}}(x)>\epsilon\) implies \(\rho _{1,\Upsilon _{1}}(x)>C_{1}\epsilon\). Then,

$$\begin{aligned} \textbf{Y}_{\Upsilon _{2}}{\mathop {=}\limits ^{d}}C_{1}\textbf{Y}_{\Upsilon _{1}}\big |\rho _{2,\Upsilon _{2}}(\textbf{Y}_{\Upsilon _{1}})>\frac{1}{C_{1}} \end{aligned}$$

and

$$\begin{aligned} \textbf{Y}_{\Upsilon _{1}}{\mathop {=}\limits ^{d}}\frac{\textbf{Y}_{\Upsilon _{2}}}{C_{2}}\Big |\rho _{1,\Upsilon _{1}}(\textbf{Y}_{\Upsilon _{2}})>C_{2}. \end{aligned}$$

Proof

Let \(\Xi\) be a finite subset of \(\mathbb {Z}^{k}\) and let \(g:\mathbb {R}^{d|\Xi |}\rightarrow \mathbb {R}\) be a bounded and continuous function. Then, by homogeneity we have

$$\begin{aligned}&\mathbb {E}\big [g\big (\big (\textbf{Y}_{\Upsilon _{1}}(\textbf{t})\big )_{\textbf{t}\in \Xi }\big )\big ]\\&=\lim \limits _{x\rightarrow \infty }\frac{\mathbb {E}\Big [g\Big (\Big (\frac{\textbf{X}_{\textbf{t}}}{x}\Big )_{\textbf{t}\in \Xi }\Big )\textbf{1}(\rho _{1,\Upsilon _{1}}(\textbf{X})>x)\Big ]}{\mathbb {P}(\rho _{1,\Upsilon _{1}}(\textbf{X})>x)}\\&=\lim \limits _{x\rightarrow \infty }\frac{\mathbb {E}\Big [g\Big (\Big (\frac{\textbf{X}_{\textbf{t}}}{x}\Big )_{\textbf{t}\in \Xi }\Big )\textbf{1}(\rho _{1,\Upsilon _{1}}(\textbf{X})>x)[\textbf{1}(\rho _{2,\Upsilon _{2}}(\textbf{X})>\frac{x}{C_{1}})+\textbf{1}(\rho _{2,\Upsilon _{2}}(\textbf{X})\le \frac{x}{C_{1}})]\Big ]}{\mathbb {P}(\rho _{1,\Upsilon _{1}}(\textbf{X})>x)}\frac{\mathbb {P}(\rho _{2,\Upsilon _{2}}(\textbf{X})>\frac{x}{C_{1}})}{\mathbb {P}(\rho _{2,\Upsilon _{2}}(\textbf{X})>\frac{x}{C_{1}})}\\&=K\mathbb {E}\Big [g\Big (\Big (\dfrac{\textbf{Y}_{\Upsilon _{2},\textbf{t}}}{C_{1}}\Big )_{\textbf{t}\in \Xi }\Big )\Big ]+\mathbb {E}\big [g\big (\big (\textbf{Y}_{\Upsilon _{1},\textbf{t}}\big )_{\textbf{t}\in \Xi }\big )\textbf{1}(\rho _{2,\Upsilon _{2}}(\textbf{Y}_{\Upsilon _{1}})\le \frac{1}{C_{1}})\big ] \end{aligned}$$

where

$$\begin{aligned} K&=\lim \limits _{x\rightarrow \infty }\frac{\mathbb {P}(\rho _{2,\Upsilon _{2}}(\textbf{X})>\frac{x}{C_{1}})}{\mathbb {P}(\rho _{1,\Upsilon _{1}}(\textbf{X})>x)}=\lim \limits _{x\rightarrow \infty }\frac{\mathbb {P}(\rho _{2,\Upsilon _{2}}(\textbf{X})>\frac{x}{C_{1}},\rho _{1,\Upsilon _{1}}(\textbf{X})>x)}{\mathbb {P}(\rho _{1,\Upsilon _{1}}(\textbf{X})>x)}\\&=\mathbb {P}\Big (\rho _{2,\Upsilon _{2}}(\textbf{Y}_{\Upsilon _{1}})>\frac{1}{C_{1}}\Big ). \end{aligned}$$

Thus, we have

$$\begin{aligned} \mathbb {E}\Big [g\Big (\Big (\dfrac{\textbf{Y}_{\Upsilon _{2},\textbf{t}}}{C_{1}}\Big )_{\textbf{t}\in \Xi }\Big )\Big ]=\mathbb {E}\Big [g\big (\big (\textbf{Y}_{\Upsilon _{1},\textbf{t}}\big )_{\textbf{t}\in \Xi }\big )|\rho _{2,\Upsilon _{2}}(\textbf{Y}_{\Upsilon _{1}})>\frac{1}{C_{1}}\Big ]. \end{aligned}$$

Since g and \(\Xi\) were arbitrary we obtain the first stated result. The same arguments apply to the second one. \(\square\)

For notational purposes we consider the general case of countably many \(\mathcal {D}\)s and hence of \(\mathcal {E}\)s. For the first part of this proof we follow arguments similar to those used in the proof of Theorem 10. By \(\mathcal {A}^{\Lambda }(a_{n})\), it suffices to show that for any \(g\in \mathbb {C}^{+}_{K}\), \((\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g))^{k_{n}}\) converges to (11) as \(n\rightarrow \infty\); equivalently, it suffices to prove that \(k_{n}(1-\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g))\) converges to the negative of the logarithm of (11) as \(n\rightarrow \infty\).

In this proof we use Proposition 9 for \(\Lambda _{r_{n}}\) instead of \(\Lambda _{n}\). Thus, we can construct sets like \(S_{i,4l}\), \(S'_{i,4l}\), and \(\tilde{S}_{i,4l}\) as in Proposition 9 and its proof but for \(\Lambda _{r_{n}}\). By abuse of notation we call these sets \(S_{i,4l}\), \(S'_{i,4l}\), and \(\tilde{S}_{i,4l}\), respectively. Further, let \(\bar{S}_{i,4l}:=S_{i,4l}\setminus S'_{i,4l}\). For notational consistency let \(\bar{S}_{i,4l}:=\emptyset\) for those i for which \(\mathcal {D}_i\) is bounded. Now, we apply a telescoping sum argument which generalises the one used in the proof of Theorem 10.

Let \(u:=|\bigcup _{i\in I^*,i<m_{4l}}S'_{i,4l}|\) (we omit the dependency on l and n in u) and let \(s_{1}\prec s_{2}\prec ...\prec s_{u}\) denote the points in \(\bigcup _{i\in I^*,i<m_{4l}}S'_{i,4l}\). Denote by \(\mathcal {E}_{j_{1}},...,\mathcal {E}_{j_{u}}\) the \(\mathcal {E}\)s associated to \(s_{1},...,s_{u}\). Let \(\bar{u}:=|\bigcup _{i\in I^*,i<m_{4l}}\bar{S}_{i,4l}|\) and let \(\bar{s}_{1}\prec \bar{s}_{2}\prec ...\prec \bar{s}_{\bar{u}}\) denote the points in \(\bigcup _{i\in I^*,i<m_{4l}}\bar{S}_{i,4l}\). Denote by \(\mathcal {E}_{\bar{j}_{1}},...,\mathcal {E}_{\bar{j}_{\bar{u}}}\) the \(\mathcal {E}\)s associated to \(\bar{s}_{1},...,\bar{s}_{\bar{u}}\). Let \(\tilde{s}_{1},...,\tilde{s}_{\tilde{u}}\), for some \(\tilde{u}\in \mathbb {N}\cup \{0\}\), be the ordered points in \(\Lambda _{r_n}\setminus \Big (\bigcup _{i=1}^{u}(\mathcal {E}_{j_{i}})_{s_{i}}\cup \bigcup _{i=1}^{\bar{u}}(\mathcal {E}_{\bar{j}_{i}})_{\bar{s}_{i}}\Big )\). In this case we associate the set \(\{\textbf{0}\}\) to any point \(\tilde{s}_{h}\), \(h=1,...,\tilde{u}\). Set \(\hat{u}:=u+\bar{u}+\tilde{u}\) and denote by \(\hat{s}_{1},...,\hat{s}_{\hat{u}}\) the points \(s_{1},...,s_{u},\bar{s}_{1},...,\bar{s}_{\bar{u}},\tilde{s}_{1},...,\tilde{s}_{\tilde{u}}\) indexed such that \(\hat{s}_{1}\prec \hat{s}_{2}\prec ...\prec \hat{s}_{\hat{u}}\), and denote by \(\hat{\mathcal {E}}_{1},...,\hat{\mathcal {E}}_{\hat{u}}\) the corresponding sets \(\mathcal {E}_{j_{1}},...,\mathcal {E}_{j_{u}},\mathcal {E}_{\bar{j}_{1}},...,\mathcal {E}_{\bar{j}_{\bar{u}}},\underbrace{\{\textbf{0}\},...,\{\textbf{0}\}}_{\tilde{u}\text { times}}\); for example, if \(\hat{s}_{1}=\bar{s}_{\bar{u}}\) then \(\hat{\mathcal {E}}_{1}=\mathcal {E}_{\bar{j}_{\bar{u}}}\).
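In summary (restating the construction above; recall that \(\hat{\mathcal {E}}_{m}=\{\textbf{0}\}\) when \(\hat{s}_{m}\) is one of the points \(\tilde{s}_{h}\)), the index set decomposes as the disjoint union

$$\begin{aligned} \Lambda _{r_{n}}=\bigcup _{m=1}^{\hat{u}}(\hat{\mathcal {E}}_{m})_{\hat{s}_{m}},\qquad \hat{u}=u+\bar{u}+\tilde{u}. \end{aligned}$$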

Let

$$\begin{aligned} \tilde{\Psi }_{l,m}(g)={\left\{ \begin{array}{ll} \mathbb {E}\Big [\exp \Big (-\sum _{j=m}^{\hat{u}}\sum _{\textbf{t}\in (\hat{\mathcal {E}}_{j})_{\hat{s}_{j}}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ],&{} 1\le m\le \hat{u},\\ 1, &{} m=\hat{u}+1\,, \end{array}\right. } \end{aligned}$$

so that for every \(l\in \mathbb {N}\) we obtain \(1-\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g)=\sum _{m=1}^{\hat{u}}\tilde{\Psi }_{l,m+1}(g)-\tilde{\Psi }_{l,m}(g)\). In the following, \(\delta > 0\) is chosen such that \(g(\textbf{x}) = 0\) for \(|\textbf{x}|<\delta\). By the stationarity of \(\textbf{X}\), we have

$$\begin{aligned} &{\tilde{\Psi }_{l,m+1}(g)-\tilde{\Psi }_{l,m}(g)}\\ &=\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big ) \exp \Big (-\sum _{j=m+1}^{\hat{u}}\sum _{\textbf{t}\in (\hat{\mathcal {E}}_{j})_{\hat{s}_{j}}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}-\hat{s}_{m}}) \Big ) \Big ]\\ &=\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big ) \exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]\\ &=\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \\&\qquad \textbf{1}(\max _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\setminus K_{2l}}|\textbf{X}_{\textbf{t}}|\le \delta a_{n}) \Big ]\\&\quad +\ \mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big )\\&\qquad \textbf{1}(\max _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\setminus K_{2l}}|\textbf{X}_{\textbf{t}}| > \delta a_{n}) \Big ] \\&=\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]\\ &\quad-\ \mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big 
(-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big )\\ &\qquad \textbf{1}(\max _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\setminus K_{2l}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\textbf{1}(\max _{\textbf{t}\in \hat{\mathcal {E}}_{m}}|\textbf{X}_{\textbf{t}}|>\delta a_{n}) \Big ]\\ &\quad+\ \mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big )\\ &\qquad \textbf{1}(\max _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\setminus K_{2l}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\textbf{1}(\max _{\textbf{t}\in \hat{\mathcal {E}}_{m}}|\textbf{X}_{\textbf{t}}|>\delta a_{n}) \Big ]\\ &=\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]+J^{(r_n)}_{l,m}.\end{aligned}$$
(26)
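The telescoping identity \(1-\Psi _{\tilde{N}^{\Lambda }_{r_{n}}}(g)=\sum _{m=1}^{\hat{u}}\big (\tilde{\Psi }_{l,m+1}(g)-\tilde{\Psi }_{l,m}(g)\big )\) used here is elementary; a minimal numerical sketch, with generic nonnegative values \(a_{m}\) standing in for the (random) inner sums \(\sum _{\textbf{t}\in (\hat{\mathcal {E}}_{m})_{\hat{s}_{m}}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})\):

```python
import math
import random

random.seed(1)
a = [random.random() for _ in range(10)]  # stand-ins for the inner sums, all >= 0

def tail_exp(m):
    # exp(-sum_{j >= m} a_j); the empty sum for m = len(a) + 1 gives 1
    return math.exp(-sum(a[m - 1:]))

lhs = 1.0 - math.exp(-sum(a))
rhs = sum(tail_exp(m + 1) - tail_exp(m) for m in range(1, len(a) + 1))
assert abs(lhs - rhs) < 1e-12
```

The m-th summand is exactly the difference manipulated in (26), before expectations are taken.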

Consider now only those \(J^{(r_n)}_{l,m}\) for which \(\hat{s}_{m}=s_i\) for some \(i=1,...,u\). We have that

$$\begin{aligned} \begin{aligned} J^{(r_n)}_{l,m}&\le 2\mathbb {E}\Big [\textbf{1}(\max _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\setminus K_{2l}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\textbf{1}(\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\Big ] \\ &\le 2\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n},\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n}). \end{aligned} \end{aligned}$$
(27)

By Proposition 9 we have that \(|S'_{i,4l}|/|\Lambda _{r_n}|\rightarrow \gamma ^*_i\) as \(n\rightarrow \infty\), and hence that \(\lim \limits _{n\rightarrow \infty }\frac{u}{|\Lambda _{r_n}|}=\sum _{j\in I^*,j<m_{4l}}\gamma ^*_{j}\). Further, since inequality (27) holds for every \(n,l\in \mathbb {N}\) and since \(\sum _{j\in I^*}\gamma ^*_{j}|\mathcal {E}_{j}|=1\), we obtain that

$$\begin{aligned} \begin{aligned}&{ \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }k_{n}\sum _{m=1}^{u}|J^{(r_n)}_{l,m}|}\\&=\lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }k_{n}\sum _{i\in I^*,i<m_{4l}}2|S'_{i,4l}|\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n},\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\\&\le 2\lim \limits _{l\rightarrow \infty }\sum _{i\in I^*,i<m_{4l}}\gamma _{i}^{*}\limsup _{n\rightarrow \infty }|\Lambda _{r_{n}}|k_{n}\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n},\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\\&=2\lim \limits _{l\rightarrow \infty }\sum _{i\in I^*,i<m_{4l}}\gamma _{i}^{*}\limsup _{n\rightarrow \infty }|\Lambda _{n}|\mathbb {P}(|\textbf{X}_{\textbf{0}}|>\delta a_{n})\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n}|\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})}{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>\delta a_{n})}\\&=2\lim \limits _{l\rightarrow \infty }\sum _{i\in I^*,i<m_{4l}}\gamma _{i}^{*}\limsup _{n\rightarrow \infty }\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n}|\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})}{\mathbb {P}(|\textbf{X}_{\textbf{0}}|>\delta a_{n})}\\&\le 2\lim \limits _{l\rightarrow \infty }\sum _{i\in I^*,i<m_{4l}}\gamma _{i}^{*}|\mathcal {E}_{i}|\limsup _{n\rightarrow \infty }\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n}|\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n}) \end{aligned} \end{aligned}$$
(28)

and since

$$\begin{aligned} \sum _{i\in I^*,i<m_{4l}}\gamma _{i}^{*}|\mathcal {E}_{i}|\limsup _{n\rightarrow \infty }\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n}|\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n}) \le \sum _{i\in I^*,i<m_{4l}}\gamma _{i}^{*}|\mathcal {E}_{i}|\le \sum _{i\in I^*}\gamma _{i}^{*}|\mathcal {E}_{i}|=1 \end{aligned}$$

by the dominated convergence theorem we have that (28) is equal to

$$\begin{aligned} 2 \sum _{i\in I^*}\gamma _{i}^{*}|\mathcal {E}_{i}|\lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,|\textbf{X}|,(i)}>\delta a_{n}|\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})=0 \end{aligned}$$
(29)

where the last equality follows by the anti-clustering condition (AC\(^{\Lambda }_{\succeq ,I^*}\)).

Now, let us focus on (26) where m is such that \(\hat{s}_{m}=\bar{s}_i\) for some \(i=1,...,\bar{u}\) or \(\hat{s}_{m}=\tilde{s}_h\) for some \(h=1,...,\tilde{u}\). Since, by Proposition 9, \(\sum _{j<m_{4l}}\gamma ^*_{j}|\mathcal {E}_{j}|\rightarrow \sum _{j\in I^*}\gamma ^*_{j}|\mathcal {E}_{j}|=1\) monotonically as \(l\rightarrow \infty\), we have \(\lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{|\Lambda _{r_{n}}|-\sum _{i\in I^*,i<m_{4l}}|S'_{i,4l}||\mathcal {E}_i|}{|\Lambda _{r_{n}}|}=0\). Since \(\tilde{u}+\sum _{i\in I^*,i<m_{4l}}|\bar{S}_{i,4l}||\mathcal {E}_i|=|\Lambda _{r_{n}}|-\sum _{i\in I^*,i<m_{4l}}|S'_{i,4l}||\mathcal {E}_i|\), we obtain that

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{\tilde{u}+\sum _{i\in I^*,i<m_{4l}}|\bar{S}_{i,4l}||\mathcal {E}_i|}{|\Lambda _{r_{n}}|}=0. \end{aligned}$$
(30)

By combining this with the fact that each such difference in (26) is bounded by

$$\begin{aligned} \mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big ) \Big ]\le \mathbb {P}(\max _{\textbf{t}\in \hat{\mathcal {E}}_{m}}|\textbf{X}_{\textbf{t}}|>\delta a_{n})\le |\hat{\mathcal {E}}_{m}|\mathbb {P}(|\textbf{X}_{\textbf{0}}|>\delta a_{n}) \end{aligned}$$

we conclude that

$$\begin{aligned} \begin{aligned}& \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }k_{n}\bigg (\tilde{u}\mathbb {P}(|\textbf{X}_{\textbf{0}}| > \delta a_{n})+\sum _{i\in I^*,i < m_{4l}}|\bar{S}_{i,4l}||\mathcal {E}_{i}|\mathbb {P}(|\textbf{X}_{\textbf{0}}| > \delta a_{n})\bigg ) \\& =\lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{\tilde{u}+\sum _{i\in I^*,i < m_{4l}}|\bar{S}_{i,4l}||\mathcal {E}_i|}{|\Lambda _{r_{n}}|}=0. \end{aligned} \end{aligned}$$
(31)

Now, by construction (recall (7)) when \(\hat{s}_{m}=s_{k}\) for some \(k=1,...,u\) we get

$$\begin{aligned} \begin{aligned}& \mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \hat{\mathcal {E}}_{m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \bigcup _{j=m+1}^{\hat{u}}(\hat{\mathcal {E}}_{j})_{\hat{s}_{j}-\hat{s}_{m}}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]\\ &=\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_k}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_k}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ], \end{aligned} \end{aligned}$$
(32)

where we used that by definition \((\mathcal {D}^*_{i}\cap K_{2l})\setminus \bigcup _{\textbf{s}\in \{\textbf{0}\}\cup -\mathcal {G}_i}(\mathcal {E}_{i})_{\textbf{s}}=\tilde{\mathcal {D}}_i\cap K_{2l}\). Therefore, by combining (29) and (31) we have that

$$\begin{aligned} \begin{aligned} &\lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }k_{n}\bigg |\sum _{m=1}^{\hat{u}}\tilde{\Psi }_{l,m+1}(g)-\tilde{\Psi }_{l,m}(g)\\ &-\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]\bigg |=0. \end{aligned} \end{aligned}$$
(33)

Thus, for the remaining part of the proof we focus on

$$\begin{aligned} k_n\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]. \end{aligned}$$
(34)

From Lemma 15 we have that for every \(i\in I^*\)

$$\begin{aligned} \Big \{\max _{\textbf{t}\in \mathcal {E}_{i}}|\textbf{X}_{\textbf{t}}|>\delta a_{n}\Big \}\subseteq \Big \{\rho _{\mathcal {E}_{i}}(\textbf{X})>\frac{\delta }{D_{i}} a_{n}\Big \}, \end{aligned}$$
(35)

and thus we have

$$\begin{aligned}&{\lim \limits _{n\rightarrow \infty }k_{n}\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]}\\&=\lim \limits _{n\rightarrow \infty }k_{n}\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \textbf{1}\Big (\max _{\textbf{t}\in \mathcal {E}_{j_m}}|\textbf{X}_{\textbf{t}}| > \delta a_{n}\Big ) \Big ]\\&=\lim \limits _{n\rightarrow \infty }k_{n}\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big )\textbf{1}\Big (\rho _{\mathcal {E}_{j_m}}(\textbf{X}) > \frac{\delta a_{n}}{D_{j_m}} \Big ) \Big ]\\ &=\lim \limits _{n\rightarrow \infty }k_{n}\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big )\\&\quad\ \Big |\rho _{\mathcal {E}_{j_m}}(\textbf{X}) > \frac{\delta a_{n}}{D_{j_m}} \Big ]\frac{\mathbb {P}(\rho _{\mathcal {E}_{j_m}}(\textbf{X}) > \frac{\delta a_{n}}{D_{j_m}} )}{\mathbb {P}(\rho _{\mathcal {E}_{j_m}}(\textbf{X}) > a_{n})}\frac{\mathbb {P}(\rho _{\mathcal {E}_{j_m}}(\textbf{X}) > a_{n})}{\mathbb {P}(|\textbf{X}_{\textbf{0}}| > a_{n})}\mathbb {P}(|\textbf{X}_{\textbf{0}}| > a_{n})\\ &=\sum _{j\in I^*,j < m_{4l}}\Big (\frac{\delta }{D_{j}}\Big )^{-\alpha }\gamma ^*_{j}c_{j}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g(\frac{\delta }{D_{j}} Y\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}\cap K_{2l}}g(\frac{\delta }{D_{j}} Y\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]\\ &=\sum _{j\in I^*,j < m_{4l}}\int _{\delta }^{\infty }\frac{\gamma ^*_{j}c_{j}}{(D_{j})^{-\alpha }}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}\cap K_{2l}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]d(-y^{-\alpha })\\ &=\int _{\delta }^{\infty }\sum _{j\in I^*,j < m_{4l}}\frac{\gamma ^*_{j}c_{j}}{(D_{j})^{-\alpha }}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}\cap K_{2l}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]d(-y^{-\alpha }). \end{aligned}$$
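The change of variables underlying the last two equalities above, from an expectation over the Pareto(\(\alpha\)) variable Y to an integral against \(d(-y^{-\alpha })\), can be sanity-checked numerically. The following sketch verifies \((\delta /D)^{-\alpha }\,\mathbb {E}[h((\delta /D)Y)]=D^{\alpha }\int _{\delta }^{\infty }h(y/D)\,d(-y^{-\alpha })\) for \(\mathbb {P}(Y>y)=y^{-\alpha }\), \(y\ge 1\); the test function h and all parameter values are illustrative stand-ins, not quantities from the proof.

```python
import numpy as np

# Sanity check (a sketch, not part of the proof): for a Pareto(alpha) variable Y
# with P(Y > y) = y^(-alpha), y >= 1, and a bounded test function h,
#   (delta/D)^(-alpha) * E[h((delta/D) * Y)] = D^alpha * int_delta^inf h(y/D) d(-y^(-alpha)),
# which is the substitution y = delta * q used in the display above.
alpha, delta, D = 1.5, 0.7, 2.0          # illustrative values only
h = lambda z: 1.0 - np.exp(-z)           # bounded stand-in for the actual integrand

def integrate(f, lo, hi=1e4, n=200_000):
    """Trapezoidal rule for int_lo^hi f(y) * alpha * y^(-alpha-1) dy on a log grid."""
    y = np.exp(np.linspace(np.log(lo), np.log(hi), n))
    vals = f(y) * alpha * y ** (-alpha - 1.0)
    return np.sum((vals[1:] + vals[:-1]) / 2.0 * np.diff(y))

# Left side: expectation over the Pareto variable (density alpha*q^(-alpha-1) on [1, inf)).
lhs = (delta / D) ** (-alpha) * integrate(lambda q: h((delta / D) * q), 1.0)
# Right side: integral against the tail measure d(-y^(-alpha)) restricted to (delta, inf).
rhs = D ** alpha * integrate(lambda y: h(y / D), delta)
assert abs(lhs - rhs) / abs(rhs) < 1e-3
```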

By assumption \(A^{\Lambda }_{\rho }\) we can apply the dominated convergence theorem twice, as follows. First, the integrand is bounded for every \(l\in \mathbb {N}\) by the constant \(\sum _{j\in I^*}\gamma ^*_{j}c_{j}D_{j}^{\alpha }\), which is finite by condition \(A^{\Lambda }_{\rho }\) and hence integrable with respect to \(\int _{\delta }^{\infty }d(-y^{-\alpha })\) for any \(\delta >0\); this allows us to move the limit as \(l\rightarrow \infty\) inside the integral. Second, consider the finite counting measure \(\sum _{j\in I^*}\gamma ^*_{j}c_{j}D_{j}^{\alpha }\varepsilon _{j}(\cdot )\), where \(\varepsilon _{j}\) denotes the Dirac measure at j. Since the integrand is bounded by 1, and 1 is integrable with respect to this finite measure, we can apply the dominated convergence theorem again. Hence, we obtain

$$\begin{aligned} \begin{aligned} &\lim \limits _{l\rightarrow \infty }\int _{\delta }^{\infty }\sum _{j\in I^*,j<m_{4l}}\frac{\gamma ^*_{j}c_{j}}{(D_{j})^{-\alpha }}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}\cap K_{2l}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]d(-y^{-\alpha })\\ &=\int _{\delta }^{\infty }\sum _{j\in I^*}\frac{\gamma ^*_{j}c_{j}}{(D_{j})^{-\alpha }}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]d(-y^{-\alpha }). \end{aligned} \end{aligned}$$
(36)

Since \(\rho _{\mathcal {E}_{j}}(\mathbf {\Theta })=1\) a.s., by Corollary 16 we have that \(\max _{\textbf{s}\in \mathcal {E}_{j}}|\mathbf {\Theta }_{\textbf{s}}|\le D_{j}\) a.s. Further, recall that g has compact support; in particular, \(g(x)=0\) for any \(x\in \mathbb {R}^{d}\) with \(|x|<\delta\). Hence, if \(y<\delta\) then \(|\frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}|<\delta\) a.s., and so \(g(\frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})=0\) a.s. Thus, (36) is equal to

$$\begin{aligned} \int _{0}^{\infty }\sum _{j\in I^*}\frac{\gamma ^*_{j}c_{j}}{(D_{j})^{-\alpha }}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]d(-y^{-\alpha }) \end{aligned}$$
$$\begin{aligned} {\mathop {=}\limits ^{\text {Tonelli's theorem}}}\sum _{j\in I^*}\int _{0}^{\infty }\frac{\gamma ^*_{j}c_{j}}{(D_{j})^{-\alpha }}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}}g( \frac{y}{D_{j}}\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big )\Big ]d(-y^{-\alpha }) \end{aligned}$$
$$\begin{aligned} {\mathop {=}\limits ^{({\forall j\in I^*\ \text {let}\ }z=\frac{y}{D_{j}})}}\sum _{j\in I^*}\int _{0}^{\infty }\gamma ^*_{j}c_{j}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j}}g( z\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}})}\Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j}}g( z\mathbf {\Theta }_{\mathcal {E}_{j},\textbf{t}}) \Big ) \Big ]d(-z^{-\alpha }). \end{aligned}$$

Finally, Corollary 4.14 in Kallenberg (2017) ensures the existence of the limiting random measure \(N^{\Lambda }\), and so \(N^{\Lambda }\) has the stated Laplace functional.

8.8 Proof of Lemma 18

The anti-clustering condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) holds straightforwardly by m-dependence. The mixing condition \(\mathcal {A}^{\Lambda }(a_{n}^\Lambda )\) is much more delicate to check because of the general form of the index sets \(\Lambda _n\) and \(\Lambda _{r_n}\); their lattice properties are required in the following proof. For every \(i\in I^*\) and \(l\in \mathbb {N}\) we consider \(Z_{i,l}=\bigcup _{\textbf{t}\in S_{i,100l}}(\mathcal {E}_i)_{\textbf{t}}\). We then have

$$\begin{aligned} \mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \cup _{i\in I^*}Z_{i,l}}g(\textbf{X}_{\textbf{t}})} \bigg ]-\Psi _{N^{\Lambda }_{n}}(g)&=\mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \cup _{i\in I^*}Z_{i,l}}g(\textbf{X}_{\textbf{t}}) }\bigg (1-e^{-\sum _{\textbf{t}\in \Lambda _n\setminus (\cup _{i\in I^*}Z_{i,l})}g(\textbf{X}_{\textbf{t}})} \bigg )\bigg ]\\&\le \mathbb {P}(\max _{\textbf{t}\in \Lambda _n\setminus (\cup _{i\in I^*}Z_{i,l})}|\textbf{X}_{\textbf{t}}|>a_n)\\ {}&\le \frac{|\Lambda _n\setminus (\cup _{i\in I^*}Z_{i,l})|}{|\Lambda _n|}|\Lambda _n| \mathbb {P}(|\textbf{X}_{\textbf{0}}|>a_n)\,. \end{aligned}$$

By Proposition 9 we have that \(\lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\frac{|\Lambda _n\setminus (\cup _{i\in I^*}Z_{i,l})|}{|\Lambda _n|}=0\) and so

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\lim \limits _{n\rightarrow \infty }\bigg (\mathbb {E}\bigg [e^{-\sum _{\textbf{t}\in \cup _{i\in I^*}Z_{i,l}}g(\textbf{X}_{\textbf{t}})} \bigg ]-\Psi _{N^{\Lambda }_{n}}(g)\bigg )=0. \end{aligned}$$

By Proposition 9, for l large enough the sets \(Z_{i,l}\) and \(Z_{j,l}\), \(i\ne j\), are at a distance strictly greater than m. Thus, by m-dependence we can focus on a single \(Z_{i,l}\). Consider the collection of hypercubes of side length \(r_n^\beta\), where \(\beta \in (0,1/k)\), at distance m from each other, which contains the largest number of points of \(Z_{i,l}\); if more than one such collection is possible, choose one of them. Let \(Z'_{i,l}\) be the subset of these points in \(Z_{i,l}\). The points in \(Z_{i,l}\setminus Z'_{i,l}\) are negligible, which can be seen by contradiction. Suppose not; then there is a subsequence of n along which the limit of \(|Z_{i,l}\setminus Z'_{i,l}|/|\Lambda _n|\) is strictly positive, say greater than a certain \(\varepsilon\). Consider the first \(p_\varepsilon \in \mathbb {N}\) elements in \(I^*\), where \(p_\varepsilon\) is such that \(\sum _{i< p_\varepsilon }\gamma ^*_i>1-\frac{\varepsilon }{2}\). Take l large enough that \(l\gg m\) and \(|\gamma ^*_i-\gamma ^*_{i,l}|<\frac{\varepsilon }{2p_\varepsilon }\) for each \(i< p_\varepsilon\). Observing that each point in \(\{\textbf{t}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{t}}\cap K_{l}=\Xi _i\cap K_{l}\}\) (and there are asymptotically of order \(\gamma ^*_{i,l}|\Lambda _n|\) such points) has the same regular structure around it, none of the \(\Xi _i\) (or their translations), \(i<p_\varepsilon\), can accommodate the behaviour of the points in \(Z_{i,l}\setminus Z'_{i,l}\) along that subsequence. Thus, we obtain a contradiction.

Next, we build \(k_n\) many subsets of \(Z'_{i,l}\) of size asymptotically at least \(r_n\gamma ^*_i\). This is possible because of Proposition 9 and because the building blocks of these subsets are hypercubes of side length \(r_n^\beta\). Indeed, for n large enough, by Proposition 9, \(|Z'_{i,l}|/|\Lambda _n|\) is asymptotically greater than \(\gamma ^*_i\), and so \(|Z'_{i,l}|/k_n\) is asymptotically greater than \(\gamma ^*_i r_n\). Since \(r_n^{k\beta }\ll r_n\), \(r_n\) is approximately a multiple of \(r_n^{k\beta }\) for n large enough. Since these \(k_n\) subsets of \(Z'_{i,l}\) are unions of sets containing at most \(r_n^{k\beta }\) points, and since \(r_n\) is approximately a multiple of \(r_n^{k\beta }\), we can construct these subsets so that each asymptotically contains at least \(r_n\gamma ^*_i\) points. By construction, these subsets are at distance m from each other, and so by m-dependence we can focus on only one of them, which we call \(U_{i,l}\).

Using arguments similar to those in the proof of Theorem 17 (in particular Eq. (33)), we have

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\lim _{n\rightarrow \infty }k_{n}\bigg (1-\Psi _{\tilde{N}_{r_{n}}}(g) -\sum _{m=1}^{u}\mathbb {E}\Big [\Big (1-e^{-\sum _{\textbf{t}\in \mathcal {E}_{j_m}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}})} \Big )\exp \Big (-\sum _{\textbf{t}\in \tilde{\mathcal {D}}_{j_m}\cap K_{2l}}g(a_{n}^{-1}\textbf{X}_{\textbf{t}}) \Big ) \Big ]\bigg )=0. \end{aligned}$$
(37)

We remark that by Proposition 9 and by m-dependence it is sufficient to look at only one \(i\in I^*\) (this is not particularly different from the general case since for fixed l only finitely many \(i\in I^*\) are considered, see Eq. (34)).

It remains to show that the set \(U_{i,l}\) (or a subset of it whose difference is asymptotically negligible) has the same properties as the set \(S'_{i,4l}\) (from which the last summand in Eq. (37) follows). Observe that at the beginning of this proof we defined \(Z_{i,l}\) using \(S_{i,100l}\); the reason is that in this way we can construct a subset of \(U_{i,l}\), which we call \(U'_{i,l}\), which has the same properties as the set \(S'_{i,l}\) and such that \(U_{i,l}\setminus \bigcup _{\textbf{t}\in U'_{i,l}}(\mathcal {E}_i)_{\textbf{t}}\) is asymptotically negligible. Indeed, consider the set \(U_{i,l}\cap S_{i,100l}\). Since 100l is large, for every \(\textbf{t}\in S_{i,100l}\) there exists at least one \(\textbf{t}'\in S_{i,4l}\) for which condition (7) is satisfied. By the asymptotic negligibility of the points in \(Z_{i,l}\setminus Z'_{i,l}\) and the fact that \(|S_{i,100l}|/|\Lambda _{n}|\rightarrow \gamma ^*_{i,100l}\ge \gamma ^*_i\) as \(n\rightarrow \infty\), the points in \(Z'_{i,l}\cap S_{i,100l}(\subset S_{i,4l})\) for which condition (7) is satisfied are asymptotically \(\gamma ^*_{i,100l}|\Lambda _{n}|\) many. Thus, the points that lie in \(U_{i,l}\cap S_{i,100l}\) and for which condition (7) is satisfied are asymptotically \(\gamma ^*_{i,l}|\Lambda _{r_n}|\) many. We can now employ the reduction procedure developed in the proof of Proposition 9 to obtain a subset which asymptotically contains \(\gamma ^*_{i}|\Lambda _{r_n}|\) many points. This subset is the \(U'_{i,l}\) we were looking for.

8.9 Proof of Proposition 19

We start by showing the following useful lemma:

Lemma 30

Let \((\textbf{Y}_{\Upsilon ,\textbf{t}}:\textbf{t}\in \mathbb {Z}^{k})\) be an \(\mathbb {R}^{d}\)-valued random field such that the time change formula (9) is satisfied. Let \(\mathcal {L}\) be a (not necessarily full rank) lattice. Let \(\mathbf {\Theta }_{\Upsilon ,\textbf{t}}=\textbf{Y}_{\Upsilon ,\textbf{t}}/\rho _{\Upsilon }(\textbf{Y})\), \(\textbf{t}\in \mathbb {Z}^{k}\). Let \(\mathcal {H}:=\bigcup _{\textbf{s}\in \mathcal {G}}(\Upsilon )_{\textbf{s}}\) where \(\mathcal {G}=\mathcal {L}\cap \{\textbf{t}\in \mathbb {Z}^{k}:\textbf{t}\succeq \textbf{0}\}\) and such that \((\Upsilon )_{\textbf{s}}\cap (\Upsilon )_{\textbf{s}'}=\emptyset\) for every \(\textbf{s},\textbf{s}'\in \mathcal {G}\) with \(\textbf{s}\ne \textbf{s}'\). Then \(|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \mathcal {H}\) implies that \(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }<\infty\) a.s., and that \(\sum _{\textbf{t}\in \bigcup _{\textbf{s}\in \mathcal {L}}(\Upsilon )_{\textbf{s}}}|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}|^{\alpha }<\infty\) a.s.

Proof

The proof is divided into two parts. In the first part we show that \(|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \bigcup _{\textbf{s}\in \mathcal {L}}(\Upsilon )_{\textbf{s}}\); in the second, that \(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }<\infty\) a.s.

From \(|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \mathcal {H}\) by continuity we obtain that \(\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \mathcal {G}\). Let \(\epsilon >0\). Observe that \(\mathcal {L}=\mathcal {G}\cup -\mathcal {G}\) and that for every \(0<c\le 1\)

$$\begin{aligned} &\mathbb {P}\bigg (\bigcup _{\textbf{t}\in \mathcal {G}}\Big \{\rho _{(\Upsilon )_{\textbf{t}}}(\textbf{Y})\ge c>\sup \limits _{\textbf{t}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\textbf{Y})\Big \}\bigg ) \\&=\sum _{\textbf{t}\in \mathcal {G}}\mathbb {P}\Big (\rho _{(\Upsilon )_{\textbf{t}}}(\textbf{Y})\ge c>\sup \limits _{\textbf{t}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\textbf{Y})\Big )= 1, \end{aligned}$$

because \(\mathbb {P}(\rho _{\Upsilon }(\textbf{Y})\ge 1)=1\) and \(\Upsilon \subseteq \mathcal {H}\).

Suppose that \(\mathbb {P}(\sum _{\textbf{h}\in -\mathcal {G}}\textbf{1}(\rho _{(\Upsilon )_{\textbf{h}}}(\textbf{Y})>\epsilon )=\infty )>0\). We have that

$$\begin{aligned} &\mathbb {P}\Big (\sum _{\textbf{h}\in -\mathcal {G}}\textbf{1}(\rho _{(\Upsilon )_{\textbf{h}}}(\textbf{Y})>\epsilon )=\infty \Big ) \\&=\sum _{\textbf{t}\in \mathcal {G}}\mathbb {P}\Big (\sum _{\textbf{h}\in -\mathcal {G}}\textbf{1}(\rho _{(\Upsilon )_{\textbf{h}}}(\textbf{Y})>\epsilon )=\infty ,\rho _{(\Upsilon )_{\textbf{t}}}(\textbf{Y})\ge 1>\sup \limits _{\textbf{t}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\textbf{Y})\Big ). \end{aligned}$$

Consider any \(\textbf{t}\in \mathcal {G}\) s.t.

$$\begin{aligned} \mathbb {P}\Big (\sum _{\textbf{h}\in -\mathcal {G}}\textbf{1}(\rho _{(\Upsilon )_{\textbf{h}}}(\textbf{Y})>\epsilon )=\infty ,\rho _{(\Upsilon )_{\textbf{t}}}(\textbf{Y})\ge 1>\sup \limits _{\textbf{t}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\textbf{Y})\Big )>0. \end{aligned}$$

By the time change formula (9) we get

$$\begin{aligned} \infty&=\mathbb {E}\bigg [\sum _{\textbf{h}\in -\mathcal {G}}\textbf{1}\Big (\rho _{(\Upsilon )_{\textbf{h}}}(\textbf{Y})>\epsilon ,\rho _{(\Upsilon )_{\textbf{t}}}(\textbf{Y})\ge 1>\sup \limits _{\textbf{t}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\textbf{Y})\Big )\bigg ]\\&=\sum _{\textbf{h}\in -\mathcal {G}}\mathbb {P}\bigg (\rho _{(\Upsilon )_{\textbf{h}}}(\textbf{Y})>\epsilon ,\rho _{(\Upsilon )_{\textbf{t}}}(\textbf{Y})\ge 1>\sup \limits _{\textbf{t}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\textbf{Y})\bigg )\\&=\sum _{\textbf{h}\in -\mathcal {G}}\int _{\epsilon }^{\infty }\mathbb {P}\Big (r\rho _{(\Upsilon )_{-\textbf{h}}}(\mathbf {\Theta })>1,r\rho _{(\Upsilon )_{\textbf{t}-\textbf{h}}}(\mathbf {\Theta })\ge 1>r\sup \limits _{\textbf{t}-\textbf{h}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\mathbf {\Theta })\Big )d(-r^{-\alpha })\\&{\mathop {=}\limits ^{(r=q\epsilon )}}\epsilon ^{-\alpha }\sum _{\textbf{h}\in -\mathcal {G}}\int _{1}^{\infty }\mathbb {P}\Big (q\epsilon \rho _{(\Upsilon )_{-\textbf{h}}}(\mathbf {\Theta })>1,q\epsilon \rho _{(\Upsilon )_{\textbf{t}-\textbf{h}}}(\mathbf {\Theta })\ge 1>q\epsilon \sup \limits _{\textbf{t}-\textbf{h}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\mathbf {\Theta })\Big )d(-q^{-\alpha })\\&\le \epsilon ^{-\alpha }\sum _{\textbf{h}\in -\mathcal {G}}\int _{1}^{\infty }\mathbb {P}\Big (\rho _{(\Upsilon )_{\textbf{t}-\textbf{h}}}(\mathbf {\Theta })\ge \frac{1}{q\epsilon }>\sup \limits _{\textbf{t}-\textbf{h}\prec \textbf{z},\textbf{z}\in \mathcal {G}}\rho _{(\Upsilon )_{\textbf{z}}}(\mathbf {\Theta })\Big )d(-q^{-\alpha })\\&\le \epsilon ^{-\alpha }\int _{1}^{\infty }d(-q^{-\alpha })=\epsilon ^{-\alpha }<\infty , \end{aligned}$$
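The last two lines of the display rest on the elementary identity \(\int _{\epsilon }^{\infty }d(-r^{-\alpha })=\epsilon ^{-\alpha }\) (equivalently, after the substitution \(r=q\epsilon\), \(\epsilon ^{-\alpha }\int _{1}^{\infty }d(-q^{-\alpha })=\epsilon ^{-\alpha }\)). A quick numerical sketch, with illustrative parameter values:

```python
import numpy as np

# Numerical sanity check (not part of the proof): the measure d(-r^(-alpha)) has
# density alpha * r^(-alpha-1) on (0, inf), so int_eps^inf d(-r^(-alpha)) = eps^(-alpha),
# and the total mass above 1 is int_1^inf d(-q^(-alpha)) = 1.
alpha, eps = 2.0, 0.5                     # illustrative values only

def tail_mass(lo, hi=1e6, n=400_000):
    """int_lo^hi alpha * r^(-alpha-1) dr via the trapezoidal rule on a log grid."""
    r = np.exp(np.linspace(np.log(lo), np.log(hi), n))
    f = alpha * r ** (-alpha - 1.0)
    return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(r))

assert abs(tail_mass(eps) - eps ** (-alpha)) < 1e-3   # equals eps^(-alpha) = 4
assert abs(tail_mass(1.0) - 1.0) < 1e-3               # total mass above 1 equals 1
```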

where we used that for every \(\textbf{t}\in \mathcal {G}\) and \(\textbf{h}\in -\mathcal {G}\) we have that \(\textbf{t}-\textbf{h}\in \mathcal {G}\). Thus, we have a contradiction and so \(\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in -\mathcal {G}\) which by homogeneity and continuity implies that \(|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}|\rightarrow 0\) a.s. as \(|\textbf{t}|\rightarrow \infty\) for \(\textbf{t}\in \bigcup _{\textbf{s}\in \mathcal {L}}(\Upsilon )_{\textbf{s}}\).

Define the random vector \(T^{*}_{\Upsilon ,\mathcal {L}}\) as follows. For every \(\textbf{t}\in \mathcal {L}\) we have

$$\begin{aligned} \{\omega :T_{\Upsilon ,\mathcal {L}}^{*}(\omega )=\textbf{t}\} =\{\omega :|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}(\omega )|-\sup _{\textbf{s}\in \mathcal {L},\textbf{s}\prec \textbf{t}}|\mathbf {\Theta }_{\Upsilon ,\textbf{s}}(\omega )|>0\}\cap \{\omega :|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}(\omega )|-\sup _{\textbf{s}\in \mathcal {L},\textbf{s}\succeq \textbf{t}}|\mathbf {\Theta }_{\Upsilon ,\textbf{s}}(\omega )|=0\} \end{aligned}$$

and for \(\textbf{t}\in \mathbb {Z}^{k}\setminus \mathcal {L}\) we have \(\{\omega :T^{*}_{\Upsilon ,\mathcal {L}}(\omega )=\textbf{t}\}=\emptyset\).

Since the difference of two measurable functions is measurable and the intersection of two measurable sets is measurable, \(\{\omega :T_{\Upsilon ,\mathcal {L}}^{*}(\omega )=\textbf{t}\}\) is a measurable set. Further, for any subset A of \(\mathbb {Z}^{k}\), since \((T_{\Upsilon ,\mathcal {L}}^{*})^{-1}(A)=\cup _{\textbf{t}\in A} \{\omega :T_{\Upsilon ,\mathcal {L}}^{*}(\omega )=\textbf{t}\}\) and the union of countably many measurable sets is measurable, \((T_{\Upsilon ,\mathcal {L}}^{*})^{-1}(A)\) is measurable. Thus, \(T_{\Upsilon ,\mathcal {L}}^{*}\) is a well-defined random vector.

Assume that \(\mathbb {P}(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }=\infty )>0\). We have that

$$\begin{aligned} \mathbb {P}(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }=\infty )&=\sum _{\textbf{i}\in \mathcal {L}}\mathbb {P}(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }=\infty , T^{*}_{\Upsilon ,\mathcal {L}}=\textbf{i}) \\&=\sum _{\textbf{i}\in H}\mathbb {P}(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }=\infty , T^{*}_{\Upsilon ,\mathcal {L}}=\textbf{i}), \end{aligned}$$

where H is the subset of \(\mathcal {L}\) s.t. \(\mathbb {P}(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }=\infty , T^{*}_{\Upsilon ,\mathcal {L}}=\textbf{i})>0\) for every \(\textbf{i}\in H\). Let \(\textbf{i}\in H\), then

$$\begin{aligned} \infty =\mathbb {E}\bigg [\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }\textbf{1}( T^{*}_{\Upsilon ,\mathcal {L}}=\textbf{i})\bigg ]=\sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }\textbf{1}( T^{*}_{\Upsilon ,\mathcal {L}}=\textbf{i})\bigg ]. \end{aligned}$$

Now, we generalise the arguments adopted in the proof of Lemma 3.3 in Wu and Samorodnitsky (2020). For each \(\textbf{i}\in \mathcal {L}\) define a function \(g_{\textbf{i}}:(\bar{\mathbb {R}}^{d})^{\mathbb {Z}^{k}}\rightarrow \mathbb {R}\) as follows. If \((\mathbf {\Theta }_{\Upsilon ,\textbf{s}}, \textbf{s}\in \mathbb {Z}^{k})\) is such that

$$\begin{aligned} |\mathbf {\Theta }_{\Upsilon ,\textbf{j}}|<|\mathbf {\Theta }_{\Upsilon ,\textbf{i}}|\quad \text {for}\ \textbf{j}\prec \textbf{i}\ \text {and}\ \textbf{j}\in \mathcal {L},\quad |\mathbf {\Theta }_{\Upsilon ,\textbf{j}}|\le |\mathbf {\Theta }_{\Upsilon ,\textbf{i}}|\quad \text {for}\ \textbf{j}\succeq \textbf{i}\ \text {and}\ \textbf{j}\in \mathcal {L}, \end{aligned}$$

then set \(g_{\textbf{i}}(\mathbf {\Theta }_{\Upsilon ,\textbf{z}}, \textbf{z}\in \mathbb {Z}^{k})=1\); otherwise set \(g_{\textbf{i}}(\mathbf {\Theta }_{\Upsilon ,\textbf{z}}, \textbf{z}\in \mathbb {Z}^{k})=0\). Then, by the time change formula we have

$$\begin{aligned} \infty&=\sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }\textbf{1}( T^{*}_{\Upsilon ,\mathcal {L}}=\textbf{i})\bigg ]=\sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }g_{\textbf{i}}(\mathbf {\Theta }_{\Upsilon ,\textbf{z}}, \textbf{z}\in \mathbb {Z}^{k})\bigg ]\\&=\sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }g_{\textbf{i}}\bigg (\frac{\mathbf {\Theta }_{\Upsilon ,\textbf{z}}}{\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })}, \textbf{z}\in \mathbb {Z}^{k}\bigg )\bigg ]\\&=\sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [g_{\textbf{i}}(\mathbf {\Theta }_{\Upsilon ,\textbf{z}-\textbf{t}}, \textbf{z}\in \mathbb {Z}^{k})\textbf{1}(\rho _{(\Upsilon )_{-\textbf{t}}}(\mathbf {\Theta })\ne 0)\bigg ]\\&\le \sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [g_{\textbf{i}}(\mathbf {\Theta }_{\Upsilon ,\textbf{z}-\textbf{t}}, \textbf{z}\in \mathbb {Z}^{k})\bigg ] =\sum _{\textbf{t}\in \mathcal {L}}\mathbb {E}\bigg [\textbf{1}( T_{\Upsilon ,\mathcal {L}}^{*}=\textbf{i}-\textbf{t})\bigg ]=1, \end{aligned}$$

which is a contradiction. Notice that we used the fact that by construction, for every \(\textbf{i},\textbf{t}\in \mathcal {L}\), we have \(\textbf{i}-\textbf{t}\in \mathcal {L}\).

Thus, we have \(\sum _{\textbf{t}\in \mathcal {L}}\rho _{(\Upsilon )_{\textbf{t}}}(\mathbf {\Theta })^{\alpha }<\infty\) a.s., and by homogeneity and continuity we have that \(\sum _{\textbf{t}\in \mathcal {L}}\max _{\textbf{s}\in \Upsilon }|\mathbf {\Theta }_{\Upsilon ,\textbf{t}+\textbf{s}}|^{\alpha }<\infty\) a.s., and since \(\Upsilon\) is finite we conclude that \(\sum _{\textbf{t}\in \bigcup _{\textbf{s}\in \mathcal {L}}(\Upsilon )_{\textbf{s}}}|\mathbf {\Theta }_{\Upsilon ,\textbf{t}}|^{\alpha }<\infty\) a.s. \(\square\)

Let \(j\in \mathbb {N}\). By arguments similar to those used in the proof of Proposition 11, for every \(\textbf{t}\in \mathcal {D}_{j}\) (and so \(\textbf{t}\in \mathcal {D}_{j}\cap K_{p}\) for some \(j,p\in \mathbb {N}\)) there exists an \(n_{\textbf{t}}\in \mathbb {N}\) s.t. \(\textbf{t}\in R^{(j)}_{0,\Lambda _{m}}\) for every \(m>n_{\textbf{t}}\). Further, choose \((d_{n})_{n\in \mathbb {N}}\) such that \(d_{n}\) is the largest integer s.t. \(\max _{|\textbf{t}|\le d_{n},\textbf{t}\in \mathcal {D}_{j}}n_{\textbf{t}}< r_{n}\). It is possible to see that \(d_{n}\rightarrow \infty\) as \(n\rightarrow \infty\) and that for every \(2l<r_{n}\)

$$\begin{aligned}& {\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j}|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big )}\\&=\mathbb {P}\Big (\max _{l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} \,|\,\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_{n}}} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j}|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big )\\&\le \mathbb {P}\Big (\max _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_{n}}} }|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j}|\textbf{X}_{\textbf{t}}|>a_{n}^\Lambda x\Big ). \end{aligned}$$

Since

$$\begin{aligned} &{\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>C_{j}a_{n}^\Lambda x\,\big |\,\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x\Big )}\\ &=\frac{\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>C_{j}a_{n}^\Lambda x,\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x\Big )}{\mathbb {P}(\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x)}\\ &\le\frac{\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>C_{j}a_{n}^\Lambda x,\max _{\textbf{s}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{s}}|>C_{j}a_{n}^\Lambda x\Big )}{\mathbb {P}(\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x)}\\ &=\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>C_{j}a_{n}^\Lambda x|\max _{\textbf{s}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{s}}|>C_{j}a_{n}^\Lambda x\Big )\\&\quad\ \frac{\mathbb {P}(\max _{\textbf{s}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{s}}|>C_{j}a_{n}^\Lambda x)}{\mathbb {P}(\max _{\textbf{s}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{s}}|>a_{n}^\Lambda x)}\frac{\mathbb {P}(\max _{\textbf{s}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{s}}|>a_{n}^\Lambda x)}{\mathbb {P}(\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x)}, \end{aligned}$$

by condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) we obtain the following anti-clustering condition:

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>C_{j}a_{n}^\Lambda x\,\big |\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x\Big )=0. \end{aligned}$$
(38)

Now, for any \(z>0\), by the regular variation of \(\rho _{\mathcal {E}_{j}}(\textbf{X})\) (namely of \(\max _{\textbf{s}\in \mathcal {E}_{j}}|\textbf{X}_{\textbf{s}}|\)) and by (38) we have that

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\max _{2l\le |\textbf{t}|\le d_{n}\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{X}_{\textbf{t}}|>za_{n}^\Lambda x\,\big |\rho _{\mathcal {E}_{j}}(\textbf{X})>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

In other words, for any \(\epsilon >0\) and \(z>0\), there exists \(l>0\) such that for all \(w>l\)

$$\begin{aligned} \mathbb {P}\bigg (\max _{l\le |\textbf{t}|\le w\,|\,\textbf{t}\in \mathcal {D}_{j} }|\textbf{Y}_{\mathcal {E}_{j}}(\textbf{t})|>z\bigg )\le \epsilon . \end{aligned}$$

This implies that \(\mathbb {P}(\lim \limits _{|\textbf{t}|\rightarrow \infty }|\textbf{Y}_{\mathcal {E}_{j}}(\textbf{t})|=0)=1\) and so \(\mathbb {P}(\lim \limits _{|\textbf{t}|\rightarrow \infty }|\mathbf {\Theta }_{\mathcal {E}_{j}}(\textbf{t})|=0)=1\) for \(\textbf{t}\in \mathcal {D}_{j}\). The argument holds for every \(j\in \mathbb {N}\). Since \(\tilde{\mathcal {D}}_{j}\subset \mathcal {D}_{j}\), the statement follows from Lemma 30.
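The role of regular variation in passing from the fixed constant \(C_{j}\) to an arbitrary \(z>0\) can be illustrated with an exact Pareto tail, for which the scaling \(\mathbb {P}(|X|>za)/\mathbb {P}(|X|>a)=z^{-\alpha }\) holds at every level a (for a general regularly varying tail it holds only in the limit). A small sketch under this assumption, with illustrative values:

```python
import math

# Illustration (assuming an exact Pareto tail P(|X| > x) = x^(-alpha), x >= 1):
# the tail ratio P(|X| > z*a) / P(|X| > a) equals z^(-alpha) at every level a,
# which is the regular-variation scaling used above to replace C_j by any z > 0.
alpha = 1.7                                   # illustrative tail index
tail = lambda x: x ** (-alpha)

for a in (10.0, 100.0, 1000.0):               # the level grows with n
    for z in (0.5, 2.0, 5.0):
        assert math.isclose(tail(z * a) / tail(a), z ** (-alpha), rel_tol=1e-9)
```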

9 Proofs in Section 6

9.1 Proofs in Section 6.1

Since in Section 6.1 the \(\mathbb {R}^d\)-valued stationary random field \((\textbf{X}_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) is always considered in modulus and since \((|\textbf{X}_\textbf{t}|)_{\textbf{t}\in \mathbb {Z}^{k}}\) is stationary and regularly varying, it is sufficient to prove the results for a non-negative valued stationary random field \((X_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\), as we do for the remaining proofs.

Theorem 31

Consider the following conditions:

  1. (I)

    \((X_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) is a real valued stationary random field whose marginal distribution F does not have an atom at the right endpoint \(x_{F}\).

  2. (II)

    For a sequence \(u_{n}\uparrow x_{F}\) and an integer sequence \(r_{n}\rightarrow \infty\) s.t. \(k_{n}=[|\Lambda _n|/|\Lambda _{r_{n}}|]\rightarrow \infty\) the following anti-clustering condition is satisfied:

    $$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup \limits _{n\rightarrow \infty }\mathbb {P}(\hat{M}^{\Lambda ,X}_{l,r_{n}}>u_{n}\,|\, X_\textbf{0}>u_{n})=0. \end{aligned}$$
    (39)
  3. (III)

    A mixing condition holds:

    $$\begin{aligned} \mathbb {P}\big (\max _{\textbf{t}\in \Lambda _{n} }X_\textbf{t}\le u_{n}\big )-\Big (\mathbb {P}\big (\max _{\textbf{t}\in \Lambda _{r_{n}} }X_\textbf{t}\le u_{n}\big )\Big )^{k_{n}}\rightarrow 0,\quad n\rightarrow \infty , \end{aligned}$$
    (40)

    where \((u_{n})\), \((k_{n})\) and \((r_{n})\) are as in (II).

  4. (IV)

    For any \(\tau \ge 0\) there exists a sequence \((u_{n})=(u_{n}(\tau ))\) s.t. \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|\mathbb {P}(X_\textbf{0}>u_{n}(\tau ))=\tau\) and (II) and (III) are satisfied for these sequences \((u_{n})\).

Then, the following statements hold:

    1. (a)

      If (I) and (II) are satisfied then

      $$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup \limits _{n\rightarrow \infty }\bigg |\theta ^{\Lambda }_{n}-\sum _{j=1}^{\infty }\lambda _{j}\mathbb {P}(\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n}\,|\, X_\textbf{0}>u_{n})\bigg |=0, \end{aligned}$$
      (41)

      and \(\liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}>0\).

    2. (b)

      If (I) and (IV) are satisfied and \(\theta ^{\Lambda }_{b}=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists, then \(\theta ^{\Lambda }_{X}\in (0,1]\) exists and \(\theta ^{\Lambda }_{X}=\theta ^{\Lambda }_{b}\).

Remark 10

Notice that when \(u_{n}=a_{n}x\) then (39) is the (AC\(^{\Lambda }_{\succ }\)) condition.

Proof

Let us first focus on (41). Denote by \(t_{|\Lambda _{r_{n}}|}\) the highest element of \(\Lambda _{r_{n}}\) according to \(\prec\), by \(t_{|\Lambda _{r_{n}}|-1}\) the second highest one,..., by \(t_{1}\) the lowest one. Further, for \(m=1,...,|\Lambda _{r_{n}}|\) let \(\mathcal {M}_{m}:=\max _{j=m,...,|\Lambda _{r_{n}}|}X_{t_{j}}\) and for \(m=|\Lambda _{r_{n}}|+1\) let \(\mathcal {M}_{m}:=0\). Thus, we have \(\mathbb {P}(\mathcal {M}_{|\Lambda _{r_{n}}|+1}>u_{n})=0\) and

$$\begin{aligned} \mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_\textbf{t}> u_{n})=\sum _{m=2}^{|\Lambda _{r_{n}}|+1}\big [\mathbb {P}(\mathcal {M}_{m-1}>u_{n})-\mathbb {P}(\mathcal {M}_{m}>u_{n})\big ]. \end{aligned}$$
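For the reader's convenience, the sum above telescopes: all intermediate terms cancel, and since by construction \(\mathcal {M}_{1}=\max _{\textbf{t}\in \Lambda _{r_{n}}}X_{\textbf{t}}\) (up to the relabeling of \(\Lambda _{r_{n}}\) by \(t_{1},...,t_{|\Lambda _{r_{n}}|}\)) while \(\mathbb {P}(\mathcal {M}_{|\Lambda _{r_{n}}|+1}>u_{n})=0\),

```latex
\sum_{m=2}^{|\Lambda_{r_n}|+1}\Big[\mathbb{P}(\mathcal{M}_{m-1}>u_n)-\mathbb{P}(\mathcal{M}_{m}>u_n)\Big]
  = \mathbb{P}(\mathcal{M}_{1}>u_n)-\mathbb{P}(\mathcal{M}_{|\Lambda_{r_n}|+1}>u_n)
  = \mathbb{P}\Big(\max_{\mathbf{t}\in\Lambda_{r_n}}X_{\mathbf{t}}>u_n\Big).
```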

Consider \(l\in \mathbb {N}\) with \(l\ll |\Lambda _{r_{n}}|\) and for \(m=2,...,|\Lambda _{r_{n}}|\) let

$$\begin{aligned} \mathcal {M}^{\circ }_{m}:=\max _{\textbf{t}\in \{t_{m}-t_{m-1},...,t_{|\Lambda _{r_{n}}|}-t_{m-1}\}}X_\textbf{t}\quad \text { and }\quad \mathcal {M}^{\circ }_{m\setminus l}:=\max _{\textbf{t}\in \{t_{m}-t_{m-1},...,t_{|\Lambda _{r_{n}}|}-t_{m-1}\}\setminus K_{l}}X_\textbf{t}. \end{aligned}$$

We have

$$\begin{aligned} -\mathbb {P}(\mathcal {M}_{m}>u_{n})+\mathbb {P}(\mathcal {M}_{m-1}>u_{n})&=-\mathbb {P}(\mathcal {M}^{\circ }_{m}>u_{n})+\mathbb {P}(\mathcal {M}^{\circ }_{m}\vee X_{\textbf{0}}>u_{n})\\&=\mathbb {P}(\mathcal {M}^{\circ }_{m}\le u_{n},X_{\textbf{0}}>u_{n}). \end{aligned}$$
(42)

Notice that for \(m=|\Lambda _{r_{n}}|+1\) we have that

$$\begin{aligned} -\mathbb {P}(\mathcal {M}_{m}>u_{n})+\mathbb {P}(\mathcal {M}_{m-1}>u_{n})=\mathbb {P}(X_{\textbf{0}}>u_{n}) \end{aligned}$$

and so, when divided by \(|\Lambda _{r_n}|\mathbb {P}(X>u_{n})\), this term is asymptotically negligible.

Now, for each \(j,n\in \mathbb {N}\) consider the points \(t_{m}\), for \(m=2,...,|\Lambda _{r_{n}}|\), such that \(\{t_{m}-t_{m-1},...,t_{|\Lambda _{r_{n}}|}-t_{m-1}\}\cap K_{l}= \mathcal {D}_{j}\cap K_{l}\). For such points we have that (42) is equal to

$$\begin{aligned} &\mathbb{P}(\mathcal {M}^{\circ }_{m}\le u_{n},X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n})+\mathbb {P}(\mathcal {M}^{\circ }_{m}\le u_{n},X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}> u_{n})\\ &=\mathbb {P}(\mathcal {M}^{\circ }_{m}\le u_{n},X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n})=\mathbb {P}(\mathcal {M}^{\circ }_{m\setminus l}\le u_{n},X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n})\\ &=\mathbb {P}(X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n})-\mathbb {P}(\mathcal {M}^{\circ }_{m\setminus l}> u_{n},X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n}). \end{aligned}$$

and that

$$\begin{aligned} \mathbb {P}(\mathcal {M}^{\circ }_{m\setminus l}> u_{n},X_{\textbf{0}}>u_{n},\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n})&\le \mathbb {P}(\mathcal {M}^{\circ }_{m\setminus l}> u_{n},X_{\textbf{0}}>u_{n}) \\&\le \mathbb {P}(\hat{M}_{l,r_{n}}^{X}> u_{n},X_{\textbf{0}}>u_{n}). \end{aligned}$$

For the points \(t_{m}\), for \(m=2,...,|\Lambda _{r_{n}}|\), such that \(\{t_{m}-t_{m-1},...,t_{|\Lambda _{r_{n}}|}-t_{m-1}\}\cap K_{l}\ne \mathcal {D}_{j}\cap K_{l}\) for every \(j\in \mathbb {N}\), we will use that

$$\begin{aligned} \frac{\mathbb {P}(\mathcal {M}^{\circ }_{m}\le u_{n},X_{\textbf{0}}>u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\le \frac{\mathbb {P}(X_{\textbf{0}}>u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}=\frac{1}{|\Lambda _{r_{n}}|}. \end{aligned}$$

Observe that there are finitely many different subsets of \(K_{l}\) and, following the notation of the proof of Theorem 10, we denote their total number by \(\tau _{l}\) and the subsets themselves by \(\mathcal {U}_{l}^{(1)},...,\mathcal {U}_{l}^{(\tau _{l})}\). Further, for \(z=1,...,\tau _{l}\), we let \(\mu _{r_{n}}^{(z)}\) be the number of points \(t_{m}\), \(m=2,...,|\Lambda _{r_{n}}|\), such that \(\{t_{m}-t_{m-1},...,t_{|\Lambda _{r_{n}}|}-t_{m-1}\}\cap K_{l}=\mathcal {U}_{l}^{(z)}\), that is, \(\mu _{r_{n}}^{(z)}=|\{t_{m},m=2,...,|\Lambda _{r_{n}}|:\{t_{m}-t_{m-1},...,t_{|\Lambda _{r_{n}}|}-t_{m-1}\}\cap K_{l}=\mathcal {U}_{l}^{(z)}\}|\). Let \(\mathfrak {I}_{l}^{(z)}=\{i\in \mathbb {N}:\mathcal {U}_{l}^{(z)}=\mathcal {D}_{i}\cap K_{l}\}\). Then, by (i) and (ii) and in particular by point (I) in Proposition 3 we have

$$\begin{aligned} \frac{\mu _{r_{n}}^{(z)}}{|\Lambda _{r_{n}}|}\rightarrow \sum _{i\in \mathfrak {I}_{l}^{(z)}}\lambda _{i},\quad \quad \text {as}\ n\rightarrow \infty . \end{aligned}$$

Notice that if \(\mathcal {U}_{l}^{(z)}\ne \mathcal {D}_{i}\cap K_{l}\) for every \(i\in \mathbb {N}\), then \(\mathfrak {I}_{l}^{(z)}\) is empty and so \(\frac{\mu _{r_{n}}^{(z)}}{|\Lambda _{r_n}|}\rightarrow 0\), as \(n\rightarrow \infty\); we let \(Z_{l}\) be the subset of \(\{1,...,\tau _{l}\}\) of such indices z. Further, for \(z\in \{1,...,\tau _{l}\}\setminus Z_{l}\) we let \(\mathcal {D}_{j(z)}\) denote the (or one of the) \(\mathcal {D}_{i}\) such that \(\mathcal {U}_{l}^{(z)}= \mathcal {D}_{i}\cap K_{l}\). Thus,

$$\begin{aligned} \sum _{z\in \{1,...,\tau _{l}\}\setminus Z_{l}}\frac{\mu _{r_{n}}^{(z)}}{|\Lambda _{r_n}|}{\mathop {\rightarrow }\limits ^{n\rightarrow \infty }}\sum _{z\in \{1,...,\tau _{l}\}\setminus Z_{l}}\sum _{i\in \mathfrak {I}_{l}^{(z)}}\lambda _{i}= \sum _{i=1}^{\infty }\lambda _{i}=1. \end{aligned}$$

Hence, we have that

$$\begin{aligned} \theta ^{\Lambda }_{n}=\frac{\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_\textbf{t}>u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}= \sum _{z\in \{1,...,\tau _{l}\}\setminus Z_{l}}\frac{\mu _{r_{n}}^{(z)}}{|\Lambda _{r_{n}}|}\mathbb {P}(\max _{\textbf{t}\in \mathcal {D}_{j(z)}\cap K_{l} }X_\textbf{t}\le u_{n}|X_{\textbf{0}}>u_{n})+A_{l,n} \end{aligned}$$

where \(A_{l,n}\) is such that

$$\begin{aligned} |A_{l,n}|\le \sum _{z\in \{1,...,\tau _{l}\}\setminus Z_{l}}\frac{\mu _{r_{n}}^{(z)}}{|\Lambda _{r_{n}}|}\mathbb {P}(\hat{M}_{l,r_{n}}^{X}> u_{n}|X_{\textbf{0}}>u_{n}) + \sum _{z\in Z_{l}}\frac{\mu _{r_{n}}^{(z)}}{|\Lambda _{r_{n}}|}. \end{aligned}$$

Therefore, applying (39) we obtain (41).

To show that \(\liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}>0\) we proceed as follows. Consider a set of points from \(\{t_{1},...,t_{|\Lambda _{r_{n}}|}\}\) that are at supremum distance at least 2l from each other; denote it by \(W_{n}\) and its points (in increasing order according to \(\succ\)) by \(w_{1},...,w_{p_{n}}\) for some \(p_{n}\in \mathbb {N}\). Observe that the sets

$$\begin{aligned} \left\{ X_{w_{m}}>u_{n},\max _{\textbf{t}\succeq w_{m},\textbf{t}\in \{t_{1},...,t_{|\Lambda _{r_{n}}|}\}\setminus K_{l-1}(w_{m})}X_{\textbf{t}}\le u_{n} \right\} \,, \end{aligned}$$

for \(m=1,...,p_{n}\), are disjoint and their union is a subset of \(\{\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}> u_{n}\}\). Observe also that \(|\Lambda _{r_{n}}|\ge |W_{n}|\ge \lfloor |\Lambda _{r_{n}}|/(2l)^{k}\rfloor\). Hence, we have

$$\begin{aligned} \theta ^{\Lambda }_{n}&=\frac{\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}>u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&\ge \sum _{m=1}^{p_{n}}\frac{\mathbb {P}(X_{w_{m}}>u_{n},\max _{\textbf{t}\succeq w_{m},\textbf{t}\in \{t_{1},...,t_{|\Lambda _{r_{n}}|}\}\setminus K_{l-1}(w_{m})}X_\textbf{t}\le u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&=\sum _{m=1}^{p_{n}}\frac{\mathbb {P}(X_\textbf{0}>u_{n},\max _{\textbf{t}\succeq \textbf{0},\textbf{t}\in \{t_{1}-w_{m},...,t_{|\Lambda _{r_{n}}|}-w_{m}\}\setminus K_{l-1}}X_\textbf{t}\le u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&\ge \sum _{m=1}^{p_{n}}\frac{\mathbb {P}(X_{\textbf{0}}>u_{n},\hat{M}_{l-1,r_{n}}\le u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&=\frac{p_{n}}{|\Lambda _{r_{n}}|}[1-\mathbb {P}(\hat{M}_{l-1,r_{n}}> u_{n}|X_{\textbf{0}}>u_{n})]\\&\ge \frac{\lfloor |\Lambda _{r_{n}}|/(2l)^{k}\rfloor }{|\Lambda _{r_{n}}|}[1-\mathbb {P}(\hat{M}_{l-1,r_{n}}> u_{n}|X_{\textbf{0}}>u_{n})]. \end{aligned}$$

Then

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\ge \frac{1}{(2l)^{k}}[1-\limsup \limits _{n\rightarrow \infty }\mathbb {P}(\hat{M}_{l-1,r_{n}}> u_{n}|X_{\textbf{0}}>u_{n})]. \end{aligned}$$

By (39), the term in brackets is positive for l large enough, so \(\liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}>0\). This proves point (a).

Now, assume that \(\theta ^{\Lambda }_{b}=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists. We need to show that \(\theta ^{\Lambda }_{X}=\theta ^{\Lambda }_{b}\). By Taylor expansion we have

$$\begin{aligned} (\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}\le u_{n}))^{k_{n}}&=\exp (k_{n}\log (1-\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}> u_{n})))\\&=\exp \Big (-\frac{|\Lambda _{n}|}{|\Lambda _{r_{n}}|}\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}> u_{n})(1+o(1))\Big )\\&=\exp \Big (-\frac{\tau }{|\Lambda _{r_{n}}|}\frac{\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}> u_{n})}{\mathbb {P}(X>u_{n})}(1+o(1))\Big )\\&=\exp \Big (-\tau \theta ^\Lambda _{n}(1+o(1))\Big )\rightarrow e^{-\theta ^{\Lambda }_{b}\tau },\quad \text {as}\ n\rightarrow \infty . \end{aligned}$$
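The second equality can be spelled out as follows (a standard expansion, written with the notation already in force):

```latex
\begin{aligned}
x_n &:= \mathbb{P}\Big(\max_{\mathbf{t}\in\Lambda_{r_n}}X_{\mathbf{t}}>u_n\Big)
  \le |\Lambda_{r_n}|\,\mathbb{P}(X>u_n)
  = \frac{|\Lambda_n|\,\mathbb{P}(X>u_n)}{k_n}\,(1+o(1)) \to 0,\\
k_n\log(1-x_n) &= -\,k_n x_n\,(1+o(1))
  = -\,\frac{|\Lambda_n|}{|\Lambda_{r_n}|}\,x_n\,(1+o(1)),
\end{aligned}
```

using \(|\Lambda _{n}|\mathbb {P}(X>u_{n})\rightarrow \tau\), \(k_{n}\rightarrow \infty\), and \(k_{n}=[|\Lambda _n|/|\Lambda _{r_{n}}|]\).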

Hence, by the mixing condition (40) we have

$$\begin{aligned} \mathbb {P}(\max _{\textbf{t}\in \Lambda _{n} }X_{\textbf{t}}\le u_{n})&=[\mathbb {P}(\max _{\textbf{t}\in \Lambda _{n} }X_{\textbf{t}}\le u_{n})-(\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}\le u_{n}))^{k_{n}}]\\ &\quad +(\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}\le u_{n}))^{k_{n}}\rightarrow e^{-\theta ^{\Lambda }_{b}\tau },\quad \text {as}\ n\rightarrow \infty . \end{aligned}$$

Since this holds for any \(\tau >0\), we conclude that \(\theta ^{\Lambda }_{X}=\theta ^{\Lambda }_{b}\). \(\square\)

Proof of Theorem 23

Recall (41) and let \(\theta ^{(l)}:=\lim \limits _{n\rightarrow \infty }\sum _{j=1}^{\infty }\lambda _{j}\mathbb {P}(\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }X_\textbf{t}\le u_{n}\,|\, X_{\textbf{0}}>u_{n})\), for \(l\in \mathbb {N}\). By the continuous mapping theorem (and noticing that the sum is actually a finite sum since there are finitely many different combinations of points inside \(K_{l}\) for a given l) we have \(\theta ^{(l)}=\sum _{j=1}^{\infty }\lambda _{j}\mathbb {P}(\max _{\textbf{t}\in \mathcal {D}_{j}\cap K_{l} }Y_{\textbf{t}}\le 1)\) and by monotonicity of the probability measure we obtain \(\theta ^{(l)}\downarrow \sum _{j=1}^{\infty }\lambda _{j}\mathbb {P}(\sup _{\textbf{t}\in \mathcal {D}_{j} }Y_{\textbf{t}}\le 1)\) as \(l\rightarrow \infty\). Given that \(\lim \limits _{l\rightarrow \infty }\theta ^{(l)}=\theta ^{\Lambda }_{b}\) exists, Theorem 31 point (a) ensures that for \((u_{n})=(u_{n}(\tau ))\) and some \(\tau >0\) we have that \(\theta ^{\Lambda }_{b}=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists and is positive. Then, from Theorem 31 point (b) for \((u_{n})=(u_{n}(\tau ))\) and arbitrary \(\tau >0\) we obtain that \(\theta ^{\Lambda }_{X}\) exists, is positive, and is equal to \(\theta ^{\Lambda }_{b}\); hence we obtain point (2). Moreover, from these arguments we immediately obtain the first equality in (13), while for the others, using \(\Theta _{\textbf{0}}{\mathop {=}\limits ^{a.s.}}1\), we have that

$$\begin{aligned} \theta ^{\Lambda }_{b}&=\sum _{j=1}^{\infty }\lambda _{j}\mathbb {P}(Y\sup _{\textbf{t}\in \mathcal {D}_{j} }|\mathbf \Theta _\textbf{t}|\le 1)=\sum _{j=1}^{\infty }\lambda _{j}\bigg (1-\int _{1}^{\infty }\mathbb {P}(y\sup _{\textbf{t}\in \mathcal {D}_{j}}\Theta _{\textbf{t}}> 1)d(-y^{-\alpha })\bigg )\\&=\sum _{j=1}^{\infty }\lambda _{j}\bigg (1-\int _{0}^{1}\mathbb {P}(\sup _{\textbf{t}\in \mathcal {D}_{j}}\Theta _{\textbf{t}}^{\alpha }> u)du\bigg )=\sum _{j=1}^{\infty }\lambda _{j}\bigg (1-\mathbb {E}\Big [\sup _{\textbf{t}\in \mathcal {D}_{j}}\Theta _{\textbf{t}}^{\alpha }\wedge 1 \Big ]\bigg )\\&=\sum _{j=1}^{\infty }\lambda _{j}\bigg (\mathbb {E}\Big [\Big (1-\sup _{\textbf{t}\in \mathcal {D}_{j}}\Theta _{\textbf{t}}^{\alpha }\Big )_{+} \Big ]\bigg )=\sum _{j=1}^{\infty }\lambda _{j}\bigg (\mathbb {E}\Big [\sup _{\textbf{t}\in \mathcal {D}_{j}\cup \{\textbf{0}\}}\Theta _{\textbf{t}}^{\alpha }-\sup _{\textbf{t}\in \mathcal {D}_{j}}\Theta _{\textbf{t}}^{\alpha } \Big ]\bigg ). \end{aligned}$$
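The passage from the integral in u to the expectation, and the subsequent positive-part step, rest on two elementary identities for a nonnegative random variable Z, here \(Z=\sup _{\textbf{t}\in \mathcal {D}_{j}}\Theta _{\textbf{t}}^{\alpha }\); the first follows by Tonelli's theorem, the second is a pointwise check:

```latex
\begin{aligned}
\int_{0}^{1}\mathbb{P}(Z>u)\,du
  &= \mathbb{E}\Big[\int_{0}^{1}\mathbf{1}\{Z>u\}\,du\Big]
   = \mathbb{E}[\,Z\wedge 1\,],\\
1 - Z\wedge 1 &= (1-Z)_{+} = 1\vee Z - Z .
\end{aligned}
```

Since \(\Theta _{\textbf{0}}=1\) a.s., \(1\vee Z=\sup _{\textbf{t}\in \mathcal {D}_{j}\cup \{\textbf{0}\}}\Theta _{\textbf{t}}^{\alpha }\), which yields the final equality.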

\(\square\)

Theorem 32

Consider the following conditions:

  1. (I)

\((X_\textbf{t})_{\textbf{t}\in \mathbb {Z}^{k}}\) is a real-valued stationary random field whose marginal distribution F does not have an atom at the right endpoint \(x_{F}\).

  2. (II)

    For a sequence \(u_{n}\uparrow x_{F}\) and an integer sequence \(r_{n}\rightarrow \infty\) s.t. \(k_{n}=[|\Lambda _n|/|\Lambda _{r_{n}}|]\rightarrow \infty\) the following anti-clustering condition is satisfied: for every \(j\in I^*\)

    $$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup \limits _{n\rightarrow \infty }\mathbb {P}(\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>u_{n}\,|\, \max \limits _{\textbf{t}\in \mathcal {E}_j}X_\textbf{t}>u_{n})=0. \end{aligned}$$
    (43)
  3. (III)

    A mixing condition holds:

    $$\begin{aligned} \mathbb {P}(\max _{\textbf{t}\in \Lambda _{n} }X_\textbf{t}\le u_{n})-(\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_\textbf{t}\le u_{n}))^{k_{n}}\rightarrow 0,\quad n\rightarrow \infty , \end{aligned}$$
    (44)

    where \((u_{n})\), \((k_{n})\) and \((r_{n})\) are as in (II).

  4. (IV)

    For any \(\tau \ge 0\) there exists a sequence \((u_{n})=(u_{n}(\tau ))\) s.t. \(\lim \limits _{n\rightarrow \infty }|\Lambda _n|\mathbb {P}(X_\textbf{0}>u_{n}(\tau ))=\tau\) and (II) and (III) are satisfied for these sequences \((u_{n})\). Then, the following statements hold:

    1. (a)

      If (I) and (II) are satisfied then

      $$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup \limits _{n\rightarrow \infty }\bigg |\theta ^{\Lambda }_{n}-\sum _{h\in I^*,h<m_{4l}}\mathbb {P}(\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }X_\textbf{t}\le u_{n}|\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})\frac{|S'_{h,4l}|}{|\Lambda _{r_n}|}\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})}{\mathbb {P}(X>u_{n})}\bigg |=0, \end{aligned}$$
      (45)

      and \(\liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}>0\).

    2. (b)

      If (I) and (IV) are satisfied and \(\theta ^{\Lambda }_{b}=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists, then \(\theta ^{\Lambda }_{X}\in (0,1]\) exists and \(\theta ^{\Lambda }_{X}=\theta ^{\Lambda }_{b}\).

Remark 11

Notice that when \(u_{n}=a_{n}x\) then (43) is the (AC\(^{\Lambda }_{\succ ,I^*}\)) condition.

Proof

Denote by \(t_{|\Lambda _{r_{n}}|}\) the highest element of \(\Lambda _{r_{n}}\) according to \(\prec\), by \(t_{|\Lambda _{r_{n}}|-1}\) the second highest one,..., by \(t_{1}\) the lowest one. Consider the \(s\), \(\tilde{s}\), and \(\hat{s}\) introduced in the proof of Theorem 17. Let \(l\in \mathbb {N}\). For \(m=1,...,\hat{u}\) let \(\mathfrak {M}_{m}:=\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}}}X_\textbf{t}\) and for \(m=\hat{u}+1\) let \(\mathfrak {M}_{m}:=0\). Thus, we have \(\mathbb {P}(\mathfrak {M}_{\hat{u}+1}>u_{n})=0\) and

$$\begin{aligned} \mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_\textbf{t}> u_{n})=\sum _{m=2}^{\hat{u}+1}\big [\mathbb {P}(\mathfrak {M}_{m-1}>u_{n})-\mathbb {P}(\mathfrak {M}_{m}>u_{n})\big ]. \end{aligned}$$

For \(m=2,...,\hat{u}\) let \(\mathfrak {M}^{\circ }_{m}:=\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}}X_\textbf{t}\) and \(\mathfrak {M}^{\circ }_{m\setminus l}:=\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\setminus K_{l}}X_\textbf{t}\). We have

$$\begin{aligned} \begin{aligned} -\mathbb {P}(\mathfrak {M}_{m}>u_{n})+\mathbb {P}(\mathfrak {M}_{m-1}>u_{n})&=-\mathbb {P}(\mathfrak {M}^{\circ }_{m}>u_{n})+\mathbb {P}(\mathfrak {M}^{\circ }_{m}\vee \max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n})\\ &=\mathbb {P}(\mathfrak {M}^{\circ }_{m}\le u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n}) \end{aligned} \end{aligned}$$
(46)

Notice that for \(m=\hat{u}+1\) we have that

$$\begin{aligned} -\mathbb {P}(\mathfrak {M}_{m}>u_{n})+\mathbb {P}(\mathfrak {M}_{m-1}>u_{n})=\mathbb {P}(\max _{\textbf{t}\in \hat{\mathcal {E}}_{\hat{u}}}X_\textbf{t}>u_{n})\le \mathbb {P}(\max _{\textbf{t}\in K_{l},\textbf{t}\succeq \textbf{0}}X_\textbf{t}>u_{n}) \end{aligned}$$

and so, when divided by \(|\Lambda _{r_n}|\mathbb {P}(X>u_{n})\), it is bounded by \(|K_{l}|/|\Lambda _{r_n}|\) and hence asymptotically negligible, for every fixed \(l\in \mathbb {N}\).

Moreover, we have that (46) is equal to

$$\begin{aligned}&{\mathbb {P}(\mathfrak {M}^{\circ }_{m}\le u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}\le u_{n})}\\&\quad+\ \mathbb {P}(\mathfrak {M}^{\circ }_{m}\le u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}> u_{n})\\ &=\mathbb {P}(\mathfrak {M}^{\circ }_{m}\le u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}\le u_{n})\\ &=\mathbb {P}(\mathfrak {M}^{\circ }_{m\setminus 2l}\le u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}\le u_{n})\\ &=\mathbb {P}(\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}\le u_{n})\\&\quad-\ \mathbb {P}(\mathfrak {M}^{\circ }_{m\setminus 2l}> u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}\le u_{n}) \end{aligned}$$

where

$$\begin{aligned} &\mathbb {P}(\mathfrak {M}^{\circ }_{m\setminus 2l}> u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n},\max _{\textbf{t}\in \cup _{i=m}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-\hat{s}_{m-1}}\cap K_{2l}}X_\textbf{t}\le u_{n})\\ &\le \mathbb {P}(\mathfrak {M}^{\circ }_{m\setminus 2l}> u_{n},\max _{\textbf{t}\in \hat{\mathcal {E}}_{m-1}}X_\textbf{t}>u_{n})\le \mathbb {P}(\hat{M}_{2l,r_{n}}^{\Lambda ,X,(j)}> u_{n},\max _{\textbf{t}\in \mathcal {E}_{j}}X_\textbf{t}>u_{n}), \end{aligned}$$

for some \(j\in I^*\) with \(j<m_{4l}\). Hence, we have that

$$\begin{aligned} \theta ^{\Lambda }_{n}&=\frac{\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_\textbf{t}>u_{n})}{|\Lambda _{r_n}|\mathbb {P}(X>u_{n})} \\&= \sum _{i\in u}\mathbb {P}(\max _{\textbf{t}\in \tilde{\mathcal {D}}_{j_i}\cap K_{2l} }X_\textbf{t}\le u_{n}|\max _{\textbf{t}\in \mathcal {E}_{j_i}}X_\textbf{t}>u_{n})\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{j_i}}X_\textbf{t}>u_{n})}{|\Lambda _{r_n}|\mathbb {P}(X>u_{n})}+B_{l,n} \end{aligned}$$

where the absolute value of \(B_{l,n}\) is such that

$$\begin{aligned}& |B_{l,n}|\le \frac{\sum _{h\in I^*,h < m_{4l}}|\bar{S}_{h,4l}||\mathcal {E}_{h}|+\tilde{u}}{|\Lambda _{r_n}|}\\ &+ \sum _{i\in I^*,i < m_{4l}}\mathbb {P}(\hat{M}_{l,r_{n}}^{\Lambda ,X,(j_i)} > u_{n}|\max _{\textbf{t}\in \mathcal {E}_{j_i}}X_\textbf{t} >u_{n})\frac{|S'_{j_i,4l}|\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{j_i}}X_\textbf{t}>u_{n})}{|\Lambda _{r_n}|\mathbb {P}(X>u_{n})}. \end{aligned}$$

By (43) and by the same arguments as the ones used in the proof of Theorem 17, see in particular (28), (29), and (30), we obtain that \(\lim \limits _{l\rightarrow \infty }\limsup \limits _{n\rightarrow \infty }|B_{l,n}|=0\). Moreover, since

$$\begin{aligned} &\sum _{i\in u}\mathbb {P}(\max _{\textbf{t}\in \tilde{\mathcal {D}}_{j_i}\cap K_{2l} }X_\textbf{t}\le u_{n}|\max _{\textbf{t}\in \mathcal {E}_{j_i}}X_\textbf{t}>u_{n})\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{j_i}}X_\textbf{t} > u_{n})}{|\Lambda _{r_n}|\mathbb {P}(X > u_{n})}\\& =\sum _{h\in I^*,h < m_{4l}}|S'_{h,4l}|\mathbb {P}(\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }X_\textbf{t}\le u_{n}|\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t} > u_{n})\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t} > u_{n})}{|\Lambda _{r_n}|\mathbb {P}(X > u_{n})}. \end{aligned}$$

we obtain (45).

To show that \(\liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}>0\) we proceed as follows. Consider \(S'_{h,4l}\), for some \(h\in I^*\) with \(h<m_{4l}\). Let \(W^{(h)}_n\) be a maximal set of points in \(S'_{h,4l}\) that are at supremum distance at least 4l from each other. Observe that the sets

$$\begin{aligned} \left\{ \max _{\textbf{t}\in \mathcal {E}_{h}}X_{\textbf{t}+s_{m}}>u_{n},\max _{\textbf{t}\succ s_m,\textbf{t}\in \cup _{i=1}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}}\setminus K_{2l}}X_\textbf{t}\le u_{n} \right\} ,\quad s_m\in W^{(h)}_n, \end{aligned}$$

are disjoint and their union is a subset of \(\{\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}> u_{n}\}\). This is because

$$\begin{aligned} \bigg \{\max _{\textbf{t}\in K_{2l}\cap \Lambda _{r_{n}}}X_{\textbf{t}+s_{m}}>u_{n},\max _{\textbf{t}\succ s_m,\textbf{t}\in \cup _{i=1}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}}\setminus K_{2l}}X_\textbf{t}\le u_{n} \bigg \},\quad s_m\in W^{(h)}_n, \end{aligned}$$

are disjoint and their union is a subset of \(\{\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}> u_{n}\}\) and because

$$\begin{aligned} \left\{ \max _{\textbf{t}\in K_{2l}\cap \Lambda _{r_{n}}}X_{\textbf{t}+s_{m}}>u_{n}\right\} \supset \left\{ \max _{\textbf{t}\in \mathcal {E}_{h}}X_{\textbf{t}+s_{m}}>u_{n}\right\} . \end{aligned}$$

Observe that \(|W^{(h)}_n|\ge \frac{|S'_{h,4l}|}{(4l)^{k}}\). Then, we have

$$\begin{aligned} \theta ^{\Lambda }_{n}&=\frac{\mathbb {P}(\max _{\textbf{t}\in \Lambda _{r_{n}} }X_{\textbf{t}}>u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\ge \sum _{s_m\in W^{(h)}_n}\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_{\textbf{t}+s_{m}}>u_{n},\max _{\textbf{t}\succ s_m,\textbf{t}\in \cup _{i=1}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}}\setminus K_{2l}}X_\textbf{t}\le u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&=\sum _{s_m\in W^{(h)}_n}\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n},\max _{\textbf{t}\succ 0,\textbf{t}\in \cup _{i=1}^{\hat{u}}(\hat{\mathcal {E}}_{i})_{\hat{s}_{i}-s_m}\setminus K_{2l}}X_\textbf{t}\le u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&\ge |W^{(h)}_n|\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n},\hat{M}^{\Lambda ,X,(h)}_{2l,r_{n}}\le u_{n})}{|\Lambda _{r_{n}}|\mathbb {P}(X>u_{n})}\\&=\Big (1-\mathbb {P}(\hat{M}^{\Lambda ,X,(h)}_{2l,r_{n}}> u_{n}\,|\,\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})\Big )\frac{|W^{(h)}_n|}{|\Lambda _{r_{n}}|}\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})}{\mathbb {P}(X>u_{n})}. \end{aligned}$$

Then

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\ge \frac{\gamma ^*_h c_h}{(4l)^{k}}[1-\limsup \limits _{n\rightarrow \infty }\mathbb {P}(\hat{M}^{\Lambda ,X,(h)}_{2l,r_{n}}> u_{n}\,|\,\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})]>0, \end{aligned}$$

for l large enough, by (43). This proves point (a). The proof of point (b) follows by the same arguments as in the proof of Theorem 31 point (b). \(\square\)

Proof of Theorem 24

Recall (45) and let

$$\begin{aligned} \theta ^{(l)}:=\lim \limits _{n\rightarrow \infty }\sum _{h\in I^*,h<m_{4l}}\mathbb {P}(\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }X_\textbf{t}\le u_{n}|\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})\frac{|S'_{h,4l}|}{|\Lambda _{r_n}|}\frac{\mathbb {P}(\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n})}{\mathbb {P}(X>u_{n})}, \end{aligned}$$

for \(l\in \mathbb {N}\). Using the arguments in the proof of Theorem 17, we have that

$$\begin{aligned} \theta ^{(l)}&=\lim \limits _{n\rightarrow \infty }\sum _{h\in I^*,h<m_{4l}}\gamma ^*_h\frac{\mathbb {P}(\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }X_\textbf{t}\le u_{n},\max _{\textbf{t}\in \mathcal {E}_{h}}X_\textbf{t}>u_{n},\rho _{\mathcal {E}_{h}}(X)>\frac{u_{n}}{D_{h}})}{\mathbb {P}(X>u_{n})}\\&=\sum _{h\in I^*,h<m_{4l}}\gamma ^*_{h}c_{h}D_{h}^{\alpha }\mathbb {P}\Big (\max _{\textbf{t}\in \mathcal {E}_{h}}Y_{\mathcal {E}_{h},\textbf{t}}>D_{h},\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }Y_{\mathcal {E}_{h},\textbf{t}}\le D_{h}\Big )\\&=\sum _{h\in I^*,h<m_{4l}}\int _{1}^{\infty }\gamma ^*_{h}c_{h}D_{h}^{\alpha }\mathbb {P}\Big (y\max _{\textbf{t}\in \mathcal {E}_{h}}\Theta _{\mathcal {E}_{h},\textbf{t}}>D_{h},y\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }\Theta _{\mathcal {E}_{h},\textbf{t}}\le D_{h}\Big )d(-y^{-\alpha }), \end{aligned}$$

and

$$\begin{aligned} \theta ^{\Lambda }_{b}&=\lim \limits _{l\rightarrow \infty }\theta ^{(l)}=\lim \limits _{l\rightarrow \infty }\sum _{h\in I^*,h<m_{4l}}\int _{1}^{\infty }\gamma ^*_{h}c_{h}D_{h}^{\alpha }\mathbb {P}\Big (y\max _{\textbf{t}\in \mathcal {E}_{h}}\Theta _{\mathcal {E}_{h},\textbf{t}}>D_{h},y\max _{\textbf{t}\in \tilde{\mathcal {D}}_{h}\cap K_{2l} }\Theta _{\mathcal {E}_{h},\textbf{t}}\le D_{h}\Big )d(-y^{-\alpha })\\&=\sum _{h\in I^*}\int _{1}^{\infty }\gamma ^*_{h}c_{h}D_{h}^{\alpha }\mathbb {P}\Big (y\max _{\textbf{t}\in \mathcal {E}_{h}}\Theta _{\mathcal {E}_{h},\textbf{t}}>D_{h},y\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{h} }\Theta _{\mathcal {E}_{h},\textbf{t}}\le D_{h}\Big )d(-y^{-\alpha })\\&=\sum _{h\in I^*}\int _{0}^{\infty }\gamma ^*_{h}c_{h}D_{h}^{\alpha }\mathbb {P}\Big (y\max _{\textbf{t}\in \mathcal {E}_{h}}\Theta _{\mathcal {E}_{h},\textbf{t}}>D_{h},y\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{h} }\Theta _{\mathcal {E}_{h},\textbf{t}}\le D_{h}\Big )d(-y^{-\alpha }). \end{aligned}$$
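The extension of the integration range from \([1,\infty )\) to \((0,\infty )\) in the last step is harmless: by Corollary 16, \(\max _{\textbf{t}\in \mathcal {E}_{h}}\Theta _{\mathcal {E}_{h},\textbf{t}}\le D_{h}\) a.s., so for \(y<1\)

```latex
\mathbb{P}\Big(y\max_{\mathbf{t}\in\mathcal{E}_{h}}\Theta_{\mathcal{E}_{h},\mathbf{t}}>D_{h}\Big)
\le \mathbb{P}\Big(\max_{\mathbf{t}\in\mathcal{E}_{h}}\Theta_{\mathcal{E}_{h},\mathbf{t}}>D_{h}\Big)=0,
```

that is, the integrand vanishes on \((0,1)\).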

Given that \(\lim \limits _{l\rightarrow \infty }\theta ^{(l)}=\theta ^{\Lambda }_{b}\) exists, Theorem 32 point (a) ensures that for \((u_{n})=(u_{n}(\tau ))\) and some \(\tau >0\) we have that \(\theta ^{\Lambda }_{b}=\lim \limits _{n\rightarrow \infty }\theta ^{\Lambda }_{n}\) exists and is positive. Thus, from Theorem 32 point (b) for \((u_{n})=(u_{n}(\tau ))\) and arbitrary \(\tau >0\) we obtain that \(\theta ^{\Lambda }_{X}\) exists, is positive, and is equal to \(\theta ^{\Lambda }_{b}\). From these arguments we obtain the first equality in (16), while for the others we have that

$$\begin{aligned} \theta ^{\Lambda }_{b}&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}D_{i}^{\alpha }\mathbb {P}\Big (Y\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta _{\mathcal {E}_{i},\textbf{t}}>D_{i},Y\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta _{\mathcal {E}_{i},\textbf{t}}\le D_{i}\Big )\\&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}D_{i}^{\alpha }\bigg (\mathbb {P}\Big (Y\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta _{\mathcal {E}_{i},\textbf{t}}>D_{i}\Big )-\mathbb {P}\Big (Y\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta _{\mathcal {E}_{i},\textbf{t}}>D_{i},Y\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta _{\mathcal {E}_{i},\textbf{t}}> D_{i}\Big )\bigg )\\&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}D_{i}^{\alpha }\int _{1}^{\infty }\mathbb {P}\Big (y\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta _{\mathcal {E}_{i},\textbf{t}}>D_{i}\Big )-\mathbb {P}\Big (y\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta _{\mathcal {E}_{i},\textbf{t}}>D_{i},y\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta _{\mathcal {E}_{i},\textbf{t}}> D_{i}\Big )d(-y^{-\alpha })\\&{\mathop {=}\limits ^{u=y^{-\alpha }}}\sum _{i\in I^*}\gamma ^*_{i}c_{i}D_{i}^{\alpha }\int _{0}^{1}\mathbb {P}\Big (\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}>uD^{\alpha }_{i}\Big )-\mathbb {P}\Big (\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}>uD^{\alpha }_{i},\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}> uD_{i}^{\alpha }\Big )du\\&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}D_{i}^{\alpha }\bigg (\mathbb {E}\Big [D^{-\alpha }_{i}\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge 1\Big ]-\mathbb {E}\Big [D_{i}^{-\alpha }\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge D_{i}^{-\alpha }\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge 1\Big ]\bigg )\\&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}\bigg (\mathbb {E}\Big [\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge D_{i}^{\alpha }\Big ]-\mathbb {E}\Big [\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge \sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge D_{i}^{\alpha }\Big ]\bigg )\\&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}\bigg (\mathbb {E}\Big [\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\Big ]-\mathbb {E}\Big [\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\wedge \sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\Big ]\bigg )\\&=\sum _{i\in I^*}\gamma ^*_{i}c_{i}\mathbb {E}\Big [\Big (\max _{\textbf{t}\in \mathcal {E}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}- \sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\Big )_{+}\Big ]=\sum _{i\in I^*}\gamma ^*_{i}c_{i}\mathbb {E}\Big [\sup _{\textbf{t}\in \mathcal {E}_{i}\cup \tilde{\mathcal {D}}_{i}}\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}-\sup _{\textbf{t}\in \tilde{\mathcal {D}}_{i} }\Theta ^{\alpha }_{\mathcal {E}_{i},\textbf{t}}\Big ], \end{aligned}$$

where we used the fact that since \(\rho _{\mathcal {E}_{i}}(\Theta )=1\) a.s., then \(\max _{\textbf{s}\in \mathcal {E}_{i}}\Theta (\textbf{s})\le D_{i}\) a.s. by Corollary 16. Moreover, applying a change of variable and the time change formula we get the representation

$$\begin{aligned} \sum _{j\in I^*}\gamma ^*_{j}c_{j}\mathbb {E}\bigg [\frac{\sup _{\textbf{t}\in \mathcal {H}_{j}}\Theta _{\mathcal {E}_{j},\textbf{t}}^{\alpha }}{\sum _{\textbf{s}\in \mathcal {H}_{j}}|\Theta _{\mathcal {E}_{j},\textbf{s}}|^{\alpha }} \bigg ]=\sum _{j\in I^*}\gamma ^*_{j}c_{j}\mathbb {E}\bigg [\sup \limits _{\textbf{t}\in \mathcal {H}_{j}}{Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}}^{\alpha }\bigg ]. \end{aligned}$$

Finally, by the time-change formula applied to

$$\begin{aligned} f_{t}((\Theta _{\mathcal {E}_{j},\textbf{s}})_{\textbf{s}\in \mathcal {H}_{j}})=\frac{\max _{\textbf{z}\in (\mathcal {E}_{j})_{\textbf{t}}}\Theta _{\mathcal {E}_{j},\textbf{z}}^{\alpha }}{\sum _{\textbf{s}\in \mathcal {H}_{j}}\Theta _{\mathcal {E}_{j},\textbf{s}}^{\alpha }}\textbf{1}(\textbf{T}^{*}_{j}=\textbf{t}) \end{aligned}$$

and shifting \(\textbf{t}\) to \(\textbf{0}\), for every \(\textbf{t}\in \mathcal {L}_{j}\), we have

$$\begin{aligned} &\mathbb {E}\bigg [\sup \limits _{\textbf{t}\in (\mathcal {E}_{j})_{T^{*}_{j}}} {Q}_{\mathcal {E}_{j},\mathcal {L}_{j},\textbf{t}}^{\alpha }\bigg ]=\sum _{\textbf{t}\in \mathcal {L}_{j}}\mathbb {E}\bigg [\frac{\max _{\textbf{z}\in (\mathcal {E}_{j})_{\textbf{t}}}\Theta _{\mathcal {E}_{j},\textbf{z}}^{\alpha }}{\sum _{\textbf{s}\in \mathcal {H}_{j}}\Theta _{\mathcal {E}_{j},\textbf{s}}^{\alpha }} \textbf{1}(\textbf{T}^{*}_{j}=\textbf{t})\bigg ]\\ &=\sum _{\textbf{t}\in \mathcal {L}_{j}}\mathbb {E}\bigg [\frac{\max _{\textbf{z}\in \mathcal {E}_{j}}\Theta _{\mathcal {E}_{j},\textbf{z}}^{\alpha }}{\sum _{\textbf{s}\in \mathcal {H}_{j}}\Theta _{\mathcal {E}_{j},\textbf{s}}^{\alpha }} \textbf{1}(\textbf{T}^{*}_{j}=\textbf{0})\sum _{\textbf{i}\in (\mathcal {E}_{j})_{-\textbf{t}}}\Theta _{\mathcal {E}_{j},\textbf{i}}^{\alpha }\bigg ]=\mathbb {E}\bigg [\max _{\textbf{z}\in \mathcal {E}_{j}}\Theta _{\mathcal {E}_{j},\textbf{z}}^{\alpha } \textbf{1}(\textbf{T}^{*}_{j}=\textbf{0})\bigg ]. \end{aligned}$$

\(\square\)

9.2 Proof of Proposition 26

For every \(j\ge 1\) we also denote \(R^{(j)}_{l,\Lambda _{n}}:=\big (\bigcup _{\textbf{t}\in \{\textbf{s}\in \Lambda _{n}:(\Lambda _{n})_{-\textbf{s}}\supset \mathcal {E}_j\}}((\Lambda _{n})_{-\textbf{t}}\cap \{\textbf{s}\in \mathbb {Z}^{k}:\textbf{s}\succ \textbf{0}\})\big )\setminus K_{l}\) and \(\hat{M}^{\Lambda ,X,(j)}_{l,n}:=\max _{\textbf{i}\in R^{(j)}_{l,\Lambda _{n}}}{X}_{\textbf{i}}\) so that Condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) is satisfied if for every \(j\in I^*\)

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\limsup _{n\rightarrow \infty }\mathbb {P}\Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j} {X}_{\textbf{t}}>a_{n}^\Lambda x\Big )=0. \end{aligned}$$

We start by showing the following lemma.

Lemma 33

Let \((X_{\textbf {t}})_{{\textbf {t}}\in \mathbb {Z}^{k}}\) be a stationary max-stable random field. Then \((X_{{\textbf {t}}})_{{\textbf {t}}\in \mathbb {Z}^{k}}\) is jointly regularly varying and the finite-dimensional distributions of its tail field \((Y_{{\textbf {t}}})_{{\textbf {t}}\in \mathbb {Z}^{k}}\) are given by

$$\begin{aligned}& \mathbb {P}(Y_{{\textbf {t}}_{1}} < y_{1},...,Y_{{\textbf {t}}_{n}} < y_{n}) \\&=\frac{1}{\mathbb {E}[V_{{\textbf {0}}}]}\left\{ \mathbb {E}\left[ \max \left( \max _{i=1,...,n}\frac{1}{y_{i}}V_{{\textbf {t}}_{i}},V_{{\textbf {0}}}\right) \right] -\mathbb {E}\left[ \max _{i=1,...,n}\frac{1}{y_{i}}V_{{\textbf {t}}_{i}}\right] \right\} \end{aligned}$$
(47)

for \({\textbf {t}}_{1},...,{\textbf {t}}_{n}\in \mathbb {Z}^{k}\) and \(y_{1},...,y_{n}\in (0,\infty )\).

Proof

The proof follows from computations similar to those in the proof of Proposition 6.1 in Wu and Samorodnitsky (2020). For any \(x_{1},...,x_{n}\in (0,\infty )\) we have (see Examples 1.5.4 and 4.4.3 in Mikosch and Wintenberger (2023))

$$\begin{aligned} \mathbb {P}(X_{{\textbf {t}}_{1}}\le x_{1},...,X_{{\textbf {t}}_{n}}\le x_{n})=\exp \left\{ -\mathbb {E}\left[ \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{x_{i}}\right] \right\} . \end{aligned}$$

Thus, for any \(x>0\)

$$\begin{aligned} &{ \mathbb {P}(x^{-1}X_{{\textbf {t}}_{1}}\le y_{1},...,x^{-1}X_{{\textbf {t}}_{n}}\le y_{n}\,\,|\,\,X_{{\textbf {0}}}>x)}\\&=\frac{\mathbb {P}(X_{{\textbf {t}}_{1}}\le xy_{1},...,X_{{\textbf {t}}_{n}}\le xy_{n})-\mathbb {P}(X_{{\textbf {t}}_{1}}\le xy_{1},...,X_{{\textbf {t}}_{n}}\le xy_{n},X_{{\textbf {0}}}\le x)}{\mathbb {P}(X_{{\textbf {0}}}>x)}\\&=\frac{\exp \left\{ -\frac{1}{x}\mathbb {E}\left[ \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{y_{i}}\right] \right\} -\exp \left\{ -\frac{1}{x}\mathbb {E}\left[ \max \left( \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{y_{i}},V_{{\textbf {0}}}\right) \right] \right\} }{1-e^{-\mathbb {E}[V_{{\textbf {0}}}]/x}}\\&\sim \frac{x}{\mathbb {E}[V_{{\textbf {0}}}]}\left( \exp \left\{ -\frac{1}{x}\mathbb {E}\left[ \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{y_{i}}\right] \right\} -\exp \left\{ -\frac{1}{x}\mathbb {E}\left[ \max \left( \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{y_{i}},V_{{\textbf {0}}}\right) \right] \right\} \right) \\&= \frac{x}{\mathbb {E}[V_{{\textbf {0}}}]}\bigg (\exp \left\{ -\frac{\mathbb {E}[V_{{\textbf {0}}}]}{x}\frac{1}{\mathbb {E}[V_{{\textbf {0}}}]}\mathbb {E}\left[ \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{y_{i}}\right] \right\} -\exp \left\{ -\frac{\mathbb {E}[V_{{\textbf {0}}}]}{x}\frac{1}{\mathbb {E}[V_{{\textbf {0}}}]}\mathbb {E}\left[ \max \left( \max _{i=1,...,n}\frac{V_{{\textbf {t}}_{i}}}{y_{i}},V_{{\textbf {0}}}\right) \right] \right\} \bigg ) \end{aligned}$$

which converges to (47) as \(x\rightarrow \infty\). \(\square\)
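As a sanity check (added here, not part of the original argument), specializing (47) to \(n=1\) and \({\textbf {t}}_{1}={\textbf {0}}\) gives \(\mathbb {P}(Y_{{\textbf {0}}}<y)=1-1/y\) for \(y\ge 1\), since \(\max (V_{{\textbf {0}}}/y,V_{{\textbf {0}}})=V_{{\textbf {0}}}\) when \(y\ge 1\): the tail field at the origin is standard Pareto, consistent with regular variation of index \(1\). A minimal numerical sketch, assuming a hypothetical discrete toy distribution for \(V_{{\textbf {0}}}\):

```python
# Sanity check of (47) in the one-point case t_1 = 0:
# P(Y_0 < y) = (E[max(V_0/y, V_0)] - E[V_0/y]) / E[V_0] = 1 - 1/y for y >= 1.
# The values/probabilities below are a hypothetical toy law for V_0.
values = [0.5, 1.0, 3.0]
probs = [0.2, 0.5, 0.3]

def expect(f):
    """Expectation of f(V_0) under the toy distribution."""
    return sum(p * f(v) for v, p in zip(values, probs))

def cdf_Y0(y):
    """Right-hand side of (47) with n = 1 and t_1 = 0."""
    ev0 = expect(lambda v: v)
    return (expect(lambda v: max(v / y, v)) - expect(lambda v: v / y)) / ev0

for y in [1.0, 2.0, 5.0]:
    assert abs(cdf_Y0(y) - (1.0 - 1.0 / y)) < 1e-12
```

The check is distribution-free: any nonnegative \(V_{{\textbf {0}}}\) with finite positive mean yields the same Pareto limit.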

We partially follow the proof of Proposition 6.2 in Wu and Samorodnitsky (2020). Fix any \(j\in I^*\) and observe that

$$\begin{aligned}& \mathbb {P}\Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\,\big |\max \limits _{\textbf{t}\in \mathcal {E}_j} X_{\textbf{t}}>a_{n}^\Lambda x\Big ) \\&=1- \frac{\mathbb {P}\Big (\max \Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} X_{\textbf{t}}\Big )>a_{n}^\Lambda x\Big )-\mathbb {P}\Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\Big )}{\mathbb {P}\Big (\max \limits _{\textbf{t}\in \mathcal {E}_j} X_{\textbf{t}}>a_{n}^\Lambda x\Big )} \end{aligned}$$

and that

$$\begin{aligned} \mathbb {P}\Big (\max \limits _{\textbf{t}\in \mathcal {E}_j} X_{\textbf{t}}>a_{n}^\Lambda x\Big )=\Big (1-e^{-\mathbb {E}\big [\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\big ]/a_{n}^\Lambda x}\Big )\sim \mathbb {E}\Big [\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\Big ]/a_{n}^\Lambda x\,,\qquad n\rightarrow \infty \,. \end{aligned}$$

Since

$$\begin{aligned} 0\le (a_{n}^\Lambda x)^{-1}\mathbb {E}\Big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\Big ]\le (a_{n}^\Lambda x)^{-1}|\Lambda _{r_n}|\,\mathbb {E}[V_{\textbf{0}}]\rightarrow 0, \end{aligned}$$

as \(n\rightarrow \infty\), we also have

$$\begin{aligned} \mathbb {P}\Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\Big )=1-e^{-\mathbb {E}\big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\big ]/a_{n}^\Lambda x}\sim \mathbb {E}\Big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\Big ]/a_{n}^\Lambda x\,,\qquad n\rightarrow \infty \,. \end{aligned}$$
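Both asymptotic equivalences above rest on the elementary expansion \(1-e^{-u}=u+O(u^{2})\) as \(u\downarrow 0\). A quick numerical illustration (a sketch added here; the constant \(c\) and the helper name are ours, with \(c\) playing the role of the relevant expectation in the exponent):

```python
import math

# Illustrates 1 - exp(-c/a) ~ c/a as a -> infinity, the expansion used
# for the exceedance probabilities above; c is a hypothetical constant
# standing in for E[max V] or E[M-hat].
def exceedance(c, a):
    """Exact value of 1 - exp(-c/a)."""
    return 1.0 - math.exp(-c / a)

c = 2.0
for a in [1e2, 1e4, 1e6]:
    exact = exceedance(c, a)
    approx = c / a
    # relative error is roughly c/(2a), so it vanishes as a grows
    assert abs(exact / approx - 1.0) < c / a
```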

Then Condition (AC\(^{\Lambda }_{\succeq ,I^*}\)) is satisfied if and only if

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\liminf _{n\rightarrow \infty }a_{n}^\Lambda x\Big [\mathbb {P}\Big (\max \Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} X_{\textbf{t}}\Big )>a_{n}^\Lambda x\Big )-\mathbb {P}\Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\Big )\Big ]=\mathbb {E}[\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}]. \end{aligned}$$
(48)

Since

$$\begin{aligned}&\mathbb {P}\Big (\max \Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} X_{\textbf{t}}\Big )>a_{n}^\Lambda x\Big )-\mathbb {P}\Big (\hat{M}^{\Lambda ,X,(j)}_{2l,r_{n}}>a_{n}^\Lambda x\Big )\\&=\exp \Big (-(a_{n}^\Lambda x)^{-1}\mathbb {E}\Big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\Big ]\Big )-\exp \Big (-(a_{n}^\Lambda x)^{-1}\mathbb {E}\Big [\max \Big (\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\Big )\Big ]\Big )\\&\sim (a_{n}^\Lambda x)^{-1}\Big (\mathbb {E}\Big [\max \Big (\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\Big )\Big ]-\mathbb {E}\Big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\Big ]\Big ) \end{aligned}$$

it follows that (48) holds if and only if

$$\begin{aligned} \lim \limits _{l\rightarrow \infty }\liminf _{n\rightarrow \infty }\mathbb {E}\Big [\max \Big (\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\Big )\Big ]-\mathbb {E}\Big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\Big ]=\mathbb {E}\Big [\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\Big ]. \end{aligned}$$
(49)

Now observe that

$$\begin{aligned} &\lim \limits _{l\rightarrow \infty }\liminf _{n\rightarrow \infty }\mathbb {E}\Big [\max \Big (\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}},\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\Big )\Big ]-\mathbb {E}\Big [\hat{M}^{\Lambda ,V,(j)}_{2l,r_{n}}\Big ]\\& =\lim \limits _{l\rightarrow \infty }\mathbb {E}\Big [\Big (\max _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_n}}\cup \mathcal {E}_j } V_{{\textbf {t}}} - \max \limits _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_n}}} V_{{\textbf {t}}}\Big )\textbf{1}\big (\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\ne 0\big )\Big ]\,. \end{aligned}$$

By assumption \(\max \limits _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_n}}} V_{{\textbf {t}}}=\max \limits _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_n}}\cap (\cup _{i\ge 1}(\mathcal {H}_{i})^{+})} V_{{\textbf {t}}}\); thus \(\max \limits _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_n}}} V_{{\textbf {t}}}\,\textbf{1}\big (\max \limits _{\textbf{t}\in \mathcal {E}_j} V_{\textbf{t}}\ne 0\big )\rightarrow 0\) a.s. as \(l\rightarrow \infty\) under (18), and (49) follows by dominated convergence.
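For clarity (a remark added here, not in the original), the insertion of the indicator in the last display uses the elementary identity, valid for nonnegative \(A\) and \(B\):

```latex
% Inserting the indicator costs nothing, since max(A,B) = A on {B = 0}:
\max(A,B) - A \;=\; \bigl(\max(A,B) - A\bigr)\,\mathbf{1}(B \neq 0),
```

applied with \(A=\max _{\textbf{t}\in R^{(j)}_{2l,\Lambda _{r_n}}}V_{{\textbf {t}}}\) and \(B=\max _{\textbf{t}\in \mathcal {E}_j}V_{{\textbf {t}}}\).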