## 1 Introduction

Although Gaussian extremes have been an active research area since at least the 1960s, until recently little was known about the exact asymptotics of high exceedance probabilities of Gaussian processes in the multivariate case. The deep contribution Dȩbicki et al. (2020) has paved the way towards problems of the following kind:

$$\mathbb {P} \left\{ \exists \, t \in [ 0, T ] :\varvec{X} ( t ) > u \varvec{b} \right\} \quad \text {as} \quad u \rightarrow \infty$$

for $$\varvec{b} \in \mathbb {R}^d {\setminus } ( -\infty , 0 ]^d$$ and $$\varvec{X}$$ a continuous Gaussian process. Here “>” denotes componentwise comparison. As it turns out, these problems are much more challenging than their univariate counterparts due to the lack of several techniques which are crucial in the univariate case. The reader can find a detailed account of this shortage in the introduction to the aforementioned paper. Among these lacking techniques, the authors name the Slepian inequality and mention that its extension, the Gordon inequality, is thought to be inapplicable if the components of $$\varvec{X}$$ are not independent (see Dȩbicki et al. (2015) for the i.i.d. case).

In this contribution, we aim to achieve two goals. First, we extend (Dȩbicki et al. 2020, Theorem 2.1) on stationary processes to a certain class of homogeneous Gaussian random fields defined on $$[ 0, T ]^n$$, see Theorem 1. Second, we apply this result to the study of locally homogeneous Gaussian random fields. The corresponding result is presented in Theorem 2. The crucial step of the second part involves constructing two homogeneous processes which stochastically dominate $$\varvec{X}$$ on short intervals from above and from below. This is done by showing that a certain matrix-valued function is positive definite and subsequently applying the Gordon inequality.

As an application of our findings, we present asymptotic formulas for the time-transformed operator fractional Ornstein-Uhlenbeck process $$\varvec{Y}$$ defined by the covariance matrix function

$$\mathbb {R}^2 \ni ( t, s ) \mapsto \exp \left( -\left| \varphi ( t ) - \varphi ( s ) \right| ^{2H} \right) ,$$

with $$H$$ a symmetric matrix with eigenvalues from $$( 0, 1 ]$$ and $$\varphi$$ a strictly monotone continuously differentiable function. By Proposition 1,

$$\mathbb {P} \left\{ \exists \, t \in [ 0, T ] :\varvec{Y} ( t )> u \varvec{b} \right\} \sim c \, u^{1/h} \mathbb {P} \left\{ \varvec{Y} ( 0 ) > u \varvec{b} \right\} ,$$

where $$h$$ is the smallest eigenvalue of $$H$$ and $$c$$ is given in the form of an integral of Pickands-type constants over $$[ 0, T ]$$. This result extends (Dȩbicki et al. 2020, Proposition 3.1). Another application concerns a class of continuous Gaussian processes associated to the following matrix-valued function:

$$\mathbb {R}^2 \ni ( t, s ) \mapsto \exp \left( -\left| t - s \right| ^{\alpha } \Big [ B^{ + } + B^{ - } {\text {sign}} ( t - s ) \Big ] \right) ,$$

where $$B^{ \pm } = ( B \pm B^\top ) / 2$$ are the symmetric and antisymmetric parts of a real $$d \times d$$ matrix $$B$$ and $$\alpha \in ( 0, 2 ]$$. In Ievlev and Novikov (2023) we found necessary and sufficient conditions on the pair $$( \alpha , B )$$ under which this function is positive definite (reproduced here as Theorem 3) and thus generates a Gaussian process. Here we present an asymptotic result on the time-transformed version of this process, see Proposition 2.

The notion of a locally stationary process was introduced by Berman (1974), and its extremes were extensively studied afterwards in papers by Hüsler (1990), Piterbarg (1996), Chan and Lai (2006) and many others. See also Piterbarg and Rodionov (2020), Qiao (2021) and Tan and Zheng (2020) for more recent contributions. Its multivariate counterpart, however, has not been considered so far due to technical issues. The technique of Dȩbicki et al. (2020), based on a uniform version of the local Pickands lemma, may in principle be applied to this class of processes, but it would require much stronger assumptions than those we impose in this contribution. Our result, presented in Theorem 2, should appear natural (if not obvious) to the specialist, but it still requires a rigorous proof, which involves imposing the right assumptions on the field $$\varvec{X}$$.

The applicability of the Gordon inequality in this context significantly simplifies the study of classical multivariate Gaussian extremes. In particular, the technical issue of uniformity in the single and double sums may be resolved by passing to a stationary dominating process. Therefore, besides the results presented here, we establish a simpler methodology than that of Dȩbicki et al. (2020) for dealing with non-stationary Gaussian random fields.

We want to point out that one possible direction in which our results can be extended is the family of $$\alpha ( t )$$-locally stationary Gaussian random fields, see Hashorva and Ji (2016).

### Brief organization of the paper

The main results are presented in Section 2, with proofs relegated to Section 5. Applications are presented in Section 3. Section 4 contains auxiliary results and technical lemmas. The Appendix contains several known results taken from Dȩbicki et al. (2020), reproduced here in an adapted form for the reader’s convenience.

## 2 Main results

Before proceeding to the theorems, let us introduce some relevant notation.

### Vectors

Throughout the paper, points of $$\mathbb {R}^d$$ (values of multivariate processes) are written in bold letters, while points of $$[0, T]^n \subset \mathbb {R}^n$$ (points of their domain) are written in regular font. This does not lead to any confusion, since the meaning can always be understood from the context, but allows us to avoid visual clutter. All operations on vectors in both spaces, unless specified otherwise, are performed componentwise. For example, if $$t$$ and $$s$$ belong to $$\mathbb {R}^n$$, then $$t s$$ denotes the vector $$( t_i s_i )_{i = 1, \ldots , n}$$. Similarly, $$t / s$$, $$e^t$$, $$\lfloor {t} \rfloor$$ and so on denote vectors with components $$t_i / s_i$$, $$e^{t_i}$$ and $$\lfloor {t_i}\rfloor$$, respectively. We write $$t \ge s$$ if $$t_i \ge s_i$$ for all coordinates. By abuse of notation, we write $$1 = ( 1, \ldots , 1 ) \in \mathbb {R}^n$$ and $$0 = ( 0, \ldots , 0 ) \in \mathbb {R}^n$$. If $$s > t$$, then $$[t, s]$$ denotes the box $$\{ u :u_i \in [t_i, s_i] \}$$.
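In code, the componentwise conventions above correspond to elementwise array operations. A minimal NumPy sketch (our own illustration, not part of the paper's formalism):

```python
import numpy as np

# Componentwise operations on points of R^n, mirroring the paper's notation
t = np.array([1.0, 4.0, 9.0])
s = np.array([2.0, 2.0, 2.0])

prod = t * s               # t s     = (t_i s_i)_{i=1..n}
quot = t / s               # t / s   = (t_i / s_i)
expt = np.exp(t)           # e^t     = (e^{t_i})
flr = np.floor(t / s)      # floor(t / s), componentwise
ge = bool(np.all(t >= s))  # "t >= s" holds iff it holds in every coordinate
```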

### Matrices

If $$A = ( A_{ij} )_{i, j = 1, \ldots , d}$$ is a $$d \times d$$ matrix and $$I, \, J \subset \{ 1, \ldots , d \}$$ are two index sets, we write $$A_{IJ}$$ for the submatrix $$( A_{ij} )_{i \in I, \, j \in J}$$. If $$I = J$$, we occasionally write $$A_I$$ instead of $$A_{II}$$. $$\left\| A \right\|$$ denotes any fixed norm in the space of $$d \times d$$ matrices. Our formulas do not depend on the choice of the norm. For $$\varvec{w} \in \mathbb {R}^d$$, $${{\,\textrm{diag}\,}}( \varvec{w} )$$ stands for the diagonal matrix with entries $$w_1, \, w_2, \, \ldots , \, w_d$$ on the main diagonal. The notation $$A \unrhd 0$$ means that $$A$$ is positive definite and $$A \vartriangleright 0$$ means that $$A$$ is strictly positive definite. If $$A$$ is a real matrix, denote its symmetric and anti-symmetric parts by $$A^{ \pm }:= ( A \pm A^\top ) / 2$$.

Let $$\Sigma$$ be an invertible $$d \times d$$ real matrix with inverse $$\Sigma ^{-1}$$. If $$\varvec{b} \in \mathbb {R}^d {\setminus } ( -\infty , 0 ]^d$$, then by Lemma 7 the quadratic programming problem

$$\Pi _{\Sigma } ( \varvec{b} ) :\quad \text {minimize} \quad \varvec{x}^\top \Sigma ^{-1} \varvec{x} \quad \text {under the linear constraint} \quad \varvec{x} \ge \varvec{b}$$

has a unique solution $$\widetilde{\varvec{b}} \ge \varvec{b}$$ and there exists a unique non-empty index set $$I \subset \{ 1, \ldots , d \}$$ such that

$$\widetilde{\varvec{b}}_I = \varvec{b}_I, \qquad \widetilde{\varvec{b}}_J = \Sigma _{JI} ( \Sigma _{II} )^{-1} \varvec{b}_I \ge \varvec{b}_J, \qquad \varvec{w}_I = ( \Sigma _{II} )^{-1} \varvec{b}_I > \varvec{0}_I, \qquad \varvec{w}_J = \varvec{0}_J,$$

where $$\varvec{w}:= \Sigma ^{-1} \, \widetilde{\varvec{b}}$$ and $$J = \{ 1, \ldots , d \} {\setminus } I$$.
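The quadratic programming problem $$\Pi _{\Sigma } ( \varvec{b} )$$ can be solved numerically with any constrained optimizer. In the sketch below the solver choice, starting point and tolerances are our own and not part of the paper; it recovers $$\widetilde{\varvec{b}}$$, $$\varvec{w}$$ and the index set $$I$$:

```python
import numpy as np
from scipy.optimize import minimize

def solve_quadratic_program(Sigma, b, tol=1e-8):
    """Solve Pi_Sigma(b): minimize x^T Sigma^{-1} x subject to x >= b.

    Returns the solution b_tilde, the vector w = Sigma^{-1} b_tilde and the
    index set I = {i : w_i > 0} (0-based), as described in Lemma 7."""
    Sigma_inv = np.linalg.inv(Sigma)
    res = minimize(
        lambda x: x @ Sigma_inv @ x,
        x0=np.maximum(b, 0.0) + 1.0,                  # strictly feasible start
        jac=lambda x: 2.0 * Sigma_inv @ x,
        constraints=[{"type": "ineq", "fun": lambda x: x - b}],
        method="SLSQP",
    )
    b_tilde = res.x
    w = Sigma_inv @ b_tilde
    I = [i for i in range(len(b)) if w[i] > tol]
    return b_tilde, w, I

# Example with Sigma = I and b = (1, -1): only the first constraint is active
b_tilde, w, I = solve_quadratic_program(np.eye(2), np.array([1.0, -1.0]))
```

In this example $$\widetilde{\varvec{b}} = (1, 0)$$, in agreement with $$\widetilde{b}_j = \max \{ b_j, 0 \}$$ for $$\Sigma = I$$ (cf. Proposition 1).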

### Other notation

We use lower case symbols $$c_1, \, c_2, \, \ldots$$ to denote generic constants appearing in the proofs, whose exact values are not important and may change from line to line. The labeling of the constants starts anew in every proof. Let $$f, \, g :[ 0, T ]^n \rightarrow M$$, where $$M = \mathbb {R}^{d \times d}, \, \mathbb {R}^d$$ or $$\mathbb {R}$$, be two matrix-valued, vector-valued or real-valued functions, and let $$h :[ 0, T ]^n \rightarrow \mathbb {R}$$ be a real-valued function. We write “$$f = g + o ( h )$$ as $$t \rightarrow t_0$$” if for all $$\varepsilon > 0$$ there exists $$\delta > 0$$ such that $$| t - t_0 | < \delta$$ implies $$\Vert f ( t ) - g ( t ) \Vert \le \varepsilon | h ( t ) |$$. The next two subsections present our results on homogeneous and locally homogeneous fields.

### 2.1 Homogeneous case

Let $$\varvec{X} ( t ), \, t \in [ 0, T ]^n$$ be a centered, homogeneous and continuous Gaussian random field. Denote its covariance and variance matrices by

$$R ( t, s ) := \mathbb {E} \left\{ \varvec{X} ( t ) \, \varvec{X}^\top ( s ) \right\} \quad \text {and} \quad \Sigma := R ( 0, 0 ).$$

Homogeneity means that for all $$t$$ and $$s$$ in $$[ 0, T ]^n$$

$$\mathbb {E} \left\{ \varvec{X} ( t ) \, \varvec{X}^\top ( s ) \right\} = \mathbb {E} \left\{ \varvec{X} ( t - s ) \, \varvec{X}^\top ( 0 ) \right\} = R ( t - s, 0 ),$$

therefore we set in the following $$R ( t ):= R ( t, 0 )$$. It follows that $$R ( -t ) = R^\top ( t )$$. The matrix $$\Sigma - R ( t )$$ is positive definite, but not necessarily symmetric. Let $$\varvec{b} \in \mathbb {R}^d {\setminus } ( -\infty , 0 ]^d$$ and denote by $$\widetilde{\varvec{b}}$$ and $$I$$ the unique solution of $$\Pi _{\Sigma } ( \varvec{b} )$$ and the corresponding index set; see Lemma 7 for details. Set $$\varvec{w}:= \Sigma ^{-1} \, \widetilde{\varvec{b}}$$.

In this section we impose the following assumptions:

A1:

$$\Sigma _{II} - R_{II} ( t )$$ is strictly positive definite for every $$t \in [ 0, T ]^n \setminus \{ 0 \}$$

A2:

There exist a collection $$\mathbb {B}:= ( B_l )_{l = 1, \ldots , n}$$ of real $$d \times d$$ matrices and a collection of numbers $$\varvec{\alpha }:= ( \alpha _l )_{l = 1, \ldots , n} \in ( 0, 2 ]^n$$ such that

$$\Sigma - R ( t )=\sum _{l = 1}^n B_l \, | t_l |^{\alpha _l} +o \left( \sum _{l = 1}^n | t_l |^{\alpha _l} \right) \quad \text {as} \quad t \downarrow 0,$$
(A2.1)
$$\varvec{w}^\top B_l \, \varvec{w} > 0 \quad \text {for all} \quad l = 1, \ldots , n.$$
(A2.2)

### Remark 1

It follows from (A2.1) that

$$\Sigma - R ( t ) \sim \sum _{l = 1}^n \Big [ B_l \, | t_l |^{\alpha _l} \, \mathbb {1}_{t_l \ge 0} +B_l^\top \, | t_l |^{\alpha _l} \, \mathbb {1}_{t_l < 0} \Big ]$$

as $$t \rightarrow 0$$ and $$B_l$$’s satisfy

$$\widetilde{B}_l := B_l^{ + } \, \sin \left( \frac{\pi \alpha _l}{2} \right) -i B_l^{ - } \, \cos \left( \frac{\pi \alpha _l}{2} \right) \unrhd 0, \quad \text {where} \quad B^{\pm } := \frac{B \pm B^\top }{2}.$$
(1)

It follows that $$B_l^{ + } \unrhd 0$$.

### Theorem 1

If $$\varvec{X}$$ is a centered homogenous and continuous Gaussian random field satisfying Assumptions A1 and A2, then

$$\mathbb {P} \left\{ \exists \, t \in [ 0, T ]^n :\varvec{X} ( t )> u \varvec{b} \right\} \sim T^n \, \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} \, \prod _{l = 1}^n u^{2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} ,$$

where the constant $$\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}}$$ is given by

$$\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} := \lim _{\Lambda \rightarrow \infty } \frac{1}{\Lambda ^n} \int _{\mathbb {R}^d} e^{\varvec{1}^\top \varvec{x}} \, \mathbb {P} \left\{ \exists \, t \in [ 0, \Lambda ]^n :\sum _{l = 1}^n {{\,\textrm{diag}\,}}( \varvec{w} ) \Big [ \varvec{Y}_l ( t_l ) - S_{\alpha _l, B_l} ( t_l ) \, \varvec{w} \Big ] > \varvec{x} \right\} \mathop {d \varvec{x}} \in ( 0, \infty ).$$
(2)

Here $$\varvec{Y}_l$$ is a continuous Gaussian process associated to the covariance function

$$R_{\alpha _l, B_l} ( t_l, s_l ) := S_{\alpha _l, B_l} ( t_l ) + S_{\alpha _l, B_l} ( -s_l ) - S_{\alpha _l, B_l} ( t_l - s_l ), \qquad S_{\alpha _l, B_l} ( t_l ) := | t_l |^{\alpha _l} \Big [ B_l \, \mathbb {1}_{t_l \ge 0} + B_l^\top \, \mathbb {1}_{t_l < 0} \Big ].$$
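For intuition: in the scalar case $$d = 1$$, $$B = 1$$, the covariance $$R_{\alpha , B}$$ reduces to $$| t |^{\alpha } + | s |^{\alpha } - | t - s |^{\alpha }$$, i.e. twice the covariance of fractional Brownian motion with Hurst index $$\alpha / 2$$. A short numerical sketch of this reduction (our own illustration):

```python
import numpy as np

def S(t, alpha, B):
    """S_{alpha,B}(t) = |t|^alpha [ B 1_{t>=0} + B^T 1_{t<0} ]."""
    B = np.atleast_2d(B)
    return abs(t) ** alpha * (B if t >= 0 else B.T)

def R(t, s, alpha, B):
    """R_{alpha,B}(t,s) = S(t) + S(-s) - S(t-s)."""
    return S(t, alpha, B) + S(-s, alpha, B) - S(t - s, alpha, B)

# Scalar case: d = 1, B = 1, alpha = 2H recovers 2 * Cov of fBm with Hurst H
alpha, t, s = 1.0, 1.5, 0.7
H = alpha / 2
fbm_cov = 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))
scalar_R = R(t, s, alpha, np.array([[1.0]]))[0, 0]   # equals 2 * fbm_cov
```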

### 2.2 Locally homogeneous case

In this section $$\varvec{X} ( t ), \, t \in [0, T]^n$$ is a centered continuous Gaussian random field with covariance matrix

$$R ( t, s ) := \mathbb {E} \left\{ \varvec{X} ( t ) \, \varvec{X}^\top ( s ) \right\}$$

and constant variance matrix, that is, $$R ( t, t ) = R ( 0, 0 ) =: \Sigma$$ for all $$t \in [ 0, T ]^n$$. We impose the following assumptions:

B1:

$$\Sigma _{II} - R_{II} ( t, s )$$ is strictly positive definite for all $$t, \, s \in [ 0, T ]^n$$ with $$t \ne s$$

B2:

There exist a collection $$\mathbb {B} ( t ):= ( B_l ( t ) )_{l = 1, \ldots , n}$$ of continuous real $$d \times d$$ matrix-valued functions and a collection of numbers $$\varvec{\alpha }:= ( \alpha _l )_{l = 1, \ldots , n} \in ( 0, 2 ]^n$$ such that

$$\Sigma - R ( t + s, t ) = \sum _{l = 1}^n \Big [ B_l ( t ) \, | s_l |^{\alpha _l} \mathbb {1}_{s_l \ge 0} +B_l^\top ( t ) \, | s_l |^{\alpha _l} \, \mathbb {1}_{s_l < 0} \Big ] +o \left( \sum _{l = 1}^n | s_l |^{\alpha _l} \right) \quad \text {as} \quad s \rightarrow 0,$$
(B2.1)

where small-o is uniform in $$t \in [ 0, T ]^n$$ and

$$\widetilde{B_l} ( t ) := B_l^{ + } ( t ) \sin \left( \frac{\pi \alpha _l}{2} \right) -i B_l^{ - } ( t ) \cos \left( \frac{\pi \alpha _l}{2} \right) \vartriangleright 0 \quad \text {for all} \quad t \in [ 0, T ]^n.$$
(B2.2)

### Remark 2

From (B2.2) it follows that $$\varvec{w}^\top \, B_l ( t ) \, \varvec{w} > 0$$ for all $$t \in [ 0, T ]^n$$.

### Theorem 2

If $$\varvec{X}$$ is a centered and continuous Gaussian random field satisfying Assumptions B1 and B2, then

$$\mathbb {P} \left\{ \exists \, t \in [ 0, T ]^n :\varvec{X} ( t )> u \varvec{b} \right\} \sim \int _{[ 0, T ]^n} \mathcal {H}_{\varvec{\alpha }, \mathbb {B} ( t ), \varvec{w}} \mathop {d t} \, \prod _{l = 1}^n u^{2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} ,$$

where the constant $$\mathcal {H}_{\varvec{\alpha }, \mathbb {B} ( t ), \varvec{w}}$$ is given by (2).

## 3 Examples

### 3.1 Time-transformed operator fractional Ornstein-Uhlenbeck process

Let $$H$$ be a symmetric matrix with all eigenvalues $$h_1, \ldots , h_d$$ belonging to $$( 0, 1 ]$$ and consider a stationary a.s. continuous $$\mathbb {R}^d$$-valued Gaussian process $$\varvec{X} ( t ), \, t \ge 0$$ with covariance matrix function

$$R ( t, s ) = \exp \left( -\left| t - s \right| ^{2H} \right) ,$$
(3)

where $$t^H = \exp ( H \ln t )$$ for $$t > 0$$. This process is known in the literature as the operator fractional Ornstein-Uhlenbeck process. In this section we consider its time-transformed version. Specifically, let $$\varphi$$ be a continuously differentiable strictly monotone function and define $$\varvec{Y} ( t ):= \varvec{X} ( \varphi ( t ) )$$. Let us show that this process is locally homogeneous in the sense defined above. Since $$H$$ is symmetric, there exists an orthogonal matrix $$Q$$ such that $$H = Q \, {{\,\textrm{diag}\,}}( h_1, \ldots , h_d ) \, Q^\top$$. Hence,

$$R ( t + s, t ) = I -Q \widetilde{I} Q^\top \left| \varphi ( t + s ) - \varphi ( t ) \right| ^{2 h} +o \left( \left| \varphi ( t + s ) - \varphi ( t ) \right| ^{2 h} \right) \quad \text {as} \quad s \rightarrow 0,$$

with $$h:= \min _{i = 1, \ldots , d} h_i$$ and $$[ \widetilde{I} \; ]_{ij}:= \mathbb {1}_{i = j \, \text {and} \, h = h_i}$$. Since $$\varphi$$ is differentiable, we have

$$R ( t + s, t ) = I - Q \widetilde{I} Q^\top | \varphi ' ( t ) |^{2h} |s|^{2h} + o\left( |s|^{2h} \right) \quad \text {as} \quad s \rightarrow 0.$$

Then (B2) holds with $$B ( t ):= Q \widetilde{I} Q^\top \left| \varphi ' ( t ) \right| ^{2h}$$ and $$\Sigma = I$$. Note that $$| \varphi ' ( t ) | > 0$$ since $$\varphi$$ is strictly monotone. By Theorem 2 we have the following result:
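The expansion above can be checked numerically. In the sketch below the matrix $$H$$, the angle parametrizing $$Q$$ and the time change $$\varphi = \exp$$ are our own assumed example; the residual of the first-order approximation is small compared to the leading term:

```python
import numpy as np
from scipy.linalg import expm

h1, h2, theta = 0.3, 0.7, 0.4            # eigenvalues of H, mixing angle (assumed)
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H = Q @ np.diag([h1, h2]) @ Q.T
I_tilde = Q @ np.diag([1.0, 0.0]) @ Q.T  # projector onto the h = h1 eigenspace

phi, dphi = np.exp, np.exp               # strictly increasing, C^1 time change

def R(t, s):
    delta = abs(phi(t) - phi(s))
    return expm(-expm(2.0 * np.log(delta) * H))   # exp(-|phi(t)-phi(s)|^{2H})

t, s = 0.5, 1e-4
lead = I_tilde * (dphi(t) * s) ** (2 * h1)  # Q Itilde Q^T |phi'(t)|^{2h} |s|^{2h}
err = np.linalg.norm(np.eye(2) - R(t + s, t) - lead)
# err is o(|s|^{2h}): much smaller than the leading term itself
```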

### Proposition 1

Let $$\varvec{Y} ( t ) = \varvec{X} ( \varphi ( t ) ), \, t \in [0, T]$$, where $$\varphi$$ is a continuously differentiable strictly monotone function and $$\varvec{X} ( t ), \, t \in \mathbb {R}$$ is an operator fractional Ornstein-Uhlenbeck process associated to the covariance (3) with a symmetric matrix $$H$$ whose eigenvalues belong to $$( 0, 1 ]$$. Let $$\widetilde{b}_j = \max \{ b_j, 0 \}$$ for $$j = 1, \ldots , d$$. If $$\widetilde{\varvec{b}}^\top Q \widetilde{I} Q^\top \widetilde{\varvec{b}} > 0$$, then

$$\mathbb {P} \left\{ \exists \, t \in [ 0, T ] :\varvec{Y} ( t )> u \varvec{b} \right\} \sim u^{1/h} \int _0^T \mathcal {H}_{2h, Q \widetilde{I} Q^\top \left| \varphi ' ( t ) \right| ^{2h}, \varvec{w}} \mathop {dt} \ \mathbb {P} \left\{ \varvec{X} ( \varphi ( 0 ) ) > u \varvec{b} \right\} .$$

### 3.2 A Gaussian process with $$\alpha$$-homogenous log-covariance

In the upcoming paper Ievlev and Novikov (2023), we show the following result:

### Theorem 3

Let $$B$$ be a real $$d \times d$$ matrix. If a matrix-valued function $$R$$ defined by

$$R ( t, s ) = \exp \left( -|t - s|^{\alpha } \Big [ B^{ + } + B^{ - } {\text {sign}} ( t - s ) \Big ] \right) , \qquad t, \, s \in \mathbb {R},$$
(4)

is positive-definite, then condition (1) is satisfied. Conversely, suppose that condition (1) is satisfied. Then:

• If $$\alpha \in ( 0, 1 )$$, then $$R$$ is positive-definite if and only if $$B$$ satisfies

$$B^{1/\alpha } + B^{1/\alpha , \top } \unrhd 0.$$
(5)
• If $$\alpha \in [ 1, 2 ]$$, then $$R$$ is positive-definite.
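Theorem 3 can be probed numerically by building the block Gram matrix of $$R$$ on a finite grid and inspecting its spectrum. In the sketch below the matrix $$B$$, the grid and the tolerances are our own assumed example (for $$\alpha = 1/2$$, $$B^{1/\alpha } = B^2$$):

```python
import numpy as np
from scipy.linalg import expm

alpha = 0.5
B = np.array([[1.0, 0.2],
              [-0.2, 1.0]])                        # assumed example
Bp, Bm = (B + B.T) / 2, (B - B.T) / 2

# Condition (5): B^{1/alpha} + (B^{1/alpha})^T should be positive definite
M = np.linalg.matrix_power(B, 2)                   # B^{1/alpha} for alpha = 1/2
cond5 = np.linalg.eigvalsh(M + M.T).min()          # here 1.92 > 0

def R(t, s):
    return expm(-abs(t - s) ** alpha * (Bp + Bm * np.sign(t - s)))

grid = np.linspace(0.0, 3.0, 8)
K = np.block([[R(ti, tj) for tj in grid] for ti in grid])   # 16 x 16 Gram matrix
min_eig = np.linalg.eigvalsh(K).min()   # >= 0 up to rounding, as Theorem 3 predicts
```

Note that each off-diagonal block satisfies $$R ( t_j, t_i ) = R ( t_i, t_j )^\top$$, so the Gram matrix is symmetric.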

Using the above result, let $$\varvec{X} ( t ), \, t \in \mathbb {R}$$ be a stationary continuous Gaussian process associated to this covariance, and let $$\varphi$$ be a strictly increasing continuously differentiable function. Define $$\varvec{Y} ( t ):= \varvec{X} ( \varphi ( t ) )$$. The covariance of $$\varvec{Y}$$ satisfies

$$R_{\varvec{Y}} ( t + s, t ) = I -\Big [ B^{ + } + B^{ - } {\text {sign}} ( s ) \Big ] \left| \varphi ' ( t ) \right| ^{\alpha } |s|^{\alpha } +o \left( | s |^{\alpha } \right) \quad \text {as} \quad s \rightarrow 0,$$

where we used the fact that $${\text {sign}} ( \varphi ( t + s ) - \varphi ( t ) ) = {\text {sign}} ( s )$$ since $$\varphi$$ is increasing. Hence, Assumption (B2.1) is satisfied with $$B ( t ) = B \left| \varphi ' ( t ) \right| ^{\alpha }$$. The validity of (B2.2) follows from $$| \varphi ' ( t ) | > 0$$ and our assumption on $$B$$. By Theorem 2, we have the following result:

### Proposition 2

Let $$\varvec{Y} ( t ) = \varvec{X} ( \varphi ( t ) ), \, t \in [0, T]$$, where $$\varphi$$ is a strictly increasing continuously differentiable function and $$\varvec{X}$$ is a process associated to the covariance (4), where $$B$$ and $$\alpha$$ are such that this function is positive definite. Then

$$\mathbb {P} \left\{ \exists \, t \in [ 0, T ] :\varvec{Y} ( t )> u \varvec{b} \right\} \sim u^{2/\alpha } \int _0^T \mathcal {H}_{\alpha , B | \varphi ' ( t ) |^{\alpha }, \varvec{w}} \mathop {dt} \ \mathbb {P} \left\{ \varvec{X} ( \varphi ( 0 ) ) > u \varvec{b} \right\}$$

as $$u \rightarrow \infty$$.

## 4 Auxiliary results

### Lemma 1

Let $$B$$ be a real $$d \times d$$ matrix satisfying

$$\widetilde{B} = B_{ + } \sin \left( \frac{\pi \alpha }{2} \right) -i B_{ - } \cos \left( \frac{\pi \alpha }{2} \right) \vartriangleright 0.$$
(6)

Then there exists a collection of complex numbers $$\{ \lambda _k \}_{k = 1, \ldots , d}$$ satisfying

$${{\,\textrm{Re}\,}}{\lambda _k} = 1, \qquad \left| \, {{\,\textrm{Im}\,}}{\lambda _k} \, \right| < \left| \, \tan \left( \frac{\pi \alpha }{2} \right) \right|$$
(7)

and a collection of positive definite Hermitian matrices $$\{ V_k \}_{k = 1, \ldots , d}$$ of rank one such that

$$B = \sum _{k = 1}^d \lambda _k V_k.$$
(8)

### Proof

Note that $$B$$ can be represented as follows:

$$B = B_{ + } + i B_{ - }', \qquad B_{ - }' := -i B_{ - }, \qquad B_{\pm } := \frac{B \pm B^\top }{2}.$$

Here $$B_{ + }$$ is symmetric and strictly positive definite by (6), and $$B_{ - }'$$ is Hermitian. Hence, there exists an invertible real matrix $$A$$ such that $$B_{ + } = A A^\top$$. Note that for each unitary matrix $$Q$$ we have

$$Q A^{ -1 } B_{ + } A^{-\top } Q^{ * } = Q Q^{ * } = I.$$

Since $$B_{ - }'$$ is Hermitian, so is $$A^{-1} B_{ - }' A^{-\top }$$, and therefore there exist a unitary matrix $$Q$$ and a real diagonal matrix $$D$$ such that

$$A^{-1} B_{ - }' A^{-\top } = Q^{ * } D Q.$$

Denote $$V:= A Q^{ * }$$. Then we have the following representations of $$B_{ + }$$

$$V V^{ * } = A Q^{ * } Q A^\top = A A^\top = B_{ + }$$
(9)

and $$B_{ - }'$$

$$V D V^{ * } = A Q^{ * } D Q A^\top = A A^{-1} B'_{ - } A^{-\top } A^\top = B'_{ - }.$$
(10)

Hence, for $$B$$ we have

$$B = B_{ + } + i B'_{ - } = V V^{ * } + i V D V^{ * } = V \Big [ I + i D \Big ] V^{ * }.$$

Set next

$$\lambda _k := 1 + i D_{kk}, \qquad V_k := V \, \mathcal {D}_k \, V^{ * },$$
(11)

where $$[ \mathcal {D}_k ]_{ml} = \delta _{km} \delta _{kl}$$ is the diagonal matrix with $$1$$ at the $$k$$-th place. Clearly, the $$V_k$$’s are Hermitian, positive definite, of rank one, and (8) is satisfied. It remains to show that the inequality (7) is also satisfied. To this end, use (9) and (10) to rewrite $$\widetilde{B}$$ as

$$\widetilde{B} = V \left[ I \sin \left( \frac{\pi \alpha }{2} \right) + D \cos \left( \frac{\pi \alpha }{2} \right) \right] V^{ * } \vartriangleright 0.$$

Therefore, we have

$$I \sin \left( \frac{\pi \alpha }{2} \right) + D \cos \left( \frac{\pi \alpha }{2} \right) \vartriangleright 0.$$

Since $$A^{-1} B_{ - }' A^{-\top } = -i \, A^{-1} B_{ - } A^{-\top }$$ with $$A^{-1} B_{ - } A^{-\top }$$ real antisymmetric, the diagonal entries of $$D$$ come in $$\pm$$ pairs. Hence both $$\sin ( \pi \alpha / 2 ) \pm D_{kk} \cos ( \pi \alpha / 2 ) > 0$$, so that $$\left| D_{kk} \right| < \left| \tan ( \pi \alpha / 2 ) \right|$$, which implies (7).$$\square$$
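The construction in the proof can be reproduced numerically. In the following sketch (the example matrix $$B$$ and the value of $$\alpha$$ are our own assumptions), we recover the decomposition (8) and check the bound (7):

```python
import numpy as np

def lemma1_decomposition(B):
    """Decompose a real B with B_+ > 0 as B = sum_k lambda_k V_k, where
    Re(lambda_k) = 1 and V_k is Hermitian psd of rank one (proof of Lemma 1)."""
    Bp = (B + B.T) / 2
    Bm = (B - B.T) / 2
    A = np.linalg.cholesky(Bp)                 # B_+ = A A^T
    Ainv = np.linalg.inv(A)
    M = Ainv @ (-1j * Bm) @ Ainv.T             # Hermitian matrix A^{-1} B_-' A^{-T}
    D, U = np.linalg.eigh(M)                   # M = U diag(D) U^*
    V = A @ U                                  # V V^* = B_+,  V diag(D) V^* = -i B_-
    lam = 1.0 + 1j * D                         # lambda_k = 1 + i D_kk
    Vks = [np.outer(V[:, k], V[:, k].conj()) for k in range(len(D))]
    return lam, Vks

alpha = 0.5
B = np.array([[1.0, 0.2],
              [-0.2, 1.0]])                    # assumed example satisfying (6)
lam, Vks = lemma1_decomposition(B)
recomposed = sum(l * Vk for l, Vk in zip(lam, Vks))   # should equal B, cf. (8)
```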

### Lemma 2

Under the conditions of Lemma 1, the functions given by

$$\mathcal {E}_{\alpha , B, k} ( t ) := \exp \left( -d \lambda _k V_k | t |^{\alpha } \right) \, \mathbb {1}_{t \ge 0} +\exp \left( -d \, \overline{\lambda }_k \, V_k | t |^{\alpha } \right) \, \mathbb {1}_{t < 0}$$

with $$\lambda _k$$, $$V_k$$ and $$\alpha$$ from Lemma 1 are all positive definite complex matrix-valued functions. Let $$\Sigma = A A^\top$$ be a strictly positive definite matrix and define

$$\mathcal {E}_{ \alpha , B } ( t ) := \frac{1}{2 d} A \sum _{k = 1}^d \Big [ \mathcal {E}_{\alpha , A^{-1} B A^{-\top }, k} ( t ) +\overline{\mathcal {E}_{\alpha , A^{-1} B A^{-\top }, k} ( t )} \Big ] A^\top .$$

Then $$\mathcal {E}_{\alpha , B} ( t )$$ is positive definite real matrix-valued function satisfying

$$\mathcal {E}_{ \alpha , B } ( t ) = \Sigma -B | t |^{\alpha } \mathbb {1}_{t \ge 0} -B^\top | t |^{\alpha } \mathbb {1}_{t < 0} +o \left( | t |^{\alpha } \right) \quad \text {as} \quad t \rightarrow 0.$$

### Proof

Since $$V_k = V \, \mathcal {D}_k \, V^{ * }$$ by (11), there exist $$\mu _k > 0$$ and a unitary matrix $$U$$ such that $$V_k = \mu _k \, U^{ * } \, \mathcal {D}_k \, U$$. Hence,

$$\exp \left( -d \big [ 1 + i {{\,\textrm{Im}\,}}{ \lambda _k } {\text {sign}} ( t ) \big ] V_k | t |^{\alpha } \right) = U^{ * } \, \exp \left( -d \mu _k \big [ 1 + i {{\,\textrm{Im}\,}}{ \lambda _k } {\text {sign}} ( t ) \big ] \mathcal {D}_k \, | t |^{\alpha } \right) U.$$

Positive definiteness of this function is therefore equivalent to that of a scalar-valued function

$$\exp \left( -d \mu _k \Big [ 1 + i {{\,\textrm{Im}\,}}{ \lambda _k } \, {\text {sign}} ( t ) \Big ] | t |^{\alpha } \right) ,$$

which follows from (7). The second claim follows from (8) and the fact that

$$\widetilde{B} \vartriangleright 0 \implies \widetilde{A^{-1} B A^{-\top }} = A^{-1} \widetilde{B} A^{-\top } \vartriangleright 0$$

by a direct computation.$$\square$$

### 4.2 Double sum bound

Define for $$k \in \mathbb {Z}^n {\setminus } \{ 0 \}$$ and $$\Lambda > 0$$ the probabilities of double events by

$$P_{\varvec{b}} ( k, \Lambda ) := \mathbb {P} \left\{ \begin{aligned}&\exists \, t \in \Lambda [ 0, 1 ]^n :&\varvec{X} ( t )> u \varvec{b} \\&\exists \, s \in \Lambda [ k, k + 1 ] :&\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} .$$

### Lemma 3

(Double sum bound). If $$\varvec{X} ( t ), \, t \in [ 0, T ]^n$$ is a centered continuous Gaussian field satisfying Assumption A2, then there exist positive constants $$C$$ and $$\varepsilon$$ such that for every $$k \in \mathbb {Z}^n {\setminus } \{ 0 \}$$ with $$| k_l | \ne 1$$ and $$| k_l | \le N_u ( \varepsilon )$$ for all $$l$$, and for every $$\Lambda > 0$$,

$$\frac{ P_{\varvec{b}} ( k, \Lambda ) }{ \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le C \Lambda ^{\# \{ l :k_l = 0 \}} \prod _{l :k_l \ne 0} \left( \left| k_l \right| - 1 \right) ^{-2} \exp \left( -\frac{1}{4} \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} \left( \left| k_l \right| - 1 \right) ^{\alpha _l} \right)$$

### Remark 3

Note that the conditions of the lemma demand that there be no $$l$$’s such that $$k_l = \pm 1$$. This is not a coincidence: the adjacent double events are to be estimated differently. See the proof of Theorem 1 for details.

### Proof

Without loss of generality, assume that $$I = \{ 1, \ldots , d \}$$. Then

$$\begin{aligned} P_{\varvec{b}} ( k, \Lambda )&\le \mathbb {P} \left\{ \exists \, ( t, s ) \in \Lambda u^{-2/\alpha } [ k, k + 1 ] \times [ 0, 1 ] :\frac{1}{2} \Big [ \varvec{X} ( t ) + \varvec{X} ( s ) \Big ]> u \varvec{b} \right\} \\&= u^{-d} \int _{\mathbb {R}^d} \mathbb {P} \left\{ \exists \, ( t, s ) \in [ 0, \Lambda ]^{2n} :\varvec{\chi }_{u, k, \varvec{x}} ( t, s ) > \varvec{x} \right\} \varphi _{u, k} \left( u \varvec{b} - \frac{\varvec{x}}{u} \right) \mathop {d \varvec{x}}, \end{aligned}$$
(12)

where

$$\varvec{\chi }_{u, k, \varvec{x}} ( t, s ) := u \left( \varvec{X}_{u, k} ( t, s ) -u \varvec{b} \ \Big | \, \varvec{X}_{u, k} ( 0, 0 ) = u \varvec{b} - \frac{\varvec{x}}{u} \right) +\varvec{x}$$

with

$$\varvec{X}_{u, k} ( t, s ) := \frac{1}{2} \Big [ \varvec{X} \left( \Lambda u^{-2/\alpha } k + u^{-2/\alpha } t \right) +\varvec{X} \left( u^{-2/\alpha } s \right) \Big ]$$

and $$\varphi _{u, k}$$ is the pdf of $$\varvec{X}_{u, k} ( 0, 0 ) {\mathop {=}\limits ^{d}} N ( 0, \Sigma _{u, k} )$$, where

$$\begin{aligned} \Sigma _{u, k}&:= \mathbb {E} \left\{ \varvec{X}_{u, k} ( 0, 0 ) \, \varvec{X}_{u, k}^\top ( 0, 0 ) \right\} = \frac{1}{4} \Big [ 2 \Sigma + R \left( \Lambda u^{-2/\alpha } k \right) + R \left( -\Lambda u^{-2/\alpha } k \right) \Big ] \\&= \Sigma - \frac{1}{4} \, u^{-2} \sum _{l = 1}^n \Big [ B_l + B_l^\top \Big ] \Lambda ^{\alpha _l} | k_l |^{\alpha _l} +o \left( u^{-2/\alpha } \Lambda k \right) . \end{aligned}$$
(13)

First, bound $$\varphi _{u, k}$$ as follows:

$$\varphi _{u, k} \left( u \varvec{b} - \frac{\varvec{x}}{u} \right) \le \varphi ( u \varvec{b} ) \exp \left( \frac{u^2}{2} \, \varvec{b}^\top \Big [ \Sigma ^{-1} - \Sigma _{u, k}^{-1} \Big ] \varvec{b} \right) \exp \left( \varvec{b}^\top \Sigma _{u, k}^{-1} \varvec{x} \right) ,$$

where $$\varphi$$ is the pdf of $$N ( 0, \Sigma )$$. Plugging this into (12) and noting that $$u^{-d} \, \varphi ( u \varvec{b} ) = \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\}$$, we obtain the following bound:

$$\frac{P_{\varvec{b}} ( k, \Lambda )}{\mathbb {P} \left\{ \varvec{X} ( 0 )> u \varvec{b} \right\} } \le \exp \left( \frac{u^2}{2} \, \varvec{b}^\top \Big [ \Sigma ^{-1} - \Sigma _{u, k}^{-1} \Big ] \varvec{b} \right) \int _{\mathbb {R}^d} \exp \left( \varvec{b}^\top \Sigma _{u, k}^{-1} \varvec{x} \right) \mathbb {P} \left\{ \exists \, ( t, s ) \in [ 0, \Lambda ]^{2n} :\varvec{\chi }_{u, k, \varvec{x}} ( t, s ) > \varvec{x} \right\} \mathop {d \varvec{x}}.$$
(14)

At this point we split the proof into three parts: estimation of the integral, estimation of the exponent in front of it and their comparison.

### The exponent in front of the integral

By (13), we have

$$\Sigma ^{-1} - \Sigma _{u, k}^{-1} = -\frac{1}{4} \, u^{-2} \sum _{l = 1}^n \Sigma ^{-1} \Big [ B_l + B_l^\top \Big ] \Sigma ^{-1} \Lambda ^{\alpha _l} | k_l |^{\alpha _l} +o \left( u^{-2/\alpha } \Lambda k \right) .$$
(15)

Therefore,

$$\frac{u^2}{2} \, \varvec{b}^\top \Big [ \Sigma ^{-1} - \Sigma _{u, k}^{-1} \Big ] \varvec{b} = -\frac{1}{4} \sum _{l = 1}^n \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} | k_l |^{\alpha _l} +u^2 o \left( u^{-2/\alpha } \Lambda k \right) .$$
(16)

By our assumptions,

$$\sup _{-N_u ( \varepsilon ) \le k \le N_u ( \varepsilon )} u^2 \left| o \left( u^{-2/\alpha } \Lambda k \right) \right| \xrightarrow [u \rightarrow \infty ]{} 0.$$

### The integral

First note that

$$\exp \left( \varvec{b}^\top \Sigma _{u, k}^{-1} \varvec{x} \right) = \exp \left( ( \varvec{w} + o ( u^{-2/\alpha } \Lambda k ) )^\top \varvec{x} \right)$$

where the small-o term tends to zero uniformly in $$k$$. We will drop this term from now on to simplify the notation. To bound the remaining integral we will use Lemma 8, which gives

$$\int _{\mathbb {R}^d} e^{\varvec{w}^\top \varvec{x}} \, \mathbb {P} \left\{ \exists \, ( t, s ) \in [ 0, \Lambda ]^{2n} :\varvec{\chi }_{u, k, \varvec{x}} ( t, s ) > \varvec{x} \right\} \mathop {d \varvec{x}} \le c_1 \exp \left( c_2 ( G + \sigma ^2 ) \right)$$
(17)

with some positive constants $$c_1$$ and $$c_2$$. Here $$G \in \mathbb {R}$$ and $$\sigma ^2 > 0$$ are numbers (depending on $$k$$ and $$u$$) such that

$$\sup _{F \subset \{ 1, \ldots , d \}} \sup _{(t, s) \in [ 0, \Lambda ]^{2n}} \varvec{w}_F^\top \, \mathbb {E} \left\{ \varvec{\chi }_{u, k, \varvec{x}, F} ( t, s ) \right\} \le G + \varepsilon \sum _{j = 1}^d | x_j |$$
(18)

and

$$\sup _{F \subset \{ 1, \ldots , d \}} \sup _{( t, s ) \in [ 0, \Lambda ]^{2n}} {{\,\textrm{Var}\,}}\left\{ \varvec{w}_F^\top \, \varvec{\chi }_{u, k, \varvec{x}, F} ( t, s ) \right\} \le \sigma ^2.$$

To apply this lemma we need to find such numbers.

### Finding $$G$$

By the formulas on conditional Gaussian distribution, we have

$$\mathbb {E} \left\{ \varvec{\chi }_{u, k, \varvec{x}} ( t, s ) \right\} = -u \Big [ \Sigma _{u, k} - R_{u, k} ( t, s, 0, 0 ) \Big ] \Sigma _{u, k}^{-1} \left[ u \varvec{b} -\frac{\varvec{x}}{u} \right] ,$$
(19)

where $$R_{u, k} ( t, s, t', s' )$$ is the covariance of $$\varvec{\chi }_{u, k, \varvec{x}} ( t, s )$$. Note that this covariance does not depend on $$\varvec{x}$$. The $$\varvec{x}$$-term can clearly be bounded by

$$\left\| \Big [ \Sigma _{u, k} - R_{u, k} ( t, s, 0, 0 ) \Big ] \Sigma _{u, k}^{-1} \, \varvec{x} \right\| \le \varepsilon \sum _{j = 1}^d | x_j |.$$

Let us bound the $$\varvec{b}$$-contribution. A direct computation gives

$$\begin{aligned}&\Sigma _{u, k} - R_{u, k} ( t, s, 0, 0 ) \\&\sim \frac{1}{4 u^2} \sum _{l = 1}^n \Big [ S_{\alpha _l, B_l} ( s_l ) +S_{\alpha _l, B_l} ( t_l ) +S_{\alpha _l, B_l} ( \Lambda k_l + t_l ) +S_{\alpha _l, B_l} ( s_l - \Lambda k_l ) -S_{\alpha _l, B_l} ( -\Lambda k_l ) -S_{\alpha _l, B_l} ( \Lambda k_l ) \Big ] \end{aligned}$$
(20)

uniformly in $$k \in [-N_u ( \varepsilon ), N_u ( \varepsilon )]$$. By (15) and (20),

$$u^2 \, \varvec{w}_F^\top \left[ \Big [ \Sigma _{u, k} - R_{u, k} ( t, s, 0, 0 ) \Big ] \Sigma _{u, k}^{-1} \, \varvec{b} \right] _F \sim u^2 \, \varvec{w}_F^\top \left[ \Big [ \Sigma _{u, k} - R_{u, k} ( t, s, 0, 0 ) \Big ] \varvec{w} \right] _F \sim \frac{1}{4} \sum _{l = 1}^n [ A_{1, l} + A_{2, l} + A_{3, l} ]$$

uniformly in $$k \in [-N_u ( \varepsilon ), N_u ( \varepsilon )]$$, where

$$\begin{aligned} A_{1, l}&:= \varvec{w}_F^\top \left[ \Big [ S_{\alpha _l, B_l} ( s_l ) + S_{\alpha _l, B_l} ( t_l ) \Big ] \varvec{w} \right] _F, \\ A_{2, l}&:= \varvec{w}_F^\top \left[ \Big [ S_{\alpha _l, B_l} ( \Lambda k_l + t_l ) - S_{\alpha _l, B_l} ( \Lambda k_l ) \Big ] \varvec{w} \right] _F, \\ A_{3, l}&:= \varvec{w}_F^\top \left[ \Big [ S_{\alpha _l, B_l} ( s_l - \Lambda k_l ) - S_{\alpha _l, B_l} ( -\Lambda k_l ) \Big ] \varvec{w} \right] _F. \end{aligned}$$

The first term can be bounded as follows:

$$\left| A_{1, l} \right| \le \left| \varvec{w} \right| ^2 \Big [ \left\| S_{\alpha _l, B_l} ( s_l ) \right\| +\left\| S_{\alpha _l, B_l} ( t_l ) \right\| \Big ] \le 2 \Lambda ^{\alpha _l} \left| \varvec{w} \right| ^2 \left\| B_l \right\| .$$

For $$k_l \ne 0$$, the terms $$A_{2, l}$$ and $$A_{3, l}$$ can be bounded similarly; for instance,

$$\left| A_{2, l} \right| \le \left| \varvec{w} \right| ^2 \left\| B_l \right\| \Big [ \left| \Lambda k_l + t_l \right| ^{\alpha _l} - \left| \Lambda k_l \right| ^{\alpha _l} \Big ] \le c_2 \, \Lambda ^{\alpha _l} | k_l |^{\alpha _l - 1}.$$
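The second inequality here is, up to constants, an application of the mean value theorem; a sketch of the step for $$k_l \ge 1$$ and $$0 \le t_l \le \Lambda$$ (the explicit constant below is our own normalization, and the case $$k_l \le -1$$ is symmetric):

```latex
% Mean value theorem for x -> x^{alpha_l} on (Lambda k_l, Lambda k_l + t_l):
% for some xi in this interval, using t_l <= Lambda <= Lambda k_l,
\begin{aligned}
\left| \Lambda k_l + t_l \right|^{\alpha_l} - \left| \Lambda k_l \right|^{\alpha_l}
  &= \alpha_l \, \xi^{\alpha_l - 1} \, t_l
  \le \alpha_l \, \Lambda \, \max\!\Big( ( \Lambda k_l )^{\alpha_l - 1}, \, ( 2 \Lambda k_l )^{\alpha_l - 1} \Big) \\
  &\le \alpha_l \, 2^{\alpha_l} \, \Lambda^{\alpha_l} \, k_l^{\alpha_l - 1},
\end{aligned}
```

the maximum covering both the monotone decreasing ($$\alpha_l \le 1$$) and increasing ($$\alpha_l > 1$$) regimes of $$\xi^{\alpha_l - 1}$$; the constant $$c_2$$ can then be taken to absorb $$\alpha_l \, 2^{\alpha_l} \left| \varvec{w} \right|^2 \left\| B_l \right\|$$.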

Therefore, the inequality (18) is satisfied with

$$G = c_2 \sum _{l = 1}^n \Lambda ^{\alpha _l} ( 1 + | k_l |^{\alpha _l - 1} \mathbb {1}_{k_l \ne 0} ).$$
(21)

### Finding $$\sigma ^2$$

We have

$${{\,\textrm{Var}\,}}\left\{ \varvec{w}_F^\top \varvec{\chi }_{u, k, \varvec{x}, F} ( t, s ) \right\} = \sum _{j', \, j \in F} w_j w_{j'} {{\,\textrm{cov}\,}}( \chi _{u, k, \varvec{x}, j} ( t, s ), \chi _{u, k, \varvec{x}, j'} ( t, s ) ) \le c_3 \sum _{j, j'} \Big [ \mathcal {R}_{u, k, \varvec{x}} ( t, s, t, s ) \Big ]_{j, j'},$$

where

\begin{aligned} \mathcal {R}_{u, k, \varvec{x}} ( t, s, t', s' )&:= \mathbb {E} \left\{ \varvec{\chi }_{u, k, \varvec{x}} ( t, s ) \, \varvec{\chi }^\top _{u, k, \varvec{x}} ( t', s' ) \right\} = R_{u, k} ( t, s, t', s' ) - R_{u, k} ( t, s, 0, 0 ) \Sigma _{u, k}^{-1} R_{u, k} ( 0, 0, t', s' ) \\&\sim \frac{1}{4} \sum _{l = 1}^n \Big [ A_{1, l} + A_{2, l} + A_{3, l} + A_{4, l} + A_{5, l} + A_{6, l} \Big ],\end{aligned}

where

\begin{aligned} A_{1, l}&:= S_{\alpha _l, B_l} ( t_l ) +S_{\alpha _l, B_l} ( s_l ) +S_{\alpha _l, B_l} ( -t_l' ) +S_{\alpha _l, B_l} ( -s_l' ), \\ A_{2, l}&:= S_{\alpha _l, B_l} ( s_l - \Lambda k_l ) - S_{\alpha _l, B_l} ( -\Lambda k_l ), \\ A_{3, l}&:= S_{\alpha _l, B_l} ( t_l + \Lambda k_l ) - S_{\alpha _l, B_l} ( \Lambda k_l ), \\ A_{4, l}&:= -S_{\alpha _l, B_l} ( s_l - s_l' ) -S_{\alpha _l, B_l} ( t_l - t_l' ), \\ A_{5, l}&:= S_{\alpha _l, B_l} ( -\Lambda k_l - t_l' ) -S_{\alpha _l, B_l} ( -\Lambda k_l - t_l' + s_l ), \\ A_{6, l}&:= S_{\alpha _l, B_l} ( \Lambda k_l - s_l' ) -S_{\alpha _l, B_l} ( \Lambda k_l - s_l' + t_l ).\end{aligned}

Similarly to how we bounded differences of this form above, we obtain

$$\left\| A_{1, l} \right\| , \, \left\| A_{4, l} \right\| \le c_4 \Lambda ^{\alpha _l}, \qquad \left\| A_{2, l} \right\| , \, \left\| A_{3, l} \right\| , \, \left\| A_{5, l} \right\| , \, \left\| A_{6, l} \right\| \le c_5 \Lambda ^{\alpha _l} | k_l |^{\alpha _l - 1}.$$

Hence, the inequality (18) is satisfied with

$$\sigma ^2 = c_6 \sum _{l = 1}^n \Lambda ^{\alpha _l} \left( 1 + | k_l |^{\alpha _l - 1} \mathbb {1}_{k_l \ne 0} \right)$$
(22)

as $$u \rightarrow \infty$$.

### Proceeding with the integral

Combining (21) and (22) with (17), we find

$$\int _{\mathbb {R}^d} e^{\varvec{w}^\top \varvec{x}} \, \mathbb {P} \left\{ \exists \, ( t, s ) \in [ 0, \Lambda ]^{2n} :\varvec{\chi }_{u, k, \varvec{x}} ( t, s ) > u \varvec{b} \right\} \mathop {d \varvec{x}} \le c_6 \exp \left( c_7 \sum _{l = 1}^n \Lambda ^{\alpha _l} \left( 1 + | k_l |^{\alpha _l - 1} \mathbb {1}_{k_l \ne 0} \right) \right).$$

By (16) and (14), we have

$$\frac{P_{\varvec{b}} ( k, \Lambda )}{\mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_8 \exp \left( -\sum _{l = 1}^n \Lambda ^{\alpha _l} \left[ \frac{\varvec{w}^\top B_l \, \varvec{w}}{2} \, | k_l |^{\alpha _l} -c_7 \left( 1 + | k_l |^{\alpha _l - 1} \mathbb {1}_{k_l \ne 0} \right) \right] \right) .$$
(23)

If $$| k_l |$$ is large enough, we have

$$\frac{\varvec{w}^\top B_l \, \varvec{w}}{2} \, | k_l |^{\alpha _l} -c_7 \left( 1 + | k_l |^{\alpha _l - 1} \right) \ge \frac{\varvec{w}^\top B_l \, \varvec{w}}{4} \, | k_l |^{\alpha _l}.$$

### Lifting the assumption that $$|k_l|$$ is large

Let $$K$$ be such that, whenever $$| k_l | \ge K$$ for all $$l$$, we have

$$\frac{ P_{\varvec{b}} ( k, \Lambda ) }{ \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_8 \exp \left( -\frac{1}{4} \sum _{l = 1}^n \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} | k_l |^{\alpha _l} \right) .$$

It therefore suffices to consider the case when some of the $$k_l$$'s satisfy $$1< | k_l | < K$$. Assume for simplicity that there is exactly one $$l$$ such that $$1< | k_l | < K$$, take $$\Lambda ' \in ( 0, \Lambda )$$ and bound $$P_{\varvec{b}}$$ as follows:

\begin{aligned} P_{\varvec{b}} ( k, \Lambda )&\le \sum _{ 0 \, \le \, p_l, \, q_l \, \le \, \lceil {\Lambda / \Lambda '}\rceil} \mathbb {P} \left\{ \begin{aligned} \displaystyle&\exists \, t \in \Lambda ' u^{-2/\alpha } [ \Lambda k / \Lambda ' + q_l 1_l, \Lambda k / \Lambda ' + q_l 1_l + 1 ] :&\varvec{X} ( t )> u \varvec{b} \\ \displaystyle&\exists \, s \in \Lambda ' u^{-2/\alpha } [ p_l 1_l, p_l 1_l + 1 ] :&\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} \\&= \sum _{ 0 \, \le \, p_l, \, q_l \, \le \, \lceil {\Lambda / \Lambda '} \rceil} P_{\varvec{b}} ( \Lambda k / \Lambda ' + ( q_l - p_l ) 1_l, \Lambda ' ).\end{aligned}
(24)

Here $$1_l \in \mathbb {Z}^n$$ denotes the vector with $$[ 1_l ]_{l'} = \delta _{l, l'}$$. Choose $$\Lambda ':= \Lambda ( | k_l | - 1 ) / K$$. Then

$$k'_l := \Lambda k_l / \Lambda ' + q_l - p_l \ge \Lambda k_l / \Lambda ' - \Lambda / \Lambda ' = \Lambda ( k_l - 1 ) / \Lambda ' \ge K$$

and therefore

$$\frac{ P_{\varvec{b}} ( k', \Lambda ' ) }{ \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_8 \exp \left( -\frac{1}{4} \sum _{l = 1}^n \varvec{w}^\top B_l \, \varvec{w} \, \Lambda '^{\alpha _l} \, | k_l' |^{\alpha _l} \right) = c_8 \exp \left( -\frac{1}{4} \sum _{l = 1}^n \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} ( | k_l | - 1 )^{\alpha _l} \right) .$$
(25)

It remains to note that the number of terms in the sum (24) is at most $$\lceil {\Lambda / \Lambda '}\rceil^2 \le 2 K^2 / ( | k_l | - 1 )^2$$.

### Lifting the assumption that all $$k_l$$’s are non-zero

By (23) and (25)

$$\frac{P_{\varvec{b}} ( k, \Lambda )}{\mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_8 \prod _{ l :k_l \ne 0 } \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} \, \left( \left| k_l \right| - 1 \right) ^{\alpha _l} \right) \prod _{ l :k_l = 0 } \exp \left( c_7 \Lambda ^{\alpha _l} \right) .$$
(26)
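For later use, note that the right-hand side of (26) is summable in $$k$$, which is what makes the double sum negligible. A sketch of the one-dimensional estimate, writing $$c := \frac{1}{4} \, \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l}$$ for brevity:

```latex
% Summability of the bound (26) in a single coordinate:
\sum_{k_l \in \mathbb{Z} \setminus \{ 0 \}} e^{ -c \, ( | k_l | - 1 )^{\alpha_l} }
  = 2 \sum_{m = 1}^{\infty} e^{ -c \, ( m - 1 )^{\alpha_l} }
  \le 2 \Big( 1 + \sum_{m = 1}^{\infty} e^{ -c \, m^{\alpha_l} } \Big) < \infty.
```

Observe that the term with $$m = 1$$ contributes $$e^0 = 1$$ and does not vanish as $$\Lambda \rightarrow \infty$$; this is precisely why the indices with $$| q_l | = 1$$ receive separate treatment in the splitting (29) below.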

Similarly to the previous step of the proof, take $$\Lambda ' \in ( 0, \Lambda )$$ and assume for simplicity that there is only one $$l$$ such that $$k_l = 0$$. Note that

\begin{aligned} P_{\varvec{b}} ( k, \Lambda ) \le \sum _{0 \, \le \, p \, \le \lceil {\Lambda / \Lambda '}\rceil} \mathbb {P} \left\{ \begin{aligned}&\begin{aligned} \displaystyle&\exists \, t_j \in \Lambda u^{-2/\alpha _j} [ k_j, k_j + 1 ], \ j \ne l \\ \displaystyle&\exists \, t_l \in \Lambda ' u^{-2/\alpha _l} [ p, p + 1 ] \end{aligned} \quad :&\varvec{X} ( t )> u \varvec{b} \\&\begin{aligned} \displaystyle&\exists \, s_j \in \Lambda u^{-2/\alpha _j} [ 0, 0 + 1 ], \ j \ne l \\ \displaystyle&\exists \, s_l \in \Lambda ' u^{-2/\alpha _l} [ p, p + 1 ] \end{aligned} \quad :&\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} . \end{aligned}
(27)

An argument similar to the one above shows that each term of this sum is at most

$$c_8 \prod _{l' \ne l} \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_{l'} \, \varvec{w} \, \Lambda ^{\alpha _{l'}} \left( \left| k_{l'} \right| - 1 \right) ^{\alpha _{l'}} \right) \exp \left( c_7 \Lambda '^{\alpha _l} \right) \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} .$$

The number of terms in the sum (27) is at most $$\lceil {\Lambda / \Lambda '} \rceil$$, hence

$$\frac{ P_{\varvec{b}} ( k, \Lambda ) }{ \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_9 \Lambda \prod _{l' \ne l} \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_{l'} \, \varvec{w} \, \Lambda ^{\alpha _{l'}} \left( \left| k_{l'} \right| - 1 \right) ^{\alpha _{l'}} \right) ,$$

where $$c_9 = 2 c_8 \exp ( c_7 \Lambda '^{\alpha _l} ) / \Lambda '$$. The general case, when there are several $$l$$'s such that $$k_l = 0$$, can be addressed similarly.$$\square$$

## 5 Proofs of the main results

### Proof

We begin the proof by splitting $$[0, T]^n$$ into cubes of Pickands scale:

$$[0, T]^n = \Lambda u^{-2/\alpha } \bigcup _{k \, \le \, N_u ( T )} [ k, k + 1 ], \quad \text {where} \quad N_u ( T ) := \left\lceil \frac{T}{\Lambda u^{-2/\alpha }} \right\rceil$$

and using Bonferroni inequality to obtain

$${\Sigma }_1' - {\Sigma }_2 \le \mathbb {P} \left\{ \exists \, t \in [0, T]^n :\varvec{X} ( t ) > u \varvec{b} \right\} \le {\Sigma }_1,$$

where

\begin{aligned} {\Sigma }_1&:= \sum _{0 \, \le \, k \, \le \, N_u ( T )} \mathbb {P} \left\{ \exists \, t \in \Lambda u^{-2/\alpha } [ k, k + 1 ] :\varvec{X} ( t )> u \varvec{b} \right\} , \\ {\Sigma }_2&:= \sum _{\begin{array}{c} 0 \, \le \, k, \, j \, \le \, N_u ( T ) \\ k \, \ne \, j \end{array}} \mathbb {P} \left\{ \begin{aligned} \displaystyle&\exists \, t \in \Lambda u^{-2/\alpha } [ k, k + 1 ] :\varvec{X} ( t )> u \varvec{b} \\ \displaystyle&\exists \, s \in \Lambda u^{-2/\alpha } [ j, j + 1 ] :\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} . \end{aligned}

and $${\Sigma }_1'$$ is defined by the same formula as $${\Sigma }_1$$ but with $$N_u ( T ) - 1$$ instead of $$N_u ( T )$$ in the upper summation limit. At this point we split the proof into two parts. First, we will focus on finding the exact asymptotics of the single sum $${\Sigma }_1 \sim {\Sigma }_1'$$, and then demonstrate that the double sum $${\Sigma }_2$$ is negligible with respect to $${\Sigma }_1$$.

Since $$\varvec{X}$$ is homogeneous, we can easily compute the single sum

$${\Sigma }_1 = \left[ \prod _{l = 1}^n N_{u, l} ( T ) \right] \mathbb {P} \left\{ \exists \, t \in \Lambda u^{-2/\alpha } [ 0, 1 ]^n :\varvec{X} ( t ) > u \varvec{b} \right\} .$$

Applying the local Pickands lemma (Lemma 5), we obtain

$${\Sigma }_1' \sim {\Sigma }_1 \sim T^n \left[ \prod _{l = 1}^n u^{-2 / \alpha _l} \right] \frac{\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( [ 0, \Lambda ]^n )}{\Lambda ^n} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} .$$

Since $$E \mapsto \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( E )$$ is subadditive, we have that the limit

$$\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} := \lim _{\Lambda \rightarrow \infty } \frac{\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( [0, \Lambda ]^n )}{\Lambda ^n}$$

exists and is finite. We will show that it is also positive after dealing with the double sum.
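The existence and finiteness of this limit is an instance of Fekete's lemma; a sketch, assuming (as the subadditivity of $$E \mapsto \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( E )$$ provides) that the functional is subadditive under splitting the cube along each coordinate:

```latex
% Fekete's lemma for the subadditive function Lambda -> H([0, Lambda]^n):
% subadditivity along each coordinate direction yields
\mathcal{H}_{\varvec{\alpha}, \mathbb{B}, \varvec{w}}
  = \lim_{\Lambda \to \infty}
    \frac{ \mathcal{H}_{\varvec{\alpha}, \mathbb{B}, \varvec{w}} \big( [0, \Lambda]^n \big) }{ \Lambda^n }
  = \inf_{\Lambda > 0}
    \frac{ \mathcal{H}_{\varvec{\alpha}, \mathbb{B}, \varvec{w}} \big( [0, \Lambda]^n \big) }{ \Lambda^n }
  < \infty,
```

finiteness following since the infimum is bounded by the value at any fixed $$\Lambda$$.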

### Double sum

By stationarity we have that

\begin{aligned} {\Sigma }_2 = \sum _{\begin{array}{c} 0 \, \le \, k, \, j \, \le \, N_u ( T ) \\ k \, \ne \, j \end{array}} \mathbb {P} \left\{ \begin{aligned} \displaystyle&\exists \, t \in \Lambda u^{-2/\alpha } [ k - j, k - j + 1 ] :&\varvec{X} ( t )> u \varvec{b} \\ \displaystyle&\exists \, s \in \Lambda u^{-2/\alpha } [ 0, 1 ] :&\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} . \end{aligned}

Reindexing the sum by $$q = k - j$$ and bounding the number of pairs $$( k, j )$$ with $$k - j = q$$ by $$\prod _{l = 1}^n N_{u, l} ( T )$$, we obtain

\begin{aligned} {\Sigma }_2 \le \prod _{l = 1}^n N_{u, l} ( T ) \sum _{\begin{array}{c} -N_u ( T ) \, \le \, q \, \le N_u ( T ) \\ q \ne 0 \end{array}} \mathbb {P} \left\{ \begin{aligned} \displaystyle&\exists \, t \in \Lambda u^{-2/\alpha } [ q, q + 1 ] :&\varvec{X} ( t )> u \varvec{b} \\ \displaystyle&\exists \, s \in \Lambda u^{-2/\alpha } [ 0, 1 ] :&\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} . \end{aligned}

Denote the probabilities of the double events by

\begin{aligned} P_{\varvec{b}} ( q, \Lambda ) := \mathbb {P} \left\{ \begin{aligned} \displaystyle&\exists \, t \in \Lambda u^{-2/\alpha } [ q, q + 1 ] :&\varvec{X} ( t )> u \varvec{b} \\ \displaystyle&\exists \, s \in \Lambda u^{-2/\alpha } [ 0, 1 ] :&\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} . \end{aligned}

Take some $$\varepsilon \in ( 0, T )$$ and divide the sum in two parts:

\begin{aligned} \sum _{\begin{array}{c} -N_u ( T ) \, \le \, q \, \le \, N_u ( T ) \\ q \ne 0 \end{array}} P_{\varvec{b}} ( q, \Lambda ) = \sum _{\exists \, l :| q_l | \, > \, N_{u, l} ( \varepsilon )} P_{\varvec{b}} ( q, \Lambda ) +\sum _{-N_u ( \varepsilon ) \, \le \, q \, \le \, N_u ( \varepsilon )} P_{\varvec{b}} ( q, \Lambda ). \end{aligned}
(28)

Terms of the first sum can be bounded as follows:

\begin{aligned} P_{\varvec{b}} ( q, \Lambda )&\le \mathbb {P} \left\{ \exists \, ( t, s ) \in \Lambda u^{-2/\alpha } ( [ q, q + 1 ] \times [ 0, 1 ] ) :\frac{1}{2} \left[ \varvec{X} ( t ) + \varvec{X} ( s ) \right]> u \varvec{b} \right\} \\&\le \mathbb {P} \left\{ \exists \, ( t, s ) \in \Lambda u^{-2/\alpha } ( [ q, q + 1 ] \times [ 0, 1 ] ) :\frac{1}{2} \left[ \varvec{X}_I ( t ) + \varvec{X}_I ( s ) \right] > u \varvec{b}_I \right\} . \end{aligned}

Let $$\Sigma ( t, s )$$ denote the variance matrix of $$( \varvec{X} ( t ) + \varvec{X} ( s ) ) / 2$$:

$$\Sigma ( t, s ) = \frac{1}{4} \Big [ 2 \Sigma + R ( t - s ) + R ( s - t ) \Big ].$$

In view of Assumption A1, the matrix $$( \Sigma _{II} ( t, s ) )^{-1} - ( \Sigma _{II} )^{-1}$$ is strictly positive definite for $$t \ne s$$, which implies

\begin{aligned} \tau&:= \inf \left\{ \inf _{\varvec{x}_I \ge \varvec{b}_I} \varvec{x}_I^\top ( \Sigma _{II} ( t, s ) )^{-1} \varvec{x}_I \ \Big | \ ( t, s ) \in \Lambda u^{-2/\alpha } ( [ q, q + 1 ] \times [ 0, 1 ] ) \right\} \\&\ge \tau _1 := \inf \left\{ \inf _{\varvec{x}_I \ge \varvec{b}_I} \varvec{x}_I^\top ( \Sigma _{II} ( t, s ) )^{-1} \varvec{x}_I \ \Big | \ ( t, s ) \in [ 0, T ]^{2n} :\exists \, l :| t_l - s_l |> \varepsilon \right\}> \tau _0 := \inf _{\varvec{x}_I \ge \varvec{b}_I} \varvec{x}_I^\top ( \Sigma _{II} )^{-1} \varvec{x}_I > 0. \end{aligned}

Note that the condition $$\exists \, l :| q_l | > N_u ( \varepsilon )$$ allows us to separate $$\delta ( u, \varepsilon ):= \tau - \tau _0$$ from $$0$$ by $$\delta ( \varepsilon ):= \tau _1 - \tau _0 > 0$$, which depends on $$\varepsilon$$, but does not depend on $$u$$. Since $$\tau _0 = \varvec{b}_I^\top ( \Sigma _{II} )^{-1} \varvec{b}_I = \varvec{b}^\top \Sigma ^{-1} \varvec{b}$$, we obtain by using the Piterbarg inequality (34) the following upper bound:

\begin{aligned}&\mathbb {P} \left\{ \exists \, ( t, s ) \in \Lambda u^{-2/\alpha } ( [q, q + 1] \times [ 0, 1 ] ) :\frac{1}{2} \Big [ \varvec{X} ( t ) + \varvec{X} ( s ) \Big ] > u \varvec{b} \right\} \\&\le c_1 \, u^{2 n/\gamma - 1} {{\,\textrm{mes}\,}}\left( \Lambda u^{-2/\alpha } ( [ q, q + 1 ] \times [ 0, 1 ] ) \right) \exp \left( -\frac{u^2 \tau }{2} \right) \le c_2 \, \Lambda ^{2n} \, u^M \exp \left( -\frac{u^2}{2} \Big [ \varvec{b}^\top \Sigma ^{-1} \varvec{b} + \delta ( \varepsilon ) \Big ] \right) , \end{aligned}

which is negligible with respect to $$\mathbb {P} \{ \varvec{X} ( 0 ) > u \varvec{b} \}$$ as $$u \rightarrow \infty$$. Summing these bounds, we obtain

$$\limsup _{u \rightarrow \infty } \frac{ \displaystyle \sum _{\exists \, l :| q_l |> N_u ( \varepsilon )} P_{\varvec{b}} ( q, \Lambda ) }{ \displaystyle \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } = 0.$$
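The negligibility above can be traced through the exponential orders. A sketch, assuming a standard multivariate Gaussian tail lower bound with some polynomial power $$\kappa > 0$$ (the exact power is immaterial here):

```latex
% Lower bound on the one-point exceedance probability versus the Piterbarg bound:
\mathbb{P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\}
  \ge c \, u^{-\kappa} \exp\!\left( -\frac{u^2}{2} \, \varvec{b}^\top \Sigma^{-1} \varvec{b} \right)
\;\Longrightarrow\;
\frac{ u^{M} \exp\!\left( -\frac{u^2}{2} \Big[ \varvec{b}^\top \Sigma^{-1} \varvec{b} + \delta ( \varepsilon ) \Big] \right) }
     { \mathbb{P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} }
  \le \frac{ u^{M + \kappa} }{ c } \, e^{ - u^2 \delta ( \varepsilon ) / 2 }
  \xrightarrow[\; u \to \infty \;]{} 0,
```

and since the number of summands grows only polynomially in $$u$$, it cannot compensate the exponentially small factor $$e^{-u^2 \delta ( \varepsilon ) / 2}$$.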

To bound the second sum in (28), we divide it further into

\begin{aligned} \sum _{-N_u ( \varepsilon ) \, \le \, q \, \le \, N_u ( \varepsilon )} P_{\varvec{b}} ( q, \Lambda ) = \sum _{\begin{array}{c} -N_u ( \varepsilon ) \, \le \, q \, \le \, N_u ( \varepsilon ) \\ \exists \, l :| q_l | = 1 \end{array}} P_{\varvec{b}} ( q, \Lambda ) +\sum _{\begin{array}{c} -N_u ( \varepsilon ) \, \le \, q \, \le \, N_u ( \varepsilon ) \\ \forall \, l :| q_l | \ne 1 \end{array}} P_{\varvec{b}} ( q, \Lambda ) =: A_1 + A_2. \end{aligned}
(29)

The probabilities in the second sum can be estimated by Lemma 3 as follows:

$$\frac{ P_{\varvec{b}} ( q, \Lambda ) }{ \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c \Lambda ^{\# \{ l :q_l = 0 \}} \prod _{l :q_l \ne 0} \left( \left| q_l \right| - 1 \right) ^{-2} \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} \left( \left| q_l \right| - 1 \right) ^{\alpha _l} \right)$$

and therefore

$$\lim _{\Lambda \rightarrow \infty } \limsup _{u \rightarrow \infty } \frac{ A_2 }{ \displaystyle \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( [ 0, \Lambda ]^n ) \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_1 \lim _{\Lambda \rightarrow \infty } \sum _l \Lambda ^{\# \{ l :q_l = 0 \} - n} \exp \left( -\frac{1}{8} \, \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l} \right) = 0.$$

Next, we show how to bound the first sum. Assume for simplicity that $$q$$ is such that $$| q_l | = 1$$ and $$| q_{l'} | \ne 1$$ for all $$l' \ne l$$. We have

\begin{aligned} P_{\varvec{b}} ( q, \Lambda )&= \mathbb {P} \left\{ \begin{aligned}&\begin{aligned}&\forall \, j \ne l \ \exists \, t_j \in \Lambda u^{-2/\alpha _j} [ q_j, q_j + 1 ] \\&\exists \, t_l \in \Lambda u^{-2/\alpha _l} [ 1, 2 ] \, \end{aligned} \quad :&\varvec{X} ( t )> u \varvec{b} \\&\exists \, s \in \Lambda u^{-2/\alpha } [ 0, 1 ] \quad :&\varvec{X} ( s )> u \varvec{b} \end{aligned} \right\} \\&\le \mathbb {P} \left\{ \begin{aligned}&\begin{aligned}&\forall \, j \ne l \ \exists \, t_j \in \Lambda u^{-2/\alpha _j} [ q_j, q_j + 1 ] \\&\exists \, t_l \in u^{-2/\alpha _l} \left[ \Lambda + \sqrt{\Lambda }, 2 \Lambda + \sqrt{\Lambda } \right] \, \end{aligned} \quad :&\varvec{X} ( t )> u \varvec{b} \\&\exists \, s \in \Lambda u^{-2/\alpha } [ 0, 1 ] \quad :&\varvec{X} ( s )> u \varvec{b} \end{aligned} \right\} \\&\quad +\mathbb {P} \left\{ \begin{aligned}&\exists \, t_j \in \Lambda u^{-2/\alpha _j} [ q_j, q_j + 1 ] \, \forall \, j \ne l \\&\exists \, t_l \in u^{-2/\alpha _l} \left[ \Lambda , \Lambda + \sqrt{\Lambda } \right] \, \end{aligned} \quad :\varvec{X} ( t ) > u \varvec{b} \right\} =: A_3 + A_4. \end{aligned}

The first probability on the right satisfies the conditions of Lemma 3, and therefore

$$\frac{A_3}{ \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c_1 \Lambda ^{\# \{ l' :q_{l'} = 0 \}} \prod _{l' \ne l :\, q_{l'} \ne 0} \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_{l'} \, \varvec{w} \, \Lambda ^{\alpha _{l'}} \left( \left| q_{l'} \right| - 1 \right) ^{\alpha _{l'}} \right) \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_l \, \varvec{w} \, \Lambda ^{\alpha _l / 2} \right) .$$

Therefore, we obtain

$$\lim _{\Lambda \rightarrow \infty } \limsup _{u \rightarrow \infty } \frac{ \displaystyle \sum _l A_3 }{ \displaystyle \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( [ 0, \Lambda ]^n ) \prod _{l = 1}^n u^{-2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } = 0.$$

For $$A_4$$, we have by Lemma 5

$$\frac{A_4}{\mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \sim \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} \left( [ 0, \Lambda ] \times \ldots \times \left[ 0, \sqrt{\Lambda } \right] \times \ldots \times [ 0, \Lambda ] \right) .$$

Consequently, we have

\begin{aligned} \lim _{\Lambda \rightarrow \infty } \limsup _{u \rightarrow \infty }&\frac{ \displaystyle \sum _l A_4 }{ \displaystyle T^n \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} \left( [ 0, \Lambda ]^n \right) \prod _{l = 1}^n u^{-2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \\&= \lim _{\Lambda \rightarrow \infty } \frac{ \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} \left( [ 0, \Lambda ] \times \ldots \times \left[ 0, \sqrt{\Lambda } \right] \times \ldots \times [ 0, \Lambda ] \right) }{ \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} \left( [ 0, \Lambda ]^n \right) } \le \lim _{\Lambda \rightarrow \infty } \Lambda ^{-1/2} = 0. \end{aligned}

The general case of $$q_{\mathcal {I}} \in \{ \pm 1 \}$$ for $$\mathcal {I} \subset \{ 1, \ldots , n \}$$ can be addressed similarly.

### Positivity of the Pickands constant

To show that the constant is positive we can use the following lower bound:

\begin{aligned} \lim _{\Lambda \rightarrow \infty } \, \frac{\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} \left( [ 0, \Lambda ]^n \right) }{\Lambda ^n}&\ge \liminf _{u \rightarrow \infty } \frac{ \displaystyle \mathbb {P} \left\{ \exists \, t \in [ 0, T ]^n :\varvec{X} ( t )> u \varvec{b} \right\} }{ \displaystyle T^n \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 )> u \varvec{b} \right\} } \\&\ge \liminf _{u \rightarrow \infty } \frac{ \displaystyle \mathbb {P} \left\{ \exists \, t \in [ 0, \varepsilon ]^n :\varvec{X} ( t )> u \varvec{b} \right\} }{ \displaystyle T^n \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 )> u \varvec{b} \right\} } \ge \liminf _{u \rightarrow \infty } \, \frac{\widetilde{{\Sigma }}_1 - \widetilde{{\Sigma }}_2}{ \displaystyle T^n \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} },\end{aligned}
(30)

where $$\widetilde{{\Sigma }}_1$$ and $$\widetilde{{\Sigma }}_2$$ are the single and double sums with some $$\Lambda '$$ instead of $$\Lambda$$ and without the odd (in all coordinates) intervals:

\begin{aligned} \widetilde{{\Sigma }}_1&:= \sum _{0 \, \le \, k \, \le \, \widetilde{N}_u ( \varepsilon )} \mathbb {P} \left\{ \exists \, t \in \Lambda ' u^{-2/\alpha } [ 2 k, 2 k + 1 ] :\varvec{X} ( t )> u \varvec{b} \right\} , \\ \widetilde{{\Sigma }}_2&:= \sum _{\begin{array}{c} 0 \, \le \, k, \, j \, \le \, \widetilde{N}_u ( \varepsilon ) \\ k \, \ne \, j \end{array}} \mathbb {P} \left\{ \begin{aligned} \displaystyle&\exists \, t \in \Lambda ' u^{-2/\alpha } [ 2 k, 2 k + 1 ] :\varvec{X} ( t )> u \varvec{b} \\ \displaystyle&\exists \, s \in \Lambda ' u^{-2/\alpha } [ 2 j, 2 j + 1 ] :\varvec{X} ( s ) > u \varvec{b} \end{aligned} \right\} \end{aligned}

and $$\widetilde{N}_u ( \varepsilon ) = \lfloor {\varepsilon / ( 2 \Lambda ' u^{-2/\alpha } )} \rfloor$$. By the same reasoning as above,

$$\liminf _{u \rightarrow \infty } \, \frac{\widetilde{{\Sigma }}_1}{ \displaystyle \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } = \left( \frac{\varepsilon }{2} \right) ^n \frac{\mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( [ 0, \Lambda ' ]^n )}{\Lambda '^n},$$

and

$$\limsup _{u \rightarrow \infty } \, \frac{\widetilde{{\Sigma }}_2}{ \displaystyle \prod _{l = 1}^n u^{-2/\alpha _l} \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} } \le c \left( \frac{\varepsilon }{2} \right) ^n \sum _l \Lambda '^{\# \{ l :k_l = 0 \} - n} \prod _{k_l \ne 0} \exp \left( -\frac{1}{4} \, \varvec{w}^\top B_l \, \varvec{w} \, \Lambda '^{\alpha _l} \right)$$

Taking $$\Lambda '$$ to be large enough, we find that the difference in (30) is separated from zero. Hence, its limit is positive.$$\square$$

### Proof

We begin the proof by splitting $$[0, T]^n$$ into cubes of side length $$\delta > 0$$ with $$\delta$$ small enough:

$$[0, T]^n = \delta \bigcup _{k \, \le \, N_{\delta }} [ k, k + 1 ] , \qquad N_{\delta } := \left\lceil \frac{T}{\delta } \right\rceil,$$

and applying the Bonferroni inequality, which yields

$${\Sigma }_1' - {\Sigma }_2 \le \mathbb {P} \left\{ \exists \, t \in [0, T]^n :\varvec{X} ( t ) > u \varvec{b} \right\} \le {\Sigma }_1,$$

where

\begin{aligned} {\Sigma }_1 := \sum _{k \le N_{\delta }} \mathbb {P} \left\{ \exists \, t \in \delta [ k , k + 1 ] :\varvec{X} ( t )> u \varvec{b} \right\} , \qquad {\Sigma }_2 := \sum _{\begin{array}{c} k, j \le N_{\delta } \\ k \ne j \end{array}} \mathbb {P} \left\{ \begin{array}{c} \displaystyle \exists \, t \in \delta [ k, k + 1 ] :\varvec{X} ( t )> u \varvec{b} \\ \displaystyle \exists \, s \in \delta [ j, j + 1 ] :\varvec{X} ( s ) > u \varvec{b} \end{array} \right\} \end{aligned}

and $${\Sigma }_1'$$ is defined by the same formula as $${\Sigma }_1$$, but with $$N_{\delta } - 1$$ instead of $$N_{\delta }$$ in the upper limit of summation. At this point we split the proof into two parts. First, we will focus on finding the exact asymptotics of the single sum $${\Sigma }_1 \sim {\Sigma }_1'$$, and then demonstrate that the double sum $${\Sigma }_2$$ is negligible with respect to $${\Sigma }_1$$.

### Single sum

Let $$\min$$ and $$\max$$ applied to a matrix denote component-wise minimum and maximum and let $$J$$ denote a $$d \times d$$ matrix of all ones: $$J_{kj} = 1$$. Take $$\varepsilon > 0$$ and for each $$l$$ define two matrices, which bound $$B_l ( t )$$ on $$\delta [ k, k + 1 ]$$ component-wise from below and from above by

$$B_{l, k, \varepsilon , + } := \min _{t \in \delta [k, k + 1]} B_l ( t ) - \varepsilon J, \qquad B_{l, k, \varepsilon , - } := \max _{t \in \delta [k, k + 1]} B_l ( t ) + \varepsilon J.$$

Since for all $$t \in [0, T]$$ and all $$l$$ we have $$\widetilde{B_l ( t )} \vartriangleright 0$$ strictly, it follows that $$\widetilde{B_{l, k, \, \varepsilon , \, \pm }} \vartriangleright 0$$ if $$\varepsilon$$ is small enough. Denote

$$\mathbb {B}_{k, \varepsilon , \pm } := ( B_{l, k, \varepsilon , \pm } )_{l = 1, \ldots , n}.$$

By Lemma 2 the real matrix-valued functions $$\mathcal {E}_{\alpha _l, \, B_{l, k, \, \varepsilon , \, \pm }} ( s_l )$$ are positive definite and give rise to the following bounds on the covariance of $$\varvec{X}$$:

$$\sum _{l = 1}^n \mathcal {E}_{\alpha _l, \, B_{l, k, \, \varepsilon , \, - }} ( s_l ) \le R ( t + s, t ) \le \sum _{l = 1}^n \mathcal {E}_{\alpha _l, \, B_{l, k, \, \varepsilon , \, + }} ( s_l )$$

for small enough $$s$$. These functions generate two stationary Gaussian processes $$\varvec{Y}_{l, k, \varepsilon , \pm } ( s ), \, s \in \mathbb {R}$$, which by Lemma 4 provide us with bounds on the high excursion probabilities on $$\delta [ k, k + 1 ]$$:

\begin{aligned} \mathbb {P} \left\{ \exists \, t \in \delta [ k, k + 1 ] :\varvec{X} ( t )> u \varvec{b} \right\}&\le \mathbb {P} \left\{ \exists \, t \in \delta [ k, k + 1 ] :\sum _{l = 1}^n \varvec{Y}_{l, k, \varepsilon , - } ( t )> u \varvec{b} \right\} , \\ \mathbb {P} \left\{ \exists \, t \in \delta [ k, k + 1 ] :\varvec{X} ( t )> u \varvec{b} \right\}&\ge \mathbb {P} \left\{ \exists \, t \in \delta [ k, k + 1 ] :\sum _{l = 1}^n \varvec{Y}_{l, k, \varepsilon , + } ( t ) > u \varvec{b} \right\} . \end{aligned}

Note that the minus sign appears in the upper bound and the plus sign in the lower bound.

Applying Theorem 1, we find that

$$\mathbb {P} \left\{ \exists \, t \in \delta [ k, k + 1 ] :\sum _{l = 1}^n \varvec{Y}_{l, k, \varepsilon , \pm } ( t )> u \varvec{b} \right\} \sim \delta ^n \, \mathcal {H}_{\varvec{\alpha }, \, \mathbb {B}_{k, \varepsilon , \pm }, \, \varvec{w}} \, \prod _{l = 1}^n u^{-2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\} .$$

By adding together all the terms, we obtain

$$\left[ \, \sum _{k = 1}^{N_\delta - 1} \mathcal {H}_{\varvec{\alpha }, \mathbb {B}_{k, \varepsilon , + }, \, \varvec{w}} \, \delta ^n \right] \prod _{l = 1}^n u^{-2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 )> u \, \varvec{b} \right\} \le {\Sigma }_1' \le {\Sigma }_1 \le \left[ \, \sum _{k = 1}^{N_\delta } \mathcal {H}_{\varvec{\alpha }, \mathbb {B}_{k, \varepsilon , - }, \, \varvec{w}} \, \delta ^n \right] \prod _{l = 1}^n u^{-2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \, \varvec{b} \right\} .$$

By continuity of $$B \mapsto \mathcal {H}_{\varvec{\alpha }, B, \varvec{w}}$$, we have that

$$\lim _{\varepsilon \rightarrow 0} \lim _{\delta \rightarrow 0} \sum _{k = 1}^{N_\delta } \mathcal {H}_{\varvec{\alpha }, \mathbb {B}_{k, \varepsilon , \pm }, \, \varvec{w}} \, \delta ^n = \int _0^T \mathcal {H}_{\varvec{\alpha }, \mathbb {B} ( t ), \varvec{w}} \mathop {d t}.$$

Hence, letting first $$\delta \rightarrow 0$$ and then $$\varepsilon \rightarrow 0$$, we obtain, as $$u \rightarrow \infty$$,

$${\Sigma }_1' \sim {\Sigma }_1 \sim \left[ \int _0^T \mathcal {H}_{\varvec{\alpha }, \mathbb {B} ( t ), \varvec{w}} \mathop {dt} \right] \prod _{l = 1}^n u^{-2/\alpha _l} \, \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \, \varvec{b} \right\} .$$

### Double sum

The double sum can be estimated by the same argument as in the proof of Theorem 1.$$\square$$