Abstract
In this paper, we study the asymptotic behaviour of high exceedance probabilities for a centered continuous \(\mathbb {R}^n\)-valued Gaussian random field \(\varvec{X}\) with covariance matrix satisfying \(\Sigma - R ( t + s, t ) \sim \sum _{l = 1}^n B_l ( t ) \, | s_l |^{\alpha _l}\) as \(s \downarrow 0\). Such processes occur naturally as time transformations of homogeneous random fields, and we present two asymptotic results of this nature as applications of our findings. The technical novelty of our proof consists in showing that the Slepian-Gordon inequality technique, essential in the univariate case, can also be applied successfully in the multivariate setup. This is noteworthy because the technique was previously believed to be inaccessible in this context.
1 Introduction
Despite the fact that Gaussian extremes have been an active research area since at least the 1960s, until recently little has been known about the exact asymptotics of high exceedance probabilities of Gaussian processes in the multivariate case. A deep contribution, Dȩbicki et al. (2020), has paved the way towards different problems of the following kind: finding the exact asymptotics, as \(u \rightarrow \infty\), of \(\mathbb {P} \left\{ \exists \, t \in [ 0, T ] :\varvec{X} ( t ) > u \varvec{b} \right\}\)
for \(\varvec{b} \in \mathbb {R}^d {\setminus } ( -\infty , 0 ]^d\) and \(\varvec{X}\) being a continuous Gaussian process. Here “>” denotes the componentwise (Hadamard) comparison. As it turns out, these problems are much more challenging than the univariate ones due to the lack of several techniques which are crucial in the univariate case. The reader can find a detailed account of this shortage in the introduction to the aforementioned paper. Among these lacking techniques, the authors name the Slepian inequality and mention that its extension, the Gordon inequality, is thought to be inapplicable if the components of \(\varvec{X}\) are not independent (see Dȩbicki et al. (2015) for the i.i.d. case).
In this contribution, we aim to achieve two goals. First, we extend (Dȩbicki et al. 2020, Theorem 2.1) on stationary processes to a certain class of homogeneous Gaussian random fields defined on \([ 0, T ]^n\), see Theorem 1. Second, we apply this result to the study of locally homogeneous Gaussian random fields. The corresponding result is presented in Theorem 2. The crucial step of the second part involves constructing two homogeneous processes which stochastically dominate \(\varvec{X}\) on short intervals from above and from below. This is done by showing that a certain matrix-valued function is positive definite and subsequently applying the Gordon inequality.
As an application of our findings, we present asymptotic formulas for the time-transformed operator fractional Ornstein-Uhlenbeck process \(\varvec{Y}\) defined by the covariance matrix function
with \(H\) a symmetric matrix with eigenvalues from \(( 0, 1 ]\) and \(\varphi\) a strictly monotone continuously differentiable function. By Proposition 1,
where \(h\) is the smallest eigenvalue of \(H\) and \(c\) is given in the form of an integral of Pickands-type constants over \([ 0, T ]\). This result extends (Dȩbicki et al. 2020, Proposition 3.1). Another application concerns a class of continuous Gaussian processes associated to the following matrix-valued function:
where \(B^{ \pm } = ( B \pm B^\top ) / 2\) are the symmetric and antisymmetric parts of a real \(d \times d\) matrix \(B\) and \(\alpha \in ( 0, 2 ]\). In Ievlev and Novikov (2023) we found necessary and sufficient conditions on the pair \(( \alpha , B )\) under which this function is positive definite (see Lemma 3) and thus generates a Gaussian process. Here we present an asymptotic result on the time-transformed version of this process, see Proposition 2.
The notion of a locally stationary process was introduced by Berman (1974), and its extremes were extensively studied afterwards in the papers by Hüsler (1990), Piterbarg (1996), Chan and Lai (2006) and many others. See also Piterbarg and Rodionov (2020), Qiao (2021) and Tan and Zheng (2020) for more recent contributions. Its multivariate counterpart, however, has not been considered so far due to technical issues. The technique of Dȩbicki et al. (2020), based on a uniform version of the local Pickands lemma, may in principle be applied to this class of processes, but it would require much stronger assumptions than those we impose in this contribution. Our result, presented in Theorem 2, should appear natural (if not obvious) to the specialist, but it still requires a rigorous proof, which involves imposing the right assumptions on the field \(\varvec{X}\).
The applicability of the Gordon inequality in this context allows one to significantly simplify the study of classical multivariate Gaussian extremes. In particular, the technical issue of uniformity in the single and double sums may be resolved by passing to a stationary dominating process. Therefore, besides the results themselves, we establish a simpler methodology than that of Dȩbicki et al. (2020) for dealing with non-stationary Gaussian random fields.
We want to point out that one possible direction in which our results can be extended is the family of \(\alpha ( t )\)-locally stationary Gaussian random fields, see Hashorva and Ji (2016).
Brief organization of the paper
The main results are presented in Section 2, with proofs relegated to Section 5. The applications are presented in Section 3. Section 4 contains auxiliary results and technical lemmas. The Appendix contains several known results taken from Dȩbicki et al. (2020) and reproduced here, in adapted form, for the reader’s convenience.
2 Main results
Before proceeding to the theorems, let us introduce some relevant notation.
Vectors
Throughout the paper, points of \(\mathbb {R}^d\) (values of multivariate processes) are written in bold letters, while points of \([0, T]^n \subset \mathbb {R}^n\) (points of their domain) are written in regular font. This does not lead to any confusion, since the meaning can always be understood from the context, but it allows us to avoid visual clutter. All operations on vectors in both spaces, unless specified otherwise, are performed component-wise. For example, if \(t\) and \(s\) belong to \(\mathbb {R}^n\), then \(t s\) denotes the vector \(( t_i s_i )_{i = 1, \ldots , n}\). Similarly, \(t / s\), \(e^t\), \(\lfloor {t} \rfloor\) and so on denote the vectors with components \(t_i / s_i\), \(e^{t_i}\) and \(\lfloor {t_i}\rfloor\), respectively. We write \(t \ge s\) if \(t_i \ge s_i\) for all coordinates \(i\). By abuse of notation, we write \(1 = ( 1, \ldots , 1 ) \in \mathbb {R}^n\) and \(0 = ( 0, \ldots , 0 ) \in \mathbb {R}^n\). If \(s > t\), then \([t, s]\) denotes the box \(\{ u :u_i \in [t_i, s_i] \}\).
Matrices
If \(A = ( A_{ij} )_{i, j = 1, \ldots , d}\) is a \(d \times d\) matrix and \(I, \, J \subset \{ 1, \ldots , d \}\) are two index sets, we write \(A_{IJ}\) for the submatrix \(( A_{ij} )_{i \in I, \, j \in J}\). If \(I = J\), we occasionally write \(A_I\) instead of \(A_{II}\). \(\left\| A \right\|\) denotes any fixed norm in the space of \(d \times d\) matrices. Our formulas do not depend on the choice of the norm. For \(\varvec{w} \in \mathbb {R}^d\), \({{\,\textrm{diag}\,}}( \varvec{w} )\) stands for the diagonal matrix with entries \(w_1, \, w_2, \, \ldots , \, w_d\) on the main diagonal. The notation \(A \unrhd 0\) means that \(A\) is positive definite and \(A \vartriangleright 0\) means that \(A\) is strictly positive definite. If \(A\) is a real matrix, denote its symmetric and anti-symmetric parts by \(A^{ \pm }:= ( A \pm A^\top ) / 2\).
Quadratic programming problem
Let \(\Sigma\) be a non-singular \(d \times d\) real matrix with inverse \(\Sigma ^{-1}\). If \(\varvec{b} \in \mathbb {R}^d {\setminus } ( -\infty , 0 ]^d\), then by Lemma 7 the quadratic programming problem \(\Pi _{\Sigma } ( \varvec{b} )\), that of minimising \(\varvec{x}^\top \Sigma ^{-1} \varvec{x}\) over \(\varvec{x} \ge \varvec{b}\),
has a unique solution \(\widetilde{\varvec{b}} \ge \varvec{b}\) and there exists a unique non-empty index set \(I \subset \{ 1, \ldots , d \}\) such that
where \(\varvec{w}:= \Sigma ^{-1} \, \widetilde{\varvec{b}}\) and \(J = \{ 1, \ldots , d \} {\setminus } I\).
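The following minimal sketch (not part of the paper; the helper name is ours) illustrates how \(\widetilde{\varvec{b}}\), \(\varvec{w}\) and the index set \(I\) can be computed numerically, assuming the formulation of \(\Pi _{\Sigma } ( \varvec{b} )\) as minimisation of \(\varvec{x}^\top \Sigma ^{-1} \varvec{x}\) over \(\varvec{x} \ge \varvec{b}\) described above.

```python
# Illustrative sketch only: solve the quadratic programming problem
# Pi_Sigma(b): minimise x^T Sigma^{-1} x subject to x >= b (componentwise).
# Not code from the paper.
import numpy as np
from scipy.optimize import minimize

def solve_pi_sigma(Sigma, b):
    Sigma_inv = np.linalg.inv(Sigma)
    objective = lambda x: x @ Sigma_inv @ x
    # Componentwise constraint x >= b, written as x - b >= 0 for SLSQP.
    cons = {"type": "ineq", "fun": lambda x: x - b}
    res = minimize(objective, x0=np.maximum(b, 1.0), constraints=[cons])
    b_tilde = res.x
    w = Sigma_inv @ b_tilde
    # Active index set I: components where the constraint is (numerically) tight.
    I = np.where(np.isclose(b_tilde, b, atol=1e-5))[0]
    return b_tilde, w, I

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
b_tilde, w, I = solve_pi_sigma(Sigma, b)
print(b_tilde, w, I)   # expect b_tilde >= b, w_I > 0 and w_J = 0
```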
Other notation
We use lower-case constants \(c_1, \, c_2, \, \ldots\) to denote generic constants appearing in the proofs, whose exact values are not important and may change from line to line. The labeling of the constants starts anew in every proof. Let \(f, \, g :[ 0, T ]^n \rightarrow M\), where \(M = \mathbb {R}^{d \times d}, \, \mathbb {R}^d\) or \(\mathbb {R}\), be two matrix-valued, vector-valued or real-valued functions and let \(h :[ 0, T ]^n \rightarrow \mathbb {R}\) be a real-valued function. We write “\(f = g + o ( h )\) as \(t \rightarrow t_0\)” if for all \(\varepsilon > 0\) there exists \(\delta > 0\) such that \(| t - t_0 | < \delta\) implies \(\Vert f ( t ) - g ( t ) \Vert \le \varepsilon | h ( t ) |\). The next two subsections present our results on homogeneous and locally homogeneous fields.
2.1 Homogeneous case
Let \(\varvec{X} ( t ), \, t \in [ 0, T ]^n\) be a centered homogeneous and continuous Gaussian random field. Denote its covariance and variance matrices by
Homogeneity means that for each \(t\) and \(s\) in \([ 0, T ]^n\)
therefore we set in the following \(R ( t ):= R ( t, 0 )\). It follows that \(R ( -t ) = R^\top ( t )\). The matrix \(\Sigma - R ( t )\) is positive definite, but not necessarily symmetric. Let \(\varvec{b} \in \mathbb {R}^d {\setminus } ( -\infty , 0 ]^d\) and denote by \(\widetilde{\varvec{b}}\) and \(I\) the unique solution of \(\Pi _{\Sigma } ( \varvec{b} )\) and its index set, see Lemma 7 for details. Set \(\varvec{w}:= \Sigma ^{-1} \, \widetilde{\varvec{b}}\).
In this section we impose the following assumptions:
- A1: \(\Sigma _{II} - R_{II} ( t )\) is strictly positive definite for every \(t \in ( 0, T ]\).
- A2: There exist a collection \(\mathbb {B}:= ( B_l )_{l = 1, \ldots , n}\) of real \(d \times d\) matrices and a collection of numbers \(\varvec{\alpha }:= ( \alpha _l )_{l = 1, \ldots , n} \in ( 0, 2 ]^n\) such that
$$\Sigma - R ( t )=\sum _{l = 1}^n B_l \, | t_l |^{\alpha _l} +o \left( \sum _{l = 1}^n | t_l |^{\alpha _l} \right) \quad \text {as} \quad t \downarrow 0, \qquad \text {(A2.1)}$$
$$\varvec{w}^\top B_l \, \varvec{w} > 0 \quad \text {for all} \quad l = 1, \ldots , n. \qquad \text {(A2.2)}$$
Remark 1
It follows from (A2.1) that
as \(t \rightarrow 0\), and the \(B_l\)’s satisfy
Consequently, \(B_l^{ + } \unrhd 0\).
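As a toy illustration of Assumption A2 (our own example, not taken from the paper), consider the homogeneous field with covariance \(R ( t ) = e^{-\sum _{l} | t_l |^{\alpha _l}} \, \Sigma\); then \(\Sigma - R ( t ) = ( 1 - e^{-\sum _l | t_l |^{\alpha _l}} ) \Sigma\), so (A2.1) holds with \(B_l = \Sigma\), and (A2.2) reduces to \(\varvec{w}^\top \Sigma \, \varvec{w} > 0\). The short numerical check below is a sketch of this expansion.

```python
# Toy check of assumption (A2.1) for R(t) = exp(-sum_l |t_l|^alpha_l) * Sigma,
# in which case B_l = Sigma. Illustrative example only, not from the paper.
import numpy as np

Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
alpha = np.array([1.0, 1.5])          # alpha_l in (0, 2]

def remainder(t):
    """Return (|| Sigma - R(t) - sum_l Sigma |t_l|^alpha_l ||, sum_l |t_l|^alpha_l)."""
    s = np.sum(np.abs(t) ** alpha)
    R_t = np.exp(-s) * Sigma
    approx = sum(Sigma * np.abs(t[l]) ** alpha[l] for l in range(len(t)))
    return np.linalg.norm(Sigma - R_t - approx), s

for scale in [1e-1, 1e-2, 1e-3]:
    err, s = remainder(np.array([scale, scale]))
    print(scale, err / s)   # ratio tends to 0: the error is o(sum_l |t_l|^alpha_l)
```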
Theorem 1
If \(\varvec{X}\) is a centered homogeneous and continuous Gaussian random field satisfying Assumptions A1 and A2, then
where the constant \(\mathcal {H}_{\varvec{\alpha }, \mathbb {B}}\) is given by
Here \(\varvec{Y}_l\) is a continuous Gaussian process associated to the covariance function
2.2 Locally homogeneous case
In this section \(\varvec{X} ( t ), \, t \in [0, T]^n\) is a centered continuous Gaussian random field with covariance matrix
and variance matrix \(\Sigma\) satisfying \(R ( t, t ) = R ( 0, 0 ) =: \Sigma\). We impose the following assumptions:
- B1: \(\Sigma _{II} - R_{II} ( t )\) is strictly positive definite for every \(t \in ( 0, T ]\).
- B2: There exist a collection \(\mathbb {B} ( t ):= ( B_l ( t ) )_{l = 1, \ldots , n}\) of continuous real \(d \times d\) matrix-valued functions and a collection of numbers \(\varvec{\alpha }:= ( \alpha _l )_{l = 1, \ldots , n} \in ( 0, 2 ]^n\) such that
$$\Sigma - R ( t + s, t ) = \sum _{l = 1}^n \Big [ B_l ( t ) \, | s_l |^{\alpha _l} \, \mathbb {1}_{s_l \ge 0} +B_l^\top ( t ) \, | s_l |^{\alpha _l} \, \mathbb {1}_{s_l < 0} \Big ] +o \left( \sum _{l = 1}^n | s_l |^{\alpha _l} \right) \quad \text {as} \quad s \downarrow 0, \qquad \text {(B2.1)}$$
where the small-\(o\) term is uniform in \(t \in [ 0, T ]^n\), and
$$\widetilde{B_l} ( t ) := B_l^{ + } ( t ) \cos \left( \frac{\pi \alpha _l}{2} \right) -i B_l^{ - } ( t ) \sin \left( \frac{\pi \alpha _l}{2} \right) \vartriangleright 0 \quad \text {for all} \quad t \in [ 0, T ]^n. \qquad \text {(B2.2)}$$
Remark 2
From (B2.2) it follows that \(\varvec{w}^\top \, B_l ( t ) \, \varvec{w} > 0\) for all \(t \in [ 0, T ]^n\).
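Condition (B2.2) is straightforward to check numerically for a given pair \(( \alpha _l, B_l ( t ) )\): the matrix \(\widetilde{B_l} ( t )\) is Hermitian by construction, so strict positive definiteness amounts to all of its eigenvalues being positive. A minimal sketch (not from the paper; the test matrix is arbitrary):

```python
# Numerical check of condition (B2.2): B~ = B^+ cos(pi*alpha/2) - i B^- sin(pi*alpha/2)
# is Hermitian and must be strictly positive definite. Illustrative sketch only.
import numpy as np

def check_B2_2(B, alpha):
    B_plus = (B + B.T) / 2           # symmetric part
    B_minus = (B - B.T) / 2          # antisymmetric part
    B_tilde = (B_plus * np.cos(np.pi * alpha / 2)
               - 1j * B_minus * np.sin(np.pi * alpha / 2))
    assert np.allclose(B_tilde, B_tilde.conj().T)   # Hermitian by construction
    eigs = np.linalg.eigvalsh(B_tilde)              # real eigenvalues of a Hermitian matrix
    return bool(np.all(eigs > 0)), eigs

B = np.array([[2.0, 1.0], [-1.0, 2.0]])              # arbitrary test matrix
print(check_B2_2(B, alpha=0.5))
```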
Theorem 2
If \(\varvec{X}\) is a centered and continuous Gaussian random field satisfying Assumptions B1 and B2, then
where the constant \(\mathcal {H}_{\varvec{\alpha }, \mathbb {B}}\) is given by (2).
3 Examples
3.1 Time-transformed operator fractional Ornstein-Uhlenbeck process
Let \(H\) be a symmetric matrix with all eigenvalues \(h_1, \ldots , h_d\) belonging to \(( 0, 1 ]\) and consider a stationary a.s. continuous \(\mathbb {R}^d\)-valued Gaussian process \(\varvec{X} ( t ), \, t \ge 0\) with covariance matrix function
where \(t^H = \exp ( H \ln t )\) for \(t > 0\). This process is known in the literature as the operator fractional Ornstein-Uhlenbeck process. In this section we consider its time-transformed version. Specifically, let \(\varphi\) be a continuously differentiable strictly monotone function. Define \(\varvec{Y} ( t ):= \varvec{X} ( \varphi ( t ) )\). Let us show that this process is locally stationary in the sense defined above. Since \(H\) is symmetric, there exists an orthogonal matrix \(Q\) such that \(H = Q \, {{\,\textrm{diag}\,}}( h_1, \ldots , h_d ) \, Q^\top\). Hence,
with \(h:= \min _{i = 1, \ldots , d} h_i\) and \([ \widetilde{I} \; ]_{ij}:= \mathbb {1}_{i = j \, \text {and} \, h = h_i}\). Since \(\varphi\) is differentiable, we have
Then (B2) holds with \(B ( t ):= Q \widetilde{I} Q^\top \left| \varphi ' ( t ) \right| ^{2\,h}\) and \(\Sigma = I\). Note that \(| \varphi ' ( t ) | > 0\) since \(\varphi\) is strictly monotone. By Theorem 2 we have the following result:
Proposition 1
Let \(\varvec{Y} ( t ) = \varvec{X} ( \varphi ( t ) ), \, t \in [0, T]\), where \(\varphi\) is a continuously differentiable strictly monotone function and \(\varvec{X} ( t ), \, t \in \mathbb {R}\) is an operator fO-U process associated to the covariance (3) with a symmetric matrix \(H\) whose eigenvalues belong to \(( 0, 1 ]\). Let \(\widetilde{b}_j = \max \{ b_j, 0 \}\) for \(j = 1, \ldots , d\). If \(\widetilde{\varvec{b}}^\top Q \widetilde{I} Q^\top \widetilde{\varvec{b}} > 0\), then
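The matrix power \(t^H = \exp ( H \ln t )\) and the decomposition \(H = Q \, {{\,\textrm{diag}\,}}( h_1, \ldots , h_d ) \, Q^\top\) used in the derivation above are easy to compute explicitly. The following sketch (illustrative only; the matrix \(H\) is an arbitrary admissible example) verifies numerically that \(t^H = Q \, {{\,\textrm{diag}\,}}( t^{h_1}, \ldots , t^{h_d} ) \, Q^\top\) and extracts \(h\) and the projector \(Q \widetilde{I} Q^\top\) entering \(B ( t )\).

```python
# Sketch: compute t^H = exp(H ln t) for a symmetric H and check that it equals
# Q diag(t^{h_1},...,t^{h_d}) Q^T where H = Q diag(h_1,...,h_d) Q^T.
# Illustrative only; H below is an arbitrary admissible example.
import numpy as np
from scipy.linalg import expm

H = np.array([[0.6, 0.2], [0.2, 0.9]])   # symmetric, eigenvalues 0.5 and 1.0 in (0, 1]
h, Q = np.linalg.eigh(H)                 # H = Q diag(h) Q^T
t = 2.5

t_pow_H = expm(H * np.log(t))            # t^H = exp(H ln t)
via_eig = Q @ np.diag(t ** h) @ Q.T
print(np.allclose(t_pow_H, via_eig))     # True

# Smallest eigenvalue h and the projector Q I~ Q^T appearing in B(t).
h_min = h.min()
I_tilde = np.diag(np.isclose(h, h_min).astype(float))
print(h_min, Q @ I_tilde @ Q.T)
```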
3.2 A Gaussian process with \(\alpha\)-homogeneous log-covariance
In the companion paper Ievlev and Novikov (2023), we show the following result:
Theorem 3
Let \(B\) be a real \(d \times d\) matrix. If a matrix-valued function \(R\) defined by
is positive-definite, then the condition (1) is satisfied. If, on the other hand, the condition (1) is satisfied, then:
- If \(\alpha \in ( 0, 1 )\), then \(R\) is positive-definite if and only if \(B\) satisfies
$$B^{1/\alpha } + B^{1/\alpha , \top } \unrhd 0. \qquad \text {(5)}$$
- If \(\alpha \in [ 1, 2 ]\), then \(R\) is positive-definite.
Using the above result, let \(\varvec{X} ( t ), \, t \in \mathbb {R}\), be a stationary continuous Gaussian process associated to this covariance and let \(\varphi\) be a strictly increasing continuously differentiable function. Define \(\varvec{Y} ( t ):= \varvec{X} ( \varphi ( t ) )\). The covariance of \(\varvec{Y}\) satisfies
where we used the fact that \({\text {sign}} ( \varphi ( t + s ) - \varphi ( t ) ) = {\text {sign}} ( s )\) since \(\varphi\) is increasing. Hence, the assumption B2.1 is satisfied with \(B ( t ) = B \left| \varphi ' ( t ) \right| ^{\alpha }\). The validity of B2.2 follows from the fact that \(| \varphi ' ( t ) | > 0\) and our assumption on \(B\). By Theorem 2, we have the following result:
Proposition 2
Let \(\varvec{Y} ( t ) = \varvec{X} ( \varphi ( t ) ), \, t \in [0, T]\), where \(\varphi\) is a strictly increasing continuously differentiable function and \(\varvec{X}\) is a process associated to the covariance (4), where \(B\) and \(\alpha\) are such that this function is positive definite. Then
as \(u \rightarrow \infty\).
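Condition (5) of Theorem 3 can also be checked numerically for a concrete pair \(( \alpha , B )\). The sketch below (not from the paper) reads \(B^{1/\alpha }\) as the principal matrix power, which is an assumption on our part, and tests \(B^{1/\alpha } + B^{1/\alpha , \top } \unrhd 0\) through its eigenvalues.

```python
# Sketch: numerical check of condition (5), B^{1/alpha} + (B^{1/alpha})^T >= 0,
# for alpha in (0,1), assuming B^{1/alpha} denotes the principal matrix power.
# Illustrative only; B below is an arbitrary test matrix.
import numpy as np
from scipy.linalg import fractional_matrix_power

def check_condition_5(B, alpha):
    P = fractional_matrix_power(B, 1.0 / alpha)
    if np.iscomplexobj(P) and not np.allclose(P.imag, 0):
        return False, None                       # real principal power not well defined here
    P = np.real(P)
    eigs = np.linalg.eigvalsh(P + P.T)
    return bool(np.all(eigs >= -1e-10)), eigs    # nonnegative eigenvalues of the symmetric part

B = np.array([[2.0, 0.5], [-0.5, 2.0]])
print(check_condition_5(B, alpha=0.4))
```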
4 Auxiliary results
4.1 Lemma on positive definiteness
Lemma 1
Let \(B\) be a real \(d \times d\) matrix satisfying
Then there exists a collection of complex numbers \(\{ \lambda _k \}_{k = 1, \ldots , d}\) satisfying
and a collection of strictly positive definite Hermitian matrices \(\{ V_k \}_{k = 1, \ldots , d}\) of rank one such that
Proof
Note that \(B\) can be represented as follows:
Here \(B_{ + }\) is symmetric and strictly positive definite by (6) and \(B_{ - }'\) is Hermitian. Hence, there exists an invertible real matrix \(A\) such that \(B_{ + } = A A^\top\). Note that for each unitary matrix \(Q\) it holds that
Since \(B_{ - }'\) is Hermitian, so is \(A^{-1} B_{ - }' A^{-\top }\) and therefore there exists a unitary matrix \(Q\) and a real diagonal matrix \(D\) such that
Denote \(V:= A Q^{ * }\). Therefore, we have the following representations of \(B_{ + }\)
and \(B_{ - }'\)
Hence, for \(B\) we have
Set next
where \([ \mathcal {D}_k ]_{ml} = \delta _{km} \delta _{kl}\) is the diagonal matrix with \(1\) at the \(k\)-th place. Clearly, the \(V_k\)’s are Hermitian, positive definite, of rank one, and (8) is satisfied. It remains to show that the inequality (7) is also satisfied. To this end, use (9) and (10) to rewrite \(\widetilde{B}\) as
Therefore, we have
which implies (7).\(\square\)
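A numerical sketch of this construction is given below. The convention for the Hermitian matrix \(B_{ - }'\) (taken here as \(i B^{ - }\)) and the resulting signs are our own guesses, since the corresponding displays are not reproduced above; the sketch should therefore be read as an illustration of the decomposition of \(B\) into rank-one Hermitian pieces, not as a verbatim transcription of the proof.

```python
# Sketch of the construction in Lemma 1 (conventions are ours and may differ from
# the paper's): decompose a real B with B^+ strictly positive definite into
# rank-one Hermitian pieces, B = sum_k lambda_k V_k. Illustrative only.
import numpy as np

B = np.array([[2.0, 1.0], [-1.0, 2.0]])
B_plus = (B + B.T) / 2
B_minus = (B - B.T) / 2
B_minus_h = 1j * B_minus                  # Hermitian version of the antisymmetric part (our convention)

A = np.linalg.cholesky(B_plus)            # B^+ = A A^T with A invertible
Ainv = np.linalg.inv(A)
M = Ainv @ B_minus_h @ Ainv.T             # Hermitian, so M = Q diag(d) Q* with d real
d, Q = np.linalg.eigh(M)
V = A @ Q                                 # columns v_k of V give the rank-one pieces

lam = 1 - 1j * d                          # chosen so that B^+ - i B_minus_h reassembles B
Vk = [np.outer(V[:, k], V[:, k].conj()) for k in range(B.shape[0])]
B_rebuilt = sum(l * P for l, P in zip(lam, Vk))
print(np.allclose(B_rebuilt, B))          # True: B = sum_k lambda_k V_k
```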
Lemma 2
Under the conditions of Lemma 1, the functions given by
with \(\lambda _k\), \(V_k\) and \(\alpha\) from Lemma 1 are all positive definite complex matrix-valued functions. Let \(\Sigma = A A^\top\) be a strictly positive definite matrix and define
Then \(\mathcal {E}_{\alpha , B} ( t )\) is a positive definite real matrix-valued function satisfying
Proof
Since \(V_k = V^{ * } \, \mathcal {D}_k \, V\) by (11), there exists \(\mu _k > 0\) and a unitary matrix \(U\) such that \(V_k = \mu _k \, U^{ * } \, \mathcal {D}_k \, U\). Hence,
Positive definiteness of this function is therefore equivalent to that of a scalar-valued function
which follows from (7). The second claim follows from (8) and the fact that
by a direct computation.\(\square\)
4.2 Double sum bound
Define for \(k \in \mathbb {Z}^d {\setminus } \{ 0 \}\) and \(\Lambda > 0\) the double events’ probabilities by
Lemma 3
(Double sum bound). If \(\varvec{X} ( t ), \, t \in [ 0, T ]^n\) is a centered continuous Gaussian field satisfying Assumption A2, then there exist positive constants \(C\) and \(\varepsilon\) such that for every \(k \in \mathbb {Z}^d {\setminus } \{ 0 \}\) with \(1 < | k_l | \le N_u ( \varepsilon )\) for all \(l\) and every \(\Lambda > 0\) it holds that
Remark 3
Note that the conditions of the lemma demand that there be no \(l\)’s such that \(k_l = \pm 1\). This is not a coincidence: the adjacent double events are to be estimated differently. See the proof of Theorem 1 for details.
Proof
Without loss of generality assume that \(I = \{ 1, \ldots , n \}\). Then
where
with
and \(\varphi _{u, k}\) is the pdf of \(\varvec{X}_{u, k} ( 0, 0 ) {\mathop {=}\limits ^{d}} N ( 0, \Sigma _{u, k} )\), where
First, bound \(\varphi _{u, k}\) as follows:
where \(\varphi\) is the pdf of \(N ( 0, \Sigma )\). Plugging this into (12) and noting that \(u^{-d} \, \varphi ( u \varvec{b} ) = \mathbb {P} \left\{ \varvec{X} ( 0 ) > u \varvec{b} \right\}\), we obtain the following bound:
At this point we split the proof into three parts: estimation of the integral, estimation of the exponent in front of it and their comparison.
The exponent in front of the integral
By (13), we have
Therefore,
By our assumptions,
The integral
First note that
where the small-o term tends to zero uniformly in \(k\). We will drop this term from now on to simplify the notation. To bound the remaining integral we will use Lemma 8, which gives
with some positive constants \(c_1\) and \(c_2\). Here \(G \in \mathbb {R}\) and \(\sigma ^2 > 0\) are numbers (depending on \(k\) and \(u\)) such that
and
To apply this lemma we need to find such numbers.
Finding \(G\)
By the formulas on conditional Gaussian distribution, we have
where \(R_{u, k} ( t, s, t', s' )\) is the covariance of \(\varvec{\chi }_{u, k, \varvec{x}} ( t, s )\). Note that this covariance does not depend on \(\varvec{x}\). The \(\varvec{x}\)-term can clearly be bounded by
Let us bound the \(\varvec{b}\)-contribution. A direct computation gives
uniformly in \(k \in [-N_u ( \varepsilon ), N_u ( \varepsilon )]\). By (15)
uniformly in \(k \in [-N_u ( \varepsilon ), N_u ( \varepsilon )]\), where
The first term, \(A_{1, l}\), can be bounded as follows:
\(A_{2, l}\) and \(A_{3, l}\) can be bounded for \(k_l \ne 0\) similarly as follows:
Therefore, the inequality (18) is satisfied with
Finding \(\sigma ^2\)
We have
where
where
Similarly to how we bounded differences of this form above, we obtain
Hence, the inequality (18) is satisfied with
as \(u \rightarrow \infty\).
Proceeding with the integral
Combining (21) and (22) with (17), we find
If \(| k_l |\) is large enough, we have
Lifting the assumption that \(|k_l|\) is large
Let \(K\) be such that for \(| k_l | \ge K\) holds
It suffices to consider the case when some of the \(k_l\)’s satisfy \(1< | k_l | < K\). Assume for simplicity that there is exactly one \(l\) such that \(| k_l | < K\), take \(\Lambda ' > 0\) such that \(\Lambda ' < \Lambda\) and bound \(P_{\varvec{b}}\) as follows:
Here \(1_l \in \mathbb {Z}^d\) such that \([ 1_l ]_{l'} = \delta _{l, l'}\). Choose \(\Lambda ':= \Lambda ( | k_l | - 1 ) / K\). Then
and therefore
It remains to note that the number of terms in the sum (24) is at most \(\lceil {\Lambda / \Lambda '}\rceil^2 \le 2 K^2 / ( | k_l | - 1 )^2\).
Lifting the assumption that all \(k_l\)’s are non-zero
Similarly to the previous point of the proof, take \(\Lambda ' \in ( 0, \Lambda )\) and assume for simplicity that there is only one \(l\) such that \(k_l = 0\). Note that
A similar proof to what we used above shows that each term of this sum is at most
The number of terms in the sum (27) is at most \(\lceil {\Lambda / \Lambda '} \rceil\), hence
where \(c_9 = 2 c_8 \exp ( c_7 \Lambda '^{\alpha '} ) / \Lambda '\). The general case, when there are several \(l\)’s such that \(k_l = 0\), can be addressed similarly.\(\square\)
5 Proofs of the main results
5.1 Proof of Theorem 1
Proof
We begin the proof by splitting \([0, T]^n\) into pieces of Pickands scale
and using the Bonferroni inequality to obtain
where
and \({\Sigma }_1'\) is defined by the same formula as \({\Sigma }_1\) but with \(N - 1\) instead of \(N\) in the upper summation limit. At this point we split the proof into two parts. First, we will focus on finding the exact asymptotics of the single sum \({\Sigma }_1 \sim {\Sigma }_1'\), and then demonstrate that the double sum \({\Sigma }_2\) is negligible with respect to \({\Sigma }_1\).
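For the reader’s convenience, the two-sided Bonferroni bound invoked here has the schematic form (our own paraphrase, with \(A_k\) denoting the exceedance event \(\{ \exists \, t \in \Delta _k :\varvec{X} ( t ) > u \varvec{b} \}\) on the \(k\)-th cell \(\Delta _k\) of the grid):
$$\sum _k \mathbb {P} \{ A_k \} - \sum _{k \ne j} \mathbb {P} \{ A_k \cap A_j \} \le \mathbb {P} \Big \{ \bigcup _k A_k \Big \} \le \sum _k \mathbb {P} \{ A_k \}.$$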
Since \(\varvec{X}\) is homogeneous, we can easily compute the single sum
Applying local Pickands Lemma 5, we obtain
Since \(E \mapsto \mathcal {H}_{\varvec{\alpha }, \mathbb {B}, \varvec{w}} ( E )\) is subadditive, we have that the limit
exists and is finite. We will show that it is also positive after dealing with the double sum.
Double sum
By stationarity we have that
Reindexing the sum by \(q = k - j\), we obtain
Denote the double events’ probabilities by
Take some \(\varepsilon \in ( 0, T )\) and divide the sum into two parts:
Terms of the first sum can be bounded as follows:
Let \(\Sigma ( t, s )\) denote the variance matrix of \(( \varvec{X} ( t ) + \varvec{X} ( s ) ) / 2\):
In view of Assumption A1, the matrix \(( \Sigma _{II} ( t, s ) )^{-1} - ( \Sigma _{II} )^{-1}\) is strictly positive definite for \(t \ne s\), which implies
Note that the condition \(\exists \, l :| q_l | > N_u ( \varepsilon )\) allows us to separate \(\delta ( u, \varepsilon ):= \tau - \tau _0\) from \(0\) by \(\delta ( \varepsilon ):= \tau _1 - \tau _0 > 0\), which depends on \(\varepsilon\), but does not depend on \(u\). Since \(\tau _0 = \varvec{b}_I^\top ( \Sigma _{II} )^{-1} \varvec{b}_I = \varvec{b}^\top \Sigma ^{-1} \varvec{b}\), we obtain by using the Piterbarg inequality (34) the following upper bound:
which is negligible with respect to \(\mathbb {P} \{ \varvec{X} ( 0 ) > u \varvec{b} \}\) as \(u \rightarrow \infty\). Summing these bounds, we obtain
To bound the second sum in (28), we divide it further into
The probabilities of the second sum can be estimated by Lemma 3 as follows:
and therefore
Next, we show how to bound the first sum. Assume for simplicity that \(q\) is such that \(| q_l | = 1\) and \(| q_{l'} | \ne 1\) for all \(l' \ne l\). We have
The first probability on the right satisfies the conditions of Lemma 3, and therefore
Therefore, we obtain
For \(A_4\), we have by Lemma 5
Consequently, we have
The general case of \(q_{\mathcal {I}} \in \{ \pm 1 \}\) for \(\mathcal {I} \subset \{ 1, \ldots , n \}\) can be addressed similarly.
Positivity of the Pickands constant
To show that the constant is positive we can use the following lower bound:
where \(\widetilde{{\Sigma }}_1\) and \(\widetilde{{\Sigma }}_2\) are the single and double sum with some \(\Lambda '\) instead of \(\Lambda\) and without odd (in all coordinates) intervals:
and \(\widetilde{N}_u ( \varepsilon ) = \lfloor {\varepsilon / 2 \Lambda ' u^{-2/\alpha }} \rfloor\). By the same reasoning as above,
and
Taking \(\Lambda '\) to be large enough, we find that the difference in (30) is separated from zero. Hence, its limit is positive.\(\square\)
5.2 Proof of Theorem 2
Proof
We begin the proof by splitting \([0, T]\) into intervals of some small enough length \(\delta > 0\)
and applying the Bonferroni inequality, which yields
where
and \({\Sigma }_1'\) is defined by the same formula as \({\Sigma }_1\), but with \(( N - 1 )\) instead of \(N\) in the upper limit of summation. At this point we split the proof into two parts. First, we will focus on finding the exact asymptotics of the single sum \({\Sigma }_1 \sim {\Sigma }_1'\), and then demonstrate that the double sum \({\Sigma }_2\) is negligible with respect to \({\Sigma }_1\).
Single sum
Let \(\min\) and \(\max\) applied to a matrix denote the component-wise minimum and maximum, and let \(J\) denote the \(d \times d\) matrix of all ones: \(J_{kj} = 1\). Take \(\varepsilon > 0\) and for each \(l\) define two matrices which bound \(B_l ( t )\) on \(\delta [ k, k + 1 ]\) component-wise from below and from above by
Since \(\widetilde{B_l} ( t ) \vartriangleright 0\) for all \(t \in [0, T]\), it follows that \(\widetilde{B_{k, \, \varepsilon , \, \pm }} \vartriangleright 0\) if \(\varepsilon\) is small enough. Denote
By Lemma 2 the real matrix-valued functions \(\mathcal {E}_{\alpha _l, \, B_{l, k, \, \varepsilon , \, \pm }} ( s_l )\) are positive definite and give rise to the following bounds on the covariance of \(\varvec{X}\):
for small enough \(s\). These functions generate two stationary Gaussian processes \(\varvec{Y}_{l, k, \varepsilon , \pm } ( s ), \, s \in \mathbb {R}\), which by Lemma 4 provide us with bounds on the high excursion probabilities on \(\delta [ k, k + 1 ]\):
Note that the plus sign appears on the left and the minus sign on the right.
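The bracketing step above is purely finite-dimensional and easy to sketch numerically (illustrative only; the function \(B_l ( \cdot )\), the grid step \(\delta\) and \(\varepsilon\) below are hypothetical stand-ins, and a single index \(l\) is considered):

```python
# Sketch of the bracketing step: on each cell [k*delta, (k+1)*delta] replace B_l(t)
# by componentwise lower/upper bounds (min - eps*J and max + eps*J) and check that
# the associated B~ matrices stay strictly positive definite. Illustrative only.
import numpy as np

alpha = 0.5
delta = 0.1
eps = 0.01
J = np.ones((2, 2))

def B_of_t(t):                              # hypothetical continuous B_l(t)
    return np.array([[2.0 + t, 0.5], [-0.5, 2.0 + 0.5 * t]])

def B_tilde(B):
    Bp, Bm = (B + B.T) / 2, (B - B.T) / 2
    return Bp * np.cos(np.pi * alpha / 2) - 1j * Bm * np.sin(np.pi * alpha / 2)

def bracket(k):
    ts = np.linspace(k * delta, (k + 1) * delta, 50)
    stack = np.stack([B_of_t(t) for t in ts])
    B_lower = stack.min(axis=0) - eps * J    # componentwise lower bound
    B_upper = stack.max(axis=0) + eps * J    # componentwise upper bound
    ok = all(np.all(np.linalg.eigvalsh(B_tilde(M)) > 0) for M in (B_lower, B_upper))
    return B_lower, B_upper, ok

print(bracket(k=3)[2])   # True if both bracketing matrices satisfy (B2.2)
```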
Applying Theorem 1, we find that
By adding together all the terms, we obtain
By continuity of \(B \mapsto \mathcal {H}_{\alpha , B, \varvec{b}}\), we have that
Hence, as \(u \rightarrow \infty\),
Double sum
The double sum can be estimated by the same argument as in the proof of Theorem 1.\(\square\)
Data availability
Not applicable.
References
Berman, S.M.: Sojourns and extremes of Gaussian processes. Ann. Probab. 999–1026, MR 0372976 (1974)
Chan, H.P., Lai, T.L.: Maxima of asymptotically Gaussian random fields and moderate deviation approximations to boundary crossing probabilities of sums of random variables with multidimensional indices. Ann. Probab. 34(1), 80–121, MR 2206343 (2006)
Dȩbicki, K., Hashorva, E., Ji, L., Tabiś, K.: Extremes of vector-valued Gaussian processes: exact asymptotics. Stochastic Process. Appl. 125(11), 4039–4065, MR 3385594 (2015)
Dȩbicki, K., Hashorva, E., Wang, L.: Extremes of vector-valued Gaussian processes. Stochastic Process. Appl. 130(9), 5802–5837, MR 4127347 (2020)
Hashorva, E., Ji, L.: Extremes of \(\alpha (t)\)-locally stationary Gaussian random fields. Trans. Am. Math. Soc. 368(1), 1–26, MR 3413855 (2016)
Hüsler, J.: Extreme values and high boundary crossings of locally stationary Gaussian processes. Ann. Probab. 1141–1158, MR 1062062 (1990)
Ievlev, P., Novikov, S.: A matrix-valued Schoenberg’s problem with applications to Gaussian processes. Electron. Commun. Probab. 28, 1–12 (2023). https://doi.org/10.1214/23-ECP562
Piterbarg, V.I.: Asymptotic methods in the theory of Gaussian processes and fields. Translations of Mathematical Monographs, vol. 148, American Mathematical Society, MR 1361884 (1996)
Piterbarg, V.I., Rodionov, I.V.: High excursions of Bessel and related random processes. Stochastic Process. Appl. 130(8), 4859–4872, MR 4108474 (2020)
Qiao, W.: Extremes of locally stationary Gaussian and chi fields on manifolds. Stochastic Process. Appl. 133, 166–192, MR 4192351 (2021)
Tan, Z., Zheng, S.: Extremes of a type of locally stationary Gaussian random fields with applications to Shepp statistics. J. Theor. Probab. 33(4), 2258–2279, MR 4166199 (2020)
Acknowledgements
The author wants to express his gratitude to Enkelejd Hashorva and Svyatoslav Novikov for thoughtful discussions and numerous suggestions that substantially improved the manuscript.
Funding
Open access funding provided by University of Lausanne. Financial support by SNSF Grant 200021-196888 is kindly acknowledged.
Contributions
Pavel Ievlev developed the theoretical framework, conducted the mathematical proofs and wrote the manuscript.
Ethics declarations
Ethical approval
This study is a purely mathematical research paper and does not involve human subjects, animal subjects, or ethical considerations. Therefore, no ethical approval was required for this research.
Conflict of interest
The author declares that there are no conflicts of interest regarding this research study.
Appendix
1.1 Gordon inequality
The following Slepian-type lemma is stated in Dȩbicki et al. (2015) for the case where \(T \subset \mathbb {R}\), but it can be extended to the following version by standard techniques. Due to the complexity of the argument, we present it here without proof.
Lemma 4
(Gordon inequality). Let \(\varvec{X} ( t ), \, t \in T\) and \(\varvec{Y} ( t ), \, t \in T\) be two centered separable vector-valued Gaussian processes with values in \(\mathbb {R}^d\) defined on a separable metric space \(T\). If for all \(t, \, s \in T\) it holds that
then for every \(\varvec{u} \in \mathbb {R}^d\) it holds that
1.2 Local Pickands lemma
The reader may find the uniform multivariate version of the local Pickands lemma in Dȩbicki et al. (2020). However, for the needs of this paper this strong result is not necessary, since we obtain uniformity using Gordon’s inequality (Lemma 4). This is why we present here a simplified version of the local Pickands lemma.
Lemma 5
Let \(\varvec{X} ( t ), \, t \in [0, T]^n\) be a centered Hölder continuous homogeneous Gaussian random field with values in \(\mathbb {R}^d\) and covariance \(R\) satisfying
where \(B_l\)’s are some \(d \times d\) real matrices and \(\alpha _l \in ( 0, 2 ]\). Denote \(\alpha := ( \alpha _l )_{l = 1, \ldots , n}\), \(B:= ( B_l )_{l = 1, \ldots , n}\) and \(\varvec{w}:= \Sigma ^{-1} \, \widetilde{\varvec{b}}\), where \(\widetilde{\varvec{b}}\) is the unique solution of the quadratic programming problem \(\Pi _{\Sigma } ( \varvec{b} )\). Then the matrix-valued functions \(\mathcal {R}_{\alpha _l, B_l} :\mathbb {R} \rightarrow \mathbb {R}^{d \times d}\) defined by
are positive definite, and for any closed \(E \subset [ 0, T ]\) containing \(0\) it holds that
with
where \(\varvec{Y}_l\) is a continuous zero mean Gaussian process associated to the covariance function \(\mathcal {R}_{\alpha _l, B_l} ( t, s )\).
1.3 Borell-TIS and Piterbarg inequalities
Lemma 6
Let \(( \varvec{Z} ( t ) )_{t \in E}\), \(E \subset \mathbb {R}^k\) be a separable centered \(d\)-dimensional vector-valued Gaussian random field having components with a.s. continuous paths. Assume that \(\Sigma ( t ) = \mathbb {E} \left\{ \varvec{Z} ( t ) \, \varvec{Z} ( t )^\top \right\}\) is non-singular for all \(t \in E\). Let \(\varvec{b} \in \mathbb {R}^d {\setminus } (-\infty , 0]^d\) and define \(\sigma _{\varvec{b}}^2 ( t )\) by
If \(\sigma _{\varvec{b}}^2:= \sup _{t \in E} \sigma _{\varvec{b}}^2 ( t ) \in ( 0, \infty )\), then there exists some positive constant \(\mu\) such that for all \(u > \mu\)
If further for some \(C \in ( 0, \infty )\) and \(\varvec{\gamma } \in ( 0, 2 ]^k\)
and
hold for all \(t, \, s \in E\), then for all \(u\) positive
where \(C_{ * }\) is some positive constant not depending on \(u\). In particular, if \(\sigma _{\varvec{b}}^2 ( t )\) is continuous and achieves its unique maximum at some fixed point \(t_{ * } \in E\), then (34) is still valid if (32) and (33) are assumed to hold only for all \(t, \, s \in E\) in an open neighborhood of \(t_{ * }\).
1.4 Quadratic programming problem
For a given non-singular \(d \times d\) real matrix \(\Sigma\) we consider the quadratic programming problem
Below \(J = \{ 1, \ldots , d \} {\setminus } I\) can be empty; the claim in (37) is formulated under the assumption that \(J\) is non-empty.
Lemma 7
Let \(d \ge 2\) and let \(\Sigma\) be a \(d \times d\) symmetric positive definite matrix with inverse \(\Sigma ^{-1}\). If \(\varvec{b} \in \mathbb {R}^d {\setminus } (-\infty , 0]^d\), then \(\Pi _{\Sigma } ( \varvec{b} )\) has a unique solution \(\widetilde{\varvec{b}}\) and there exists a unique non-empty index set \(I \subset \{ 1, \ldots , d \}\) with \(m \le d\) elements such that
with \(\varvec{w} = \Sigma ^{-1} \, \widetilde{\varvec{b}}\) satisfying \(\varvec{w}_I = ( \Sigma _{II} )^{-1} \varvec{b}_I > \varvec{0}_I\), \(\varvec{w}_J = \varvec{0}_J\).
1.5 Integral estimate
Lemma 8
If a family of Hölder continuous random fields \(\varvec{\chi }_{\varvec{x}} ( t )\), \(t \in [ 0, \Lambda ]^n\) indexed by \(\varvec{x} \in \mathbb {R}^d\) and jointly measurable with respect to \(( t, \varvec{x})\) satisfies
and
with some constants \(\varvec{w} > \varvec{0}\), \(\sigma ^2 > 0\), \(G \in \mathbb {R}\) and small enough \(\varepsilon > 0\), then there exist constants \(C, \, c > 0\) independent of \(\Lambda\), such that the following inequality holds:
Proof of Lemma 8
Define a collection of sets \(\Omega _F = \left\{ \varvec{x} \in \mathbb {R}^d :\varvec{x}_F > \varvec{0}, \ \varvec{x}_{F^c} < \varvec{0} \right\}\) indexed by \(F \subset \{ 1, \ldots , d \}\) and split the integral:
For \(\varvec{x} \in \Omega _F\) the probability under the integral may be bounded as follows:
where
Let us split the domain \(\Omega _{F}\) into two parts
Let us first deal with the integral over \(\Omega _{F,-}\). It follows from \(\varvec{w}_F^\top \, \varvec{x}_F - \varepsilon \sum _{j = 1}^d | x_j | < G\) that
or, with \(w_{ * } = \min _{j \in F} w_j > 0\) and \(\varepsilon < w_{ * }\),
Therefore, with \(r = r_{F, \varepsilon } ( \varvec{x} )\), we have
provided that \(\varepsilon\) is small enough. Bounding the probability under the integral by \(1\) and changing the variables, we obtain
Next, we concentrate on the integral over \(\Omega _{F,+}\). By the Piterbarg inequality (34), we have the following upper bound, uniform in \(\varvec{x} \in \Omega _{F, +}\):
Plugging this bound into the integral and changing the variables, we obtain
Note that with \(w^{ * } = \max _{i = 1, \dots , d} w_i\) we have
and it follows that for all \(\varepsilon < w^{ * }\) the following bound holds:
This bound yields
from which for small enough \(\varepsilon\) follows that \(\left( \varvec{w}, \varvec{x} \right) \le (1 + \varepsilon ') r,\) with \(\varepsilon ' = \varepsilon / (w^{ * } - \varepsilon )\). Hence,
where in the last step we used the Gaussian mgf formula \(\mathbb {E} \left\{ e^{t \mathcal {N} ( \mu , \sigma ^2 )} \right\} = e^{t \mu + t^2 \sigma ^2 / 2}\) with \(t = 1 + \varepsilon '\).\(\square\)