Abstract
Recently, the notion of implicit extreme value distributions has been established, based on a given loss function f ≥ 0. From an application point of view, one is more interested in extreme loss events that occur relative to f than in the corresponding extreme values themselves. In this context, so-called f-implicit α-Fréchet max-stable distributions arise and have been used to construct independently scattered sup-measures that possess such margins. In this paper we solve an open problem in Goldbach (2016) by developing a stochastic integral of a deterministic function g ≥ 0 with respect to implicit max-stable sup-measures. The resulting theory covers the construction of max-stable extremal integrals (see Stoev and Taqqu, Extremes 8, 237–266 (2005)) and, at the same time, reveals striking parallels.
References
Biermé, H., Meerschaert, M. M., Scheffler, H. -P.: Operator scaling stable random fields. Stoch. Process. Appl. 117(3), 312–332 (2007)
Billingsley, P.: Probability and measure. Wiley, New York (2008)
Bogachev, V. I.: Measure theory, vol. 2. Springer Science & Business Media (2007)
de Fondeville, R., Davison, A. C.: High-dimensional peaks-over-threshold inference. Biometrika 105(3), 575–592 (2018)
Dombry, C., Ribatet, M.: Functional regular variations, Pareto processes and peaks over threshold. Stat. Interface 8(1), 9–17 (2015)
Dudley, R. M.: Real analysis and probability, vol. 74. Cambridge University Press (2002)
Elstrodt, J.: Maß- und Integrationstheorie. Springer, Berlin (2006)
Goldbach, J.: A new approach to multivariate extreme value theory: f-implicit max-infinitely divisible distributions and f-implicit max-stable processes. PhD thesis, University of Siegen (2016)
Klenke, A.: Probability theory: a comprehensive course. Springer Science & Business Media (2013)
Kremer, D., Scheffler, H. -P.: Multivariate stochastic integrals with respect to independently scattered random measures on δ-rings. Publ. Math. Debr. 95(1-2), 39–66 (2019)
Li, Y., Xiao, Y.: Multivariate operator-self-similar random fields. Stoch. Process. Appl. 121(6), 1178–1200 (2011)
Rajput, B. S., Rosinski, J.: Spectral representations of infinitely divisible processes. Probab. Theory Relat. Fields 82(3), 451–487 (1989)
Resnick, S. I.: Extreme values, regular variation and point processes. Springer, Berlin (2013)
Samoradnitsky, G., Taqqu, M. S.: Stable non-Gaussian random processes: stochastic models with infinite variance. CRC Press (1994)
Scheffler, H. -P., Stoev, S.: Implicit extremes and implicit max–stable laws. Extremes 20(2), 265–299 (2017)
Stoev, S. A., Taqqu, M. S.: Extremal stochastic integrals: a parallel between max-stable processes and α-stable processes. Extremes 8(4), 237–266 (2005)
Acknowledgments
The author would like to emphasize that this paper is inspired by the fundamental results in Goldbach (2016), which my former colleague Johannes Goldbach developed during his PhD time under the supervision of Hans-Peter Scheffler. Moreover, particular thanks are due to Marco Oesting for many fruitful discussions that were particularly helpful in the context of Lemma 3.3. Finally, the author would like to thank two anonymous referees for their very detailed suggestions, which helped to improve the paper. For instance, their remarks stimulated the examination of (4.11) as well as parts of Section 5.
Funding
Open Access funding provided by Projekt DEAL.
Appendix: Some proofs and auxiliary results
Proof of Proposition 3.6
Obviously, we can always assume that ∅ is not an element of the occurring partitions. This allows us to define \({\upbeta }_{j}^{(n)}:={\min \limits } \{h_{2,n}(s): s \in A_{j}^{(n)}\}\) and, in view of gn ≤ h2,n, we obtain that \(\alpha _{j}^{(n)} \le {\upbeta }_{j}^{(n)}\) for every \(n \in \mathbb {N}\) and 1 ≤ j ≤ kn. If we let \(h_{n} \sim (A_{j}^{(n)},{\upbeta }_{j}^{(n)})_{j=1,\ldots ,k_{n}}\) together with
it follows for every \(n \in \mathbb {N}\) that \(f(X_{n}^{*}) \le f(Y_{n}^{*}) \) a.s. Also note that the sequence (hn) is increasing, since the same holds true for (h2,n) by assumption. In particular, we deduce that \(f(Y_{n}^{*})\) is increasing due to part (b) of Remark 3.5. Let \(Y^{*}:= \sup _{n \in \mathbb {N}} f(Y_{n}^{*})\) and verify that h1,n ≤ gn ≤ hn ≤ h2,n. Then Proposition 3.2.4 (together with (1.3.2)) in Goldbach (2016) states that
However, Remark 2.5 and Proposition 2.7 in Stoev and Taqqu (2005) imply that the increasing sequences f(I(h1,n)) and f(I(h2,n)) have the same limit a.s., say Y. It follows that \(f(Y_{n}) \rightarrow Y\) a.s. Also note that Y ≥ Y∗ and that \(0<Y<\infty \) a.s., provided that ∥g∥α > 0 (otherwise we conclude that gn = 0 m-a.e. and (3.5) is true anyway).
The next step is to prove that Y − Y∗ > 0 holds true a.s. Assume, to the contrary, that there exists a set \(B \in \mathcal {A}\) with \(p:=\mathbb {P} (B) >0\) and Y (ω)/Y∗(ω) = 1 for every ω ∈ B. Then we obtain that \(f(Y_{n})/f(Y_{n}^{*}) \rightarrow 1\) a.s. on B (and particularly in probability). Hence, for \(\gamma , \gamma ^{\prime }>0\) arbitrary, it follows that
which also implies that \(\mathbb {P} (f(Y_{n}) \le (1 + \gamma ) f(Y_{n}^{*}) ) \ge p -\gamma ^{\prime }\) for those n. Observe that this gives a contradiction to Lemma 3.3, when choosing \(0<\gamma ^{\prime }<p+ (1+\gamma )^{-\alpha }-1\), which is always possible as long as we have that p = 1 or 0 < γ < (1 − p)− 1/α − 1, respectively.
Fix ε > 0. By what we have just seen there exist some \(0< \delta ^{\prime }<1\) and a set \(A_{1} \in \mathcal {A}\) with \(\mathbb {P} (A_{1}) \ge 1- \varepsilon /2\), fulfilling the relation \(Y\ge (1+ \delta ^{\prime }) Y^{*} \) on A1. In a similar way and using that f(I(h1,n)) ↑ Y a.s. (see above), we obtain some \(N \in \mathbb {N}\) and a further set \(A_{2} \in \mathcal {A}\) with \(\mathbb {P}(A_{2}) \ge 1- \varepsilon /2\) and such that \(f(I(h_{1,n}))(\omega ) /Y(\omega ) \ge 1- \delta ^{\prime } / 2\) holds true for every ω ∈ A2 and n ≥ N. Let A := A1 ∩ A2 and observe that \(\mathbb {P}(A) \ge 1- \varepsilon \). Finally, recall (5.12) and that \(f(X_{n}^{*}) \le f(Y_{n}^{*}) \le Y^{*}\). Then, for n ≥ N, the following computation is valid on A, where we can assume that \(f(X_{n}^{*})>0\) (else (3.5) is true anyway again):
Letting \(\delta :=(1+\delta ^{\prime })(1-\delta ^{\prime } /2) -1 >0\) for instance, this gives the assertion. □
Lemma 6.1 Let \(h : E \rightarrow \mathbb {R}_{+}\) be measurable and assume that (hn) is a sequence of simple functions with hn ≤ h and such that hn converges to h uniformly on E. Then there exist further sequences (h1,n), (h2,n) of simple functions with h1,n ≤ hn ≤ h2,n and such that hi,n ↑ h for i = 1, 2 as \(n \rightarrow \infty \).
Proof
By assumption we can find a strictly increasing sequence of naturals (Nl)l such that, for any n ≥ Nl and s ∈ E, we have h(s) − hn(s) ≤ 1/l. In case n < N1, let h1,n := 0. Otherwise we define
Now it is easy to verify that this yields a sequence (h1,n) of simple functions as desired. For the second sequence, we can simply choose \(h_{2,n}:= {\max \limits } \{h_{1},...,h_{n} \} \) for every \(n \in \mathbb {N}\). □
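Although the corresponding display is omitted above, one natural choice for \((h_{1,n})\) that is consistent with the stated properties can be sketched as follows (this is an illustration under the stated assumptions, not necessarily the construction used in the original proof):

```latex
% For N_l \le n < N_{l+1}, one may set
\[
  h_{1,n} \;:=\; \max_{1 \le i \le l} \bigl( h_{N_i} - \tfrac{1}{i} \bigr)^{+} .
\]
% Each h_{1,n} is simple and the sequence is increasing in n. Moreover,
% since n \ge N_i gives h_n \ge h - 1/i, while h_{N_i} \le h, we obtain
% (h_{N_i} - 1/i)^+ \le (h - 1/i)^+ \le h_n, hence h_{1,n} \le h_n.
% Finally, h_{N_l} \ge h - 1/l yields h - h_{1,n} \le 2/l on \{h \ge 2/l\}
% and h - h_{1,n} \le h < 2/l elsewhere, so that h_{1,n} \uparrow h.
```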
Proof of Proposition 3.8
Since the measure m is σ-finite, we can use Egorov’s theorem (see Chapter VI, Exercise 3.1 in Elstrodt (2006)) to obtain increasing sets \(E_{1},E_{2},... \in \mathcal {E}\) with \(m(E \setminus \bigcup _{l=1}^{\infty } E_{l})=0\) and such that, for any \(l \in \mathbb {N}\), the convergence \(g_{n} \rightarrow g\) holds uniformly on \(E_{l}\) as \(n \rightarrow \infty \). Using the σ-finiteness of m again and by a slight abuse of notation, we can even assume that \((E_{l}) \subset \mathcal {E}_{0}\). Moreover, note that the proof of the present statement is obvious in case ∥g∥α = 0. Hence, without loss of generality, we can even assume that
Fix \(l \in \mathbb {N}\) and consider the sequence restricted to \(E_{l}\), which is still simple. Denote by \(\mathcal {P}_{n}^{\prime }\) a partition of its n-th element for every \(n \in \mathbb {N}\) and define \(\mathcal {P}_{1}= \mathcal {P}_{1}^{\prime }\). Then, using the construction from Remark 3.5 (a), we obtain a common partition for the first two elements, denoted by \(\mathcal {P}_{2}\), which, in addition, fulfills \(\mathcal {P}_{1}, \mathcal {P}_{2}^{\prime } \le \mathcal {P}_{2}\). Based on \(\mathcal {P}_{2}\) and \(\mathcal {P}_{3}^{\prime }\), we do the same to obtain \(\mathcal {P}_{3}\). Inductively, this gives a sequence \((\mathcal {P}_{n})\) of partitions such that, on the one hand, we have \(\mathcal {P}_{n-1}, \mathcal {P}_{n}^{\prime } \le \mathcal {P}_{n}\). On the other hand, consecutive elements can both be represented by using the common partition \(\mathcal {P}_{n}\) for every n ≥ 2. In particular, if we assume that \(\mathcal {P}_{n}\) consists of \(A_{1}^{(n)},...,A_{k_{n}}^{(n)} \in \mathcal {E}_{0} \setminus \{\emptyset \}\) (which is always possible, see the proof of Proposition 3.6 above), there exist \(\alpha _{1}^{(n)},..., \alpha _{k_{n}}^{(n)} \ge 0\) such that we have
At the same time, whenever m > n, the previous construction also allows us to find suitable coefficients \({\upbeta }_{m,1}^{(n)},..., {\upbeta }_{m,k_{m}}^{(n)} \ge 0\), which only depend on \(\alpha _{1}^{(n)},\ldots ,\alpha _{k_{n}}^{(n)}\) and which fulfill
Based on (5.13), we define
for every \(n \in \mathbb {N}\). In view of Lemma 1.1 we can apply Proposition 3.6 to Xn and \(X_{n}^{*}\) in this case. Hence, for fixed ε > 0, there exist a set \(A_{0} \in \mathcal {A}\) with \(\mathbb {P}(A_{0}) \ge 1-\varepsilon /3\) as well as some δ > 0 and \(N_{0} \in \mathbb {N}\) fulfilling
The fundamental idea is to use Lemma 3.7 now. However, its assumptions are not fulfilled yet. As a way out, recall the proof of Proposition 3.6 and that, in a very similar way, f(Xn) converges to a random variable Y a.s. In addition, Proposition 2.7 in Stoev and Taqqu (2005) applies in this context. Moreover, we have that Y > 0 a.s. If we combine both results, there exist a set \(A_{1} \in \mathcal {A}\) with \(\mathbb {P}(A_{1}) \ge 1-\varepsilon /3 \) as well as some τ > 0 and \(N_{1} \in \mathbb {N}\) such that
Let j0 = j0(n,ω) be the (random) index fulfilling \(X_{n}(\omega )=\alpha ^{(n)}_{j_{0}} M(A_{j_{0}}^{(n)})(\omega )\). Here, without loss of generality, we can assume that \(A_{j}^{(n)} \subset E_{l}\) for every \(n \in \mathbb {N}\) and 1 ≤ j ≤ kn. Using Proposition 3.2.4 in Goldbach (2016) again, this implies for those j and n that
where \(f(M(E_{l})) \sim {\Phi }_{\alpha } (m(E_{l})^{1/ \alpha })\) due to (2.5). Note that \(m(E_{l})< \infty \), since \(E_{l} \in \mathcal {E}_{0}\). Hence, there finally exist a set \(A_{2} \in \mathcal {A}\) with \(\mathbb {P}(A_{2}) \ge 1- \varepsilon /3\) and some K > 0 such that f(M(El))(ω) ≤ K for every ω ∈ A2. Let A := A0 ∩ A1 ∩ A2 and observe that \(\mathbb {P}(A) \ge 1- \varepsilon \). Moreover, for any ω ∈ A and \(n \ge N:= {\max \limits } \{N_{0},N_{1}\}\), we obtain that \(\alpha _{j_{0}}^{(n)} \ge \tau /K\). Let
Then it is clear that Xn = I(gn) and \(I(\widetilde {g_{n}})\) coincide for every n ≥ N on A. In addition, if we introduce
relation (5.15) can be preserved. More precisely, for every n ≥ N and ω ∈ A, we have that
On the other hand, since the convergence holds uniformly, we can choose some \(N^{\prime } \ge N\) such that, for every \(m, n \ge N^{\prime }\) and s ∈ E, the estimate
is valid with C being defined as in Lemma 3.7. Moreover, we claim that
holds true. Recall that ε > 0 was arbitrary. Hence, it is well known that (5.20) would imply that the sequence is Cauchy with respect to convergence in probability (see Corollary 6.15 in Klenke (2013) for instance) and would therefore complete the proof. In order to prove (5.20), let us assume that \(m>n \ge N^{\prime }\) are fixed naturals. Since \(\mathbb {P}(A) \ge 1- \varepsilon \), it suffices to verify the corresponding estimate for every ω ∈ A. For this purpose, we additionally fix ω ∈ A and recall the representation according to (5.17). At the same time, we can also use another representation, which is given by (5.14). More precisely, we have a representation in which \({\upbeta }_{m,1}^{(n)},...,{\upbeta }_{m,k_{m}}^{(n)} \ge 0\) are appropriate coefficients. Recall that \(\emptyset \notin \mathcal {P}_{m}\). Hence, by definition of Jm and in view of (5.19), we obtain for every j ∈ {1,...,km} ∖ Jm the estimate
Similarly to (5.17), the previous observation suggests to consider the truncation
and to conclude a corresponding identity. At this point, we neglect the fact that the truncation could vary on a \(\mathbb {P} \)-null set by using the representation from (5.14) now. Anyway, let us summarize that the equality
holds true. Then, a similar calculation as performed in (5.21), using (5.19) and the reverse triangle inequality, ensures that \({\upbeta }_{m,j}^{(n)}, \alpha _{j}^{(m)} \ge \frac {\tau }{4K}\) for every j ∈ Jm. Hence, if we let
and recall (5.16), we can use Lemma 3.7 together with (5.18) and (5.19) again to conclude that (5.22) is smaller than ε. As justified before already, this gives the assertion. □
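For the reader's convenience, the completeness fact behind the final Cauchy argument (Corollary 6.15 in Klenke (2013)) can be summarized as follows; the display is a standard background remark rather than part of the proof itself:

```latex
% Convergence in probability on (\Omega, \mathcal{A}, \mathbb{P}) is metrized by
\[
  d(X,Y) \;:=\; \mathbb{E}\bigl[\, |X - Y| \wedge 1 \,\bigr],
\]
% and the space of (equivalence classes of) random variables L^0(\mathbb{P})
% is complete with respect to d. In particular, if for every \varepsilon > 0
% there is some N such that
% \mathbb{P}(|X_m - X_n| > \varepsilon) \le \varepsilon for all m, n \ge N,
% then (X_n) converges in probability to some random variable X.
```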
Proof of Lemma 3.9
Consider \(n,l \in \mathbb {N}\). Proposition 3.2.4 in Goldbach (2016) reveals that the random vectors are independent and that
Recalling (1.3), we see that both statements are equivalent in this case and that (3.6) would follow if we can prove that
On the one hand, this shows that we can restrict to the non-degenerate case (which particularly implies that ∥g∥α > 0). On the other hand, a similar computation as performed in (3.1) yields
Hence, instead of (5.24) it suffices to show that
For this purpose, observe that we have \(\| {g_{n} }\|_{\alpha }^{\alpha } \rightarrow \| {g}\|_{\alpha }^{\alpha }>0\) (as \(n \rightarrow \infty )\) by the dominated convergence theorem. Conversely, the remaining term vanishes (for every \(n \in \mathbb {N}\)) as \(l \rightarrow \infty \), since \({E_{l}^{c}} \downarrow \) with \(m(\cap _{l=1}^{\infty } {E_{l}^{c}})=0\) and since \(g \in L^{\alpha }_{+}(m)\). This implies (5.25). □
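The two convergence statements used in the last step can be made explicit; the following display merely spells out the standard arguments (assuming, as in the constructions above, that \(g_n \le g\) with \(g_n \rightarrow g\) m-a.e.):

```latex
% Dominated convergence (with dominating function g^\alpha \in L^1(m)):
\[
  \|g_n\|_\alpha^\alpha \;=\; \int_E g_n^\alpha \, dm
  \;\longrightarrow\; \int_E g^\alpha \, dm \;=\; \|g\|_\alpha^\alpha
  \qquad (n \to \infty).
\]
% Continuity from above of the finite measure B \mapsto \int_B g^\alpha \, dm
% together with E_l^c \downarrow and m(\cap_l E_l^c) = 0 gives
\[
  \int_{E_l^c} g^\alpha \, dm \;\downarrow\;
  \int_{\bigcap_{l} E_l^c} g^\alpha \, dm \;=\; 0
  \qquad (l \to \infty).
\]
```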
Proof of Lemma 4.1
Recall (1.3) and the beginning of the proof of Lemma 3.9 above. Then, letting
we have to show that \(\mathbb {P} (A)=0\). Since g1 ≤ g2, there exist sequences (g1,n) and (g2,n) of simple functions such that g1,n ≤ g2,n and gi,n ↑ gi for i = 1, 2 as \(n \rightarrow \infty \). Moreover, Remark 3.5 allows us to find a common sequence of partitions (each not containing ∅, see above) for g1,n and g2,n, which, in addition, is consistent. More precisely, let us assume that
respectively. In view of g1,n ≤ g2,n and \(A_{j}^{(n)} \ne \emptyset \), we necessarily have that \(\alpha _{j}^{(n)} \le {\upbeta }_{j}^{(n)}\) for every \(n \in \mathbb {N}\) and 1 ≤ j ≤ kn. Let
together with
Anyway, Theorem 3.10 states that I(gi,n) converges to I(gi) in probability and therefore, by passing to a suitable subsequence, a.s. Without loss of generality, we omit the consideration of this subsequence in the sequel and therefore obtain a set \(B \in \mathcal {A}\) such that \(\mathbb {P}(B)=1\) and
Now, if we assume that \(\mathbb {P} (A)=:p>0\), we can apply Proposition 3.6 to (Yn), providing a set \(C \in \mathcal {A}\) with \(\mathbb {P} (C) \ge 1-p/2\) as well as some δ > 0 and \(N \in \mathbb {N}\) fulfilling
Note that, for certain (random) indices j1 = j1(n,ω) and j2 = j2(n,ω), we can always write
Moreover, observe that \(\mathbb {P} (A \cap B \cap C) >0\). Then, for fixed ω ∈ A ∩ B ∩ C, we have to distinguish two cases. In the first case the indices j1 and j2 differ. Then, using (5.28) and \(\alpha _{j}^{(n)} \le {\upbeta }_{j}^{(n)}\), we obtain for every n ≥ N that
However, by definition of the set A and by using the continuity of f together with (5.27), we verify that \(f(X_{n})(\omega )/f(Y_{n})(\omega ) \rightarrow 1\). This means that (5.29) can only happen for finitely many n. Otherwise, the second case occurs, where j1 = j2. By the homogeneity of f this yields
Using similar arguments as before, it follows that \(f(X_{n}(\omega ) - Y_{n}(\omega ) ) \rightarrow 0\). However, in view of Lemma 3.1.14 in Goldbach (2016), this implies that \((X_{n}(\omega ) - Y_{n}(\omega )) \rightarrow 0\). Remembering that
we finally obtain that I(g1)(ω) = I(g2)(ω), which is a contradiction to the claim ω ∈ A. □
Proof of Lemma 4.4
Let \((g_{n}^{\prime })\) be a sequence of simple functions fulfilling \(g_{n}^{\prime } \uparrow g\), which particularly means that \(I(g)=\mathbb {P} \)-\(\lim _{n \rightarrow \infty } I(g_{n}^{\prime })\). As in the proof of Theorem 3.10, define a new sequence \((h_{\nu })_{\nu \in \mathbb {N}}\) that alternates between (gn) and \((g_{n}^{\prime })\). Again it follows that (I(hν)) converges in probability. Actually, this gives the assertion, since all subsequences yield the same limit. More precisely,
□
Proof of Lemma 4.5
Using homogeneity and the f-implicit monotonicity from Theorem 4.2, we first obtain that f(I(h2,n)) ≤ γnf(M(A)) a.s. (recall (4.1)), which shows that \(f(I(h_{2,n})) \rightarrow 0\) a.s. In view of Lemma 3.1.14 in Goldbach (2016) this also implies that \(I(h_{2,n}) \rightarrow 0\) a.s. Moreover, since I(h1,n ∨ h2,n) = I(h1,n) ∨fI(h2,n) a.s. due to (4.2), we merely need the a.s. continuity of the ∨f-operation. For this purpose, recall the proof of the f-implicit max-linearity above or use Lemma 1.1.9 in Goldbach (2016), respectively. □
Cite this article
Kremer, D. Implicit max-stable extremal integrals. Extremes 24, 1–35 (2021). https://doi.org/10.1007/s10687-020-00388-x
Keywords
- Implicit max-stable distributions
- Independently scattered random sup-measures
- Stochastic integrals
- Implicit max-stable processes