Abstract
Modelling of multiple simultaneous failures in insurance, finance and other areas of applied probability is important, especially from the point of view of pandemic-type events. A benchmark limiting model for the analysis of multiple failures is the classical d-dimensional Brownian risk model (Brm), see Delsing et al. (Methodol. Comput. Appl. Probab. 22(3), 927–948, 2020). From both a theoretical and a practical point of view, of interest is the calculation of the probability of multiple simultaneous failures in a given time horizon. The main findings of this contribution concern the approximation of the probability that at least k out of d components of the Brm fail simultaneously. We derive both sharp bounds and asymptotic approximations of the probability of interest for the finite and the infinite time horizon. Our results extend previous findings of Dȩbicki et al. (J. Appl. Probab. 57(2), 597–612, 2020) and Dȩbicki et al. (Stoch. Proc. Appl. 128(12), 4171–4206, 2018).
1 Introduction
In this paper we are interested in the probabilistic aspects of multiple simultaneous failures typically occurring due to pandemic-type events. A key benchmark risk model considered here is the d-dimensional Brownian risk model (Brm)
where c = (c1, … , cd)⊤, u = (u1, … , ud)⊤ are vectors in \(\mathbb {R}^{d}\) and
with Γ a d × d real-valued non-singular matrix and \(\boldsymbol B(t)=(B_{1}(t) ,\ldots , B_{d}(t))^{\top } , t \in \mathbb {R}\) a d-dimensional Brownian motion with independent components which are standard Brownian motions.
By bold symbols we denote column vectors, operations with vectors are meant component-wise and ax = (ax1, … , axd)⊤ for any scalar \(a\in \mathbb {R}\) and any \(\boldsymbol {x}\in \mathbb {R}^{d}\).
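To build intuition for the model, the Brm can be simulated directly. The sketch below is our own illustrative addition (function name and all parameter values are ours, not from the paper); it generates one discretised path of R(t) = u + ct − ΓB(t).

```python
import numpy as np

def simulate_brm(Gamma, c, u, T=1.0, n_steps=1000, rng=None):
    """Simulate one path of the Brownian risk model R(t) = u + c t - Gamma B(t)."""
    rng = np.random.default_rng(rng)
    d = Gamma.shape[0]
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # Independent standard Brownian increments, then correlate via Gamma.
    dB = rng.standard_normal((n_steps, d)) * np.sqrt(dt)
    B = np.vstack([np.zeros(d), np.cumsum(dB, axis=0)])
    W = B @ Gamma.T                      # W(t) = Gamma B(t)
    R = u + np.outer(t, c) - W           # component-wise risk processes
    return t, R

# Illustrative example with d = 2 and positively correlated components.
Gamma = np.array([[1.0, 0.0], [0.5, np.sqrt(1 - 0.25)]])
t, R = simulate_brm(Gamma, c=np.array([1.0, 1.0]), u=np.array([5.0, 5.0]), rng=0)
```

A failure of component i on the grid corresponds to `R[:, i] < 0` at some grid time.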
Indeed, Brm is a natural limiting model in many statistical applications. Moreover, as shown in Delsing et al. (2020), such a risk model appears naturally in insurance applications. Since Brm is a natural limiting model, it can be used as a benchmark for various complex models. Given the fundamental role of Brownian motion in applied probability and statistics, it is also of theoretical interest to study failure events arising from this model. Specifically, in this contribution we are interested in the behaviour of the probability of multiple simultaneous failures occurring in a given time horizon \([S,T] \subset [0, \infty ]\).
In our setting failures can be defined in various ways. Let us consider first the failure of a given component of our risk model. Namely, we say that the i th component of our Brm has a failure (or that ruin occurs) if Ri(t, ui)= ui+cit − Wi(t) < 0 for some t ∈ [S, T]. The extreme case of a catastrophic event is when d multiple simultaneous failures occur. Typically, for pandemic-type events there are at least k components of the model with simultaneous failures and k is large, with the extreme case k = d. In mathematical notation, for a given positive integer k ≤ d, of interest is the calculation of the following probability
where \(|\mathcal {I}|\) denotes the cardinality of the set \(\mathcal {I}\). If T is finite, by the self-similarity property of the Brownian motion ψk(S, T, u) can be derived from the case T = 1, whereas \(T=\infty \) has to be treated separately.
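For small and moderate thresholds, ψk(S, T, u) can be estimated by crude Monte Carlo on a time grid. The sketch below is our own (all names and parameter values are illustrative); note that grid discretisation only approximates the continuous-time failure event from below.

```python
import numpy as np

def mc_psi_k(Gamma, c, u, k, S=0.0, T=1.0, n_steps=400, n_sims=4000, rng=None):
    """Estimate P{exists t in [S,T]: at least k components of u + c t - W(t) < 0}."""
    rng = np.random.default_rng(rng)
    Gamma = np.asarray(Gamma, float)
    c, u = np.asarray(c, float), np.asarray(u, float)
    d = Gamma.shape[0]
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    mask = t >= S
    dB = rng.standard_normal((n_sims, n_steps, d)) * np.sqrt(dt)
    B = np.concatenate([np.zeros((n_sims, 1, d)), np.cumsum(dB, axis=1)], axis=1)
    W = B @ Gamma.T                                   # correlated Brownian paths
    R = u[None, None, :] + t[None, :, None] * c[None, None, :] - W
    # At each grid time in [S,T], count ruined components; a path counts as a
    # failure event if at least k components are negative simultaneously.
    ruined = (np.sum(R[:, mask, :] < 0.0, axis=2) >= k).any(axis=1)
    return ruined.mean()

# Sanity case: two independent driftless components, k = 1 (union of ruin events).
est = mc_psi_k(np.eye(2), c=np.zeros(2), u=np.array([1.0, 1.0]), k=1, rng=1)
```

For this sanity case the continuous-time value is 1 − (1 − 2Φ̄(1))² ≈ 0.53, which the grid estimate slightly undershoots.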
There are no results in the literature investigating ψk(S, T, u) for general k. The particular case k = d, for which ψd(S, T, u) coincides with the simultaneous ruin probability, has been studied in different contexts, see e.g., Lieshout and Mandjes (2007), Avram et al. (2008a), Avram et al. (2008b), Dȩbicki et al. (2018), Ji and Robert (2018), Foss et al. (2017), Pan and Borovkov (2019), Borovkov and Palmowski (2019), Ji (2020), Hu and Jiang (2013), Samorodnitsky and Sun (2016), and Dombry and Rabehasaina (2017). The case d = 2 of Brm has been recently investigated in Dȩbicki et al. (2020).
Although the probability of multiple simultaneous failures seems very difficult to compute, our first result below, motivated by Korshunov and Wang (2020)[Thm 1.1], shows that ψk(S, T, u) can be bounded by the multivariate Gaussian survival probability, namely by
where
When \(u\to \infty \) we can approximate pT(u) utilising the Laplace asymptotic method, see e.g., Korshunov et al. (2015), whereas for small and moderate values of u it can be calculated or simulated with sufficient accuracy. Our next result gives bounds for ψk(S, T, u) in terms of pT(u).
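The display defining pT(u) is built from orthant (survival) probabilities of the Gaussian vector W(T) ∼ N(0, TΣ). Such orthant terms can be computed numerically; the sketch below is our own, using the symmetry P{X > b} = P{X < −b} for a centered Gaussian vector X.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_survival(Sigma, b, T=1.0):
    """P{W(T) > b componentwise} for centered W(T) ~ N(0, T * Sigma).

    Uses the symmetry of the centered Gaussian law: P{X > b} = P{X < -b}."""
    b = np.asarray(b, dtype=float)
    mvn = multivariate_normal(mean=np.zeros(len(b)), cov=T * np.asarray(Sigma, float))
    return mvn.cdf(-b)

# Illustrative orthant term with independent components and b_i = u_i + c_i T.
p = gaussian_survival(np.eye(2), b=[1.0, 1.0])
```

For independent components this factorises, so p should equal Φ̄(1)² ≈ 0.0252.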
Theorem 1.1
If the matrix Γ is non-singular, then for any positive integer k ≤ d, all constants \( 0\leq S < T< \infty \) and all \(\boldsymbol {c},\boldsymbol {u}\in \mathbb {R}^{d}\)
where \(K= 1/\min \limits _{\substack {\mathcal {I}\subset \{1,\ldots , d\},|\mathcal {I}|=k}}\mathbb {P} \left \{ \forall _{i\in \mathcal {I}}: W_{i}(T)> \max \limits (0,c_{i} T) \right \} >0\).
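The constant K of Theorem 1.1 is explicit and, for moderate d, can be evaluated by enumerating the k-element index sets. A sketch (our own helper, assuming W(T) ∼ N(0, TΣ) with Σ = ΓΓ⊤):

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def constant_K(Sigma, c, k, T=1.0):
    """K = 1 / min over |I| = k of P{W_i(T) > max(0, c_i T) for all i in I}."""
    Sigma = np.asarray(Sigma, dtype=float)
    c = np.asarray(c, dtype=float)
    d = len(c)
    best = np.inf
    for I in itertools.combinations(range(d), k):
        I = list(I)
        b = np.maximum(0.0, c[I] * T)
        mvn = multivariate_normal(mean=np.zeros(k), cov=T * Sigma[np.ix_(I, I)])
        best = min(best, mvn.cdf(-b))    # P{W_I(T) > b} by Gaussian symmetry
    return 1.0 / best

# Two independent driftless components: P{W_i > 0} = 1/2, P{W_1 > 0, W_2 > 0} = 1/4.
K1 = constant_K(np.eye(2), [0.0, 0.0], k=1)   # equals 2
K12 = constant_K(np.eye(2), [0.0, 0.0], k=2)  # equals 4
```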
The bounds in Eq. 2 indicate that it might be possible to derive an approximation of ψk(S, T, u) for large threshold u, as has already been shown for k = d = 2 in Dȩbicki et al. (2020). In this paper we consider the general case k ≤ d, d > 2, discussing both the finite time interval (i.e., T = 1) and the infinite time horizon case \(T=\infty \), extending the results of Dȩbicki et al. (2018) where d = k is considered.
In Section 2 we explain the main ideas that lead to the approximation of ψk(S, T, u). Section 3 discusses some interesting special cases, whereas the proofs are postponed to Section 4. Some technical calculations are displayed in the Appendix.
2 Main results
In this section W(t), t ≥ 0 is as in the Introduction and for a given positive integer k ≤ d we shall investigate the approximation of ψk(S, T, u), where we fix u = au, with a in \(\mathbb {R}^{d}\setminus (-\infty ,0]^{d}\) and u sufficiently large.
Let hereafter \(\mathcal {I}\) denote a non-empty index set of {1, … , d}. For a given vector, say \(\boldsymbol {x}\in \mathbb {R}^{d}\) we shall write \(\boldsymbol {x}_{\mathcal {I}}\) to denote a subvector of x obtained by dropping its components not in \(\mathcal {I}\). Set next
with
where \(\boldsymbol {E}_{\mathcal {I}}(\boldsymbol {a} u)\) was defined in Eq. 1.
In vector notation for any \(u\in \mathbb {R}\)
The following lower bound (by Bonferroni inequality)
together with the upper bound
are crucial for the derivation of the exact asymptotics of ψk(S, T, au) as \(u\to \infty \). As we shall show below, the upper bound (5) turns out to be exact asymptotically as \(u\to \infty \). The following theorem constitutes the main finding of this contribution.
Theorem 2.1
Suppose that the square d × d real-valued matrix Γ is non-singular. If a has no more than k − 1 non-positive components, where k ≤ d is a positive integer, then for all \( 0 \leq S< T < \infty , \boldsymbol {c}\in \mathbb {R}^{d} \)
Moreover, Eq. 6 holds also if \(T=\infty \), provided that c and a + ct have no more than k − 1 non-positive components for all t ≥ 0.
Essentially, the above result is the claim that the second term in the Bonferroni lower bound (4) is asymptotically negligible. In order to prove that, the asymptotics of \( \psi _{\left \lvert \mathcal {I} \right \rvert } (S,T,\boldsymbol {a}_{\mathcal {I}} u)\) has to be derived. For the special case that \(\mathcal {I}\) has only two elements and S = 0, its approximation has been obtained in Dȩbicki et al. (2020). Note in passing that the assumption in Theorem 2.1 that a has no more than k − 1 non-positive components excludes the existence of a set \(\mathcal {I} \subset \{1,\ldots , d\}, \ |\mathcal {I}|=k\) such that \(\psi _{\mathcal {I}} (0,T,\boldsymbol {a}_{\mathcal {I}} u)\) does not tend to 0 as \(u\to \infty \); due to its non-rare-event nature, that case is not of interest in this contribution.
The next result extends the findings of Dȩbicki et al. (2020) to the case d > 2. For notational simplicity we consider the case \(\mathcal {I}\) has d elements and thus avoid indexing by \(\mathcal {I}\). Recall that in our model W(t) = ΓB(t) where B(t) has independent standard Brownian motion components and Γ is a d × d non-singular real-valued matrix. Consequently Σ = ΓΓ⊤ is a positive definite matrix. Hereafter \(\boldsymbol 0 \in \mathbb {R}^{d}\) is the column vector with all elements equal 0. Denote by πΣ(a) the quadratic programming problem:
Its unique solution \(\tilde { \boldsymbol {a}}\) is such that
where \(\tilde {\boldsymbol {a}}_{J}\) is defined if J = {1, … , d}∖ I is non-empty. The index set I is unique with \(m=\left \lvert I \right \rvert \ge 1\) elements, see the next lemma (or Dȩbicki et al. (2018)[Lem 2.1]) for more details.
Lemma 2.2
Let Σ be a d × d positive definite matrix and let \(\boldsymbol {a} \in \mathbb {R}^{d} \setminus (-\infty , 0]^{d} \). πΣ(a) has a unique solution \(\tilde {\textbf {a}}\) given in (7) with I a unique non-empty index set with m ≤ d elements such that
for any index set F ⊂{1, … , d} containing I. Further if \(\boldsymbol {a}= {(a ,\ldots , a)^{\top } , a}\in (0,\infty )\), then \( 2 \le \left \lvert I \right \rvert \le d\).
In the following we set
In view of the above lemma
with the convention that when J is empty the indexing should be disregarded so that the last inequality above is irrelevant.
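The quadratic programming problem πΣ(a) and the index set I of Lemma 2.2 can be computed numerically. The sketch below is our own (the tolerance used to detect I is an implementation choice, not from the paper): it minimises x⊤Σ⁻¹x over x ≥ a and reads off the dual vector λ = Σ⁻¹ã.

```python
import numpy as np
from scipy.optimize import minimize

def solve_pi_sigma(Sigma, a, tol=1e-6):
    """Solve min x^T Sigma^{-1} x subject to x >= a; return (a_tilde, lam, I).

    lam = Sigma^{-1} a_tilde is the dual vector; I = {i : lam_i > 0} is the
    index set on which a_tilde coincides with a (cf. Lemma 2.2)."""
    Sigma = np.asarray(Sigma, dtype=float)
    a = np.asarray(a, dtype=float)
    Q = np.linalg.inv(Sigma)
    res = minimize(lambda x: x @ Q @ x, x0=np.maximum(a, 1.0),
                   jac=lambda x: 2.0 * Q @ x,
                   bounds=[(ai, None) for ai in a], method="L-BFGS-B")
    a_tilde = res.x
    lam = Q @ a_tilde
    I = [i for i in range(len(a)) if lam[i] > tol]
    return a_tilde, lam, I
```

For Σ = I and a = (1, −1)⊤ the solution is ã = (1, 0)⊤ with I = {1} (first coordinate), matching the intuition that only the binding constraints enter I.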
The next theorem extends the main result in Dȩbicki et al. (2020) and further complements the findings presented in Theorem 2.1, showing that the simultaneous ruin probability (i.e., k = d) behaves, up to a constant, asymptotically as \(u\to \infty \) in the same way as pT(u). For notational simplicity and without loss of generality we consider next T = 1.
Theorem 2.3
If \(\boldsymbol {a} \in \mathbb {R}^{d}\) has at least one positive component and Γ is non-singular, then for all S ∈ [0,1)
where \(C(\boldsymbol {a})= {\prod }_{i\in I} \lambda _{i} {\int \limits }_{\mathbb {R}^{m}} \mathbb {P} \left \{ \exists _{t\ge 0}:\boldsymbol {W}_{I}(t)-t\boldsymbol {a}_{I}>\boldsymbol {x}_{I} \right \} e^{\boldsymbol \lambda ^{\top }_{I} \boldsymbol {x}_{I}} \mathrm {d}\boldsymbol {x}_{I} \in (0,\infty )\).
Remark 2.4
i)
By Lemma 4.6 below taking T = 1 therein (hereafter φ denotes the probability density function (pdf) of ΓB(1))
as \(u\to \infty \), where \(\boldsymbol \lambda = {\Sigma }^{-1} \tilde {\boldsymbol {a}}\) and if J = {1, … , d}∖ I is non-empty, then \(U=\{j\in J:\tilde { a}_{j}= a_{j}\}\). When J is empty the conditional probability related to U above is set to 1.
ii)
Combining Theorems 2.1 and 2.3 for all S ∈ [0,1) and all \(\boldsymbol {a} \in \mathbb {R}^{d}\) with no more than k − 1 non-positive components we have
for some C > 0 and some \(\mathcal {I}^{*}\subset \{1 ,\ldots , d\}\) with k elements.
iii)
Comparing the results of Theorem 2.3 and Dȩbicki et al. (2018) we obtain
$$ \limsup_{u\to \infty} \frac{ (-\ln \psi_{k}(S_{1}, 1,\boldsymbol {a} u))^{1/2}}{ - \ln \psi_{k}(S_{2}, \infty,\boldsymbol {a} u) }< \infty $$
for all \( S_{1}\in [0,T], S_{2}\in [0,\infty )\).
iv)
Define the failure time (consider for simplicity k = d) for our multidimensional model by
If a has at least one positive component, then for all T > S ≥ 0, x > 0
see the proof in Section 4.
3 Examples
In order to illustrate our findings we shall consider three examples, assuming that ΓΓ⊤ is a positive definite correlation matrix. The first example is dedicated to the simplest case k = 1. In the second one we discuss k = 2, restricting a to have all components equal to 1. In the last example only the assumption that ΓΓ⊤ is an equi-correlated correlation matrix is imposed. In this section T = 1 and S ∈ [0,1) is fixed.
Example 1 (k = 1)
Suppose that a has all components positive. In view of Theorem 2.1 we have that
as \(u\to \infty \). Note that for any positive integer i ≤ d
where B is a standard Brownian motion. It follows easily that
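The one-dimensional crossing probability behind the k = 1 case has a classical closed form: for a standard Brownian motion B, \(\mathbb {P}\{\exists t\in [0,T]: B(t)-ct>u\}=\bar {\Phi }((u+cT)/\sqrt {T})+e^{-2cu}\bar {\Phi }((u-cT)/\sqrt {T})\). A sketch evaluating it (our own helper name):

```python
import math

def ruin_1d(u, c, T=1.0):
    """P{exists t in [0, T]: B(t) - c t > u} for a standard Brownian motion B."""
    Phi_bar = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))  # Gaussian tail
    s = math.sqrt(T)
    return Phi_bar((u + c * T) / s) + math.exp(-2.0 * c * u) * Phi_bar((u - c * T) / s)
```

For c = 0 the formula reduces to 2Φ̄(u/√T), the reflection-principle value.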
Example 2 (k = 2 and a = 1)
Suppose next k = 2 and a has all components equal 1. By Theorems 2.1 and 2.3 we have that
as \(u\to \infty \), where \(\boldsymbol 1 \in \mathbb {R}^{d}\) has all components equal to 1. Using further Remark 2.4 we obtain
Here we set ρi, j = corr(Wi(1), Wj(1)). Consequently, if \(\rho _{i,j}>\rho _{i^{*},j^{*}}\), then as \(u\to \infty \)
The same holds also if \(\rho _{i,j}=\rho _{i^{*},j^{*}}\) and \(c_{i}+c_{j}>c_{i^{*}}+c_{j^{*}}\). If we denote by τ the maximum of all ρi, j’s and by c∗ the maximum of ci + cj for all i, j’s such that ρi, j = τ, then we conclude that
Note that in this case Ci, j(1) does not depend on i and j and equals
where (B1(t), B2(t)), t ≥ 0 is a 2-dimensional Gaussian process with Bi’s being standard Brownian motions with constant correlation τ. Consequently, as \(u\to \infty \)
where
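The pair-selection rule described in this example — take the maximal correlation τ and, among the pairs attaining it, the maximal drift sum c∗ — is straightforward to implement. A sketch (function and variable names are ours):

```python
import itertools
import numpy as np

def dominant_pairs(Sigma, c):
    """Return (tau, c_star, winners): the maximal correlation tau = max rho_{i,j},
    the maximal drift sum c_star = max (c_i + c_j) over pairs attaining tau,
    and the list of pairs attaining both maxima."""
    Sigma = np.asarray(Sigma, dtype=float)
    c = np.asarray(c, dtype=float)
    d = len(c)
    pairs = list(itertools.combinations(range(d), 2))
    rho = {p: Sigma[p] / np.sqrt(Sigma[p[0], p[0]] * Sigma[p[1], p[1]]) for p in pairs}
    tau = max(rho.values())
    top = [p for p in pairs if np.isclose(rho[p], tau)]
    c_star = max(c[i] + c[j] for i, j in top)
    winners = [p for p in top if np.isclose(c[p[0]] + c[p[1]], c_star)]
    return tau, c_star, winners
```

The pairs returned in `winners` are exactly those that drive the asymptotics of ψ2(S, 1, 1u) in this example.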
Example 3 (Equi-correlated risk model)
We consider the matrix Γ such that Σ = ΓΓ⊤ is an equi-correlated non-singular correlation matrix with off-diagonal entries equal to ρ ∈ (− 1/(d − 1),1). Let \(\boldsymbol {a}\in \mathbb {R}^{d}\) have at least one positive component and assume for simplicity that its components are ordered, i.e., a1 ≥ a2 ≥⋯ ≥ ad and thus a1 > 0. By the Sherman–Morrison formula the inverse of Σ equals
$$ {\Sigma}^{-1}=\frac{1}{1-\rho}\left( J_{d}-\frac{\rho}{1+(d-1)\rho} \boldsymbol 1 \boldsymbol 1^{\top}\right), $$
where Jd is the identity matrix. First we determine the index set I corresponding to the unique solution of πΣ(a). We have for this case that I with m elements is unique and in view of Eq. 7
with \(\boldsymbol 0 \in \mathbb {R}^{d}\) the origin. From the above \(m=\left \lvert I \right \rvert =d\) if and only if
which holds in the particular case that all ai’s are equal and positive.
When the above does not hold, the second condition on the index set I given in Eq. 7 reads
Next, suppose that \(a_{i}=a>0, c_{i}=c\in \mathbb {R}\) for all i ≤ d. In view of Eq. 13 for any positive integer k ≤ d and any S ∈ [0,1) we have
where (set I = {1, … , k})
Note that the case ρ = 0 is treated in Bai et al. (2018)[Prop. 3.6] and follows as a special case of this example.
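The equi-correlated structure underlying this example can be checked numerically. The sketch below (our own) verifies the Sherman–Morrison form of Σ⁻¹, assuming Σ = (1 − ρ)Jd + ρ 1 1⊤ with Jd the identity matrix:

```python
import numpy as np

def equi_corr_inverse(d, rho):
    """Sherman-Morrison inverse of Sigma = (1 - rho) I + rho 1 1^T."""
    J = np.eye(d)
    one = np.ones((d, d))
    return (J - rho / (1.0 + (d - 1) * rho) * one) / (1.0 - rho)

d, rho = 4, 0.3
Sigma = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
err = np.max(np.abs(Sigma @ equi_corr_inverse(d, rho) - np.eye(d)))
```

The product Σ Σ⁻¹ should reproduce the identity matrix up to machine precision.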
4 Proofs
4.1 Proof of Theorem 1.1
Our proof below is based on the idea of the proof of Korshunov and Wang (2020)[Thm 1.1], where the case of zero drift c, k = d and S = 0 was considered. Recall the definition of the sets \(\boldsymbol E_{\mathcal {I}}(\boldsymbol {u})\) and E(u) introduced in Eq. 1 for any non-empty \(\mathcal {I}\subset \{1,\ldots , d\}\) such that \(|\mathcal {I}|=k \le d\). With this notation we have
where τk(u) is the ruin time defined by
For the lower bound, we note that
By the fact that Brownian motion has continuous sample paths
almost surely, where ∂A stands for the topological boundary (frontier) of the set \(A \subset \mathbb {R}^{d}\). Consequently, by the strong Markov property of the Brownian motion, we can write further
A crucial observation is that the boundary ∂E(u) can be represented as the following union
For every \(\boldsymbol {x}\in F_{\mathcal {I}}(\boldsymbol {u})\) using the self-similarity of Brownian motion for all non-empty index sets \(\mathcal {I} \subset \{1 ,\ldots , d\}\) and all t ∈ (S, T)
where \(\tilde {c}_{i}=\max \limits (0,c_{i})\), hence for all x ∈ ∂E(u)
Consequently, using further Eq. 17 we obtain
establishing the proof.
4.2 Proof of Theorem 2.1
The results in this section hold under the assumption that Σ = ΓΓ⊤ is positive definite, which is equivalent to our assumption that Γ is non-singular. The next lemma is a consequence of Hashorva (2019)[Lem 2]. We recall that φ denotes the probability density function of ΓB(1).
Lemma 4.1
For any \(\boldsymbol {a}\in \mathbb {R}^{d} \setminus (-\infty , 0]^{d}\) we have for some positive constants C1, C2
where α is some integer and \(\tilde {\boldsymbol {a}}\) is the solution of quadratic programming problem \({\Pi }_{\Sigma }(\boldsymbol {a}), {\Sigma }={\Gamma } {\Gamma }^{\top } \) and I is the unique index set that determines the solution of πΣ(a).
We agree in the following that if \(\mathcal {I}\) is empty, then simply the term \(A_{\mathcal {I}}(t)\) should be deleted from the expressions below; recall that \(A_{\mathcal {I}}(t)\) is defined in Eq. 3.
We state next three lemmas utilised in the case \(T< \infty \). Their proofs are displayed in the Appendix.
Lemma 4.2
Let \(\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}\) be two index sets such that \(\mathcal {I}\not =\mathcal {J}\) and \(|\mathcal {I}|=|\mathcal {J}|=k {\ge }1\). If \(\boldsymbol {a}_{\mathcal {I}\cup \mathcal {J}}\) has at least two positive components, then for any s, t ∈ [0,1] there exists some ν = ν(s, t) > 0 such that as \(u\to \infty \)
and
Lemma 4.3
Let S > 0, k ≤ d be a positive integer and let \(\boldsymbol {a} \in \mathbb {R}^{d}\) be given. If \(\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}\) are two different index sets with k ≥ 1 elements such that \(\boldsymbol {a}_{\mathcal {I}\cup \mathcal {J}}\) has at least one positive component, then there exist s1, s2 ∈ [S,1] and some positive constant τ such that as \(u\to \infty \)
Case \(T< \infty \)
According to Theorem 1.1 and Lemma 4.1 it is enough to give the proof for S ∈ (0, T). In view of the self-similarity of Brownian motion we assume for simplicity that T = 1. Recall that in our notation Σ = ΓΓ⊤ is the covariance matrix of W(1), which is non-singular, and we denote the pdf of W(1) by φ. In view of Eqs. 19 and 20, for all S ∈ (0,1) there exists some ν > 0 such that as \(u\to \infty \)
Note that we may utilise Eqs. 19 and 20 for sets \(\mathcal {I}\) and \(\mathcal {J}\) of length k, because of the assumption that a has no more than k − 1 non-positive components. Hence any vector \(\boldsymbol {a}_{\mathcal {I}}\) has at least one positive component.
Further, by Theorem 1.1 and the inclusion-exclusion formula we have that for some K > 0 and all u sufficiently large
Hence the claim follows from Eqs. 4 and 5.
Case \(T=\infty \)
Using the self-similarity of Brownian motion we have
where
For t > 0 define
Since \(\lim _{t\downarrow 0}r_{{\mathcal {I}}}(t)=\infty \) we set below \(r_{{\mathcal {I}}}(0)=\infty \).
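The display defining \(r_{\mathcal {I}}(t)\) (Eq. 22) is not reproduced in this excerpt; assuming it denotes the optimal value of the quadratic programme \({\Pi }_{t{\Sigma }_{\mathcal {I}\mathcal {I}}}(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t)\), as Eq. 23 suggests, the minimiser \(\hat {t}_{\mathcal {I}}\) appearing in Lemmas 4.4–4.5 can be located numerically. A sketch under that assumption (all names are ours):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def qp_value(Sigma, b):
    """Optimal value of min x^T Sigma^{-1} x subject to x >= b."""
    Q = np.linalg.inv(np.asarray(Sigma, dtype=float))
    b = np.asarray(b, dtype=float)
    res = minimize(lambda x: x @ Q @ x, x0=np.maximum(b, 1.0),
                   jac=lambda x: 2.0 * Q @ x,
                   bounds=[(bi, None) for bi in b], method="L-BFGS-B")
    return res.fun

def r(t, Sigma, a, c):
    """Assumed r(t): value of the quadratic programme Pi_{t Sigma}(a + c t)."""
    return qp_value(t * np.asarray(Sigma, float),
                    np.asarray(a, float) + np.asarray(c, float) * t)

# One-dimensional sanity check: r(t) = (a + c t)^2 / t is minimised at t = a / c,
# with minimal value 4 a c (here a = 2, c = 0.5, so t_hat = 4).
Sigma1, a1, c1 = np.eye(1), np.array([2.0]), np.array([0.5])
t_hat = minimize_scalar(lambda t: r(t, Sigma1, a1, c1),
                        bounds=(1e-3, 50.0), method="bounded").x
```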
In view of Lemma 4.1 we have as \(u\to \infty \)
where \(\widetilde {\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t}\) is the solution of quadratic programming problem \({\Pi }_{t{\Sigma }_{\mathcal {I}\mathcal {I}}}(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t)\) and \(\varphi _{\mathcal {I},t}(\boldsymbol {x})\) is the pdf of \(\boldsymbol {W}_{\mathcal {I}}(t)\), α is some integer and C1, C2 are positive constants that do not depend on u. For notational simplicity we shall omit below the subscript \(\mathcal {I}\).
The rest of the proof is established by utilising the following lemmas, whose proofs are displayed in Appendix.
Lemma 4.4
Let k ≤ d be a positive integer and let \(\boldsymbol {a},\boldsymbol {c}\in \mathbb {R}^{d}\). Consider two different sets \(\mathcal {I},\mathcal {J}\subset \{1,{\ldots } ,d\}\) of cardinality k. If both \(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t\) and \(\boldsymbol {a}_{\mathcal {J}}+\boldsymbol {c}_{\mathcal {J}} t\) have at least one positive component for all t > 0 and both \(\boldsymbol {c}_{\mathcal {I}}\) and \(\boldsymbol {c}_{\mathcal {J}}\) also have at least one positive component, then in case \(\hat {t}_{\mathcal {I}}:=\arg \min \limits _{t>0}~r_{\mathcal {I}}(t)\not =\hat {t}_{\mathcal {J}}:=\arg \min \limits _{t>0}~r_{\mathcal {J}}(t)\),
Lemma 4.5
Under the setting of Lemma 4.4, if a + ct has no more than k − 1 non-positive components for all t > 0 and c has no more than k − 1 non-positive components, then in case \(\hat {t}_{\mathcal {I}}:=\arg \min \limits _{t>0}~r_{\mathcal {I}}(t)=\hat {t}_{\mathcal {J}}:=\arg \min \limits _{t>0}~r_{\mathcal {J}}(t)\)
Combining the above two lemmas we have that for any two index sets \(\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}\) of cardinality k, there is some index set \(\mathcal {K}\subset \{1,\ldots , d\}\) such that as \(u\to \infty \)
which is equivalent to
The proof follows now by Eqs. 4 and 5.
4.3 Proof of Theorem 2.3
Below we set
and denote by \(\tilde { \boldsymbol {a}}\) the unique solution of the quadratic programming problem πΣ(a).
We denote below by I the index set that determines the unique solution of πΣ(a), where \(\boldsymbol {a} \in \mathbb {R}^{d}\) has at least one positive component (see Lemma 2.2). If J = {1, … , d}∖ I is non-empty, then we set below \(U=\{j\in J:\tilde { a}_{j}= a_{j}\}\). The number of elements |I| of I is denoted by m, which is a positive integer.
The next lemma is proved in Appendix.
Lemma 4.6
For any Λ > 0, \(\boldsymbol {a}\in \mathbb {R}^{d} \setminus (-\infty ,0]^{d}, {\boldsymbol {c} \in \mathbb {R}^{d}}\) and all sufficiently large u there exists C > 0 such that
and further
where \(C(\boldsymbol {c})= \mathbb {P} \left \{ \boldsymbol {W}_{U}(1)> \boldsymbol {c}_{U} \lvert \boldsymbol {W}_{I}(1)> \boldsymbol {c}_{I} \right \} \) and for \(\boldsymbol \lambda = {\Sigma }^{-1} \tilde {\boldsymbol {a}}\)
for all constants Λ1 < Λ2. We set C(c) equal to 1 if U defined in Remark 2.4 is empty. Further we have
First note that for all Λ, u positive
In view of Lemmas 4.6 and 4.1
hence
and thus the proof follows applying Eq. 24.
4.4 Proof of Eq. 14
The proof is similar to that of Dȩbicki et al. (2017)[Thm 2.5] and therefore we highlight only the main steps. If T > S ≥ 0 by the definition of τ(u) and the self-similarity of Brownian motion
Thus, without loss of generality in the rest of the proof we suppose that T = 1 > S ≥ 0.
We note that
Next, for \(\tilde {x}(u)=1-\frac {x}{u^{2}}\)
Hence by Theorem 2.3, the fact that
and
we obtain
Moreover, following the same reasoning as above
as \(u\to \infty \). Thus, the combination of Eqs. 26 and 27 leads to
References
Avram, F., Palmowski, Z., Pistorius, M.: A two-dimensional ruin problem on the positive quadrant. Insurance Math. Econom. 42(1), 227–234 (2008)
Avram, F., Palmowski, Z., Pistorius, M.R.: Exit problem of a two-dimensional risk process from the quadrant: exact and asymptotic results. Ann. Appl. Probab. 18(6), 2421–2449 (2008)
Bai, L., Dȩbicki, K., Liu, P.: Extremes of vector-valued Gaussian processes with trend. J. Math. Anal. Appl. 465(1), 47–74 (2018)
Borovkov, K., Palmowski, Z.: The exact asymptotics for hitting probability of a remote orthant by a multivariate Lévy process: the Cramér case. In: 2017 MATRIX Annals, vol. 2 of MATRIX Book Ser., pp. 303–309. Springer, Cham (2019)
Dȩbicki, K., Hashorva, E., Ji, L., Rolski, T.: Extremal behavior of hitting a cone by correlated Brownian motion with drift. Stoch. Proc. Appl. 128(12), 4171–4206 (2018)
Dȩbicki, K., Hashorva, E., Kriukov, N.: Pandemic-type failures in multivariate Brownian risk models. arXiv:2008.07480 (2021)
Dȩbicki, K., Hashorva, E., Liu, P.: Extremes of γ-reflected Gaussian processes with stationary increments. ESAIM Probab. Stat. 21, 495–535 (2017)
Dȩbicki, K., Hashorva, E., Michna, Z.: Simultaneous ruin probability for two-dimensional Brownian risk model. J. Appl. Probab. 57(2), 597–612 (2020)
Delsing, G.A., Mandjes, M.R.H., Spreij, P.J.C., Winands, E.M.M.: Asymptotics and approximations of ruin probabilities for multivariate risk processes in a Markovian environment. Methodol. Comput. Appl. Probab. 22(3), 927–948 (2020)
Dombry, C., Rabehasaina, L.: High order expansions for renewal functions and applications to ruin theory. Ann. Appl. Probab. 27, 2342–2382 (2017)
Foss, S., Korshunov, D., Palmowski, Z., Rolski, T.: Two-dimensional ruin probability for subexponential claim size. Probab. Math. Statist. 37(2), 319–335 (2017)
Hashorva, E.: Approximation of some multivariate risk measures for Gaussian risks. J. Multivariate Anal. 169, 330–340 (2019)
Hu, Z., Jiang, B.: On joint ruin probabilities of a two-dimensional risk model with constant interest rate. J. Appl. Probab. 50(2), 309–322 (2013)
Ji, L.: On the cumulative Parisian ruin of multi-dimensional Brownian motion risk models. Scand. Actuar. J. 9, 819–842 (2020)
Ji, L., Robert, S.: Ruin problem of a two-dimensional fractional Brownian motion risk process. Stoch. Models 34(1), 73–97 (2018)
Korshunov, D., Wang, L.: Tail asymptotics for Shepp-statistics of Brownian motion in \(\mathbb {R}^{d}\). Extremes 23(1), 35–54 (2020)
Korshunov, D.A., Piterbarg, V.I., Hashorva, E.: On the asymptotic Laplace method and its application to random chaos. Mat. Zametki 97 (6), 868–883 (2015)
Lieshout, P., Mandjes, M.: Tandem Brownian queues. Math. Methods Oper. Res. 66(2), 275–298 (2007)
Pan, Y., Borovkov, K.A.: The exact asymptotics of the large deviation probabilities in the multivariate boundary crossing problem. Adv. Appl. Probab. 51(3), 835–864 (2019)
Piterbarg, V.I.: Asymptotic Methods in the Theory of Gaussian Processes and Fields, vol. 148 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI. Translated from the Russian by V.V. Piterbarg, revised by the author (1996)
Samorodnitsky, G., Sun, J.: Multivariate subexponential distributions and their applications. Extremes 19(2), 171–196 (2016)
Acknowledgements
We are thankful to the reviewers for valuable comments and corrections. K.D. was partially supported by NCN Grant No. 2018/31/B/ST1/00370. Support from the Swiss National Science Foundation Grant 200021-196888 is also kindly acknowledged.
Data availability statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Lemma A.1
If \(\boldsymbol {a}\in (\mathbb {R}\cup \{-\infty \})^{d}\) and \(\mathcal {I}\subset \{1,\ldots , d\}\) are such that \(\boldsymbol {a}_{\mathcal {I}}\) has at least two positive components and Γ is non-singular, then for all t > 0
where \(\nu =\nu (t,\mathcal {I})>0\) does not depend on u.
Remark A.2
Lemma A.1 implies that for any vector \(\boldsymbol {a}\in (\mathbb {R}\cup \{-\infty \})^{d}\) and any d-dimensional Gaussian random vector W, if a has at least two positive components, then there exist some positive constant η and some i ∈ {1,…,d} such that as \(u\to \infty \)
Proof of Lemma A.1
For notational simplicity we shall assume that \(\mathcal {I}= \{1,\ldots , d \}\) and set \(K_{i}=\mathcal {I}\setminus \{i\} \). By the assumption for all \(i\in \mathcal {I}\) the vector \(\boldsymbol {a}_{K_{i}}\) has at least one positive component and Σ = ΓΓ⊤ is positive definite. In view of Lemma 4.1 for any fixed t > 0 and some C1, C2 two positive constants we have
where φt is the pdf of W(t) with covariance matrix Σ(t) = tΣ and \(\tilde {\boldsymbol {a}}=\arg \min \limits _{\boldsymbol {x} \ge \boldsymbol {a}} \boldsymbol {x}^{\top } {\Sigma }^{-1}(t)\boldsymbol {x}\), \(\bar {\boldsymbol {a}_{i}}=\arg \min \limits _{\boldsymbol {x}\in S_{i}} \boldsymbol {x}^{\top }{\Sigma }^{-1}(t)\boldsymbol {x},\) with \(S_{i}=\{\boldsymbol {x}\in \mathbb {R}^{d}:~\forall j \in K_{i} : x_{j}\geq a_{j}\}\). Since \(\{\boldsymbol {x}\in \mathbb {R}^{d}: \boldsymbol {x} \ge \boldsymbol {a} \}\subset S_{i}\), clearly
for any i ≤ d. Next, if we have strict inequality for some i ≤ d, i.e., \(\tilde {\boldsymbol {a}}^{\top }{\Sigma }^{-1}(t) \tilde {\boldsymbol {a}}>\bar {\boldsymbol {a}_{i}}^{\top } {\Sigma }^{-1}(t)\bar {\boldsymbol {a}_{i}}\), then it follows that
for \(\nu =\frac {1}{2}\left (\tilde {\boldsymbol {a}}^{\top }{\Sigma }^{-1}(t) \tilde {\boldsymbol {a}}-\bar {\boldsymbol {a}_{i}}^{\top } {\Sigma }^{-1}(t)\bar {\boldsymbol {a}_{i}}\right ) >0\), hence the claim follows.
Let us consider now the extreme case that for all i ≤ d we have \(\tilde {\boldsymbol {a}}^{\top }{\Sigma }^{-1}\tilde {\boldsymbol {a}}=\bar {\boldsymbol {a}_{i}}^{\top } {\Sigma }^{-1}\bar {\boldsymbol {a}_{i}}\). Define the following set
Since Σ(t) is positive definite, E is a full dimensional ellipsoid in \(\mathbb {R}^{d}\). By the definition, \(E\cap S_{i}=\{\tilde {\boldsymbol {a}}\}\). Define the following lines in \(\mathbb {R}^{d}\)
and observe that since \(l_{i}\subset S_{i}\), we have \(l_{i}\cap E=\{\tilde {\boldsymbol {a}}\}\), and these lines are linearly independent. Since the boundary of E is smooth, there cannot be more than d − 1 linearly independent tangent lines at the point \(\tilde {\boldsymbol {a}}\), which leads to a contradiction. □
Proof of Lemma 4.2
First note that since \(\mathcal {I}\not = \mathcal {J}\), we have \(|\mathcal {I}\cup \mathcal {J}|\geq k+1\). Consequently, we can find some index set \(\mathcal {K}\) such that
and further \( \boldsymbol {a}_{\mathcal {K}}\) has at least two positive components. Applying Lemma A.1 for any t ∈ [0, 1] and some ν > 0
If s = t, then applying Lemma 4.1
Next, if s < 1, then applying Lemma 4.1 we obtain
A similar asymptotic bound follows for t < 1, whereas if s = t = 1, the first claim follows directly from the case s = t discussed above. We show next Eq. 19. If s < t, then s < 1 and applying Lemma 4.1 we obtain
A similar asymptotic bound follows for t < s or s = t ≤ 1 by applying Eq. 18 establishing the proof. □
Proof of Lemma 4.3
Define for s, t ∈ [S, 1] the Gaussian random vector
with covariance matrix D(s, t). We show first that this matrix is positive definite. For this we assume that s ≤ t. As D(s, t) is some covariance matrix, we know that it is non-negative definite. Choose some vector \(\boldsymbol v\in \mathbb {R}^{d}\). It is sufficient to show that if v⊤D(s, t)v = 0, then v = 0 (here \(\boldsymbol 0=(0, \ldots ,0)^{\top }\in \mathbb {R}^{d}\)). Note that
Using that W has independent increments, this variance equals the sum of the two variances; hence both must vanish. In particular, V ar(〈W(s), v〉) = 0 and, as \(s\geqslant S>0\), we conclude that v = 0. Thus, D(s, t) is positive definite and D− 1(s, t) exists. Set further
With this notation we have
Let \(\tilde {\mathfrak {a}}(s,t) = \arg \min \limits _{{\boldsymbol {x}}\geq \mathfrak {a}} \boldsymbol {x}^{\top } D^{-1}(s,t) \boldsymbol {x}\) be the unique solution of \({\Pi }_{D(s,t)}(\mathfrak {a})\) and let further \( \mathfrak {w}(s,t) =D^{-1}(s,t)\tilde {\mathfrak {a}}(s,t)\) be the solution of the dual problem. We denote by I(s, t) the index set related to the quadratic programming problem \({\Pi }_{D(s,t)}(\mathfrak {a})\). Then \( \mathfrak {w}(s,t)\) has non-negative components and according to Lemma 2.2 since both s, t ≥ S > 0 we have
Consequently, we have
for any positive u, where \(\mathfrak {C}=\min \limits _{s,t\in [S,1]}\frac {\mathfrak {w}^{\top }(s,t)\mathfrak {c}(s,t)}{\mathfrak {w}^{\top }(s,t)\tilde {\mathfrak {a}}(s,t)}\). Moreover, for some s1, s2 ∈ [S, 1]
since [S, 1]2 is compact. Moreover, one can check that for some positive constant G and s1, s2, t1, t2 ∈ [S, 1]
Thus, utilising the Piterbarg inequality, see e.g., Piterbarg (1996)[Thm 8.1], we have that there exist positive constants C, γ such that
for all u positive. Further, by Lemma 4.1 for some constants α, C∗, C+ as \(u\to \infty \)
Hence the claim follows for \(\tau =|\mathfrak {C}/\sigma ^{2}|+\sup _{s,t\in [S,1]}|\tilde {\mathfrak {a}}(s,t)D^{-1}(s,t)\mathfrak {c}(s,t)|+1\). □
Lemma A.3
The function \(r_{\mathcal {I}}(t),t>0\) defined in Eq. 22 is convex and if \(\boldsymbol {c}_{\mathcal {I}}\) has at least one positive component, then there exists T > 0 such that for some positive s and any t > 0
Moreover, if \(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t\) for any t > 0 have at least one positive component, then \(r_{\mathcal {I}}(t),t>0\) has a unique point of minimum.
The proof of Lemma A.3 is purely analytical, thus we skip the details, referring for precise argumentation to the extended version of this contribution (Dȩbicki et al. 2021).
Lemma A.4
Suppose that Σ = ΓΓ⊤ is positive definite. For any non-empty subset \(\mathcal {I}\subset \{1,\ldots , d\}\) if \(\boldsymbol {c}_{\mathcal {I}}\) and \(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t\) for all t ≥ 0 have at least one positive component, then for any point \(0<t\not =\hat {t}=\arg \min \limits _{t>0} r_{\mathcal {I}}(t)\) there exists some positive constant ν such that
Proof of Lemma A.4
For notational simplicity we omit below the subscript \(\mathcal {I}\). Since Var(W(t)) = tΣ for any t > 0, by Lemma 4.1
where C is some positive constant, α(t) is an integer and \(\tilde {\boldsymbol {\mathfrak {p}}}(t)\) is the unique solution of πtΣ(a + ct), which can be reformulated also as
If \(t\not =\hat {t}\), then \(r(t)-r(\hat {t})=\tau >0\) and
as \(u\to \infty \). □
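The covariance identity Var(W(t)) = tΣ used at the start of the proof, with W(t) = ΓB(t) and Σ = ΓΓ⊤, can be verified by a short simulation. The matrix Γ below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = np.array([[1.0, 0.0], [0.5, 1.0]])   # illustrative non-singular Gamma
Sigma = Gamma @ Gamma.T
t, n = 2.0, 200_000

# B(t) ~ N(0, t I) for a standard 2-d Brownian motion at a fixed time t
B = rng.standard_normal((n, 2)) * np.sqrt(t)
W = B @ Gamma.T                              # samples of W(t) = Gamma B(t)
emp_cov = np.cov(W, rowvar=False)            # should be close to t * Sigma
```

With 200,000 samples the empirical covariance matches tΣ to within Monte Carlo error of order 10⁻².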
Lemma A.5
Let \(\boldsymbol {a},\boldsymbol {c}\in \mathbb {R}^{d}\) be such that a + ct has at least one positive component for all t in a compact set \(\mathcal {T}\subset (0, \infty )\). If Σ = ΓΓ⊤ is positive definite, then there exist constants C > 0, γ > 0 and \(\mathfrak {t}\in \mathcal {T}\) such that for all u > 0
If we also have that for some non-overlapping index sets \(\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}\) and some compact subset \(\mathcal {T}\subset [0,\infty )^{2}\) the vector \(((\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t_{1})^{\top }, (\boldsymbol {a}_{\mathcal {J}}+\boldsymbol {c}_{\mathcal {J}} t_{2})^{\top })^{\top }\) has at least one positive component for all \((t_{1},t_{2})\in \mathcal {T}\), then for some \(\mathfrak {t}=(\mathfrak {t}_{1},\mathfrak {t}_{2})\in \mathcal {T}\) as \(u\to \infty \)
Moreover, the same estimate holds if \(\mathcal {I}\) and \(\mathcal {J}\) are overlapping and for all \((\mathfrak {t}_{1},\mathfrak {t_{2}})\in \mathcal {T}\) we have \(\mathfrak {t}_{1}\not =\mathfrak {t}_{2}\).
Proof of Lemma A.5
Denote by D(t) the covariance matrix of W(t), which by assumption on Γ is positive definite. Let \(\tilde {\mathfrak {a}}(t) = \arg \min \limits _{\boldsymbol {x}\geq \boldsymbol {a}+\boldsymbol {c} t} \boldsymbol {x}^{\top } D^{-1}(t) \boldsymbol {x}\) be the solution of πD(a + ct), t > 0 and let further
be the solution of the dual optimization problem. In view of Eq. 10, \(\mathfrak {w}_{I}(t)\) has positive components for I the unique index set related to πD(t)(a + ct) and moreover, by Eq. 9,
implying
We have further that
for some \(\mathfrak {t}\in \mathcal {T}\), since \(\mathcal {T}\) is compact. Since f is continuous and positive on \(\mathcal {T}\), we may apply the Piterbarg inequality (as in the proof of Eq. 20) and obtain
for some positive constants γ and C, which depend only on W(t) and d. Since, by definition, \(r(\mathfrak {t})=1/\sigma ^{2}\), the proof of the first inequality is complete.
The next assertion may be obtained with the same arguments but for the vector-valued random process
By the definition of \(\mathcal {T}\), for any \((s,t)\in \mathcal {T}\) we have \(\lvert Var(\mathcal {W}(s,t))\rvert >0\), thus we can apply the Piterbarg inequality and, in consequence, using Lemma 4.1, the claim follows. □
Lemma A.6
Suppose that Σ = ΓΓ⊤ is positive definite. For any subset \(\mathcal {I}\subset \{1,\ldots , d\}\), if \(\boldsymbol {c}_{\mathcal {I}} \in \mathbb {R}^{\left \lvert \mathcal {I} \right \rvert }\) has at least one positive component and \(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t \in \mathbb {R}^{\left \lvert {\mathcal {I}} \right \rvert }\) has at least one positive component for all non-negative t, then for some positive constant ν, with \(\hat {t}=\arg \min \limits _{t>0} r_{\mathcal {I}}(t)\), and all T large
Proof of Lemma A.6
For notational simplicity we omit below the subscript \(\mathcal {I}\). For some given \(T>\hat {t} \) we have using Lemmas A.5, A.3
where s > 0 and ti ∈ [T + i, T + i + 1]. The last integral is finite and decreasing for sufficiently large u. Hence the claim follows with the same arguments as in the proof of Lemma A.4. □
Proof of Lemma 4.4
Using Lemma A.6 we know that there exist points \(t_{\mathcal {I}},~t_{\mathcal {J}}\) such that
Next, for some positive \(\varepsilon <|\hat {t}_{\mathcal {I}}-\hat {t}_{\mathcal {J}}|/3\) we have
Using Lemmas A.5, A.6 and
we obtain
for some positive constants ti, 3 ≤ i ≤ 6, where
Note that for i = 3, 4, \(t_{i}\not =\hat {t}_{\mathcal {I}}\). Hence by Lemma A.4
The same argument applies for i = 5, 6
Thus we can focus only on the first probability. By the definition of \(A^{*}_{\mathcal {I}}\) and \(A^{*}_{\mathcal {J}}\) in Eq. 21
where \(\boldsymbol b= ((\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} s_{1})^{\top },(\boldsymbol {a}_{\mathcal {J}}+\boldsymbol {c}_{\mathcal {J}} s_{2})^{\top })^{\top }\) and \(\mathcal {W}(s,t)=(\boldsymbol {W}_{\mathcal {I}}(s)^{\top },\boldsymbol {W}_{\mathcal {J}}(t)^{\top })^{\top }. \) Define \(\widehat {i}=(\mathcal {I}\cup \mathcal {J})\setminus \{i\}\). Applying Remark A.2, there exist an index i and a constant η > 0 such that
If \(i\in \mathcal {I}\), then
or
In both cases
establishing the proof. □
Proof of Lemma 4.5
Using Lemma A.6 we have
where
and \(T_{\mathcal {I}}\) and \(T_{\mathcal {J}}\) are the constants from Eq. 29. According to Lemma A.5 for some \((s_{i},t_{i})\in \mathbb {T}_{i}\)
If \(s_{1}\not =\hat {t}_{\mathcal {I}}\), then according to Lemma A.4
Otherwise, using the definition of \(\mathbb {T}_{1}\), \(|t_{1}-\hat {t}_{\mathcal {I}}|\leq |s_{1}-\hat {t}_{\mathcal {I}}|=0\), so \(t_{1}=\hat {t}_{\mathcal {I}}\) and thus
This probability can be bounded using Remark A.2, namely we have
for some \(i\in \mathcal {I}\cup \mathcal {J}\) and η > 0. Since \(|\mathcal {I}|=|\mathcal {J}|=k\) and \(\mathcal {I}\not =\mathcal {J}\), we have \(|\mathcal {I}\cup \mathcal {J}|\ge k+1\) and thus \(|(\mathcal {I}\cup \mathcal {J})\setminus \{i\}|\ge k\). Consequently, we have
With similar arguments we obtain further
Hence the claim follows. □
Recall that \(\tilde { \boldsymbol {a}}\) stands for the unique solution of the quadratic programming problem πΣ(a).
Proof of Lemma 4.6
By the self-similarity of Brownian motion for all u > 0
Hence, applying Theorem 1.1 we obtain
which, after some standard algebraic manipulations, implies inequality (23).
Equation 24 and limit (25) follow by the same idea as the proof of “Pickands’ lemma” in e.g. Dȩbicki et al. (2018); see Lemmas 4.2 and 4.3 therein. We skip the long but standard proof, referring for details to the extended version of this contribution (Dȩbicki et al. 2021). □
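The self-similarity (scaling) property of Brownian motion invoked in the proof of Lemma 4.6, namely that B(at) has the same law as \(\sqrt {a}\,B(t)\), can be checked empirically by simulating discretised Brownian paths; the grid sizes and seed below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 50_000, 100, 4.0
dt = T / n_steps

# simulate standard Brownian paths on [0, T] via cumulative Gaussian increments
B = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)

t_idx = n_steps // 4 - 1       # grid point at time t = 1.0
at_idx = n_steps - 1           # grid point at time a * t = 4.0 (a = 4)
ratio = B[:, at_idx].var() / B[:, t_idx].var()   # should be close to a = 4
```

Since Var(B(t)) = t, the variance ratio at times 4 and 1 converges to 4 as the number of paths grows, consistent with the scaling law.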
Dȩbicki, K., Hashorva, E. & Kriukov, N. Pandemic-type failures in multivariate Brownian risk models. Extremes 25, 1–23 (2022). https://doi.org/10.1007/s10687-021-00424-4
Keywords
- Multivariate Brownian risk model
- Probability of multiple simultaneous failures
- Simultaneous ruin probability
- Failure time
- Exact asymptotics
- Pandemic-type events