Pandemic-type failures in multivariate Brownian risk models

Modelling multiple simultaneous failures in insurance, finance and other areas of applied probability is important, especially from the point of view of pandemic-type events. A benchmark limiting model for the analysis of multiple failures is the classical d-dimensional Brownian risk model (Brm), see Delsing et al. (Methodol. Comput. Appl. Probab. 22(3), 927–948, 2020). From both a theoretical and a practical point of view, the calculation of the probability of multiple simultaneous failures in a given time horizon is of interest. The main findings of this contribution concern the approximation of the probability that at least k out of the d components of the Brm fail simultaneously. We derive both sharp bounds and asymptotic approximations of the probability of interest for the finite and the infinite time horizon. Our results extend previous findings of Dȩbicki et al. (J. Appl. Probab. 57(2), 597–612, 2020) and Dȩbicki et al. (Stoch. Proc. Appl. 128(12), 4171–4206, 2018).


Introduction
In this paper we are interested in the probabilistic aspects of multiple simultaneous failures, typically occurring due to pandemic-type events. A key benchmark risk model considered here is the d-dimensional Brownian risk model (Brm)

R(t, u) = (R_1(t, u_1), . . . , R_d(t, u_d))^⊤,  R_i(t, u_i) = u_i + c_i t − W_i(t),  t ≥ 0,

where c = (c_1, . . . , c_d)^⊤, u = (u_1, . . . , u_d)^⊤ are vectors in R^d, W(t) = A B(t) with A a d × d real-valued non-singular matrix, and B(t) = (B_1(t), . . . , B_d(t))^⊤, t ∈ R, is a d-dimensional Brownian motion with independent components which are standard Brownian motions.
By bold symbols we denote column vectors, operations with vectors are meant component-wise, and ax = (ax_1, . . . , ax_d)^⊤ for any scalar a ∈ R and any x ∈ R^d.
Indeed, the Brm is a natural limiting model in many statistical applications. Moreover, as shown in Delsing et al. (2020), such a risk model appears naturally in insurance applications. Since the Brm is a natural limiting model, it can be used as a benchmark for various complex models. Given the fundamental role of Brownian motion in applied probability and statistics, it is also of theoretical interest to study failure events arising from this model. Specifically, in this contribution we are interested in the behaviour of the probability of multiple simultaneous failures occurring in a given time horizon. In our setting failures can be defined in various ways. Let us consider first the failure of a given component of our risk model. Namely, we say that the ith component of our Brm has a failure (or ruin occurs) if R_i(t, u_i) = u_i + c_i t − W_i(t) < 0 for some t ∈ [S, T]. The extreme case of a catastrophic event is when d simultaneous failures occur. Typically, for pandemic-type events there are at least k components of the model with simultaneous failures, where k is large, with the extreme case k = d. In mathematical notation, for a given positive integer k ≤ d, of interest is the calculation of the probability

ψ_k(S, T, u) = P{∃ t ∈ [S, T], ∃ I ⊂ {1, . . . , d}, |I| = k : R_i(t, u_i) < 0 for all i ∈ I},

where |I| denotes the cardinality of the set I. If T is finite, by the self-similarity property of Brownian motion ψ_k(S, T, u) can be derived from the case T = 1, whereas T = ∞ has to be treated separately.
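To make ψ_k(S, T, u) concrete, the following is a minimal Monte Carlo sketch (ours, not part of the original analysis) that estimates the probability of at least k simultaneous failures for a small Brm on a discrete time grid; the matrix A, the drift c and the thresholds u below are hypothetical illustration choices, and time discretisation slightly underestimates the continuous-time probability.

import numpy as np

def simulate_psi_k(A, c, u, k, S=0.0, T=1.0, n_steps=2000, n_paths=20000, seed=1):
    # Crude Monte Carlo estimate of psi_k(S, T, u) on an equidistant grid.
    # A: (d, d) non-singular matrix with W(t) = A B(t); c, u: vectors in R^d.
    rng = np.random.default_rng(seed)
    d = len(u)
    dt = T / n_steps
    grid = np.linspace(0.0, T, n_steps + 1)
    in_window = grid >= S                                    # failures only count on [S, T]
    hits = 0
    for _ in range(n_paths):
        dB = rng.normal(scale=np.sqrt(dt), size=(n_steps, d))
        B = np.vstack([np.zeros(d), np.cumsum(dB, axis=0)])  # B(t) on the grid
        W = B @ A.T                                          # W(t) = A B(t)
        R = u + c * grid[:, None] - W                        # R_i(t, u_i) = u_i + c_i t - W_i(t)
        ruined = (R < 0).sum(axis=1)                         # number of ruined components at each t
        if np.any(ruined[in_window] >= k):
            hits += 1
    return hits / n_paths

# Hypothetical example: d = 3, k = 2, mild positive dependence.
A = np.linalg.cholesky(np.array([[1.0, 0.5, 0.3],
                                 [0.5, 1.0, 0.4],
                                 [0.3, 0.4, 1.0]]))
print(simulate_psi_k(A, c=np.array([1.0, 1.0, 1.0]), u=np.array([2.0, 2.0, 2.0]), k=2))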
Although the probability of multiple simultaneous failures seems very difficult to compute, our first result below, motivated by Korshunov and Wang (2020)[Thm 1.1], shows that ψ_k(S, T, u) can be bounded by the multivariate Gaussian survival probability, namely by (1). When u → ∞ we can approximate p_T(u) utilising the Laplace asymptotic method, see e.g., Korshunov et al. (2015), whereas for small and moderate values of u it can be calculated or simulated with sufficient accuracy (see the sketch below). Our next result gives bounds for ψ_k(S, T, u) in terms of p_T(u).
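Before stating the bounds, we note that a multivariate Gaussian survival probability of the type entering them can be evaluated numerically for small and moderate u. The sketch below is ours and rests on an assumption, since Eq. 1 is not reproduced above: it reads p_T(u) as P{W_i(T) − c_i T > u_i for all i} for the Gaussian vector W(T) − cT with covariance matrix T Σ, and evaluates it with scipy.

import numpy as np
from scipy.stats import multivariate_normal

def gaussian_survival(u, c, Sigma, T=1.0):
    # P{ W_i(T) - c_i T > u_i for all i } with W(T) ~ N(0, T * Sigma).
    # Equivalent to the multivariate normal CDF of cT - W(T) ~ N(cT, T * Sigma)
    # evaluated at -u, since {W(T) - cT > u} = {cT - W(T) < -u} componentwise.
    u, c = np.asarray(u, float), np.asarray(c, float)
    return multivariate_normal(mean=c * T, cov=T * np.asarray(Sigma, float)).cdf(-u)

Sigma = [[1.0, 0.5], [0.5, 1.0]]   # hypothetical correlation matrix
print(gaussian_survival(u=[2.0, 2.0], c=[1.0, 1.0], Sigma=Sigma))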

Theorem 1.1 If the matrix A is non-singular, then for any positive integer k ≤ d
The bounds in Eq. 2 indicate that it might be possible to derive an approximation of ψ_k(S, T, u) for large threshold u, which has already been shown for k = d = 2 in Dȩbicki et al. (2020). In this paper we consider the general case k ≤ d, d > 2, discussing both the finite time interval (i.e., T = 1) and the infinite time horizon case with T = ∞, extending the results of Dȩbicki et al. (2018), where the case d = k is considered.
In Section 2 we explain the main ideas that lead to the approximation of ψ_k(S, T, u). Section 3 discusses some interesting special cases, whereas the proofs are postponed to Section 4. Some technical calculations are displayed in the Appendix.

Main results
In this section W(t), t ≥ 0, is as in the Introduction and, for a given positive integer k ≤ d, we shall investigate the approximation of ψ_k(S, T, u), where we set u = au with a ∈ R^d \ (−∞, 0]^d and u sufficiently large.
Let hereafter I denote a non-empty index subset of {1, . . . , d}. For a given vector x ∈ R^d we shall write x_I for the subvector of x obtained by dropping the components not in I. Set next

where E_I(au) was defined in Eq. 1.

In vector notation for any
The following lower bound (obtained by the Bonferroni inequality), together with the upper bound, is crucial for the derivation of the exact asymptotics of ψ_k(S, T, au) as u → ∞.
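In the usual reading, which we assume here since the displays (4) and (5) are not reproduced, the lower bound is Σ_I P(E_I(au)) − Σ_{I≠J} P(E_I(au) ∩ E_J(au)) and the upper bound is Σ_I P(E_I(au)), with both sums over index sets I of cardinality k. The following small sketch (ours) computes such Bonferroni bounds from given single-event and pairwise probabilities.

import numpy as np

def bonferroni_bounds(p_single, p_joint):
    # Bonferroni bounds for the probability of a union of m events:
    #   sum_i P(E_i) - sum_{i<j} P(E_i and E_j) <= P(union) <= sum_i P(E_i).
    # p_single: length-m array of P(E_i); p_joint: (m, m) array of P(E_i and E_j).
    p_single = np.asarray(p_single, float)
    p_joint = np.asarray(p_joint, float)
    m = len(p_single)
    upper = p_single.sum()
    lower = upper - sum(p_joint[i, j] for i in range(m) for j in range(i + 1, m))
    return max(lower, 0.0), min(upper, 1.0)

# Hypothetical numbers for three index sets of cardinality k.
print(bonferroni_bounds([1e-3, 8e-4, 5e-4], np.full((3, 3), 1e-6)))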
As we shall show below, the upper bound (5) turns out to be asymptotically exact as u → ∞. The following theorem constitutes the main finding of this contribution.
Moreover, Eq. 6 holds also if T = ∞, provided that c and a + ct have no more than k − 1 non-positive components for all t ≥ 0.
Essentially, the above result is the claim that the second term in the Bonferroni lower bound (4) is asymptotically negligible. In order to prove this, the asymptotics of ψ_{|I|}(S, T, a_I u) has to be derived. For the special case that I has only two elements and S = 0, its approximation has been obtained in Dȩbicki et al. (2020). Note in passing that the assumption in Theorem 2.1 that a has no more than k − 1 non-positive components excludes the case that there exists a set I ⊂ {1, . . . , d}, |I| = k, such that ψ_I(0, T, a_I u) does not tend to 0 as u → ∞, which, being a non-rare event, is not of interest in this contribution.
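As a quick numerical aid (ours, not from the paper), the following sketch checks on a grid how many components of a + ct are non-positive, which is the quantity constrained by the assumptions of Theorem 2.1; the truncation of the time horizon and the grid resolution are hypothetical choices and the check is only approximate.

import numpy as np

def max_nonpositive_components(a, c, t_max=1000.0, n_grid=100001):
    # Largest number of non-positive components of a + c t over a grid in [0, t_max].
    # The theorem's assumption requires this number to stay below k for all t >= 0.
    a, c = np.asarray(a, float), np.asarray(c, float)
    t = np.linspace(0.0, t_max, n_grid)
    counts = ((a[None, :] + t[:, None] * c[None, :]) <= 0).sum(axis=1)
    return int(counts.max())

# Hypothetical example with d = 4: one component eventually becomes non-positive.
print(max_nonpositive_components(a=[1.0, 1.0, 0.5, 2.0], c=[1.0, 1.0, -0.1, 0.5]))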
The next result extends the findings of Dȩbicki et al. (2020) to the case d > 2. For notational simplicity we consider the case where I has d elements and thus avoid indexing by I. Recall that in our model W(t) = A B(t), where B(t) has independent standard Brownian motion components and A is a d × d non-singular real-valued matrix. Consequently Σ = A A^⊤ is a positive definite matrix. Hereafter 0 ∈ R^d is the column vector with all elements equal to 0. Denote by Π_Σ(a) the quadratic programming problem: minimise x^⊤ Σ^{−1} x over all x ≥ a.
Its unique solution ã is such that

where ã_J is defined if J = {1, . . . , d} \ I is non-empty. The index set I is unique with m = |I| ≥ 1 elements; see the next lemma (or Dȩbicki et al. (2018)[Lem 2.1]) for more details.

Lemma 2.2 Let Σ be a d × d positive definite matrix and let a ∈ R^d \ (−∞, 0]^d. Then Π_Σ(a) has a unique solution ã given in (7), with I a unique non-empty index set with m ≤ d elements such that

In the following we set λ = Σ^{−1} ã. In view of the above lemma, we adopt the convention that when J is empty the corresponding indexing should be disregarded, so that the last inequality above is irrelevant. The next theorem extends the main result in Dȩbicki et al. (2020) and further complements the findings presented in Theorem 2.1, showing that the simultaneous ruin probability (i.e., k = d) behaves, up to some constant, asymptotically as u → ∞ the same as p_T(u). For notational simplicity and without loss of generality we consider next T = 1.
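The quadratic programming problem Π_Σ(a) and the characterisation in Lemma 2.2 can be explored numerically. The sketch below is a minimal illustration using a generic convex solver (SLSQP from scipy, our choice rather than anything prescribed by the paper); it returns the minimiser ã, the vector λ = Σ^{−1}ã and the active index set {i : λ_i > 0}, with hypothetical example data.

import numpy as np
from scipy.optimize import minimize

def qp_solution(Sigma, a, tol=1e-8):
    # Solve Pi_Sigma(a): minimise x' Sigma^{-1} x subject to x >= a (componentwise).
    Sigma, a = np.asarray(Sigma, float), np.asarray(a, float)
    Sigma_inv = np.linalg.inv(Sigma)
    res = minimize(
        lambda x: x @ Sigma_inv @ x,
        x0=np.maximum(a, 1.0),                        # feasible starting point
        jac=lambda x: 2.0 * Sigma_inv @ x,
        constraints=[{"type": "ineq", "fun": lambda x: x - a}],
        method="SLSQP",
    )
    a_tilde = res.x
    lam = Sigma_inv @ a_tilde                          # lambda = Sigma^{-1} a_tilde
    I_active = np.where(lam > tol)[0]                  # indices with strictly positive multiplier
    return a_tilde, lam, I_active

# Hypothetical equi-correlated example with d = 3 and rho = 0.5.
Sigma = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
print(qp_solution(Sigma, a=np.array([1.0, 0.2, -0.5])))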

Theorem 2.3
If a ∈ R^d has at least one positive component and A is non-singular, then for all S ∈ [0, 1)

where

Remark 2.4 i) By Lemma 4.6 below, taking T = 1 therein (hereafter ϕ denotes the probability density function (pdf) of B(1)), as u → ∞,

where λ = Σ^{−1} ã and, if J = {1, . . . , d} \ I is non-empty, U = {j ∈ J : ã_j = a_j}. When J is empty the conditional probability related to U above is set to 1.
ii) Combining Theorems 2.1 and 2.3 for all S ∈ [0, 1) and all a ∈ R d with no more than k − 1 non-positive components we have for some C > 0 and some I * ⊂ {1, . . . , d} with k elements.
iii) Comparing the results of Theorem 2.3 and Dȩbicki et al. (2018) we obtain

lim sup

iv) Define the failure time (for simplicity consider k = d) for our multidimensional model by

see the proof in Section 4.

Examples
In order to illustrate our findings we shall consider three examples, assuming that Σ is a positive definite correlation matrix. The first example is dedicated to the simplest case k = 1. In the second one we discuss k = 2, restricting a to have all components equal to 1, followed by the last example, where only the assumption that Σ is an equi-correlated correlation matrix is imposed. In this section T = 1 and S ∈ [0, 1) is fixed.
Example 1 (k = 1): Suppose that a has all components positive. In view of Theorem 2.1 we have that

where B is a standard Brownian motion. It follows easily that

Example 2 (k = 2 and a = 1): Suppose next that k = 2 and a has all components equal to 1. By Theorems 2.1 and 2.3 we have that, as u → ∞,

where 1 ∈ R^d has all components equal to 1. Using further Remark 2.4 we obtain

P min

Here we set

The same holds also if ρ_{i,j} = ρ_{i*,j*} and c_i + c_j > c_{i*} + c_{j*}. If we denote by τ the maximum of all the ρ_{i,j}'s and by c* the maximum of c_i + c_j over all i, j such that ρ_{i,j} = τ, then we conclude that

Note that in this case C_{i,j}(1) does not depend on i and j and equals

Example 3 (Equi-correlated risk model): We consider a matrix A such that Σ = A A^⊤ is an equi-correlated non-singular correlation matrix with off-diagonal entries equal to ρ ∈ (−1/(d − 1), 1). Let a ∈ R^d have at least one positive component and assume for simplicity that its components are ordered, i.e., a_1 ≥ a_2 ≥ · · · ≥ a_d, and thus a_1 > 0. The inverse of Σ equals

where J_d is the identity matrix. First we determine the index set I corresponding to the unique solution of Π_Σ(a). We have for this case that I with m elements is unique and, in view of Eq. 7,

with 0 ∈ R^d the origin. From the above, m = |I| = d if and only if

which holds in the particular case that all the a_i's are equal and positive. When the above does not hold, the second condition on the index set I given in Eq. 7 reads

In view of Eq. 13, for any positive integer k ≤ d and any S ∈ [0, 1) we have

Note that the case ρ = 0 is treated in Bai et al. (2018)[Prop. 3.6] and follows as a special case of this example.
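For the equi-correlated case the inverse of Σ has the standard closed form Σ^{−1} = (1/(1 − ρ)) [I_d − ρ/(1 + (d − 1)ρ) 1 1^⊤], written here in our own notation (I_d the identity matrix, 1 the all-ones vector) since the display above is not reproduced; the short sketch below verifies it numerically.

import numpy as np

def equicorr_inverse(d, rho):
    # Closed-form inverse of Sigma = (1 - rho) I_d + rho * ones((d, d)).
    return (np.eye(d) - rho / (1.0 + (d - 1) * rho) * np.ones((d, d))) / (1.0 - rho)

d, rho = 5, 0.3
Sigma = (1.0 - rho) * np.eye(d) + rho * np.ones((d, d))
assert np.allclose(equicorr_inverse(d, rho) @ Sigma, np.eye(d))
print("closed form matches the numerical inverse")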

Proof of Theorem 1.1
Our proof below is based on the idea of the proof of Korshunov and Wang (2020)[Thm 1.1], where the case with c having zero components, k = d and S = 0 has been considered. Recall the definition of the sets E_I(u) and E(u) introduced in Eq. 1 for any non-empty I ⊂ {1, . . . , d} such that |I| = k ≤ d. With this notation we have

For the lower bound, we note that

By the fact that Brownian motion has continuous sample paths almost surely,

where ∂A stands for the topological boundary (frontier) of the set A ⊂ R^d. Consequently, by the strong Markov property of Brownian motion, we can write further

Crucial is that the boundary ∂E(u) can be represented as the following union

For every x ∈ F_I(u), using the self-similarity of Brownian motion, for all non-empty index sets I ⊂ {1, . . . , d} and all t ∈ (S, T)

Consequently, using further Eq. 17 we obtain the claimed bound on ψ_k(S, T, u), establishing the proof.

Proof of Theorem 2.1
The results in this section hold under the assumption that Σ = A A^⊤ is positive definite, which is equivalent to our assumption that A is non-singular. The next lemma is a consequence of Hashorva (2019)[Lem 2]. We recall that ϕ denotes the probability density function of B(1).
where α is some integer, ã is the solution of the quadratic programming problem Π_Σ(a), Σ = A A^⊤, and I is the unique index set that determines the solution of Π_Σ(a).
We agree in the following that if I is empty, then simply the term A I (t) should be deleted from the expressions below; recall that A I (t) is defined in Eq. 3.
We state next three lemmas utilised in the case T < ∞. Their proofs are displayed in the Appendix.
Case T < ∞. According to Theorem 1.1 and Lemma 4.1 it is enough to give the proof for S ∈ (0, T). In view of the self-similarity of Brownian motion we assume for simplicity T = 1. Recall that in our notation Σ = A A^⊤ is the covariance matrix of W(1), which is non-singular, and we denote its pdf by ϕ. In view of Eqs. 19 and 20, for all S ∈ (0, 1) there exists some ν > 0 such that as u → ∞

Note that we may utilise Eqs. 19 and 20 for sets I and J of cardinality k because of the assumption that a has no more than k − 1 non-positive components. Hence any vector a_I has at least one positive component.
Further, by Theorem 1.1 and the inclusion-exclusion formula we have that for some K > 0 and all u sufficiently large

ψ_k(S, 1, u)

Hence the claim follows from Eqs. 4 and 5.
Case T = ∞. Using the self-similarity of Brownian motion we have

For t > 0 define (22)

Since lim_{t↓0} r_I(t) = ∞, we set below r_I(0) = ∞.
In view of Lemma 4.1 we have, as u → ∞,

where \widetilde{a_I + c_I t} is the solution of the quadratic programming problem Π_{tΣ_{II}}(a_I + c_I t), ϕ_{I,t}(x) is the pdf of W_I(t), α is some integer and C_1, C_2 are positive constants that do not depend on u. For notational simplicity we shall omit below the subscript I. The rest of the proof is established by utilising the following lemmas, whose proofs are displayed in the Appendix.
Combining the above two lemmas we have that for any two index sets I, J ⊂ {1, . . . , d} of cardinality k there is some index set K ⊂ {1, . . . , d} such that, as u → ∞,

The proof follows now from Eqs. 4 and 5.

Proof of Theorem 2.3
Below we set δ(u, ε) := 1 − εu^{−2} and denote by ã the unique solution of the quadratic programming problem Π_Σ(a). We denote below by I the index set that determines the unique solution of Π_Σ(a), where a ∈ R^d has at least one positive component (see Lemma 2.2). If J = {1, . . . , d} \ I is non-empty, then we set below U = {j ∈ J : ã_j = a_j}. The number of elements |I| of I is denoted by m, which is a positive integer.
The next lemma is proved in the Appendix.

Lemma 4.6
For any ε > 0, a ∈ R^d \ (−∞, 0]^d, c ∈ R^d and all sufficiently large u there exists C > 0 such that

for all constants ε_1 < ε_2. We set C(c) equal to 1 if the set U defined in Remark 2.4 is empty.

and thus the proof follows by applying Eq. 24.

Proof of Eq. 14
The proof is similar to that of Dȩbicki et al. (2017)[Thm 2.5] and therefore we highlight only the main steps. If T > S ≥ 0, then by the definition of τ(u) and the self-similarity of Brownian motion

Thus, without loss of generality, in the rest of the proof we suppose that T = 1 > S ≥ 0.
We note that

Hence by Theorem 2.3, the fact that

Moreover, following the same reasoning as above

Proof of Lemma A.1 For notational simplicity we shall assume that I = {1, . . . , d} and set K_i = I \ {i}. By the assumption, for all i ∈ I the vector a_{K_i} has at least one positive component and Σ = A A^⊤ is positive definite. In view of Lemma 4.1, for any fixed t > 0 and some positive constants C_1, C_2 we have

Since the covariance matrix of W(t) is positive definite, E is a full-dimensional ellipsoid in R^d. By definition, E ∩ S_i = {ã}. Define the following lines in R^d: l_i = {x ∈ R^d : x_j = ã_j for all j ∈ K_i} and observe that, since l_i ⊂ S_i, we have l_i ∩ E = {ã}, and the lines l_i are linearly independent. Since the boundary of E is smooth, there cannot be more than d − 1 linearly independent tangent lines at the point ã, which leads to a contradiction.
Proof of Lemma 4.2 First note that since I ≠ J, we have |I ∪ J| ≥ k + 1. Consequently, we can find some index set K such that

and further a_K has at least two positive components. Applying Lemma A.1 for any t ∈ [0, 1] and some ν > 0

If s = t, then applying Lemma 4.1

Next, if s < 1, then applying Lemma 4.1 we obtain

A similar asymptotic bound follows for t < 1, whereas if s = t = 1, the first claim follows directly from the case s = t discussed above. We show next Eq. 19. If s < t, then s < 1 and applying Lemma 4.1 we obtain

A similar asymptotic bound follows for t < s or s = t ≤ 1 by applying Eq. 18, establishing the proof.

Proof of Lemma 4.3 Define for s, t ∈ [S, 1] the Gaussian random vector W(s, t) = (W_{I\J}(s)^⊤, W_{J\I}(t)^⊤, W_{I∩J}(min(s, t))^⊤)^⊤, with covariance matrix D(s, t). We show first that this matrix is positive definite. For this we assume that s ≤ t. As D(s, t) is some covariance matrix, we know that it is non-negative definite. Choose some vector

Using that W(t) has independent increments, this variance is equal to the sum of the variances. Hence both of them should be equal to zero. In particular this means that Var(⟨W(s), v⟩) = 0. Hence, as s ≥ S > 0, we have that v = 0. Thus D(s, t) is positive definite and D^{−1}(s, t) exists. Set further a = (a_{I\J}^⊤, a_{J\I}^⊤, a_{I∩J}^⊤)^⊤, c(s, t) = (s c_{I\J}^⊤, t c_{J\I}^⊤, min(s, t) c_{I∩J}^⊤)^⊤.
With this notation we have

Let ã(s, t) = arg min_{x ≥ a} x^⊤ D^{−1}(s, t) x be the unique solution of Π_{D(s,t)}(a) and let further w(s, t) = D^{−1}(s, t) ã(s, t) be the solution of the dual problem. We denote by I(s, t) the index set related to the quadratic programming problem Π_{D(s,t)}(a). Then w(s, t) has non-negative components and, according to Lemma 2.2, since both s, t ≥ S > 0 we have a^⊤ w(s, t) = ã^⊤(s, t) w(s, t) = ã^⊤(s, t) D^{−1}(s, t) ã(s, t) > 0.

Proof of Lemma A.4 For notational simplicity we omit below the subscript I. Since for any t > 0 we have Var(W(t)) = tΣ, by Lemma 4.1

where C is some positive constant, α(t) is an integer and p̃(t) is the unique solution of Π_{tΣ}(a + ct), which can also be reformulated as

If t ≠ t̃, then r(t) − r(t̃) = τ > 0 and

Lemma A.5 Let a, c ∈ R^d be such that a + ct has at least one positive component for all t in a compact set T ⊂ (0, ∞). If Σ = A A^⊤ is positive definite, then there exist constants C > 0, γ > 0 and t ∈ T such that for all u > 0

P{∃ t ∈ T : W(t) > (a + ct)√u} ≤ C u^γ e^{−(u/2) r(t)}.

If we also have that for some non-overlapping index sets I, J ⊂ {1, . . . , d} and some compact subset T ⊂ [0, ∞)² both vectors a_I + c_I t_1 and a_J + c_J t_2 have at least one positive component for all (t_1, t_2) ∈ T, then for some t = (t_1, t_2) ∈ T, as u → ∞,

P{∃ t ∈ T :

Moreover, the same estimate holds if I and J are overlapping and t_1 ≠ t_2 for all (t_1, t_2) ∈ T.
Proof of Lemma A.5 Denote by D(t) the covariance matrix of W(t), which by the assumption on A is positive definite. Let ã(t) = arg min_{x ≥ a+ct} x^⊤ D^{−1}(t) x be the solution of Π_{D(t)}(a + ct), t > 0, and let further w(t) = D^{−1}(t) ã(t) be the solution of the dual optimization problem. In view of Eq. 10, w_I(t) has positive components for I the unique index set related to Π_{D(t)}(a + ct) and moreover by Eq. 9

We have further that

for some t ∈ T, since T is compact. Since f(t) > 0, t ∈ T, is continuous, we may apply the Piterbarg inequality (as in the proof of Eq. 20) and obtain

P{∃ t ∈ T : W(t) ≥ (a + ct)√u} ≤ C u^γ e^{−u/(2σ²)}

for some positive constants γ and C which depend only on W(t) and d. Since by definition r(t) = 1/σ², the proof of the first inequality is complete. The next assertion may be obtained with the same arguments, but for the vector-valued random process W(s, t) = (W_I(s)^⊤, W_J(t)^⊤)^⊤.
By the definition of T, for any (s, t) ∈ T we have |Var(W(s, t))| > 0; thus we can apply the Piterbarg inequality and, in consequence, using Lemma 4.1, the claim follows.

Lemma A.6 Suppose that Σ = A A^⊤ is positive definite. For any subset I ⊂ {1, . . . , d}, if c_I ∈ R^{|I|} has at least one positive component and a_I + c_I t ∈ R^{|I|} has at least one positive component for all non-negative t, then for some positive constant ν, with t̃ = arg min_{t>0} r_I(t), and all T large

Proof of Lemma A.6 For notational simplicity we omit below the subscript I. For some given T > t̃ we have, using Lemmas A.5 and A.3,

where s > 0 and t_i ∈ [T + i, T + i + 1]. The last integral is finite and decreasing for sufficiently large u. Hence the claim follows with the same arguments as in the proof of Lemma A.4.
Using Lemmas A.5, A.6 and

Otherwise, using the definition of T_1, |t_1 − t̃_I| ≤ |s_1 − t̃_I| = 0, so t_1 = t̃_I and thus

This probability can be bounded using Remark A.2, namely we have

for some i ∈ I ∪ J and η > 0. As |I| = |J| = k and I ≠ J, we have |I ∪ J| ≥ k + 1 and thus |I ∪ J \ {i}| ≥ k. Consequently, we have

With similar arguments we obtain further

P{∃ (s, t) ∈ T_2 : A*_I(s) ∩ A*

Hence the claim follows.
Recall that ã stands for the unique solution of the quadratic programming problem Π_Σ(a).