## 1 Introduction

In this paper we are interested in the probabilistic aspects of multiple simultaneous failures typically occurring due to pandemic-type events. A key benchmark risk model considered here is the d-dimensional Brownian risk model (Brm)

$$\boldsymbol{R}(t, \boldsymbol{u})=(R_{1}(t,u_{1}),\ldots,R_{d}(t,u_{d}))^{\top}= \boldsymbol{u} {+ \boldsymbol{c} t- \boldsymbol{W}(t)}, \quad t\ge 0,$$

where c = (c1, … , cd), u = (u1, … , ud) are vectors in $$\mathbb {R}^{d}$$ and

$$\boldsymbol{W}(t)= {\Gamma} \boldsymbol B(t), \quad t\in \mathbb{R} ,$$

with Γ a d × d real-valued non-singular matrix and $$\boldsymbol B(t)=(B_{1}(t) ,\ldots , B_{d}(t))^{\top } , t \in \mathbb {R}$$ a d-dimensional Brownian motion with independent components which are standard Brownian motions.

By bold symbols we denote column vectors, operations with vectors are meant component-wise and ax = (ax1, … , axd) for any scalar $$a\in \mathbb {R}$$ and any $$\boldsymbol {x}\in \mathbb {R}^{d}$$.

Indeed, the Brm arises as a natural limiting model in many statistical applications. Moreover, as shown in Delsing et al. (2020), such a risk model appears naturally in insurance applications. Since the Brm is a natural limiting model, it can serve as a benchmark for various complex models. Given the fundamental role of Brownian motion in applied probability and statistics, it is also of theoretical interest to study failure events arising from this model. Specifically, in this contribution we are interested in the behaviour of the probability of multiple simultaneous failures occurring in a given time horizon $$[S,T] \subset [0, \infty ]$$.

In our setting failures can be defined in various ways. Consider first the failure of a single component of our risk model. Namely, we say that the i th component of our Brm has a failure (or that ruin occurs) if Ri(t, ui) = ui + cit − Wi(t) < 0 for some t ∈ [S, T]. The extreme case of a catastrophic event is when all d components fail simultaneously. Typically, for pandemic-type events at least k components of the model fail simultaneously, with k large and the extreme case being k = d. In mathematical notation, for a given positive integer k ≤ d, of interest is the calculation of the following probability

$$\begin{array}{@{}rcl@{}} \psi_k(S,T,\boldsymbol{u})&=& \mathbb{P} \left\{ \exists t\in[S,T], \exists \mathcal{I}\subset\{1,\ldots, d\},|\mathcal{I}|=k: \cap_{i\in \mathcal{I}} \{ R_i(t, { u_i}) < 0\} \right \} \\ &=& \mathbb{P} \left\{ \exists t\in[S,T], \exists \mathcal{I}\subset\{1,\ldots, d\},|\mathcal{I}| = k: {\cap_{i\in \mathcal{I}}} {\{}W_i(t)- c_i t\!>\!u_i{\}} \right \} , \end{array}$$

where $$|\mathcal {I}|$$ denotes the cardinality of the set $$\mathcal {I}$$. If T is finite, by the self-similarity property of the Brownian motion ψk(S, T, u) can be derived from the case T = 1, whereas $$T=\infty$$ has to be treated separately.
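For small and moderate thresholds, ψk(S, T, u) can also be estimated directly by simulation. The following sketch (not from the paper; it assumes numpy, and the helper name `psi_k_mc` is ours) illustrates the definition above on a crude Euler grid by counting paths along which at least k components fail at a common grid time; the discretisation slightly underestimates the continuous-time probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_k_mc(Gamma, c, u, k, S=0.0, T=1.0, n_steps=200, n_paths=5000):
    """Crude Monte Carlo estimate of psi_k(S, T, u): the probability that at
    least k components of u + c t - W(t) are negative at some common grid
    time t in [S, T]."""
    d = Gamma.shape[0]
    dt = T / n_steps
    t = np.linspace(dt, T, n_steps)
    # W(t) = Gamma B(t) simulated at the grid points, all paths at once.
    dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, d))
    W = np.cumsum(dB, axis=1) @ Gamma.T          # shape (n_paths, n_steps, d)
    exceed = W - np.asarray(c) * t[None, :, None] > np.asarray(u)
    in_window = t >= S                           # restrict to [S, T]
    simult = exceed[:, in_window, :].sum(axis=2) >= k
    return float(simult.any(axis=1).mean())
```

Since requiring k simultaneous failures is more stringent for larger k, the estimate is non-increasing in k, which gives a quick sanity check of the implementation.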

There are no results in the literature investigating ψk(S, T, u) for general k. The particular case k = d, for which ψd(S, T, u) coincides with the simultaneous ruin probability, has been studied in different contexts, see e.g., Lieshout and Mandjes (2007), Avram et al. (2008a), Avram et al. (2008b), Dȩbicki et al. (2018), Ji and Robert (2018), Foss et al. (2017), Pan and Borovkov (2019), Borovkov and Palmowski (2019), Ji (2020), Hu and Jiang (2013), Samorodnitsky and Sun (2016), and Dombry and Rabehasaina (2017). The case d = 2 of the Brm has been recently investigated in Dȩbicki et al. (2020).

Although the probability of multiple simultaneous failures seems very difficult to compute, our first result below, motivated by Korshunov and Wang (2020)[Thm 1.1], shows that ψk(S, T, u) can be bounded by the multivariate Gaussian survival probability, namely by

$$p_{T}(\boldsymbol{u})= \mathbb{P} \left\{ {(W_{1}(T)- c_{1} T ,\ldots, W_{d}(T)- c_{d} T)} \in \boldsymbol E(\boldsymbol{u}) \right \} ,$$

where

$$\boldsymbol E(\boldsymbol{u})=\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\bigcup}\boldsymbol{E_{\mathcal{I}}(\boldsymbol{u})}=\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\bigcup}{\{\boldsymbol{x}\in\mathbb{R}^d:\forall i\in \mathcal{I}:~\boldsymbol{x}_i\geq\boldsymbol{u}_i\}}.$$
(1)

When $$u\to \infty$$ we can approximate pT(u) utilising the Laplace asymptotic method, see e.g., Korshunov et al. (2015), whereas for small and moderate values of u it can be calculated or simulated with sufficient accuracy. Our next result gives bounds for ψk(S, T, u) in terms of pT(u).
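To make the simulation remark concrete: pT(u) is a single multivariate Gaussian probability, and membership in E(u) is simply "at least k coordinates exceed their thresholds". A minimal sketch, assuming numpy (the helper name `p_T_mc` is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_T_mc(Gamma, c, u, k, T=1.0, n=200000):
    """Monte Carlo estimate of p_T(u): the probability that W(T) - cT has at
    least k coordinates exceeding the corresponding thresholds u_i, i.e.
    that W(T) - cT lies in the union of orthants E(u)."""
    d = Gamma.shape[0]
    # W(T) = Gamma B(T) with B(T) ~ sqrt(T) N(0, I_d).
    X = rng.normal(size=(n, d)) @ (np.sqrt(T) * Gamma.T) - np.asarray(c) * T
    return float(np.mean((X >= np.asarray(u)).sum(axis=1) >= k))
```

For independent components, k = d and c = 0 the estimate can be compared with the product of the marginal tail probabilities.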

### Theorem 1.1

If the matrix Γ is non-singular, then for any positive integer k ≤ d, all constants $$0\leq S < T< \infty$$ and all $$\boldsymbol {c},\boldsymbol {u}\in \mathbb {R}^{d}$$

$$p_T(\boldsymbol{u}) \le \psi_k({S},T, \boldsymbol{u}) \le K p_T(\boldsymbol{u}),$$
(2)

where $$K= 1/\min \limits _{\substack {\mathcal {I}\subset \{1,\ldots , d\},|\mathcal {I}|=k}}\mathbb {P} \left \{ \forall _{i\in \mathcal {I}}: W_{i}(T)> \max \limits (0,c_{i} T) \right \} >0$$.
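The constant K is explicit and can be evaluated numerically by enumerating all index sets of size k. A sketch assuming scipy, whose `multivariate_normal.cdf` evaluates Gaussian orthant probabilities; since the Gaussian vector is centred, its survival probability at a point equals its cdf at the negated point (the helper name `bound_constant_K` is ours):

```python
import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def bound_constant_K(Gamma, c, k, T=1.0):
    """Evaluate K = 1 / min_I P{ for all i in I: W_i(T) > max(0, c_i T) },
    the constant in the upper bound of Theorem 1.1, over all |I| = k."""
    d = Gamma.shape[0]
    Sigma = Gamma @ Gamma.T
    best = 1.0
    for I in combinations(range(d), k):
        I = list(I)
        a = np.maximum(0.0, np.asarray(c)[I] * T)
        # P{X > a} = P{X < -a} for a centred Gaussian vector X.
        p = multivariate_normal(mean=np.zeros(k),
                                cov=T * Sigma[np.ix_(I, I)]).cdf(-a)
        best = min(best, p)
    return 1.0 / best
```

For independent standard components with c = 0 and k = 2 each orthant probability is 1/4, so K = 4.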

The bounds in Eq. 2 indicate that it might be possible to derive an approximation of ψk(S, T, u) for large threshold u, as has already been shown for k = d = 2 in Dȩbicki et al. (2020). In this paper we consider the general case k ≤ d, d > 2, discussing both the finite time horizon (i.e., T = 1) and the infinite time horizon $$T=\infty$$, extending the results of Dȩbicki et al. (2018) where d = k is considered.

In Section 2 we explain the main ideas that lead to the approximation of ψk(S, T, u). Section 3 discusses some interesting special cases, whereas the proofs are postponed to Section 4. Some technical calculations are displayed in the Appendix.

## 2 Main results

In this section W(t), t ≥ 0 is as in the Introduction and for a given positive integer k ≤ d we shall investigate the approximation of ψk(S, T, u), where we set u = a u with a in $$\mathbb {R}^{d}\setminus (-\infty ,0]^{d}$$ and u > 0 sufficiently large.

Let hereafter $$\mathcal {I}$$ denote a non-empty subset of {1, … , d}. For a given vector, say $$\boldsymbol {x}\in \mathbb {R}^{d}$$, we shall write $$\boldsymbol {x}_{\mathcal {I}}$$ for the subvector of x obtained by dropping the components not in $$\mathcal {I}$$. Set next

$$\psi_{\mathcal{I}}(S,T, \boldsymbol {a}_{\mathcal{I}} u)= \mathbb{P} \left\{ \exists {t\in [S,T]}: A_{\mathcal{I}}(t) \right \} ,$$

with

$$A_{\mathcal{I}} (t)=\{ \boldsymbol{W}(t)- \boldsymbol{c} t\in\boldsymbol E_{\mathcal{I}}(\boldsymbol {a} u) \}=\{ \forall i\in\mathcal{I}:\ ~W_i(t)- c_it\geq a_i u \},$$
(3)

where $$\boldsymbol {E}_{\mathcal {I}}(\boldsymbol {a} u)$$ was defined in Eq. 1.

In vector notation for any $$u\in \mathbb {R}$$

$$\psi_k(S,T,\boldsymbol {a} u) = \mathbb{P} \left\{ \exists t\in[S,T]:~\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\bigcup}A_{\mathcal{I}}(t) \right \} = \mathbb{P} \left\{ \underset{\underset{{|\mathcal{I}|=k}}{\mathcal{I}\subset\{1,\ldots, d\}}}{\bigcup}\left\{\exists t\in[S,T]:~A_{\mathcal{I}}(t)\right\} \right \}.$$

The following lower bound (by Bonferroni inequality)

$$\psi_k(S, T,\boldsymbol {a} u) \ge \underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\sum} \psi_{\mathcal{I}} (S,T,\boldsymbol {a}_{\mathcal{I}} u) -\underset{\begin{array}{cc}\mathcal{I},\mathcal{J}\subset\{1,\ldots, d\}\\|\mathcal{I}|=|\mathcal{J}|=k\\\mathcal{I}\not=\mathcal{J} \end{array}}{\sum}\mathbb{P} \left\{ \exists t,s\in[S,T]:~A_{\mathcal{I}}(t)\cap A_{\mathcal{J}}(s) \right \}$$
(4)

together with the upper bound

$$\begin{array}{@{}rcl@{}} \psi_k(S, T,\boldsymbol {a} u) \le \underset{\underset{{|\mathcal{I}|=k}}{\mathcal{I}\subset\{1,\ldots, d\}}}{\sum} \psi_{\mathcal{I}} (S,T,\boldsymbol {a}_{\mathcal{I}} u) \end{array}$$
(5)

are crucial for the derivation of the exact asymptotics of ψk(S, T, au) as $$u\to \infty$$. As we shall show below, the upper bound (5) turns out to be exact asymptotically as $$u\to \infty$$. The following theorem constitutes the main finding of this contribution.

### Theorem 2.1

Suppose that the square d × d real-valued matrix Γ is non-singular. If a has no more than k − 1 non-positive components, where k ≤ d is a positive integer, then for all $$0 \leq S< T < \infty , \boldsymbol {c}\in \mathbb {R}^{d}$$

$$\psi_k(S, T, \boldsymbol {a} u) \sim {\sum}_{\substack{\mathcal{I}\subset\{1,\ldots, d\}\\|\mathcal{I}|=k}} \psi_{\mathcal{I}} (0,T,\boldsymbol {a}_{\mathcal{I}} u) , \quad u\to \infty.$$
(6)

Moreover, Eq. 6 holds also if $$T=\infty$$, provided that c and a + ct have no more than k − 1 non-positive components for all t ≥ 0.

Essentially, the above result is the claim that the second term in the Bonferroni lower bound (4) is asymptotically negligible. In order to prove this, the asymptotics of $$\psi _{\left \lvert \mathcal {I} \right \rvert } (S,T,\boldsymbol {a}_{\mathcal {I}} u)$$ has to be derived. For the special case where $$\mathcal {I}$$ has only two elements and S = 0, its approximation has been obtained in Dȩbicki et al. (2020). Note in passing that the assumption in Theorem 2.1 that a has no more than k − 1 non-positive components excludes the case where there exists a set $$\mathcal {I} \subset \{1,\ldots , d\}, \ |\mathcal {I}|=k$$ such that $$\psi _{\mathcal {I}} (0,T,\boldsymbol {a}_{\mathcal {I}} u)$$ does not tend to 0 as $$u\to \infty$$; due to its non-rare-event nature that case is not of interest in this contribution.

The next result extends the findings of Dȩbicki et al. (2020) to the case d > 2. For notational simplicity we consider the case where $$\mathcal {I}$$ has d elements and thus avoid indexing by $$\mathcal {I}$$. Recall that in our model W(t) = ΓB(t), where B(t) has independent standard Brownian motion components and Γ is a d × d non-singular real-valued matrix. Consequently $${\Sigma }={\Gamma } {\Gamma }^{\top }$$ is a positive definite matrix. Hereafter $$\boldsymbol 0 \in \mathbb {R}^{d}$$ is the column vector with all elements equal to 0. Denote by πΣ(a) the quadratic programming problem:

$$\text{minimise } \boldsymbol{x}^{\top}{\Sigma}^{-1} \boldsymbol{x}, \text{ for all } \boldsymbol{x}\ge \boldsymbol {a}.$$

Its unique solution $$\tilde { \boldsymbol {a}}$$ is such that

$$\tilde{\boldsymbol {a}}_{I}= \boldsymbol {a}_{I}, \ \ ({\Sigma}_{II})^{-1} \boldsymbol{a}_{I}>\boldsymbol{0}_{I}, \ \ \tilde{\boldsymbol {a}}_{J}= {\Sigma}_{JI} ({\Sigma}_{II} )^{-1} \boldsymbol {a}_{I} \ge \boldsymbol {a}_{J},$$
(7)

where $$\tilde {\boldsymbol {a}}_{J}$$ is defined if J = {1, … , d}∖ I is non-empty. The index set I is unique with $$m=\left \lvert I \right \rvert \ge 1$$ elements, see the next lemma (or Dȩbicki et al. (2018)[Lem 2.1]) for more details.

### Lemma 2.2

Let Σ be a d × d positive definite matrix and let $$\boldsymbol {a} \in \mathbb {R}^{d} \setminus (-\infty , 0]^{d}$$. Then πΣ(a) has a unique solution $$\tilde {\textbf {a}}$$ given in (7), with I a unique non-empty index set with m ≤ d elements such that

$$\min_{\boldsymbol{x} \ge \boldsymbol{a}}\boldsymbol{x}^{\top} {\Sigma}^{-1} \boldsymbol{x}= \tilde{\textbf{a}}^{\top} {\Sigma}^{-1} \tilde{\textbf{a}} = \boldsymbol{a}_{I}^{\top} ({\Sigma}_{II})^{-1}\boldsymbol{a}_{I}>0,$$
(8)
$$\boldsymbol{x}^{\top} {\Sigma}^{-1} \tilde{\textbf{a}}= \boldsymbol{x}_F^{\top} ({\Sigma}_{FF})^{-1} {\tilde{\textbf{a}}_F}, \quad \forall \boldsymbol{x}\in \mathbb{R}^d$$
(9)

for any index set F ⊂{1, … , d} containing I. Further if $$\boldsymbol {a}= {(a ,\ldots , a)^{\top } , a}\in (0,\infty )$$, then $$2 \le \left \lvert I \right \rvert \le d$$.
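Since πΣ(a) is a convex programme, the solution ã and the active index set I of Lemma 2.2 can be computed numerically; a sketch assuming scipy (the function name `solve_pi_Sigma` is ours; I is recovered as the set of coordinates where the constraint x ≥ a is active):

```python
import numpy as np
from scipy.optimize import minimize

def solve_pi_Sigma(Sigma, a):
    """Numerically solve pi_Sigma(a): minimise x^T Sigma^{-1} x subject to
    x >= a. Returns the minimiser (a-tilde) and the active index set I."""
    Sinv = np.linalg.inv(Sigma)
    res = minimize(lambda x: x @ Sinv @ x,
                   x0=np.maximum(a, 1.0),            # feasible starting point
                   jac=lambda x: 2.0 * Sinv @ x,
                   bounds=[(ai, None) for ai in a])  # L-BFGS-B under the hood
    atilde = res.x
    I = [i for i, (xi, ai) in enumerate(zip(atilde, a))
         if np.isclose(xi, ai, atol=1e-6)]
    return atilde, I
```

For instance, with $$\Sigma = \begin{pmatrix}1 & 0.9\\ 0.9 & 1\end{pmatrix}$$ and a = (1, 0) one finds ã = (1, 0.9) and I = {1}, in agreement with the characterisation (7).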

In the following we set

$$\boldsymbol \lambda = {\Sigma}^{-1} \tilde{\boldsymbol {a}}.$$

In view of the above lemma

$$\boldsymbol \lambda_I= ({\Sigma}_{II})^{-1} \boldsymbol {a}_I> \boldsymbol 0_I, \ \ \boldsymbol \lambda_J \ge \boldsymbol 0_J,$$
(10)

with the convention that when J is empty the indexing should be disregarded so that the last inequality above is irrelevant.

The next theorem extends the main result in Dȩbicki et al. (2020) and further complements the findings of Theorem 2.1, showing that the simultaneous ruin probability (i.e., k = d) behaves asymptotically as $$u\to \infty$$, up to a constant, in the same way as pT(u). For notational simplicity and without loss of generality we consider next T = 1.

### Theorem 2.3

If $$\boldsymbol {a} \in \mathbb {R}^{d}$$ has at least one positive component and Γ is non-singular, then for all S ∈ [0,1)

$$\psi_d (S,1, \boldsymbol {a} u) \sim C(\boldsymbol {a}) p_1(\boldsymbol {a} u) , \quad u\to \infty,$$
(11)

where $$C(\boldsymbol {a})= {\prod }_{i\in I} \lambda _{i} {\int \limits }_{\mathbb {R}^{m}} \mathbb {P} \left \{ \exists _{t\ge 0}:\boldsymbol {W}_{I}(t)-t\boldsymbol {a}_{I}>\boldsymbol {x}_{I} \right \} e^{\boldsymbol \lambda ^{\top }_{I} \boldsymbol {x}_{I}} \mathrm {d}\boldsymbol {x}_{I} \in (0,\infty )$$.

### Remark 2.4

i)

By Lemma 4.6 below taking T = 1 therein (hereafter φ denotes the probability density function (pdf) of ΓB(1))

$$p_1(\boldsymbol {a} u) =\mathbb{P} \left\{ \boldsymbol{W}(1)-\boldsymbol{c} >u\boldsymbol {a} \right \} \sim {\prod}_{i\in I} \lambda_i^{-1}\mathbb{P} \left\{ \boldsymbol{W}_U(1)> \boldsymbol{c}_U \lvert \boldsymbol{W}_I(1)> \boldsymbol{c}_I \right \} u^{-\left\lvert I \right\rvert}\varphi(u\tilde{\boldsymbol {a}}+\boldsymbol{c})$$
(12)

as $$u\to \infty$$, where $$\boldsymbol \lambda = {\Sigma }^{-1} \tilde {\boldsymbol {a}}$$ and if J = {1, … , d}∖ I is non-empty, then $$U=\{j\in J:\tilde { a}_{j}= a_{j}\}$$. When J is empty the conditional probability related to U above is set to 1.

ii)

Combining Theorems 2.1 and 2.3 for all S ∈ [0,1) and all $$\boldsymbol {a} \in \mathbb {R}^{d}$$ with no more than k − 1 non-positive components we have

$$\psi_k(S, 1,\boldsymbol {a} u) \sim \underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\sum} C(\boldsymbol {a}_{\mathcal{I}}) p_1(\boldsymbol {a}_{\mathcal{I}} u) \sim C \mathbb{P} \left\{ \forall_{i \in \mathcal{I}^{*}}: W_i(1) > ua_i+ c_i \right \} , \quad u\to \infty$$
(13)

for some C > 0 and some $$\mathcal {I}^{*}\subset \{1 ,\ldots , d\}$$ with k elements.

iii)

Comparing the results of Theorem 2.3 and Dȩbicki et al. (2018) we obtain

$$\limsup_{u\to \infty} \frac{ (-\ln \psi_{k}(S_{1}, 1,\boldsymbol {a} u))^{1/2}}{ - \ln \psi_{k}(S_{2}, \infty,\boldsymbol {a} u) }< \infty$$

for all $$S_{1}\in [0,T], S_{2}\in [0,\infty )$$.

iv)

Define the failure time (consider for simplicity k = d) for our multidimensional model by

$$\tau(u)=\inf\{t\ge 0:\boldsymbol{W}(t)-t\boldsymbol{c}>\boldsymbol {a} u\},\qquad u>0.$$

If a has at least one positive component, then for all T > S ≥ 0, x > 0

$$\lim_{u \to \infty} \mathbb{P} \left\{ u^2(T-\tau(u))\geq x|\tau(u) {\in [S,T]} \right \} = {e^{-x\frac{\tilde{\boldsymbol {a}}^{\top}{\Sigma}^{-1}\tilde{\boldsymbol {a}}}{2T^2}}},$$
(14)

see the proof in Section 4.

## 3 Examples

In order to illustrate our findings we shall consider three examples, assuming that $${\Sigma }={\Gamma } {\Gamma }^{\top }$$ is a positive definite correlation matrix. The first example is dedicated to the simplest case k = 1. In the second one we discuss k = 2, restricting a to have all components equal to 1, followed by the last example where only the assumption that Σ is an equi-correlated correlation matrix is imposed. In this section T = 1 and S ∈ [0,1) is fixed.

### Example 1 (k = 1)

Suppose that a has all components positive. In view of Theorem 2.1 we have that

$$\psi_{k}(S,1, \boldsymbol {a} u) \sim {\sum}_{i=1}^{d} \psi_{\{i\}}(0,1, a_{i}u)$$

as $$u\to \infty$$. Note that for any positive integer i ≤ d

$$\psi_{\{i\}}(0, 1, a_{i} u) = \mathbb{P} \left\{ \exists_{t\in [0,1]}: B(t)- c_{i} t > a_{i} u \right \} ,$$

where B is a standard Brownian motion. It follows easily that

$$\psi_{k}(S,1, \boldsymbol {a} u) \sim 2{\sum}_{i=1}^{d}\mathbb{P} \left\{ B(1)> a_{i} u+c_{i} \right \} , \quad u\to \infty.$$
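This one-term approximation can be checked against the classical boundary-crossing formula for Brownian motion with drift, $$\mathbb{P}\{\exists t\in[0,1]: B(t)-ct>a\}=\Phi(-a-c)+e^{-2ac}\Phi(-a+c)$$; a sketch assuming scipy (helper names are ours):

```python
import numpy as np
from scipy.stats import norm

def ruin_prob_1d(a, c):
    """Exact probability that B(t) - c t exceeds the level a for some
    t in [0, 1] (boundary-crossing formula for drifted Brownian motion)."""
    return norm.sf(a + c) + np.exp(-2.0 * a * c) * norm.sf(a - c)

def ruin_asym_1d(a, c):
    """One-term approximation 2 P{B(1) > a + c} from Example 1."""
    return 2.0 * norm.sf(a + c)
```

For c = 0 the two expressions coincide exactly (the reflection principle), while for c ≠ 0 their ratio tends to 1 as the level grows, at rate a/(a − c).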

### Example 2 (k = 2 and a = 1)

Suppose next that k = 2 and a has all components equal to 1. By Theorems 2.1 and 2.3 we have that

$$\psi_{k}(S,1,\boldsymbol 1 u)\sim{\sum}_{\{i,j\}\subset\{1,\ldots, d\}}C_{i,j}(\boldsymbol 1)\mathbb{P} \left\{ \min_{k\in\{i,j\}}(W_{k}(1)-c_{k})>u \right \}$$

as $$u\to \infty$$, where $$\boldsymbol 1 \in \mathbb {R}^{d}$$ has all components equal to 1. Using further Remark 2.4 we obtain

$${\mathbb{P} \left\{\min_{k\in\{i,j\}}(W_k(1)-c_k)>u\right \} } \sim \frac{(1+\rho_{i,j})^2u^{-2}}{2\pi\sqrt{1-\rho_{i,j}^2}}e^{-\frac{u^2}{1+\rho_{i,j}} -{\frac{(c_i+c_j)u}{1+\rho_{i,j}}} -\frac{c_i^2-2\rho_{i,j}c_ic_j+c_j^2}{2(1-\rho^2_{i,j})}}, \quad u\to \infty.$$

Here we set ρi, j = corr(Wi(1), Wj(1)). Consequently, if $$\rho _{i,j}>\rho _{i^{*},j^{*}}$$, then as $$u\to \infty$$

$$\mathbb{P} \left\{ \min_{k\in\{i^{*},j^{*}\}}(W_{k}(1)-c_{k})>u \right \} =o\left( \mathbb{P} \left\{ \min_{k\in\{i,j\}}(W_{k}(1)-c_{k})>u \right \} \right).$$

The same holds also if $$\rho _{i,j}=\rho _{i^{*},j^{*}}$$ and $$c_{i}+c_{j}>c_{i^{*}}+c_{j^{*}}$$. If we denote by τ the maximum of all ρi, j’s and by c∗ the maximum of ci + cj over all i, j such that ρi, j = τ, then we conclude that

$$\psi_{k}(S,1,\boldsymbol 1u)\sim {\sum}_{i,j \in \{1,\ldots, d\},~ \rho_{i,j}=\tau,~c_{i}+c_{j}=c_{*}}C_{i,j}(\boldsymbol 1)\mathbb{P} \left\{ \min_{k\in\{i,j\}}(W_{k}(1)-c_{k})>u \right \} .$$

Note that in this case Ci, j(1) does not depend on i and j and equals

$$(1+\tau)^{-2}{\int}_{\mathbb{R}^{2}}\mathbb{P} \left\{ \exists_{t\ge 0}:B_{1}(t)-t>x,B_{2}(t)-t>y \right \} e^{\frac{x+y}{1+\tau}}\mathrm{d} x\mathrm{d} y,$$

where (B1(t), B2(t)), t ≥ 0 is a 2-dimensional Gaussian process with Bi’s being standard Brownian motions with constant correlation τ. Consequently, as $$u\to \infty$$

$$\psi_{2}(S,1,\boldsymbol 1u)\sim C_*u^{-2}e^{-\frac{u^2}{1+\tau} -\frac{c_*u}{1{+}\tau}},$$

where

$$\begin{array}{@{}rcl@{}}C_*&=&\frac{e^{-\frac{c_*^{2}}{2(1-{\tau}^2)}}}{2\pi\sqrt{1-{\tau}^2}} {\sum}_{i,j \in \{1,\ldots, d\},~ \rho_{i,j}=\tau,~c_i+c_j=c_*}e^{\frac{c_ic_j}{1-\tau}}\\ && \times {\int}_{\mathbb{R}^2}\mathbb{P} \left\{ \exists_{t\ge 0}:B_1(t)-t>x,B_2(t)-t>y \right \} e^{\frac{x+y}{1+{\tau}}}\mathrm{d} x\mathrm{d} y \in (0,\infty). \end{array}$$
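The bivariate tail asymptotics driving this example can be probed numerically. In the sketch below (assuming scipy; the constant used in `bvn_tail_asym` is our reading of the Savage-type expansion behind Lemma 4.1, not a formula quoted verbatim from the text) the exact bivariate survival probability is computed by conditioning on the first coordinate:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def bvn_survival(u, rho, ci=0.0, cj=0.0):
    """P{W_i(1) - c_i > u, W_j(1) - c_j > u} for unit-variance Gaussian
    coordinates with correlation rho, by conditioning on the first one."""
    s = np.sqrt(1.0 - rho**2)
    f = lambda x: norm.pdf(x) * norm.sf((u + cj - rho * x) / s)
    val, _ = quad(f, u + ci, np.inf, epsabs=1e-18, epsrel=1e-10)
    return val

def bvn_tail_asym(u, rho, ci=0.0, cj=0.0):
    """Leading-order tail approximation of the probability above
    (Savage-type constant; an assumption of this sketch)."""
    q = u**2 / (1 + rho) + (ci + cj) * u / (1 + rho) \
        + (ci**2 - 2 * rho * ci * cj + cj**2) / (2 * (1 - rho**2))
    return (1 + rho)**2 * u**-2 / (2 * np.pi * np.sqrt(1 - rho**2)) * np.exp(-q)
```

Already for moderate levels the ratio of the approximation to the exact probability is close to 1, with relative corrections of order (1 + ρ)²/u².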

### Example 3 (Equi-correlated risk model)

We consider a matrix Γ such that $${\Sigma }={\Gamma } {\Gamma }^{\top }$$ is an equi-correlated non-singular correlation matrix with off-diagonal entries equal to ρ ∈ (− 1/(d − 1),1). Let $$\boldsymbol {a}\in \mathbb {R}^{d}$$ have at least one positive component and assume for simplicity that its components are ordered, i.e., a1 ≥ a2 ≥⋯ ≥ ad, and thus a1 > 0. The inverse of Σ equals

$$\left[J_{d} - \boldsymbol 1 \boldsymbol 1^{\top} \frac{ \rho }{ 1+ \rho(d-1)} \right] \frac{1}{ 1-\rho},$$

where Jd is the d × d identity matrix. First we determine the index set I corresponding to the unique solution of πΣ(a). In this case I, with m elements, is unique and, in view of Eq. 7,

$$\boldsymbol \lambda_I= ({\Sigma}_{II})^{-1} \boldsymbol {a}_I= \frac 1 {1- \rho}\left[ \boldsymbol {a}_I - \rho \frac{{\sum}_{i\in I} a_i}{1+ \rho(m-1)} \boldsymbol 1_I\right] > \boldsymbol 0_I,$$
(15)

with $$\boldsymbol 0 \in \mathbb {R}^{d}$$ the origin. From the above $$m=\left \lvert I \right \rvert =d$$ if and only if

$$a_{d}> \rho \frac{{\sum}_{i=1}^{d} a_{i}}{1+ \rho(d-1)},$$

which holds in the particular case that all ai’s are equal and positive.
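Both the closed-form inverse above and the condition for m = d are easy to verify numerically; a minimal numpy sketch (all helper names are ours):

```python
import numpy as np

def equicorr(d, rho):
    """Equi-correlated correlation matrix with off-diagonal entry rho."""
    return (1.0 - rho) * np.eye(d) + rho * np.ones((d, d))

def equicorr_inverse(d, rho):
    """Closed-form inverse from the text:
    [J_d - rho/(1 + rho(d-1)) * 1 1^T] / (1 - rho)."""
    return (np.eye(d) - rho / (1.0 + rho * (d - 1)) * np.ones((d, d))) / (1.0 - rho)

def full_index_set(a, rho):
    """Check the condition a_d > rho * sum_i a_i / (1 + rho(d-1)) under
    which the active set I is all of {1, ..., d}."""
    a = np.sort(np.asarray(a))[::-1]      # order a_1 >= ... >= a_d as in the text
    d = len(a)
    return bool(a[-1] > rho * a.sum() / (1.0 + rho * (d - 1)))
```

In particular, for equal positive components the condition always holds, while a sufficiently small smallest component a_d violates it.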

When the above does not hold, the second condition on the index set I given in Eq. 7 reads

$${\Sigma}_{JI}{\Sigma}_{II}^{-1} \boldsymbol {a}_{I}= \rho (\boldsymbol 1 \boldsymbol 1^{\top} )_{JI} {\Sigma}_{II}^{-1} \boldsymbol {a}_{I} \ge \boldsymbol {a}_{J}.$$

Next, suppose that $$a_{i}=a>0, c_{i}=c\in \mathbb {R}$$ for all i ≤ d. In view of Eq. 13 for any positive integer k ≤ d and any S ∈ [0,1) we have

$$\psi_k(S, 1,\boldsymbol {a} u) \sim C\mathbb{P} \left\{ \forall_{i \le k}: W_i(1) > ua+ c \right \} , \quad u\to \infty,$$
(16)

where (set I = {1, … , k})

$$C= \frac{d!}{k! (d-k)!} {\prod}_{i\in I} \lambda_{i} {\int}_{\mathbb{R}^{k}} \mathbb{P} \left\{ \exists_{t\ge 0}:\boldsymbol{W}_{I}(t)-t\boldsymbol {a}_{I}>\boldsymbol{x}_{I} \right \} e^{\boldsymbol \lambda_{I}^{\top} \boldsymbol{x}_{I}} \mathrm{d}\boldsymbol{x}_{I} \in (0,\infty).$$

Note that the case ρ = 0 is treated in Bai et al. (2018)[Prop. 3.6] and follows as a special case of this example.

## 4 Proofs

### 4.1 Proof of Theorem 1.1

Our proof below is based on the idea of the proof of Korshunov and Wang (2020)[Thm 1.1], where the case c = 0, k = d and S = 0 is considered. Recall the definition of the sets $$\boldsymbol E_{\mathcal {I}}(\boldsymbol {u})$$ and E(u) introduced in Eq. 1 for any non-empty $$\mathcal {I}\subset \{1,\ldots , d\}$$ with $$|\mathcal {I}|=k \le d$$. With this notation we have

$$\psi_k(S, T,\boldsymbol{u})=\mathbb{P} \left\{ \exists{t\in[S,T]}:\boldsymbol{W}(t)-\boldsymbol{c} t\in\boldsymbol E(\boldsymbol{u}) \right \} = \mathbb{P} \left\{ \tau_k(\boldsymbol{u})\leq T \right \} ,$$

where τk(u) is the ruin time defined by

$$\tau_k(\boldsymbol{u})=\inf\{ {t \ge S}: \boldsymbol{W}(t)-\boldsymbol{c} t\in\boldsymbol E(\boldsymbol{u})\}.$$

For the lower bound, we note that

$$\psi_k(S, T,\boldsymbol{u})=\mathbb{P} \left\{ \exists{t\in[S,T]}:\boldsymbol{W}(t)-\boldsymbol{c} t\in\boldsymbol E(\boldsymbol{u}) \right \} \ge \mathbb{P} \left\{ \boldsymbol{W}(T)-\boldsymbol{c} T\in\boldsymbol E(\boldsymbol{u}) \right \} .$$

By the fact that Brownian motion has continuous sample paths

$$\boldsymbol{W}(\tau_k(\boldsymbol{u}))-\boldsymbol{c} \tau_k(\boldsymbol{u})\in\partial\boldsymbol E(\boldsymbol{u})$$
(17)

almost surely, where ∂A stands for the topological boundary (frontier) of a set $$A \subset \mathbb {R}^{d}$$. Consequently, by the strong Markov property of the Brownian motion, we can write further

$$\begin{array}{@{}rcl@{}} \lefteqn{\mathbb{P} \left\{ \boldsymbol{W}(T) -{\boldsymbol{c}}T \in\boldsymbol E(\boldsymbol{u}) \right \} }\\ &= &{{\int}_{0}^{T}}{\int}_{\partial\boldsymbol E(\boldsymbol{u})}\mathbb{P} \left\{ \boldsymbol{W}(t)-\boldsymbol{c} t\in\mathrm{d}\boldsymbol{x}|\tau_k(\boldsymbol{u})=t \right \} \mathbb{P} \left\{ \boldsymbol{W}(T)-\boldsymbol{c}{T}\in\boldsymbol E(\boldsymbol{u})|\boldsymbol{W}(t)-\boldsymbol{c} t=\boldsymbol{x} \right \} \mathbb{P} \left\{ \tau_k(\boldsymbol{u})\in\mathrm{d} t \right \}. \end{array}$$

It is crucial that the boundary ∂E(u) can be represented as the following union

$$\partial\boldsymbol E(\boldsymbol{u})=\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\bigcup}\left( \partial\boldsymbol E_{\mathcal{I}}(\boldsymbol{u})\cap\partial\boldsymbol E(\boldsymbol{u})\right)=:\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\bigcup} F_{\mathcal{I}}({\boldsymbol{u}}).$$

For every $$\boldsymbol {x}\in F_{\mathcal {I}}(\boldsymbol {u})$$, using the self-similarity of Brownian motion, we have for all non-empty index sets $$\mathcal {I} \subset \{1 ,\ldots , d\}$$ and all t ∈ (S, T)

$$\begin{array}{@{}rcl@{}} \mathbb{P} \left\{ \boldsymbol{W}(T)-\boldsymbol cT\in\boldsymbol E(\boldsymbol{u})|\boldsymbol{W}(t)-\boldsymbol{c} t=\boldsymbol{x} \right \} &\geq&\mathbb{P} \left\{ \boldsymbol{W}(T)-\boldsymbol cT\in\boldsymbol E_{\mathcal{I}}(\boldsymbol{u})|\boldsymbol{W}(t)-\boldsymbol{c} t=\boldsymbol{x} \right \} \\ &=&\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T)-\boldsymbol{c}_{\mathcal{I}} T\geq \boldsymbol{u}_{\mathcal{I}}|\boldsymbol{W}(t)-\boldsymbol{c} t=\boldsymbol{x} \right \} \\ &\geq&\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T-t)-\boldsymbol{c}_{\mathcal{I}}(T-t)\geq \boldsymbol 0 \right \} \\ &\geq&\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T-t)\geq \boldsymbol{c}_{\mathcal{I}}(T-t) \right \} \\ &=&\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(1)\geq \boldsymbol{c}_{\mathcal{I}}\sqrt{T-t} \right \} \\ &\geq&\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(1)\geq \tilde{\boldsymbol{c}}_{\mathcal{I}}\sqrt{T} \right \} \\ &=&\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T)\geq \tilde{\boldsymbol{c}}_{\mathcal{I}} T \right \} \\ &\geq&\underset{\underset{{|\mathcal{I}|=k}}{\mathcal{I}\subset\{1,\ldots, d\}}}{\min}\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T)\geq \tilde{\boldsymbol{c}}_{\mathcal{I}} T \right \} , \end{array}$$

where $$\tilde {c}_{i}=\max \limits (0,c_{i})$$; hence for all x ∈ ∂E(u)

$$\mathbb{P} \left\{ \boldsymbol{W}({T})-\boldsymbol{c}{T}\in\boldsymbol E(\boldsymbol{u})|\boldsymbol{W}(t)-\boldsymbol{c} t=\boldsymbol{x} \right \} \geq \underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\min}\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}({T})\geq \tilde{\boldsymbol{c}}_{\mathcal{I}} {T} \right \} .$$

Consequently, using further Eq. 17 we obtain

$$\begin{array}{@{}rcl@{}} & &{\mathbb{P} \left\{ \boldsymbol{W}({T}) -{\boldsymbol{c}}{T} \in\boldsymbol E(\boldsymbol{u}) \right \} }\\ & &\qquad{\ge}\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\min}\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T)\geq \tilde{\boldsymbol{c}}_{\mathcal{I}} T \right \} {\int}_{{S}}^{T}{\int}_{\partial\boldsymbol E(u)}\mathbb{P} \left\{ \boldsymbol{W}(t)-\boldsymbol{c} t\in\mathrm{d}\boldsymbol{x}|\tau_k(\boldsymbol{u})=t \right \} {\mathbb{P} \left\{ \tau_k(\boldsymbol{u})\in\mathrm{d} t \right \} }\\ & &\qquad=\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\min}\mathbb{P} \left\{ \boldsymbol{W}_{\mathcal{I}}(T)\geq \tilde{\boldsymbol{c}}_{\mathcal{I}} T \right \} \psi_k({S},T,\boldsymbol{u}) \end{array}$$

establishing the proof.

### 4.2 Proof of Theorem 2.1

The results in this section hold under the assumption that $${\Sigma }={\Gamma } {\Gamma }^{\top }$$ is positive definite, which is equivalent to our assumption that Γ is non-singular. The next lemma is a consequence of Hashorva (2019)[Lem 2]. We recall that φ denotes the probability density function of ΓB(1).

### Lemma 4.1

For any $$\boldsymbol {a}\in \mathbb {R}^{d} \setminus (-\infty , 0]^{d}$$ we have for some positive constants C1, C2

$$\mathbb{P} \left\{ \boldsymbol{W}(1) {- \boldsymbol{c} }>\boldsymbol {a} u \right \} \sim C_1 \mathbb{P} \left\{ \forall_{i\in I}: W_i(1) {- c_i }> a_i u \right \} \sim C_2 u^{-\alpha}\varphi(\tilde{\boldsymbol {a}}u {+ \boldsymbol{c} }), \quad u \to \infty,$$

where α is some integer and $$\tilde {\boldsymbol {a}}$$ is the solution of the quadratic programming problem $${\Pi }_{\Sigma }(\boldsymbol {a})$$ with $${\Sigma }={\Gamma } {\Gamma }^{\top }$$, and I is the unique index set that determines this solution.

We agree in the following that if $$\mathcal {I}$$ is empty, then simply the term $$A_{\mathcal {I}}(t)$$ should be deleted from the expressions below; recall that $$A_{\mathcal {I}}(t)$$ is defined in Eq. 3.

We next state the lemmas utilised in the case $$T< \infty$$; their proofs are displayed in the Appendix.

### Lemma 4.2

Let $$\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}$$ be two index sets such that $$\mathcal {I}\not =\mathcal {J}$$ and $$|\mathcal {I}|=|\mathcal {J}|=k {\ge }1$$. If $$\boldsymbol {a}_{\mathcal {I}\cup \mathcal {J}}$$ has at least two positive components, then for any s, t ∈ [0,1] there exists some ν = ν(s, t) > 0 such that as $$u\to \infty$$

$$\mathbb{P} \left\{ A_{\mathcal{I}}(t)\cap A_{\mathcal{J}}(s) \right \} ={o\left( e^{-\nu u^2}\right)}\underset{\underset{|\mathcal{I}^{*}|=k}{\mathcal{I}^{*} \subset\{1,\ldots, d\}}}{\sum}\mathbb{P} \left\{ A_{\mathcal{I}^{*}}(1) \right \} ,$$
(18)

and

$$\mathbb{P} \left\{ A_{\mathcal{I}\setminus\mathcal{J}}(t), A_{{\mathcal{J}\setminus\mathcal{I}}}(s), A_{\mathcal{I}\cap\mathcal{J}}(\min(t,s)) \right \} ={o\left( e^{-\nu u^2}\right)}\underset{\underset{|\mathcal{I}^{*}|=k}{\mathcal{I}^{*}\subset\{1,\ldots, d\}}}{\sum}\mathbb{P} \left\{ A_{\mathcal{I}^{*}}(1) \right \} .$$
(19)

### Lemma 4.3

Let S > 0, kd be a positive integer and let $$\boldsymbol {a} \in \mathbb {R}^{d}$$ be given. If $$\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}$$ are two different index sets with k ≥ 1 elements such that $$\boldsymbol {a}_{\mathcal {I}\cup \mathcal {J}}$$ has at least one positive component, then there exist s1, s2 ∈ [S,1] and some positive constant τ such that as $$u\to \infty$$

$$\mathbb{P} \left\{ \exists s,t\in[S,1]: A_{\mathcal{I}}(s)\cap A_{\mathcal{J}}(t) \right \} = {o\left( e^{\tau u}\right)} \mathbb{P} \left\{ A_{\mathcal{I}\setminus\mathcal{J}}(s_1)\cap A_{\mathcal{J}\setminus\mathcal{I}}(s_2)\cap A_{\mathcal{I}\cap\mathcal{J}}(\min(s_1,s_2)) \right \} .$$
(20)

### Case $$T< \infty$$

According to Theorem 1.1 and Lemma 4.1, it is enough to give the proof for S ∈ (0, T). In view of the self-similarity of Brownian motion we assume for simplicity that T = 1. Recall that in our notation $${\Sigma }={\Gamma } {\Gamma }^{\top }$$ is the covariance matrix of W(1), which is non-singular, and we denote the corresponding pdf by φ. In view of Eqs. 19 and 20, for all S ∈ (0,1) there exists some ν > 0 such that as $$u\to \infty$$

$$\underset{\underset{| \mathcal{I}|=|\mathcal{J}|=k, \mathcal{I}\not= \mathcal{J}}{\mathcal{I},\mathcal{J} \subset\{1,\ldots, d\}}}{\sum}\mathbb{P} \left\{ \exists s,t\in[S,1]: A_{\mathcal{I}}(s)\cap A_{\mathcal{J}}(t) \right \} ={o\left( e^{- \nu u^2} \right)}\underset{\underset{|\mathcal{I}|=k}{\mathcal{I}\subset\{1,\ldots, d\}}}{\sum}\mathbb{P} \left\{ A_{\mathcal{I}}(1) \right \} .$$

Note that we may utilise Eqs. 19 and 20 for sets $$\mathcal {I}$$ and $$\mathcal {J}$$ of length k, because of the assumption that a has no more than k − 1 non-positive components. Hence any vector $$\boldsymbol {a}_{\mathcal {I}}$$ has at least one positive component.

Further, by Theorem 1.1 and the inclusion-exclusion formula we have that for some K > 0 and all u sufficiently large

$$\psi_{k}(S, 1,\boldsymbol{u}) \le {K}{\sum}_{\substack{\mathcal{I}\subset\{1,\ldots, d\}\\|\mathcal{I}|=k}}\mathbb{P} \left\{ A_{\mathcal{I}}(1) \right \} .$$

Hence the claim follows from Eqs. 4 and 5.

### Case $$T=\infty$$

Using the self-similarity of Brownian motion we have

$$\begin{array}{@{}rcl@{}}\mathbb{P} \left\{ \exists t>0: A_{\mathcal{I}}(t) \right \} =\mathbb{P} \left\{ \exists t>0:\boldsymbol{W}_{\mathcal{I}}(ut)\ge(\boldsymbol {a}+\boldsymbol{c} t)_{{\mathcal{I}}} u \right \} &=&\mathbb{P} \left\{ \exists t>0:\boldsymbol{W}_{\mathcal{I}}(t)\ge (\boldsymbol {a}+\boldsymbol{c} t)_{{\mathcal{I}}}\sqrt{u} \right \} \\ &=&\mathbb{P} \left\{ \exists t>0:A^{*}_{\mathcal{I}}(t) \right \} , \end{array}$$

where

$$A^{*}_{\mathcal{I}}(t)= \{ \boldsymbol{W}_{\mathcal{I}}(t)\ge (\boldsymbol {a}+\boldsymbol{c} t)_{{\mathcal{I}}}\sqrt{u} \} .$$
(21)

For t > 0 define

$$r_{\mathcal{I}}(t)=\min_{\boldsymbol{x}\ge\boldsymbol {a}_{\mathcal{I}}+\boldsymbol{c}_{\mathcal{I}} t}\frac{1}{t}\boldsymbol{x}^{\top}{\Sigma}^{-1}_{\mathcal{I}\mathcal{I}}\boldsymbol{x}, \ \ {\Sigma}_{\mathcal{I}\mathcal{I}}=Var(\boldsymbol{W}_{\mathcal{I}}(1)), \ \ {\Sigma}^{-1}_{\mathcal{I}\mathcal{I}}= ({\Sigma}_{\mathcal{I}\mathcal{I}})^{-1}.$$
(22)

Since $$\lim _{t\downarrow 0}r_{{\mathcal {I}}}(t)=\infty$$ we set below $$r_{{\mathcal {I}}}(0)=\infty$$.
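The function $$r_{\mathcal {I}}(t)$$ of Eq. 22, and its minimiser in t which governs the infinite-horizon asymptotics below, can be evaluated numerically by nesting the quadratic programme inside a one-dimensional search; a sketch assuming scipy (helper names are ours):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def r_of_t(Sigma, a, c, t):
    """r(t) = (1/t) * min_{x >= a + c t} x^T Sigma^{-1} x, cf. Eq. 22."""
    Sinv = np.linalg.inv(Sigma)
    lo = a + c * t
    res = minimize(lambda x: x @ Sinv @ x,
                   x0=np.maximum(lo, 1e-6),
                   jac=lambda x: 2.0 * Sinv @ x,
                   bounds=[(b, None) for b in lo])
    return res.fun / t

def t_hat(Sigma, a, c, t_max=50.0):
    """Numerical minimiser of r(t) over t > 0 (searched on (0, t_max])."""
    res = minimize_scalar(lambda t: r_of_t(Sigma, a, c, t),
                          bounds=(1e-3, t_max), method='bounded')
    return res.x
```

As a sanity check, for d = 1 with a = c = 1 one has r(t) = (1 + t)²/t, minimised at t = 1 with value 4, the classical one-dimensional infinite-horizon exponent.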

In view of Lemma 4.1 we have as $$u\to \infty$$

$$\begin{array}{@{}rcl@{}}\mathbb{P} \left\{ A_{\mathcal{I}}^{*}(t) \right \} \sim C_1 u^{-\alpha/2}\varphi_{\mathcal{I},t}((\widetilde{\boldsymbol {a}_{\mathcal{I}}+\boldsymbol{c}_{\mathcal{I}} t})\sqrt{u})= C_2 u^{{-\alpha/2}} e^{-\frac{r_{\mathcal{I}}(t) u}{2}}, \end{array}$$

where $$\widetilde {\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t}$$ is the solution of the quadratic programming problem $${\Pi }_{t{\Sigma }_{\mathcal {I}\mathcal {I}}}(\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t)$$, $$\varphi _{\mathcal {I},t}(\boldsymbol {x})$$ is the pdf of $$\boldsymbol {W}_{\mathcal {I}}(t)$$, α is some integer and C1, C2 are positive constants that do not depend on u. For notational simplicity we shall omit below the subscript $$\mathcal {I}$$.

The rest of the proof is established by utilising the following lemmas, whose proofs are displayed in Appendix.

### Lemma 4.4

Let kd be a positive integer and let $$\boldsymbol {a},\boldsymbol {c}\in \mathbb {R}^{d}$$. Consider two different sets $$\mathcal {I},\mathcal {J}\subset \{1,{\ldots } ,d\}$$ of cardinality k. If both $$\boldsymbol {a}_{\mathcal {I}}+\boldsymbol {c}_{\mathcal {I}} t$$ and $$\boldsymbol {a}_{\mathcal {J}}+\boldsymbol {c}_{\mathcal {J}} t$$ have at least one positive component for all t > 0 and both $$\boldsymbol {c}_{\mathcal {I}}$$ and $$\boldsymbol {c}_{\mathcal {J}}$$ also have at least one positive component, then in case $$\hat {t}_{\mathcal {I}}:=\arg \min \limits _{t>0}~r_{\mathcal {I}}(t)\not =\hat {t}_{\mathcal {J}}:=\arg \min \limits _{t>0}~r_{\mathcal {J}}(t)$$,

$$\mathbb{P} \left\{ \exists s,t>0:~A^{*}_{\mathcal{I}}(t){\cap} A^{*}_{\mathcal{J}}(s) \right \} =o(\mathbb{P} \left\{ A^{*}_{\mathcal{I}}(\hat{t}_{\mathcal{I}}) \right \} +\mathbb{P} \left\{ A^{*}_{\mathcal{J}}(\hat{t}_{\mathcal{J}}) \right \} ), \quad u\to \infty.$$
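For intuition on $$\hat {t}_{\mathcal {I}}$$, consider the one-dimensional case k = 1 with unit variance: then $$r(t)=(a+ct)^{2}/t$$ for a, c > 0, and elementary calculus gives $$\hat {t}=a/c$$ with minimal value $$r(\hat {t})=4ac$$. A small numerical sketch confirming this (the values of a and c are illustrative):

```python
import numpy as np

# One-dimensional case: r(t) = (a + c t)^2 / t with a, c > 0.
# Writing r(t) = a^2/t + 2ac + c^2 t and differentiating gives
# the minimiser t_hat = a/c and minimal value r(t_hat) = 4ac.
a, c = 2.0, 0.5  # illustrative values, not from the paper

def r(t):
    return (a + c * t) ** 2 / t

ts = np.linspace(0.01, 20.0, 200_001)  # fine grid over t > 0
t_hat = ts[np.argmin(r(ts))]
r_min = r(t_hat)
```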

### Lemma 4.5

Under the settings of Lemma 4.4, if a + ct has no more than k − 1 non-positive components for all t > 0 and c has no more than k − 1 non-positive components, then in case $$\hat {t}_{\mathcal {I}}:=\arg \min \limits _{t>0}~r_{\mathcal {I}}(t)=\hat {t}_{\mathcal {J}}:=\arg \min \limits _{t>0}~r_{\mathcal {J}}(t)$$

$$\mathbb{P} \left\{ \exists s,t>0:~A^{*}_{\mathcal{I}}(t){\cap} A^{*}_{\mathcal{J}}(s) \right \} =o\left( \underset{\underset{|\mathcal{K}|=k}{\mathcal{K}\subset\{1{\ldots} d\}}}{\sum}\mathbb{P} \left\{ A^{*}_{\mathcal{K}}(\hat{t}_{\mathcal{K}}) \right \} \right), \quad u\to \infty.$$

Combining the above two lemmas, we have that for any two index sets $$\mathcal {I},\mathcal {J}\subset \{1,\ldots , d\}$$ of cardinality k, there is some index set $$\mathcal {K}\subset \{1,\ldots , d\}$$ such that as $$u\to \infty$$

$$\begin{array}{@{}rcl@{}}\mathbb{P} \left\{ \exists s,t>0: A_{\mathcal{I}}^{*}(s){\cap} A_{\mathcal{J}}^{*}(t) \right \} =o\left( \mathbb{P} \left\{ \exists t>0:A_{\mathcal{K}}^{*}(t) \right \} \right), \end{array}$$

which is equivalent to

$$\begin{array}{@{}rcl@{}}\mathbb{P} \left\{ \exists s,t>0: A_{\mathcal{I}}(s){\cap} A_{\mathcal{J}}(t) \right \} =o\left( \mathbb{P} \left\{ \exists t>0:A_{\mathcal{K}}(t) \right \} \right). \end{array}$$

The proof follows now by Eqs. 4 and 5.

### 4.3 Proof of Theorem 2.3

Below we set

$$\delta(u,{\Lambda}):=1-{\Lambda}u^{-2}$$

and denote by $$\tilde { \boldsymbol {a}}$$ the unique solution of the quadratic programming problem $${\Pi }_{\Sigma }(\boldsymbol {a})$$.

We denote below by I the index set that determines the unique solution of $${\Pi }_{\Sigma }(\boldsymbol {a})$$, where $$\boldsymbol {a} \in \mathbb {R}^{d}$$ has at least one positive component (see Lemma 2.2). If J = {1, … , d}∖ I is non-empty, then we set below $$U=\{j\in J:\tilde { a}_{j}= a_{j}\}$$. The number of elements |I| of I is denoted by m, which is a positive integer.

The next lemma is proved in Appendix.

### Lemma 4.6

For any Λ > 0, $$\boldsymbol {a}\in \mathbb {R}^{d} \setminus (-\infty ,0]^{d}$$ and $${\boldsymbol {c} \in \mathbb {R}^{d}}$$ there exists C > 0 such that for all sufficiently large u

$$m(u,{\Lambda}):=\mathbb{P} \left\{ \exists_{t\in[0,\delta(u,{\Lambda})]}:\boldsymbol{W}(t)-t\boldsymbol{c}> u\boldsymbol {a} \right \} \le e^{-{\Lambda}/C}\frac{\mathbb{P} \left\{ \boldsymbol{W}(1)\ge \boldsymbol {a} u+\boldsymbol{c} \right \} }{\mathbb{P} \left\{ \boldsymbol{W}(1)>\max(\boldsymbol{c},0) \right \} }$$
(23)

and further

$$M(u,{\Lambda}):=\mathbb{P} \left\{ \exists_{t\in[\delta(u,{\Lambda}),1]}:\boldsymbol{W}(t)-t\boldsymbol{c} >u\boldsymbol {a} \right \} \sim C(\boldsymbol{c})E([0,{\Lambda}])u^{-m}\varphi(u\tilde{\boldsymbol {a}}+\boldsymbol{c}),$$
(24)

where $$C(\boldsymbol {c})= \mathbb {P} \left \{ \boldsymbol {W}_{U}(1)> \boldsymbol {c}_{U} \lvert \boldsymbol {W}_{I}(1)> \boldsymbol {c}_{I} \right \}$$ and for $$\boldsymbol \lambda = {\Sigma }^{-1} \tilde {\boldsymbol {a}}$$

$$\begin{array}{@{}rcl@{}}E([{{\Lambda}_1},{{\Lambda}_2}])={\int}_{\mathbb{R}^{m}} \mathbb{P} \left\{ \exists_{t\in[{{\Lambda}_1},{{\Lambda}_2}]}:\boldsymbol{W}_{I}(t)-t\boldsymbol {a}_{I}>\boldsymbol{x}_I \right \} e^{\boldsymbol\lambda_I^{\top} \boldsymbol{x}_I}\mathrm{d}\boldsymbol{x}_I \in (0,\infty) \end{array}$$

for all constants Λ1 < Λ2. We set C(c) equal to 1 if the set U defined in Remark 2.4 is empty. Further we have

$$\lim\limits_{{\Lambda}\to\infty}E([0,{\Lambda}])={\int}_{\mathbb{R}^{m}}\mathbb{P} \left\{ \exists_{t\geq 0}:\boldsymbol{W}_{I}(t)-t\boldsymbol {a}_{I}>\boldsymbol{x}_I \right \} e^{\boldsymbol \lambda_I^{\top} \boldsymbol{x}_I}\mathrm{d}\boldsymbol{x}_I\in(0,\infty).$$
(25)

First note that for all positive Λ and u

$$M(u,{\Lambda})\leq\mathbb{P} \left\{ \exists_{t\in[0,1]}:\boldsymbol{W}(t)-t\boldsymbol{c}>u\boldsymbol {a} \right \} \leq M(u,{\Lambda})+m(u,{\Lambda}).$$

In view of Lemmas 4.6 and 4.1

$$\begin{array}{@{}rcl@{}}\lim_{{\Lambda}\to\infty}\lim_{u\to\infty}\frac{m(u,{\Lambda})}{M(u,{\Lambda})}=0, \end{array}$$

hence

$$\begin{array}{@{}rcl@{}} \lim_{{\Lambda}\to\infty}\lim_{u\to\infty}\frac{\mathbb{P} \left\{ \exists_{t\in[0,1]}:\boldsymbol{W}(t)-t\boldsymbol{c}>u\boldsymbol {a} \right \} }{M(u,{\Lambda})}=1 \end{array}$$

and thus the proof follows by applying Eq. 24.

### 4.4 Proof of Eq. 14

The proof is similar to that of Dȩbicki et al. (2017, Thm 2.5) and therefore we highlight only the main steps. If T > S ≥ 0, then by the definition of τ(u) and the self-similarity of Brownian motion

$$\begin{array}{@{}rcl@{}}\frac{\tau(u)}{T}=\inf\{t\ge 0:\boldsymbol{W}(Tt)-tT\boldsymbol{c}>\boldsymbol {a} u\}=\inf\{t\ge 0:\boldsymbol{W}(t)-t\sqrt{T}\boldsymbol{c}>\boldsymbol {a} u/\sqrt{T}\}. \end{array}$$
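Here the second equality uses the self-similarity $$\{\boldsymbol{W}(Tt)\}_{t\ge 0}\stackrel{d}{=}\{\sqrt{T}\,\boldsymbol{W}(t)\}_{t\ge 0}$$, since

$$\sqrt{T}\,\boldsymbol{W}(t)-tT\boldsymbol{c}>\boldsymbol {a} u \iff \boldsymbol{W}(t)-t\sqrt{T}\boldsymbol{c}>\boldsymbol {a} u/\sqrt{T}.$$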

Thus, without loss of generality in the rest of the proof we suppose that T = 1 > S ≥ 0.

We note that

$$\begin{array}{@{}rcl@{}} &&\mathbb{P} \left\{ u^2(1-\tau(u))\geq x| \tau(u)\in [S,1] \right \} =\frac{\mathbb{P} \left\{ u^2(1-\tau(u))\geq x, \tau(u)\in [S,1] \right \} }{\mathbb{P} \left\{ \tau(u)\in [S,1] \right \} }\\ &&\qquad= \frac{\mathbb{P} \left\{ u^2(1-\tau(u))\geq x, \tau(u)\leq 1 \right \} }{\mathbb{P} \left\{ \tau(u)\in [S,1] \right \} } -\frac{\mathbb{P} \left\{ u^2(1-\tau(u))\geq x, \tau(u)\leq S \right \} }{\mathbb{P} \left\{ \tau(u)\in [S,1] \right \} }\\ &&\qquad=P_1(u)-P_2(u). \end{array}$$

Next, for $$\tilde {x}(u)=1-\frac {x}{u^{2}}$$

$$\begin{array}{@{}rcl@{}} P_{1}(u)&=&\frac{\mathbb{P} \left\{ \tau(u)\leq \tilde{x}(u) \right \} }{\mathbb{P} \left\{ \tau(u)\in [S,1] \right \} }\sim \frac{\mathbb{P} \left\{ \exists_{t\in[0,\tilde{x}(u)]}:\boldsymbol{W}(t)-\boldsymbol{c} t>u\boldsymbol {a} \right \} } {\mathbb{P} \left\{ \exists_{t\in[0,1]}:\boldsymbol{W}(t)-\boldsymbol{c} t>u\boldsymbol {a} \right \} }\\ &=& \frac{\mathbb{P} \left\{ \exists_{t\in[0,1]}:\boldsymbol{W}(t)-(\boldsymbol{c}\sqrt{\tilde{x}(u)}) t>\frac{u}{\sqrt{\tilde{x}(u)}}\boldsymbol {a} \right \} } {\mathbb{P} \left\{ \exists_{t\in[0,1]}:\boldsymbol{W}(t)-\boldsymbol{c} t>u\boldsymbol {a} \right \} }, \ u \to \infty. \end{array}$$

Hence, by Theorem 2.3 and the fact that

$$\varphi\left( \frac{u}{\sqrt{\tilde{x}(u)}}\tilde{\boldsymbol {a}}+(\boldsymbol{c}\sqrt{\tilde{x}(u)})\right)=\varphi(u\tilde{\boldsymbol {a}}+\boldsymbol{c})e^{-\frac{1}{2}\left( \frac{1}{\tilde{x}(u)}-1\right)u^2\tilde{\boldsymbol {a}}^{\top}{\Sigma}^{-1}\tilde{\boldsymbol {a}}}e^{-\frac{1}{2}(\tilde{x}(u)-1)\boldsymbol{c}^{\top}{\Sigma}^{-1}\boldsymbol{c}}$$

and

$$\lim_{u \to \infty} e^{-\frac{1}{2}\left( \frac{1}{\tilde{x}(u)}-1\right)u^2\tilde{\boldsymbol {a}}^{\top}{\Sigma}^{-1}\tilde{\boldsymbol {a}}}= e^{-x\frac{\tilde{\boldsymbol {a}}^{\top}{\Sigma}^{-1}\tilde{\boldsymbol {a}}}{2}},\quad \lim_{u \to \infty}e^{-\frac{1}{2}(\tilde{x}(u)-1)\boldsymbol{c}^{\top}{\Sigma}^{-1}\boldsymbol{c}}= 1$$

we obtain

$$\lim_{u\to\infty} P_{1}(u)= e^{-x\frac{\tilde{\boldsymbol {a}}^{\top}{\Sigma}^{-1}\tilde{\boldsymbol {a}}}{2}}.$$
(26)
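The density identity and the two limits above can be checked numerically. In the sketch below Σ, the stand-in for $$\tilde {\boldsymbol {a}}$$, c, x and u are illustrative choices (any nonsingular Σ works); the moderate value of u keeps the Gaussian density away from floating-point underflow.

```python
import numpy as np

def phi(y, Sigma):
    """Centred Gaussian density with covariance Sigma evaluated at y."""
    d = len(y)
    Q = np.linalg.solve(Sigma, y)
    return float(np.exp(-0.5 * y @ Q)
                 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma)))

# Illustrative values -- assumptions for the sketch, not from the paper.
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
a_t = np.array([1.0, 0.5])   # stands in for a-tilde
c = np.array([0.2, -0.1])
x, u = 1.5, 10.0
xt = 1.0 - x / u**2          # x-tilde(u)

A = float(a_t @ np.linalg.solve(Sigma, a_t))   # a-tilde' Sigma^{-1} a-tilde
C = float(c @ np.linalg.solve(Sigma, c))       # c' Sigma^{-1} c

# Left- and right-hand sides of the density identity.
lhs = phi(u / np.sqrt(xt) * a_t + np.sqrt(xt) * c, Sigma)
rhs = (phi(u * a_t + c, Sigma)
       * np.exp(-0.5 * (1.0 / xt - 1.0) * u**2 * A)
       * np.exp(-0.5 * (xt - 1.0) * C))

# Exponent of the first factor for large u versus its limit -x*A/2
# (compared on the log scale to avoid underflow).
u_big = 1e3
xt_big = 1.0 - x / u_big**2
exp_u = -0.5 * (1.0 / xt_big - 1.0) * u_big**2 * A
exp_limit = -0.5 * x * A
```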

Moreover, following the same reasoning as above

$$P_{2}(u)=\frac{\mathbb{P} \left\{ \tau(u)\leq S \right \} }{\mathbb{P} \left\{ \tau(u)\in [S,1] \right \} } \sim \frac{\mathbb{P} \left\{ \tau(u)\leq S \right \} }{\mathbb{P} \left\{ \tau(u)\leq 1 \right \} }\to 0$$
(27)

as $$u\to \infty$$. Thus, combining Eqs. 26 and 27 leads to

$$\lim_{u \to \infty} \mathbb{P} \left\{ u^2(1-\tau(u))\geq x|\tau(u) \in [S,1] \right \} = e^{-x\frac{\tilde{\boldsymbol {a}}^{\top}{\Sigma}^{-1}\tilde{\boldsymbol {a}}}{2}}.$$