1 Introduction

A general \(2\times 2\) switch is modeled by a two-server queueing system with two arrival streams. A well-studied special case of such a switch is the \(2\times 2\) clocked buffered switch, where in a unit time interval each arrival stream can generate only one arrival and each server can serve only one customer; see, for example, [1, 11, 19]. This switch is commonly used to model devices in data-processing networks that route messages from one node to another.

In this paper, we study a \(2\times 2\) switch that operates in continuous time, i.e., the arrivals are modeled by two independent compound Poisson processes. Every incoming job has a random size and is distributed to the two servers by a random procedure. This leads to a pair of coupled M/G/1 queues. In this model, we study the equilibrium probabilities of the resulting workload processes. In particular, we determine the asymptotic behavior of the probabilities that the workloads exceed a prespecified buffer. In doing so, we distinguish between workload exceedance of a specific single server, of both servers, and of at least one (unspecified) server. As we will see, the behavior of these workload exceedance probabilities strongly depends on whether jobs are heavy-tailed or light-tailed, and we therefore consider both cases separately.

A model related to the one we study was introduced in [10], where a pair of coupled queues driven by independent spectrally positive Lévy processes is considered. The coupling procedure, however, is completely different from the switch we shall use. For this model, the joint transform of the stationary workload distribution is determined in [10] in terms of Wiener–Hopf factors. Two parallel queues are also considered, for example, in [23] for an M/M/2 queue where arriving customers simultaneously place two demands handled independently by two servers. We refer to [2, 22] and references therein for more general information on Lévy-driven queueing systems.

As is well known, there are several connections between queueing and risk models. In particular, the workload (or waiting time) in an M/G/1 queue with compound Poisson input is related to the ruin probability in the prominent Cramér–Lundberg risk model, in which the claim arrival process is precisely the same compound Poisson process; see, for example, [2] or [31]. To be more precise, let

$$\begin{aligned} R(t)=u+ct-\sum _{i=1}^{N(t)} X_i, \quad t\ge 0, \end{aligned}$$

be a Cramér–Lundberg risk process with initial capital \(u>0\), premium rate \(c>0\), i.i.d. claims \(\{X_i,i\in {\mathbb {N}}\}\) with cdf F such that \(X_1>0\) a.s. and \({\mathbb {E}}[X_1]=\mu <\infty \), and a claim number process \((N(t))_{t\ge 0}\) which is a Poisson process with rate \(\lambda >0\). Then, it is well-known that the ruin probability

$$\begin{aligned} \Psi (u)={\mathbb {P}}(R(t)<0 \quad \text {for some }t\ge 0) \end{aligned}$$

tends to 0 as \(u\rightarrow \infty \), as long as the net-profit condition \(\lambda \mu <c\) holds, while otherwise \(\Psi (u)\equiv 1\). In particular, if the claim sizes are light-tailed in the sense that an adjustment coefficient \(\kappa >0\) exists, i.e.,

$$\begin{aligned} \exists \kappa >0: \quad \int _0^\infty \hbox {e}^{\kappa x} {\overline{F}}(x) \mathop {}\!\mathrm {d}x = \frac{c}{\lambda }, \end{aligned}$$

where \({\overline{F}}(x)=1-F(x)\) is the tail function of the claim sizes, then the ruin probability \(\Psi (u)\) satisfies the famous Cramér–Lundberg inequality (cf. [2, Eq. XIII (5.2)], [3, Eq. I.(4.7)])

$$\begin{aligned} \Psi (u)\le \hbox {e}^{-\kappa u}, \quad u >0. \end{aligned}$$

Furthermore, in this case the Cramér–Lundberg approximation states that (cf. [2, Thm. XIII.5.2], [3, Eq. I.(4.3)])

$$\begin{aligned} \lim _{u\rightarrow \infty } \hbox {e}^{\kappa u}\Psi (u)=C, \end{aligned}$$

for some known constant \(C\ge 0\) depending on the chosen parameters of the model. By contrast, for heavy-tailed claims with a subexponential integrated tail function \(\frac{1}{\mu } \int _0^x {\overline{F}}(y) \mathop {}\!\mathrm {d}y\) it is known that (cf. [3, Thm. X.2.1])

$$\begin{aligned} \lim _{u\rightarrow \infty } \left( \frac{1}{\mu } \int _u^\infty {\overline{F}}(y) \mathop {}\!\mathrm {d}y \right) ^{-1}\Psi (u) =\frac{\lambda \mu }{c-\lambda \mu }, \end{aligned}$$

and in the special case of regularly varying tail functions this directly implies that the ruin probability decays polynomially.
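These classical one-dimensional quantities are easy to explore numerically. The following minimal sketch (ours, not taken from the cited references; all variable names are ad hoc) determines the adjustment coefficient \(\kappa \) from the integral equation above via root-finding and checks the Lundberg inequality against the closed-form ruin probability \(\Psi (u)=\frac{\lambda }{c\beta }\hbox {e}^{-\kappa u}\), which is available for exponential claims with rate \(\beta \):

```python
# Minimal sketch (ours): adjustment coefficient and Lundberg bound in the
# Cramer-Lundberg model with Exp(beta) claims, where everything is explicit.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

lam, c, beta = 1.0, 1.5, 1.0              # claim rate, premium rate, Exp-rate
assert lam * (1.0 / beta) < c             # net-profit condition lambda*mu < c

def lundberg_eq(kappa):
    # int_0^infty e^{kappa x} Fbar(x) dx - c/lambda with Fbar(x) = e^{-beta x}
    val, _ = quad(lambda x: np.exp((kappa - beta) * x), 0.0, np.inf)
    return val - c / lam

kappa = brentq(lundberg_eq, 1e-9, 0.999 * beta)
print(kappa, beta - lam / c)              # numeric root vs. closed form 1/3

u = np.linspace(1.0, 20.0, 5)
psi = lam / (c * beta) * np.exp(-kappa * u)   # exact ruin probability
assert np.all(psi <= np.exp(-kappa * u))      # Lundberg inequality holds
```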

Via the mentioned duality, these results can easily be translated into corresponding results on the workload exceedance probability of an M/G/1-queue.

In this paper, we shall use an analogous duality between queueing and risk models in a multi-dimensional setting, as introduced in [7]. This allows us to obtain results on the workload exceedance probabilities of the \(2\times 2\) switch by studying the corresponding ruin probabilities in the two-dimensional dual risk model.

Bivariate risk models are a well-studied field of research. A prominent model in the literature that can be interpreted as a special case of the dual risk model in this paper was introduced by Avram et al. [5]. In this so-called degenerate model, a single claim process is shared via prespecified proportions between two insurers (see, for example, [4,5,6, 24, 26]). The model allows for a rescaling of the bivariate process that reduces the complexity to a one-dimensional ruin problem. Exact results and sharp asymptotics for this model have been obtained in [4], where also the asymptotic behavior of ruin probabilities of a general two-dimensional Lévy model under light-tail assumptions is derived. In [24], the degenerate model is studied in the presence of heavy tails; specifically, asymptotic formulae for the finite-time as well as the infinite-time ruin probabilities under the assumption of subexponential claims are provided. In [26], the degenerate model is extended by a constant interest rate. In [6], another generalization of the degenerate model is studied that introduces a second source of claims affecting only one insurer. Our risk model defined in Sect. 2.2 can be seen as a further generalization of the model in [6] because of the random sharing of every single claim; compare also with Sect. 5.2. Numerous other papers treat bivariate risk models of various types with several approaches to the problem. For example, [14, 21] consider bivariate risk models of Cramér–Lundberg type with correlated claim-counting processes and derive partial integro-differential equations for infinite-time ruin and survival probabilities in these models. Various authors focus on finite-time ruin probabilities under different assumptions; see, for example, [15,16,17, 29, 32, 37, 38]. For example, in [38], the finite-time survival probability is approximated using a so-called bivariate compound binomial model, and bounds for the infinite-time ruin probability are obtained using the concept of association.

In general dimensions, multivariate ruin is studied in [9, 12, 13, 25, 30, 34]. In particular, in [9], a bipartite network induces the dependence between the risk processes, and this model is in some sense similar to the dual risk model in this paper. Further, in [20], multivariate risk processes with heavy-tailed claims are treated and so-called ruin regions are studied, that is, sets in \({\mathbb {R}}^d\) which are hit by the risk process with small probability. Multivariate regularly varying claims are also assumed in [27] and [30], where in [27] several lines of business are considered that can balance out ruin, while [30] focuses exclusively on simultaneous ruin of all business lines/agents. Further, [36] introduces a notion of multivariate subexponentiality and applies it to a multivariate risk process. Note that [27, 36] both consider rather general regions of ruin, and some of the results from these papers will be applied to our dual risk model.

The paper is outlined as follows: In Sect. 2, we specify the random switch model that we are interested in and introduce the corresponding dual risk model. Section 3 is devoted to studying both models under the assumption that jobs/claims are heavy-tailed, and it is divided into two parts. First, in Sect. 3.1, we focus on subexponentiality. As we shall rely on results from [36], we first concentrate on the risk model in Sect. 3.1.1, and then transfer our findings to the switch model in Sect. 3.1.2. Second, we treat the special case of regular variation in Sect. 3.2, where we start with results for the risk model in Sect. 3.2.1, taking advantage of results given in [27], before we transfer our findings to the switch model in Sect. 3.2.2. In Sect. 4, we assume all jobs/claims to be light-tailed and again first consider the risk model in Sect. 4.1 before converting the results to the switch context in Sect. 4.2. Two particular examples of the switch are then outlined in Sect. 5, where we also compare the behavior of the exceedance probabilities for different specifications of the random switch via a short simulation study in Sect. 5.3. The final Sect. 6 collects the proofs of all our findings.

2 The switching model and its dual

2.1 The \(2\times 2\) random switching model

Let \({\mathcal {W}}_1, {\mathcal {W}}_2\) be servers (or workers) with work speeds \(c_1, c_2>0\) and let \({\mathcal {J}}_1,{\mathcal {J}}_2\) be two job-generating objects. We assume that both objects generate jobs independently with Poisson rates \(\lambda _1,\lambda _2>0\), respectively, and that the workloads generated by one object are i.i.d. positive random variables. More specifically, we identify the objects \({\mathcal {J}}_j\), \(j=1,2\), with two independent compound Poisson processes

$$\begin{aligned} \sum _{k=1}^{N_j(t)} X_{j,k}, \qquad j=1,2, \end{aligned}$$

with jumps \(\{X_{j,k}, k\in {\mathbb {N}}\}\) being i.i.d. copies of two random variables \(X_j\sim F_j\) such that \(F_j(0)=0\) and \({\mathbb {E}}[X_j]<\infty \), \(j=1,2\).

The jobs shall be distributed to the two servers by a random switch that is modeled by a random \((2\times 2)\)-matrix \(\mathbf{A }=(A_{ij})_{i,j=1,2}\), independent of all other randomness and satisfying the following conditions:

  1. (i)

\(A_{ij}\in [0,1]\) for all \(i,j=1,2\), meaning that no server can be assigned a negative share of a job or more than the whole job,

  2. (ii)

\(\sum _{i=1}^2 A_{ij}=1\) for all \(j=1,2\), i.e., every job is distributed entirely among the two servers.

An independent copy of the switch matrix is drawn at every arrival of a job (Fig. 1).

Fig. 1 The random switching model

We are interested in the coupled M/G/1-queues defined by the resulting storage processes of the two servers, i.e.,

$$\begin{aligned} W_i(t) = \sum _{j=1}^2\sum _{k=1}^{N_j(t)} (A_{ij})_{k}X_{j,k} - \int _0^t c_i(W_i(s))\mathop {}\!\mathrm {d}s, \end{aligned}$$
(2.1)

where \(\{ \mathbf{A }_k, k\in {\mathbb {N}}\}\) are i.i.d. copies of \(\mathbf{A }\) and

$$\begin{aligned} c_i(x)={\left\{ \begin{array}{ll} 0,&{} x\le 0, \\ c_i,&{} x>0, \end{array}\right. } \quad i=1,2. \end{aligned}$$

In particular, we aim to study the stationary distribution of the multivariate storage process \(\mathbf{W }(t)=(W_1(t), W_2(t))^\top \), that is, the distributional limit of \(\mathbf{W }(t)\) as \(t\rightarrow \infty \) whenever it exists. In this case, we write

$$\begin{aligned} \mathbf{W }:=(W_1,W_2)^\top \end{aligned}$$
(2.2)

for a generic random vector with this steady-state distribution. Note that here and in the following \((\cdot ) ^\top \) denotes the transpose of a vector or matrix.

Let \(u>0\) be some fixed buffer barrier for the system and \(\mathbf{b }=(b_1,b_2)^\top \in (0,1)^2\) with \(b_1+b_2=1\). Set \(\mathbf{u }=\mathbf{b }u\), i.e., \(u_i=b_i u\). Then, we are interested in the probabilities that the single servers exceed their individual barriers,

$$\begin{aligned} \Upsilon _i(u_i)={\mathbb {P}}\left( W_i-u_i>0\right) , \quad i=1,2, \end{aligned}$$
(2.3)

the probability that at least one of the workloads exceeds its barrier,

$$\begin{aligned} \Upsilon _\vee (u)={\mathbb {P}}\left( \max _{i=1,2} (W_i-u_i)>0\right) , \end{aligned}$$
(2.4)

and the probability that both workloads exceed their barriers,

$$\begin{aligned} \Upsilon _\wedge (u)={\mathbb {P}}\left( \min _{i=1,2} (W_i-u_i)>0\right) . \end{aligned}$$
(2.5)
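Before turning to the dual risk model, the following Monte Carlo sketch may help fix ideas; it is ours and purely illustrative (a Bernoulli switch with Pareto job sizes, matching the setting of Fig. 2 below). It simulates the workloads (2.1) between arrivals and estimates (2.3), (2.4), and (2.5) from the workloads seen just before arrival epochs, which is justified in the limit by the PASTA property invoked in Lemma 2.1 below:

```python
# Monte Carlo sketch (ours) of the 2x2 switch: between arrivals each workload
# drains at rate c_i (floored at 0); at an arrival of a source-j job the
# workloads jump by A_1j*X_j and A_2j*X_j = (1-A_1j)*X_j, respectively.
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, c1, c2 = 1.0, 1.0, 5.0, 8.0
b1, b2, u = 0.8, 0.2, 10.0
n_jobs, burn_in = 200_000, 10_000

lam = lam1 + lam2
gaps = rng.exponential(1.0 / lam, n_jobs)             # inter-arrival times
src1 = rng.random(n_jobs) < lam1 / lam                # job comes from J_1?
X = np.where(src1, rng.pareto(1.5, n_jobs) + 1.0,     # X_1: tail x^{-3/2}
             2.0 * (rng.pareto(2.0, n_jobs) + 1.0))   # X_2: tail 4 x^{-2}
A1 = np.where(src1, rng.random(n_jobs) < 0.4,         # Bernoulli switch:
              rng.random(n_jobs) < 0.7).astype(float) # p = 0.4, q = 0.7

W1 = W2 = 0.0
h1 = h2 = h_or = h_and = 0
for k in range(n_jobs):
    W1 = max(W1 - c1 * gaps[k], 0.0)                  # drain until arrival
    W2 = max(W2 - c2 * gaps[k], 0.0)
    if k >= burn_in:                                  # sample pre-jump state
        e1, e2 = W1 > b1 * u, W2 > b2 * u
        h1 += e1; h2 += e2; h_or += (e1 or e2); h_and += (e1 and e2)
    W1 += A1[k] * X[k]                                # switch distributes job
    W2 += (1.0 - A1[k]) * X[k]

n = n_jobs - burn_in                                  # estimates of (2.3)-(2.5)
print(h1 / n, h2 / n, h_or / n, h_and / n)
```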

2.2 The dual risk model

In the one-dimensional case, it is well known that there exists a duality between risk and queueing models; see, for example, [2]. The multivariate analogue shown in [7] allows us to formulate the dual risk model to the random switching model introduced above as follows:

Set \(N(t):=N_1(t)+N_2(t)\), so that \((N(t))_{t\ge 0}\) is a Poisson process with rate \(\lambda =\lambda _1+\lambda _2\). Define the multivariate risk process

$$\begin{aligned} \mathbf{R }(t) := \begin{pmatrix} R_1(t) \\ R_2(t) \end{pmatrix} := \sum _{k=1}^{N(t)} \mathbf{A }_k \mathbf{B }_k \begin{pmatrix}X_{1,k} \\ X_{2,k} \end{pmatrix}- t\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}=: \sum _{k=1}^{N(t)} \mathbf{A }_k \mathbf{B }_k \mathbf{X }_k - t \mathbf{c }, \end{aligned}$$
(2.6)

where \(\mathbf{B }_k\) are i.i.d. random matrices, independent of all other randomness, such that

$$\begin{aligned} {\mathbb {P}}\left( \mathbf{B }_k= \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \end{pmatrix}\right) = \frac{\lambda _1}{\lambda } \qquad \text { and }\qquad {\mathbb {P}}\left( \mathbf{B }_k= \begin{pmatrix} 0 &{} 0 \\ 0 &{} 1 \end{pmatrix}\right) = \frac{\lambda _2}{\lambda } \quad \text {for all }k. \end{aligned}$$

Note that the components of \((\mathbf{R }(t))_{t\ge 0}\) satisfy the net-profit condition if

$$\begin{aligned} c_i^*&:=-\frac{1}{\lambda }{\mathbb {E}}[R_i(1)] \ = \frac{1}{\lambda } \left( c_i - \lambda _1 \mathbb {E}[A_{i1}]\cdot \mathbb {E}[X_1]- \lambda _2 \mathbb {E}[A_{i2}] \cdot \mathbb {E}[X_2] \right) >0 \nonumber \\&\qquad \text {for }i=1,2. \end{aligned}$$
(2.7)

We will therefore assume (2.7) throughout the paper. Note that, as mentioned in [7], (2.7) implies existence of the stationary distribution of \(\mathbf{W }(t)\), i.e. \(\mathbf{W }\) in (2.2) is well-defined. For a proof of this fact in the univariate setting, see, for example, [31, Thm. 4.10].
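As a quick illustration (ours), the drift condition (2.7) is immediate to evaluate numerically; the parameter values below are the ones used in the examples of Sect. 5:

```python
# Small helper (ours) for the net-profit condition (2.7):
# c_i^* = (c_i - lam1*E[A_i1]*E[X_1] - lam2*E[A_i2]*E[X_2]) / lam  >  0.
def c_star(c_i, lam1, lam2, EA_i1, EA_i2, EX1, EX2):
    return (c_i - lam1 * EA_i1 * EX1 - lam2 * EA_i2 * EX2) / (lam1 + lam2)

# Parameters of Figs. 2 and 5: E[A_11]=0.4, E[A_12]=0.7, E[X_1]=3, E[X_2]=4.
print(c_star(5.0, 1.0, 1.0, 0.4, 0.7, 3.0, 4.0))   # c_1^* = 0.5 > 0
print(c_star(8.0, 1.0, 1.0, 0.6, 0.3, 3.0, 4.0))   # c_2^* = 2.5 > 0
```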

For the buffer \(u>0\) in the risk model, we define the ruin probabilities of the single components

$$\begin{aligned} \Psi _i(u_i) := {\mathbb {P}}(R_i(t) - u_i>0 \text { for some }t>0), \quad i=1,2, \end{aligned}$$
(2.8)

the ruin probability for at least one component

$$\begin{aligned} \Psi _{\vee }(u) := {\mathbb {P}}\left( \max _{i\in \{1,2\}} \left( R_i(t)-u_i\right)> 0 \text { for some }t> 0\right) , \end{aligned}$$
(2.9)

and the ruin probability for all components

$$\begin{aligned} \Psi _{\wedge }(u) := {\mathbb {P}}\left( \left( R_i(t_i)-u_i\right)> 0 \text { for some }t_i> 0, i=1,2\right) , \end{aligned}$$
(2.10)

where, as before, \(\mathbf{u }=\mathbf{b }u\) for \(\mathbf{b }\in (0,1)^2\) with \(b_1+b_2=1\).

The following lemma allows us to gather information about the bivariate storage process in the switching model by performing calculations on our dual risk model.

Lemma 2.1

Consider the distributional limit of the workload process \(\mathbf{W }\) and the risk process \((\mathbf{R }(t))_{t\ge 0}\) defined in (2.6) and assume (2.7). Then, the workload exceedance probabilities (2.3), (2.4), and (2.5), and the ruin probabilities (2.8), (2.9), and (2.10) satisfy

$$\begin{aligned} \Upsilon _i(u_i)&= \Psi _i(u_i),\\ \Upsilon _\vee (u)&= \Psi _\vee (u), \\ \text {and} \quad \Upsilon _\wedge (u)&= \Psi _\wedge (u), \quad u>0. \end{aligned}$$

Proof

This follows directly from [7, Lem. 1] by letting \(N\rightarrow \infty \) and using the so-called PASTA property; see [2, Thm. 6.1]. \(\square \)

Note that in the ruin context it is common (see, for example, [4] or [9]) to consider also the simultaneous ruin probability for all components

$$\begin{aligned} \Psi _{\wedge ,\text {sim}}(u) := {\mathbb {P}}\left( \min _{i\in \{1,2\}} \left( R_i(t)-u_i\right)> 0 \text { for some }t> 0\right) . \end{aligned}$$
(2.11)

As we will see, results on \(\Psi _{\wedge ,\text {sim}}\) can sometimes be shown by analogy to those on \(\Psi _\vee \), and we shall do so whenever it seems suitable. However, \(\Psi _{\wedge ,\text {sim}}\) has no counterpart in the switching model.

It is clear from the above definitions that for all \(\mathbf{u }=\mathbf{b }u\in (0,\infty )^2\)

$$\begin{aligned} \Psi _{\wedge ,\text {sim}}(u) \le \Psi _\wedge (u)= \Psi _1(b_1u) + \Psi _2(b_2u) - \Psi _\vee (u), \end{aligned}$$
(2.12)

and likewise

$$\begin{aligned} \Upsilon _\wedge (u) = \Upsilon _1(b_1 u) + \Upsilon _2(b_2 u) - \Upsilon _\vee (u). \end{aligned}$$
(2.13)

We will therefore focus in our study on \(\Upsilon _\vee \) and \(\Psi _\vee \) and then derive the corresponding results for \(\Upsilon _\wedge \) and \(\Psi _\wedge \) via (2.13) and (2.12).

2.3 Further notation

To keep notation short, we write \({\mathbb {R}}_{\ge 0}\) and \({\mathbb {R}}_{\le 0}\) for the non-negative and non-positive half-lines of the real numbers, respectively, and likewise use the notation \({\mathbb {R}}_{>0}\) and \({\mathbb {R}}_{<0}\), so that, in particular, \({\mathbb {R}}_{<0}^2=(-\infty ,0)\times (-\infty ,0)\). Further, \({\overline{{\mathbb {R}}}}={\mathbb {R}}\cup \{-\infty ,\infty \}\). For any set \(M\subset {\mathbb {R}}^q\), we write \({\overline{M}}\) for its closure and \(\partial M\) for its boundary, i.e., \({\overline{M}}=M\cup \partial M\).

We write \(\sim \) for asymptotic equivalence at infinity, i.e., \(f\sim g\) if and only if \(\lim _{x\rightarrow \infty } \frac{f(x)}{g(x)}=1\), while \(\not \sim \) indicates that such a convergence does not hold. Moreover, we use the standard Landau symbols, i.e., \(f(x) = o(g(x))\) if and only if \(f(x)/g(x) \rightarrow 0\) as \(x\rightarrow \infty \).

Lastly, throughout the paper, we set \(\frac{1}{\infty }:=0\) and \(\frac{1}{0}=:\infty \), which yields in particular \({\overline{F}}(\frac{x}{0}):=0\) for any tail function \({\overline{F}}\).

3 The heavy-tailed case

In this section, we will assume that the distribution of the arriving jobs is heavy-tailed. A very general class of heavy-tailed distributions is given by the subexponential distributions, and we will consider this case in Sect. 3.1. However, as we will see, the asymptotics we obtain in this case are not very explicit, in the sense that in a multivariate setting they do not allow for a direct statement about the speed of decay of the exceedance probabilities. We will therefore proceed and treat the special case of regularly varying distributions in Sect. 3.2, where speeds of decay can be derived more easily.

3.1 The subexponential case

Recall first that a random variable X in \({\mathbb {R}}_{>0}\) with distribution function F is called subexponential if

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{\overline{F^{*2}}(x)}{{\overline{F}}(x)} = 2, \end{aligned}$$

where \(F^{*2}\) is the second convolution power of F, i.e., the distribution of \(X'+ X''\), where \(X'\) and \(X''\) are i.i.d. copies of X. In this case, we write \(F\in {\mathcal {S}}\) or \(X\in {\mathcal {S}}\).
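As a quick numerical illustration (ours) of this defining property, the ratio can be estimated by Monte Carlo for a classical Pareto distribution, whose tail is known exactly:

```python
# Monte Carlo check (ours) of the subexponential ratio for Pareto(alpha):
# P(X' + X'' > x) / P(X > x) should approach 2 as x grows.
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.5, 2_000_000
Xp = rng.pareto(alpha, n) + 1.0       # classical Pareto: P(X > x) = x^{-alpha}
Xpp = rng.pareto(alpha, n) + 1.0      # independent copy
for x in (10.0, 30.0, 100.0):
    print(x, np.mean(Xp + Xpp > x) / x ** -alpha)   # tends to 2
```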

As we are considering a multivariate setting in this paper, our proofs use a concept of multivariate subexponentiality. Several approaches for this exist, and we shall rely here on the definition and results as given in [36], which also provides a comprehensive overview of previous notions of multivariate subexponentiality as given in [18, 33].

3.1.1 Results in the risk context

We start by presenting our main theorem in the subexponential setting, which we state in terms of the risk process defined in Sect. 2.2. Its proof relies on the theory developed in [36] and is given in Sect. 6.1.

Theorem 3.1

For all \(u>0, v\ge 0\) set

$$\begin{aligned} g(u,v):= & {} \frac{\lambda _1}{\lambda }\cdot {\mathbb {E}}\left[ {\overline{F}}_1\left( \min \left\{ \tfrac{u b_1 + vc_1^*}{A_{11}}, \tfrac{u b_2 + vc_2^*}{A_{21}}\right\} \right) \right] \nonumber \\&+ \frac{\lambda _2}{\lambda }\cdot {\mathbb {E}}\left[ {\overline{F}}_2\left( \min \left\{ \tfrac{u b_1 + vc_1^*}{A_{12}}, \tfrac{u b_2 + vc^*_2}{A_{22}}\right\} \right) \right] , \end{aligned}$$
(3.1)

and assume that

$$\begin{aligned} \theta:= & {} \frac{\lambda _1}{\lambda }\cdot {\mathbb {E}}[X_1] \cdot {\mathbb {E}}\left[ \min \left\{ \frac{A_{11}}{c_1^*}, \frac{A_{21}}{c_2^*}\right\} \right] \nonumber \\&+ \frac{\lambda _2}{\lambda }\cdot {\mathbb {E}}[X_2] \cdot {\mathbb {E}}\left[ \min \left\{ \frac{A_{12}}{c_1^*}, \frac{A_{22}}{c_2^*}\right\} \right] >0. \end{aligned}$$
(3.2)

Further, define a cdf by

$$\begin{aligned} F_{\text {subexp}}(u) := 1- \theta ^{-1}\int _0^\infty g(u,v) \mathop {}\!\mathrm {d}v, \quad u\ge 0, \end{aligned}$$
(3.3)

and assume that \(F_{\text {subexp}} \in {\mathcal {S}}\). Then

$$\begin{aligned} \Psi _\vee (u) \sim \int _0^\infty g(u,v) \mathop {}\!\mathrm {d}v, \quad \text {as }u\rightarrow \infty . \end{aligned}$$
(3.4)
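The right-hand side of (3.4) can be evaluated numerically without difficulty. The following sketch (ours) does so for a deterministic switch \(A_{11}=d_1\), \(A_{12}=d_2\), for which the expectations in (3.1) drop out; parameters and Pareto tails are those of Fig. 5 below, so the values can be compared with the regular-variation asymptotics \(\Upsilon _\vee (u)\approx 0.756\cdot u^{-1/2}\) derived there:

```python
# Quadrature sketch (ours) for the right-hand side of (3.4) with a
# deterministic switch; compare Remark 3.10 on the role of u inside g(u,v).
import numpy as np
from scipy.integrate import quad

lam1 = lam2 = 1.0; lam = lam1 + lam2
b1, b2, d1, d2 = 0.8, 0.2, 0.4, 0.7
c1s, c2s = 0.5, 2.5                        # c_1^*, c_2^* from (2.7)

Fbar1 = lambda x: min(1.0, x ** -1.5)      # Pareto tails of Fig. 5
Fbar2 = lambda x: min(1.0, 4.0 * x ** -2)

def g(u, v):                               # the function (3.1), A constant
    m1 = min((u * b1 + v * c1s) / d1, (u * b2 + v * c2s) / (1.0 - d1))
    m2 = min((u * b1 + v * c1s) / d2, (u * b2 + v * c2s) / (1.0 - d2))
    return lam1 / lam * Fbar1(m1) + lam2 / lam * Fbar2(m2)

for u in (50.0, 200.0, 1000.0):
    val, _ = quad(lambda v: g(u, v), 0.0, np.inf, limit=200)
    print(u, val, 0.756 * u ** -0.5)       # approaches the Fig. 5 asymptotics
```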

The asymptotic behavior of the ruin probabilities for single components in the subexponential setting, as presented in the next lemma, can be shown using classical results. Again, a proof is given in Sect. 6.1.

Lemma 3.2

Assume that

$$\begin{aligned} F_I^{i}(x)&:= \frac{1}{\lambda _1\cdot \mathbb {E}[A_{i1}]\cdot \mathbb {E}[X_1] + \lambda _2 \cdot \mathbb {E}[A_{i2}]\cdot \mathbb {E}[X_2]} \\&\qquad \times \mathbb {E}\left[ \lambda _1 \int _0^x \overline{F}_1(\tfrac{y}{A_{i1}}) \mathop {}\!\mathrm {d}y + \lambda _2 \int _0^x \overline{F}_2(\tfrac{y}{A_{i2}}) \mathop {}\!\mathrm {d}y\right] \end{aligned}$$

is subexponential. Then, the ruin probability for a single component (2.8) satisfies

$$\begin{aligned} \Psi _i(u) \sim \frac{1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty \left( \lambda _1 {\overline{F}}_1\left( \tfrac{u + v c_i^*}{A_{i1}} \right) + \lambda _2 {\overline{F}}_2\left( \tfrac{u +vc_i^*}{A_{i2}} \right) \right) \mathop {}\!\mathrm {d}v \right] ,\quad \text {as }u\rightarrow \infty . \end{aligned}$$
(3.5)

Lastly, we consider the joint ruin probability \(\Psi _\wedge \) in the following proposition.

Proposition 3.3

Assume that \(F_{\text {subexp}}\) as in (3.3), and \(F_I^1,F_I^2\) as in Lemma 3.2 are in \({\mathcal {S}}\). Recall \(\mathbf{u }=\mathbf{b }u\) with \(b_1+b_2=1\) and \(\mathbf{b }=(b_1,b_2)^\top \in (0,1)^2\). Then, if

$$\begin{aligned} \Psi _1(b_1u)+\Psi _2(b_2u) \not \sim \Psi _\vee (u), \end{aligned}$$
(3.6)

we obtain, as \(u\rightarrow \infty \),

$$\begin{aligned} \Psi _{\wedge }(u)&\sim \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_1\left( \max \left\{ \tfrac{ub_1 + vc_1^*}{A_{11}}, \tfrac{ub_2 + vc_2^*}{A_{21}}\right\} \right) \mathop {}\!\mathrm {d}v \right] \\&\qquad + \frac{\lambda _2}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_2\left( \max \left\{ \tfrac{ub_1 + vc_1^*}{A_{12}}, \tfrac{ub_2 + vc_2^*}{A_{22}}\right\} \right) \mathop {}\!\mathrm {d}v \right] . \end{aligned}$$

Conversely, if (3.6) fails, then with g(uv) as in (3.1),

$$\begin{aligned} \Psi _{\wedge }(u) =o\left( \int _0^\infty g(u,v) \mathrm{d}v \right) , \quad \text {as }u\rightarrow \infty . \end{aligned}$$
(3.7)

3.1.2 Results in the switch context

With the help of Lemma 2.1, we may now directly summarize our findings from the last section to provide a rather explicit insight into the asymptotic behavior of the workload barrier exceedance probabilities in the switching model defined in Sect. 2.1.

Corollary 3.4

(Asymptotics of the exceedance probabilities under subexponentiality) Assume that \(F_{\text {subexp}}\) as in (3.3), and \(F_I^1,F_I^2\) as in Lemma 3.2, are in \({\mathcal {S}}\). Define the resulting integrated tail functions for servers \(i=1,2\) via

$$\begin{aligned} {\overline{F}}_{I,i}(u,\mathbf{A }):= \lambda _1 \int _0^\infty {\overline{F}}_1\left( \tfrac{u + v c_i^*}{A_{i1}} \right) \mathop {}\!\mathrm {d}v + \lambda _2 \int _0^\infty {\overline{F}}_2\left( \tfrac{u +vc_i^*}{A_{i2}} \right) \mathop {}\!\mathrm {d}v, \quad u>0. \end{aligned}$$
(3.8)

Then, the workload exceedance probabilities (2.3), (2.4), and (2.5) satisfy

$$\begin{aligned} \Upsilon _i(b_i u)&\sim \frac{1}{\lambda }\cdot {\mathbb {E}}\left[ {\overline{F}}_{I,i}(b_i u,\mathbf{A })\right] , \quad i=1,2,\\ \Upsilon _\vee (u)&\sim \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_1\left( \min \left\{ \tfrac{ub_1 + vc_1^*}{A_{11}}, \tfrac{ub_2 + vc_2^*}{A_{21}}\right\} \right) \mathop {}\!\mathrm {d}v \right] \\&\qquad + \frac{\lambda _2}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_2\left( \min \left\{ \tfrac{ub_1 + vc_1^*}{A_{12}}, \tfrac{ub_2 + vc_2^*}{A_{22}}\right\} \right) \mathop {}\!\mathrm {d}v \right] \end{aligned}$$

and, assuming additionally that

$$\begin{aligned} \Upsilon _1(b_1u)+\Upsilon _2(b_2u) \not \sim \Upsilon _\vee (u), \end{aligned}$$
(3.9)

we obtain

$$\begin{aligned} \Upsilon _\wedge (u)&\sim \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_1\left( \max \left\{ \tfrac{ub_1 + vc_1^*}{A_{11}}, \tfrac{ub_2 + vc_2^*}{A_{21}}\right\} \right) \mathop {}\!\mathrm {d}v \right] \\&\qquad + \frac{\lambda _2}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_2\left( \max \left\{ \tfrac{ub_1 + vc_1^*}{A_{12}}, \tfrac{ub_2 + vc_2^*}{A_{22}}\right\} \right) \mathop {}\!\mathrm {d}v \right] . \end{aligned}$$

If (3.9) fails, then

$$\begin{aligned} \Upsilon _{\wedge }(u) =o\left( {\mathbb {E}}\left[ {\overline{F}}_{I,1}(b_1 u,\mathbf{A })+{\overline{F}}_{I,2}(b_2 u,\mathbf{A })\right] \right) . \end{aligned}$$

Proof

This is clear from Lemma 2.1, Theorem 3.1, Lemma 3.2, and Proposition 3.3. \(\square \)

3.2 The regularly varying case

In this section, we will restrict the class of considered heavy-tailed distributions and assume that the tail functions of the arriving jobs are regularly varying. As we will see, this restriction leads to a much more explicit description of the asymptotic behavior of ruin and exceedance probabilities.

Let \(f:{\mathbb {R}}\rightarrow (0,\infty )\) be a measurable function and recall that f is regularly varying (at infinity) with index \(\alpha \ge 0\) if, for all \(\lambda >0\), it holds that

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{f(\lambda t)}{f(t)} =\lambda ^{\alpha }, \end{aligned}$$

with the case \(\alpha =0\) typically being referred to as slowly varying. In this case, we write \(f\in {\text {RV}}(\alpha )\). A real-valued random variable X is called regularly varying with index \(\alpha \ge 0\), i.e., \(X\in {\text {RV}}(\alpha )\), if its tail function \({\overline{F}}(\cdot )={\mathbb {P}}(X>\cdot )\) is regularly varying with index \(-\alpha \). It is well-known that \({\text {RV}}(\alpha )\subset {\mathcal {S}}\) for all \(\alpha \ge 0\), cf. [3, Prop. X.1.4].

Further, we follow [27] and call a random vector \(\mathbf{Z }\) on \({\mathbb {R}}^q\) multivariate regularly varying if there exists a non-null measure \(\mu \) on \(\overline{{\mathbb {R}}}^q\backslash \{\mathbf{0 }\}\) such that

  1. (i)

    \(\mu \left( \overline{{\mathbb {R}}}^q\backslash {\mathbb {R}}^q\right) =0\),

  2. (ii)

    \(\mu (M)<\infty \) for all Borel sets M bounded away from \(\mathbf{0 }\),

  3. (iii)

    for all Borel sets M satisfying \(\mu (\partial M)=0\) it holds that

    $$\begin{aligned} \frac{{\mathbb {P}}(\mathbf{Z }\in tM)}{{\mathbb {P}}(\left\Vert \mathbf{Z }\right\Vert >t)} \rightarrow \mu (M), \quad \text {as } t\rightarrow \infty . \end{aligned}$$
    (3.10)

The norm \(\Vert \cdot \Vert \) will typically be chosen to be the \(L^1\)-norm in this article. If \(\mathbf{Z }\) is multivariate regularly varying, necessarily there exists \(\alpha >0\) such that, for all M as in (3.10) and \(t>0\),

$$\begin{aligned} \mu (tM) = t^{-\alpha }\mu (M). \end{aligned}$$

In this case, we write \(\mathbf{Z }\in {\text {MRV}}(\alpha ,\mu )\).

Note that in the one-dimensional case the above definitions coincide. We refer to [8, 35] for the above facts and for more detailed information on multivariate regular variation.

3.2.1 Results in the risk context

We will now present our first main result in the regularly varying context. Note that this is not obtained by applying the above results to the special case of regular variation; instead, we give an independent proof of Theorem 3.5 in Sect. 6.2 that relies on results from [28]. This approach also allows us to consider the simultaneous ruin probability, which had not been possible with the methods used in Sect. 3.1 due to the stronger assumptions on the ruin sets involved there.

Theorem 3.5

Assume the claim size variables \(X_1,X_2\) are regularly varying, i.e., \(X_1\in {\text {RV}}(\alpha _1)\), and \(X_2 \in {\text {RV}}(\alpha _2)\) for \(\alpha _1,\alpha _2>1\). Then with \(\mathbf{A }\), \(\mathbf{B }\) from Sect. 2 and \(\mathbf{X }=(X_1,X_2)^\top \), it follows that there exists a measure \(\mu ^*\) such that

$$\begin{aligned} \mathbf{A }\mathbf{B }\mathbf{X }\in {\text {MRV}}(\min \{\alpha _1,\alpha _2\},\mu ^*). \end{aligned}$$

Further,

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Psi _{\vee }(u)}{u\cdot {\mathbb {P}}(\left\Vert \mathbf{A }\mathbf{B }\mathbf{X }\right\Vert >u)} = \int _0^\infty \mu ^*(v\mathbf{c }^*+\mathbf{b }+ L_{\vee })\mathop {}\!\mathrm {d}v =: C_{\vee }<\infty , \end{aligned}$$
(3.11)

and

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Psi _{\wedge ,\text {sim}}(u)}{u\cdot {\mathbb {P}}(\left\Vert \mathbf{A }\mathbf{B }\mathbf{X }\right\Vert >u)} = \int _0^\infty \mu ^*(v\mathbf{c }^*+\mathbf{b }+ L_{\wedge ,\text {sim}})\mathop {}\!\mathrm {d}v =: C_{\wedge ,\text {sim}}<\infty , \end{aligned}$$
(3.12)

with \(\mathbf{c }^*= (c_1^*,c_2^*)^\top \), \(L_\vee = {\mathbb {R}}^2\backslash {\mathbb {R}}_{\le 0}^2,\) and \(L_{\wedge ,\text {sim}} = {\mathbb {R}}_{>0}^2\).

Note that, by conditioning on \(\mathbf{B }\), we have

$$\begin{aligned} {\mathbb {P}}(\left\Vert \mathbf{A }\mathbf{B }\mathbf{X }\right\Vert>u)&= \frac{\lambda _1}{\lambda }\cdot {\mathbb {P}}\left( \left\Vert \begin{pmatrix} A_{11}X_1 \\ A_{21}X_1\end{pmatrix}\right\Vert>u\right) +\frac{\lambda _2}{\lambda } \cdot {\mathbb {P}}\left( \left\Vert \begin{pmatrix} A_{12}X_2 \\ A_{22}X_2\end{pmatrix}\right\Vert >u\right) \nonumber \\&= \lambda ^{-1} \left( \lambda _1 {\overline{F}}_1(u) + \lambda _2{\overline{F}}_2(u)\right) . \end{aligned}$$
(3.13)

Using the limiting-measure property of \(\mu ^*\), it is further possible to explicitly compute the constants \(C_\vee \) and \(C_{\wedge ,\text {sim}}\) in Theorem 3.5. This then yields the following proposition, whose proof is also postponed to Sect. 6.2.

Proposition 3.6

Assume \(X_1\in {\text {RV}}(\alpha _1)\) and \(X_2 \in {\text {RV}}(\alpha _2)\) for \(\alpha _1,\alpha _2>1\) and set

$$\begin{aligned} \zeta := \lim _{t\rightarrow \infty } \frac{\lambda _1 {\overline{F}}_1(t)}{\lambda _2 {\overline{F}}_2(t)} \in [0,\infty ], \end{aligned}$$

so that clearly \(\zeta \in (0,\infty )\) implies \(\alpha _1=\alpha _2\). Then

$$\begin{aligned}&\Psi _{\vee }(u) \sim \frac{C_\vee }{\lambda } \cdot u \left( \lambda _1{\overline{F}}_1(u) + \lambda _2 {\overline{F}}_2(u)\right) \qquad \text {with} \end{aligned}$$
(3.14)
$$\begin{aligned}&C_\vee := {\mathbb {E}}\left[ \int _0^\infty \frac{\zeta \cdot \left( \min \left\{ \frac{vc_1^*+b_1}{A_{11}},\frac{vc_2^*+b_2}{A_{21}}\right\} \right) ^{-\alpha _1} +\left( \min \left\{ \frac{vc_1^*+b_1}{A_{12}},\frac{vc_2^*+b_2}{A_{22}}\right\} \right) ^{-\alpha _2}}{1+\zeta } \mathop {}\!\mathrm {d}v\right] , \end{aligned}$$
(3.15)

and

$$\begin{aligned}&\Psi _{\wedge ,\text {sim}}(u) \sim \frac{C_{\wedge ,\text {sim}}}{\lambda }\cdot u \left( \lambda _1{\overline{F}}_1(u) + \lambda _2 {\overline{F}}_2(u)\right) \qquad \text {with} \nonumber \\&C_{\wedge ,sim}:= {\mathbb {E}}\left[ \int _0^\infty \frac{\zeta \cdot \left( \max \left\{ \frac{vc_1^*+b_1}{A_{11}},\frac{vc_2^*+b_2}{A_{21}}\right\} \right) ^{-\alpha _1}+\left( \max \left\{ \frac{vc_1^*+b_1}{A_{12}},\frac{vc_2^*+b_2}{A_{22}}\right\} \right) ^{-\alpha _2}}{1+\zeta }\mathop {}\!\mathrm {d}v\right] , \end{aligned}$$
(3.16)

where we interpret \(\frac{\infty \cdot x}{\infty }:=x\).

We continue our study of the asymptotics of the risk model by determining the asymptotic behavior of \(\Psi _\wedge \). It is clear from Equations (2.12) and (3.14) that in order to do this, we first have to determine the asymptotic behavior of the ruin probabilities for single components (2.8), which will be given by the following lemma.

Lemma 3.7

Assume \(X_1\in {\text {RV}}(\alpha _1)\), and \(X_2 \in {\text {RV}}(\alpha _2)\) for \(\alpha _1,\alpha _2>1\). Then, the ruin probability for a single component (2.8) satisfies (3.5).

With this, the following proposition is straightforward. Again, the proof is given in Sect. 6.2.

Proposition 3.8

Assume \(X_1\in {\text {RV}}(\alpha _1)\) and \(X_2 \in {\text {RV}}(\alpha _2)\) for \(\alpha _1,\alpha _2>1\). Recall \(\mathbf{u }=\mathbf{b }u\) with \(b_1+b_2=1\) and \(\mathbf{b }=(b_1,b_2)^\top \in (0,1)^2\). Then, if (3.6) holds,

$$\begin{aligned} \begin{aligned} \Psi _{\wedge }(u)&\sim \frac{1}{\lambda }\cdot \Big (\lambda _1 \left( {\mathbb {E}}\left[ \overline{F}_{1,I}(u,\mathbf{A }) \right] - C_\vee u \overline{F}_1(u) \right) \\&\qquad \ \qquad + \lambda _2 \left( {\mathbb {E}}\left[ \overline{F}_{2,I}(u,\mathbf{A })\right] - C_\vee u \overline{F}_2(u) \right) \Big ), \end{aligned} \end{aligned}$$
(3.17)

with \(C_\vee \) as defined in (3.15) and with the weighted integrated tail functions

$$\begin{aligned} \overline{F}_{j,I}(u,\mathbf{A })&:= \int _{0}^\infty \overline{F}_j \left( \frac{u b_1 + v c_1^*}{A_{1j}}\right) \mathop {}\!\mathrm {d}v + \int _{0}^\infty \overline{F}_j \left( \frac{u b_2+ vc_2^*}{A_{2j}}\right) \mathop {}\!\mathrm {d}v, \end{aligned}$$

for \(u>0, j=1,2\).

Otherwise, if (3.6) fails, then

$$\begin{aligned} \Psi _\wedge (u) = o\left( u\cdot \left( {\overline{F}}_1(u) + {\overline{F}}_2(u)\right) \right) . \end{aligned}$$
(3.18)

3.2.2 Results in the switch context

Again, we may now summarize our findings in the context of the switching model defined in Sect. 2.1 as follows.

Corollary 3.9

(Asymptotics of the exceedance probabilities for regularly varying jobs) Assume the workload variables \(X_1,X_2\) are regularly varying, i.e., \(X_1\in {\text {RV}}(\alpha _1)\) and \(X_2 \in {\text {RV}}(\alpha _2)\) for \(\alpha _1,\alpha _2>1\). Set

$$\begin{aligned} \zeta := \lim _{t\rightarrow \infty } \frac{\lambda _1 {\overline{F}}_1(t)}{\lambda _2 {\overline{F}}_2(t)} \in [0,\infty ], \end{aligned}$$

so that \(\zeta \in (0,\infty )\) implies \(\alpha _1=\alpha _2\). Recall \(C_\vee \) from (3.15) and the integrated tail functions for servers \(i=1,2\) from (3.8). Then, the workload exceedance probabilities (2.3) and (2.4) satisfy

$$\begin{aligned} \Upsilon _i(b_i u)&\sim \frac{1}{\lambda }\cdot {\mathbb {E}}\left[ {\overline{F}}_{I,i}(b_i u,\mathbf{A })\right] , \quad i=1,2,\\ \Upsilon _{\vee }(u)&\sim \frac{C_\vee }{\lambda }\cdot u (\lambda _1{\overline{F}}_1(u) + \lambda _2 {\overline{F}}_2(u)). \end{aligned}$$

Assuming additionally (3.9), the workload exceedance probability (2.5) satisfies

$$\begin{aligned} \Upsilon _{\wedge }(u)&\sim \frac{1}{\lambda }\cdot \Big ({\mathbb {E}}\left[ \overline{F}_{I,1}(b_1 u,\mathbf{A }) \right] + {\mathbb {E}}\left[ \overline{F}_{I,2}(b_2 u,\mathbf{A }) \right] \\&\qquad \ \qquad - \left( \lambda _1 u\overline{F}_1(u) + \lambda _2 u\overline{F}_2(u)\right) C_\vee \Big ). \end{aligned}$$

If (3.9) fails, then

$$\begin{aligned} \Upsilon _{\wedge }(u) =o\left( u\cdot \left( {\overline{F}}_1(u) + {\overline{F}}_2(u)\right) \right) . \end{aligned}$$

Proof

This is clear from Lemmas 2.1, 3.7, and Propositions 3.6 and 3.8. \(\square \)

Remark 3.10

At first sight, the structure of the asymptotic formulae for \(\Psi _{\vee }\) and \(\Upsilon _{\vee }\) in the regularly varying and the subexponential case looks quite similar, as both formulae rely on a minimum inside an integral. However, in the case of regularly varying claims the formulae immediately provide the principal behavior of the tail, and only the constant needs further computation. By contrast, in the subexponential case, the initial capital \(\mathbf{u }\) enters the integrand itself, so that even to obtain the asymptotics up to a constant one has to evaluate the integral explicitly.

Example 3.11

In the setting of Corollary 3.9, assume that \(\alpha _1<\alpha _2\). Then, in all asymptotics given in Corollary 3.9, the terms including \(F_2\), which are regularly varying with index \(-\alpha _2+1\), are dominated by the terms involving \(F_1\), which are regularly varying with index \(-\alpha _1+1\). This yields that in this case

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Upsilon _i(b_i u)}{{\mathbb {E}}\left[ \int _{0}^\infty {\overline{F}}_1\left( \tfrac{u b_i + v c_i^*}{A_{i1}}\right) \mathop {}\!\mathrm {d}v\right] }&= \frac{\lambda _1}{\lambda }, \quad i=1,2, \end{aligned}$$

as long as \({\mathbb {P}}(A_{i1}=0)<1\). Similarly, since \(\zeta =\infty \), we obtain

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Upsilon _{\vee }(u)}{u\cdot {\overline{F}}_1(u)}&=\frac{\lambda _1 C_\vee }{\lambda }=\frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty \left( \min \left\{ \tfrac{vc_1^*+b_1}{A_{11}},\tfrac{vc_2^*+b_2}{1-A_{11}}\right\} \right) ^{-\alpha _1} \mathop {}\!\mathrm {d}v\right] . \end{aligned}$$

With these observations at hand, we may conclude that (3.9) holds if and only if

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{{\mathbb {E}}\left[ \int _{0}^\infty {\overline{F}}_1\left( \tfrac{u b_1 + v c_1^*}{A_{11}}\right) \mathop {}\!\mathrm {d}v + \int _{0}^\infty {\overline{F}}_1\left( \tfrac{u b_2 + v c_2^*}{1-A_{11}} \right) \mathop {}\!\mathrm {d}v \right] }{{\mathbb {E}}\left[ \int _0^\infty \left( \min \left\{ \frac{vc_1^*+b_1}{A_{11}},\frac{vc_2^*+b_2}{1-A_{11}}\right\} \right) ^{-\alpha _1} \mathop {}\!\mathrm {d}v\right] u \cdot {\overline{F}}_1(u) }\ne 1. \end{aligned}$$
(3.19)

Thus, given (3.19), we get

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Upsilon _{\wedge }(u)}{{\mathbb {E}}\left[ \int _{0}^\infty {\overline{F}}_1\left( \tfrac{u b_1 + v c_1^*}{A_{11}}\right) \mathop {}\!\mathrm {d}v + \int _{0}^\infty {\overline{F}}_1\left( \tfrac{u b_2 + v c_2^*}{1-A_{11}} \right) \mathop {}\!\mathrm {d}v \right] - C_\vee u \cdot {\overline{F}}_1(u)} = \frac{\lambda _1}{\lambda }, \end{aligned}$$

while otherwise

$$\begin{aligned} \Upsilon _{\wedge }(u) =o\left( u\cdot {\overline{F}}_1(u)\right) . \end{aligned}$$

Remark 3.12

The above example can be generalized in the sense that a regularly varying tail dominates any lighter tail, no matter whether this is regularly varying as well or not.

Indeed, assuming w.l.o.g. that \(X_1\in {\text {RV}}(\alpha )\) for \(\alpha >1\) and that \(X_2\) is such that

$$\begin{aligned} {\overline{F}}_2(x) = o({\overline{F}}_1(x)), \end{aligned}$$
(3.20)

one can prove in complete analogy to the results from the last subsection that the workload exceedance probabilities (2.3), (2.4), and (2.5) satisfy

$$\begin{aligned} \Upsilon _i(b_iu)&\sim \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _{0}^\infty {\overline{F}}_1\left( \tfrac{b_i u + vc_i^*}{A_{i1}}\right) \mathop {}\!\mathrm {d}v\right] , \quad i=1,2,\\ \Upsilon _{\vee }(u)&\sim \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty \Big (\min \Big \{\tfrac{vc_1^*+b_1}{A_{11}},\tfrac{vc_2^*+b_2}{A_{21}}\Big \}\Big )^{-\alpha }\mathop {}\!\mathrm {d}v\right] \cdot u {\overline{F}}_1(u), \end{aligned}$$

and, assuming additionally that (3.9) holds,

$$\begin{aligned} \Upsilon _{\wedge }(u)&\sim \frac{\lambda _1}{\lambda }\left( {\mathbb {E}}\left[ \int _{0}^\infty {\overline{F}}_1\left( \tfrac{b_1 u + vc_1^*}{A_{11}}\right) \mathop {}\!\mathrm {d}v \right] + {\mathbb {E}}\left[ \int _{0}^\infty {\overline{F}}_1\left( \tfrac{b_2 u + vc_2^*}{A_{21}}\right) \mathop {}\!\mathrm {d}v \right] \right. \\&\qquad \qquad \left. - {\mathbb {E}}\left[ \int _0^\infty \Big (\min \Big \{\tfrac{vc_1^*+b_1}{A_{11}},\tfrac{vc_2^*+b_2}{A_{21}}\Big \}\Big )^{-\alpha }\mathop {}\!\mathrm {d}v\right] \cdot u{\overline{F}}_1(u)\right) , \end{aligned}$$

while otherwise

$$\begin{aligned} \Upsilon _{\wedge }(u) =o\left( u\cdot {\overline{F}}_1(u) \right) . \end{aligned}$$

4 The light-tailed case

In this section, we will study the asymptotic behavior of ruin/workload exceedance probabilities for claims/jobs that are typically small, i.e., we will assume throughout this section that the moment generating functions \(\varphi _{X_j}(x)={\mathbb {E}}[\exp (xX_j)]\), \(j=1,2\), are such that

$$\begin{aligned} \varphi _{X_j}(x_j)<\infty \;\text { for some } x_j>0, \quad j=1,2. \end{aligned}$$
(4.1)

4.1 Results in the risk context

As in the heavy-tailed setting, we start by studying the dual risk model. Again, the ruin probabilities for the single components are particularly easy to treat. The following lemma is obtained by a direct application of Lundberg’s well-known inequality and the Cramér–Lundberg approximation; see, for example, [3, Thms. IV.5.2 and IV.5.3]. In Sect. 6.3, a short proof is provided.

Lemma 4.1

Assume the claim size variables \(X_1,X_2\) satisfy (4.1) and assume there exist (unique) solutions \(\kappa _1,\kappa _2>0\) to

$$\begin{aligned} c_i\kappa _i&= {\mathbb {E}}\left[ \lambda _1(\varphi _{X_1}(\kappa _i A_{i1}) -1) + \lambda _2 (\varphi _{X_2}(\kappa _i A_{i2}) -1)\right] , \quad i=1,2. \end{aligned}$$
(4.2)

Then, the ruin probabilities of the single components satisfy

$$\begin{aligned} \Psi _i(u) \le \hbox {e}^{-\kappa _i u} \;\text {for all }u>0,\quad \text {and} \quad \Psi _i(u) \sim C_i \hbox {e}^{-\kappa _i u}, \quad i=1,2, \end{aligned}$$

where

$$\begin{aligned} C_i&= \frac{\lambda c_i^*}{ {\mathbb {E}}[\lambda _1 A_{i1}\varphi _{X_1}'(\kappa _iA_{i1})+ \lambda _2A_{i2}\varphi _{X_2}'(\kappa _iA_{i2})] -c_i} , \quad i=1,2. \end{aligned}$$
(4.3)

Using (2.12) in the form \(\Psi _\vee (u)\le \Psi _1(b_1 u)+\Psi _2(b_2 u)\), we easily derive the following Lundberg-type bound for \(\Psi _\vee \) from the above lemma.

Corollary 4.2

Assume the claim size variables \(X_1,X_2\) satisfy (4.1) and assume there exist (unique) solutions \(\kappa _1,\kappa _2>0\) to (4.2). Then the ruin probability for at least one component satisfies

$$\begin{aligned} \Psi _\vee (u)\le (\hbox {e}^{-\kappa _1 b_1 u}+\hbox {e}^{-\kappa _2 b_2 u})\wedge 1 \quad \text {for all }u>0. \end{aligned}$$

Remark 4.3

Similarly to what has been done in [9, Thm. 6.1], it is also possible to derive a Lundberg bound for \(\Psi _{\wedge ,\text {sim}}\) via classical martingale techniques. Indeed, one can show that for any \(\kappa _1,\kappa _2>0\) such that

$$\begin{aligned} \begin{aligned} \kappa _1c_1+\kappa _2 c_2&= \lambda _1\left( {\mathbb {E}}\left[ \varphi _{X_1}(\kappa _1 A_{11})\varphi _{X_1}(\kappa _2(1-A_{11})) \right] -1\right) \\&\quad + \lambda _2\left( {\mathbb {E}}\left[ \varphi _{X_2}(\kappa _1 A_{12})\varphi _{X_2}(\kappa _2(1-A_{12}))\right] -1\right) \end{aligned} \end{aligned}$$

it holds that

$$\begin{aligned} \Psi _{\wedge ,\text {sim}}(u) \le \hbox {e}^{-(\kappa _1 b_1 + \kappa _2 b_2) u},\quad u>0. \end{aligned}$$

As this has no implications for the considered queueing model, we will not go into further details here.

To derive the asymptotics of \(\Psi _{\wedge }\), \(\Psi _{\wedge ,\text {sim}}\) and \(\Psi _{\vee }\), we rely on results from [4], which lead to the following theorem.

Theorem 4.4

Assume the claim size variables \(X_1,X_2\) satisfy (4.1) and assume there exist (unique) solutions \(\kappa _1,\kappa _2>0\) to (4.2). Then,

$$\begin{aligned} \Psi _\vee (u)&\sim C_1\cdot \hbox {e}^{-\kappa _1 b_1 u } + C_2\cdot \hbox {e}^{-\kappa _2 b_2 u}, \\ \Psi _\wedge (u)&= o\left( C_1 \cdot \hbox {e}^{-\kappa _1 b_1 u} + C_2 \cdot \hbox {e}^{-\kappa _2b_2 u}\right) , \\ \text {and}\quad \Psi _{\wedge ,\text {sim}}(u)&= o\left( C_1 \cdot \hbox {e}^{-\kappa _1 b_1 u} + C_2 \cdot \hbox {e}^{-\kappa _2b_2 u}\right) , \end{aligned}$$

with \(C_1,C_2\) given in (4.3).

4.2 Results in the switch context

Again, using Lemma 2.1, we summarize our findings from the last subsection to obtain the following corollary on the asymptotic behavior of the workload barrier exceedance probabilities in the switching model defined in Sect. 2.1.

Corollary 4.5

(Asymptotics and bounds of the exceedance probabilities for light-tailed jobs) Assume the workload variables \(X_1,X_2\) are light-tailed such that (4.1) holds and assume there exist (unique) solutions \(\kappa _1,\kappa _2>0\) to (4.2). Then, the workload exceedance probabilities (2.3), (2.4), and (2.5) satisfy

$$\begin{aligned} \Upsilon _i(b_i u)&\le \hbox {e}^{-\kappa _i b_i u} \quad \text {for all }u>0, i=1,2,\quad \\ \text {and} \quad \Upsilon _\vee (u)&\le (\hbox {e}^{-\kappa _1 b_1 u} + \hbox {e}^{-\kappa _2 b_2 u})\wedge 1 \quad \text {for all }u>0. \end{aligned}$$

Further, with \(C_i, i=1,2,\) as in (4.3), it holds that

$$\begin{aligned} \Upsilon _i(b_i u)&\sim C_i \hbox {e}^{- \kappa _i b_i u}, \quad i=1,2, \nonumber \\ \Upsilon _\vee (u)&\sim C_1\cdot \hbox {e}^{-\kappa _1 b_1 u } + C_2\cdot \hbox {e}^{-\kappa _2 b_2 u}, \end{aligned}$$
(4.4)

while the probability that both workloads exceed their barrier satisfies

$$\begin{aligned} \Upsilon _\wedge (u)&= o\left( C_1 \cdot \hbox {e}^{-\kappa _1 b_1 u} + C_2 \cdot \hbox {e}^{-\kappa _2b_2 u}\right) . \end{aligned}$$

Remark 4.6

Note that the light-tail assumption (4.1) does not necessarily imply the existence of \(\kappa _1,\kappa _2>0\) solving (4.2). A sufficient condition for their existence is the following slightly stronger assumption: for \(j=1,2\), either

$$\begin{aligned} \varphi _{X_j}(x_j)<\infty \quad \text { for all }x_j<\infty , \end{aligned}$$

or there exists \(x_j^*<\infty \) such that

$$\begin{aligned} \varphi _{X_j}(x_j)<\infty \; \text { for all } x_j<x_j^*\quad \text {and} \quad \varphi _{X_j}(x_j)=\infty \; \text { for all } x_j\ge x_j^*. \end{aligned}$$


If the above condition fails, i.e., if for some \(j\in \{1,2\}\) there exists \(x_j^*\) such that \(\varphi _{X_j}(x_j)<\infty \) for all \(x_j\le x_j^*\) and \(\varphi _{X_j}(x_j)=\infty \) for all \(x_j> x_j^*\), then the existence of \(\kappa _1,\kappa _2\) depends on the chosen parameters of the model; see, for example, [3, Chapter IV.6a] for a more thorough discussion.

Remark 4.7

If \(\kappa _1 b_1 \ne \kappa _2b_2\), then the summand of lower order on the right-hand side of (4.4) can be omitted in the asymptotic equivalence. Thus, in contrast to the regularly varying case, the vector \(\mathbf{b }\) is crucial here for the exact asymptotic behavior and contributes more than just a constant.

On the other hand, we immediately see that, given two job distributions and hence given \(\kappa _1,\kappa _2>0\), we can choose \(b_1,b_2\) in order to minimize the joint exceedance probabilities. The optimal \(\mathbf{b }\) then solves

$$\begin{aligned} b_1\kappa _1 = b_2\kappa _2, \quad \text {i.e.} \quad b_1= \frac{\kappa _2}{\kappa _1+\kappa _2},\quad \text {and} \quad b_2= \frac{\kappa _1}{\kappa _1+\kappa _2}, \end{aligned}$$

which leads to

$$\begin{aligned} \Upsilon _\vee (u) \sim&(C_1+C_2) \hbox {e}^{-\frac{\kappa _1\kappa _2}{\kappa _1+\kappa _2}\cdot u}, \\ \text {while} \quad \Upsilon _\wedge (u) =&\textit{o}\left( \hbox {e}^{-\frac{\kappa _1\kappa _2}{\kappa _1+\kappa _2}\cdot u} \right) . \end{aligned}$$
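As a quick numerical illustration (ours): with \(\kappa _1\approx 0.054\) and \(\kappa _2\approx 0.178\) as in the Bernoulli example of Fig. 3 below, the optimal split is \(b_1\approx 0.767\), \(b_2\approx 0.233\), which improves the exponential decay rate of \(\Upsilon _\vee \) from \(\min \{\kappa _1b_1,\kappa _2b_2\}\approx 0.036\) (for the choice \(b_1=0.8\) used there) to \(\tfrac{\kappa _1\kappa _2}{\kappa _1+\kappa _2}\approx 0.041\).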

5 Examples and simulation study

In this section, we consider two special choices of the random switch for which we evaluate the above results and compare them with simulated data. The first part is dedicated to the special case of the Bernoulli switch, where the queueing processes become independent of each other. In the second part, we discuss the special case of a non-random switch, where every job is shared between the servers in predefined deterministic proportions. We finish in Sect. 5.3 with a short comparison that studies the influence of the chosen type of randomness on the exceedance probabilities.

5.1 The Bernoulli switch

The Bernoulli switch does not split any jobs, but assigns the arriving jobs randomly to one of the two servers. More precisely, we set

$$\begin{aligned} A_{11}=1-A_{21}\sim \text {Bernoulli}(p), \quad \text {and} \quad A_{12}=1-A_{22}\sim \text {Bernoulli}(q), \end{aligned}$$

independent of each other, with \(p,q\in [0,1]\). This yields independence of the components of the process \((\mathbf{R }(t))_{t\ge 0}\), which can now be represented as

$$\begin{aligned} R_1(t)&=\sum _{k=1}^{N_{1}^{(1)}(t)} X'_{1,k} + \sum _{\ell =1}^{N_{2}^{(1)}(t)} X'_{2,\ell }-tc_1 \quad \text {and} \quad R_2(t)=\sum _{k=1}^{N_{1}^{(2)}(t)} X''_{1,k} + \sum _{\ell =1}^{N_{2}^{(2)}(t)} X''_{2,\ell }-tc_2, \end{aligned}$$

where \(X'_{j,k}\) and \(X''_{j,k}\) are independent copies of \(X_{j,k}\), \(j=1,2\), \(k\in {\mathbb {N}}\), and the counting processes \((N_1^{(1)}(t))_{t\ge 0}\), \((N_1^{(2)}(t))_{t\ge 0}\), \((N_2^{(1)}(t))_{t\ge 0}\), and \((N_2^{(2)}(t))_{t\ge 0}\) are independent Poisson processes with rates \(\lambda _1 p\), \(\lambda _1 (1-p)\), \(\lambda _2 q\), and \(\lambda _2 (1-q)\), respectively, consistent with the Lundberg equations (5.4) below. In particular, from (2.5) and (2.13) we obtain that in the Bernoulli switch

$$\begin{aligned} \Upsilon _{\wedge }(u)=\Upsilon _1(b_1 u) \Upsilon _2(b_2 u) = \Upsilon _1(b_1 u) + \Upsilon _2(b_2 u)- \Upsilon _\vee (u), \end{aligned}$$
(5.1)

and hence \(\Upsilon _{\wedge }(u)\) and \(\Upsilon _{\vee }(u)\) can be expressed in terms of \(\Upsilon _1(b_1 u)\) and \(\Upsilon _2(b_2 u)\). Thus, although the Bernoulli switch is not covered by Theorem 3.1 in the subexponential setting, as (3.2) is not satisfied, via (5.1) one can still calculate the asymptotics using Lemma 3.2.

Fig. 2 Simulated exceedance probabilities in the Bernoulli switch in comparison with the obtained asymptotics in natural scaling (left) and as log–log plot (right). Here, job sizes are Pareto distributed with \({\overline{F}}_1(x)=x^{-3/2}\), \(x\ge 1\), and \({\overline{F}}_2(x)=4x^{-2}\), \(x\ge 2\). Further, \(\lambda _1=\lambda _2=1\), \(c_1=5\), \(c_2=8\), and \(b_1=0.8=1-b_2\). The Bernoulli switch is characterized by \(p=0.4\) and \(q=0.7\). For these parameters, from (5.2) we derive \(\Upsilon _1(u_1)\sim 0.8\cdot u_1^{-0.5}\) and \(\Upsilon _2(u_2) \sim 0.24 \cdot u_2^{-0.5}\), such that \(\Upsilon _\wedge (u) \sim 0.48 \cdot u^{-1}\) and \(\Upsilon _\vee (u)\sim 1.431 \cdot u^{-0.5}\) via (5.1). Note that a direct evaluation of the asymptotics of \(\Upsilon _\vee \) as given in Corollary 3.9 yields the same result

Fig. 3 Simulated exceedance probabilities in the Bernoulli switch in comparison with the obtained asymptotics in natural scaling (left) and as log–linear plot (right). Here, jobs are exponentially distributed with \({\overline{F}}_1(x)=\hbox {e}^{-x/3}\), \(x\ge 0\), and \({\overline{F}}_2(x)=\hbox {e}^{-x/4}\), \(x\ge 0\). Further, \(\lambda _1=\lambda _2=1\), \(c_1=5\), \(c_2=8\), and \(b_1=0.8=1-b_2\). The Bernoulli switch is characterized by \(p=0.4\) and \(q=0.7\). For these parameters, from (5.4) we derive \(\kappa _1 \approx 0.054\) and \(\kappa _2 \approx 0.178\), and (5.3) yields \(\Upsilon _1(u_1)\sim 0.796 \cdot \exp (-0.054 u_1)\) and \(\Upsilon _2(u_2) \sim 0.343 \cdot \exp (-0.178 u_2)\), from which \(\Upsilon _\wedge (u)\sim 0.273 \cdot \exp (-0.079u)\) and \(\Upsilon _\vee (u)\sim 0.796 \cdot \exp (-0.043 u) + 0.343 \cdot \exp (-0.036 u)\) via (5.1). Note that in the latter case we keep both summands, since the exponents are close together

Indeed, we obtain by direct application of Corollary 3.4 that if \(F_I^1, F_I^2\in {\mathcal {S}}\) (which holds in particular if \(X_1\in {\text {RV}}(\alpha _1)\), \(X_2\in {\text {RV}}(\alpha _2)\), \(\alpha _1,\alpha _2>1\)),

$$\begin{aligned} \begin{aligned} \Upsilon _1(b_1u)&\sim \frac{\lambda _1 p \int _{b_1 u}^\infty {\overline{F}}_1(y) \mathop {}\!\mathrm {d}y + \lambda _2 q \int _{b_1 u}^\infty {\overline{F}}_2(y) \mathop {}\!\mathrm {d}y}{c_1-\lambda _1 p {\mathbb {E}}[X_1] - \lambda _2 q {\mathbb {E}}[X_2]}, \\ \text {and} \quad \Upsilon _2(b_2 u)&\sim \frac{\lambda _1 (1-p) \int _{b_2 u}^\infty {\overline{F}}_1(y) \mathop {}\!\mathrm {d}y + \lambda _2 (1-q) \int _{b_2 u}^\infty {\overline{F}}_2(y) \mathop {}\!\mathrm {d}y}{c_2-\lambda _1 (1-p) {\mathbb {E}}[X_1] - \lambda _2 (1-q) {\mathbb {E}}[X_2]}. \end{aligned} \end{aligned}$$
(5.2)

In the light-tailed case, an application of Corollary 4.5 yields

$$\begin{aligned} \begin{aligned} \Upsilon _1(b_1u)&\sim \frac{c_1 - \lambda _1 p {\mathbb {E}}[X_1] - \lambda _2 q {\mathbb {E}}[X_2]}{\lambda _1 p\varphi _{ X_1}'(\kappa _1) + \lambda _2q\varphi _{X_2}'(\kappa _1)-c_1}\hbox {e}^{-\kappa _1 b_1 u}\\ \text {and} \quad \Upsilon _2(b_2u)&\sim \frac{c_2 - \lambda _1 (1-p) {\mathbb {E}}[X_1] - \lambda _2 (1-q) {\mathbb {E}}[X_2]}{\lambda _1 (1-p)\varphi _{ X_1}'(\kappa _2)+ \lambda _2(1-q)\varphi _{X_2}'(\kappa _2)-c_2} \hbox {e}^{-\kappa _2 b_2 u}, \end{aligned} \end{aligned}$$
(5.3)

as long as there exist \(\kappa _1,\kappa _2>0\) such that (4.2) holds, which in the Bernoulli switch simplifies to

$$\begin{aligned} \begin{aligned} c_1 \kappa _1&= \lambda _1 p (\varphi _{X_1}(\kappa _1)-1) + \lambda _2 q (\varphi _{X_2}(\kappa _1)-1)\\ \text {and} \quad c_2 \kappa _2&= \lambda _1 (1-p) (\varphi _{X_1}(\kappa _2)-1) + \lambda _2 (1-q) (\varphi _{X_2}(\kappa _2)-1). \end{aligned} \end{aligned}$$
(5.4)

The asymptotic behavior of \(\Upsilon _\vee \) and \(\Upsilon _{\wedge }\) can now be described via (5.1).
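The Lundberg exponents in (5.4) are easily obtained by standard root-finding; the following minimal sketch (ours) reproduces the values \(\kappa _1\approx 0.054\) and \(\kappa _2\approx 0.178\) used in Fig. 3, searching below the singularity of the moment generating functions:

```python
# Root-finding sketch (ours) for (5.4) with the exponential job sizes of
# Fig. 3: means 3 and 4, lam1 = lam2 = 1, c1 = 5, c2 = 8, p = 0.4, q = 0.7.
from scipy.optimize import brentq

lam1 = lam2 = 1.0
c1, c2, p, q = 5.0, 8.0, 0.4, 0.7
phi1 = lambda s: 1.0 / (1.0 - 3.0 * s)   # mgf of Exp with mean 3, s < 1/3
phi2 = lambda s: 1.0 / (1.0 - 4.0 * s)   # mgf of Exp with mean 4, s < 1/4

eq1 = lambda k: lam1 * p * (phi1(k) - 1) + lam2 * q * (phi2(k) - 1) - c1 * k
eq2 = lambda k: (lam1 * (1 - p) * (phi1(k) - 1)
                 + lam2 * (1 - q) * (phi2(k) - 1) - c2 * k)

kappa1 = brentq(eq1, 1e-9, 0.25 - 1e-9)  # bracket below mgf singularity 1/4
kappa2 = brentq(eq2, 1e-9, 0.25 - 1e-9)
print(kappa1, kappa2)                    # ~0.054 and ~0.178, as in Fig. 3
```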

In Figs. 2 and 3, we compare the asymptotics in the Bernoulli switch obtained in this way with data that has been simulated using standard Monte Carlo techniques. As one can see, in all cases the obtained asymptotics fit the data very well for sufficiently large u.

5.2 The deterministic switch

The deterministic switch is characterized by setting

$$\begin{aligned} A_{11}&=d_1=1-A_{21},\\ \text {and} \quad A_{12}&= d_2 = 1-A_{22}, \end{aligned}$$

for some predefined constants \(d_1,d_2\in [0,1]\).

Note that for \(\lambda _2=0\), meaning that we have only one source of claims, the corresponding dual risk model coincides with the degenerate model considered in [4, 5, 24]. Allowing two sources of claims, but setting \(d_1\in (0,1)\), \(d_2=1\) reduces our model to the setting treated in [6].

Clearly, for any choice of \(d_1,d_2\) in the deterministic switch one can easily evaluate the asymptotics of the exceedance probabilities as given in Corollaries 3.4, 3.9, and 4.5, since all appearing expectations over \(\mathbf{A }\) reduce to deterministic expressions.
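For instance, in the setting of Fig. 5 below we have \(\alpha _1=1.5<\alpha _2=2\) and hence \(\zeta =\infty \), so that, as in Example 3.11, only the \(\alpha _1\)-term of (3.15) survives. The following quadrature sketch (ours) then reproduces the constant \(0.756\) of Fig. 5:

```python
# Quadrature sketch (ours) for the constant C_vee of (3.15) in the
# deterministic switch with zeta = infinity (only the alpha_1-term remains).
import numpy as np
from scipy.integrate import quad

lam1 = lam2 = 1.0; lam = lam1 + lam2
b1, b2, d1 = 0.8, 0.2, 0.4
c1s, c2s, alpha1 = 0.5, 2.5, 1.5        # c_1^*, c_2^* from (2.7)

integrand = lambda v: min((v * c1s + b1) / d1,
                          (v * c2s + b2) / (1.0 - d1)) ** -alpha1
I, _ = quad(integrand, 0.0, np.inf)
print(lam1 / lam * I)                   # ~0.756: the prefactor of u^{-1/2}
```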

Fig. 4 Simulated exceedance probabilities in the deterministic switch in comparison with the obtained asymptotics as log–lin plot (left) and log(−log)–log plot (right), meaning the y-axis is scaled as \(\log (-\log (y))\), while the x-axis is scaled as \(\log (x)\). This particular scaling of the axes is chosen in order to obtain straight lines with slope \(\gamma ^{-1}\) for functions of the type \(\hbox {e}^{-\root \gamma \of {x}}\). Job sizes are Weibull distributed with \({\overline{F}}_1(x)=\exp (-(2x)^{\frac{1}{3}})\), \(x\ge 0\), and \({\overline{F}}_2(x)=\exp (-(x/2)^{\frac{1}{2}})\), \(x\ge 0\). Further, \(\lambda _1=\lambda _2=1\), \(c_1=5\), \(c_2=8\), and \(b_1=0.8=1-b_2\). The deterministic switch is characterized by \(d_1=0.4\) and \(d_2=0.7\). For these parameters, from Corollary 3.4, we derive \(\Upsilon _2(b_2u) \sim \Upsilon _\vee (u) \sim (0.1374\cdot u^{\frac{2}{3}} + 0.31449\cdot u^{\frac{1}{3}} +0.36)\cdot \exp (-0.87358\cdot u^{\frac{1}{3}})\), and \(\Upsilon _1(b_1u) \sim (1.51191 \cdot u^{\frac{2}{3}} + 1.90488\cdot u^{\frac{1}{3}} + 1.2)\cdot \exp ( - 1.58740 \cdot u^{\frac{1}{3}})\), while for \(\Upsilon _{\wedge }\) no asymptotics are given as (3.9) fails

Fig. 5 Simulated exceedance probabilities in the deterministic switch in comparison with the obtained asymptotics in natural scaling (left) and as log–log plot (right). Here—as in Fig. 2—job sizes are Pareto distributed with \({\overline{F}}_1(x)=x^{-3/2}\), \(x\ge 1\), and \({\overline{F}}_2(x)=4x^{-2}\), \(x\ge 2\). Further, \(\lambda _1=\lambda _2=1\), \(c_1=5\), \(c_2=8\), and \(b_1=0.8=1-b_2\). The deterministic switch is characterized by \(d_1=0.4\) and \(d_2=0.7\). For these parameters, from Corollary 3.9, we derive \(\Upsilon _1(u_1) \sim 0.506 \cdot u_1^{-0.5}\) and \(\Upsilon _2(u_2) \sim 0.186 \cdot u_2^{-0.5} \), while \(\Upsilon _\vee (u) \sim 0.756\cdot u^{-0.5} \) and \(\Upsilon _{\wedge } (u) \sim 0.226\cdot u^{-0.5}\)

Fig. 6 Simulated exceedance probabilities in the deterministic switch in comparison with the obtained asymptotics in natural scaling (left) and as log–linear plot (right). Here, jobs are—as in Fig. 3—exponentially distributed with \({\overline{F}}_1(x)=\hbox {e}^{-x/3}\), \(x\ge 0\), and \({\overline{F}}_2(x)=\hbox {e}^{-x/4}\), \(x\ge 0\). Further, \(\lambda _1=\lambda _2=1\), \(c_1=5\), \(c_2=8\), and \(b_1=0.8=1-b_2\). The deterministic switch is characterized by \(d_1=0.4\) and \(d_2=0.7\). For these parameters, from (4.2), we obtain \(\kappa _1\approx 0.084\) and \(\kappa _2 \approx 0.383\), which yield \(\Upsilon _1(u_1)\sim 0.78 \cdot \exp (-0.084 u_1)\) and \(\Upsilon _2(u_2) \sim 0.341 \cdot \exp (-0.383 u_2)\), while \(\Upsilon _\vee (u)\sim 0.78 \cdot \exp (-0.067 u) + 0.341 \cdot \exp (-0.077 u)\) and \(\Upsilon _\wedge (u)= o(\exp (-0.067 u))\) by Corollary 3.9

In Figs. 4, 5, and 6, we compare the asymptotics and bounds in the deterministic switch obtained in this way with data that has been simulated using standard Monte Carlo techniques. Again, simulations and theoretical asymptotics fit well in all cases. Note that, in contrast to the cases with lighter tails, in the purely subexponential case shown in Fig. 4 we observe that \(\Upsilon _1\) is close to \(\Upsilon _\vee \) for small u, but close to \(\Upsilon _{\wedge }\) for large u. This can be interpreted as follows: while, for small u, the net working speed \(c_i^*\) determines the exceedance probabilities, for large u this becomes less relevant and the workload exceedance is mainly driven by the heaviness of the tails.

Example 5.1

Consider a deterministic switch with \(d_1\in (0,1)\) and \(\lambda _2=0\), i.e., there is only one source of jobs and the jobs are distributed deterministically to the two servers with proportions \(d_1\) and \(1-d_1\), and assume that \(\tfrac{b_1}{d_1} < \tfrac{b_2}{1-d_1}\), so that the resulting dual risk model, properly rescaled, coincides with the degenerate model studied in [4, 5, 24]. Then, applying Theorem 3.1 in this setting reproduces the tail behavior of the ruin probability of at least one component stated in [24, Cor. 2.2]. Interestingly, the ruin probability of both insurance companies that one derives in this case from Proposition 3.3 also coincides with the asymptotics provided in [24, Eq. (2.9)], although the latter corresponds to simultaneous ruin. This suggests that, in this special setting, the ruin probability of both components and the simultaneous ruin probability of both components are asymptotically equivalent.

5.3 A comparison of different switches

In this section, we compare the two special cases above, the Bernoulli switch and the deterministic switch, with a non-trivial random switch, which we choose to be a Beta switch characterized by setting

$$\begin{aligned} A_{11}&=1-A_{21} \sim {\text {Beta}}(\beta _1, \gamma _1 ),\\ \text {and} \quad A_{12}&= 1-A_{22}\sim {\text {Beta}}(\beta _2,\gamma _2), \end{aligned}$$

for some constants \(\beta _1,\beta _2,\gamma _1,\gamma _2>0\), where \(\text {Beta}(\beta ,\gamma )\) is the Beta distribution with density \(\frac{\Gamma (\beta +\gamma )}{\Gamma (\beta )\Gamma (\gamma )} x^{\beta -1}(1-x)^{\gamma -1}\), \(x\in [0,1]\).

To keep all examples comparable, we fix \(\lambda _1\), \(\lambda _2\), \({\mathbb {E}}[X_1]\), \({\mathbb {E}}[X_2]\), \({\mathbb {E}}[A_{11}]\) and \({\mathbb {E}}[A_{12}]\) such that the scenarios only differ in the behavior of the switch and the job sizes. Figure 7 shows the approximate exceedance probabilities obtained by Monte Carlo simulation for the Bernoulli switch, the deterministic switch and two different Beta switches.
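As a small illustration of this moment matching (our computation; the Beta moment formulas are standard), the mixing variable \(A_{11}\) has the same mean under all four switches but strictly decreasing variance from the Bernoulli switch over the two Beta switches to the deterministic switch:

```python
# First two moments of A_11 for the four switches compared in Fig. 7:
# equal means, decreasing dispersion.
def beta_mv(b, g):
    mean = b / (b + g)
    var = b * g / ((b + g) ** 2 * (b + g + 1))
    return mean, var

switches = {
    "Bernoulli(p=0.4)":     (0.4, 0.4 * 0.6),   # mean p, variance p(1-p)
    "Beta(0.4, 0.6)":       beta_mv(0.4, 0.6),
    "Beta(1.5, 2.25)":      beta_mv(1.5, 2.25),
    "deterministic d1=0.4": (0.4, 0.0),
}
for name, (mean, var) in switches.items():
    print(f"A_11 ~ {name:21s}: mean = {mean:.2f}, var = {var:.4f}")
# variances: 0.24 > 0.12 > 0.0505 > 0, at common mean 0.4
```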

As we can see, in the regularly varying case the probability \(\Upsilon _\vee \) that at least one of the workloads exceeds its barrier tends to zero with the same index of regular variation for all choices of the random switch. In the case of the probability \(\Upsilon _\wedge \) that both components exceed their barriers, the Bernoulli switch yields a faster decay due to the independence of the two workload processes in this model.

Further, the figure confirms the intuitive behavior: the more correlated the coordinates of the workload process are, the closer together \(\Upsilon _\vee \) and \(\Upsilon _\wedge \) lie. This leads to a trade-off between the two probabilities: changing the switch to reduce one probability raises the other, and the Beta switches may serve here as a compromise to control both probabilities.

In the light-tailed case, this trade-off between \(\Upsilon _\vee \) and \(\Upsilon _\wedge \) cannot be observed. Quite the contrary: the more correlated the coordinates of the workload process are, the lower the exceedance probabilities tend to be. Hence, in this case, the Bernoulli switch yields the highest exceedance probabilities, while the deterministic switch performs best.

Fig. 7 Simulated exceedance probabilities for different switches with heavy-tailed (left, log–log plot) and light-tailed (right, log–linear plot) job sizes. Throughout, \(\lambda _1=\lambda _2=1\), \(c_1=5\), \(c_2=8\), and \(b_1=0.8=1-b_2\). On the left—as in Figs. 2 and 5—job sizes are Pareto distributed with \({\overline{F}}_1(x)=x^{-3/2}\), \(x\ge 1\), and \({\overline{F}}_2(x)=4x^{-2}\), \(x\ge 2\). On the right, jobs are—as in Figs. 3 and 6—exponentially distributed with \({\overline{F}}_1(x)=\hbox {e}^{-x/3}\), \(x\ge 0\), and \({\overline{F}}_2(x)=\hbox {e}^{-x/4}\), \(x\ge 0\). The Bernoulli switch is characterized by \(p=0.4\) and \(q=0.7\), the deterministic switch is characterized by \(d_1=0.4\) and \(d_2=0.7\), the Beta switch 1 is characterized by \(A_{11}=1-A_{21}\sim {\text {Beta}}(0.4,0.6)\) and \(A_{12}=1-A_{22} \sim {\text {Beta}}(0.7,0.3)\), and the Beta switch 2 is characterized by \(A_{11}=1-A_{21}\sim {\text {Beta}}(1.5,2.25)\) and \(A_{12}=1-A_{22} \sim {\text {Beta}}(3,9/7)\)

Thus, for keeping \(\Upsilon _\vee \) small, the simple deterministic switch in general yields good results. If, on the contrary, one is interested in keeping \(\Upsilon _\wedge \) small, the tail behavior of the jobs is crucial for the choice of the optimal switch. Here, again, Beta switches or other non-trivial random switches may serve as a compromise in situations where the tail behavior of the jobs is unknown.

6 Proofs

6.1 Proofs for Sect. 3.1

To prove the asymptotic result for the ruin probability \(\Psi _\vee \) as given in Theorem 3.1, we need some preliminary definitions and results.

Let

$$\begin{aligned} {\mathcal {R}} := \{M \subset {\mathbb {R}}^2 : ~ M\text { open, increasing, } M^c \text { convex, } \mathbf{0 }\notin {\overline{M}}\} \end{aligned}$$

be a family of open sets, where increasing means that for each \(\mathbf{x }\in M\) and \(\mathbf{d }\ge \mathbf{0 }\) we have \(\mathbf{x }+\mathbf{d }\in M\). Let \(\Pi (\mathop {}\!\mathrm {d}x)\) be a probability measure on \({\mathbb {R}}^2_{\ge 0}\). For \(M \in {\mathcal {R}}\) we define a cdf on \([0,\infty )\) by setting

$$\begin{aligned} F_M(t) := 1-\Pi (tM), ~ t\ge 0. \end{aligned}$$

Then, following [36, Def. 4.6], if \(F_M \in {\mathcal {S}}\) we say that \(\Pi \in {\mathcal {S}}_M\). Furthermore, \(\Pi \) is multivariate subexponential if \(\Pi \in {\mathcal {S}}_{\mathcal {R}} := \bigcap _{M\in {\mathcal {R}}} {\mathcal {S}}_M\).

Throughout this section we consider the specific sets

$$\begin{aligned} L:={\mathbb {R}}^2\backslash {\mathbb {R}}^2_{\ge \mathbf{0 }} \qquad \text {and} \qquad M := \mathbf{b }- L. \end{aligned}$$
(6.1)

Then clearly \(M\in {\mathcal {R}}\) and by [36, Rem. 4.1] also \(uM\in {\mathcal {R}}\) for all \(u>0\).

Moreover, we specify \(\Pi \) to be the probability measure of the claims \(\mathbf{A }\mathbf{B }\mathbf{X }\) in the dual risk model described in Sect. 2.2. Then, we can prove some basic relationships in the upcoming lemma.

Lemma 6.1

Consider the probability measure \(\Pi \) as just defined, the set M as in (6.1), and the constant \(\theta \) given in (3.2). Then,

$$\begin{aligned} \Pi (u M + v \mathbf{c }^*)&= g(u,v), \end{aligned}$$

with \(g(u,v)\) defined in (3.1), and

$$\begin{aligned} \Pi ({\mathbb {R}}_{\ge 0}^2 + v \mathbf{c} ^*)&= \frac{\lambda _1}{\lambda }\cdot \mathbb {E}\left[ \overline{F}_1\left( v\cdot \max \left\{ \tfrac{c_1^*}{A_{11}}, \tfrac{c_2^*}{A_{21}}\right\} \right) \right] \\&\qquad + \frac{\lambda _2}{\lambda }\cdot \mathbb {E}\left[ \overline{F}_2\left( v\cdot \max \left\{ \tfrac{c_1^*}{A_{12}}, \tfrac{c_2^*}{A_{22}}\right\} \right) \right] , \end{aligned}$$

for all \(u>0, ~ v\ge 0\) and \(\mathbf{c }^*\in {\mathbb {R}}^2_{>\mathbf{0 }}\). Moreover,

$$\begin{aligned} \theta = \int _0^\infty \Pi \left( {\mathbb {R}}_{\ge 0}^2 + v \mathbf{c }^*\right) \mathop {}\!\mathrm {d}v <\infty . \end{aligned}$$

Proof

By the definitions of M, L and \(\Pi \), and conditioning on \(\mathbf{B }\),

$$\begin{aligned} \Pi (u M+v \mathbf{c }^*)&= {\mathbb {P}}\left( \mathbf{A }\mathbf{B }\mathbf{X } \in u \mathbf{b } +v \mathbf{c }^* - u L \right) = {\mathbb {P}}\left( \mathbf{A }\mathbf{B }\mathbf{X } \in u \mathbf{b } +v \mathbf{c }^* + {\mathbb {R}}^2\backslash {\mathbb {R}}^2_{\le \mathbf{0 }}\right) \\&= \frac{\lambda _1}{\lambda } {\mathbb {P}}\left( \begin{pmatrix}A_{11} \\ A_{21} \end{pmatrix}X_1 \in u \mathbf{b } +v \mathbf{c }^* + {\mathbb {R}}^2\backslash {\mathbb {R}}^2_{\le \mathbf{0 }}\right) \\&\qquad + \frac{\lambda _2}{\lambda } {\mathbb {P}}\left( \begin{pmatrix}A_{12} \\ A_{22} \end{pmatrix}X_2 \in u \mathbf{b } +v \mathbf{c }^* + {\mathbb {R}}^2\backslash {\mathbb {R}}^2_{\le \mathbf{0 }}\right) , \end{aligned}$$

where, for \(j=1,2\),

$$\begin{aligned} \mathbb {P}\bigg (&\begin{pmatrix}A_{1j} \\ A_{2j} \end{pmatrix}X_j \in u \mathbf{b} +v \mathbf{c} ^* + \mathbb {R}^2\backslash \mathbb {R}^2_{\le \mathbf{0} }\bigg ) \\&= \mathbb {P}\left( ( A_{1j}X_j> ub_1 + vc_1^* ) \text { or } (A_{2j}X_j>ub_2 + vc_2^*)\right) \\&= \mathbb {P}\left( X_j > \min \left\{ \tfrac{ub_1 + vc_1^*}{A_{1j}}, \tfrac{ub_2 + vc_2^*}{A_{2j}}\right\} \right) \\&= \mathbb {E}\left[ \overline{F}_j\left( \min \left\{ \tfrac{ub_1 + vc_1^*}{A_{1j}}, \tfrac{ub_2 + vc_2^*}{A_{2j}}\right\} \right) \right] , \end{aligned}$$

which proves the first equation. The second equation follows by an analogous computation.

Lastly, using the obtained expression for \(\Pi ({\mathbb {R}}_{\ge 0}^2 + v \mathbf{c }^*)\), we may compute, by Tonelli’s theorem, that

$$\begin{aligned} \int _0^\infty \Pi ({\mathbb {R}}_{\ge 0}^2 + v\cdot \mathbf{c} ^*) \mathop {}\!\mathrm {d}v&= \frac{\lambda _1}{\lambda } \mathbb {E}\left[ \int _0^\infty \overline{F}_1\left( v\cdot \max \left\{ \tfrac{c_1^*}{A_{11}}, \tfrac{c_2^*}{A_{21}}\right\} \right) \mathop {}\!\mathrm {d}v \right] \\ {}&\qquad + \frac{\lambda _2}{\lambda } \mathbb {E}\left[ \int _0^\infty \overline{F}_2\left( v\cdot \max \left\{ \tfrac{c_1^*}{A_{12}}, \tfrac{c_2^*}{A_{22}}\right\} \right) \mathop {}\!\mathrm {d}v\right] , \end{aligned}$$

where, for \(j=1,2\),

$$\begin{aligned} {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_j\left( v\cdot \max \left\{ \tfrac{c_1^*}{A_{1j}}, \tfrac{c_2^*}{A_{2j}}\right\} \right) \mathop {}\!\mathrm {d}v \right]&= {\mathbb {E}}\left[ \left( \max \left\{ \tfrac{c_1^*}{A_{1j}}, \tfrac{c_2^*}{A_{2j}}\right\} \right) ^{-1} \int _0^\infty {\overline{F}}_j\left( y \right) \mathop {}\!\mathrm {d}y \right] \\&= {\mathbb {E}}\left[ \min \left\{ \tfrac{A_{1j}}{c_1^*} , \tfrac{A_{2j}}{c_2^*} \right\} \right] {\mathbb {E}}[X_j]. \end{aligned}$$

This proves \(\int _0^\infty \Pi ({\mathbb {R}}_{\ge 0}^2 + v \mathbf{c }^*) \mathop {}\!\mathrm {d}v = \theta \). Moreover, we note that the finiteness of the means of the claim sizes \(X_j\) implies finiteness of \(\int _0^\infty \Pi \left( {\mathbb {R}}_{\ge 0}^2 + v \mathbf{c }^*\right) \mathop {}\!\mathrm {d}v\). \(\square \)
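As a side remark, the last identity lends itself to a quick simulation check: since \(\mathbf{A }\mathbf{B }\mathbf{X }\in v\mathbf{c }^*+{\mathbb {R}}_{\ge 0}^2\) if and only if \(W:=X_j\min \{A_{1j}/c_1^*,\,A_{2j}/c_2^*\}\ge v\) (with j the stream selected by \(\mathbf{B }\)), the integral equals \({\mathbb {E}}[W]\). The following sketch (with arbitrarily chosen illustrative parameters) compares a Monte Carlo estimate of \({\mathbb {E}}[W]\) with the closed form of \(\theta \):

```python
# Monte Carlo sanity check of the identity  integral = theta  in Lemma 6.1.
# All parameters here are illustrative assumptions (Beta switch, Exp jobs).
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 1.0, 1.0
lam = lam1 + lam2
c1s, c2s = 0.5, 2.5                    # some fixed net working speeds c*
n = 1_000_000

j = rng.random(n) < lam1 / lam         # stream indicator (the matrix B)
a11 = rng.beta(0.4, 0.6, size=n)       # A_11, with A_21 = 1 - A_11
a12 = rng.beta(0.7, 0.3, size=n)       # A_12, with A_22 = 1 - A_12
x = np.where(j, rng.exponential(3.0, n), rng.exponential(4.0, n))
a1 = np.where(j, a11, a12)             # first-row entry of the chosen column

w = x * np.minimum(a1 / c1s, (1.0 - a1) / c2s)
theta_mc = w.mean()                    # Monte Carlo value of the integral

theta_cf = (lam1 / lam) * np.minimum(a11 / c1s, (1 - a11) / c2s).mean() * 3.0 \
         + (lam2 / lam) * np.minimum(a12 / c1s, (1 - a12) / c2s).mean() * 4.0
print(theta_mc, theta_cf)              # the two values agree up to MC error
```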

Proof of Theorem 3.1

Recall the definition of the set L in (6.1) and note that obviously L satisfies [36, Assumption 5.1]. Furthermore note that \(u\mathbf{b }-\mathbf{R }(t) \in L\) if and only if \(\max _{i=1,2}( R_i(t) -ub_i )>0\), which immediately implies that

$$\begin{aligned} \Psi _\vee (u)&= {\mathbb {P}}(u\mathbf{b }-\mathbf{R }(t) \in L \text { for some }t\ge 0) \\&= {\mathbb {P}}(\mathbf{R }(t) \in uM \text { for some }t\ge 0), \quad u>0. \end{aligned}$$

Thus, by [36, Thm. 5.2] we obtain

$$\begin{aligned} \Psi _\vee (u) \sim \int _0^\infty \Pi (u M + v\mathbf{c} ^*)\mathop {}\!\mathrm {d}v, \end{aligned}$$
(6.2)

as soon as we can guarantee that the probability measure on \({\mathbb {R}}^2\) defined by

$$\begin{aligned} \Pi ^I(D) = \theta ^{-1} \int _0^\infty \Pi (D+v\mathbf{c }^*) \mathop {}\!\mathrm {d}v, \quad \text {for any Borel set }D\subset {\mathbb {R}}^2_{\ge 0}, \end{aligned}$$

and \(\Pi ^I({\mathbb {R}}^2{\setminus } {\mathbb {R}}_{\ge 0}^2)=0\), is in \({\mathcal {S}}_M\). This, however, is by definition equivalent to the assumption that the cdf \(F_M(u)=1-\Pi ^I(u M)\), \(u\ge 0\), is in \({\mathcal {S}}\). Since, by Lemma 6.1, \(F_M(u)=F_{\text {subexp}}(u)\) with \(F_{\text {subexp}}\) as given in (3.3), this is assumed in the theorem. Lastly, observe that the right-hand side of (6.2) equals \(\int _0^\infty g(u,v) \mathop {}\!\mathrm {d}v\) as shown in Lemma 6.1. \(\square \)

Remark 6.2

Naively, one could guess that subexponentiality of the claims \(X_1\) and \(X_2\) should be enough to obtain subexponentiality of at least \(\Pi \). However, as noted in [36, Remark 4.9], this is not true in general: random mixing of two subexponential distributions (as done by our matrix \(\mathbf{B }\)) leads to a subexponential distribution if and only if the sum of the mixed distributions is subexponential, and this, again, fails in general.

Proof of Lemma 3.2

Fix \(i\in \{1,2\}\) and assume that \({\mathbb {P}}(A_{i1}+A_{i2}=0)< 1.\) Otherwise, \(R_i(t)\) is monotonically decreasing, \(\Psi _i(u)=0\), and the statement is proven. Note that by definition

$$\begin{aligned} \Psi _i(u)&= {\mathbb {P}}\Bigg (\sum _{k=1}^{N(t)} \big ((B_{11})_k (A_{i1})_k X_{1,k} + (B_{22})_k (A_{i2})_k X_{2,k}\big ) - t c_i>u \text { for some }t>0\Bigg )\\&=: {\mathbb {P}}\Bigg (\sum _{k=1}^{N(t)} Y_{i,k} - t c_i>u \text { for some }t>0\Bigg ), \quad i=1,2, \end{aligned}$$

where the random variables \(\{Y_{i,k}, k\in {\mathbb {N}}\}\) are i.i.d. copies of two generic random variables \(Y_i\), \(i=1,2\). The corresponding integrated tail function is defined as

$$\begin{aligned} F^{Y_i}_I(x):= {\mathbb {E}}[Y_{i}]^{-1} \int _0^x {\mathbb {P}}(Y_{i}>y) \mathop {}\!\mathrm {d}y, \quad x\ge 0. \end{aligned}$$

From [3, Thm. X.2.1] we obtain that, if \(F^{Y_i}_I \in {\mathcal {S}}\),

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Psi _i(u)}{\overline{F^{Y_i}}_I(u)} = \frac{\lambda {\mathbb {E}}[Y_{i}] }{c_i-\lambda {\mathbb {E}}[Y_{i}]}, \end{aligned}$$
(6.3)

where

$$\begin{aligned} {\mathbb {E}}[Y_i] = \frac{\lambda _1}{\lambda } {\mathbb {E}}[A_{i1}]{\mathbb {E}}[X_1] + \frac{\lambda _2}{\lambda } {\mathbb {E}}[A_{i2}]{\mathbb {E}}[X_2], \end{aligned}$$
(6.4)

so that

$$\begin{aligned} \frac{\lambda {\mathbb {E}}[Y_{i}] }{c_i-\lambda {\mathbb {E}}[Y_{i}]}= & {} \frac{\lambda _1 {\mathbb {E}}[A_{i1}]{\mathbb {E}}[X_1] + \lambda _2 {\mathbb {E}}[A_{i2}]{\mathbb {E}}[X_2]}{c_i - \lambda _1 {\mathbb {E}}[A_{i1}]{\mathbb {E}}[X_1] - \lambda _2 {\mathbb {E}}[A_{i2}]{\mathbb {E}}[X_2]} \\= & {} \frac{\lambda _1 {\mathbb {E}}[A_{i1}]{\mathbb {E}}[X_1] + \lambda _2 {\mathbb {E}}[A_{i2}]{\mathbb {E}}[X_2]}{\lambda c_i^*}. \end{aligned}$$

Further

$$\begin{aligned} \int _0^x {\mathbb {P}}(Y_{i}>y) \mathop {}\!\mathrm {d}y&= \frac{\lambda _1}{\lambda } \int _0^x {\mathbb {P}}(A_{i1}X_1>y) \mathop {}\!\mathrm {d}y + \frac{\lambda _2}{\lambda } \int _0^x {\mathbb {P}}(A_{i2}X_2>y) \mathop {}\!\mathrm {d}y, \end{aligned}$$

and since, by Tonelli’s theorem for all \(i,j\in \{1,2\}\),

$$\begin{aligned} \int _0^x {\mathbb {P}}(A_{ij}X_j>y) \mathop {}\!\mathrm {d}y&= \int _0^x {\mathbb {E}}\left[ {\overline{F}}_j(\tfrac{y}{A_{ij}})\right] \mathop {}\!\mathrm {d}y = {\mathbb {E}}\left[ \int _0^x {\overline{F}}_j(\tfrac{y}{A_{ij}}) \mathop {}\!\mathrm {d}y\right] , \end{aligned}$$

this proves \(F_I^{Y_i}= F_I^{i}\), which lies in \({\mathcal {S}}\) by assumption. Inserting everything into (6.3), we obtain

$$\begin{aligned} \Psi _i(u) \sim \frac{\lambda _1 {\mathbb {E}}[A_{i1}]{\mathbb {E}}[X_1] + \lambda _2 {\mathbb {E}}[A_{i2}]{\mathbb {E}}[X_2] }{c_i- \lambda _1 {\mathbb {E}}[A_{i1}]{\mathbb {E}}[X_1] - \lambda _2 {\mathbb {E}}[A_{i2}]{\mathbb {E}}[X_2]} \overline{F^{i}}_I(u), \end{aligned}$$

which immediately yields the result by (2.7) via substitution with \(v = \tfrac{y-b_iu}{c_i^*}\). \(\square \)
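As a cross-check (ours), the constants reported in Fig. 5 can be recovered from this asymptotic formula: in the deterministic switch \(\mathbf{A }\) is constant, so the expectations drop out and the Pareto integrals are available in closed form; we assume here, as suggested by the duality of Sect. 2.2, that \(\Upsilon _i(b_iu)\) and \(\Psi _i(b_iu)\) share these asymptotics.

```python
# Sketch: leading u^{-1/2} coefficients in the setting of Fig. 5.
# Stream 1 (tail x^{-3/2}, mean 3) dominates stream 2 (tail 4x^{-2}, mean 4):
# for a constant weight d,  int_{u_i}^infty (y/d)^{-3/2} dy = 2 d^{3/2} u_i^{-1/2},
# while the corresponding X_2-integral is of lower order u_i^{-1}.
lam1, lam2 = 1.0, 1.0
lam = lam1 + lam2
c = [5.0, 8.0]
EX = [3.0, 4.0]
d = [[0.4, 0.7],   # (A_11, A_12): weights seen by server 1
     [0.6, 0.3]]   # (A_21, A_22): weights seen by server 2

for i in (0, 1):
    c_star = (c[i] - lam1 * d[i][0] * EX[0] - lam2 * d[i][1] * EX[1]) / lam
    coeff = lam1 * 2.0 * d[i][0] ** 1.5 / (lam * c_star)
    print(f"Psi_{i+1}(u_{i+1}) ~ {coeff:.3f} * u_{i+1}^(-1/2)")
# prints 0.506 and 0.186, matching Upsilon_1 and Upsilon_2 in Fig. 5
```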

Proof of Proposition 3.3

Combining the asymptotics obtained in Theorem 3.1 and Lemma 3.2 via (2.13) we obtain, due to (3.9),

$$\begin{aligned} \Upsilon _{\wedge }(u)&\sim \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_1\left( \tfrac{b_1 u + v c_1^*}{A_{11}} \right) + {\overline{F}}_1\left( \tfrac{b_2 u + v c_2^*}{A_{21}} \right) - {\overline{F}}_1\left( \min \left\{ \tfrac{ub_1 + vc_1^*}{A_{11}}, \tfrac{ub_2 + vc_2^*}{A_{21}}\right\} \right) \mathop {}\!\mathrm {d}v \right] \\&\quad + \frac{\lambda _2}{\lambda } {\mathbb {E}}\left[ \int _0^\infty {\overline{F}}_2\left( \tfrac{b_1 u + v c_1^*}{A_{12}} \right) + {\overline{F}}_2\left( \tfrac{b_2 u + v c_2^*}{A_{22}} \right) - {\overline{F}}_2\left( \min \left\{ \tfrac{ub_1 + vc_1^*}{A_{12}}, \tfrac{ub_2 + vc_2^*}{A_{22}}\right\} \right) \mathop {}\!\mathrm {d}v \right] , \end{aligned}$$

where

$$\begin{aligned}&{\overline{F}}_i(f_1(v)) + {\overline{F}}_i(f_2(v)) - {\overline{F}}_i(\min \{f_1(v), f_2(v)\})\\&\quad = {\overline{F}}_i(f_1(v)) + {\overline{F}}_i(f_2(v)) - \max \{{\overline{F}}_i(f_1(v)), {\overline{F}}_i(f_2(v)) \} \\&\quad = \min \{{\overline{F}}_i(f_1(v)), {\overline{F}}_i(f_2(v)) \}\\&\quad = {\overline{F}}_i(\max \{f_1(v), f_2(v) \}) \end{aligned}$$

as \({\overline{F}}_i\) is monotonically decreasing. This implies the given asymptotics for \(\Psi _\wedge \).

If (3.6) fails, then \(\Psi _1(b_1u)+\Psi _2(b_2u)\sim \Psi _\vee (u)\). Therefore, we immediately obtain from (2.12) that

$$\begin{aligned} \lim _{u\rightarrow \infty } \frac{\Psi _\wedge (u)}{\Psi _\vee (u)} = \lim _{u\rightarrow \infty } \frac{\Psi _1(b_1u)+ \Psi _2(b_2u)}{\Psi _\vee (u)} -1 = 0. \end{aligned}$$

Thus Theorem 3.1 implies (3.7), which finishes the proof. \(\square \)

6.2 Proofs for Sect. 3.2

We start by proving the first statement of Theorem 3.5, which we restate as Lemma 6.4 below; the key tool is the following proposition.

Proposition 6.3

Let \(\mathbf{Z }\in {\text {MRV}}(\alpha ,\mu )\) be a random vector in \({\mathbb {R}}^{d}\) and let \(\mathbf{M }\) be a random \((q\times d)\)-matrix independent of \(\mathbf{Z }\). Let

$$\begin{aligned} {\tilde{\mu }}({}\cdot {}):={\mathbb {E}}[\mu \circ \mathbf{M }^{-1}({}\cdot {})], \end{aligned}$$

where \(\mathbf{M }^{-1}(\cdot )\) denotes the preimage under \(\mathbf{M }\). If \({\mathbb {E}}[\left\Vert \mathbf{M }\right\Vert ^\gamma ]<\infty \) for some \(\gamma >\alpha \) and \({\tilde{\mu }}({\mathcal {B}}_1^c)>0\) then \(\mathbf{M }\mathbf{Z }\in {\text {MRV}}(\alpha ,\mu ^*)\) with

$$\begin{aligned} \mu ^*({}\cdot {}) := \frac{1}{{\tilde{\mu }}({\mathcal {B}}_1^c)} \cdot {\tilde{\mu }}({}\cdot {}), \end{aligned}$$

where \({\mathcal {B}}_1^c:=\{x\in {\mathbb {R}}^q: ~ \left\Vert x\right\Vert >1\}\) denotes the complement of the closed unit ball in \({\mathbb {R}}^q\).

Proof

First note that our definition of regular variation corresponds to Definition 2.16 (Theorem 2.1.4 (i)) in [8], setting \(E={\mathcal {B}}_1^c\), which implies \({\mathbb {P}}(\mathbf{Z } \in tE) = {\mathbb {P}}(\left\Vert \mathbf{Z }\right\Vert >t)\). Now, applying [8, Proposition 2.1.18] twice implies the statement, since for \(M\subseteq {\mathbb {R}}^q\) measurable and bounded away from \(\mathbf{0 }\),

$$\begin{aligned} \frac{{\mathbb {P}}(\mathbf{M }\mathbf{Z }\in tM)}{{\mathbb {P}}(\left\Vert \mathbf{M }\mathbf{Z }\right\Vert>t)} =\, \underbrace{\frac{{\mathbb {P}}(\mathbf{M }\mathbf{Z }\in tM)}{{\mathbb {P}}(\left\Vert \mathbf{Z }\right\Vert>t)}}_{\rightarrow {\tilde{\mu }}(M)}\cdot \underbrace{ \frac{{\mathbb {P}}(\left\Vert \mathbf{Z }\right\Vert >t)}{{\mathbb {P}}(\mathbf{M }\mathbf{Z }\in t{\mathcal {B}}_1^c)}}_{\rightarrow {\tilde{\mu }}({\mathcal {B}}_1^c)^{-1}}. \end{aligned}$$

\(\square \)

Lemma 6.4

Consider the notation of Sect. 2. If \(X_1\) and \(X_2\) are regularly varying in the univariate sense with indices \(\alpha _1,\alpha _2\), then there exists a measure \(\mu ^*\) as in Proposition 6.3 such that \(\mathbf{A }\mathbf{B }\mathbf{X }\in {\text {MRV}}(\min \{\alpha _1,\alpha _2\},\mu ^*)\).

Proof

Obviously, \(\mathbf{X }=(X_1,X_2)\in {\text {MRV}}(\alpha ,\mu )\) for some non-null measure \(\mu \) concentrated on the axes, and \(\alpha =\min (\alpha _1,\alpha _2)\) since the random variables \(X_1,X_2\) are independent and both regularly varying with indices \(\alpha _1,\alpha _2\). To prove the lemma, it is thus enough to check the prerequisites of Proposition 6.3. Clearly, using the properties of \(\mathbf{A }\) and \(\mathbf{B }\), we compute \({\mathbb {E}}[\Vert \mathbf{A }\mathbf{B }\Vert ^\gamma ]=1<\infty \) for any \(\gamma \). Further, for \(M\subseteq {\mathbb {R}}^2\) measurable and bounded away from \(\mathbf{0 }\),

$$\begin{aligned} {\tilde{\mu }}(M) =&{\mathbb {E}}[\mu \circ (\mathbf{A }\mathbf{B })^{-1}(M)] = {\mathbb {E}}\left[ \mu \left( \left\{ \mathbf{x }\in {\mathbb {R}}^2: ~ \mathbf{A }\mathbf{B }\mathbf{x }\in M\right\} \right) \right] \\ =&\frac{\lambda _1}{\lambda }\cdot {\mathbb {E}}\left[ \mu \left( \left\{ \mathbf{x }=(x_1,x_2)\in {\mathbb {R}}^2: ~ \begin{pmatrix} A_{11} x_1 \\ A_{21}x_1\end{pmatrix}\in M\right\} \right) \right] \\&\quad +\frac{\lambda _2}{\lambda }\cdot {\mathbb {E}}\left[ \mu \left( \left\{ \mathbf{x }=(x_1,x_2)\in {\mathbb {R}}^2: ~ \begin{pmatrix} A_{12} x_2 \\ A_{22}x_2\end{pmatrix}\in M\right\} \right) \right] . \end{aligned}$$

Thus, for \(M={\mathcal {B}}_1^c\), recalling property (ii) of the matrix \(\mathbf{A }\) and using that, due to positivity of \(\mathbf{X }\), \(\mu \) vanishes on \({\mathbb {R}}^2\backslash {\mathbb {R}}_{>0}^2\), a direct evaluation of the above expression yields \({\tilde{\mu }}({\mathcal {B}}_1^c)>0\). This finishes the proof. \(\square \)

To prove the remainder of Theorem 3.5, we will use a result from [28]. To do so, first recall the bivariate compound Poisson process \(\mathbf{R }\) of our dual risk model of Sect. 2.2. Let \((T_k)_{k\in {\mathbb {N}}}\) be the i.i.d. \({\text {Exp}}(\lambda )\)-distributed interarrival times of the Poisson process N(t), i.e.,

$$\begin{aligned} N(t) = \sum _{n=1}^\infty {\mathbb {1}}_{\left\{ \sum \nolimits _{k=1}^n T_k\le t\right\} }. \end{aligned}$$

We define the random walk

$$\begin{aligned} \mathbf{S }_n := \sum _{k=1}^n\left( \mathbf{A }_k\mathbf{B }_k\mathbf{X }_k-T_k \mathbf{c }\right) + n\cdot \left( {\mathbb {E}}[T_1]\mathbf{c }-{\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }]\right) , \end{aligned}$$
(6.5)

and directly observe that \((\mathbf{S }_n)_{n\in {\mathbb {N}}}\) is compensated, i.e., for all \(n\in {\mathbb {N}}\),

$$\begin{aligned} {\mathbb {E}}[\mathbf{S }_n]&= \sum _{k=1}^n \left( {\mathbb {E}}[\mathbf{A }_k\mathbf{B }_k\mathbf{X }_k]-{\mathbb {E}}[T_k \mathbf{c }]\right) + n\cdot {\mathbb {E}}[T_1]\mathbf{c }-n \cdot {\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }] =\mathbf{0 }. \end{aligned}$$
(6.6)
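The compensation property (6.6) is also easy to confirm numerically. The following sketch (our illustration; deterministic switch with \(d_1=0.4\), \(d_2=0.7\) and exponential jobs) averages \(\mathbf{S }_n\) over many simulated paths:

```python
# Numerical check that the random walk (6.5) is compensated, i.e. E[S_n] = 0.
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2 = 1.0, 1.0
lam = lam1 + lam2
c = np.array([5.0, 8.0])
n, paths = 50, 20_000

# E[A B X] for the deterministic switch: mix of the two column directions
mean_jump = (lam1 / lam) * np.array([0.4, 0.6]) * 3.0 \
          + (lam2 / lam) * np.array([0.7, 0.3]) * 4.0

j = rng.random((paths, n)) < lam1 / lam            # which stream jumps
x = np.where(j, rng.exponential(3.0, (paths, n)),
                rng.exponential(4.0, (paths, n)))  # job sizes X_k
col = np.where(j[..., None], np.array([0.4, 0.6]),
                             np.array([0.7, 0.3])) # column of A hit by B_k
T = rng.exponential(1.0 / lam, (paths, n))         # interarrival times T_k
steps = col * x[..., None] - T[..., None] * c + (c / lam - mean_jump)
S_n = steps.sum(axis=1)                            # S_n from (6.5)
print(S_n.mean(axis=0))                            # close to (0, 0) by (6.6)
```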

The following lemma explains the relationship between the risk process \((\mathbf{R }(t))_{t\ge 0}\) and the random walk \((\mathbf{S }_n)_{n\in {\mathbb {N}}}\).

Lemma 6.5

Let \(L\subseteq {\mathbb {R}}^2\) be a ruin set, i.e., assume that

(i) \(L\backslash \mathbb {R}^2_{<0} = L\), i.e., \(L\cap {\mathbb {R}}_{<0}^2=\emptyset \), and

(ii) \(uL=L\) for all \(u>0\).

Then

$$\begin{aligned} \Psi _L(u) :=&\,{\mathbb {P}}\big (\mathbf{S }_n - n\left( \lambda ^{-1}\mathbf{c }-{\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }]\right) \in u(\mathbf{b }+L) \text { for some }n\in {\mathbb {N}} \big ) \\ =&\, {\mathbb {P}}\big (\mathbf{R }(t)- u\mathbf{b } \in L \text { for some }t\ge 0 \big ). \end{aligned}$$

Proof

Recall from (2.6) that \(\mathbf{R }(t) = \sum _{k=1}^{N(t)} \mathbf{A }_k\mathbf{B }_k\mathbf{X }_k - t\mathbf{c }\), where \(\mathbf{c }=(c_1,c_2)^\top \in {\mathbb {R}}_{\ge 0}^2\). Thus, by assumption (i) \(\mathbf{R }(t)\) may enter L only by a jump and since \(N(t)\overset{t\nearrow \infty }{\longrightarrow }\infty \) a.s. we get

$$\begin{aligned}&\{\mathbf{R }(t) -u\mathbf{b }\in L\text { for some }t\ge 0 \} \\&= \left\{ \sum _{k=1}^{N(t)}\mathbf{A }_k\mathbf{B }_k\mathbf{X }_k-t\mathbf{c } \in u\mathbf{b }+ L\text { for some }t\ge 0 \right\} \\&=\left\{ \sum _{k=1}^{n} (\mathbf{A }_k\mathbf{B }_k\mathbf{X }_k- T_k\mathbf{c }) \in u(\mathbf{b }+ L)\text { for some } n\in {\mathbb {N}} \right\} \\&= \left\{ \sum _{k=1}^{n} (\mathbf{A }_k\mathbf{B }_k\mathbf{X }_k- T_k\mathbf{c }) +(n-n)\left( \lambda ^{-1}\mathbf{c }-{\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }]\right) \in u(\mathbf{b }+L) \text { for some }n\in {\mathbb {N}}\right\} \\&= \left\{ \mathbf{S }_n - n\left( \lambda ^{-1}\mathbf{c }-{\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }]\right) \in u(\mathbf{b }+L) \text { for some }n\in {\mathbb {N}}\right\} , \end{aligned}$$

which yields the claim. \(\square \)

We proceed with a lemma that specifies the ruin sets that we are interested in.

Lemma 6.6

Let

$$\begin{aligned} L_\vee&:= \{(x_1,x_2)\in \mathbb {R}^2: ~ x_1>0 \vee x_2>0\} = {\mathbb {R}}^2\backslash {\mathbb {R}}_{\le 0}^2\quad \text {and} \\ L_{\wedge ,\text {sim}}&:= \{(x_1,x_2)\in \mathbb {R}^2: ~ x_1>0 \wedge x_2>0\}= {\mathbb {R}}_{>0}^2, \end{aligned}$$

then

$$\begin{aligned} \Psi _{L_\vee } (u) = \Psi _\vee (u), \quad \text {and} \quad \Psi _{L_{\wedge ,\text {sim}}} (u)&= \Psi _{\wedge ,\text {sim}}(u), \quad u>0. \end{aligned}$$

Proof

Clearly

$$\begin{aligned} {\mathbb {P}}\big (\mathbf{R }(t) - u\mathbf{b }\in L_\vee \text { for some }t\ge 0\big ) = {\mathbb {P}}\left( \max _{i=1,2}(R_i(t)-u_i) >0 \text { for some }t\ge 0\right) \end{aligned}$$

which is \(\Psi _{L_\vee } (u) = \Psi _\vee (u)\). The second equality follows analogously. \(\square \)

Proposition 6.7

Let the claim size variables \(X_1,X_2\) be regularly varying, i.e. \(X_{j} \in {\text {RV}}(\alpha _j)\) for \(\alpha _j>1\). Then \(\mathbf{A }\mathbf{B }\mathbf{X }\in {\text {MRV}}(\min (\alpha _1,\alpha _2),\mu ^*)\) for a suitable measure \(\mu ^*\). Further, recall \(\mathbf{c }^*=(c_1^*,c_2^*)^\top \in {\mathbb {R}}^2_{>0}\) from (2.7). Let \(L\subseteq {\mathbb {R}}^2\) be a ruin set in the sense of Lemma 6.5 and assume additionally:

(iii) For all \(\mathbf{a }\in {\mathbb {R}}^2_{>0}\),

$$\begin{aligned} \mu ^*(\partial (\mathbf{a }+L))=0. \end{aligned}$$

(iv) The set \(\mathbf{b }+L\) is \(\mathbf{p }\)-increasing for all \(\mathbf{p }\in {\mathbb {R}}^2_{>0}\), i.e., for all \(v\ge 0\) it holds that

$$\begin{aligned} \mathbf{x } \in \mathbf{b }+ L \quad \text {implies} \quad \mathbf{x } +v\mathbf{p } \in \mathbf{b }+ L. \end{aligned}$$

Then

$$\begin{aligned} \lim _{u\rightarrow \infty }\frac{\Psi _L(u)}{u\cdot {\mathbb {P}}(\left\Vert \mathbf{A }\mathbf{B }\mathbf{X }\right\Vert >u)}&= \int _0^\infty \mu ^*(v\mathbf{c} ^{*}+\mathbf{b }+L)\mathop {}\!\mathrm {d}v. \end{aligned}$$

Proof

That \(\mathbf{A }\mathbf{B }\mathbf{X }\in {\text {MRV}}(\min (\alpha _1,\alpha _2),\mu ^*)\) has been shown in Lemma 6.4. Recalling the definitions of \(\mathbf{S }_n\) and \(\Psi _L(u)\), we may write

$$\begin{aligned} \Psi _L(u)&= {\mathbb {P}}\left( \mathbf{S }_n - n\left( \lambda ^{-1}\mathbf{c }-{\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }]\right) \in u(\mathbf{b }+L) \text { for some }n\in {\mathbb {N}} \right) \\&= {\mathbb {P}}\left( \sum _{k=1}^n\mathbf{Y }_k - n\mathbf{c }^*\in u(\mathbf{b }+L) \text { for some }n\in {\mathbb {N}} \right) , \end{aligned}$$

for i.i.d. random vectors

$$\begin{aligned} \mathbf{Y }_k&= \mathbf{A }_k\mathbf{B }_k\mathbf{X }_k-T_k \mathbf{c } + \lambda ^{-1}\mathbf{c }-{\mathbb {E}}[\mathbf{A }\mathbf{B }\mathbf{X }]. \end{aligned}$$

All the other prerequisites ensure that we may apply [28, Thm. 3.1 and Rem. 3.2] to obtain the desired asymptotics. \(\square \)

The following lemma justifies the usage of Proposition 6.7 for our problem.

Lemma 6.8

The sets \(L_\vee \) and \(L_{\wedge ,\text {sim}}\) from Lemma 6.6 satisfy conditions (i)-(iv) of Lemma 6.5 and Proposition 6.7.

Proof

Properties (i), (ii) and (iv) are obvious. Consider (iii). Fix an arbitrary \(\mathbf{a }=(a_1,a_2)^\top \in {\mathbb {R}}^2_{>\mathbf{0 }}\). It holds that

$$\begin{aligned} \partial (\mathbf{a }+L) = \mathbf{a }+\partial (L) \end{aligned}$$

and we have

$$\begin{aligned} \partial (L_\vee )= & {} \{\mathbf{x }\in {\mathbb {R}}^2: ~(x_1=0 \wedge x_2\le 0) \vee (x_1\le 0 \wedge x_2=0) \}, \\ \partial (L_{\wedge ,\text {sim}})= & {} \{\mathbf{x }\in {\mathbb {R}}^2: ~(x_1=0 \wedge x_2\ge 0) \vee (x_1\ge 0 \wedge x_2=0) \}. \end{aligned}$$

Set

$$\begin{aligned} M_{1}(\mathbf{a })&:= \{(x_1,x_2)\in {\mathbb {R}}^2: ~ x_1\le a_1 ~\wedge ~ x_2=a_2\}, \\ M_{2}(\mathbf{a })&:= \{(x_1,x_2)\in {\mathbb {R}}^2: ~ x_1= a_1 ~\wedge ~ x_2\le a_2\}, \end{aligned}$$

so that \(\mathbf{a }+\partial L_\vee = M_{1}(\mathbf{a })\cup M_{2}(\mathbf{a })\). Now consider the set \(M_1(\mathbf{a })\). Let \(t\in (1,\infty )\cap {\mathbb {Q}}\), then

$$\begin{aligned} t M_{1}(\mathbf{a }) = \{(x_1,x_2)\in {\mathbb {R}}^2: ~ x_1\le t a_1 \wedge x_2=ta_2 \}. \end{aligned}$$

Thus, for \(t_1\ne t_2\), we have \(t_1M_{1}(\mathbf{a }) \cap t_2M_{1}(\mathbf{a }) = \emptyset \). Further, the set \(\bigcup _{t\in (1,\infty )\cap {\mathbb {Q}}} tM_{1}(\mathbf{a })\) is obviously bounded away from zero, since \((a_1,a_2)>\mathbf{0 }\). We thus obtain

$$\begin{aligned} \infty > \mu ^*\left( \bigcup _{t\in (1,\infty )\cap {\mathbb {Q}}} tM_{1}(\mathbf{a })\right)&= \sum _{t\in (1,\infty )\cap {\mathbb {Q}}} \mu ^*(tM_{1}(\mathbf{a })) \\&= \sum _{t\in (1,\infty )\cap {\mathbb {Q}}} t^{-\min \{\alpha _1,\alpha _2\}}\mu ^*(M_{1}(\mathbf{a }))\\&=\mu ^*(M_{1}(\mathbf{a })) \sum _{t\in (1,\infty )\cap {\mathbb {Q}}} t^{-\min \{\alpha _1,\alpha _2\}}. \end{aligned}$$

Since the last sum is infinite, \(\mu ^*(M_{1}(\mathbf{a }))\) must be zero. The same argument applied to \(M_2(\mathbf{a })\) thus yields the result for \(L_\vee \). The proof for \(L_{\wedge ,\text {sim}}\) is analogous. \(\square \)

Proof of Theorem 3.5

The first statement has been shown in Lemma 6.4. The asymptotics for \(\Psi _\vee \) and \(\Psi _{\wedge ,\text {sim}}\) are direct consequences of Lemma 6.8 and Proposition 6.7. \(\square \)

For the proof of Proposition 3.6, we will use the following lemma.

Lemma 6.9

Let f, g be regularly varying functions with indices \(\alpha ,\beta >0\) and set

$$\begin{aligned} \zeta :=\lim _{t\rightarrow \infty } \frac{\lambda _1f(t)}{\lambda _2g(t)}\in [0,\infty ], \end{aligned}$$

for \(\lambda _1,\lambda _2>0\), so that \(\zeta \in (0,\infty )\) clearly implies \(\alpha =\beta \). Then for any constants \(\gamma _1,\gamma _2>0\),

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{\lambda _1f(\gamma _1t)+ \lambda _2g(\gamma _2t)}{\lambda _1f(t)+\lambda _2g(t)} = \frac{\zeta \gamma _1^\alpha +\gamma _2^\beta }{1+\zeta }, \end{aligned}$$

where we interpret \(\frac{\infty \cdot x}{\infty }:=x\).

Proof

Obviously it holds that

$$\begin{aligned} \frac{\lambda _1f(\gamma _1t)+ \lambda _2g(\gamma _2t)}{\lambda _1 f(t)+\lambda _2g(t)}&= \frac{\frac{f(\gamma _1t)}{f(t)}}{1+\frac{\lambda _2 g(t)}{\lambda _1 f(t)}} + \frac{\frac{g(\gamma _2t)}{g(t)}}{1+\frac{\lambda _1f(t)}{\lambda _2 g(t)}} \\&\underset{t\rightarrow \infty }{\longrightarrow }\frac{\gamma _1^{\alpha }}{1+\zeta ^{-1}} + \frac{\gamma _2^\beta }{1+\zeta }=\frac{\zeta \gamma _1^\alpha +\gamma _2^\beta }{1+\zeta }. \end{aligned}$$

\(\square \)

Proof of Proposition 3.6

We concentrate first on the \(\vee \)-case and start by determining the constant \(C_\vee \). Using the limiting-measure property of \(\mu ^*\), (3.13) and the properties of \(\mathbf{A }\) and \(\mathbf{B }\) we obtain

$$\begin{aligned}&\int _0^\infty \mu ^*(v\mathbf{c }^*+\mathbf{b } +L_{\vee })\mathop {}\!\mathrm {d}v \\&\quad = \int _0^\infty \lim _{t\rightarrow \infty } \frac{{\mathbb {P}}(\mathbf{A }\mathbf{B }\mathbf{X }\in t(v\mathbf{c }^*+\mathbf{b } + L_{\vee }))}{{\mathbb {P}}(\left\Vert \mathbf{A }\mathbf{B }\mathbf{X }\right\Vert>t)} \mathop {}\!\mathrm {d}v \\&\quad =\int _0^\infty \lim _{t\rightarrow \infty } \left( \frac{\frac{\lambda _1}{\lambda }{\mathbb {P}}\left( \big ({\begin{matrix} A_{11} X_1\\ A_{21}X_1\end{matrix}}\big )\in t(v\mathbf{c }^*+\mathbf{b } + L_{\vee })\right) }{\frac{\lambda _1}{\lambda }\cdot {\mathbb {P}}(X_1>t) +\frac{\lambda _2}{\lambda }\cdot {\mathbb {P}}(X_2>t)} +\frac{\frac{\lambda _2}{\lambda }{\mathbb {P}}\left( \big ({\begin{matrix} A_{12} X_2\\ A_{22}X_2\end{matrix}}\big )\in t(v\mathbf{c }^*+\mathbf{b } + L_{\vee })\right) }{\frac{\lambda _1}{\lambda }\cdot {\mathbb {P}}(X_1>t) +\frac{\lambda _2}{\lambda }\cdot {\mathbb {P}}(X_2>t)}\right) \mathop {}\!\mathrm {d}v. \end{aligned}$$

Now recall that \(L_\vee = \{(x_1,x_2)\in {\mathbb {R}}^2: ~ x_1>0 \vee x_2>0\}\), which yields

$$\begin{aligned} t(v\mathbf{c }^*+\mathbf{b }+L_\vee ) = \left\{ (x_1,x_2)\in {\mathbb {R}}^2: ~(x_1> tvc_1^*+tb_1) \vee (x_2>tvc_2^*+tb_2)\right\} . \end{aligned}$$

Hence

$$\begin{aligned} {\mathbb {P}}\left( \big ({\begin{matrix} A_{11}X_1 \\ A_{21}X_1\end{matrix}}\big ) \in t(v\mathbf{c }^* + \mathbf{b }+L_\vee )\right)&= {\mathbb {P}}\left( A_{11}X_1>t(vc_1^*+b_1) \vee A_{21}X_1>t(vc_2^*+b_2)\right) \\&= {\mathbb {P}}\left( X_1> \min \left\{ \tfrac{t(vc_1^*+b_1)}{A_{11}},\tfrac{t(vc_2^*+b_2)}{A_{21}} \right\} \right) \\&= {\mathbb {P}}\left( X_1> t\cdot \min \left\{ \tfrac{vc_1^*+b_1}{A_{11}},\tfrac{vc_2^*+b_2}{A_{21}} \right\} \right) . \end{aligned}$$

A similar computation for \(\big ({\begin{matrix} A_{12}X_2 \\ A_{22}X_2\end{matrix}}\big )\) thus leads to

$$\begin{aligned}&\mu ^*(v\mathbf{c }^*+\mathbf{b } +L_{\vee }) \\&\quad = \lim _{t\rightarrow \infty } \left( \frac{\lambda _1 {\mathbb {P}}\left( X_1> t\cdot \min \left\{ \frac{vc_1^*+b_1}{A_{11}},\frac{vc_2^*+b_2}{A_{21}} \right\} \right) }{ \lambda _1 {\mathbb {P}}(X_1>t) +\lambda _2 {\mathbb {P}}(X_2>t)} +\frac{\lambda _2 {\mathbb {P}}\left( X_2> t\cdot \min \left\{ \frac{vc_1^*+b_1}{A_{12}},\frac{vc_2^*+b_2}{A_{22}} \right\} \right) }{\lambda _1 {\mathbb {P}}(X_1>t) +\lambda _2 {\mathbb {P}}(X_2>t)}\right) \\&\quad = \lim _{t\rightarrow \infty } \int _{\mathbf{a }\in {\mathbb {A}}} \frac{\lambda _1 {\mathbb {P}}\left( X_1> t\cdot \min \left\{ \frac{vc_1^*+b_1}{a_{11}},\frac{vc_2^*+b_2}{a_{21}} \right\} \right) + \lambda _2 {\mathbb {P}}\left( X_2> t\cdot \min \left\{ \frac{vc_1^*+b_1}{a_{12}},\frac{vc_2^*+b_2}{a_{22}} \right\} \right) }{\lambda _1 {\mathbb {P}}(X_1>t) +\lambda _2 {\mathbb {P}}(X_2>t)} \mathop {}\!\mathrm {d}{\mathbb {P}}_\mathbf{A }\\&\quad =\int _{\mathbf{a }\in {\mathbb {A}}} \lim _{t\rightarrow \infty } \frac{\lambda _1 {\mathbb {P}}\left( X_1> t\cdot \min \left\{ \frac{vc_1^*+b_1}{a_{11}},\frac{vc_2^*+b_2}{a_{21}} \right\} \right) + \lambda _2 {\mathbb {P}}\left( X_2> t\cdot \min \left\{ \frac{vc_1^*+b_1}{a_{12}},\frac{vc_2^*+b_2}{a_{22}} \right\} \right) }{\lambda _1 {\mathbb {P}}(X_1>t) +\lambda _2 {\mathbb {P}}(X_2>t)} \mathop {}\!\mathrm {d}{\mathbb {P}}_\mathbf{A }, \end{aligned}$$

where \({\mathbb {P}}_\mathbf{A }({}\cdot {})\) denotes the probability measure induced by \(\mathbf{A }\) and \({\mathbb {A}}\) denotes the set of all possible realisations of \(\mathbf{A }\). Hereby, the second equality has been obtained by conditioning on \(\mathbf{A }=\mathbf{a }\), while the last equality follows from Lebesgue’s dominated convergence theorem. Note that the latter is applicable since

$$\begin{aligned}&\frac{\lambda _1 {\mathbb {P}}\left( X_1>t\min \left\{ \frac{vc_1^*+b_1}{\mathbf{a }_{11}},\frac{vc_2^*+b_2}{\mathbf{a }_{21}}\right\} \right) +\lambda _2 {\mathbb {P}}\left( X_2>t\min \left\{ \frac{vc_1^*+b_1}{\mathbf{a }_{12}},\frac{vc_2^*+b_2}{\mathbf{a }_{22}}\right\} \right) }{\lambda _1 {\mathbb {P}}(X_1>t) +\lambda _2 {\mathbb {P}}(X_2>t)} \\&\le \frac{\lambda _1 {\mathbb {P}}\left( X_1>t\min \left\{ vc_1^*+b_1,vc_2^*+b_2\right\} \right) +\lambda _2 {\mathbb {P}}\left( X_2>t\min \left\{ vc_1^*+b_1,vc_2^*+b_2\right\} \right) }{\lambda _1 {\mathbb {P}}(X_1>t) +\lambda _2 {\mathbb {P}}(X_2>t)}\\&\le \frac{\lambda _1 {\mathbb {P}}\left( X_1>t\min \left\{ vc_1^*+b_1,vc_2^*+b_2\right\} \right) }{\lambda _1\cdot {\mathbb {P}}(X_1>t)} +\frac{\lambda _2 {\mathbb {P}}\left( X_2>t\min \left\{ vc_1^*+b_1,vc_2^*+b_2\right\} \right) }{\lambda _2\cdot {\mathbb {P}}(X_2>t)}\\&\rightarrow \left( \min \{vc_1^*+b_1,vc_2^*+b_2\}\right) ^{-\alpha _1} + \left( \min \{vc_1^*+b_1,vc_2^*+b_2\}\right) ^{-\alpha _2} \end{aligned}$$

and thus there exists \(t_0>0\) independent of the realisation \(\mathbf{a }\) such that for all \(t>t_0\) the integrand is smaller than

$$\begin{aligned} 2 \left( \left( \min \{vc_1^*+b_1,vc_2^*+b_2\}\right) ^{-\alpha _1} + \left( \min \{vc_1^*+b_1,vc_2^*+b_2\}\right) ^{-\alpha _2} \right) , \end{aligned}$$

which, as a constant (with respect to \(\mathbf{A }\)), is clearly \({\mathbb {P}}_\mathbf{A }\)-integrable.

By Tonelli’s theorem we thus obtain

$$\begin{aligned}&C_\vee = \int _0^\infty \mu ^*(v\mathbf{c }^*+\mathbf{b } +L_{\vee })\mathop {}\!\mathrm {d}v \\&= {\mathbb {E}}\left[ \int _0^\infty \lim _{t\rightarrow \infty } \frac{\lambda _1 {\overline{F}}_1\left( t\min \left\{ \frac{vc_1^*+b_1}{A_{11}},\frac{vc_2^*+b_2}{A_{21}}\right\} \right) +\lambda _2 {\overline{F}}_2\left( t\min \left\{ \frac{vc_1^*+b_1}{A_{12}},\frac{vc_2^*+b_2}{A_{22}}\right\} \right) }{\lambda _1\cdot {\overline{F}}_1(t) +\lambda _2\cdot {\overline{F}}_2(t)} \mathop {}\!\mathrm {d}v\right] . \end{aligned}$$

Applying Lemma 6.9 now yields (3.14).

The proof of (3.16) can be carried out in complete analogy. \(\square \)
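Numerically, \(C_\vee \) is now straightforward to evaluate. In the Pareto setting of Fig. 5 one has \(\zeta =\infty \), so by Lemma 6.9 only the dominating first stream contributes; assuming this specialization of (3.14), the sketch below (ours) returns \(C_\vee \approx 1.512\) and hence, since \(\Psi _\vee (u)\sim C_\vee u\,(\lambda _1{\overline{F}}_1(u)+\lambda _2{\overline{F}}_2(u))/\lambda \sim \tfrac{C_\vee }{2}u^{-1/2}\), the value \(0.756\,u^{-1/2}\) reported there.

```python
# Sketch: numerical evaluation of C_or for the deterministic switch of Fig. 5
# (d1 = 0.4, Pareto index alpha_1 = 3/2, b = (0.8, 0.2), c* = (0.5, 2.5)).
from scipy.integrate import quad

c_star = (0.5, 2.5)
b = (0.8, 0.2)
d1, alpha1 = 0.4, 1.5

integrand = lambda v: min((v * c_star[0] + b[0]) / d1,
                          (v * c_star[1] + b[1]) / (1.0 - d1)) ** (-alpha1)

vb = 4.0 / 7.0   # kink where the two branches of the minimum cross
C = quad(integrand, 0.0, vb)[0] + quad(integrand, vb, float("inf"))[0]
print(C)         # ~ 1.512, so Psi_or(u) ~ 0.756 * u^{-1/2}
```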

Proof of Lemma 3.7

It is enough to prove that, under the present assumptions, the assumption of Lemma 3.2 is also fulfilled. Hence, we need to show that \(X_1\in {\text {RV}}(\alpha _1), ~ X_2\in {\text {RV}}(\alpha _2)\) for \(\alpha _1,\alpha _2>1\) implies that \(F_I^i\in {\mathcal {S}}\). Recall \(Y_i\) and \(F^{Y_i}_I=F_I^i\) from the proof of Lemma 3.2 and assume for the moment that neither \(A_{i1}=0\) a.s., nor \(A_{i2}=0\) a.s. Then, using Proposition 6.3 and the same argumentation as in the proof of Lemma 6.4, we obtain that \(Y_{i}\in {\text {RV}}(\min \{\alpha _1,\alpha _2\})\). Thus, the tails of the corresponding integrated tail functions \(\overline{F^{Y_i}}_I\) are regularly varying as well, with index \(-\min \{\alpha _1,\alpha _2\}+1\), which implies \(F^i_I \in {\mathcal {S}}\). If \(A_{i1}=0\) a.s., then \(Y_i\) reduces to \(B_{22}A_{i2}X_2\) and clearly \(Y_i\in {\text {RV}}(\alpha _2)\), which again implies \(F^i_I \in {\mathcal {S}}\); the case \(A_{i2}=0\) a.s. is analogous. \(\square \)

Proof of Proposition 3.8

Assume (3.6) holds true. From Lemma 3.7 and its proof, we obtain directly that, as \(u=u_1+u_2 \rightarrow \infty \),

$$\begin{aligned} \Psi _1(b_1 u)+\Psi _2(b_2 u)&\sim \frac{1}{\lambda } \left( \lambda _1 \left( \frac{1}{c_1^*} \int _{b_1 u}^\infty {\mathbb {P}}(A_{11}X_1>y) \mathop {}\!\mathrm {d}y + \frac{1}{c_2^*} \int _{b_2 u}^\infty {\mathbb {P}}(A_{21}X_1>y) \mathop {}\!\mathrm {d}y \right) \right. \\&\qquad +\left. \lambda _2 \left( \frac{1}{c_1^*} \int _{b_1 u}^\infty {\mathbb {P}}(A_{12}X_2>y) \mathop {}\!\mathrm {d}y + \frac{1}{c_2^*} \int _{b_2 u}^\infty {\mathbb {P}}(A_{22}X_2>y) \mathop {}\!\mathrm {d}y \right) \right) , \end{aligned}$$

where the first two terms on the right-hand side are regularly varying with index \(-\alpha _1+1\), while the latter two terms are regularly varying with index \(-\alpha _2+1\).

Together with (3.11), (3.13) we thus obtain that, as \(u\rightarrow \infty \),

$$\begin{aligned} \Psi _\wedge (u)&= \Psi _1(b_1 u) + \Psi _2(b_2 u) - \Psi _{\vee }(u)\\&\sim \frac{1}{\lambda } \left( \lambda _1 \left( \frac{1}{c_1^*} \int _{b_1 u}^\infty {\mathbb {P}}(A_{11}X_1>y) \mathop {}\!\mathrm {d}y + \frac{1}{c_2^*} \int _{b_2 u}^\infty {\mathbb {P}}(A_{21}X_1>y) \mathop {}\!\mathrm {d}y - C_\vee u {\overline{F}}_1(u) \right) \right. \\&\qquad +\left. \lambda _2 \left( \frac{1}{c_1^*} \int _{b_1 u}^\infty {\mathbb {P}}(A_{12}X_2>y) \mathop {}\!\mathrm {d}y + \frac{1}{c_2^*} \int _{b_2 u}^\infty {\mathbb {P}}(A_{22}X_2>y) \mathop {}\!\mathrm {d}y - C_\vee u {\overline{F}}_2(u) \right) \right) , \end{aligned}$$

where (3.6) ensures that terms with the same index of regular variation do not cancel out asymptotically. Using Tonelli’s theorem as in the proof of Lemma 3.7 this yields

$$\begin{aligned} \Psi _\wedge (u)&\sim \frac{1}{\lambda } \left( \lambda _1 \left( \frac{1}{c_1^*} {\mathbb {E}}\left[ \int _{b_1u}^\infty {\overline{F}}_1(\tfrac{y}{A_{11}}) \mathop {}\!\mathrm {d}y\right] + \frac{1}{c_2^*} {\mathbb {E}}\left[ \int _{b_2 u}^\infty {\overline{F}}_1(\tfrac{y}{A_{21}}) \mathop {}\!\mathrm {d}y\right] - C_\vee u {\overline{F}}_1(u) \right) \right. \\&\qquad +\left. \lambda _2 \left( \frac{1}{c_1^*} {\mathbb {E}}\left[ \int _{b_1 u}^\infty {\overline{F}}_2(\tfrac{y}{A_{12}}) \mathop {}\!\mathrm {d}y\right] + \frac{1}{c_2^*} {\mathbb {E}}\left[ \int _{b_2 u}^\infty {\overline{F}}_2(\tfrac{y}{A_{22}}) \mathop {}\!\mathrm {d}y\right] - C_\vee u {\overline{F}}_2(u) \right) \right) \end{aligned}$$

and hence (3.17) by substituting \(v=\tfrac{y-b_i u}{c_i^*}\). If (3.6) fails, then the statement follows in analogy to the proof of Proposition 3.3. \(\square \)

6.3 Proofs for Sect. 4

Proof of Lemma 4.1

We take up the notation used in the proof of Lemma 3.2 and denote the jumps of the resulting one-dimensional risk processes by \(\{Y_{i,k},k\in {\mathbb {N}}\}\), \(i=1,2\). Then the given bound for \(\Psi _i(u)\) follows from [3, Thm. IV.5.2] with \(\kappa _i>0\) such that \(c_i\kappa _i = \lambda (\varphi _{Y_i}(\kappa _i)-1)\). (Note that in [3] the constants c and \(\lambda \) are combined as \(\beta =\lambda /c\).) But since, by conditioning,

$$\begin{aligned} \varphi _{Y_i}(y)&= {\mathbb {E}}\big [\hbox {e}^{y (B_{11}A_{i1}X_1 + B_{22} A_{i2}X_2)} \big ] = \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \hbox {e}^{y A_{i1}X_1 }\right] +\frac{\lambda _2}{\lambda } {\mathbb {E}}\left[ \hbox {e}^{y A_{i2}X_2} \right] \\&= \frac{\lambda _1}{\lambda } {\mathbb {E}}\left[ \varphi _{X_1}(y A_{i1}) \right] +\frac{\lambda _2}{\lambda } {\mathbb {E}}\left[ \varphi _{X_2}(y A_{i2}) \right] , \quad i=1,2, \end{aligned}$$

this is equivalent to (4.2).

Further, by [3, Thm. IV.5.3], it holds that

$$\begin{aligned} \lim _{u\rightarrow \infty } \hbox {e}^{\kappa _i u} \Psi _i(u) = \frac{c_i-\lambda {\mathbb {E}}[Y_i]}{\lambda \varphi _{Y_i}'(\kappa _i)-c_i}, \end{aligned}$$

with \({\mathbb {E}}[Y_i]\) as given in (6.4) and

$$\begin{aligned} \varphi _{Y_i}'(y)&= \frac{\lambda _1}{\lambda } \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}y} {\mathbb {E}}\left[ \hbox {e}^{y A_{i1}X_1 }\right] +\frac{\lambda _2}{\lambda } \frac{\mathop {}\!\mathrm {d}}{\mathop {}\!\mathrm {d}y}{\mathbb {E}}\left[ \hbox {e}^{y A_{i2}X_2} \right] = \frac{\lambda _1}{\lambda } \varphi '_{A_{i1}X_1}(y) +\frac{\lambda _2}{\lambda } \varphi '_{A_{i2}X_2}(y), \end{aligned}$$

where, again by conditioning,

$$\begin{aligned} \varphi '_{A_{ij}X_j}(y)&= \mathbb {E}\left[ A_{ij}X_j e^{y A_{ij}X_j }\right] = {\mathbb {E}}\left[ \mathbb {E}\left[ A_{ij}X_j e^{y A_{ij}X_j }|A_{ij} \right] \right] \\ {}&= {\mathbb {E}}\left[ A_{ij} \frac{\partial }{\partial (yA_{ij})} {\mathbb {E}}[ e^{yA_{ij}X_j}|A_{ij}]\right] = \mathbb {E}\left[ A_{ij} \varphi _{X_j}'(y A_{ij})\right] , \end{aligned}$$

which yields the given asymptotics. \(\square \)
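In contrast to the Bernoulli and deterministic switches, for a genuinely random switch the expectations in (4.2) have to be computed by (numerical) integration against the distribution of \(\mathbf{A }\). The following sketch (ours; "Beta switch 1" of Fig. 7 with exponential jobs, assuming an adjustment coefficient exists as required by (4.1)) illustrates this for server 1:

```python
# Sketch: solve (4.2) for kappa_1 when A_11 ~ Beta(0.4, 0.6) and
# A_12 ~ Beta(0.7, 0.3), with Exp jobs of means 3 and 4 and c_1 = 5.
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import beta

m1, m2, c1 = 3.0, 4.0, 5.0
pdf1 = beta(0.4, 0.6).pdf          # density of A_11
pdf2 = beta(0.7, 0.3).pdf          # density of A_12

def E_phi(kappa, m, pdf):
    # E[phi_X(kappa * A)] for X ~ Exp(mean m) and A with density pdf on (0,1)
    return quad(lambda a: pdf(a) / (1.0 - m * kappa * a), 0.0, 1.0, limit=200)[0]

# as kappa -> 1/4 the X_2-expectation explodes, so a root exists below 0.24
f = lambda k: (E_phi(k, m1, pdf1) - 1) + (E_phi(k, m2, pdf2) - 1) - c1 * k
kappa1 = brentq(f, 1e-6, 0.24)
print(kappa1)
```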

Proof of Theorem 4.4

Recall from Sect. 2.2 that

$$\begin{aligned} R_i(t)= \sum _{k=1}^{N(t)} \left( (A_{i1})_k (B_{11})_k X_{1,k} + (A_{i2})_k (B_{22})_k X_{2,k}\right) - tc_i, \end{aligned}$$

so that the joint cumulant exponent of the two-dimensional Lévy process \((-R_1(t),-R_2(t))_{t\ge 0}\) can be determined, by conditioning first on \((\mathbf{B }_k)_{k\in {\mathbb {N}}}\) and then on the components of \(\mathbf{A }\), as

$$\begin{aligned} k(t_1,t_2)&= \log {\mathbb {E}}[\exp (-t_1 R_1(1) - t_2 R_2(1))] \\&= \log {\mathbb {E}}\bigg [ \exp \bigg (- \sum _{k=1}^{N(1)} \Big ( \big (t_1 (A_{11})_k (B_{11})_k + t_2 (1-(A_{11})_k) (B_{11})_k\big ) X_{1,k} \\&\qquad \qquad + \big ( t_1(A_{12})_k (B_{22})_k + t_2 (1-(A_{12})_k) (B_{22})_k \big ) X_{2,k} \Big ) + t_1c_1 + t_2 c_2 \bigg )\bigg ] \\&=\log {\mathbb {E}}\left[ \exp \left( - \sum _{\ell =1}^{N_1(1)} \big (t_1 (A_{11})_\ell + t_2 (1-(A_{11})_\ell )\big ) X_{1,\ell } \right) \right] \\&\qquad + \log {\mathbb {E}}\left[ \exp \left( - \sum _{\ell =1}^{N_2(1)} \big (t_1 (A_{12})_\ell + t_2 (1-(A_{12})_\ell )\big ) X_{2,\ell } \right) \right] + t_1c_1 + t_2c_2\\&= \lambda _1\big (\varphi _{(t_1 A_{11} + t_2 (1-A_{11}))X_1}(-1)-1\big ) + \lambda _2\big (\varphi _{(t_1 A_{12} + t_2 (1-A_{12}))X_2}(-1)-1\big ) \\&\qquad + t_1c_1 + t_2c_2 \\&= {\mathbb {E}}\big [\lambda _1 (\varphi _{X_1}(-t_1 A_{11} - t_2 (1-A_{11}))-1)\big ] \\&\qquad + {\mathbb {E}}\big [ \lambda _2 (\varphi _{X_2}(-t_1 A_{12} - t_2 (1-A_{12}))-1)\big ] + t_1c_1 + t_2c_2, \end{aligned}$$

which is, by assumption (4.1), well defined on some set \(\Xi \supsetneq [0,\infty )^2\). The first two statements thus follow from [4, Thm. 3], as long as there exist \(\gamma _1,\gamma _2\) such that \(k(-\gamma _1,0)=k(0,-\gamma _2)=0\) and \((-\gamma _1,0),(0,-\gamma _2)\in \Xi ^\circ \), the interior of \(\Xi \). But since

$$\begin{aligned} k(-x,0)&= \lambda _1 \big ({\mathbb {E}}[\varphi _{ X_1}(x A_{11} )]-1\big )+ \lambda _2 \big ({\mathbb {E}}[\varphi _{ X_2}(x A_{12})]-1\big ) -xc_1, \end{aligned}$$

we observe that \(\gamma _1=\kappa _1\) which exists and is such that \((-\kappa _1,0)\in \Xi ^\circ \) by assumption. Likewise, we obtain \(\gamma _2=\kappa _2\) with \((0,-\kappa _2)\in \Xi ^\circ \).

The last statement now follows directly from the fact that \(\Psi _{\wedge ,\text {sim}}(u)\le \Psi _\wedge (u)\). \(\square \)