Asymptotics and Approximations of Ruin Probabilities for Multivariate Risk Processes in a Markovian Environment

Abstract

This paper develops asymptotics and approximations for ruin probabilities in a multivariate risk setting. We consider a model in which the individual reserve processes are driven by a common Markovian environmental process. We subsequently consider a regime in which the claim arrival intensity and the transition rates of the environmental process are jointly sped up, and one in which there is (with overwhelming probability) at most one transition of the environmental process in the time interval considered. The approximations are extensively tested in a series of numerical experiments.

References

  1. Aldous D, Hennequin P, Ibragimov I, Jacod J (1985) Ecole d’été de probabilités de Saint-Flour XIII 1983. Lecture notes in mathematics. Springer

  2. Anderson D, Blom J, Mandjes M, Thorsdottir H, de Turck K (2016) A functional central limit theorem for a Markov-modulated infinite-server queue. Methodol Comput Appl Probab 18:153–168

  3. Asmussen S (1984) Approximations for the probability of ruin within finite time. Scand Actuar J 1984:31–57

  4. Asmussen S (1987) The heavy traffic limit of a class of Markovian queueing models. Oper Res Lett 6:301–306

  5. Asmussen S (1989) Risk theory in a Markovian environment. Scand Actuar J 1989:69–100

  6. Asmussen S (2003) Applied probability and queues. Springer

  7. Asmussen S, Albrecher H (2010) Ruin probabilities, vol 14. World Scientific

  8. Avram F, Palmowski Z, Pistorius M (2008) A two-dimensional ruin problem on the positive quadrant. Insur Math Econ 42(1):227–234

  9. Badescu A, Cheung E (2011) A two-dimensional risk model with proportional reinsurance. J Appl Probab 48(3):749–765

  10. Billingsley P (1999) Convergence of probability measures. Wiley

  11. Bäuerle N (1996) Some results about the expected ruin time in Markov-modulated risk models. Insur Math Econ 18(2):119–127

  12. Bäuerle N, Kötter M (2007) Markov-modulated diffusion risk models. Scand Actuar J 2007(1):34–52

  13. Cai J, Li H (2007) Dependence properties and bounds for ruin probabilities in multivariate compound risk models. J Multivar Anal 98:757–773

  14. Cai J, Landriault D, Shi T, Wei W (2017) Joint insolvency analysis of a shared MAP risk process: a capital allocation application. North Am Actuar J 21(2):178–192

  15. Chen C, Panjer H (2009) A bridge from ruin theory to credit risk. Rev Quant Finan Acc 32(4):373–403

  16. D’Amico G (2014) Moments analysis of a Markov-modulated risk model with stochastic interest rates. Commun Stoch Anal 8(2):227–246

  17. Davidson J (1994) Stochastic limit theory: an introduction for econometricians. Advanced texts in econometrics. Oxford University Press

  18. Dickson D, Qazvini M (2018) Ruin problems in Markov-modulated risk models. Ann Actuar Sci 12:23–48

  19. Escobar M, Hernandez J (2014) A note on the distribution of multivariate Brownian extrema. Int J Stoch Anal 2014:1–6

  20. Gong L, Badescu A, Cheung E (2012) Recursive methods for a multi-dimensional risk process with common shocks. Insur Math Econ 50(1):109–120

  21. Grandell J (1977) A class of approximations of ruin probabilities. Scand Actuar J 1977:37–52

  22. He H, Keirstead W, Rebholz J (1998) Double lookbacks. Math Financ 8:201–228

  23. Iyengar S (1985) Hitting lines with two-dimensional Brownian motion. SIAM J Appl Math 45:983–989

  24. Jacod J, Shiryaev A (2002) Limit theorems for stochastic processes. Grundlehren der mathematischen Wissenschaften. Springer

  25. Joshi M (2003) The concepts and practice of mathematical finance. Mathematics, finance and risk. Cambridge University Press

  26. Kaishev V, Dimitrova D, Ignatov Z (2008) Operational risk and insurance: a ruin-probabilistic reserving approach. J Oper Risk 3(3):39–60

  27. Kallenberg O (1997) Foundations of modern probability. Springer

  28. Kosorok M (2008) Introduction to empirical processes and semiparametric inference. Springer

  29. Kou S, Zhong H (2016) First-passage times of two-dimensional Brownian motion. Adv Appl Probab 48(4):1045–1060

  30. Loisel S (2007) Time to ruin, insolvency penalties and dividends in a Markov-modulated multi-risk model with common shocks. Bulletin Français d’Actuariat 7:4–24

  31. Lu Y, Tsai C (2007) The expected discounted penalty at ruin for a Markov-modulated risk process perturbed by diffusion. North Am Actuar J 11(2):136–149

  32. Lundberg F (1903) Approximerad framställning af sannolikhetsfunktionen: Återförsäkering af kollektivrisker. Almqvist & Wiksell

  33. Pang G, Zheng Y (2017) On the functional and local limit theorems for Markov modulated compound Poisson processes. Stat Probab Lett 129:131–140

  34. Picard P, Lefèvre C, Coulibaly I (2003) Multirisks model and finite-time ruin probabilities. Methodol Comput Appl Probab 5:337–353

  35. Reinhard J (1984) On a class of semi-Markov risk models obtained as classical risk models in a Markovian environment. ASTIN Bulletin 14:23–43

  36. Whitt W (2007) Proofs of the martingale FCLT. Probab Surveys 4:268–302

  37. Wise M, Bhansali V (2008) Correlated random walks and the joint survival probability. SSRN Electronic Journal

  38. Zhang X (2008) On the ruin problem in a Markov-modulated risk model. Methodol Comput Appl Probab 10(2):225–238

Author information

Corresponding author

Correspondence to G. A. Delsing.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proof of Theorem 1

In this section we follow the same line of reasoning as in the proof of Theorem 1.1 of Pang and Zheng (2017). We begin by decomposing the diffusion-scaled process into three separate processes in Lemma 1. The convergence of each of these processes to its (possibly degenerate) Brownian limit is proven in Lemma 4, Lemma 5 and Lemma 6; these lemmas are the multivariate counterparts of Lemmas 2.6, 2.7 and 2.8 of Pang and Zheng (2017), respectively. Finally, we conclude the proof by establishing the joint convergence of the three processes at the end of this section.

Lemma 1

The diffusion-scaled process \({\hat {\boldsymbol Y}}^{n}\) can be decomposed into the following three processes:

$$\hat{Y}_{i}^{n}(t)=\hat{U}_{i}^{n}(t)+\hat{V}_{i}^{n}(t)+\hat{W}_{i}^{n}(t)$$

where

$$\hat{U}_{i}^{n}(t):=\frac{1}{n^{\delta}}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\left( Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}\right),$$
$$\hat{V}_{i}^{n}(t):=\frac{1}{n^{\delta}}\left( \sum\limits_{k=1}^{{A_{i}^{n}}(t)}\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}-{{\int}_{0}^{t}} \mu^{n}_{i,J^{n}(s)}\lambda^{n}_{i,J^{n}(s)}\mathrm{d}s\right),$$
$$\hat{W}_{i}^{n}(t):=\frac{1}{n^{\delta}}\left( {{\int}_{0}^{t}} \mu^{n}_{i,J^{n}(s)}\lambda^{n}_{i,J^{n}(s)}\mathrm{d}s-\sum\limits_{j=1}^{I}\lambda_{i,j}^{n}\mu_{i,j}^{n}\pi_{j} t\right).$$
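
Note that the three terms indeed telescope: summing the above definitions, the intermediate sums and integrals cancel, and one is left with

$$\hat{U}_{i}^{n}(t)+\hat{V}_{i}^{n}(t)+\hat{W}_{i}^{n}(t)=\frac{1}{n^{\delta}}\left( \sum\limits_{k=1}^{{A_{i}^{n}}(t)} Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\sum\limits_{j=1}^{I}\lambda_{i,j}^{n}\mu_{i,j}^{n}\pi_{j} t\right),$$

i.e., the cumulative claim amount centered around its long-run mean, consistent with the definition of \(\hat{Y}_{i}^{n}\) in the main text.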

For each \(n\in \mathbb {N}\), define \(\mu ^{n}_{i,*}:=\max \limits _{j\in S}\mu _{i,j}^{n}\), \(\lambda ^{n}_{i,*}:=\max \limits _{j\in S}\lambda _{i,j}^{n}\) and \(\sigma ^{n}_{i,*}:=\max \limits _{j\in S}\sigma _{i,j}^{n}\). By the scaling of the parameters of \(\boldsymbol{Y}^{n}\), we obtain that, for all i ∈{1,...,m},

$$ \frac{1}{n}\lambda^{n}_{i,*}\rightarrow \lambda_{i,*}, \ \mu^{n}_{i,*}\rightarrow \mu_{i,*} \ \text{and} \ \sigma^{n}_{i,*}\rightarrow \sigma_{i,*}, $$
(10)

in \(\mathbb {R}\) as \(n\rightarrow \infty \). Then we can find n1 > 0 and Δ > 0 such that, for any n > n1 and all i ∈{1,...,m},

$$ \max\Big\{\frac{1}{n}\lambda^{n}_{i,*},\mu^{n}_{i,*},\sigma^{n}_{i,*}\Big\}<{\Delta}. $$
(11)

We fix this \(n_{1}\) and Δ throughout the proof. We start by proving the convergence of \({\hat {\boldsymbol U}}^{n}\). For this we require the following auxiliary result, which is a direct extension of Lemma 2.2 of Pang and Zheng (2017).

Lemma 2

Let \(z_{1,1}, z_{1,2},...,z_{m,n-1}, z_{m,n}\) and \(w_{1,1}, w_{1,2},...,w_{m,n-1}, w_{m,n}\) be complex numbers of modulus at most b. Then

$$\Bigg|\prod\limits_{i=1}^{m}\prod\limits_{j=1}^{n} z_{i,j}-\prod\limits_{i=1}^{m}\prod\limits_{j=1}^{n} w_{i,j}\Bigg|\leq b^{m-1}\sum\limits_{i=1}^{m}\Bigg|\prod\limits_{j=1}^{n} z_{i,j}-\prod\limits_{j=1}^{n} w_{i,j}\Bigg|\leq b^{m+n-2}\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n}|z_{i,j}-w_{i,j}|$$
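
Both inequalities rest on the classical telescoping bound for a single product (cf. Lemma 2.2 of Pang and Zheng 2017): for complex numbers \(z_{1},...,z_{n}\) and \(w_{1},...,w_{n}\) of modulus at most b,

$$\Bigg|\prod\limits_{j=1}^{n} z_{j}-\prod\limits_{j=1}^{n} w_{j}\Bigg|=\Bigg|\sum\limits_{j=1}^{n} z_{1}{\cdots} z_{j-1}\left( z_{j}-w_{j}\right)w_{j+1}{\cdots} w_{n}\Bigg|\leq b^{n-1}\sum\limits_{j=1}^{n}|z_{j}-w_{j}|,$$

since each of the remaining n − 1 factors has modulus at most b. In the application below the relevant factors have modulus at most one, so that the lemma is used with b = 1.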

Lemma 3

The finite-dimensional distributions of \(\hat {\boldsymbol U}^{n}=(\hat {U}_{1}^{n},...,\hat {U}_{m}^{n})\) converge to those of \(\hat {\boldsymbol U}\), where \(\hat {\boldsymbol U}:=(\hat {U}_{1},...,\hat {U}_{m})\) with

$$ \hat{\boldsymbol U}:=\left\{\begin{array}{llll}\mathbf{B}^{1}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ \boldsymbol{0}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1); \end{array}\right. $$
(12)

here \(\mathbf {B}^{1}=({B^{1}_{1}},...,{B^{1}_{m}})\) is an m-dimensional Brownian motion with \(\mathbb {E}\left [(\mathbf {B}^{1}(t))(\mathbf {B}^{1}(t))^{\top }\right ]=\bar {\Sigma }^{1}t\), where \(\bar {\Sigma }^{1}\) has been defined in Section 3.1.

Proof

We need to prove

$$ (\hat{\boldsymbol U}^{n}(t_{1}),...,\hat{\boldsymbol U}^{n}(t_{k}))\Rightarrow(\hat{\boldsymbol U}(t_{1}),...,\hat{\boldsymbol U}(t_{k})) \ \text{in} \ \mathbb{R}^{m\times k} \ \text{as} \ n\rightarrow\infty, $$
(13)

for any 0 ≤ t1 ≤⋯ ≤ tk ≤ T and k ≥ 1. We first consider the case of a single point in time: we aim to prove that, for each t ≥ 0,

$$\hat{\boldsymbol U}^{n}(t)\Rightarrow\hat{\boldsymbol U}(t) \ \text{in} \ \mathbb{R}^{m} \ \text{as} \ n\rightarrow\infty.$$

By Lévy’s continuity theorem on \(\mathbb {R}^{m}\) (Kallenberg 1997, Thm. 4.3), it is sufficient to show convergence of the characteristic functions: we have to prove that, as \(n\to \infty \),

$$ {\psi_{t}^{n}}(\boldsymbol{\theta}):=\mathbb{E}\left[e^{i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}^{n}(t)}\right]\to \psi_{t}(\boldsymbol{\theta}):=\mathbb{E}\left[e^{i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}(t)}\right]$$

for every \(\boldsymbol {\theta }\in \mathbb {R}^{m}\). By the definition of \(\hat {\boldsymbol U}\) in Eq. 12,

$$ \psi_{t}(\boldsymbol{\theta}):=\mathbb{E}\left[e^{i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}(t)}\right]=\left\{\begin{array}{llll}\exp\left( -\frac{1}{2}\boldsymbol{\theta}^{T}\bar{\Sigma}^{1}\boldsymbol{\theta} t\right), & \delta=\frac{1}{2}, \ \alpha\geq 1\\ 1, & \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1). \end{array}\right. $$
(14)

Let \(\mathcal {A}^{n}_{t}:=\sigma \{\mathbf {A}^{n}(s):0\leq s\leq t\}\vee \sigma \{J^{n}(s):0\leq s\leq t\}\vee \mathcal {N}\), where \(\mathcal {N}\) is the collection of P-null sets. Then, by conditioning, we obtain

$$ \begin{array}{@{}rcl@{}} {\psi_{t}^{n}}(\boldsymbol{\theta})&=&\mathbb{E}\left[\exp\left( i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}^{n}(t)\right)\right]=\mathbb{E}\left[\mathbb{E}\left[\exp\left( i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}^{n}(t)\right)\big|\mathcal{A}^{n}_{t}\right]\right]\\ &=&\mathbb{E}\left[\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\mathbb{E}\left[\exp\left( i\theta_{i}\frac{1}{n^{\delta}}\left( Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}\right)\right)\big|\mathcal{A}^{n}_{t}\right]\right]\\ &=&\mathbb{E}\left[\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\left( 1-\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}+o(n^{-2\delta})\right)\right] \end{array} $$
(15)
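
The last step uses the standard second-order expansion of a characteristic function: for a random variable X with \(\mathbb{E}[X]=0\) and \(\mathbb{E}[X^{2}]=\sigma^{2}<\infty\),

$$\mathbb{E}\left[e^{i\theta X}\right]=1-\frac{\theta^{2}}{2}\sigma^{2}+o(\theta^{2}), \ \ \text{as} \ \theta\rightarrow 0,$$

applied, conditionally on \(\mathcal{A}^{n}_{t}\), to \(X=Z_{i,k}^{n}(j)-\mu^{n}_{i,j}\) with \(j=J^{n}(\tau_{i,k}^{n})\) and θ replaced by \(\theta_{i}/n^{\delta}\); this uses that the claim sizes have finite second moments (the \((\sigma_{i,j}^{n})^{2}\) above).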

By Eq. 10, we can find n2 such that for any n > n2 and all i ∈{1,...,m},

$$0<\max\limits_{1\leq k\leq {A^{n}_{i}}(t)}\left\{\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}-o\left( n^{-2\delta}\right)\right\}<1.$$

Furthermore, recall the definition of n1 in Eq. 11. Then, for \(\delta =\frac {1}{2}\), α ≥ 1 and for any

$$ n>n_{3}:=\max\{n_{1},n_{2}\}, $$
(16)

we have

$$ \begin{array}{@{}rcl@{}} \Big|{\psi_{t}^{n}}(\boldsymbol{\theta})-\psi_{t}(\boldsymbol{\theta})\Big|&\leq& \mathbb{E}\Bigg[\Bigg|\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\left( 1-\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}+o(n^{-1})\right) \\ &&\qquad-\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\exp\left( -\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\Bigg|\Bigg]\\ &&+\Bigg|\mathbb{E}\left[\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\right]-\exp\left( -\sum\limits_{i=1}^{m}\frac{{\theta_{i}^{2}}}{2}\bar{\sigma}_{i}^{2}t\right)\Bigg|\\ &\leq & \mathbb{E}\left[\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{4}}}{4n^{2}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{4}\right]+o(1)\\ &&+\Bigg|\mathbb{E}\left[\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\right]-\exp\left( -\sum\limits_{i=1}^{m}\frac{{\theta_{i}^{2}}}{2}\bar{\sigma}_{i}^{2}t\right)\Bigg|\\ &\rightarrow& \ 0 \ \text{as} \ n\rightarrow\infty; \end{array} $$
(17)

here, the first inequality is due to the triangle inequality and the second inequality follows by Lemma 2 above in combination with Lemma 2.3 of Pang and Zheng (2017). By Eq. 11, for \(n>n_{3}\) as defined in Eq. 16, we have

$$\mathbb{E}\left[\frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{4}\right]\leq{\Delta}^{5} t, \ \forall i, \ t\geq 0.$$
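
One way to verify this bound: as \({A_{i}^{n}}(t)-{{\int}_{0}^{t}}\lambda_{i,J^{n}(u)}^{n} \mathrm{d}u\) is a mean-zero martingale (see also the proof of Lemma 5 below), \(\mathbb{E}[{A_{i}^{n}}(t)]\leq\lambda^{n}_{i,*}t\), and hence, for \(n>n_{1}\),

$$\mathbb{E}\left[\frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{4}\right]\leq\frac{(\sigma^{n}_{i,*})^{4}}{n}\,\mathbb{E}\left[{A_{i}^{n}}(t)\right]\leq(\sigma^{n}_{i,*})^{4}\,\frac{\lambda^{n}_{i,*}}{n}\, t<{\Delta}^{5} t,$$

by Eq. 11.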

As a result, the first two terms on the right-hand side of Eq. 17 converge to 0 as \(n\rightarrow \infty \). For the convergence of the last term, since the sequence

$$\left\{\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right):n\geq 1\right\}$$

is bounded for each t ≥ 0, it suffices to show that, for all i ∈{1,...,m},

$$ \frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\Rightarrow\bar{\sigma}_{i}^{2} t, \ \ \text{in} \ \mathbb{R} \ \text{as} \ n\rightarrow\infty. $$
(18)

This follows from the convergences

$$\sum\limits_{j=1}^{I} \frac{\lambda_{i,j}^{n}}{n}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=j) \mathrm{d}s\rightarrow\sum\limits_{j=1}^{I}\lambda_{i,j}\pi_{j}t \ \text{a.s.},$$

and

$$\sum\limits_{j=1}^{I} \frac{\lambda_{i,j}^{n}}{n}(\sigma_{i,j}^{n})^{2}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=j) \mathrm{d}s\rightarrow\sum\limits_{j=1}^{I}\lambda_{i,j}\sigma_{i,j}^{2}\pi_{j}t \ \text{a.s.}$$

by claim (4) in Anderson et al. (2016), the weak law of large numbers for Poisson processes, and the ‘random change of time lemma’ (Billingsley 1999, p. 151).
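
To make these almost-sure limits concrete, the following minimal Monte Carlo sketch (in Python) illustrates the second one for a hypothetical two-state environment; all parameter values, as well as the choices \(\lambda^{n}_{i,j}=n\lambda_{i,j}\), \(\sigma^{n}_{i,j}=\sigma_{i,j}\) and α = 1, are purely illustrative and are not taken from the paper's numerical experiments.

```
import numpy as np

# Monte Carlo illustration of the almost-sure limit
#   sum_j (lambda_{i,j}^n / n) * (sigma_{i,j}^n)^2 * int_0^t 1(J^n(s)=j) ds
#     -> sum_j lambda_{i,j} * sigma_{i,j}^2 * pi_j * t,
# for a hypothetical two-state environment (illustrative parameters only).
rng = np.random.default_rng(seed=1)

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])            # generator of the background chain J
pi = np.array([2.0 / 3.0, 1.0 / 3.0])  # its stationary distribution (pi Q = 0)
lam = np.array([3.0, 5.0])             # limiting arrival rates lambda_{i,j}
sig2 = np.array([1.0, 4.0])            # limiting variances sigma_{i,j}^2
t = 10.0

def occupation_times(n):
    """Time spent in each state by J^n (generator n*Q, i.e. alpha = 1) on [0, t]."""
    occ, s, j = np.zeros(2), 0.0, 0
    while s < t:
        hold = rng.exponential(1.0 / (n * (-Q[j, j])))  # exponential holding time
        d = min(hold, t - s)
        occ[j] += d
        s += d
        j = 1 - j                       # two states: always jump to the other one
    return occ

limit = float(np.sum(lam * sig2 * pi) * t)
for n in (10, 100, 1000, 10000):
    occ = occupation_times(n)
    # (lambda^n_j / n) * sigma_j^2 * occupation time of state j, summed over j;
    # with lambda^n = n * lambda the factors n cancel.
    print(n, float(np.sum(lam * sig2 * occ)), limit)
```

As n grows, the printed estimates approach the limiting value, reflecting the fast averaging of the environment.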

For \(\delta =1-\frac {\alpha }{2}\) and α ∈ (0, 1), we follow the same line of reasoning and prove

$$\Bigg|\mathbb{E}\left[\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\right]-1\Bigg|\rightarrow 0 \ \text{as} \ n\rightarrow\infty.$$

Thereby, we have shown Eq. 13 for a single time epoch (k = 1).

To show the convergence of the finite-dimensional distributions, it is sufficient to prove that for any \((\boldsymbol {\theta }^{1},...,\boldsymbol {\theta }^{l})\in \mathbb {R}^{m\times l}\) and 0 ≤ t1 < ⋯ < tl ≤ T,

$$\mathbb{E}\left[\exp\left( i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}^{n}(t_{k})\right)\right]\rightarrow\mathbb{E}\left[\exp\left( i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}(t_{k})\right)\right] \ \text{as} \ n\rightarrow\infty.$$

By the definition of \(\hat {\boldsymbol U}\), we have

$$ \begin{array}{@{}rcl@{}} &&\mathbb{E}\left[\exp\left( i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}(t_{k})\right)\right]\\ &=& \left\{\begin{array}{llll}{\displaystyle \exp\left( -\frac{1}{2}\sum\limits_{k_{1}=1}^{l}\sum\limits_{k_{2}=1}^{l}(\boldsymbol{\theta}^{k_{1}})^{\top}\bar{\Sigma}^{1}\boldsymbol{\theta}^{k_{2}} (t_{k_{1}}\wedge t_{k_{2}})\right)}, & \delta=\frac{1}{2}, \ \alpha\geq 1\\ 1, & \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1). \end{array}\right. \end{array} $$

By conditioning and direct calculation as in Eq. 15, we have (with \(t_{0}:=0\))

$$ \begin{array}{@{}rcl@{}} &&\mathbb{E}\bigg[\exp\bigg(i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}^{n}(t_{k})\bigg)\bigg]\\&=& \mathbb{E}\left[\prod\limits_{j=1}^{l}\prod\limits_{i=1}^{m}\exp\left( i\frac{1}{n^{\delta}}\sum\limits_{k=j}^{l}{\theta^{k}_{i}}\sum\limits_{h={A_{i}^{n}}(t_{j-1})+1}^{{A_{i}^{n}}(t_{j})}\left( Z_{i,h}^{n}(J^{n}(\tau_{i,h}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,h}^{n})}\right)\right)\right]\\ &&\rightarrow\left\{\begin{array}{llll}{\prod}_{j=1}^{l}{\prod}_{i=1}^{m}\exp\left( -\frac{1}{2}\left( {\sum}_{k=j}^{l} {\theta^{k}_{i}}\right)^{2}\bar{\sigma}_{i}^{2}(t_{j}-t_{j-1})\right), & \delta=\frac{1}{2}, \ \alpha\geq 1\\ 1, & \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1), \end{array}\right. \end{array} $$

as \(n\rightarrow \infty \), and

$$ \begin{array}{@{}rcl@{}} &&\prod\limits_{j=1}^{l}\prod\limits_{i=1}^{m}\exp\left( -\frac{1}{2}\left( \sum\limits_{k=j}^{l} {\theta^{k}_{i}}\right)^{2}\bar{\sigma}_{i}^{2}(t_{j}-t_{j-1})\right)\\ &=&\exp\left( -\frac{1}{2}\sum\limits_{k_{1}=1}^{l}\sum\limits_{k_{2}=1}^{l}(\boldsymbol{\theta}^{k_{1}})^{\top}\bar{\Sigma}^{1}\boldsymbol{\theta}^{k_{2}} (t_{k_{1}}\wedge t_{k_{2}})\right). \end{array} $$
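
For concreteness, the last identity can be checked directly in the simplest case m = 1, l = 2 (with \(\bar{\Sigma}^{1}=\bar{\sigma}_{1}^{2}\) scalar and \(t_{0}:=0\)):

$$\left( \theta^{1}+\theta^{2}\right)^{2}\bar{\sigma}_{1}^{2}t_{1}+\left( \theta^{2}\right)^{2}\bar{\sigma}_{1}^{2}(t_{2}-t_{1})=\left( \left( \theta^{1}\right)^{2}t_{1}+2\theta^{1}\theta^{2}t_{1}+\left( \theta^{2}\right)^{2}t_{2}\right)\bar{\sigma}_{1}^{2}=\sum\limits_{k_{1}=1}^{2}\sum\limits_{k_{2}=1}^{2}\theta^{k_{1}}\bar{\Sigma}^{1}\theta^{k_{2}}(t_{k_{1}}\wedge t_{k_{2}}),$$

with the general case following by expanding the squares in the same way.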

By applying Lévy’s continuity theorem (now on \(\mathbb {R}^{m\times l}\)), the convergence can be shown in a way similar to Eqs. 15 and 17. Therefore, we have proven the weak convergence of the finite-dimensional distributions. □

Lemma 4

\(\hat {\boldsymbol U}^{n}\Rightarrow \hat {\boldsymbol U}\) in \(\mathbb {D}^{m}\) as \(n\rightarrow \infty \), where \(\hat {\boldsymbol U}\) is given in Eq. 12.

Proof

Marginal tightness of the \(\hat {U}_{i}^{n}\) has been proven by Pang and Zheng (2017, Lemma 2.5), which implies joint tightness of \(\hat {\boldsymbol U}^{n}\) (Kosorok 2008, Lemma 7.14(i)). Together with the continuity of \(\hat {\boldsymbol U}\) and the finite-dimensional convergence of Lemma 3, we can apply Thm. 13.1 of Billingsley (1999) to conclude the convergence of \(\hat {\boldsymbol U}^{n}\). □

Lemma 5

\(\hat {\boldsymbol V}^{n}\Rightarrow \hat {\boldsymbol V}\) in \(\mathbb {D}^{m}\) as \(n\rightarrow \infty \), where \(\hat {\boldsymbol V}\) is given by

$$ \hat{\boldsymbol V}:=\left\{\begin{array}{llll} \mathbf{B}^{2}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ \mathbf{0}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1); \end{array}\right. $$
(19)

here \(\mathbf {B}^{2}:=({B_{1}^{2}},...,{B_{m}^{2}})\) is an m-dimensional zero-mean Brownian motion with \(\mathbb {E}\left [(\mathbf {B}^{2}(t))(\mathbf {B}^{2}(t))^{\top }\right ]=\bar {\Sigma }^{2}t\), where \(\bar {\Sigma }^{2}\) has been defined in Section 3.1.

Proof

As centered Poisson processes are \(\mathbb {R}\)-valued martingales, i.e., for each \(n\in \mathbb {N}\) and every i ∈{1,...,m} the process

$$\left\{{A_{i}^{n}}(t)-{{\int}_{0}^{t}}\lambda_{i,J^{n}(u)}^{n} \mathrm{d}u:t\geq 0\right\}$$

is a martingale, \(\hat {\boldsymbol V}^{n}\) is an \(\mathbb {R}^{m}\)-valued martingale. The maximum jump of \(\hat {V}_{i}^{n}\) is \(\mu _{i,*}^{n}/n^{\delta }\). By Eq. 1, we obtain that the expected value of this maximum jump is asymptotically negligible, i.e., for all i ∈{1,...,m},

$$\frac{1}{n^{\delta}}\mathbb{E}\left[\mu_{i,*}^{n}\right]\rightarrow 0, \ \text{as} \ n\rightarrow\infty.$$

For \(n\in \mathbb {N}\), let \(\{[\hat {V}_{i}^{n},\hat {V}_{j}^{n}](t):t\geq 0\}\) be the quadratic covariation process of \(\hat {V}_{i}^{n}\) and \(\hat {V}_{j}^{n}\). Then, for each t, we have, by the quadratic variation of a compound Poisson process, as \(n\to \infty \),

$$ \begin{array}{@{}rcl@{}} [\hat{V}_{i}^{n},\hat{V}_{j}^{n}](t)&=&\frac{1}{n^{2\delta}}\left\{\begin{array}{llll} {\sum}_{k=1}^{{A_{i}^{n}}(t)}(\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})})^{2} & \text{for} \ i=j,\\ 0, & \text{for} \ i\neq j, \end{array}\right. \end{array} $$
(20)
$$ \begin{array}{@{}rcl@{}}&\Rightarrow&\left\{\begin{array}{llll}\bar{\Sigma}_{i,j}^{2} t, & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ 0, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1), \end{array}\right. \ \end{array} $$
(21)

in \(\mathbb {R}^{m\times m}\), where the convergence is proven in the same way as Eq. 18. Applying Thm. 2.1 of Whitt (2007), we have shown the convergence of \(\hat {\boldsymbol V}^{n}\). □
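
In particular, exactly as in Eq. 18, the diagonal terms in Eq. 20 satisfy, for \(\delta=\frac{1}{2}\) and α ≥ 1,

$$\frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})})^{2}\Rightarrow\sum\limits_{j=1}^{I}\lambda_{i,j}\mu_{i,j}^{2}\pi_{j} t \ \ \text{in} \ \mathbb{R} \ \text{as} \ n\rightarrow\infty,$$

which suggests that the diagonal entries of \(\bar{\Sigma}^{2}\) are \({\sum }_{j}\lambda_{i,j}\mu_{i,j}^{2}\pi_{j}\) (we refer to Section 3.1 for the precise definition), while the off-diagonal covariations in Eq. 20 vanish identically.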

Lemma 6

\(\hat {\boldsymbol W}^{n}\Rightarrow \hat {\boldsymbol W}\) in \(\mathbb {D}^{m}\) as \(n\rightarrow \infty \), where the limit process \(\hat {\boldsymbol W}:=\{\hat {\boldsymbol W}(t):t\geq 0\}\) is given by

$$ \hat{\boldsymbol W}:=\left\{\begin{array}{llll}\mathbf{0}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha> 1,\\ \mathbf{B}^{3}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1]; \end{array}\right. $$
(22)

here \(\mathbf {B}^{3}=({B_{1}^{3}},...,{B_{m}^{3}})\) is an m-dimensional Brownian motion with \(\mathbb {E}\left [(\mathbf {B}^{3}(t))(\mathbf {B}^{3}(t))^{\top }\right ]=\bar {\Sigma }^{3}t\), where \(\bar {\Sigma }^{3}\) has been defined in Section 3.1.

Proof

Let \(\bar {{\boldsymbol W}}^{n}:=(\bar {W}_{1}^{n},...,\bar {W}_{m}^{n})\) with, for i = 1,…,m,

$$\bar{W}_{i}^{n}(t):=\frac{1}{n^{\delta}}\left( \sum\limits_{k=1}^{I}\mu_{i,k}^{n}\lambda_{i,k}^{n}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=k) \mathrm{d}s-\sum\limits_{k=1}^{I}\mu_{i,k}^{n}\lambda_{i,k}^{n}\pi_{k} t \right).$$
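
Note that \(\bar{W}_{i}^{n}\) is simply a rewriting of \(\hat{W}_{i}^{n}\): splitting the integral in the definition of \(\hat{W}_{i}^{n}\) according to the state of the environment gives

$${{\int}_{0}^{t}} \mu^{n}_{i,J^{n}(s)}\lambda^{n}_{i,J^{n}(s)}\mathrm{d}s=\sum\limits_{k=1}^{I}\mu_{i,k}^{n}\lambda_{i,k}^{n}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=k) \mathrm{d}s,$$

so that \(\bar{{\boldsymbol W}}^{n}=\hat{\boldsymbol W}^{n}\).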

By Prop. 3.2 of Anderson et al. (2016), we have, as \(n\rightarrow \infty \),

$$\bar{{\boldsymbol W}}^{n}\Rightarrow\left\{\begin{array}{llll}\mathbf{0}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha> 1,\\ {\boldsymbol B}^{3}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1], \end{array}\right.$$

with \(\mathbf {B}^{3}\) as defined above. As \(\bar {{\boldsymbol W}}^{n}\) coincides with \(\hat {\boldsymbol W}^{n}\), this concludes the proof. □

Proof of Theorem 1

Lemma 1 provides the decomposition, and by Lemmas 4, 5 and 6 we have obtained the marginal convergences \(\hat {\boldsymbol U}^{n}\Rightarrow \hat {\boldsymbol U}\), \(\hat {\boldsymbol V}^{n}\Rightarrow \hat {\boldsymbol V}\) and \(\hat {\boldsymbol W}^{n}\Rightarrow \hat {\boldsymbol W}\). It remains to prove the joint convergence

$$\left( \hat{\boldsymbol U}^{n},\hat{\boldsymbol V}^{n},\hat{\boldsymbol W}^{n}\right)\Rightarrow \left( \hat{\boldsymbol U},\hat{\boldsymbol V},\hat{\boldsymbol W}\right)$$

where \(\hat {\boldsymbol U}\), \(\hat {\boldsymbol V}\) and \(\hat {\boldsymbol W}\) are mutually independent. To this end, we first note that \(\hat {\boldsymbol U}^{n}\) and \(\hat {\boldsymbol V}^{n}\) are compensated compound Poisson processes and hence martingales. Furthermore, by Anderson et al. (2016, Lemma 3.1), the process \(\hat {\boldsymbol W}^{n}\) is a martingale.

By Jacod and Shiryaev (2002, Ch. VIII, Thm. 3.12), of which a slightly less extensive version is given in Aldous et al. (1985, Cor. 2.17, p. 264), it suffices to show that, for \({\boldsymbol M}^{n}:=(\hat {\boldsymbol U}^{n},\hat {\boldsymbol V}^{n},\hat {\boldsymbol W}^{n})\) and \({\boldsymbol M}:=(\hat {\boldsymbol U},\hat {\boldsymbol V},\hat {\boldsymbol W})\),

$$ \left[{\boldsymbol M}^{n},{\boldsymbol M}^{n}\right](t)\Rightarrow \left[\begin{array}{llll} \hat{\Sigma}^{1} & \textbf{0} & \textbf{0} \\ \textbf{0} & \hat{\Sigma}^{2} & \textbf{0} \\ \textbf{0} & \textbf{0} & \hat{\Sigma}^{3} \end{array}\right] t, $$
(23)

where

$$(\hat{\Sigma}^{1},\hat{\Sigma}^{2}) = \left\{\begin{array}{llll}(\bar{\Sigma}^{1},\bar{\Sigma}^{2}), & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ (\mathbf{0},\mathbf{0}), & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1), \end{array}\right. \hat{\Sigma}^{3}=\left\{\begin{array}{llll}\mathbf{0}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha> 1,\\ \bar{\Sigma}^{3}, & \text{if \ } \delta=1 - \frac{\alpha}{2}, \ \alpha\!\in\!(0,1]. \end{array}\right.$$

For \(\alpha \in (0,\infty )\) and \(\delta \in [\frac {1}{2},1)\),

$$ \begin{array}{@{}rcl@{}} \left[\hat{U}_{i}^{n},\hat{V}_{j}^{n}\right](t)&=&\frac{1}{n^{2\delta}}\left\{\begin{array}{llll} {\sum}_{k=1}^{{A_{i}^{n}}(t)}\left( Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}\right)\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})} & \text{for} \ i=j,\\0, & \text{for} \ i\neq j, \end{array}\right. \\ &\Rightarrow &0 \ \text{in} \ \mathbb{R}^{m\times m}, \ \text{as} \ n\rightarrow\infty. \end{array} $$
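
One way to verify this convergence (for the diagonal case i = j): conditionally on \(\mathcal{A}^{n}_{t}\) the summands are centered, so the covariation has mean zero, and, using the conditional independence of the claim sizes as in Eq. 15, for \(n>n_{1}\),

$$\operatorname{Var}\left( \left[\hat{U}_{i}^{n},\hat{V}_{i}^{n}\right](t)\right)=\frac{1}{n^{4\delta}}\mathbb{E}\left[\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}(\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})})^{2}\right]\leq{\Delta}^{4}\,\frac{\mathbb{E}\left[{A_{i}^{n}}(t)\right]}{n^{4\delta}}\leq{\Delta}^{5}\, t\, n^{1-4\delta}\rightarrow 0,$$

since \(\delta\geq\frac{1}{2}\); Chebyshev’s inequality then yields the stated convergence.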

Together with \(\big [\hat {U}_{i}^{n},\hat {W}_{j}^{n}\big ](t)=0\) and \(\big [\hat {V}_{i}^{n},\hat {W}_{j}^{n}\big ](t)=0\), which hold since \(\hat {\boldsymbol W}^{n}\) has continuous paths of finite variation, this proves Eq. 23. The proof of Theorem 1 is completed by applying the continuous mapping theorem to the decomposition of Lemma 1. □

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Delsing, G.A., Mandjes, M.R.H., Spreij, P.J.C. et al. Asymptotics and Approximations of Ruin Probabilities for Multivariate Risk Processes in a Markovian Environment. Methodol Comput Appl Probab 22, 927–948 (2020). https://doi.org/10.1007/s11009-019-09742-4

Keywords

  • Ruin probability
  • Insurance risk
  • Markov processes
  • Approximations
  • Multi-dimensional risk process

Mathematics Subject Classification (2010)

  • 91B30