
Deterministic shock vs. stochastic value-at-risk — an analysis of the Solvency II standard model approach to longevity risk

Original Research Paper, Blätter der DGVFM

Abstract

In general, the capital requirement under Solvency II is determined as the 99.5% Value-at-Risk of the Available Capital. In the standard model’s longevity risk module, this Value-at-Risk is approximated by the change in Net Asset Value due to a pre-specified longevity shock which assumes a 25% reduction of mortality rates for all ages.

We analyze the adequacy of this shock by comparing the resulting capital requirement to the Value-at-Risk based on a stochastic mortality model. This comparison reveals structural shortcomings of the 25% shock, and we therefore propose a modified longevity shock for the Solvency II standard model.

We also discuss the properties of different Risk Margin approximations and find that they can yield significantly different values. Moreover, we explain how the Risk Margin may relate to market prices for longevity risk and, based on this relation, we comment on the calibration of the cost of capital rate and make inferences on prices for longevity derivatives.

Zusammenfassung (Summary)

Capital requirements under Solvency II are generally determined as the 99.5% Value-at-Risk of the Available Capital. In the standard model, however, this Value-at-Risk is approximated for longevity risk by the change in Net Asset Value resulting from a longevity shock. This shock is a permanent 25% reduction of mortality rates for all ages.

We analyze the adequacy of this shock by comparing the resulting capital requirements with the Value-at-Risk under a stochastic mortality model. In doing so, we observe structural weaknesses of the 25% shock and therefore propose a modified shock for the Solvency II standard model.

In the second part of the article, we compare different approximations for the Risk Margin and find that they can lead to significantly different values. Finally, we discuss the relation between the Risk Margin and financial market prices for longevity risk, which we use to analyze the current choice of the cost of capital rate and to draw conclusions on possible prices for longevity derivatives.


References

  1. Bauer D, Börger M, Ruß J, Zwiesler H-J (2008) The volatility of mortality. Asia-Pac J Risk Insur 3:184–211
  2. Bauer D, Börger M, Ruß J (2010) On the pricing of longevity-linked securities. Insur Math Econ 46:139–149
  3. Bauer D, Bergmann D, Reuß A (2010) On the calculation of the solvency capital requirement based on nested simulations. Working Paper, Georgia State University and Ulm University
  4. Biffis E (2005) Affine processes for dynamic mortality and actuarial valuations. Insur Math Econ 37:443–468
  5. Booth H, Tickle L (2008) Mortality modelling and forecasting: a review of methods. Ann Actuar Sci 3:3–41
  6. Booth H, Maindonald J, Smith L (2002) Applying Lee-Carter under conditions of variable mortality decline. Popul Stud 56:325–336
  7. Cairns A, Blake D, Dowd K (2006) A two-factor model for stochastic mortality with parameter uncertainty: theory and calibration. J Risk Insur 73:687–718
  8. Cairns A, Blake D, Dowd K (2006) Pricing death: frameworks for the valuation and securitization of mortality risk. ASTIN Bull 36:79–120
  9. Cairns A, Blake D, Dowd K (2008) Modelling and management of mortality risk: a review. Scand Actuar J 2:79–113
  10. Cairns A, Blake D, Dowd K, Coughlan G, Epstein D, Ong A, Balevich I (2009) A quantitative comparison of stochastic mortality models using data from England & Wales and the United States. North Am Actuar J 13:1–35
  11. CEIOPS (2007) QIS3 Calibration of the underwriting risk, market risk and MCR. Available at: http://www.ceiops.eu/media/files/consultations/QIS/QIS3/QIS3CalibrationPapers.pdf
  12. CEIOPS (2008) QIS4 technical specifications. Available at: http://www.ceiops.eu/media/docman/Technical%20Specifications%20QIS4.doc
  13. CEIOPS (2008) QIS4 term structures. Available at: http://www.ceiops.eu/media/docman/public_files/consultations/QIS/CEIOPS-DOC-23-08%20rev%20QIS4%20Term%20Structures%2020080507.xls
  14. CEIOPS (2008) CEIOPS’ report on its fourth quantitative impact study (QIS4) for Solvency II. Available at: http://www.ceiops.eu/media/files/consultations/QIS/CEIOPS-SEC-82-08%20QIS4%20Report.pdf
  15. CEIOPS (2009) Consultation Paper No 49, Draft CEIOPS’ advice for level 3 implementing measures on Solvency II: standard formula SCR—article 109 c life underwriting risk. Available at: http://www.ceiops.eu/media/files/consultations/consultationpapers/CP49/CEIOPS-CP-49-09-L2-Advice-Standard-Formula-Life-Underwriting-risk.pdf
  16. CEIOPS (2009) Consultation Paper No 42, Draft CEIOPS’ advice for level 2 implementing measures on Solvency II: article 85(d)—calculation of the risk margin. Available at: http://www.ceiops.eu/media/files/consultations/consultationpapers/CP42/CEIOPS-CP-42-09-L2-Advice-TP-Risk-Margin.pdf
  17. CEIOPS (2010) Solvency II calibration paper. Available at: http://www.ceiops.eu/media/files/publications/submissionstotheec/CEIOPS-Calibration-paper-Solvency-II.pdf
  18. Chan W, Li S, Cheung S (2008) Testing deterministic versus stochastic trends in the Lee-Carter mortality indexes and its implication for projecting mortality improvements at advanced ages. Available at: http://www.soa.org/library/monographs/retirement-systems/living-to-100-and-beyond/2008/january/mono-li08-6a-chan.pdf
  19. Continuous Mortality Investigation (CMI) (2009) Working Paper 37—Version 1.1 of the CMI library of mortality projections. Available at: www.actuaries.org.uk
  20. Cox S, Lin Y, Pedersen H (2009) Mortality risk modeling: applications to insurance securitization. Insur Math Econ 46:242–253
  21. Dahl M (2004) Stochastic mortality in life insurance: market reserves and mortality-linked insurance contracts. Insur Math Econ 35:113–136
  22. Devineau L, Loisel S (2009) Risk aggregation in Solvency II: how to converge the approaches of the internal models and those of the standard formula? Bull Fr Actuar 18:107–145
  23. Doff R (2008) A critical analysis of the Solvency II proposals. Geneva Pap Risk Insur Issues Pract 33:193–206
  24. Dowd K, Cairns A, Blake D (2006) Mortality-dependent financial risk measures. Insur Math Econ 38:427–440
  25. Duffie D, Skiadas C (1994) Continuous-time security pricing: a utility gradient approach. J Math Econ 23:107–131
  26. Eling M, Schmeiser H, Schmit J (2007) The Solvency II process: overview and critical analysis. Risk Manag Insur Rev 10:69–85
  27. European Commission (2010) QIS5 technical specifications. Available at: http://www.ceiops.eu/media/files/consultations/QIS/QIS5/QIS5-technical_specifications_20100706.pdf
  28. Grimshaw D (2007) Mortality projections. Presentation at the Current Issues in Life Assurance seminar, 2007
  29. Hanewald K (2009) Mortality modeling: Lee-Carter and the macroeconomy. Working Paper, Humboldt University Berlin
  30. Hari N, De Waegenaere A, Melenberg B, Nijman T (2008) Longevity risk in portfolios of pension annuities. Insur Math Econ 42:505–519
  31. Hari N, De Waegenaere A, Melenberg B, Nijman T (2008) Estimating the term structure of mortality. Insur Math Econ 42:492–504
  32. Harrison M, Kreps D (1979) Martingales and arbitrage in multiperiod security markets. J Econ Theory 20:381–408
  33. Haslip G (2008) Risk assessment. The Actuary, Dec 2008
  34. Holzmüller I (2009) The United States RBC standards, Solvency II and the Swiss solvency test: a comparative assessment. Geneva Pap Risk Insur Issues Pract 34:56–77
  35. Human Mortality Database (2009) University of California, Berkeley, USA, and Max Planck Institute for Demographic Research, Germany. Available at: www.mortality.org (Data downloaded on 12/04/2009)
  36. Karatzas I, Shreve S (1991) Brownian motion and stochastic calculus. Graduate texts in mathematics, vol 113. Springer, New York
  37. Lee R, Carter L (1992) Modeling and forecasting US mortality. J Am Stat Assoc 87:659–671
  38. Lee R, Miller T (2001) Evaluating the performance of the Lee-Carter method for forecasting mortality. Demography 38:537–549
  39. Lin Y, Cox S (2005) Securitization of mortality risks in life annuities. J Risk Insur 72:227–252
  40. Loeys J, Panigirtzoglou N, Ribeiro R (2007) Longevity: a market in the making. JPMorgan Global Market Strategy
  41. Milidonis A, Lin Y, Cox S (2010) Mortality regimes and pricing. North Am Actuar J, to appear
  42. Miltersen K, Persson S (2005) Is mortality dead? Stochastic force of mortality determined by no arbitrage. Working Paper, Norwegian School of Economics and Business Administration, Bergen and Copenhagen Business School
  43. Olivieri A (2009) Stochastic mortality: experience-based modeling and application issues consistent with Solvency 2. Working Paper, University of Parma
  44. Olivieri A, Pitacco E (2008) Solvency requirements for life annuities: some comparisons. Working Paper, University of Parma and University of Trieste
  45. Olivieri A, Pitacco E (2008) Stochastic mortality: the impact on target capital. CAREFIN Working Paper, University Bocconi
  46. Olivieri A, Pitacco E (2008) Assessing the cost of capital for longevity risk. Insur Math Econ 42:1013–1021
  47. Plat R (2009) Stochastic portfolio specific mortality and the quantification of mortality basis risk. Insur Math Econ 45:123–132
  48. Steffen T (2008) Solvency II and the work of CEIOPS. Geneva Pap Risk Insur Issues Pract 33:60–65
  49. Stevens R, De Waegenaere A, Melenberg B (2010) Longevity risk in pension annuities with exchange options: the effect of product design. Insur Math Econ 46:222–234
  50. Sweeting P (2009) A trend-change extension of the Cairns-Blake-Dowd model. Working Paper, The Pensions Institute
  51. Tabeau E, van den Berg Jeths A, Heathcote C (2001) Towards an integration of the statistical, demographic and epidemiological perspectives in forecasting mortality. In: Tabeau E, van den Berg Jeths A, Heathcote C (eds) Forecasting mortality in developed countries. Kluwer Academic, Dordrecht
  52. Thatcher A, Kannisto V, Vaupel J (1998) The force of mortality at ages 80 to 120. In: Odense monographs on population aging, vol 5. Odense University Press, Odense
  53. Vaupel J (1986) How change in age-specific mortality affects life expectancy. Popul Stud 40:147–157
  54. Wilmoth J (1993) Computational methods for fitting and extrapolating the Lee-Carter model of mortality change. Technical Report, University of California, Berkeley


Author information

Correspondence to Matthias Börger.

Additional information

The author is very grateful to Andreas Reuß, Jochen Ruß, Hans-Joachim Zwiesler, and Richard Plat for their valuable comments and support, and to the Continuous Mortality Investigation for the provision of data.

Appendix: Specification and calibration of the BBRZ-model

A.1 Model specification

In general, a forward mortality model in the framework of Sect. 3 is fully specified by the volatility \(\sigma(t,T,x_0)\) and the initial curve \(\mu_0(T,x_0)\). For our purposes, however, the volatility and the T-year survival probabilities are sufficient, where for the latter we refer to Sect. 4. Regarding the volatility, Bauer et al. [1] propose a 6-factor specification of which we use a slightly modified version in this paper. We keep the general functional structure of the volatility vector \(\sigma(t,T,x_0)=(\sigma_1(t,T,x_0),\ldots,\sigma_6(t,T,x_0))\) and only deploy a different functional form for what they refer to as the “correction term”. Instead of a standard Gompertz form, we use a logistic Gompertz form \(\mu_{x}=\frac{\exp{(ax+b)}}{1+\exp{(ax+b)}}+c\) as introduced by Thatcher et al. [52]. In contrast to the standard Gompertz law, this prevents the volatility from becoming unrealistically large for very old ages. Hence, for \(T\geq t\), the volatility vector \(\sigma(t,T,x_0)\) consists of the following components:

For T<t, the volatility obviously has to be zero, since the forward force of mortality for maturity T is already known at time t and hence deterministic.

A.2 Generational mortality tables for model calibration

Since forward mortality models reflect changes in expected future mortality, they need to be calibrated to quantities which contain information on the evolution of expected mortality. Such quantities could, e.g., be market prices of longevity derivatives, market annuity quotes, or generational mortality tables. However, since data on longevity derivatives’ prices and annuity quotes is very sparse and/or blurred by charges, profit margins, etc., generational mortality tables currently seem to be the most appropriate starting point for the calibration of forward mortality models. Hence, Bauer et al. [1, 2] calibrate their model to series of generational mortality tables for US and UK pensioners. However, only 4 and 7 such tables, respectively, are available to them, which is a rather thin data basis for model calibration.

Moreover, it is not clear how well the volatility derived from such historical mortality tables fits the future volatility, since the methodology for constructing mortality projections has changed significantly over the past decades, i.e. from the technique of age shifting to stochastic (spot) mortality models. This transition in projection methods has often led to rather sudden and strong “jumps” in predicted mortality rates from one table to the next, which might result in an unreasonably high volatility in the forward model.

Therefore, we believe that it is more appropriate to calibrate forward mortality models to a series of historical generational mortality tables which have all been constructed using the same, up-to-date projection methodology. In order to build such tables, we use the model of Lee and Carter [37]. With this model, a generational mortality table is constructed for each historical year based on the mortality data which was available in that year. We opt for the Lee-Carter model because it has become a standard model in the literature on mortality forecasting (cf. Booth and Tickle [5]) and because it has also been used by the CMI in the construction of some of their recent projections for UK annuitants and pensioners (cf. CMI [19] and references therein). However, we think that the model choice is not crucial in our setting for two reasons: First, for the calibration of the forward model, we are not interested in absolute values of projected mortality rates but only in their changes over time. Second, the conclusions drawn in this paper with respect to the structure of the longevity stress in the Solvency II standard model should still be valid even if the true volatility in the forward model were generally overestimated or underestimated.

In the Lee-Carter model, the (central) mortality rate m(x,t) for age x in year t is parameterized as

$$\log{\left\{m(x,t)\right\}}=\alpha_x+\beta_x\cdot\kappa_t+\epsilon_{x,t},$$

where the \(\alpha_x\) describe the average level of (log) mortality for each age x over time, \(\beta_x\) specifies the magnitude of changes in mortality for age x relative to other ages, \(\epsilon_{x,t}\) is an age- and time-dependent normally distributed error term, and \(\kappa_t\) is the time trend (often also referred to as the mortality index). The latter is usually assumed to follow an ARIMA(0, 1, 0) process, i.e. a random walk with drift,

$$\kappa_t=\kappa_{t-1}+\gamma+e_t,$$

where \(\gamma\) is a constant drift and \(e_t\) is a normally distributed error term with mean zero.

For fitting this model to historical mortality data, we use the weighted least squares algorithm introduced by Wilmoth [54] and adjust the \(\kappa_t\) by fitting a Poisson regression model to the annual number of deaths at each age (see Booth et al. [6] for details). Since mortality data usually becomes very sparse at old ages, we only fit the model for ages 20 to 95 and use the following extrapolation up to age 120: We extract the \(\alpha_x\) for \(x\geq 96\) from a logistic Gompertz form which we fit by least squares to the \(\alpha_x\) for \(x\leq 95\). For the \(\beta_x\), we assume the average value of \(\beta_{91}\) to \(\beta_{95}\) to hold for all \(x\geq 91\).
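For illustration, the following sketch shows a basic Lee-Carter fit in Python. It uses the common SVD-based estimation rather than the weighted least squares algorithm of Wilmoth [54] and the Poisson adjustment described above; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def fit_lee_carter(m):
    """Basic SVD-based Lee-Carter fit.
    m: central mortality rates, shape (n_ages, n_years)."""
    log_m = np.log(m)
    alpha = log_m.mean(axis=1)                          # average log mortality level per age
    A = log_m - alpha[:, None]                          # centred log rates
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scale = U[:, 0].sum()
    beta = U[:, 0] / scale                              # identification: sum(beta) = 1
    kappa = s[0] * Vt[0, :] * scale                     # time trend (mortality index)
    drift = (kappa[-1] - kappa[0]) / (len(kappa) - 1)   # drift of the random walk
    return alpha, beta, kappa, drift
```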

The (deterministic) projection of future mortality rates is then conducted by setting the error terms \(\epsilon_{x,t}\) and \(e_t\) equal to their mean of zero for all future years. Due to the lognormal distribution of the m(x,t), the mortality rates derived in this way are not the expectations of the future m(x,t), which are typically listed in a generational mortality table, but their medians. However, even though not fully correct, this approach to projecting mortality within the Lee-Carter model is widely accepted (see, e.g., Wilmoth [54], Lee and Miller [38], or Booth and Tickle [5]) and should be unproblematic in particular in our setting, where only changes in projected mortality over time are considered. Finally, we derive approximate 1-year initial mortality rates q(x,t) from the central mortality rates as (cf. Cairns et al. [10])

$$q(x,t)\approx1-\exp\left\{-m(x,t)\right\}.$$
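A minimal sketch of the corresponding median projection and of the conversion from central to 1-year rates, continuing with the illustrative output of the fit above:

```python
import numpy as np

def project_q(alpha, beta, kappa_last, drift, horizon):
    """Median projection: all error terms are set to their mean of zero."""
    q = {}
    for h in range(1, horizon + 1):
        kappa_h = kappa_last + h * drift       # deterministic random-walk projection
        m_h = np.exp(alpha + beta * kappa_h)   # projected central rates m(x, t+h)
        q[h] = 1.0 - np.exp(-m_h)              # q(x, t+h) ~ 1 - exp(-m(x, t+h))
    return q
```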

The generational mortality tables are constructed for the male general population of England and Wales. For our purposes, it would obviously be preferable to build tables from annuitant or pensioner mortality data. However, a sufficient amount of such data is only available for a limited number of years and ages. Therefore, we make the assumption that the volatility derived from the general population tables is also valid for insureds’ mortality. As discussed in Sects. 5 and 6, an adjustment of the volatility to the mortality level of the population under consideration may be appropriate to account for this issue, but for simplicity we do not consider such an adjustment here.

Mortality data, i.e. deaths and exposures, for the years 1947 to 2006 has been obtained from the Human Mortality Database [35]. A generational mortality table is then constructed based on a Lee-Carter fit to each set of 30 consecutive years of data. We decided to use only 30 years of data for each Lee-Carter fit as we think it is not reasonable to calibrate a mortality model with constant parameters \(\beta_x\) and \(\gamma\) to a significantly longer time series of historical data. Changes in age-dependent mortality reduction rates have been observed for most countries in the past (see, e.g., Vaupel [53] and Booth and Tickle [5]), including England and Wales, and using a larger set of data would thus imply the risk of extrapolating outdated mortality trends from the distant past into the future. Moreover, Chan et al. [18], Hanewald [29], and Booth et al. [6], amongst others, find structural breaks in the time trend \(\kappa_t\) for England and Wales as well as for other industrialized countries. A significantly shorter time series of data, on the other hand, might lead to the extrapolation of noise and merely temporary mortality trends. Nevertheless, the choice of 30 years remains rather arbitrary. Additionally, we assume a gap year for data collection between the fitting period for the Lee-Carter model and the starting year of the corresponding mortality table, e.g., from the data for the years 1947 to 1976 a table with starting year 1978 is derived. Thus, in total, we obtain a series of 31 mortality tables.
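The construction of the table series can then be organized as a simple rolling-window loop. The sketch below reuses the illustrative helpers from the previous snippets; m_by_year stands for the HMD deaths and exposures aggregated into central rates per calendar year and is an assumption of this sketch.

```python
import numpy as np

def build_tables(m_by_year, first_year=1947, last_year=2006, window=30, horizon=100):
    """One generational table per 30-year data window, with a one-year gap
    between the end of the fitting period and the table's starting year."""
    tables = {}
    for start in range(first_year, last_year - window + 2):       # 1947, ..., 1977
        m = np.column_stack([m_by_year[y] for y in range(start, start + window)])
        alpha, beta, kappa, drift = fit_lee_carter(m)             # see sketch above
        table_start = start + window + 1                          # e.g. 1947-1976 -> 1978
        tables[table_start] = project_q(alpha, beta, kappa[-1], drift, horizon)
    return tables                                                 # 31 tables: 1978, ..., 2008
```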

A.3 Calibration algorithm

Based on the series of generational mortality tables derived in the previous subsection, we are now able to calibrate the parameters of the forward model. In analogy to Bauer et al. [2], we fit the correction term to the forward force of mortality for a 20-year-old in the most recent mortality table using least squares and obtain the parameter values a=0.1069, b=12.57, and c=0.0007896.
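As a sketch, this least-squares fit could be carried out as follows; mu_forward, the forward force of mortality of the 20-year-old cohort taken from the most recent table, is assumed to be given and is therefore left as a placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_gompertz(x, a, b, c):
    """Logistic Gompertz form of Thatcher et al. [52]."""
    return np.exp(a * x + b) / (1.0 + np.exp(a * x + b)) + c

ages = np.arange(20, 121)          # ages covered by the correction term
# mu_forward = ...                 # forward force of mortality from the latest table
# params, _ = curve_fit(logistic_gompertz, ages, mu_forward)
# a_hat, b_hat, c_hat = params
```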

For the parameters \(c_i\), \(i=1,\ldots,d\), with d=6 in our case, Bauer et al. [1] present a calibration algorithm based on Maximum Likelihood estimation. However, due to numerical issues, they can only use a small part of the available data, i.e. six 1-year survival probabilities for different ages and maturities from each generational mortality table, the choice of which obviously implies some arbitrariness. Here, we propose a new 2-step calibration algorithm which makes use of all available data and of the fact that the deterministic volatility specified in Appendix A.1 can be re-written in the form \(\sigma_i(s,u,x_0)=c_i\,r_i(s,u,x_0)\), \(i=1,\ldots,d\), with the \(c_i\) as the only free parameters.

Let \(t_0=0\) be the year of the first generational mortality table, i.e. 1978 in our case, and let \(t_1,\ldots,t_{N-1}\) denote the (hypothetical) compilation years of all later tables. We define by

$$p_{x_0+T}^{(t_n):T\rightarrow T+1}:=\frac{E[{_{T+1}p_{x_0}^{(T+1)}}|{\mathcal{F}}_{t_n}]}{E[{_{T}p_{x_0}^{(T)}}|{\mathcal{F}}_{t_n}]}=\exp \biggl\{-\int_T^{T+1}\mu_{t_n}(u,x_0)\,du\biggr\}$$

the 1-year forward survival probability from T to T+1 for an \(x_0\)-year-old at time zero as seen at time \(t_n\), which can be obtained from the mortality tables. However, given a set of forward survival probabilities at time \(t_n\), \(n\in\{0,1,\ldots,N-2\}\), the fixed volatility specification only allows for certain changes in the forward survival probabilities up to time \(t_{n+1}\). We denote by \(\bar{p}_{x_{0}+T}^{(t_{n+1}):T\rightarrow T+1}\) the “attainable” forward survival probabilities at time \(t_{n+1}\) with respect to the forward survival probabilities at time \(t_n\) and the given volatility specification, and these probabilities satisfy the relation

Inserting the drift condition (8), some further computations yield

where the \(N_{i,m}^{(t_{n+1})}\), \(i=1,\ldots,d\), \(m=1,\ldots,M\), are standard normally distributed random variables, \(a_{i}^{(t_{n+1})}\geq 0\), and \(b_{i,m}^{(t_{n+1})}\in\mathbb{R}\), \(i=1,\ldots,d\). Obviously, the above equations hold for any \(M\in\mathbb{N}\), but if forward survival probabilities for different ages \(x_0\) and maturities T are considered, the choice of M has a significant influence on the correlation between the evolutions of these probabilities. For M=1, they would be fully correlated, which, in general, is only the case if the volatility vector is 1-dimensional or component-wise constant. These conditions are clearly not fulfilled in our situation, and hence the parameter M should be chosen as large as numerically feasible to match the actual correlation structure as accurately as possible. We will come back to this issue later.

The idea behind the first calibration step is now to choose the parameters \(a_{i}^{(t_{n+1})}\) and \(b_{i,m}^{(t_{n+1})}\) such that the attainable forward survival probabilities are as close as possible to the actual probabilities listed in the mortality table at \(t_{n+1}\). This is done by minimizing, for each n, the least-squares expression

$$\mathit{LS}_{t_{n+1}}:=\sum_{(T,x_0):\,T\geq t_{n+1},\,x_0+t_n\geq20,\atop x_0+T-t_{n+1}\leq\omega}\biggl(\frac{\log\{p_{x_0+T}^{(t_{n+1}):T\rightarrow T+1}\}-\log\{\bar{p}_{x_0+T}^{(t_{n+1}):T\rightarrow T+1}\}}{\log\{p_{x_0+T}^{(t_{n+1}):T\rightarrow T+1}\}}\biggr)^2,$$

where \(\omega\) is the limiting age. The changes in forward mortality from time \(t_n\) to time \(t_{n+1}\) can then be described by the parameters \(a_{i}^{(t_{n+1})}\) and \(b_{i,m}^{(t_{n+1})}\) in combination with the volatility \(\sigma(s,u,x_0)\).
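Structurally, this first step amounts to one least-squares minimization per pair of consecutive tables. The sketch below is only a schematic illustration: attainable_log_p is a placeholder for the relation between attainable and observed forward survival probabilities quoted above (which depends on the fixed volatility specification), and the parameterization of the \(a_i\) and \(b_{i,m}\) is an assumption of this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def calibration_step_one(log_p_obs, attainable_log_p, d=6, M=365):
    """log_p_obs: observed log 1-year forward survival probabilities at t_{n+1},
    flattened over all admissible (T, x_0) pairs."""
    def objective(theta):
        a = theta[:d] ** 2                     # enforce a_i >= 0 by squaring
        b = theta[d:].reshape(d, M)            # b_{i,m} are unconstrained
        log_p_att = attainable_log_p(a, b)     # attainable log probabilities
        rel_err = (log_p_obs - log_p_att) / log_p_obs
        return np.sum(rel_err ** 2)            # the least-squares criterion LS_{t_{n+1}}
    theta0 = np.zeros(d * (M + 1))
    result = minimize(objective, theta0, method="L-BFGS-B")
    return result.x[:d] ** 2, result.x[d:].reshape(d, M)
```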

In the second step, we derive values for the parameters \(c_i\), \(i=1,\ldots,d\), from the \(a_{i}^{(t_{n+1})}\) and \(b_{i,m}^{(t_{n+1})}\) via Maximum Likelihood estimation. For each \(n\in\{0,1,\ldots,N-2\}\), the aggregated change over all ages and maturities in log forward survival probabilities resulting from the i-th mortality effect, i.e. the changes driven by the i-th component of the vector of Brownian motions \(W_t\) (cf. (7)), \(i=1,\ldots,d\), is given by

This expression is normally distributed with mean

$$\begin{aligned}[c]m_i^{(t_{n+1})}&:=E\bigl[\mathit{ML}_i^{(t_{n+1})}\bigr]\\&=\sum_{(T,x_0):T\geq t_{n+1},x_0+t_n\geq20,\atop x_0+T-t_{n+1}\leq\omega}c_i^2\int_T^{T+1}\int_{t_n}^{t_{n+1}}r_i(s,u,x_0)\int_s^ur_i(s,v,x_0)\,dv\,ds\,du,\end{aligned}$$

and variance

and according to the independent increments property of a Brownian motion, the \(\mathit{ML}_{i}^{(t_{n+1})}\) are independent for all \(i=1,\ldots,d\) and \(n=0,\ldots,N-2\). Hence, the density of the random vector

$$\bigl(\mathit{ML}_1^{(t_{1})},\ldots,\mathit{ML}_d^{(t_{1})},\mathit{ML}_1^{(t_{2})},\ldots,\mathit{ML}_d^{(t_{2})},\ldots,\mathit{ML}_1^{(t_{N-1})},\ldots,\mathit{ML}_d^{(t_{N-1})}\bigr)$$

is the product of the marginal densities \(f_{\mathit{ML}_{i}^{(t_{n+1})}}\) and we need to maximize the likelihood

$$L:=\prod_{n=0}^{N-2}\,\prod_{i=1}^df_{\mathit{ML}_i^{(t_{n+1})}}\bigl(\hat {\mathit{ML}}_1^{(t_{n+1})},\ldots,\hat{\mathit{ML}}_d^{(t_{n+1})};\,c_1,\ldots,c_d\bigr)$$

with respect to \(c_i\), \(i=1,\ldots,d\), where the realizations \(\hat{\mathit{ML}}_{i}^{(t_{n+1})}\) are given by substituting \(c_{i}^{2}\) by the corresponding \(a_{i}^{(t_{n+1})}\) and \(|c_{i}|\,N_{i,m}^{(t_{n+1})}\) by the corresponding \(b_{i,m}^{(t_{n+1})}\). For numerical reasons, we choose to maximize the corresponding log-likelihood

$$\log\{L\}:=\sum_{n=0}^{N-2}\sum_{i=1}^d-\log\bigl\{s_i^{(t_{n+1})}\bigr\}-\frac{1}{2}\biggl(\frac{\hat{\mathit{ML}}_i^{(t_{n+1})}-m_i^{(t_{n+1})}}{s_i^{(t_{n+1})}}\biggr)^2.$$
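The second step is thus a standard Gaussian maximum likelihood problem in the \(c_i\). In the sketch below, mean_ml and sd_ml stand for numerical evaluations of \(m_i^{(t_{n+1})}\) and \(s_i^{(t_{n+1})}\) as functions of c (the integrals above); they are assumptions of this sketch and not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def calibration_step_two(ml_hat, mean_ml, sd_ml, d=6):
    """ml_hat: realizations ML_i^{(t_{n+1})}, shape (N-1, d)."""
    def neg_log_lik(c):
        ll = 0.0
        for n in range(ml_hat.shape[0]):
            for i in range(d):
                m_ni = mean_ml(n, i, c)        # m_i^{(t_{n+1})}(c)
                s_ni = sd_ml(n, i, c)          # s_i^{(t_{n+1})}(c)
                ll += -np.log(s_ni) - 0.5 * ((ml_hat[n, i] - m_ni) / s_ni) ** 2
        return -ll                             # minimize the negative log-likelihood
    result = minimize(neg_log_lik, x0=np.full(d, 0.01), method="Nelder-Mead")
    return result.x                            # estimates of c_1, ..., c_d
```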

Table 10 contains the resulting parameter values for M=365, i.e. a daily approximation of \(W_t\). However, due to different correction terms (cf. Appendix A.1), these parameter values cannot be directly compared to those in Bauer et al. [2].

Table 10 Optimal values for the volatility parameters

As mentioned above, the correlation structure between the changes in mortality for different ages and maturities can only be approximated, with the quality of the approximation increasing in M. In order to check the stability of the calibration algorithm and to ensure the reliability of the resulting parameter values, we ran the algorithm for different choices of M and observed convergence of the optimal parameter values. For instance, from M=200 to M=365, none of the parameter values changed by more than 0.05%. We therefore consider the calibration algorithm to be stable and the results given in Table 10 to be reliable.
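A stability check of this kind can be scripted as a simple loop over increasing values of M; calibrate stands for the full two-step procedure sketched above and is an assumption of this snippet.

```python
import numpy as np

def check_convergence(calibrate, grid=(50, 100, 200, 365), tol=5e-4):
    """Rerun the calibration for increasing M until no parameter changes by more than 0.05%."""
    previous = None
    for M in grid:
        c = np.asarray(calibrate(M))
        if previous is not None and np.max(np.abs(c - previous) / np.abs(previous)) <= tol:
            return M, c                        # converged at this M
        previous = c
    return grid[-1], previous                  # report the largest M tried
```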


Cite this article

Börger, M. Deterministic shock vs. stochastic value-at-risk — an analysis of the Solvency II standard model approach to longevity risk. Blätter DGVFM 31, 225–259 (2010). https://doi.org/10.1007/s11857-010-0125-z
