Abstract
For risk capital calculation within the framework of Solvency II the possible loss of basic own funds over the next business year of an insurance undertaking is usually interpreted as a random variable \( \varvec{X} \). If we assume that the parametric distribution family \(\left\{ \varvec{X} (\theta )|\theta \in I\subseteq \mathbb {R}^d \right\} \) is known, but the parameter \(\theta \) is unknown and has to be estimated from the available historical data, the undertaking faces parameter uncertainty. To assess methods to model parameter uncertainty for risk capital calculations we apply a criterion going back to the theory of predictive inference which has already been used in the context of Solvency II. In particular, we show that the bootstrapping approach is not appropriate to model parameter uncertainty from the undertaking’s perspective. Based on ideas closely related to the concept of fiducial inference we introduce a new approach to model parameter uncertainty. For a wide class of distributions and for common estimators including the maximum likelihood method we prove that this approach is appropriate to model parameter uncertainty according to the criterion mentioned above. Several examples demonstrate that our method can easily be applied in practice.
References
Barndorff-Nielsen OE, Cox DR (1996) Prediction and asymptotics. Bernoulli 2(4):319–340
Bignozzi V, Tsanakas A (2015) Parameter uncertainty and residual estimation risk. J Risk Insur. doi:10.1111/jori.12075
Borowicz J, Norman J (2006) The effects of parameter uncertainty in the extreme event frequency-severity model. Int Congr Actuar. http://www.ica2006.com/3020.html. Accessed 28 May 2014
Boucher CM, Danielsson J, Kouontchu PS, Maillet BB (2014) Risk models-at-risk. J Bank Financ 44:77–92
Datta GS, Mukerjee R, Ghosh M, Sweeting TJ (2000) Bayesian prediction with approximate frequentist validity. Ann Stat 28(5):1414–1426
Diers D (2007) Das Parameterrisiko—Ein Vorschlag zur Modellierung. Universität Ulm, Ulm. Preprint
Diers D, Eling M, Linde M (2013) Modeling parameter risk in premium risk in multi-year internal models. J Risk Financ 14(3):234–250
Efron B, Tibshirani RJ (1994) An introduction to the bootstrap. Chapman & Hall, New York
Fisher RA (1930) Inverse probability. Proc Camb Philos Soc 26:528–535
Fraser DA (1968) Fiducial inference. In: International Encyclopedia of the Social Sciences. The Macmillan Company and The Free Press, pp 403–406. http://fisher.utstat.toronto.edu/dfraser/documents/44. Accessed 28 July 2014
Gerrard R, Tsanakas A (2011) Failure probability under parameter uncertainty. Risk Anal 31(5):727–744
Hogg RV, Craig AT (1995) Introduction to mathematical statistics. Prentice Hall, Upper Saddle River
Klugman S, Panjer H, Willmot G (2012) Loss models: from data to decisions. Series in probability and statistics, 4th edn. Wiley, New York
Mata A (2000) Parameter uncertainty for extreme value distributions. GIRO Convention Papers
McNeil AJ, Frey R, Embrechts P (2005) Quantitative risk management. Princeton University Press, Princeton
Sauler K (2009) Das Prämienrisiko in der Schadenversicherung unter Solvency II. Gesellschaft f. Finanz- u. Aktuarwiss., Ulm
Sklar A (1959) Fonctions de répartition à n dimensions et leurs marges. Publ Inst Stat Univ Paris 8:229–231
Solvency II directive 2009/138/EC (2009). http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2009:335:0001:0155:en:PDF. Accessed 28 July 2014
Young GA, Smith RL (2005) Essentials of statistical inference. Cambridge University Press, Cambridge
Zabell SL (1992) R.A. Fisher and the fiducial argument. Stat Sci 7(3):369–387
Acknowledgments
We are indebted to the referees whose thorough reviews and many suggestions helped us in significantly improving the paper.
Appendices
Appendix 1
In this section we prove Property 1 for a wide class of distributions. More precisely, we prove Property 1 for distribution families belonging to the class of one- or two-parameter transformed location-scale families, for the maximum likelihood estimation method, for Bayesian estimation with a prior of the form \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\), and for the percentile matching estimation method. For location-scale families we additionally prove Property 1 for the method of moments and the unbiased parameter estimation.
1.1 Transformed location-scale families
Let us recall the definition of a transformed location-scale family (cf. [19, Sect. 5.2]).
Definition 5
A family \(\mathcal {F}_{LS}=\left\{ \varvec{X} (\theta ) | \theta \in I \subset \mathbb {R}^d\right\} \) of univariate probability distributions is called a location-scale family if for every \( \varvec{X} _1(\theta _1)\in \mathcal {F}_{LS}\) and \( \varvec{X} _2(\theta _2)\in \mathcal {F}_{LS}\) we can write \( \varvec{X} _2\,{\buildrel d \over =}\,a \varvec{X} _1+b\) for some \(a,b\in \mathbb {R}\) with \(a>0\).
Remark 4
1.
Let \( \varvec{Z} \) be an arbitrary random variable. Then we can construct a location-scale family by
$$\begin{aligned} \{\mu +\sigma \varvec{Z} : \mu ,\sigma \in \mathbb {R}, \sigma >0\}. \end{aligned}$$
Hence we may assume that every location-scale family \(\mathcal {F}_{LS}\) is given by random variables of the form \(\mu +\sigma \varvec{Z} \).
2.
Given a location-scale family \(\mathcal {F}_{LS}\) we can choose a particular element \(F\in \mathcal {F}_{LS}\) and call it the “standard distribution” of the family. For every \( \varvec{X} \in \mathcal {F}_{LS}\) we can then write \( \varvec{X} \,{\buildrel d \over =}\,\mu +\sigma \varvec{Z} \) with \( \varvec{Z} \sim F\) and \(\sigma >0\). We write \( \varvec{X} \sim F(\cdot ;\mu ,\sigma )\) with \(F(x;\mu ,\sigma )=F(\frac{x-\mu }{\sigma })\); note that \( \varvec{Z} \sim F(\cdot ;0,1)\). Without loss of generality we may assume that \( \varvec{Z} \) has mean \(E \varvec{Z} =0\) and variance \(\text{ Var } \varvec{Z} =1\).
3.
Examples of two-parameter location-scale families include the normal distribution and the \(t\)-distribution. Both the exponential distribution and the Gamma distribution with known shape parameter are examples of one-parameter scale families.
Definition 6
Let \(\mathcal {F}_{LS}\) be a location-scale family. A set \(\mathcal {F}_{LS}^h=\left\{ \varvec{X} (\theta ) | \theta \in I \subset \mathbb {R}^d\right\} \) of univariate probability distributions is called a transformed location-scale family if there exists a strictly increasing function \(h\) such that for any \( \varvec{X} \in \mathcal {F}_{LS}^h\) we have \(h^{-1}( \varvec{X} )\in \mathcal {F}_{LS}\).
Remark 5
1.
For a random variable \( \varvec{X} \) from a transformed location-scale family we have \( \varvec{X} \sim F(\cdot ;\mu ,\sigma ;h)\) where \(F(x;\mu ,\sigma ;h)=F\left( \frac{h^{-1}(x)-\mu }{\sigma }\right) .\)
2.
Examples of transformed location-scale families are the lognormal distribution, the Pareto distribution, the log-\(t\)-distribution, the log-logistic distribution, the log-Laplace distribution, the Weibull distribution and the Gumbel distribution (cf. e.g. [11, Table II]).
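As a concrete worked instance of Definition 6 (our own illustration, with \(h=\exp \) and standard normal \( \varvec{Z} \)), the lognormal family arises from the normal location-scale family:

```latex
% Lognormal as a transformed location-scale family with h = exp:
% if Z ~ N(0,1) and X = h(mu + sigma*Z) = e^{mu + sigma*Z}, then X is
% lognormal and, following Remark 5,
\[
  F(x;\mu,\sigma;h)
  \;=\; F\!\left(\frac{h^{-1}(x)-\mu}{\sigma}\right)
  \;=\; \Phi\!\left(\frac{\ln x-\mu}{\sigma}\right), \qquad x>0,
\]
% where \Phi is the standard normal distribution function.
```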
1.2 Property 1 for transformed location-scale families
In addition to the maximum likelihood method (MLE) we consider the following estimation methods:
1.2.1 Percentile matching (PE)
Let \(x_1,\ldots ,x_n\) be a sample of independent realizations drawn from \( \varvec{X} (\theta )\) and let \(x(1)\le \ldots \le x(n)\) be the order statistics of the sample. A smoothed empirical \(p\)-quantile can be defined by (cf. [13])
$$\begin{aligned} \hat{\pi }(p)=(1-h)\cdot x(j)+h\cdot x(j+1),\quad j=\lfloor (n+1)p\rfloor ,\quad h=(n+1)p-j. \end{aligned}$$
Choosing quantiles \(p_1,\ldots ,p_d\), a percentile matching estimate \(\hat{\theta }\in \mathbb {R}^d\) is a solution of the equations
$$\begin{aligned} F^{-1}_{ \varvec{X} (\hat{\theta })}(p_k)=\hat{\pi }(p_k),\quad k=1,\ldots ,d. \end{aligned}$$
For the sake of simplicity but without loss of generality, we choose the percentiles in the form \(p_k=\frac{m_k}{n+1},m_k\in \mathbb {N}\) such that \(\hat{\pi }(p_k)=x(m_k)\).
Note that the PE does not make sense in the general setting where the data \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta )\) may have distributions different from the distribution of \( \varvec{X} \). Therefore, we assume that \( \varvec{D} _1(\theta )= \varvec{X} _1,\ldots , \varvec{D} _n(\theta )= \varvec{X} _n\) are independent, identically distributed realizations of \( \varvec{X} \) if PE is used as the estimation method.
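To make the PE recipe concrete, here is a minimal numerical sketch for the normal location-scale family (function name, seed and percentile choices are our own; we use percentiles of the form \(p_k=m_k/(n+1)\) so that \(\hat{\pi }(p_k)=x(m_k)\)):

```python
import random
from statistics import NormalDist

# Hedged sketch of percentile matching (PE) for the normal location-scale
# family. With d = 2 parameters the matching conditions are two linear
# equations mu + sigma * z_{p_k} = x(m_k), k = 1, 2.
def percentile_matching_normal(sample, m1, m2):
    n = len(sample)
    xs = sorted(sample)                      # order statistics x(1) <= ... <= x(n)
    p1, p2 = m1 / (n + 1), m2 / (n + 1)      # percentiles p_k = m_k / (n+1)
    z1, z2 = NormalDist().inv_cdf(p1), NormalDist().inv_cdf(p2)
    sigma = (xs[m2 - 1] - xs[m1 - 1]) / (z2 - z1)   # solve the linear system
    mu = xs[m1 - 1] - sigma * z1
    return mu, sigma

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(99)]
mu_hat, sigma_hat = percentile_matching_normal(data, 25, 75)
```

Note that the estimate transforms equivariantly under affine changes of the data, which is exactly assertion (c) of Lemma 1 below for PE.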
1.2.2 Bayesian estimation
The parameter is estimated as the mean of the posterior distribution, i.e.
$$\begin{aligned} \hat{\theta }(d)=\int _I \theta \cdot \pi (\theta |d)\,d\theta . \end{aligned}$$
The posterior distribution given the sample \(d=(d_1,\ldots ,d_n)\) drawn from \( \varvec{D} (\theta )=( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta ))\) is given by
$$\begin{aligned} \pi (\theta |d)=\frac{f(d,\theta )\cdot \pi (\theta )}{\int _I f(d,\tilde{\theta })\cdot \pi (\tilde{\theta })\,d\tilde{\theta }} \end{aligned}$$
with the joint probability density function \(f(\cdot ,\theta )\) of \( \varvec{D} (\theta )\) and the prior distribution \(\pi \).
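A crude numerical sketch of this posterior-mean estimate for i.i.d. normal data under the prior \(\pi (\mu ,\sigma )=\sigma ^{-v}\) (grid ranges, grid size and all names are our own choices; a closed form or MCMC would be used in practice):

```python
import math
import random

# Hypothetical sketch: Bayesian estimate as posterior mean on a crude grid,
# prior pi(mu, sigma) = sigma^{-v}, i.i.d. normal data.
def bayes_posterior_mean_normal(data, v=1.0, grid=80):
    n = len(data)
    dbar = sum(data) / n
    s = math.sqrt(sum((x - dbar) ** 2 for x in data) / n)
    mus = [dbar + 4 * s * (2 * i / (grid - 1) - 1) for i in range(grid)]   # mu grid
    sigmas = [s * (0.2 + 2.8 * i / (grid - 1)) for i in range(grid)]      # sigma grid
    base = -n * math.log(s) - n / 2.0      # log-likelihood at (dbar, s), for stability
    total = mu_num = sigma_num = 0.0
    for mu in mus:
        for sigma in sigmas:
            loglik = -n * math.log(sigma) - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)
            w = math.exp(loglik - base) * sigma ** (-v)   # likelihood x prior (unnormalized)
            total += w
            mu_num += mu * w
            sigma_num += sigma * w
    return mu_num / total, sigma_num / total

random.seed(7)
sample = [random.gauss(5.0, 1.5) for _ in range(40)]
mu_hat, sigma_hat = bayes_posterior_mean_normal(sample, v=1.0)
```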
1.2.3 The method of moments and the unbiased parameter estimation for location-scale families
Consider independent random variables \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta )\) belonging to (possibly different) location-scale families \({\mathfrak {I}}_i:=\{\mu +\sigma \varvec{Z} _i|\mu \in \mathbb {R},\sigma > 0 \}\) with the same parameter \(\theta =(\mu ,\sigma )\), where the \( \varvec{Z} _i\) are independent of \((\mu ,\sigma )\). Without loss of generality we assume that \(E \varvec{Z} _i=0\) and \(\text{ Var } \varvec{Z} _i=1\).
Let \(d=(d_1,\ldots ,d_n)\) be a sample drawn from \(( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta ))\). The method of moments estimates the parameters \(\mu \) and \(\sigma \) by
$$\begin{aligned} \hat{\mu }=\frac{1}{n}\sum _{i=1}^n d_i,\qquad \hat{\sigma }^2=\frac{1}{n}\sum _{i=1}^n (d_i-\hat{\mu })^2. \end{aligned}$$
Additionally, we define the unbiased parameter estimation by
$$\begin{aligned} \hat{\mu }=\frac{1}{n}\sum _{i=1}^n d_i,\qquad \hat{\sigma }^2=\frac{1}{n-1}\sum _{i=1}^n (d_i-\hat{\mu })^2. \end{aligned}$$
Note that in this case \(\hat{\sigma }^2\) is the sample variance.
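The two moment-based estimators can be sketched as follows (a minimal illustration; names are ours):

```python
import math

# Method of moments: 1/n variance; unbiased estimation: 1/(n-1) sample variance.
def method_of_moments(d):
    n = len(d)
    mu = sum(d) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / n)
    return mu, sigma

def unbiased_estimation(d):
    n = len(d)
    mu = sum(d) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / (n - 1))
    return mu, sigma

d = [1.0, 2.0, 3.0, 4.0]
mm = method_of_moments(d)
ub = unbiased_estimation(d)
```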
Lemma 1
(a)
The MLE is invariant under differentiable increasing transformations. More precisely, let \(d_1,\ldots ,d_n\) be drawn from independent random variables \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta )\) corresponding to parametric distribution families \({\mathfrak {I}}_i:=\{ \varvec{D} _i(\theta )|\theta \in I\subset \mathbb {R}^d\}\), \(i=1,\ldots ,n\), and let \(S(d_1,\ldots ,d_n)\) be the corresponding MLE. For differentiable increasing functions \(h_i\) consider the transformed data \(d_i^*=h_i(d_i)\) and the transformed distribution families \({\mathfrak {I}}_i^*:=\{h_i( \varvec{D} _i(\theta ))|\theta \in I\subset \mathbb {R}^d\}\), \(i=1,\ldots ,n\), and let \(S^*(d_1^*,\ldots ,d_n^*)\) be the corresponding MLE. Then
$$\begin{aligned} S^*(d_1^*,\ldots ,d_n^*)=S(d_1,\ldots ,d_n). \end{aligned}$$
(b)
Let \(D^*(\zeta ,\theta )=D^*((\zeta _1,\ldots ,\zeta _n),\theta ):=\left( h_1(D_1(\zeta _1,\theta )),\ldots , h_n(D_n(\zeta _n,\theta ))\right) \). Under the assumptions of (a) let \({\mathfrak {h}}_{ \varvec{\zeta } }(\theta ):=S\left( D( \varvec{\zeta } ,\theta )\right) \) resp. \({\mathfrak {h}}_{ \varvec{\zeta } }^*(\theta ):=S^*\left( D^*( \varvec{\zeta } ,\theta )\right) \). Then the fiducial parameters \( \varvec{\theta } _{sim}:={\mathfrak {h}}^{-1}_{ \varvec{\zeta } }(\hat{\theta }_0)\) resp. \( \varvec{\theta } _{sim}^*:={\mathfrak {h}}_{ \varvec{\zeta } }^{*-1}(\hat{\theta }_0)\) are equal, that is, \( \varvec{\theta } _{sim}^*= \varvec{\theta } _{sim}\).
(c)
Let \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta )\) be independent random variables belonging to (possibly different) location-scale families \({\mathfrak {I}}_i:=\{\mu +\sigma \varvec{Z} _i|\mu \in \mathbb {R},\sigma >0\}\) with the same parameter \(\theta =(\mu ,\sigma )\), where the \( \varvec{Z} _i\) are independent of \((\mu ,\sigma )\). Let \(d_1,\ldots ,d_n\) be a sample drawn from \( \varvec{D} _1,\ldots , \varvec{D} _n\) and let \((\hat{\mu },\hat{\sigma })\) be the MLE of \((\mu ,\sigma )\). Then
$$\begin{aligned} \hat{\mu }(a+b\cdot d_1,\ldots ,a+b\cdot d_n)&=a+b\cdot \hat{\mu }(d_1,\ldots ,d_n),\\ \hat{\sigma }(a+b\cdot d_1,\ldots ,a+b\cdot d_n)&=b\cdot \hat{\sigma }(d_1,\ldots ,d_n). \end{aligned}$$
The assertions (a) and (b) of the Lemma also hold for the Bayesian estimate. Assertion (c) holds for the Bayesian estimate if the prior is chosen to be of the form \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\).
Under the additional assumptions that \( \varvec{D} _1(\theta )= \varvec{X} _1,\ldots , \varvec{D} _n(\theta )= \varvec{X} _n\) are i.i.d. realizations of \( \varvec{X} \) and that \(h=h_1=\ldots =h_n\), the assertions (a–c) of the Lemma also hold if PE is used as the estimation method.
Moreover, assertion (c) is true for the method of moments and the unbiased parameter estimation as described above.
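Assertion (c) is easy to check numerically for the normal family, where the MLE is the sample mean and the \(1/n\)-standard deviation (the check below is our own illustration):

```python
import math
import random

# Numerical illustration of Lemma 1 (c): for the normal location-scale family
# the MLE (mu_hat, sigma_hat) transforms equivariantly under d -> a + b*d.
def normal_mle(d):
    n = len(d)
    mu = sum(d) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / n)
    return mu, sigma

random.seed(0)
d = [random.gauss(0.0, 1.0) for _ in range(50)]
a, b = 2.0, 3.0
mu1, s1 = normal_mle(d)
mu2, s2 = normal_mle([a + b * x for x in d])
# Lemma 1 (c): mu2 == a + b*mu1 and s2 == b*s1 (up to rounding).
```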
Remark 6
Note that the differentiability assumption on the transformations in Lemma 1 can be dropped for PE.
Proof
First note that (a) implies \({\mathfrak {h}}^*_\zeta (\theta )={\mathfrak {h}}_\zeta (\theta )\); hence (b) follows from (a). Therefore, it remains to prove (a) and (c):
Let us first consider the maximum likelihood method. In this case:
For the Bayesian estimate:
(a)
is Lemma 1 (ii) in [11, Appendix A.1].
(c)
By Lemma 4 in [11, Appendix A.1]
$$\begin{aligned} \pi (\mu ,\sigma |a+b\cdot d)=\frac{1}{b^2}\cdot \pi \left( \frac{\mu -a}{b},\frac{\sigma }{b}|d\right) \end{aligned}$$
for all \(a,b\in \mathbb {R}\), \(b>0\). Thus
$$\begin{aligned} \hat{\mu }(a+b\cdot d)&=\int \mu \cdot \pi (\mu ,\sigma |a+b\cdot d)d(\mu ,\sigma )=\int \mu \cdot \frac{1}{b^2}\pi \left( \frac{\mu -a}{b},\frac{\sigma }{b}|d\right) d(\mu ,\sigma )\\&=\int (a+b\cdot \mu )\cdot \pi (\mu ,\sigma |d)d(\mu ,\sigma )=a+b\cdot \hat{\mu }(d) \end{aligned}$$
and
$$\begin{aligned} \hat{\sigma }(a+b\cdot d)&=\int \sigma \cdot \pi (\mu ,\sigma |a+b\cdot d)d(\mu ,\sigma )=\int \sigma \cdot \frac{1}{b^2}\pi \left( \frac{\mu -a}{b},\frac{\sigma }{b}|d\right) d(\mu ,\sigma )\\&=\int b\cdot \sigma \cdot \pi (\mu ,\sigma |d)d(\mu ,\sigma )=b\cdot \hat{\sigma }(d). \end{aligned}$$
We now consider the PE:
(a)
Follows from the equivalence of \(F^{-1}_{h( \varvec{X} (\theta ))}(p_k)=h(x(m_k))\) and \(F^{-1}_{ \varvec{X} (\theta )}(p_k)=x(m_k)\).
(c)
Setting \(x_i^*=a+bx_i\) the assertion follows from the equation
$$\begin{aligned} F^{-1}_{a+b\hat{\mu }(x_1,\ldots ,x_n)+b\hat{\sigma }(x_1,\ldots ,x_n) \varvec{Z} }(p_k)&=a+bF^{-1}_{\hat{\mu }(x_1,\ldots ,x_n)+\hat{\sigma }(x_1,\ldots ,x_n) \varvec{Z} }(p_k)\\&=a+bx(m_k)=x^*(m_k). \end{aligned}$$
For the method of moments, (c) follows from the fact that \(\overline{a+bd}=a+b\cdot \overline{d}\), where \(\overline{d}\) resp. \(\overline{a+bd}\) denotes the sample mean of \(d_1,\ldots ,d_n\) resp. \(a+b\cdot d_1,\ldots ,a+b\cdot d_n\), and
$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\left( a+b\cdot d_i-\overline{a+bd}\right) ^2=b^2\cdot \frac{1}{n}\sum _{i=1}^n\left( d_i-\overline{d}\right) ^2. \end{aligned}$$
The proof for the unbiased parameter estimation is analogous. \(\square \)
Corollary 2
Let the data \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta ),\) \(\theta =(\mu ,\sigma ),\) be independent random variables belonging to transformed location-scale families. Set \({\mathfrak {I}}_{ \varvec{D} _i}^{h_i}:=\left\{ h_i(\mu +\sigma \varvec{Z} _i)|\mu \in \mathbb {R},\sigma >0 \right\} \), \(i=1,\ldots ,n\), for (sufficiently smooth) increasing functions \(h_1,\ldots ,h_n\) and random variables \( \varvec{Z} _i\) independent of \(\theta =(\mu ,\sigma )\).
Consider the estimation methods \(S\)
- MLE,
- PE,
- Bayesian estimation with prior \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\)
under the canonical restrictions mentioned in Lemma 1 that are necessary for the application of the respective estimation method.
Then the parameter distribution according to the fiducial approach described in Sect. 4.3 is explicitly given by
$$\begin{aligned} ( \varvec{\mu } _{sim}, \varvec{\sigma } _{sim})=\left( \hat{\mu }_0-\frac{\hat{\sigma }_0}{\hat{\sigma }( \varvec{Z} _1,\ldots , \varvec{Z} _n)}\cdot \hat{\mu }( \varvec{Z} _1,\ldots , \varvec{Z} _n),\; \frac{\hat{\sigma }_0}{\hat{\sigma }( \varvec{Z} _1,\ldots , \varvec{Z} _n)}\right) , \end{aligned}$$
where \(\hat{\theta }_0=(\hat{\mu }_0,\hat{\sigma }_0)\) is the estimate based on the observed data \((d_1,\ldots ,d_n)\) and \(\hat{\mu }( \varvec{Z} _1,\ldots , \varvec{Z} _n)\) resp. \(\hat{\sigma }( \varvec{Z} _1,\ldots , \varvec{Z} _n)\) are the random estimates for the random vector \(( \varvec{Z} _1,\ldots , \varvec{Z} _n)\) according to the chosen estimation method \(S\).
In particular, the function \({\mathfrak {h}}_\zeta (\theta ):=S(D( \varvec{\zeta } ,\theta ))\) introduced in Sect. 4.3 is invertible.
The assertion of the corollary also holds for location-scale families if the method of moments or the unbiased parameter estimation is used.
Proof
Since by Lemma 1 (a), (b) the transformations \(h_i\) affect neither the estimate, nor the function \({\mathfrak {h}}_\zeta \), nor the fiducial parameter \(( \varvec{\mu } _{sim}, \varvec{\sigma } _{sim})\), it is sufficient to prove the assertion for location-scale families.
We use the notation \(D_i(\zeta _i,\theta )=\mu +\sigma \varvec{Z} _i\), \( \varvec{Z} _i=Z_i( \varvec{\zeta } _i)=F_{Z_i}^{-1}( \varvec{\zeta } _i)\) for \( \varvec{\zeta } =( \varvec{\zeta } _1,\ldots , \varvec{\zeta } _n)\) uniformly distributed on \([0;1]^n\) with independent components. It follows from Lemma 1 (c) that
$$\begin{aligned} {\mathfrak {h}}_{\zeta }(\mu ,\sigma )=S\left( D(\zeta ,(\mu ,\sigma ))\right) =\left( \mu +\sigma \cdot \hat{\mu }(Z_1(\zeta _1),\ldots ,Z_n(\zeta _n)),\; \sigma \cdot \hat{\sigma }(Z_1(\zeta _1),\ldots ,Z_n(\zeta _n))\right) \end{aligned}$$
for fixed \(\zeta =(\zeta _1,\ldots ,\zeta _n)\in [0;1]^n\).
Its inverse is given by
$$\begin{aligned} {\mathfrak {h}}_{\zeta }^{-1}(\hat{\mu }_0,\hat{\sigma }_0)=\left( \hat{\mu }_0-\frac{\hat{\sigma }_0}{\hat{\sigma }(Z_1(\zeta _1),\ldots ,Z_n(\zeta _n))}\cdot \hat{\mu }(Z_1(\zeta _1),\ldots ,Z_n(\zeta _n)),\; \frac{\hat{\sigma }_0}{\hat{\sigma }(Z_1(\zeta _1),\ldots ,Z_n(\zeta _n))}\right) . \end{aligned}$$
This yields the assertion since by definition \(( \varvec{\mu } _{sim}, \varvec{\sigma } _{sim})={\mathfrak {h}}_{ \varvec{\zeta } }^{-1}\left( (\hat{\mu }_0,\hat{\sigma }_0)\right) \) for \( \varvec{\zeta } =( \varvec{\zeta } _1,\ldots , \varvec{\zeta } _n)\) uniformly distributed on \([0;1]^n\). \(\square \)
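The corollary suggests a direct simulation scheme for the fiducial parameter distribution. The sketch below implements our reading of it for the normal family with the MLE (all names, the seed and the inversion formula as coded are our reconstruction, not a quotation):

```python
import math
import random

# Hedged sketch: draw (Z_1, ..., Z_n) from the standard distribution, form the
# random MLE estimates (mu_hat(Z), sigma_hat(Z)) and invert h_zeta around the
# observed estimate (mu0, sigma0).
def fiducial_draw(mu0, sigma0, n, rng):
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    mz = sum(z) / n                                      # mu_hat(Z_1, ..., Z_n)
    sz = math.sqrt(sum((x - mz) ** 2 for x in z) / n)    # sigma_hat(Z_1, ..., Z_n)
    sigma_sim = sigma0 / sz                              # sigma_sim = sigma0 / sigma_hat(Z)
    mu_sim = mu0 - sigma_sim * mz                        # mu_sim = mu0 - sigma_sim * mu_hat(Z)
    return mu_sim, sigma_sim

rng = random.Random(42)
draws = [fiducial_draw(1.0, 0.5, n=30, rng=rng) for _ in range(2000)]
```

The empirical distribution of `draws` is then the fiducial parameter distribution used to model parameter uncertainty.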
Proposition 2
The model introduced in Sect. 4.3 is parameter invariant in the sense of Property 1 for location-scale families if the MLE, the Bayesian estimation with prior \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\), or the method of moments is used for parameter estimation.
More precisely, let \( \varvec{X} (\theta )\) and the data \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta )\), \(\theta =(\mu ,\sigma )\), be independent random variables belonging to location-scale families \({\mathfrak {I}}_{ \varvec{X} }\) resp. \({\mathfrak {I}}_{ \varvec{D} _1},\ldots ,{\mathfrak {I}}_{ \varvec{D} _n}\) and use the MLE, the Bayesian estimate with prior \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\), the method of moments or the unbiased parameter estimation. Let \( \varvec{Y} (\hat{\theta }(\zeta ,\theta ))\) be the modelled risk according to the fiducial approach (cf. Equation 9). Then for all \(\xi \in [0;1]\), \(\zeta \in [0;1]^n\) the probability
$$\begin{aligned} P\left( \varvec{X} (\theta )\le F^{-1}_{ \varvec{Y} (\hat{\theta }(\zeta ,\theta ))}(\xi )\right) \end{aligned}$$
does not depend on \(\theta \).
Under the additional assumption that \( \varvec{D} _1(\theta )= \varvec{X} _1,\ldots , \varvec{D} _n(\theta )= \varvec{X} _n\) are i.i.d. realizations of \( \varvec{X} \) and \(h=h_1=\ldots =h_n\), the assertion also holds for the estimation method PE.
Proof
Let \( \varvec{X} (\mu ,\sigma )=\mu +\sigma \varvec{Z} \) and \( \varvec{D} _i(\mu ,\sigma )=\mu +\sigma \varvec{Z} _i\) with independent random variables \( \varvec{Z} \sim F_{ \varvec{Z} }\), \( \varvec{Z} _1,\ldots , \varvec{Z} _n\) not depending on \((\mu ,\sigma )\).
We get
Let \(\hat{\mu }_0:=\hat{\mu }(d_1,\ldots ,d_n)\) and \(\hat{\sigma }_0:=\hat{\sigma }(d_1,\ldots ,d_n)\) be the estimates of \(\mu \) resp. \(\sigma \) based on the observed data \(d_i=h_i(\mu +\sigma z_i)\), \(i=1,\ldots ,n\). From Corollary 2 and Lemma 1 it follows that
and
is independent of \((\mu ,\sigma )\). This completes the proof. \(\square \)
Proposition 3
The model introduced in Sect. 4.3 is parameter invariant in the sense of Property 1 for increasing transformations of location-scale families if the MLE or the Bayesian estimation with prior \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\) is used for parameter estimation.
More precisely: Let the random variable \( \varvec{X} (\theta )\) and the data \( \varvec{D} _1(\theta ),\ldots , \varvec{D} _n(\theta )\), \(\theta =(\mu ,\sigma )\), be independent random variables belonging to location-scale families \({\mathfrak {I}}_{ \varvec{X} }\) resp. \({\mathfrak {I}}_{ \varvec{D} _1},\ldots ,{\mathfrak {I}}_{ \varvec{D} _n}\).
For an invertible function \(h\) and differentiable, increasing functions \(h_1,\ldots , h_n\) consider the transformed random variables \( \varvec{X} ^*(\theta )=h( \varvec{X} (\theta ))\) and \( \varvec{D} _1^*(\theta )=h_1( \varvec{D} _1(\theta ))\),\(\ldots , \varvec{D} _n^*(\theta )=h_n( \varvec{D} _n(\theta ))\).
Let \( \varvec{Y} ^*(\hat{\theta }^*(\zeta ,\theta ))\) be the modelled risk according to the fiducial approach (cf. Equation 9) corresponding to both the respective transformed location-scale families of \( \varvec{X} ^*, \varvec{D} _1^*,\ldots , \varvec{D} _n^*\) and the MLE or the Bayesian estimate with prior \(\pi (\mu ,\sigma )=\sigma ^{-v}\), \(v\ge 0\) as estimation method.
Then for all \(\xi \in [0;1]\), \(\zeta \in [0;1]^n\) the probability
$$\begin{aligned} P\left( \varvec{X} ^*(\theta )\le F^{-1}_{ \varvec{Y} ^*(\hat{\theta }^*(\zeta ,\theta ))}(\xi )\right) \end{aligned}$$
does not depend on \(\theta \).
Under the additional assumption that \( \varvec{D} _1(\theta )= \varvec{X} _1,\ldots , \varvec{D} _n(\theta )= \varvec{X} _n\) are i.i.d. realizations of \( \varvec{X} \) and \(h=h_1=\ldots =h_n\), the assertion also holds for the estimation method PE.
Proof
Lemma 1 (b) yields that
for \(\xi \in [0;1]\) uniformly distributed. Note that by Lemma 1 (a) \(\hat{\theta }^*(\zeta ,\theta )=\hat{\theta }(\zeta ,\theta )\) for fixed \(\xi \in [0;1]\) and \(\zeta \in [0;1]^n\).
Thus
The assertion follows from Proposition 2. \(\square \)
Appendix 2: Independence of the bootstrapping failure probability for transformed location-scale families
In this section we prove that, given a random variable \( \varvec{X} \) which belongs to a transformed location-scale family, the probability \(P\left( \varvec{X} \le SCR(\alpha ; \varvec{X} _1,\ldots , \varvec{X} _n;M)\right) \) does not depend on \((\mu ,\sigma )\) if the estimation method \(S\) is the maximum likelihood method and \(M\) is either non-parametric or parametric bootstrapping.
Throughout this section let \(\{\mu +\sigma \varvec{Z} : \mu ,\sigma \in \mathbb {R}, \sigma >0\}\) be a location-scale family and let \(S\) be the maximum likelihood method. Let \(\{\mu +\sigma Z_1,\ldots ,\mu +\sigma Z_n\}\) be a fixed sample drawn from \(\mu +\sigma \varvec{Z} \). Bootstrapping generates a random sample \( \varvec{X} _1^*=\mu +\sigma \varvec{Z} _1^*,\ldots , \varvec{X} _n^*=\mu +\sigma \varvec{Z} _n^*\) from \(\{X_1=\mu +\sigma Z_1,\ldots ,X_n=\mu +\sigma Z_n\}\). This sample leads to the bootstrap parameter estimates \(( \varvec{\mu } ^*, \varvec{\sigma } ^*)\).
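The two bootstrap variants can be sketched as follows for the normal family with the MLE (a hypothetical minimal implementation; names and the seed are ours):

```python
import math
import random

# Normal MLE: sample mean and 1/n-standard deviation.
def normal_mle(xs):
    n = len(xs)
    mu = sum(xs) / n
    return mu, math.sqrt(sum((x - mu) ** 2 for x in xs) / n)

# Non-parametric bootstrap resamples the observed points with replacement;
# parametric bootstrap draws a fresh sample from X(mu_hat, sigma_hat).
def bootstrap_params(xs, parametric, rng):
    n = len(xs)
    mu_hat, sigma_hat = normal_mle(xs)
    if parametric:
        star = [rng.gauss(mu_hat, sigma_hat) for _ in range(n)]
    else:
        star = [rng.choice(xs) for _ in range(n)]
    return normal_mle(star)   # bootstrap estimates (mu*, sigma*)

rng = random.Random(3)
sample = [rng.gauss(10.0, 2.0) for _ in range(60)]
np_star = bootstrap_params(sample, parametric=False, rng=rng)
p_star = bootstrap_params(sample, parametric=True, rng=rng)
```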
Lemma 2
Let \(( \varvec{\mu } ^*, \varvec{\sigma } ^*)\) be the (random) parameters obtained by bootstrapping. Then \(\frac{\mu - \varvec{\mu } ^*}{ \varvec{\sigma } ^*}\) and \(\frac{\sigma }{ \varvec{\sigma } ^*}\) are independent of \((\mu ,\sigma )\).
Proof
1. For non-parametric bootstrapping the resampled data can be written as \( \varvec{X} _i^*=\mu +\sigma \varvec{Z} _i^*\), where \( \varvec{Z} _1^*,\ldots , \varvec{Z} _n^*\) are drawn with replacement from \(\{Z_1,\ldots ,Z_n\}\) and hence have a joint distribution that does not depend on \((\mu ,\sigma )\). By Lemma 1 (c) we get
$$\begin{aligned} \varvec{\mu } ^*=\mu +\sigma \cdot \hat{\mu }( \varvec{Z} _1^*,\ldots , \varvec{Z} _n^*),\qquad \varvec{\sigma } ^*=\sigma \cdot \hat{\sigma }( \varvec{Z} _1^*,\ldots , \varvec{Z} _n^*). \end{aligned}$$
It follows that
$$\begin{aligned} \frac{\mu - \varvec{\mu } ^*}{ \varvec{\sigma } ^*}=-\frac{\hat{\mu }( \varvec{Z} _1^*,\ldots , \varvec{Z} _n^*)}{\hat{\sigma }( \varvec{Z} _1^*,\ldots , \varvec{Z} _n^*)} \end{aligned}$$
and
$$\begin{aligned} \frac{\sigma }{ \varvec{\sigma } ^*}=\frac{1}{\hat{\sigma }( \varvec{Z} _1^*,\ldots , \varvec{Z} _n^*)} \end{aligned}$$
are independent of \((\mu ,\sigma ).\)
2.
For parametric bootstrapping the \( \varvec{X} _i^*\) are independent random variables with distribution equal to that of \( \varvec{X} (\hat{\mu },\hat{\sigma })\). We get that
$$\begin{aligned} \frac{\sigma ^2}{{ \varvec{\sigma } ^*}^2}&=\frac{\sigma ^2}{\hat{\sigma }^2}\cdot \frac{\hat{\sigma }^2}{{ \varvec{\sigma } ^*}^2} =\frac{\sigma ^2}{\frac{1}{n}\sum (X_i-\overline{X})^2}\cdot \frac{\hat{\sigma }^2}{\frac{1}{n}\sum ( \varvec{X} _i^*-\overline{ \varvec{X} ^*})^2}\\&=\frac{\sigma ^2}{\frac{\sigma ^2}{n}\sum (Z_i-\overline{Z})^2}\cdot \frac{\hat{\sigma }^2}{\frac{1}{n}\sum \left( \hat{\mu }+\hat{\sigma } \varvec{Z} _i^*-\left( \hat{\mu }+\hat{\sigma }\overline{ \varvec{Z} ^*}\right) \right) ^2}\\&=\frac{n}{\sum (Z_i-\overline{Z})^2}\cdot \frac{n}{\sum \left( \varvec{Z} _i^*-\overline{ \varvec{Z} ^*}\right) ^2}, \end{aligned}$$
where \(\overline{ \varvec{Z} ^*}=\frac{1}{n}\sum \varvec{Z} _i^*\), and
$$\begin{aligned} \frac{ \varvec{\mu } ^*-\mu }{ \varvec{\sigma } ^*}&=\frac{ \varvec{\mu } ^*-\hat{\mu }}{\hat{\sigma }}\cdot \frac{\hat{\sigma }}{ \varvec{\sigma } ^*}+\frac{\hat{\mu }-\mu }{\sigma }\cdot \frac{ \sigma }{ \varvec{\sigma } ^*}\\&=\frac{\frac{1}{n}\sum (\hat{\mu }+\hat{\sigma } \varvec{Z} _i^*)-\hat{\mu }}{\hat{\sigma }}\cdot \frac{\hat{\sigma }}{ \varvec{\sigma } ^*}+\frac{\frac{1}{n}\sum (\mu +\sigma Z_i)-\mu }{\sigma }\cdot \frac{\sigma }{ \varvec{\sigma } ^*}\\&=\frac{1}{n}\sum \varvec{Z} _i^*\cdot \frac{\hat{\sigma }}{ \varvec{\sigma } ^*}+\frac{1}{n}\cdot \frac{\sigma }{ \varvec{\sigma } ^*}\sum Z_i \end{aligned}$$
are both independent of \((\mu ,\sigma )\).
\(\square \)
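Lemma 2 can be illustrated deterministically: holding the standard sample and the resampling indices fixed, the two pivot quantities take identical values for any \((\mu ,\sigma )\) (our own sketch for the normal family with non-parametric bootstrapping):

```python
import math
import random

# Deterministic illustration of Lemma 2: with the standard sample Z_1,...,Z_n
# and the resampling indices held fixed, (mu - mu*)/sigma* and sigma/sigma*
# do not change when (mu, sigma) changes.
def mle(xs):
    n = len(xs)
    m = sum(xs) / n
    return m, math.sqrt(sum((x - m) ** 2 for x in xs) / n)

rng = random.Random(5)
z = [rng.gauss(0.0, 1.0) for _ in range(30)]
idx = [rng.randrange(30) for _ in range(30)]      # fixed non-parametric resample

def pivots(mu, sigma):
    xs = [mu + sigma * zi for zi in z]
    star = [xs[i] for i in idx]
    mu_star, sigma_star = mle(star)
    return (mu - mu_star) / sigma_star, sigma / sigma_star

p1 = pivots(0.0, 1.0)
p2 = pivots(7.0, 3.5)
```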
Proposition 4
Let \(\{ \varvec{X} (\theta ): \theta =(\mu ,\sigma )\in \mathbb {R}^2, \sigma >0\}\) be a transformed location-scale family and let \(S\) be the maximum likelihood method. For a fixed parameter \(\theta \), consider a fixed set of data \(X_1(\zeta _1,\theta ),\ldots ,X_n(\zeta _n,\theta )\) and let \( \varvec{Y} (\zeta ,\theta )\) be the random variable defined by the bootstrapping approach, that is, \( \varvec{Y} (\zeta ,\theta )= \varvec{X} ( \varvec{\theta } ^*)\), where \( \varvec{\theta } ^*=( \varvec{\mu } ^*, \varvec{\sigma } ^*)\) is obtained using either parametric or non-parametric bootstrapping.
Then for all \(\xi \in [0;1]\) the probability
$$\begin{aligned} P\left( \varvec{X} (\theta )\le F^{-1}_{ \varvec{Y} (\zeta ,\theta )}(\xi )\right) \end{aligned}$$
does not depend on \(\theta \).
Proof
The proof is analogous to the proof of Proposition 2 and Proposition 3 above. In this case we consider
and apply Lemma 2. \(\square \)
Corollary 3
With the assumptions of Proposition 4,
$$\begin{aligned} P\left( \varvec{X} \le SCR(\alpha ; \varvec{X} _1,\ldots , \varvec{X} _n; \varvec{M} )\right) \end{aligned}$$
does not depend on \(\theta \) if \( \varvec{M} \) is either parametric or non-parametric bootstrapping.
Proof
Using Proposition 4 the assertion follows from
\(\square \)
Fröhlich, A., Weng, A. Modelling parameter uncertainty for risk capital calculation. Eur. Actuar. J. 5, 79–112 (2015). https://doi.org/10.1007/s13385-015-0109-4