
Premium rating without losses

How to estimate the loss frequency of loss-free risks

  • Original Research Paper
  • Published in: European Actuarial Journal

Abstract

In insurance, and even more so in reinsurance, it happens that all one knows about a risk is that it has suffered no losses in the past, e.g. over seven years. Some of these risks are, moreover, so particular or novel that there are no similar risks from which to infer the loss frequency. In this paper we propose a loss frequency estimator that copes with such situations by relying solely on the information coming from the risk itself: the “amended sample mean” (ASM). It is derived from a number of practice-oriented first principles and turns out to have desirable statistical properties. Several variants are possible, enabling insurers to align the method with their preferred business strategy by trading off low initial premiums for new business against moderate premium increases after a loss for renewal business. We further give examples where the average loss can be assessed from market or portfolio information, such that overall one obtains an estimator of the risk premium.


References

  1. Albrecher H, Beirlant J, Teugels JL (2017) Reinsurance: actuarial and statistical aspects. Wiley, Hoboken

  2. Brazauskas V, Kleefeld A (2009) Robust and efficient fitting of the generalized Pareto distribution with actuarial applications in view. Insur Math Econ 45(3):424–435

  3. Brazauskas V, Jones BL, Zitikis R (2009) Robust fitting of claim severity distributions and the method of trimmed moments. J Stat Plan Inference 139(6):2028–2043

  4. Bühlmann H, Gisler A (2005) A course in credibility theory and its applications. Springer, Berlin

  5. Fackler M (2011) Panjer Class United: one formula for the probabilities of the Poisson, binomial, and negative binomial distribution. Anales del Instituto de Actuarios Españoles 17:1–12

  6. Fackler M (2013) Reinventing Pareto: fits for both small and large losses. In: ASTIN Colloquium 2013

  7. Fackler M (2017) Experience rating of (re)insurance premiums under uncertainty about past inflation. PhD thesis, Universität Oldenburg

  8. FINMA (2006) Technical document on the Swiss Solvency Test. FINMA, Bern

  9. Hao M, Macdonald AS, Tapadar P, Thomas RG (2019) Insurance loss coverage and social welfare. Scand Actuar J 2019(2):113–128

  10. Heckman PE, Meyers GG (1983) The calculation of aggregate loss distributions from claim severity and claim count distributions. Proc Casualty Actuar Soc 70:133–134

  11. Klugman SA, Panjer HH, Willmot GE (2008) Loss models: from data to decisions. Wiley, Hoboken

  12. Mack T (1997) Schadenversicherungsmathematik. Verl Versicherungswirtschaft, Karlsruhe

  13. Major J, Wang R, Woolstenhulme M (2015) The most dangerous model: a natural benchmark for assessing model risk. In: Society of Actuaries Monograph: Enterprise Risk Management Symposium

  14. Parodi P (2014a) Pricing in general insurance. CRC Press, Boca Raton

  15. Parodi P (2014b) Triangle-free reserving: a non-traditional framework for estimating reserves and reserve uncertainty. Br Actuar J 19(1):168–218

  16. Riegel U (2015) A quantitative study of chain ladder based pricing approaches for long-tail quota shares. ASTIN Bull 45(2):267–307

  17. Riegel U (2018) Matching tower information with piecewise Pareto. Eur Actuar J 8(2):437–460

  18. Schmutz M, Doerr RR (1998) The Pareto model in property reinsurance: formulas and applications. Swiss Reinsurance Company, Zurich

  19. Zhao Q, Brazauskas V, Ghorai J (2018) Robust and efficient fitting of severity models and the method of Winsorized moments. ASTIN Bull 48(1):275–309


Author information

Correspondence to Michael Fackler.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Large tables

For the amending function \(g_{3}\) we compare four distribution scenarios:

  • Poisson

  • Binomial: \(m=5\)

  • Negative Binomial 1: \(\alpha =4,k=7,\kappa =3\)

  • Negative Binomial 2: \(\alpha =1,k=4,\kappa =3\)

For the Poisson scenario we compare the five admissible amending functions proposed in this paper.

All quantities are displayed as functions of \(\lambda =\lambda _{+}\).

Table 5 Four scenarios for \(g_{3}\)
Table 6 MSE delta of five admissible amending functions, Poisson model

Appendix 2: How to use a wrong tariff

Consider a portfolio or a large, complex Commercial/Industrial risk that is (re)insured from the ground up, i.e. with no or only very small deductibles. Large portfolios usually produce enough losses for classical experience rating, but single accounts and small portfolios (e.g. Special Lines, emerging or newly started business) can be loss-free over some years, which makes them candidates for ASM rating. This requires an assessment of the average loss. As stated in Sect. 4, ground-up business appears much more heterogeneous than layer business, so we cannot expect simple rules, like market Pareto alphas, to yield the average severity immediately. On the other hand, for such risks the (re)insurer may get quite granular information about how the portfolio/risk is composed, namely whether the single insured units (objects, persons, vehicles, ...) are large or small with respect to a measure of size indicating the maximum loss potential, e.g. sum insured (Property lines, Personal Accident, some Third Party Liability business), PML/EML/MPL (large Property risks), insured value (Ocean Hull), etc.; see Section 2.1 of Fackler [7]. This often comes with qualitative information about relevant characteristics of the units, which may be clustered into groups according to size and characteristics (risk profile), or reported each separately (bordereau).

Let us explain how such information can help to roughly assess the average loss. As an example, consider the bordereau in Table 7, where each row represents a unit or a number of units having equal characteristics. (Suppose the bordereau has already been adjusted for inflation etc., so that it represents as-if figures for the future year.)

Table 7 Bordereau

The first and second columns show that this portfolio is very heterogeneous in terms of size, having only a few units in the million Euro range, while the majority is fifty times smaller. If we knew that the large units do not produce more losses on average than the small ones, we could infer that the average loss is dominated by the many small units and must be in the range of some ten thousand Euro, if not lower.

How could we make sure of this? Look at the third column, which displays the (gross) premium rate in per mil of the size. (Note that for Property PMLs this rate deviates from the usual premium rate, which always relates to the sum insured.)

If we have been provided with such premium rates (or, equivalently, the premiums) and believe that the given premiums are reasonable, the rating is essentially done: one can infer the risk premium from the given gross premium with small uncertainty. Yet this is not the situation we aim to address here. Instead, we want to study cases where we have “found” or been given premium rates but do not trust them too much, and thus want to use them only as a vague indication, e.g. in one of the following situations:

  • The reinsurer receives from the insurer a bordereau with the premiums actually charged by the latter, but suspects the overall premium level of the insurer to be inadequate, i.e. heavily underpriced or overpriced.

  • The (re)insurer receives from the client a bordereau without premiums, but with other information enabling him to assign premium rates to the units, by using a tariff from “similar” business.

The bottom line in both cases is the following: we have premium rates from a tariff (or the like) and feel that this tariff discerns fairly well between “good” and “bad” units (assigning accordingly low/high premium rates), but we have doubts about the overall level of the tariff. Our goal is to use this tariff only up to a factor, to be precise:

  • use the tariff only to assess the average loss,

  • use the empirical loss count (via ASM rating) to assess the frequency.

Note that this is not a Credibility approach. It seems straightforward to apply Credibility by using the tariff premium as the a priori premium, but this is only adequate if one believes that the tariff overall yields a reasonable premium level. Instead, it will turn out that we can work with much weaker (albeit somewhat unusual) assumptions, which would lead to the same result if we replaced our tariff by one having, say, three times higher or lower premium rates.

We need some notation. For a single unit j we consider the quantities as assembled in Table 8.

Table 8 Key figures of the units

In business lines covering units of variable size it is often possible to split each loss into a (usually major) part depending strongly (albeit not always proportionally) on unit size and a rather independent part (e.g. certain legal expenses). Accordingly, one gets a split of the average loss into a “constant” and a “variable” component:

$$\begin{aligned} L_{j}={}^{c\!}L_{j}+{}^{v\!}L_{j}={}^{c\!}L_{j}+{}^{v}l_{j}S_{j} \end{aligned}$$

The idea behind this split is that, while the sizes \(S_{j}\) may vary a lot across units, the average constant loss \(^{c\!}L_{j}\) and the average variable loss degree \(^{v}l_{j}={}^{v\!}L_{j}/S_{j}\) should usually vary much less, which makes them easier to assess.

Definition 11

For a finite set of real figures \(u_{j}\) and corresponding weights \(a_{j}\ge 0\), we write \({\overline{u}}\) for the ordinary arithmetic mean, while for the weighted average we write

$$\begin{aligned} {}^{a}{\overline{u}}:=\frac{\sum a_{j}u_{j}}{\sum a_{j}} \end{aligned}$$
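As a concrete aside, Definition 11 translates directly into code. The minimal Python sketch below is our own, not part of the paper; the name weighted_mean is a hypothetical choice.

    def weighted_mean(u, a):
        """Weighted average of figures u_j with nonnegative weights a_j (Definition 11)."""
        assert all(aj >= 0 for aj in a) and sum(a) > 0
        return sum(aj * uj for aj, uj in zip(a, u)) / sum(a)

    u = [10.0, 20.0, 60.0]
    print(weighted_mean(u, [1, 1, 1]))  # 30.0: equal weights give the ordinary mean
    print(weighted_mean(u, [5, 1, 1]))  # ~18.6: the heavily weighted first figure dominates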

Let us calculate upper bounds for risk premium and average severity of the risk as a whole. In the following the sums run over the units j and are sometimes indicated briefly by the subscript \(\Sigma \). For the risk premium we have

$$\begin{aligned} R_{\Sigma }&= \sum R_{j}=\sum f_{j}L_{j}=\sum f_{j}{}^{c\!}L_{j}+\sum f_{j}{}^{v}l_{j}S_{j}\\ &\le {}^{c\!}L_{max}\sum f_{j}+{}^{v}l_{max}\sum f_{j}S_{j}=f_{\Sigma }\left( ^{c\!}L_{max}+{}^{v}l_{max}\frac{\sum f_{j}S_{j}}{\sum f_{j}}\right) \\&= f_{\Sigma }\left( ^{c\!}L_{max}+{}^{v}l_{max}{}^{f}{\overline{S}}\right) \end{aligned}$$

and for the average loss

$$\begin{aligned} \frac{R_{\Sigma }}{f_{\Sigma }}={}^{f}{\overline{L}}\le {}^{c\!}L_{max}+{}^{v}l_{max}{}^{f}{\overline{S}} \end{aligned}$$

In the latter upper bound

  • \(^{c\!}L_{max}\) and \(^{v}l_{max}\) are not known, but in many situations reasonable, prudent estimates should be possible, e.g. 4000 Euro and \(25\%\), respectively. By contrast,

  • \(^{f}{\overline{S}}\) is completely unknown. This is the frequency-weighted average of the insured values, but its weights, the single frequencies, are not known.
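Before addressing this, note that the upper bound itself is easy to confirm numerically on synthetic data where the frequencies are known. The following Python sketch is our own illustration with made-up unit figures; it demonstrates the inequality, not the paper's method.

    import random

    random.seed(1)
    n = 50
    S  = [random.choice([20_000, 1_000_000]) for _ in range(n)]  # unit sizes S_j
    f  = [random.uniform(0.001, 0.05) for _ in range(n)]         # loss frequencies f_j
    cL = [random.uniform(1_000, 4_000) for _ in range(n)]        # constant losses cL_j
    vl = [random.uniform(0.05, 0.25) for _ in range(n)]          # variable loss degrees vl_j

    L = [c + v * s for c, v, s in zip(cL, vl, S)]                # average losses L_j
    mean_loss = sum(fj * Lj for fj, Lj in zip(f, L)) / sum(f)    # ^f L-bar = R_Sigma / f_Sigma
    fS_bar = sum(fj * sj for fj, sj in zip(f, S)) / sum(f)       # ^f S-bar
    bound = max(cL) + max(vl) * fS_bar
    assert mean_loss <= bound                                    # the derived bound holds
    print(f"{mean_loss:,.0f} <= {bound:,.0f}")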

However, another weighted average of the insured values can be calculated from the given data, the gross-premium-rate-weighted average:

$$\begin{aligned} ^{g}{\overline{S}}=\frac{\sum g_{j}S_{j}}{\sum g_{j}}=\frac{\sum G_{j}}{\sum g_{j}} \end{aligned}$$

How are these averages related? Noting that \(f_{j}S_{j}=w_{j}G_{j}\) holds for the auxiliary quantities \(w_{j}=f_{j}/g_{j}\), we calculate

$$\begin{aligned} \frac{^{f}{\overline{S}}}{^{g}{\overline{S}}}=\frac{\sum f_{j}S_{j}}{\sum f_{j}}\frac{\sum g_{j}}{\sum g_{j}S_{j}}=\frac{\sum w_{j}G_{j}}{\sum w_{j}g_{j}}\frac{\sum g_{j}}{\sum G_{j}}=\frac{\sum G_{j}w_{j}}{\sum G_{j}}\frac{\sum g_{j}}{\sum g_{j}w_{j}}=\frac{^{G}{\overline{w}}}{^{g}{\overline{w}}} \end{aligned}$$
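The chain of equalities is easy to verify numerically. The Python sketch below is our own, with arbitrary figures; it uses the relations \(G_{j}=g_{j}S_{j}\) and \(w_{j}=f_{j}/g_{j}\).

    def weighted_mean(u, a):  # as in Definition 11
        return sum(aj * uj for aj, uj in zip(a, u)) / sum(a)

    S = [20_000, 20_000, 1_000_000]        # unit sizes S_j
    g = [0.004, 0.006, 0.002]              # gross premium rates g_j
    f = [0.010, 0.020, 0.015]              # loss frequencies f_j
    G = [gj * sj for gj, sj in zip(g, S)]  # gross premiums G_j = g_j * S_j
    w = [fj / gj for fj, gj in zip(f, g)]  # auxiliary quantities w_j = f_j / g_j

    lhs = weighted_mean(S, f) / weighted_mean(S, g)  # ^f S-bar / ^g S-bar
    rhs = weighted_mean(w, G) / weighted_mean(w, g)  # ^G w-bar / ^g w-bar
    assert abs(lhs - rhs) < 1e-9 * lhs               # the two ratios coincide
    print(lhs, rhs)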

The final term looks promising, although the auxiliary quantities \(w_{j}\) are unknown. They are averaged in two different ways. Can these averages be very different?

To see first how much the \(w_{j}\) themselves may vary across the units, note that \(w_{j}=f_{j}/g_{j}=q_{j}/l_{j}\). The loss ratios \(q_{j}\) should hardly vary, provided that the tariff discerns fairly well between good and bad units: the loss ratios of large units can then be expected to be somewhat higher (due to a lower percentage of administration expenses and more power to negotiate low premiums), but the differences should be rather small. The \(l_{j}={}^{c\!}L_{j}/S_{j}+{}^{v}l_{j}\) certainly vary more, possibly yielding smaller values for larger units: this is plausible for both summands, although huge variation should not be the norm for the second one. As the latter will mostly dominate, we can expect the \(l_{j}\) to vary rather moderately, such that overall the \(w_{j}\) should not vary too much, certainly much less than the units’ sizes \(S_{j}\). Larger units should have somewhat larger \(w_{j}\) than small units.

Let us now look at the weights of the two averages. The \(G_{j}\) are usually much larger than average for large units, while the \(g_{j}\) are mostly much more balanced. Thus \(^{G}{\overline{w}}\) should normally be the larger average. But, with the \(w_{j}\) being fairly homogeneous, it is hard to imagine that the two averages could be extremely far apart. Of course, assuming \(^{G}{\overline{w}}\thickapprox {}^{g}{\overline{w}}\) would carry approximations too far, but in many situations it should be fair to assume

$$\begin{aligned} \frac{^{G}{\overline{w}}}{^{g}{\overline{w}}}\le C \end{aligned}$$

with a prudently chosen constant C well above 1, e.g. \(C=5\). Overall we get the upper bound

$$\begin{aligned} ^{f}{\overline{L}}\le {}^{c\!}L_{max}+{}^{v}l_{max}C\,{}^{g}{\overline{S}} \end{aligned}$$

where \(^{g}{\overline{S}}\) can be inferred from the bordereau and the three further terms are assessed by expert judgment.

To calculate \(^{g}{\overline{S}}\), we develop Table 9 out of Table 7 by adding three columns showing the subtotals per row, indicated by “st”.

Table 9 Enhanced bordereau

Calculating the totals over all units, we get the quantities

$$\begin{aligned} \sum G_{j}=26{,}960,\quad \sum g_{j}=957.0\permille ,\quad {}^{g}{\overline{S}}=\frac{\sum G_{j}}{\sum g_{j}}=28{,}171 \end{aligned}$$

which combined with the expert estimates

$$\begin{aligned} ^{c\!}L_{max}=4000,\quad {}^{v}l_{max}=25\%,\quad C=5 \end{aligned}$$

yield a conservative estimate of the average severity

$$\begin{aligned} \widehat{^{f}{\overline{L}}}=4000+35{,}214=39{,}214 \end{aligned}$$

To illustrate how this can be used in the premium rating, suppose we have observed, say, \(k_{+}=5.8\) loss-free years. Then ASM rating yields the frequency estimate

$$\begin{aligned} \widehat{f_{\Sigma }}=\frac{g\left( 0\right) }{k_{+}}=\frac{0.889}{5.8}=15.3\% \end{aligned}$$

which, multiplied by the estimated average severity, yields 6011 as a conservative estimate for the risk premium \(R_{\Sigma }\). If the gross premium in the bordereau, which totals 26,960, is the actual premium, the corresponding estimated loss ratio equals \(22.3\%\).
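For readers who want to retrace the arithmetic, the whole example condenses into a few lines of Python. The totals are taken from Table 9 and the expert estimates from above; the value \(g\left( 0\right) =0.889\) of the chosen amending function at zero losses is taken as given here.

    # Totals from the enhanced bordereau (Table 9)
    sum_G = 26_960           # total gross premium
    sum_g = 0.9570           # total premium rate (957.0 per mil)
    gS_bar = sum_G / sum_g   # ^g S-bar, approx. 28,171

    # Expert estimates
    cL_max, vl_max, C = 4_000, 0.25, 5
    severity = cL_max + vl_max * C * gS_bar  # conservative average loss, approx. 39,214

    # ASM frequency estimate from k+ = 5.8 loss-free years with g(0) = 0.889
    f_hat = 0.889 / 5.8                      # approx. 15.3%

    risk_premium = f_hat * severity          # approx. 6011
    loss_ratio = risk_premium / sum_G        # approx. 22.3%
    print(round(severity), round(risk_premium), f"{loss_ratio:.1%}")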

Note that the only input taken from the tariff is the ratio \(\sum G_{j}/\sum g_{j}\), such that, as anticipated, a tariff with e.g. three times higher or lower premium rates would have led to the same final result. We have indeed used only mild assumptions, and in particular nothing about the overall level of the tariff.

Yet this approach should be applied with care; it is not a panacea for loss-free situations of any kind. In particular, the assumptions about the homogeneity of the \(^{v}l_{j}\) and the \(w_{j}\) can break down easily if there are some very unusual units, e.g. large units having an extremely low premium rate due to a specific high deductible. Most Third Party Liability business is problematic, too. While in some cases the sums insured are closely tied to the loss potential (e.g. insolvency insurance), for many TPL covers the insureds are rather free to choose their sum insured, or more precisely their first-loss policy limit, according to their budget and risk aversion. Then very similar units can have sums insured between, say, 1 and 20 million Euro, such that the latter tell hardly anything about the average loss, which will mostly be far below the policy limit and only weakly affected by it.

About this article

Cite this article

Fackler, M. Premium rating without losses. Eur. Actuar. J. 12, 275–316 (2022). https://doi.org/10.1007/s13385-021-00302-0
