Introduction

The purpose of this paper is to analyze the handling of operational risks (OpRisks) when they are transferred to insurance agencies from an economic standpoint. OpRisk losses comprise all the extra costs that would not have been incurred had the triggering event not occurred. In Basel II/III, OpRisk is defined as “the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. This definition includes legal risk but excludes strategic and reputational risk” (Basel Committee 2004). The China Risk Oriented Solvency System (C-ROSS) conceptual framework, issued by the China Insurance Regulatory Commission (CIRC), analogously defines OpRisk as “The risk of direct or indirect losses due to inadequate internal processes, personnel and systems or from external events, including legal and supervisory compliance risk—but excluding strategic risk and reputational risk” (CIRC C-ROSS Conceptual Framework 2013).

Among the OpRisk classes set out in Basel II and introduced in “Analysis of the reference scope”, the propositions of this paper apply to all insurable risks that can be quantified through historical loss data.

Within business organizations, insurable risk management is indeed one of the major tasks that fall under the responsibility of chief risk officers (CROs). In this regard, CROs are increasingly called to pursue more meaningful and productive dealings with insurance agencies by optimizing retained losses, gaining more control over their insurance program, and addressing specific risk exposures (Colquitt et al. 2008).

One of the most powerful approaches that CROs can adopt in dealing with insurance agencies is commonly referred to as total cost of risk (TCoR). The TCoR is a consistent and reportable approach; hence, it is a cross-sector method that CROs in different businesses can apply to minimize OpRisk loss costs.

The TCoR has traditionally been defined in three broad categories:

  • Premiums: cost of risk transfer to the marketplace (insurance carriers);

  • Losses: cost of retained losses;

  • Administrative expenses: cost of internal and external risk management functions, which include staffing costs, consulting services costs, brokers’ fees, claim funding costs, and actuarial and ancillary service costs (Aon Risk Solutions 2011).

From a CRO’s point of view, the TCoR value calculation equals the sum of the cost of insurance—premiums and administrative expenses—and costs not covered by insurance—retained losses.

In this regard, the TCoR can be defined as the—negative—value contribution of the considered risk factors. TCoR optimization hence aims to reduce the net cost of risks and related management measures in a transparent, comprehensible, and defensible manner. Therefore, by adopting this approach, the company can derive the optimal balance between risk retention and risk transfer within an overall insurance management strategy that delivers a net value contribution to the enterprise (Kross and Gleissner 2009).

Methods of calculating the TCoR vary by organization, by industry, and by the type of risk being assessed. Additionally, the results of TCoR calculations are often difficult to interpret and usually fail to provide useful guidance for identifying key loss cost drivers (i.e., the most significant loss costs) and decreasing the TCoR (Galusha et al. 2013). The TCoR can be expressed per $1000 of turnover to allow a year-on-year comparison and a comparison against industry peers or other industries, where data are available (Gardener 2008).
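
As a worked illustration of this normalization (with made-up figures, not data from any cited source), a company with an annual turnover of $400 million and a TCoR of $2 million would report

$$\text{TCoR per } \$1000 \text{ of turnover} = \frac{2{,}000{,}000}{400{,}000{,}000} \times 1000 = \$5,$$

a figure that can then be tracked year on year or benchmarked against peers.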

The TCoR scope of application covered in this article mainly considers the internal operational risks that CROs first need to quantify through historical loss data and then to address through aligned insurance programs. As stated in the analytical study illustrated below, the chosen TCoR definition does not include surcharges—administrative expenses, markup, etc.—the cost of capital of the company, or the premium loading factor of the insurance agency.

Literature review

Over the last decade, risk management (RM) has strived to gain enterprise-spanning applications by adopting a holistic approach in which strategic, operational, reporting and compliance risks are addressed simultaneously rather than separately. In this light, the RM discipline has followed an evolutionary path from company function-centric management (Bowling and Rieger 2005; Hoyt and Liebenberg 2011) to a company business-centric approach, or enterprise risk management (ERM) (Arena et al. 2010).

Within business organizations, CROs oversee the ERM strategy. The CRO position is increasingly becoming involved in the management of a broader spectrum of risks facing firms. This broadened focus also affects CROs’ roles within organizations and influences the RM tools that are being used (Colquitt et al. 2008).

In addition to performing their duties, CROs spend a significant amount of their time building and maintaining relationships both internal and external to the organization—with government agencies, competitors, insurance agencies, and professional organizations. These interactions encompass a broad range of roles, e.g., monitor, liaison, spokesman, disseminator, and negotiator. Based on an extensive survey, Karanja and Rosso (2017) described the liaison role as follows: “…works closely with line management, actuarial, finance, legal, reinsurance and investments and deals with other key corporate governance and control functions…”. Within the OpRisk area, CROs are increasingly called to optimize retained losses, gain more control over their insurance program, and address specific risk exposures (Colquitt et al. 2008).

Regarding the RM tools that are being used by CROs, quantitative methods and techniques that aim to provide potentially effective responses to risk management challenges have been refined and spread among world-class companies. Using data from 825 organizations, Paape and Speklé (2012) concluded that the use of quantitative approaches to assess risk appears to improve ERM effectiveness.

For the purpose of this paper, the two focal points of specific interest are the relationship between business and insurance agencies within the ERM framework and TCoR implementation as a quantitative approach within the ERM framework.

First, within the ERM framework, the relationship between business and insurance agencies attracted some attention in the literature following the introduction of OpRisk as a new set of risk factors to be addressed under the new Basel Capital Accord, commonly referred to as Basel II (Kross and Gleissner 2009). Furthermore, Kross and Gleissner (2009) demonstrated that insurance and other risk transfer mechanisms should be considered a set of suitable instruments that can deliver a positive net contribution to enterprise value.

Regarding Solvency II, the European insurance industry has embraced the regulation of integrated risk management systems (Santomil 2020). However, although risk management is a fundamental element in the insurance sector, little research has been conducted on the degree to which companies implement ERM. Gatzert and Kolb (2013) concluded that insurance companies need to measure and manage OpRisk in the context of a holistic enterprise risk management system. This system should feature an internal risk-sensitive approach that accounts for dependencies between risk factors and should include additional insurance and prevention programs.

In light of the literature examined, the relationship between business and insurance agencies appears to have been studied very little from the enterprise side, despite having received some attention on the insurance industry side. For this reason, this paper aims to contribute to the current literature with a scientific analysis of how CROs can have meaningful and productive dealings with insurance agencies.

Regarding TCoR implementation, Ward (2001) identifies specific tasks overseen by CROs when transferring OpRisks to external third parties. These tasks revolve around optimizing the structure of an insurance program by identifying the need for insurance, negotiating insurance, providing insurance programs, rationalizing insurance, streamlining the administration of insurance, and administering insurance arrangements, e.g., brokers and captives. For these purposes, the TCoR emerges as a useful concept for companies striving to reduce their economic exposure to OpRisk over the long term due to the uncertainty and variability inherent in their manufacturing and business processes (Gardener 2008).

In the literature, consideration of the TCoR has either focused on specific areas of a company, e.g., safety (Derradji and Hamzi 2020; Galusha et al. 2013; Nassif 2015), or the use of a qualitative approach within the ERM framework (Gardener 2008; Kross and Gleissner 2009).

As far as the TCoR literature is concerned, little has been studied and written on TCoR implementation that takes an overall approach to the relationship between business and insurance agencies or that follows a quantitative methodology. Hence, this paper provides a comprehensive, quantitative framework for CROs to use in analyzing the economic implications of OpRisks when they are transferred to insurance agencies.

Methodology

A three-step methodology is displayed in Fig. 1. The first step, “The analysis of the reference scope”, refers to how the OpRisks are transferred to insurance agencies and how their losses are estimated considering past event data. The second step, “The analysis of the core concept”, discusses how OpRisks can be linked to the insurance agency premium and evaluated through the TCoR approach. The last step of the methodology, on the basis of the application of the results of the previous analyses, addresses the outcome implementation in terms of how CROs can optimize the TCoR by cost-effectively balancing the insurance premiums and retained losses, i.e., the OpRisk costs not transferred to insurance agencies. Through these methodological steps, the paper provides a detailed analytical description of the following specific issues that CROs need to overcome while mitigating the OpRisk cost by applying the TCoR method:

  • OpRisks vs. their transfer to insurance agencies;

  • An estimate of OpRisk losses on a historical data set basis;

  • The linking of OpRisk losses with the insurance agency’s premium;

  • The TCoR evaluation of OpRisks transferred to insurance agencies;

  • The company’s OpRisk TCoR optimization.

Fig. 1 Three-step methodology: the analysis of the reference scope, the analysis of the core concept, and the outcome implementation

Analysis of the reference scope

This first step of the methodology describes how the OpRisks are transferred to insurance agencies and how their losses are estimated considering past event data.

The OpRisks vs. their transfer to insurance agencies

In broad terms, RM covers four conventional approaches to handling OpRisks, i.e., actions to mitigate, eliminate, accept, or transfer them. In relation to OpRisk transfer to third parties, CROs are increasingly called to address risk exposure cost-effectively by optimizing OpRisk costs and to gain awareness of their insurance program by dealing more meaningfully and productively with insurance agencies.

Within the paper’s reference scope, the evaluation of the cost related to OpRisk exposure—namely, the TCoR value (see Eq. 5)—generally includes, in addition to the consequences of adverse events, additional legal costs and loss of profit until a partial or full recovery is made. Recovery can be direct, when implemented by the company, or indirect, through insurance coverage. One of the critical steps in the TCoR value calculation is to quantify the distribution of risk losses. Henceforth, the distribution of risk losses will be set through a total loss approach, specifically addressing the transfer of risk losses to insurance agencies.

A CRO may find it difficult to determine the dimension of OpRisks because of the lack of real and complete data for each individual risk. To overcome this lack, risks can be grouped into classes with similar characteristics—operational risk classes (ORCs)—thus increasing the amount of data available for each class. Although it refers to the banking sector, Basel II offers a useful benchmark for the classification of OpRisks:

  • Internal fraud: losses due to fraud, embezzlement or violations/circumvention of laws, regulations or company directives that involve at least one internal company resource;

  • External fraud: losses due to fraud, misappropriation or violations of laws by third parties;

  • Employment and safety at work: losses deriving from acts that do not comply with laws or agreements regarding employment or health and safety at work or from episodes of discrimination or nonapplication of equal conditions;

  • Customers, products and business practices: losses deriving from defaults, either involuntary or due to negligence, relating to professional obligations towards customers or the nature or characteristics of the product;

  • Damage to tangible assets: losses due to damage or destruction of tangible assets due to natural disasters or other events;

  • Interruptions of operations and malfunctions of IT systems: losses due to interruptions of operations or malfunctions of IT systems;

  • Execution, delivery and management of processes: losses due to deficiencies in the processing of operations or in the management of processes or relations with commercial counterparties and sellers.

In addition to these categories, when a company decides to transfer risks externally through specific insurance contracts, underwriting risks¹ must be considered, the evaluation of which is carried out by the insurance agency². On this basis, in the occurrence of an adverse event, it might happen that the insurance agency—despite its assessment of the underwriting risk—is insolvent and, therefore, unable to pay compensation to the company.

An estimate of OpRisk losses on a historical data set basis

The financial losses arising from OpRisks are determined based on historical losses by means of quantitative methods: the risks can then be expressed in a simple, understandable, and reliable way in terms of the expected and unexpected losses that the company might face. The CRO can also face difficulties due to a lack of hard data, i.e., quantitative information, about risks or risk classes. Quantitative methods can be integrated with qualitative methods, which have the great advantage of allowing the measurement of risk in the absence of historical loss data, when the potential impact on the company can be even greater. To perform a qualitative analysis, questionnaires or workshops can be employed, and attention should be paid to applying methodological rigor in collecting the answers to ensure their consistency. In this way, the historical vision based on loss data can be integrated with a prospective view supported by the contribution of experts. This approach can provide the company with new and important information that will benefit the quality of measurements of the OpRisk exposure.

The discussion below refers to the evaluation of losses related to a single class of risk.

The overall effect of losses due to adverse events essentially depends on two factors that are taken as the two input variables of the evaluation process: the number of past harmful events N (number or frequency of occurrences in a given interval of time) and the damage caused by individual events Xi (impact or severity of the i-th event). It is worth noting that the number of harmful events per period of time N is a random variable.

The frequency and severity of the impact are each characterized by a probability distribution, the frequency distribution and severity distribution, respectively.

The frequency distribution is a discrete probability distribution that indicates, for each class of the number of events, the probability that that number of events occurs in a specific time interval. The frequency distribution of the economic losses due to internal OpRisks can be represented by a Poisson distribution³ under a specific hypothesis, which is usually satisfied: if, for a defined time interval (e.g., 1 year) and for subsequent observations on several time intervals of equal amplitude (e.g., 10 years), the harmful events occur continuously, independently, and with steady probability (i.e., with a constant probability in each interval), then it can be demonstrated that the random variable “number of harmful events” is effectively described by a Poisson distribution. This distribution describes the frequency of random and independent events; thus, it is useful in determining the aggregate loss distribution within the timeframe under analysis, as outlined below.

Regarding the severity distribution of the events, in the most common situation, the individual impacts do not depend on the number of occurrences but are independent and identically distributed.

When determining the distribution of the individual impacts with a continuous random variable, the CRO must consider that usually the impacts deriving from internal OpRisks are characterized by strong asymmetry. More precisely, there is a high probability that the damage is rather modest, while there is a low probability that the damage is high and a very low probability of very high damage. Among the possible continuous distributions that can represent the impact of individual harmful events, the most commonly used are the log-normal, Weibull, and exponential distributions.
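
As a minimal sketch of how these candidate severity distributions can be compared on historical data (the loss values below are placeholders, not data from the paper; in practice, the company's per-event loss records for the risk class would be used):

```python
import numpy as np
from scipy import stats

# Placeholder per-event loss amounts (illustrative only).
losses = np.array([310.0, 125.5, 890.2, 64.8, 2103.9, 415.0, 233.1, 578.6])

candidates = {
    "log-normal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)  # location fixed at 0: losses are positive
    loglik = np.sum(dist.logpdf(losses, *params))  # goodness of fit (likelihood)
    ks = stats.kstest(losses, dist.cdf, args=params)  # Kolmogorov-Smirnov test
    print(f"{name:11s} log-likelihood = {loglik:8.2f}  KS p-value = {ks.pvalue:.3f}")
```

The distribution with the highest log-likelihood (and an acceptable KS statistic) would be retained as the severity model for the subsequent convolution.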

The aggregate loss distribution, combining frequency with severity within each time interval considered, is represented by the following equation:

$$\text{Aggregate loss}_{\text{time interval}} = \sum_{i=1}^{N} X_i,$$
(1)

where, as mentioned above, Xi is the loss related to a single event (loss severity) and N is the total number of events (i.e., the frequency) in the time interval.

Considering the probability of the aggregate loss per year over the timeframe under analysis leads to the creation of a distribution of aggregate losses. This distribution can be obtained as a convolution product of the frequency and severity distributions, as shown in Fig. 2 (Savelli and Clemente 2014).

Fig. 2 Generation of the distribution of aggregate losses: the convolution of the frequency distribution of events and the severity distribution (economic value) of events (Savelli and Clemente 2014, modified)

Obtaining the distribution of aggregate losses analytically would be expensive and difficult. For this reason, common statistical techniques are used, among which the best known and most widely used is Monte Carlo simulation. Monte Carlo techniques, supported by automatic systems, combine large numbers of different values of random input variables to obtain a statistically significant sample of output values. The output sample, obtained through many iterations of the Monte Carlo simulation following a defined mathematical model, tends to be distributed like the total loss function, giving an estimation of the real results (Halton 1979; Niederreiter 1992). The output values of the Monte Carlo method are statistical; in this example, they are based on the frequency distribution of occurrences and their severity in economic terms.

The steps to obtaining a correct quantification of the reference scope are as follows (a minimal code sketch after this list illustrates them):

  1. Building the model;

  2. Executing the Monte Carlo simulation;

  3. Estimating the model outcome.
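
The following Python sketch wraps these steps into two reusable helpers. It is an illustration under the assumptions discussed above (Poisson frequency, log-normal severity); the function names are ours, not the authors' original implementation.

```python
import numpy as np

def simulate_aggregate_losses(lam, mu, sigma, n_sims=10_000, seed=0):
    """Monte Carlo convolution (Eq. 1): for each simulated time interval, draw
    N ~ Poisson(lam) events and sum N log-normal severities, where mu and sigma
    are the parameters of ln(severity). Returns a sample of aggregate losses."""
    rng = np.random.default_rng(seed)
    n_events = rng.poisson(lam, size=n_sims)  # frequency draws
    return np.array([rng.lognormal(mu, sigma, size=n).sum()  # severity draws
                     for n in n_events])

def el_var_ul(sample, alpha=0.01):
    """Outcome estimation (Eq. 2): expected loss, value at risk at confidence
    1 - alpha, and unexpected loss UL = VaR - EL."""
    EL = sample.mean()
    VaR = np.percentile(sample, 100 * (1 - alpha))
    return EL, VaR, VaR - EL
```

The default of 10,000 draws reflects the minimum acceptable sample size cited in the worked example below (Mosca et al. 2008).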

On the distribution obtained (Fig. 3), the CRO can evaluate the fundamental amounts for quantitative risk management: the most probable average loss—the expected loss (EL)—the unexpected loss (UL), and the maximum exposure to risk—the value at risk (VaR):

$$\text{VaR} = \text{EL} + \text{UL}.$$
(2)
Fig. 3 Distribution of aggregate losses: the values below the mean are the expected losses, while the values above the mean and below the VaR are the unexpected losses (Basel Committee 2005, modified)

The VaR is defined as the maximum loss that can be incurred in a certain time interval and with a certain level of confidence (1 − α) (Denton et al. 2003). In other words, the VaR is the maximum loss level that will not be exceeded with probability 100(1 − α)%. Typical values of the α coefficient are 0.05, which corresponds to 95%, and 0.01, which corresponds to 99%. Let us explain with an example: a 95% risk exposure over a year means that a higher loss is expected with a probability of 5% (α) over the year. This limit can be set considering the company’s ability to cope with the specific risk—the risk capacity—e.g., the company will face only a 5% probability of suffering damage of severity above the threshold defined at the 95% (1 − α) probability level.

Therefore, the VaR can be considered back-up capital to cover any possible loss within a given confidence level—the risk capacity or level of risk tolerance. This evaluation is obviously related to the risk class considered. The other important variables, also highlighted in Fig. 3, are, therefore:

  • Expected loss—EL: the average loss (the frequency-weighted mean⁴ of the loss distribution). The histogram of the frequencies of the classes is the discrete probability curve of the loss distribution.

  • Unexpected loss—UL: the difference between the maximum exposure to risk—at a specific level of risk tolerance (VaR)—and the expected loss. This is the potential deviation above the expected loss, up to the maximum exposure at the (1 − α)% level, of the aggregate loss \(\sum_{i=1}^{N} X_i\) in each time interval.

On this basis, the CRO can decide which part of the risk losses can be retained internally and which part can be transferred externally by paying a premium to an insurance agency.

It is also worth noting that for a continuous distribution of aggregate losses, the risk exposure value at a specific risk tolerance level is given by the percentile (1 − α) of the curve; on the other hand, for discrete loss distributions represented by a frequency histogram, the risk exposure is given by the aggregate losses for each time interval multiplied by the risk tolerance factor (1 − α)%.

The analysis of the EL and UL changes over time enables the CRO to verify the effectiveness of the actions implemented to reduce the EL and subsequently the UL. The total loss distribution—by individual risk or by risk class—is used for the evaluation of the TCoR, as shown below.

A leading example

The example chosen concerns a generic logistics company and the risk that road transportation from the company’s warehouse to the final destination could undergo accidents leading to goods loss, vehicle damage, and additional costs.

To obtain a correct quantification of the reference scope, we follow the three aforementioned steps; a code sketch after this example pulls them together.

  1. Building the model

    The first step aims to estimate the characteristics of the random variables of the model that are then used in the Monte Carlo simulation.

    By analyzing historical data relating to this risk, the model input variables can be identified as:

    • The number of road accidents in a defined period of time (1 year);

    • The extent of the damage in each single accident (financial losses).

    The distributions of these two input variables need to be combined through convolution to obtain an estimation of the output (aggregate losses), as illustrated in Fig. 2.

    The Poisson distribution is hypothesized to describe the first input variable, “number of accidents”. For a period of 1 year, it can be assumed that accidents take place independently of each other and, more specifically, that the occurrence of one does not change the probability of the same type of event for a subsequent period. Then, there are no trends or time intervals in which the events are concentrated. The total time span is 10 years, from 2008 to 2017. The details of the annual number of incidents are shown in Table 1.

    From these data, the histogram of the accident frequency can be obtained, which indicates the frequency of accidents in each given year.

    Table 2 shows in detail the frequency, number, and mean of accidents in 10 years for each class of the number of accidents per year. The average number of accidents that occurred in the 10 years considered can be calculated by multiplying the number of accidents by the frequency and summing these values, obtaining 26, which is then divided by the sum of the frequencies and leads to a weighted mean of 2.6.

    The histogram of the frequencies of accidents can be generated by reporting these data in a graph, as shown in Fig. 4.

    The value of the mean reported in Table 2 can be used as an estimate for the λ parameter to build the Poisson distribution that approximates the number of accidents, as plotted in Fig. 5.

    Regarding the second input variable, i.e., the impact of individual accidents, it is hypothesized that the severity value can be considered continuous and that it follows a log-normal probability distribution. The historical data that characterize the severity of each individual accident that occurred in the 10-year period under consideration are shown in Table 3. To trace the log-normal distribution, this series of historical data was considered the Y-variable, and the natural logarithm ln of each single value was then calculated. This second series of data is distributed according to a normal distribution with a mean of 5.79 and a standard deviation of 0.70, as plotted in Fig. 6.

    Using the mean and standard deviation of this distribution, the log-normal distribution that characterizes the severity of past accidents was obtained (Fig. 7). The severity distribution of past incidents has an average of $420.98.

  2. Executing the Monte Carlo simulation

    The second step for the correct quantification of the reference scope is the convolution of the two input variables through the Monte Carlo simulation. A very large number of random values are extracted for both input variables using the obtained probability distributions to generate a sample of values representing the “aggregate annual losses by year” associated with accidents; each value is the sum of randomly drawn loss values, with the number of draws given by the randomly drawn frequency of accidents in a year (see Eq. 1). This operation implements the convolution of the frequency and severity distributions. In a Monte Carlo simulation, the sample size is a sensitive issue, chiefly in the case of OpRisks, where the focus is on high-impact, low-frequency events. This condition means that to capture events with very low frequency, the analysis must also include the tails of the event distributions. It is, therefore, necessary to generate a very large number of random events to give significance to the simulation. In this context, 10,000 random draws are considered the minimum acceptable value (Mosca et al. 2008).

  3. Estimating the model outcome

    Ultimately, by graphically representing the data obtained from the simulation, the distribution of the annual aggregate losses is drawn by tracing the histogram of the relative frequency (Y-axis in Fig. 8) of each class of annual aggregate loss values (X-axis in Fig. 8). The median value of each class is shown on the X-axis, and the percentage of occurrences is shown on the Y-axis. This distribution represents the probability of occurrence of each class median value for the annual aggregate losses. The fundamental quantities for the CRO evaluation can be calculated and represented on this distribution of potential losses. The expected loss, i.e., the expected annual loss that occurs in the event of one or more accidents, corresponds to the mean of the distribution, i.e., $1141.56. The maximum exposure to risk can then be determined as the loss that will not be exceeded in a period of 1 year in 100(1 − α)%, i.e., 99%, of cases (α = 0.01): through the percentile function of the spreadsheet, the 99th percentile of the annual loss distribution was calculated, giving $3260.41, from which the unexpected loss follows via Eq. (3).

    After quantifying the risk, the CRO decides whether to transfer it externally through an insurance program, whose fair premium is approached by starting with the expected value, as detailed below.
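
As a cross-check, the three steps can be reproduced with the helpers sketched after the methodology steps above, using the fitted parameters of this example (λ = 2.6; ln-severity mean 5.79 and standard deviation 0.70). Because the parameters are rounded and the random draws differ from the paper's own simulation, the resulting figures will be close to, but not exactly, the reported $1141.56 and $3260.41.

```python
# Reuse simulate_aggregate_losses() and el_var_ul() from the earlier sketch;
# the seed is arbitrary.
sample = simulate_aggregate_losses(lam=2.6, mu=5.79, sigma=0.70, seed=2017)
EL, VaR, UL = el_var_ul(sample, alpha=0.01)  # np.percentile plays the role of
                                             # the spreadsheet's percentile function
print(f"EL ~ ${EL:,.2f}  VaR(99%) ~ ${VaR:,.2f}  UL ~ ${UL:,.2f}")
```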

Fig. 4 Frequency of accidents (actual data): the histogram shows the frequency (Y-axis) of each number of accidents per year (X-axis) for data collected from 2008 to 2017

Fig. 5 Poisson distribution of the number of accidents per year: built with the λ parameter derived from data in the 2008–2017 period

Fig. 6 Normal distribution of ln(losses): the natural logarithm ln of each single loss value is calculated to obtain the parameters for drawing the log-normal distribution of the severity of accidents

Fig. 7 Distribution of accident severity: the log-normal distribution of accident severity has been built using the parameters of the normal distribution of ln(losses)

Fig. 8 Distribution of the annual aggregate losses: the results of the Monte Carlo simulation for the convolution of the distribution of losses and the number of accidents per year (i.e., the annual aggregate losses)

Table 1 Number of accidents per year: the number of accidents that occurred in each year from 2008 to 2017, ranging from 0 accidents in 2015 to 7 accidents in 2011
Table 2 Frequency of accidents, number of accidents and mean: for each class of the number of accidents (from 0 to 7, first column), the table shows the number of years (frequency, second column) and the total number of accidents per class (third column)
Table 3 Distribution of losses: the value of the loss for each accident, split by year of occurrence

Analysis of the core concept

The second step of the methodology discusses how OpRisks are linked to the insurance agency premium and evaluated through the TCoR approach.

The link between OpRisk losses and the insurance agency’s premium

To illustrate the risk transfer to insurance agencies, we refer to Fig. 3 for a single risk or a single risk class, to Fig. 8 for the case study, and to the following formula:

$$\text{Unexpected loss (UL)} = \text{Value-at-Risk (VaR)} - \text{Expected loss (EL)}.$$
(3)

Before introducing the TCoR, it is useful to recall that the optimization of the OpRisk transfer to insurance depends on the following three factors:

  • The insurance premium P: how much the company periodically pays to the insurance agency for loss coverage;

  • The deductible D: the minimum loss left to the company. The deductible can be absolute (systematic deduction) or relative (total compensation without deduction for losses greater than the deductible);

  • The maximum insurable limit L (limit loss): the maximum loss that the insurance agency undertakes to refund the company (the maximum coverage does not refer to a single event but rather to the cumulative losses in the insured period).

As a first approximation, according to the total loss theoretical approach, the fair premium equals the expected value of the global compensation paid by the insurer in the coverage period. Therefore, it is set to the average losses of the company with reference to the distribution of aggregate losses.

$$\text{Fair premium} \approx \text{EL}.$$
(4)

The premium can be defined at different levels (a short numeric sketch after this list illustrates the build-up):

  • Fair premium (F.P.): cost of the risk coverage formulated by the insurance, which depends only on the expected losses that are net of insurance charges;

  • Pure premium (P.P.): fair premium + security loading; the security loading is usually added by the insurance agency in accordance with its marginalization policies to market a “risk price” to companies;

  • Tariff premium (T.P.): pure premium + fixed loading; the fixed loading (or surcharge) covers the insurance agency’s administrative costs and includes the acquisition, collection, and management commission costs. It is approximately 10–15% of the fair premium.
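
A minimal numeric sketch of this premium build-up, using the expected loss of the leading example; the 20% security loading is an assumption chosen for illustration, while the 12% fixed loading falls within the 10–15% range quoted above:

```python
expected_loss = 1141.56              # EL of the aggregate loss distribution ($)

fair_premium = expected_loss         # F.P. ~= EL (Eq. 4)
pure_premium = fair_premium * 1.20   # P.P. = F.P. + assumed 20% security loading
tariff_premium = pure_premium + 0.12 * fair_premium  # T.P. = P.P. + fixed loading
                                                     # (~10-15% of the F.P.)
print(f"F.P. = ${fair_premium:.2f}  P.P. = ${pure_premium:.2f}  "
      f"T.P. = ${tariff_premium:.2f}")
```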

The TCoR evaluation of OpRisks transferred to insurance agencies

According to the total loss approach, which yields a positively skewed compound Poisson distribution, CROs can determine the distribution of aggregate losses. Figure 9 shows an example for the case discussed.

Fig. 9 Insured and uninsured risk: the losses exceeding the deductible level D but not the insured limit L form the insured risk, and the values below the deductible D and the losses exceeding the insured limit L form the uninsured risk

Figure 9 shows the distribution of aggregate losses underlying the quantification of OpRisks in a defined unit of time, typically 1 year. The cost of the risk between the deductible D and the maximum insured limit L is transferred to the insurance agency and represents a cost indicated by the term CostIR (insured costs), equivalent to the premium. The value up to D and the value beyond L on the aggregate loss curve represent the uninsured risk, also indicated by the term retained risk. This is the level of risk that the company takes on, and it is measured by a cost, called CostUR (uninsured costs).

Therefore, the deductible D and maximum insurable limit L reduce the area of the insured risk, increase the level of financial losses borne by the company—the retention level—and consequently scale the price of the pure premium that insurance agencies require from companies. From this perspective, these two parameters are the fundamental conditions for negotiating the amount of risk transferred to the insurance agency and the price of the premium (P.P.) paid by the company.

From a CRO’s point of view, the TCoR value calculation equals the sum of the cost of insurance (CostIR) and the costs of the risks not covered by insurance (CostUR)⁵:

$$\text{TCoR} = \text{Cost}_{\text{IR}} + \text{Cost}_{\text{UR}}.$$
(5)

The insurance costs depend on the F.P. and, therefore, on the deductible and insurable limit levels that are agreed upon with the insurance agency. Likewise, the uninsured costs depend on the deductible and losses above the insured limit. Therefore, this can be expressed as follows:

$$\text{TCoR} = \text{Cost}_{\text{IR}}(D, L) + \text{Cost}_{\text{UR}}(D, L).$$
(6)

As anticipated, the insurance cost net of the insurance agency charges is the F.P., and therefore, we have

$$\text{Cost}_{\text{IR}} \cong \text{F.P.}$$
(7)

In the theory of actuarial sciences, a correct reference for the value of the F.P. is given by the EL—see Eq. (4)—of the distribution of aggregate losses in a given time interval; therefore, the cost of the insured risk CostIR would be substantially equivalent to the F.P. However, since the cost of the insured risk depends on D and L, its value is more correctly estimated by the P.P. The pure premium takes into account the security loading of the insurance agency and is determined by considering the levels of D and L negotiated with the company. Then:

$$\text{Cost}_{\text{IR}}(D, L) \cong \text{P.P.}$$
(8)

In adopting an insurance program, CROs deal with a changed aggregate loss curve, as illustrated in Fig. 10 (right) (Management Solutions 2014).

Fig. 10 Construction of the curve of the retained (uninsured) loss: the uninsured costs, i.e., those below the deductible D and above the maximum insured limit L (Management Solutions 2014), are grouped on a new curve showing the costs retained by the company

Therefore, new values of EL, UL, and VaR must be considered (see Fig. 11) with insurance (w/i):

  • A different EL, called ELw/i

  • A different VaR, called VaRw/i

  • A different UL (always referring to the confidence level 1 − α), called ULw/i = VaRw/i − ELw/i.

Fig. 11 Reconfigured aggregate loss curve of the retained (uninsured) risk: new values for expected loss with insurance and value-at-risk with insurance are identified, allowing the CRO to evaluate the uninsured costs with insurance (Bettanti and Lanati 2019)

On this basis, CostUR (uninsured costs) is VaRw/i and can be defined as:

$$\text{Cost}_{\text{UR}} = \text{EL}_{w/i} + \text{UL}_{w/i}(1 - \alpha).$$
(9)

The TCoR formula can be rewritten as:

$$\text{TCoR} = \text{P.P.} + \text{EL}_{w/i} + \text{UL}_{w/i}(1 - \alpha).$$
(10)

The TCoR, therefore, depends on D and L, i.e., on the retention level. A higher deductible and a lower maximum insured level imply an increase in the retention level—i.e., the level of risk that the company takes on—and vice versa. CROs, based on the company’s risk appetite and consequently the consistent retention level, will have to choose the combination of D and L that optimizes the TCoR.
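
The following sketch puts Eqs. (5)–(10) together. Given a Monte Carlo sample of annual aggregate losses (e.g., from the simulate_aggregate_losses() helper above), it applies the deductible D and limit L, prices the insured layer as a pure premium with an assumed 20% security loading, and evaluates the retained costs at confidence 1 − α. It models an absolute deductible applied to the annual aggregate losses, consistent with the cumulative reading of L above; a per-event or relative deductible would change the clipping logic, and the helper name tcor() is ours.

```python
import numpy as np

def tcor(aggregate_losses, D, L, alpha=0.01, loading=0.20):
    """TCoR = P.P. + EL_w/i + UL_w/i(1 - alpha)  (Eq. 10), on an annual
    aggregate basis with an absolute deductible D and insured limit L."""
    x = np.asarray(aggregate_losses)
    # Insured risk: the layer between D and L is transferred to the insurer.
    insured = np.clip(x, D, L) - D
    fair_premium = insured.mean()                # F.P. ~= expected insured loss
    pure_premium = fair_premium * (1 + loading)  # P.P. = F.P. + security loading
    # Retained (uninsured) risk: losses below D plus the tail above L.
    retained = np.minimum(x, D) + np.maximum(x - L, 0.0)
    EL_wi = retained.mean()                              # EL with insurance
    VaR_wi = np.percentile(retained, 100 * (1 - alpha))  # VaR with insurance
    UL_wi = VaR_wi - EL_wi                               # UL with insurance
    return pure_premium + EL_wi + UL_wi   # Cost_IR(D, L) + Cost_UR(D, L)
```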

Outcome implementation

The last step of the methodology addresses the outcome implementation. On the grounds of the application of the results of the previous analyses, CROs are able to optimize the TCoR value by cost-effectively balancing the insurance premiums and retained losses.

The company’s OpRisk TCoR optimization

Based on the definition in Eq. (10), CROs can optimize the TCoR by finding the best balance among its three components: the pure premium (P.P.), the expected loss with insurance ELw/i, and the unexpected loss with insurance ULw/i(1 − α). The last two components set the company’s OpRisk retention level through the deductible (D) and maximum insurable limit (L) thresholds. In principle, when aiming to reduce the TCoR by lowering the pure premium, CROs accept a higher deductible (D) and a lower maximum insurable limit (L). As indicated, this means increasing the OpRisk retention level in exchange for a lower transfer price, i.e., the pure premium. Using a TCoR sensitivity simulation program (Management Solutions 2014), the complex trade-off emerges: as shown in Fig. 12, the increase in ELw/i (the expected value of the risks retained by the company) approximately offsets the reduction in the pure premium (Savelli and Clemente 2014). Considering ULw/i(1 − α) instead, a reduction in L leaves the company with a longer uncovered “tail”, which nonlinearly increases the losses suffered by the company and thus the TCoR.

Fig. 12 Uninsured risk curve “reconfigured” and “conditioned”: the shifting of the D and L values changes the area under the reconfigured curve of uninsured losses, with nonlinear behavior (Bettanti and Lanati 2019)

Figure 12 illustrates this dynamic: the area under the curve of the aggregated losses—with insurance coverage—increases nonlinearly when the CRO raises the retention level, mostly on the tail of the original UL losses to the right of L. Accordingly, CROs need to pay close attention when decreasing the pure premium in an attempt to reduce the TCoR, since doing so could increase the retained exposure and produce an adverse net effect. Balancing the pure premium and the retention level therefore requires CROs to find the optimum through precise simulation; a grid-search sketch is given below. In other words, the optimization of the TCoR is strictly linked to the amount of risk transferred to insurance agencies. Even accepting a higher pure premium can produce a lower TCoR through its beneficial impact on the uninsured costs (CostUR).
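
A brute-force sketch of this optimization, reusing simulate_aggregate_losses() and tcor() from the earlier sketches; the grids, the loading, and the confidence level are illustrative assumptions:

```python
import numpy as np

# Simulated annual aggregate losses for the risk class under analysis.
sample = simulate_aggregate_losses(lam=2.6, mu=5.79, sigma=0.70, seed=7)

grid_D = np.linspace(0, 1000, 11)    # candidate deductibles ($)
grid_L = np.linspace(2000, 6000, 9)  # candidate maximum insured limits ($)

# Evaluate the TCoR on every (D, L) pair and keep the minimizing combination.
best_tcor, best_D, best_L = min(
    (tcor(sample, D, L, alpha=0.01, loading=0.20), D, L)
    for D in grid_D
    for L in grid_L
)
print(f"min TCoR ~ ${best_tcor:,.2f} at D = ${best_D:.0f}, L = ${best_L:.0f}")
```

In practice, a CRO would map the full TCoR surface over (D, L) rather than only its minimum, to see the nonlinear tail effect described above.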

Conclusions and discussion

Usually, CROs perform the optimization of OpRisk costs through qualitative approaches based on their risk appetite and risk capacity. Our study demonstrates that when the target is OpRisk cost minimization, CROs can use quantitative and analytical methods to cost-effectively balance insurance premiums and retained losses. In fact, by applying the TCoR method, CROs can pursue decision-making processes that are driven by the explained trade-off between the cost of insurance premiums and the retained risk losses. In this regard, as illustrated in this paper, by carefully focusing on a TCoR-based premium, CROs can choose between multiple insurance programs featuring different coverage levels and premiums and can make their relationship with the insurance agencies more meaningful and productive. This type of strategy pursues the most valuable balance between the company’s retention level—consistent with the company’s risk appetite—and the premium paid to insurance agencies.

Implications for research and practice

The implications for research and practice are twofold. First, given that in the literature, the relationship between business and insurance agencies from the enterprise’s perspective seems to attract little attention, this paper offers a scientific and analytical study of how CROs can beneficially deal with insurance agencies. Second, given that the reference literature on the TCoR method seems to cover insights that are limited to specific areas and are explained in a qualitative way, this article provides a comprehensive, quantitative TCoR framework for CROs to use in analyzing the economic handling of OpRisks and their transfer to insurance agencies.

Limitations and future studies

This paper shows how TCoR value minimization deals with a cost-effective balance of the pure premium, deductible (D), and maximum insurable limit (L). The main limitations are the assumptions that the chosen TCoR value definition does not include surcharges—administrative expenses, markup, etc.—the cost of capital of the company, and the premium loading factor of the insurance agency. Considering these limitations, with this theoretical framework, future studies can explore the impacts of considering the tariff premium instead of the pure premium; of including the cost of capital in the TCoR formula, reducing the TCoR value while not altering its qualitative behavior; and of introducing a loading factor, causing the pure premium’s behavior to be nonproportional to changes in the retention level and the limit.

Notes

  1. The underwriting risk derives from the chance that the insurance agency’s incoming premiums are not sufficient to cover customer claims plus expenses. To quantify this risk, insurance agencies generally evaluate their exposure to the frequency of customer claims, major accidents, and catastrophes. This evaluation refers to the most significant part of the insurance agency’s portfolio and considers the gross and net of the insurance agency’s reinsurance program.

  2. The underwriting risk categories are defined by Solvency II, which is the insurance industry’s benchmark and encompasses the underwriting risks in nonlife fields, life fields, and health fields.

  3. The Poisson distribution is a discrete probability distribution that expresses the likelihood of a given number of events occurring independently in a given time interval, once λ has been set as the average number of events. When the events under study are related to other random events, λ can also embed its own probability distribution; in this case, the Poisson distribution underpins a mixing (or mixed) feature. In brief, the probability that exactly K accidents will occur in a period of T years (T time intervals of 1 year each) is:

    $$P_T(K) = \frac{e^{-\lambda T}(\lambda T)^K}{K!}.$$
  4. The frequency-weighted mean is calculated as the sum, over all classes, of the class value—i.e., the number of adverse events (0, 1, 2, etc.) in a frequency distribution, or the class of loss values in an aggregate loss distribution for the chosen time interval (1 year, 6 months, 1 month, etc.)—multiplied by the frequency of occurrence of that class over the observation period (5, 10, 20 years, etc.), all divided by the total number of time intervals analyzed or by the number of draws in the simulation phase.

  5. The chosen TCoR definition does not include surcharges (administrative expenses, markup, etc.), the cost of capital of the company, and the premium loading factor of the insurance agency.