Based on stylized facts from the literature in the previous section, we develop an agent-based model of insurance–reinsurance systems. To make it possible to study both systemic aspects and characteristics of individual elements, we choose a modular design, so that agents of different types can be switched on and off individually. We discuss a range of relevant applications in Sect. 5.
Agents
The model, illustrated in Fig. 2, includes five types of agents: insurance customers, insurers, reinsurers, shareholders, and catastrophe bonds. Customers buy insurance coverage and pay premiums. Insurance firms may obtain reinsurance from either traditional reinsurance companies or catastrophe bonds. Insurance and reinsurance contracts oblige the customer (or the insurer obtaining reinsurance) to make regular premium payments, but entitle them to claim reimbursements for covered damages under certain conditions.
Insurance and reinsurance firms (discussed in more detail below) are the core of the model. Most of the decision making capacity in the model lies with them. They consult risk models to support their decision making. They further pay dividends to shareholders.
Customer side
Insurance customers (households)
Customers are modeled in a very simple way. They own insurable risks which they attempt to insure. They approach one insurer per time step and accept the current market premium if the insurer offers to underwrite the contract. The value of the insurable risks is normalized to 1 monetary unit each and the total number of insurable risks is fixed. The risks are not destroyed but are assumed repaired to their previous value after each damage incident.Footnote 11
Perils and peril regions
It is convenient to distinguish catastrophic and non-catastrophic perils. Catastrophic perils are those affecting most of the risks of a particular peril region, e.g., resulting from a hurricane in Florida, an earthquake in Japan, or a flood in Southern England. While such perils are rare, they can lead to heavy losses and are thus a primary reason for reinsurance. Non-catastrophic perils, on the other hand, typically affect only individual risks and are more frequent and uncorrelated in time. Examples of this type of peril are car accidents, residential fires, or retail burglaries.
In our model we only consider catastrophic perils, assuming that the effect of the non-catastrophic ones is minor, sometimes covered by deductibles, and subject to averaging out across many risks as a result of the central limit theorem. Catastrophic perils are modeled as follows:
- Catastrophe event times are determined by a Poisson process, i.e., event separation times are distributed exponentially with parameter \(\lambda \).
- Total damage follows a power-law with exponent \(\sigma \) that is truncated at total exposure (since insurance payouts cannot be higher than the amount insured).
- Total damage is assigned to individual risks following a beta distribution calibrated to add up to the total damage.
In the model each insurable risk belongs to one of n peril regions, see Fig. 3. For simplicity we assume that all the risks of the respective peril region are affected by every catastrophic peril hitting that region. In this study we typically consider \(n=4\).
Risk event and loss distributions
Time distribution of catastrophes We assume that the number of catastrophes in the different peril regions follows a Poisson distribution, which means that the separation time between them is exponentially distributed with density function
$$\begin{aligned} e(t) = \lambda e^{-\lambda t}, \end{aligned}$$
(1)
where \(\lambda \) is the parameter of the exponential distribution and the inverse of the average time between catastrophes. We generally set \(\lambda =3/100\), that is, a catastrophe occurs on average every 33 time periods in each peril region. We draw all the random variables necessary to set the risk event profile (when a catastrophe occurs and how large the damages are) at the beginning of the simulation. In order to compare the n different “worlds” with different risk model diversity settings, we set the same M risk event profiles for the M replications of all n risk model diversity settings. That is, we compare the same hypothetical “worlds” with different risk model settings, but with the same catastrophes happening at the same times and with the same magnitudes.
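As an illustration (the function name and defaults below are ours, not taken from the model's released code), the event times for one peril region could be drawn like this:

```python
import numpy as np

def draw_event_times(lam=3/100, horizon=1000, rng=None):
    """Draw catastrophe event times for one peril region: separation
    times are Exponential(lam), accumulated until the horizon."""
    rng = np.random.default_rng(rng)
    times = []
    t = rng.exponential(1 / lam)   # numpy takes the scale 1/lambda
    while t < horizon:
        times.append(t)
        t += rng.exponential(1 / lam)
    return np.array(times)

# With lambda = 3/100 we expect roughly 30 events over 1000 time periods.
events = draw_event_times(rng=42)
```

Seeding the generator with a fixed value reproduces the same event profile, which is how identical "worlds" can be compared across settings.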
Global loss distribution We use a Pareto probability distribution \(\varphi \) for the total damage inflicted by every catastrophe, since historically they follow a power law. The Pareto distribution is defined as
$$\begin{aligned} \varphi (D_x) = \frac{\sigma }{D_x^{\sigma +1}}, \end{aligned}$$
(2)
where \(D_x\) are the values of the damages caused by the catastrophes. We generally set the exponent \(\sigma =2\). The distribution is truncated with a minimum (below which the damage would be too small to be considered a catastrophic event) and a maximum. The maximum is given by the value of insured damages. The density function is therefore:
$$\begin{aligned} {\tilde{\varphi }}({D_x}) = {\left\{ \begin{array}{ll} 0 &{} \quad 1 \le D_x, \\ \frac{\varphi (D_x)}{\int _{0.25}^{1} \varphi (D_x)d D_x} &{}\quad 0.25\le D_x\le 1, \\ 0 &{} \quad D_x \le 0.25. \end{array}\right. } \end{aligned}$$
(3)
Like the separation times, the damages of the catastrophes are drawn at the beginning of the simulation and are the same for the different risk model settings. We denote the total normalized losses drawn from this truncated Pareto distribution as \(L_x\).
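The truncated Pareto of Eq. (3) can be sampled by inverse transform. The helper below is our own sketch (names and defaults are assumptions, not the model's code); integrating \(\varphi \) from \(d_{min}\) to \(D\) and normalizing gives a closed-form CDF that can be inverted directly:

```python
import numpy as np

def draw_total_loss(u, sigma=2.0, d_min=0.25, d_max=1.0):
    """Inverse-transform sample from phi(D) ~ sigma / D**(sigma+1)
    truncated to [d_min, d_max]; `u` is uniform on (0, 1)."""
    a = d_min ** -sigma
    b = d_max ** -sigma
    # Truncated CDF: F(D) = (a - D**-sigma) / (a - b); solve F(D) = u.
    return (a - u * (a - b)) ** (-1.0 / sigma)

rng = np.random.default_rng(7)
losses = draw_total_loss(rng.uniform(size=100_000))
# u = 0 and u = 1 map to the truncation bounds d_min and d_max.
```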
Individual loss distribution
For the sake of simplicity we assume all risks in the region to be affected by the catastrophe, albeit with a different intensity. To determine the specific distribution of the known total damage across individual risks we use a beta distribution, defined as
$$\begin{aligned} \beta (d_x) = \frac{\varGamma (g+h)d_x^{g-1}(1-d_x)^{h-1}}{\varGamma (g)\varGamma (h)}, \end{aligned}$$
(4)
where \(\varGamma \) is the Gamma function and \(d_x\) is in this case the individual loss inflicted by the catastrophe to every individual risk. The two parameters g and h determine the shape of the distribution and define the expected value of the beta distribution which is,
$$\begin{aligned} E[d_x] = \frac{g}{g+h}. \end{aligned}$$
(5)
Since the total loss inflicted by the catastrophe is \(L_x\) and this should match the expected value (for large numbers of risks), we use this fact to compute h for every catastrophe while always setting \(g=1\). That is,
$$\begin{aligned} L_x = \frac{1}{1+h}. \end{aligned}$$
(6)
Solving for h we get
$$\begin{aligned} h = \frac{1}{L_x} -1. \end{aligned}$$
(7)
The shape of the individual loss distribution depends on the total loss value and has to be adjusted for every catastrophe. We draw as many values from the distribution as there are risks in the peril region. Finally, the claims received by insurer j from all risks i insured by j are computed as
$$\begin{aligned} \mathrm{Claims}_{x,j} = \sum _i {\left\{ \begin{array}{ll} \min (e_i, d_{x,i} \cdot v_i) - Q_i &{}\quad Q_i \le d_{x,i} \cdot v_i, \\ 0 &{} \quad d_{x,i} \cdot v_i \le Q_i. \end{array}\right. } \end{aligned}$$
(8)
where \(e_i\) is the excess of the insurance contract, \(d_{x,i}\) is the individual loss, \(v_i\) the total value of the risk and \(Q_i\) is the deductible. For convenience, we generally have \(v_i=1\).
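Equations (4)-(8) can be sketched as follows (the function names are ours and the snippet is a hypothetical illustration rather than the model's actual implementation):

```python
import numpy as np

def individual_losses(L_x, n_risks, rng):
    """Spread the total normalized loss L_x over n_risks individual
    risks using a Beta(g=1, h) distribution with mean 1/(1+h) = L_x."""
    h = 1.0 / L_x - 1.0
    return rng.beta(1.0, h, size=n_risks)

def claims(d, excess, deductible, value=1.0):
    """Claims per Eq. (8): damage d*v is reimbursed above the
    deductible Q and capped at the excess e; below Q nothing is paid."""
    loss = d * value
    return np.where(loss > deductible,
                    np.minimum(excess, loss) - deductible,
                    0.0)

rng = np.random.default_rng(0)
d = individual_losses(L_x=0.5, n_risks=10_000, rng=rng)
total_claims = claims(d, excess=1.0, deductible=0.1).sum()
```

For \(L_x=0.5\), \(h=1\) and the beta distribution reduces to a uniform, so the sample mean of the individual losses is close to the total normalized loss, as required.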
Insurer side
Firms, capital, entry, exit
The number of firms in the model at time t is \(f_{t}=i_{t}+r_{t}\), of which \(i_{t}\) are insurance firms and \(r_{t}\) are reinsurance firms. The number of firms is dynamic and endogenous with initial values \(i_{0}\) and \(r_{0}\).
Market entry is stochastic with constant entry probabilities for insurers (\(\eta _i\)) and reinsurers (\(\eta _r\)). New insurance firms have a given initial capital \({\overline{k}}_i\) and new reinsurance firms have initial capital \({\overline{k}}_r\). These are both constants, chosen so that \({\overline{k}}_r\) is substantially larger than \({\overline{k}}_i\).
Market exit occurs at bankruptcy or when insurers or reinsurers are unable to find enough business to employ at least a minimum share \(\gamma \) of the cash that they hold for \(\tau \) time periods (we calibrate the model so that one time period is roughly a month). Since the return on capital would be extremely low in that case, insurers and reinsurers prefer to leave the market or focus on other lines of business. We typically set the parameters to \(\gamma _i=0.6\), \(\tau _i=24\) for insurance firms and to \(\gamma _r=0.4\), \(\tau _r=48\) for reinsurance firms. That is, insurance firms exit if they employ less than 60% of their capital for 24 months, reinsurance firms if they employ less than 40% of their capital for 48 months.Footnote 12
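The exit rule can be stated compactly; the sketch below uses our own naming and assumes the utilization history is tracked per firm:

```python
def should_exit(utilization_history, gamma, tau):
    """Exit rule: leave the market if the share of capital employed
    stayed below gamma for the last tau consecutive periods."""
    if len(utilization_history) < tau:
        return False
    return all(u < gamma for u in utilization_history[-tau:])

# Insurer parameters: gamma_i = 0.6, tau_i = 24 months.
assert should_exit([0.5] * 24, gamma=0.6, tau=24)
assert not should_exit([0.5] * 23 + [0.7], gamma=0.6, tau=24)
```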
Firms obtain income from premium payments and interest on capital \(k_{j,t}\) (of firm j at time t) at interest rate \(\xi \). Firms also cover claims and may attempt to increase capacity by either obtaining reinsurance or issuing CAT bonds. They pay dividends at a rate \(\varrho \) of positive profits. Firms decide whether or not to underwrite a contract based on whether their capital \(k_{j,t}\) can cover the combined value-at-risk (VaR) of the new and existing contracts in the peril region with an additional margin of safety corresponding to a multiplicative factor \(\mu \). They additionally try to maintain a diversified portfolio with approximately equal values at risk across all n peril regions.
Policyholders, shareholders, catastrophe bonds, and institutional investors that would buy catastrophe bonds (such as pension funds and mutual funds) are not represented as sophisticated agents in this model. Shareholders receive dividend payments. Institutional investors buy catastrophe bonds at a time-dependent price that follows the premium price. They do not otherwise reinvest or have any impact on the companies’ policies.
CAT bonds pay claims as long as they are liquid and are dissolved at bankruptcy or otherwise at the scheduled end of life (at which point the remaining capital is paid out to the owners). The modular setup of the ABM allows us to run replications with and without reinsurance and CAT bonds.
Table 2 Risk model diversity (underestimated (U) and overestimated (+) peril regions) and risk model usage by risk model diversity setting (right)
Dividends
Firms in the simulation pay a fixed share of their profits as dividends in every iteration, provided profits were positive. In time periods in which the firm makes a loss, no dividend is paid. That is,
$$\begin{aligned} R = \max (0, \varrho \cdot \mathrm {profits}), \end{aligned}$$
(9)
where R are the dividends and \(\varrho \) is the share of the profits that is paid as dividends. For the results that we report in this paper we have fixed \(\varrho =0.4\).
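The payout rule of Eq. (9) is a one-liner; the function name below is our own:

```python
def dividends(profits, rho=0.4):
    """Dividend payout R = max(0, rho * profits): a fixed share of
    positive profits, nothing in loss-making periods."""
    return max(0.0, rho * profits)

assert dividends(100.0) == 40.0   # 40% of positive profits
assert dividends(-50.0) == 0.0    # no dividend on losses
```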
Risk models
VaR Each insurance and reinsurance firm employs only one risk model, which it uses to evaluate whether or not it can underwrite more risks at any given time. We assume that risk models are imperfect in order to allow investigation of the effects of risk model homogeneity and diversity.
There is empirical evidence that risk models are inaccurate (see Sect. 2). In some peril regions they tend to underestimate risk while in others they overestimate it. In our model risk models are inaccurate in a controlled way: they are calibrated to underestimate risks in exactly one of the n peril regions and to overestimate the risks in all other peril regions by a given factor \(\zeta \) (see Table 2). Since the n peril regions are structurally identical, with about the same number of risks and with risk events governed by the same stochastic processes, this allows up to n different risk models of identical quality.Footnote 13
The risk models use the VaR to quantify the risk of the insurers in each of the peril regions. The VaR is a statistic that measures the level of financial risk within an insurance or reinsurance firm over a specific time frame. It is employed in some regulation frameworks including Solvency II, where it is used to estimate the Solvency Capital Requirement. Under Solvency II, insurers are required to have \(99.5\%\) confidence that they could cope with the worst expected losses over a year. That is, they should be able to survive any year-long sequence of catastrophes with a recurrence interval of 200 years or less. The probability of catastrophes generating net losses exceeding the capital of the insurer in any given year is thus \(\alpha =\frac{1}{\mathrm {recurrence\, interval}}=\frac{1}{200}= 0.005\). For a random variable X representing the losses of the portfolio of risks of the insurer under study, the VaR with exceedance probability \(\alpha \in [0, 1]\) is the \(\alpha \)-quantile defined as
$$\begin{aligned} \mathrm{VaR}_{\alpha }(X) = \mathrm {inf}\{x \in {\mathbb {R}} : P(X > x) \le \alpha \}. \end{aligned}$$
(10)
This means that, e.g., under Solvency II, the capital that the insurer is required to hold can be computed with the \(\mathrm{VaR}_{0.005}(X)\).
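For a sample of simulated losses, this quantile can be estimated empirically. A minimal sketch (our own helper, not the paper's code) with a distribution whose VaR is known in closed form:

```python
import numpy as np

def var(losses, alpha=0.005):
    """Empirical VaR_alpha: the (1 - alpha)-quantile of the losses,
    i.e. the smallest x with P(X > x) <= alpha."""
    return float(np.quantile(losses, 1.0 - alpha))

rng = np.random.default_rng(1)
sample = rng.exponential(size=1_000_000)
# For Exponential(1) the exact value is -ln(alpha) = -ln(0.005) ~ 5.3.
v = var(sample)
```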
Computation of the firm’s capital requirement A firm’s VaR can be derived from the firm’s risk model as a margin of safety factor over the VaR of the entire portfolio: companies should hold capital \(k_{j,t}\) such that
$$\begin{aligned} k_{j,t} \ge \mu \mathrm{VaR}(X_1+X_2+X_3+ \cdots + X_N), \end{aligned}$$
where the \(X_i\) represent all sources of cash flow for the company (including investment returns, credit risk, insurance losses, premium income, expenses, operational failures, etc.) and \(\mu \ge 1\) is a factor for an additional margin of safety. In other words the firm’s whole balance sheet from \(t_0\) to \(t_0+1\) year must be modeled and capital must be sufficient for the firm to have a positive balance sheet \(99.5\%\) of the time as a minimum. Due to catastrophes, this condition can occasionally be violated, e.g., if the company takes a loss such that \(k_{j}\) is suddenly and severely reduced. In the present model, the companies will in such cases stop underwriting until enough capital is recovered.
Estimation of the VaR in the simulation We opt for a simplified approximation of the true VaR computation in the firms’ risk models. This simplification is necessary to avoid otherwise prohibitively long computation times. This section elaborates on why the simplification is necessary and how we nevertheless ensure a largely accurate result.
Computing the VaR over the firm’s portfolio requires computation of the convolution of the distributions of damages and those of the frequency of catastrophes both over time and in all peril regions while also taking into account reinsurance contracts. Reinsurance contracts essentially remove part of the support of the damage distribution and make them non-continuous.Footnote 14 Estimating the non-continuous distribution of cash flows would require a Monte Carlo approach. Since this is necessary for every underwriting decision, it would increase the computation time required for the ABM by orders of magnitude.
We argue that to study the effects of systemic risk of risk model homogeneity, it is not necessary to compute the \(\mathrm{VaR}\) combined for all peril regions and over the entire year.Footnote 15 We will make two simplifications: (1) working with the values at risk due to individual catastrophes in the model and (2) considering the VaR separately by peril region and combining the peril regions with a maximum function.
(1) The focus on individual catastrophes instead of on 12-month periods transforms the timescale in the results of our simulations, but the type of dynamics and the shape of the distributions obtained are the same. Evidently, bankruptcies should be more frequent in our approach, since we only hold capital to survive individual catastrophes with a return period of 200 years, not catastrophe recurrence over 12-month intervals. However, bankruptcy frequency is the only aspect that is affected.Footnote 16
(2) Further, the computationally expensive convolution of distributions across peril regions can be avoided, since a good approximation can be obtained with the maximum function over the VaRs of the individual peril regions. To see this, consider two extreme cases. If, on the one hand, the separation times of catastrophes were perfectly correlated between all n peril regions, so that catastrophes always coincided, we would have \(\mathrm{VaR}^c=\mathrm{VaR}^1+\mathrm{VaR}^2+\cdots +\mathrm{VaR}^n\). If, on the other hand, catastrophes never coincided, we would have \(\mathrm{VaR}^c=\max (\mathrm{VaR}^1,\mathrm{VaR}^2, \ldots , \mathrm{VaR}^n)\). The first scenario overestimates the VaR; the second underestimates it by neglecting the probability of multiple catastrophes coinciding. In other words, there is a residual VaR term \(\mathrm{VaR}^{r}\) to account for this:
$$\begin{aligned} \mathrm{VaR}^c=\max (\mathrm{VaR}^1,\mathrm{VaR}^2, \ldots , \mathrm{VaR}^n) + \mathrm{VaR}^{r}. \end{aligned}$$
We choose our parameters such that the probability of such a coincidence happening is small,
$$\begin{aligned} \begin{array}{r l} P_\mathrm{coincidence}&{}=1-\left( {\begin{array}{c}n\\ 0\end{array}}\right) (1-P_\mathrm{peril})^n-\left( {\begin{array}{c}n\\ 1\end{array}}\right) P_\mathrm{peril}(1-P_\mathrm{peril})^{n-1}\\ &{}=1-\left( {\begin{array}{c}n\\ 0\end{array}}\right) (e^{-\lambda })^n-\left( {\begin{array}{c}n\\ 1\end{array}}\right) (1-e^{-\lambda })(e^{-\lambda })^{n-1}. \end{array} \end{aligned}$$
Namely, we choose \(\lambda =3/100\), \(n=4\), hence \(P_\mathrm{coincidence}\approx 0.005\). Consequently, our \(\mathrm{VaR}^{r}\) is smaller than the Solvency II capital requirement threshold. We can therefore avoid performing the prohibitively resource-consuming exact computation of the VaR in the risk models and approximate
$$\begin{aligned} \widetilde{\mathrm{VaR}^c}=\max (\mathrm{VaR}^1,\mathrm{VaR}^2, \ldots , \mathrm{VaR}^n) \end{aligned}$$
$$\begin{aligned} k_{j,t} \ge \mu \widetilde{\mathrm{VaR}^c} = \mu \max (\mathrm{VaR}^1,\mathrm{VaR}^2, \ldots , \mathrm{VaR}^n). \end{aligned}$$
(11)
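The coincidence probability and the max-approximation of Eq. (11) can be checked numerically; the short sketch below (our own naming) uses the parameters \(\lambda =3/100\) and \(n=4\):

```python
import numpy as np

lam, n = 3 / 100, 4
p_none = np.exp(-lam)              # P(no catastrophe in one region per period)
p_peril = 1 - p_none
p_coincidence = 1 - p_none**n - n * p_peril * p_none**(n - 1)
# p_coincidence ~ 0.005: two or more regions are hit in the same period
# only rarely, so the max over regional VaRs is a good approximation.

def capital_required(var_by_region, mu=1.0):
    """Capital requirement under the max-approximation of Eq. (11)."""
    return mu * max(var_by_region)

k_min = capital_required([0.8, 1.1, 0.9, 1.0], mu=2.0)
```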
Balancing of portfolios based on VaR in the simulation In addition, and especially when getting close to the limit \(k_{j,t} \approx \mu \mathrm{VaR}^i\), firms prefer to underwrite risks in different peril regions such that the portfolio remains approximately balanced, keeping a similar amount of risk in every peril region. More specifically, a new contract is accepted outright only if it lowers the standard deviation of the \(\mathrm{VaR}\) across peril regions; the contract increases the standard deviation when
$$\begin{aligned} \mathrm{std}(\mathrm{VaR}^{1*},\mathrm{VaR}^{2*}, \ldots , \mathrm{VaR}^{n*}) > \mathrm{std}(\mathrm{VaR}^1,\mathrm{VaR}^2, \ldots , \mathrm{VaR}^n), \end{aligned}$$
(12)
where \(\mathrm{VaR}^{n*}\) would be the estimated VaR in every peril region if the new contract is accepted. If the standard deviation is higher, firms will only be willing to accept a new contract if they are already balanced enough. In other words, the standard deviation computed with the new \(\mathrm{VaR}^{n*}\) must be small compared to the total cash held by the firm:
$$\begin{aligned} \mathrm{std}(\mathrm{VaR}^{1*},\mathrm{VaR}^{2*}, \ldots , \mathrm{VaR}^{n*}) < \vartheta \frac{k}{n}, \end{aligned}$$
(13)
where \(\vartheta \in [0, 1]\) is a parameter that regulates how balanced a firm wants to be and n is the number of peril regions.
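The two-step acceptance rule of Eqs. (12)-(13) can be sketched as a single predicate (our own naming; `theta` stands for \(\vartheta \)):

```python
import numpy as np

def accept_contract(var_now, var_new, k, theta=0.5):
    """Portfolio-balance rule of Eqs. (12)-(13): accept if the contract
    lowers the spread of VaR across peril regions, or if the portfolio
    would still be balanced enough relative to capital per region."""
    if np.std(var_new) <= np.std(var_now):
        return True                       # contract improves balance
    return np.std(var_new) < theta * k / len(var_new)

# A contract in the under-weighted region 0 improves balance:
assert accept_contract([0.5, 1.0, 1.0, 1.0], [0.8, 1.0, 1.0, 1.0], k=10.0)
```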
Premium prices
The insurance industry is highly competitive. This justifies the assumption that all agents are price takers. Insurance and reinsurance premiums depend on the total capital \(K^{T}_{t}=\sum _{j=1}^{f_t} k_{j,t}\) available in the insurance sector. For the sake of simplicity we assume that insurance premiums oscillate around the fairFootnote 17 premium \(p_f\). When the total capital of the industry increases, the premiums paid by a policyholder decrease; conversely, they increase when the total capital decreases. To avoid unrealistically high volatility, we set hard upper and lower bounds on the premium proportional to \(p_f\): \(p_f \cdot MaxL\) and \(p_f \cdot MinL\). This gives us an update equation for the premium price:
$$\begin{aligned} p_t = {\left\{ \begin{array}{ll} p_f \cdot MaxL &{}\quad p_f \cdot MaxL\le p_t \\ p_f \cdot MaxL - \frac{s \times K^{T}_t}{K^{I}_0 \times {\widetilde{D}} \times H} &{}\quad p_f \cdot MinL\le p_t\le p_f \cdot MaxL \\ p_f \cdot MinL &{} \quad p_t\le p_f \cdot MinL, \end{array}\right. } \end{aligned}$$
(14)
where the slope \(\frac{s \times K^{T}_t}{K^{I}_0 \times {\widetilde{D}} \times H}\) depends on the available capital \(K^{T}_{t}\), the number of risks available in the market H, the expected damage by risk \({\widetilde{D}}\), and the initial capital held by insurers at the beginning of the simulation, \(K^{I}_0\). \(s=s_i\) is a sensitivity parameter.
The thresholds \(MaxL\) and \(MinL\) are implemented in the model as parameters and can be varied, although we have run most of the simulations with \(MinL=70\%\) of the fair premium as lower bound and \(MaxL=135\%\) of the fair premium as upper bound. The boundaries are rarely hit.
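Equation (14) amounts to a linear price rule clipped at the two bounds; a minimal sketch, with our own function name and illustrative parameter values:

```python
import numpy as np

def premium(K_t, K_0, D_tilde, H, p_f=1.0, s=1.0, min_l=0.70, max_l=1.35):
    """Premium rule of Eq. (14): starts from the upper bound and falls
    linearly in the industry's total capital K_t, clipped to
    [min_l * p_f, max_l * p_f]."""
    p = p_f * max_l - (s * K_t) / (K_0 * D_tilde * H)
    return float(np.clip(p, p_f * min_l, p_f * max_l))

# More industry capital -> cheaper insurance, until the floor binds.
assert premium(K_t=0.0, K_0=1.0, D_tilde=1.0, H=1.0) == 1.35
assert premium(K_t=100.0, K_0=1.0, D_tilde=1.0, H=1.0) == 0.70
```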
Reinsurance prices also follow Eq. 14, with the same thresholds \(MaxL\) and \(MinL\). The only differences are that
- the reinsurance premium depends only on the capital available in the reinsurance market
$$\begin{aligned} K^{T}_{t}=\sum _{j=1}^{f_t} z_{j,t}k_{j,t} \end{aligned}$$
where z is a vector of length \(f_t\) such that element \(z_{j,t}\) is 1 for reinsurers and 0 for insurers,
- the initial capital is that of the reinsurers, \(K^{R}_0\)
- the sensitivity to capital changes, \(s=s_r\), is larger than for the insurance premium (steeper slope): \(s_r>s_i\).
The capital in the reinsurance market is usually an order of magnitude below the capital available in the insurance market. This is true for contemporary global insurance and reinsurance markets (see Sect. 2) and is reproduced by the model in the average steady-state values of the capital after careful calibration (see Sect. 5).
For the sake of simplicity premiums are the same for all peril regions. This is a base case that allows us to design risk models of identical quality.
Contracts
Insurance and traditional reinsurance contracts
Insurers provide standard insurance contracts lasting 12 iterations (months). At the end of a contract the parties try to renew it, which leads to a high retention rate.
Insurers may obtain excess-of-loss reinsuranceFootnote 18 for any given peril region. The standard reinsurance contract lasts 12 iterations (months). The insurer proposes a deductible (i.e., the maximum amount of damages the insurer has to pay before the reinsurance kicks in); the reinsurer then evaluates whether or not to underwrite the contract. In our model, the deductible for each contract is drawn from a uniform distribution on [25%, 30%] of the total risk held per peril region by the insurer at the start of the contract.
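The deductible proposal can be sketched in one line (our own naming, purely illustrative):

```python
import numpy as np

def propose_deductible(total_risk_in_region, rng):
    """Deductible for a new excess-of-loss reinsurance contract: drawn
    uniformly from [25%, 30%] of the insurer's total risk held in the
    peril region at the start of the contract."""
    return rng.uniform(0.25, 0.30) * total_risk_in_region

rng = np.random.default_rng(3)
q = propose_deductible(100.0, rng)
assert 25.0 <= q <= 30.0
```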
Alternative reinsurance: CAT bonds
The model also includes a simplified alternative insurance capital market: both insurers and reinsurers may issue catastrophe bonds (CAT bonds) by peril region. A CAT bond is a risk-linked security that allows institutional investors like mutual funds and pension funds to reinsure insurers and reinsurers. CAT bonds are structured like a typical bond: the investors transfer the principal to a third party at the beginning of the contract and receive coupons (some points over LIBOR) every year in return. If no catastrophe occurs during the validity of the contract, the principal is returned to the investor. If a catastrophe occurs, the losses of the issuing insurer or reinsurer are covered from the principal until it is exhausted. CAT bonds are attractive since their returns are uncorrelated with the other securities available in the financial market. Since institutional investors are risk averse, only the high layers of the reinsurance programs with a low probability of loss are covered by CAT bonds.
If insurers cannot obtain reinsurance coverage for five or more iterations, they issue a CAT bond. The premium of CAT bonds is a few points over the reinsurance premium of the traditional reinsurance capital market.
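The payout mechanics described above (claims paid from the principal until exhaustion, remainder returned at maturity) can be sketched as a minimal class; the class and attribute names are ours, not from the model's source:

```python
class CatBond:
    """Minimal CAT bond sketch: claims are paid from the principal until
    it is exhausted; whatever remains at the scheduled end of life is
    returned to the investors."""

    def __init__(self, principal):
        self.principal = principal

    def pay_claim(self, amount):
        """Pay up to `amount`, limited by the remaining principal."""
        paid = min(amount, self.principal)
        self.principal -= paid
        return paid

bond = CatBond(principal=1.0)
assert bond.pay_claim(0.6) == 0.6
assert bond.pay_claim(0.7) == 0.4   # only the remaining principal is paid
assert bond.principal == 0.0        # bond is now exhausted
```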
Model setup and design choices
Settings
Risk model homogeneity and diversity can be studied by comparing settings with different numbers of risk models used by different firms. In a one risk model case, all firms use the same (imperfect) risk model. In a two risk model case, firms are divided between two equally imperfect risk models, etc.
Experimental design
We compare n settings with different numbers of risk models \(\nu =1, 2, \ldots , n\). The up to \(\nu =n\) different risk models are of identical accuracy and distinguished by underestimating risks in different peril regions. As a consequence, we need to model n different peril regions; we retain this number of peril regions in all settings including the ones with \(\nu <n\) different risk models in order to allow for a more direct comparison.
Simulations are run as ensembles of M replications for each of the n settings considered. In every replication we run the model with identical parameters, changing only the original random seed. We typically set \(M=400\) and \(n=4\), which means we run \(4\times 400=1600\) replications of a simulation just varying the number of risk models (\(M=400\) each for \(\nu = 1, 2, 3, 4\)).
The catastrophes in the model are random, but occur at the same time steps for the different model diversity settings to provide a meaningful comparison. If a catastrophe x of size \(D_x\) happens in time step \(t_x\) in replication \(m_x\) for the one risk model case, then a catastrophe of the same size \(D_x\) will hit the simulation in the same time step (\(t_x\)) in the same replication \(m_x\) in the two risk model case, in the three risk model case, and in the four risk model case. This allows us to isolate the effects of risk model diversity as the four different settings are exposed to the same sequences of perils.
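This shared-catastrophe design can be implemented by deriving each replication's event profile from a replication-specific seed; the sketch below (our own illustration, not the model's code) shows that all diversity settings of a given replication then see identical catastrophes:

```python
import numpy as np

def risk_event_profile(m, lam=3/100, n_events=50):
    """Event times for replication m: the seed depends only on m, so
    the same profile is reused for every risk model diversity setting."""
    rng = np.random.default_rng(m)
    sep = rng.exponential(1 / lam, size=n_events)  # separation times
    return np.cumsum(sep)                          # event times

# Settings nu = 1..4 of replication 3 all see the same catastrophes.
profiles = [risk_event_profile(3) for setting in range(4)]
```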
We run experiments for various parameter values, e.g., to consider the effects of different margins of safety \(\mu \) and the effect of the presence or absence of reinsurance.
We have designed the model so that after transients die out the behavior is stationary. This allows us to take long time averages of quantities such as the frequency of bankruptcies. Quantities such as the number of firms and number of reinsurance firms are set initially, but these change in time in response to the other parameters. After a long time the initial settings become irrelevant. To avoid biasing our results with data from the transient stage we remove the first 1200 periods (100 years) of the simulations.Footnote 19
Software
The model was written in Python. The source code is publicly available.Footnote 20 It is still under development, e.g., with extensions for validation, calibration, and visualization.