1 Introduction

Multi-risk assessment is still in its infancy and often refers only to the analysis of multiple single hazards within the same framework (Grünthal et al. 2006; Carpignano et al. 2009; Schmidt et al. 2011). Major catastrophes, however, remind us that multi-risk is not simply the sum of individual risks, but that correlations between natural hazards, technological hazards and our complex socioeconomic networks lead to greater risks (e.g., 2005 hurricane Katrina, USA; 2010 eruption of Eyjafjallajökull, Iceland; 2011 Tohoku earthquake, Japan). Innovative methods have been proposed in recent years to tackle the problem of hazard interactions (e.g., Marzocchi et al. 2012) and of other dynamic aspects of risk, such as time-dependent vulnerability and exposure (e.g., Selva 2013) or network failures (e.g., Adachi and Ellingwood 2008). However, so far, only a limited number of scenario-based and/or site-specific multi-risk studies have been proposed, due to the difficulty and novelty of the task.

Development of a comprehensive multi-risk framework is hampered by the following requirements: (1) a large amount of input data, (2) cross-disciplinary expertise and (3) innovative risk assessment methods. The first two points are generally addressed in dedicated multi-risk projects at the national (e.g., HAZUS-MH, http://www.fema.gov/hazus), international (e.g., CAPRA, http://www.ecapra.org/) or private-sector level (e.g., Grossi and Kunreuther 2005; Schmidt et al. 2011). The third point remains to be solved. As indicated by Kappes et al. (2012), “despite growing awareness of relations between hazards, still neither a uniform conceptual approach nor a generally used terminology is applied”. Similarly, but based on feedback from civil protection stakeholders, Komendantova et al. (2014) noted that: “two areas are most problematic. These are (1) the absence of clear definitions and (2) the lack of information on the added value of multi-risk assessment”.

In the present study, we present a novel, generic, multi-risk framework based on the sequential Monte Carlo Method (MCM) to allow for a straightforward and flexible implementation of the hazard interactions that may occur in a complex system. Real-world examples of hazard interactions include: earthquake clustering, storm clustering, tsunamis following earthquakes or landslides, landslides or fire following earthquakes, storm surges associated with hurricanes, and technological accidents triggered by natural events (i.e., NaTech events). Time-variant vulnerability and exposure related to hazard clustering are also considered, although they are not the primary focus of this study. More generally, time-variant vulnerability may refer to different processes, such as structure ageing, unrepaired damage from past events or damage conditioned on the co-occurrence of several events. Time-variant exposure refers to the evolution of asset value with time, which may be due to socioeconomic factors or to previous losses.

Our goal is specifically to capture and quantify extreme (i.e., low-probability–high-consequences) events using inductive generalization (e.g., Bier et al. 1999) and by following the recommendation of Kameda (2012) to “mobilize ‘scientific imagination’ in the process of decision”, i.e., to incorporate extreme events in risk modelling when no observations but sound scientific bases are available (note that the term “reasoned imagination” is used by Paté-Cornell 2012). Our approach differs from site-specific and scenario-based studies (e.g., Adachi and Ellingwood 2008; Marzocchi et al. 2012; Selva 2013) in that we do not define any specific hazard or risk interaction but a framework to implement any type of interaction. With such an objective, real interaction processes must be abstracted into more basic concepts, and detailed engineering methods bypassed. The proposed framework is described in Sect. 2, in which the concept of the hazard correlation matrix is introduced.

Validation of our framework, which should be considered a proof-of-concept, is carried out using generic data and processes defined heuristically. This strategy, i.e., the use of intuitive judgment and simple rules, allows us to address problems that are otherwise difficult to treat. Based on an extensive literature survey, we generalize the concepts of peril, of peril characterization and of hazard interaction. Our aim, by abstracting these concepts into basic categories, is to provide some general guidelines for extreme event quantification. The proof-of-concept is presented in Sect. 3, and a discussion of the applicability of the proposed framework to real-world conditions is given in Sect. 4.

Basic hazard and risk terms as well as definitions used in the present article are based, when available, on the book “Catastrophe Modelling: A new approach to managing risk” by Grossi and Kunreuther (2005). Symbols used in our study are listed in Table 1.

Table 1 List of the symbols used in the present study

2 Generic multi-risk framework

2.1 Sequential Monte Carlo Method

The proposed multi-risk framework is formed of a core simulation algorithm based on the MCM. We adopt the MCM for its flexibility when dealing with complex systems. We generate N sim time series, sampling events from a Poisson distribution (homogeneous or non-homogeneous process). Each time series represents one risk scenario, and the analysis of the N sim scenarios allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. These risk paths emerge naturally from the system implemented in the MCM.

A set of stochastic events is defined as input for the MCM, with each event characterized by an identifier, a long-term occurrence rate λ and a loss Λ. The loss is defined as Λ = δ Ε with δ the mean damage ratio and Ε the system exposure. δ is derived from the hazard intensity ι with each event being represented by one unique hazard footprint. Each event is therefore implicitly related to one specific source, e.g., an earthquake related to a given fault segment, a storm related to a given track (see Sect. 3.1.1 for the definition of a stochastic event set).

Hazard interactions as well as time-variant exposure and vulnerability represent non-stationary processes, which require additional inputs (see Sects. 3.1.2, 3.1.3) and a sequential processing strategy. The proposed sequential MCM is defined as follows (a minimal code sketch is given after the list):

  • Multi-hazard assessment: define the simulation set with simulation identifier, event identifier and event occurrence time t.

    1. Generate N sim random time series: sample N sim sets of events over the time interval Δt = [t 0; t max] from the Poisson distribution, each stochastic event i being characterized by its long-term rate parameter λ i. Affix an occurrence time t to each event drawn from a uniform random distribution. Record the time series in the simulation set S 0, which represents the null hypothesis H 0 of having no interaction in the system. Fix the increment j = 1, which indicates the occurrence of the first event in the time series.

    2. For each of the N sim simulations, record the characteristics of the jth event, which occurs at t j, in simulation set S 1. Resample events k occurring in the interval [t j; t max] if the conditional probability Pr(k|j) exists. This conditional probability is defined in the hazard correlation matrix, described in Sect. 2.2. Affix t k = t j + ε with ε ≪ Δt. Fix j = j + 1.

    3. Repeat step 2 while t j ≤ t max.

    4. Fix j = 1.

  • Multi-risk assessment: update the simulation sets S 0 and S 1 with event loss Λ.

    5. For each of the N sim simulations, calculate the mean damage ratio δ j due to the jth event, which is potentially conditional on the occurrence of previous events. The implementation of time-variant vulnerability is described in Sect. 2.3.

    6. For each of the N sim simulations, calculate the loss Λ j due to the jth event, which is potentially conditional on the occurrence of previous events. The implementation of time-variant exposure is also described in Sect. 2.3. Record Λ j.

    7. Repeat steps 5 and 6 while t j ≤ t max.
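
For concreteness, the algorithm above can be sketched in a few lines of Python (a minimal sketch, not the implementation used in this study): the event table and the hazard correlation matrix are given as simple dictionaries with placeholder values, triggered events are simply added at t j + ε rather than resampling pre-existing future events, and the loss computation of steps 5–7 as well as the memory element of Sect. 2.2 are omitted.

```python
# Minimal sketch of the sequential MCM (simulation sets S0 and S1); the event
# table and correlation matrix below are placeholders, not the values of Sect. 3.
import numpy as np

rng = np.random.default_rng(1)

def sequential_mcm(events, corr, n_sim=1000, t0=0.0, t_max=1.0, eps=0.01):
    """events: {event id: long-term rate lambda}; corr: {trigger: {target: Pr(target|trigger)}}."""
    s0, s1 = [], []
    for _ in range(n_sim):
        # Step 1: Poissonian background time series (null hypothesis H0, set S0)
        series = []
        for ev, lam in events.items():
            for _ in range(rng.poisson(lam * (t_max - t0))):
                series.append((rng.uniform(t0, t_max), ev))
        series.sort()
        s0.append(list(series))
        # Steps 2-3: sequential triggering from the hazard correlation matrix (set S1)
        j = 0
        while j < len(series):
            t_j, ev_j = series[j]
            for ev_k, p in corr.get(ev_j, {}).items():
                if t_j + eps <= t_max and rng.random() < p:
                    series.append((t_j + eps, ev_k))   # triggered event, affixed at t_j + eps
            series.sort()
            j += 1
        s1.append(series)
        # the loss computation (steps 5-7) would follow, using the recorded series
    return s0, s1

# Example with two primary events and one secondary event (rates per unit dt)
events = {"A1": 0.1, "B1": 0.05, "C1": 0.0}
corr = {"A1": {"C1": 0.3}}            # Pr(C1|A1) = 0.3; C1 never occurs spontaneously
S0, S1 = sequential_mcm(events, corr)
```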

Figure 1a illustrates the difference between simulation sets S 0 and S 1. All simulated time series in S 1 start with the same event as in S 0, since changes are only conditional on the occurrence of previous events. Following an event, changes may or may not occur in S 1, depending on the conditional probabilities and the effects of sampling. Once an event is triggered, the trigger/target pair clusters in time, with chains of n events potentially emerging in the time interval [t, t + nε]. The impact of time-variant exposure and vulnerability on event losses is not represented in Fig. 1a. It should be noted that the proposed framework for hazard interactions is a variant of a Markov chain. In that view, the hazard correlation matrix corresponds to a transition matrix, the conditional probabilities to transition probabilities and events to states.

Fig. 1

Generic multi-risk framework (multi-hazard part). a Sequential MCM: the simulation set S 0 represents the null hypothesis H 0 of having no interaction in the system, while set S 1 represents any multi-risk hypothesis. Grey rectangles represent different simulated time series. b Concept of hazard correlation matrix: trigger events are represented in rows i and target/triggered events in columns j. A given peril P consists of n events P i with 1 ≤ i ≤ n. Each cell of the square matrix indicates the 1-to-1 conditional probability of occurrence Pr(P j |P i ) over Δt, which is used as input in the MCM. The n-to-1 conditional probability is considered by incorporating a memory element to the correlation matrix. The proposed approach can be seen as a variant of a Markov chain. See text for details

2.2 Hazard correlation matrix

We introduce the concept of the hazard correlation matrix to quantify hazard interactions. It should be noted that we use a loose definition of the word interaction, as we also refer to one-way causal effects by this term (noted →). The hazard correlation matrix is illustrated in Fig. 1b, where trigger events are represented in rows i and target/triggered events in columns j. A given peril P consists of n events P i with 1 ≤ i ≤ n. Each cell of the square matrix indicates the 1-to-1 conditional probability of occurrence Pr(j|i) = Pr(P j |P i) over Δt, which is used as input in the MCM. We also consider the n-to-1 conditional probability by incorporating a memory element into the correlation matrix. Various interaction processes may be implemented, based on empirical, statistical or physical laws. Those are run in the background, with only the conditional probability represented in the hazard correlation matrix. However, the memory element is defined such that it can alter the process in the background by informing it of the sequence of previous events. This is illustrated by several examples in Sect. 3.1.2. The approach differs from a strict Markov chain in that it is not memoryless and because the matrix does not require \(\sum_{j} \Pr \left( {j|i} \right) = 1\) (i.e., finite chains of events).

We define various terms (noted in italics) to categorize different types of interactions based on the concept of hazard correlation matrix: An event repeat is described by P i  → P i , an intra-hazard interaction by P i  → P j and an inter-hazard interaction by A i  → B j with A and B two different perils. Moreover, perils can be separated into primary perils A when λ A  > 0 and secondary perils B when λ B  = 0 and Pr(B|A) > 0, with λ the long-term occurrence rate. Invisible events i, which do not yield direct losses, should be included in the system if they trigger events j that do (case Pr(j|i) > 0, Λ i  = 0 and Λ j  > 0).
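
As an illustration of how these categories can be encoded, the following toy correlation matrix (a nested mapping with placeholder probabilities, not values from Sect. 3) distinguishes an event repeat, an inter-hazard interaction and a secondary-peril trigger; the memory element simply records past occurrences so that the background process can modulate the 1-to-1 probabilities.

```python
# Toy hazard correlation matrix; probabilities are placeholders for illustration.
corr = {
    "A2": {"A2": 0.001,   # event repeat (renewal process, Eq. 5)
           "B1": 0.02,    # inter-hazard interaction A -> B
           "C1": 0.10},   # trigger of secondary peril C (lambda_C = 0)
    "C1": {"D1": 0.30},   # one step of the domino chain C -> D
}
memory = []               # record of past events; lets the background process
                          # update Pr(j|i) conditional on the full sequence
```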

2.3 Time-variant exposure and time-variant vulnerability

Cumulative losses due to the occurrence of successive events cannot exceed the total exposure E. Time-variant exposure can be defined by

$$E_{i} = E_{0} - \sum_{k = 1}^{i} \delta_{k} E_{k - 1} + e_{i}$$
(1)

where E 0 is the original exposure, E i the exposure of the system immediately following event i and e i a function representing the exposure reconstruction up to the occurrence time of event i. With a time-variant exposure, the event loss also becomes time-variant, with Λ i = δ i E i−1. Two end-members of Eq. 1 are instantaneous reconstruction (i.e., e i = ∑Λ i) with E i = E 0, and no reconstruction (i.e., e i = 0) with 0 ≤ E i ≤ E 0. Both cases are tested in Sect. 3. Although not considered in the present study, a time-dependent term e(t) could be added to Eq. 1 to represent system recovery after a disaster and socioeconomic evolution (e.g., increase in wealth/assets with time).
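
A minimal sketch of the exposure update of Eq. 1, assuming no reconstruction (e i = 0), is given below; the three-event example illustrates how cumulative losses remain bounded by E 0.

```python
# Sketch of the time-variant exposure update (Eq. 1), assuming e_i = 0 (no reconstruction).
def update_exposure(E_prev, delta_i, e_i=0.0):
    loss_i = delta_i * E_prev                  # event loss Lambda_i = delta_i * E_{i-1}
    E_i = max(E_prev - loss_i + e_i, 0.0)      # exposure left immediately after event i
    return E_i, loss_i

# Three successive events with delta = 0.5 on an original exposure E_0 = 1:
E, total_loss = 1.0, 0.0
for delta in (0.5, 0.5, 0.5):
    E, loss = update_exposure(E, delta)
    total_loss += loss
# total_loss = 0.875 <= E_0 = 1, i.e., aggregated losses cannot exceed the exposure
```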

The clustering of events in time may also influence the vulnerability, which can be described by the conditional mean damage ratio δ j|i . The dependence on the trigger event i may take different forms, independently of the framework developed here. An example of vulnerability dependence on hazard intensity is given in Sect. 3.1.3. Time-dependent vulnerability δ i (t), such as ageing, is not considered.

3 Proof-of-concept

3.1 Generic data and processes

We generate generic data and processes by following the heuristic method and by abstracting the concepts of peril, of peril characterization and of hazard interaction into basic categories. Our approach provides some general guidelines for extreme event quantification and a dataset for testing the generic multi-risk framework described in Sect. 2. It follows the existing recommendations on extreme event assessment (Bier et al. 1999; Kameda 2012; Paté-Cornell 2012) by combining inductive generalization and “scientific imagination” to include known examples of extremes as well as potential “surprise” events in the same framework. We intentionally do not consider the case of networks (Adachi and Ellingwood 2008). Data and processes are then implemented in the MCM, and the results are analysed in Sects. 3.2 and 3.3. In the present study, we use N sim = 10⁵, Δt = [t 0 = 0; t max = 1] and ε = 0.01. It should be noted that the numerous assumptions made below are not a requirement of the proposed multi-risk framework but are working hypotheses for basic testing purposes.

3.1.1 Building a stochastic event set

We first generate a stochastic event set in which each event is defined by an identifier, a long-term occurrence rate λ and a loss Λ. Let’s consider peril A consisting of n A events A i with 1 ≤ i ≤ n A . First, we define the frequency-intensity distribution

$$\lambda_{i} = \exp ( - \beta \iota_{i} )$$
(2)

where ι i is the hazard intensity of event A i, λ i the long-term occurrence rate of event A i and β the exponential law exponent for peril A. Equation 2 applies to numerous perils, such as earthquakes (Gutenberg and Richter 1944), volcanic eruptions (Simkin and Siebert 1994) or asteroid impacts (Brown et al. 2002). Second, we estimate the mean damage ratio δ, i.e., the ratio of repair cost to replacement cost of the system, from a vulnerability curve that relates the susceptibility of the system to the hazard intensity, with 0 (no damage) ≤ δ ≤ 1 (total destruction). We use the cumulative lognormal distribution

$$\delta_{i} = \frac{1}{2}erfc\left( { - \frac{{\ln \left( {\iota_{i} } \right) - \mu }}{\sigma \sqrt 2 }} \right)$$
(3)

where ι i is the hazard intensity and δ i the mean damage ratio due to event A i . μ and σ are, respectively, the mean and standard deviation of the variable’s logarithm and depend on the peril and asset response type (see Sect. 3.1.3 for a variable μ). Equation 3 is well suited to describe the typical S-shaped vulnerability curve used in damage assessment for earthquakes (Fabbrocino et al. 2005), volcanic eruptions (Spence et al. 2005; Zuccaro et al. 2008), winds (Wehner et al. 2010), fluvial floods (Reese and Ramsay 2010), tsunamis (Srivihok et al. 2012) or asteroid impacts (Mignan et al. 2011). In the present case, we consider a single-damage state, which is an oversimplification compared to standard risk assessment. This limitation is discussed in Sect. 4.

Let’s now consider additional perils other than A. Direct comparison of hazards (so-called joint visualization of multiple hazards) would require the use of a rather subjective hazard intensity classification scheme (e.g., low–medium–high) (Kappes et al. 2012) since intensity ι differs from one peril to another one (e.g., ground shaking, ash load, inundation depth, wind speed, bolide energy). Thus, we remove ι by combining Eqs. 2 and 3, which yields

$$\delta_{i} = \frac{1}{2}erfc\left( { - \frac{{\ln \left( { - \ln \left( {\lambda_{i} } \right)} \right) - \ln \left( \beta \right) - \mu }}{\sigma \sqrt 2 }} \right)$$
(4)

Using risk as a common language, we resolve the problem of hazard comparability. Event loss Λ i is then defined by Λ i = δ i E, where E = 1 is the system exposure. Since a unique vulnerability curve is used for the full exposure, all losses given in the present study are mean loss values.

We generate the stochastic event set based on Eq. 4 with 10⁻⁴ ≤ λ i ≤ 10⁻¹ in 0.1 increments on the log scale. It follows that 31 events are defined per peril, with return periods varying from 10 to 10,000 years. This applies only to primary perils (here A and B), which occur spontaneously following Eq. 2. We fix β A = ln(10) and β B = 0.5 ln(10) (Fig. 2a). Perils A and B are analogous to earthquakes and volcanic eruptions, for instance. Secondary perils (here C, D and E), by definition, have a null long-term occurrence rate λ = 0 and only occur following primary events. Examples include tsunamis following earthquakes (Suppasri et al. 2012), storm surges (Irish et al. 2008) or NaTech events (i.e., industrial accidents with natural hazard triggers, Krausmann et al. 2011). Vulnerability curve parameters (Eq. 3, Fig. 2b) are fixed such that perils A and B show distinctive risks (i.e., different risk rankings), with peril A dominating the risk at short return periods and peril B dominating at long return periods (Eq. 4, Fig. 2c). We use μ A = ln(5), σ A = 0.4, μ B = ln(6) and σ B = 0.1. We define three events per secondary peril (C 1–C 3, D 1–D 3 and E 1–E 3) and use the same vulnerability step function for the three perils, such that δ 1 = 0.01, δ 2 = 0.1 and δ 3 = 1. Table 2 shows the resulting stochastic event set, which is used as input in the MCM.
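
For illustration, the primary-peril part of this event set can be reproduced with the following sketch (Python with NumPy/SciPy), using the parameter values quoted above; the event indexing (event 1 the most frequent, event 31 the rarest) follows the rates quoted in Sect. 3.2.1, and the secondary perils C–E would simply be appended with λ = 0 and the step-function damage ratios.

```python
# Sketch of the generic stochastic event set for primary perils A and B (Eqs. 2-4).
import numpy as np
from scipy.special import erfc

def mean_damage_ratio(lam, beta, mu, sigma):
    """Eq. 4: mean damage ratio as a function of the long-term rate lambda."""
    return 0.5 * erfc(-(np.log(-np.log(lam)) - np.log(beta) - mu) / (sigma * np.sqrt(2.0)))

# event 1 is the most frequent (lambda = 0.1), event 31 the rarest (lambda = 1e-4)
rates = 10.0 ** np.linspace(-1.0, -4.0, 31)
params = {"A": dict(beta=np.log(10.0),       mu=np.log(5.0), sigma=0.4),
          "B": dict(beta=0.5 * np.log(10.0), mu=np.log(6.0), sigma=0.1)}

event_set = []                                     # (identifier, rate lambda, loss Lambda)
for peril, p in params.items():
    for k, lam in enumerate(rates, start=1):
        delta = mean_damage_ratio(lam, **p)
        event_set.append((f"{peril}{k}", lam, delta * 1.0))   # loss = delta * E with E = 1
```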

Fig. 2

Characteristics of the generic primary perils A and B. a Power-law frequency–intensity distributions with frequency λ and intensity ι (Eq. 2). b Cumulative lognormal vulnerability curves defined by the mean damage ratio δ as a function of intensity ι (Eq. 3). c. Risk curves defined by the mean damage ratio δ as a function of frequency λ (Eq. 4). The list of parameters is given in Table 3

Table 2 Stochastic event set defined as input for the MCM

3.1.2 Populating the hazard correlation matrix

Providing a comprehensive review of hazard interactions is beyond the scope of this study; such a daunting task was, for example, described by Gill and Malamud (2012) (see also Kappes et al. 2012). Here, we describe the principal types of hazard interactions that can be implemented in the correlation matrix to be used for the proof-of-concept of the proposed multi-risk framework. The proposed hazard correlation matrix is shown in Fig. 3. We consider the three categories of interactions previously defined: event repeat (e.g., A i → A i; C → C), intra-hazard interaction (e.g., A i → A j) and inter-hazard interaction (e.g., A i → B j). The effect can be positive (i.e., probability increase) or negative (i.e., probability decrease), and temporary or lasting (concept of memory). Table 3 lists the different combinations defined in Fig. 3 and examples of analogies with real perils. More types of interactions could be envisioned, as not all possible combinations are described here (primary vs. secondary/interaction category/effect/memory or not).

Fig. 3

Hazard correlation matrix with examples of potential interactions. Trigger events are represented in rows i and target/triggered events in columns j. Each cell indicates the 1-to-1 conditional probability of occurrence Pr(j|i) over Δt. The n-to-1 conditional probability is considered by incorporating a memory element to the correlation matrix. See Table 3 for an analogy with real perils and real interaction processes

Table 3 Characteristics of generic perils and processes and analogy with real ones

Event repeat for primary perils is here described by the lognormal distribution

$$\Pr \left( A_{i} | A_{i} \right) = \int_{t_{0}}^{t_{\max}} \frac{1}{t\sqrt{2\pi \alpha_{i}^{2}}} \exp \left( - \frac{\left(\ln(t) - 1/\lambda_{i}\right)^{2}}{2\alpha_{i}^{2}} \right) \mathrm{d}t$$
(5)

where λ i is the long-term occurrence rate of event A i and α i = 1 the shape parameter. In Eq. 5, the probability over Δt is very low immediately following t 0 for Δt ≪ 1/λ i and progressively increases through time. It hence represents a renewal process in which energy is first released by an event and then accumulates gradually through time due to a loading process. It applies, for example, to earthquakes, whose occurrence is controlled by tectonic loading (Parsons 2005). We apply Eq. 5 to perils A and B, which significantly lowers the probability of occurrence compared to the long-term probability Pr(A i) = 1 − exp(−λ i Δt) for Δt ≪ 1/λ i (Fig. 4). Event repeat for secondary perils is described by Pr(C j |C i) = Pr(D j |D i) = Pr(E j |E i) = 0, which indicates that a given peril can only occur once, whether immediately after the event (case of peril C—no memory) or over Δt (case of perils D and E—with memory) (Fig. 3; Table 3). Peril C is analogous to a tsunami or storm surge, which can occur anytime the proper trigger occurs, but only once per triggering. Perils D and E are analogous to technological accidents. If such an accident yields non-functionality, the same accident cannot occur again as long as the critical infrastructure is not repaired; if it were repaired immediately, it would fall in the peril C class (Table 3). Memory is here defined as the recording of previous instances of event occurrences.
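
Equation 5 can be evaluated numerically, as in the following sketch (assuming α i = 1 and the unit time window of Sect. 3.1); the Poissonian reference probability is included for the comparison shown in Fig. 4.

```python
# Numerical evaluation of the event-repeat probability (Eq. 5), assuming alpha_i = 1.
import numpy as np
from scipy.integrate import quad

def prob_repeat(lam, alpha=1.0, t0=1e-9, t_max=1.0):
    """Pr(A_i|A_i) over [t0, t_max]; the lower bound is >0 to avoid ln(0)."""
    def pdf(t):
        return np.exp(-(np.log(t) - 1.0 / lam) ** 2 / (2.0 * alpha ** 2)) \
               / (t * np.sqrt(2.0 * np.pi * alpha ** 2))
    value, _ = quad(pdf, t0, t_max)
    return value

def prob_poisson(lam, dt=1.0):
    """Long-term (Poissonian) reference, Pr(A_i) = 1 - exp(-lam * dt)."""
    return 1.0 - np.exp(-lam * dt)

# For dt << 1/lam, prob_repeat(lam) << prob_poisson(lam), as illustrated in Fig. 4.
```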

Fig. 4

Probability of occurrence Pr(A i |A i ) of a repeating event based on the lognormal probability distribution (Eq. 5) compared to the Poisson probability distribution

We consider that primary perils are due to an underlying loading process, as described previously for repeating events by Eq. 5. In this case, events are advanced or delayed by a time shift ΔT ij. We define

$$\Pr \left( {j|i} \right) \, = \, 1 - \exp ( - \lambda_{{j,{\text{mem}}}} \Delta t){\text{ for}}\,\,i \ne j$$
(6)

based on the non-homogeneous Poisson process, with i the trigger event, j the target event and

$$\lambda_{{j,{\text{mem(updated)}}}} = \frac{1}{{1/\lambda_{{j,{\text{mem}}}} + \Delta T_{ij} }}$$
(7)

ΔT ij > 0 represents a time delay. λ j,mem is updated after each event occurrence j, including the case i = j, and represents the memory of the process (at t 0, λ j,mem = λ j). The same approach is used in earthquake interaction modelling, which is well established with the theory of stress transfer (Toda et al. 1998; King 2007). This theory also applies to earthquake/volcanic eruption interactions (Hill et al. 2002; Eggert and Walter 2009). Here, we apply Eqs. 6 and 7 to intra-hazard interactions (A i → A j; B i → B j) and to inter-hazard interactions (A i → B j; B i → A j) (Fig. 3). We define the matrix ΔT ij = ±f/λ i, with f an ad hoc coupling factor, such that the time shift due to event i is proportional to the return period of this event (i.e., a rare large event will have a greater effect than a smaller, more common event). Since the spatial relationship between different events is not considered, we simply fix the sign of ΔT ij randomly for each target event j. We fix f = 0.1 (the role of different values is discussed in Sect. 3.3).
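
The following sketch illustrates this memory mechanism; the clipping of the updated rate to a small positive floor is an added safeguard for large time advances and is not part of Eqs. 6–7.

```python
# Sketch of the loading-process memory for primary perils (Eqs. 6-7).
import numpy as np

def update_memory_rate(lam_mem_j, dT_ij, floor=1e-12):
    """Eq. 7: dT_ij > 0 delays target event j, dT_ij < 0 advances it."""
    return 1.0 / max(1.0 / lam_mem_j + dT_ij, floor)   # floor is a guard, not part of Eq. 7

def conditional_prob(lam_mem_j, dt=1.0):
    """Eq. 6: non-homogeneous Poisson probability of target j over dt."""
    return 1.0 - np.exp(-lam_mem_j * dt)

# Time-shift matrix of Sect. 3.1.2: dT_ij = +/- f / lambda_i with f = 0.1 and random sign
f = 0.1
def time_shift(lam_i, rng=np.random.default_rng(1)):
    return rng.choice([-1.0, 1.0]) * f / lam_i
```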

Secondary perils occur based on the following one-way causal effects: A → C, C → D and D → E (Fig. 3). Hence, we can produce the domino effect A → C → D → E. We consider a linear relationship between trigger event i and target event j, such that

$$j(i) = \frac{{n_{j} - 1}}{{n_{i} - 1}}\left( {i - 1} \right) + 1$$
(8)

where n i and n j are the number of stochastic events i and j, respectively. This is described in Fig. 5a for the cases A → C and C → D (and likewise for D → E). It shows that for any event i, in the range [1, n i], there is an associated event j in the range [1, n j]. Using a relationship between event increments is, however, artificial. For real perils, the relationship would normally relate the hazard intensities ι i to ι j. Examples include the relationship h = 3 × 10⁻⁵ M^6.2344 between tsunami height h and earthquake magnitude M (Suppasri et al. 2012) or the relationship between storm surge height and maximum wind speed, as defined in the Saffir–Simpson hurricane scale (e.g., Irish et al. 2008). Such simple relationships are controversial, since the effects depend on site conditions. The concept remains, however, valid at a more abstract level. In the generic case presented here, we relate event increments directly for the sake of simplicity. To determine the conditional probability Pr(j|i), we assume the binomial distribution

$$\Pr \left( {k|i} \right) = \frac{{n_{j} !}}{{k!\left( {n_{j} - k} \right)!}}\left( {\frac{j(i)}{{n_{j} }}} \right)^{k} \left( {1 - \frac{j(i)}{{n_{j} }}} \right)^{{n_{j} - k}}$$
(9)

where 0 ≤ k ≤ n j and k = j, except for the case k = 0, which corresponds to the probability of having no event triggered, Pr(Ø|i). Pr(k|i) is shown in Fig. 5b. Note that the higher the trigger increment i is (i.e., a proxy to hazard intensity), the greater is the probability of triggering a severe target event j, which is a direct consequence of Eq. 8.
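
Equations 8 and 9 reduce to a few lines of code, as sketched below for the case Pr(C k |A i) with n i = 31 trigger events and n j = 3 target events; the worked values illustrate how a larger trigger favours the more severe target.

```python
# Sketch of the secondary-peril conditional probabilities (Eqs. 8-9), e.g. Pr(C_k|A_i).
from math import comb

def target_index(i, n_i, n_j):
    """Eq. 8: linear mapping from trigger increment i to target increment j(i)."""
    return (n_j - 1) / (n_i - 1) * (i - 1) + 1

def pr_target(k, i, n_i, n_j):
    """Eq. 9: binomial probability of target increment k; k = 0 means no triggered event."""
    p = target_index(i, n_i, n_j) / n_j
    return comb(n_j, k) * p ** k * (1 - p) ** (n_j - k)

# Small trigger (i = 1):  Pr ~ [0.30, 0.44, 0.22, 0.04] for k = 0..3 (mostly no trigger)
# Large trigger (i = 31): Pr = [0, 0, 0, 1], i.e., the severe target C_3 is always triggered
probs_small = [pr_target(k, i=1,  n_i=31, n_j=3) for k in range(4)]
probs_large = [pr_target(k, i=31, n_i=31, n_j=3) for k in range(4)]
```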

Fig. 5

Conditional probability of occurrence of secondary perils Pr(C|A) and Pr(D|C). a Ad hoc linear relationship between trigger increment i and target increment j (i.e., proxy to hazard intensity, Eq. 8). b Conditional probabilities defined from a binomial distribution (Eq. 9) based on Eq. 8. See text for details

3.1.3 Considering time-variant vulnerability

Hazard clustering may also influence the vulnerability of the system. One example, which has recently been considered, is the increase in vulnerability to ground shaking due to increased structural load following an ash fall (Zuccaro et al. 2008; Selva 2013). Another well-known case is the increased vulnerability of structures to successive shakings during an earthquake cluster (Jalayer et al. 2010). Here, we consider the conditional vulnerability curve

$$\delta_{j|i} = \frac{1}{2}erfc\left( { - \frac{{\ln \left( {\iota_{j} } \right) - \mu_{j|i} }}{{\sigma_{j} \sqrt 2 }}} \right)$$
(10)

where δ j|i is the mean damage ratio due to event j conditional on the occurrence of event i, and μ j|i = μ j − γ ln(ι i). ι i and ι j are the hazard intensities of events i and j, respectively, and γ is an ad hoc calibration parameter. The proposed generic relationship is based on the idea that the higher the intensity ι i is, the greater is the vulnerability to event j. The validity of such a formulation remains to be verified, the logarithmic term being proposed here only for consistency within Eq. 10. For real data, a different relationship may be obtained empirically or from structural engineering (Zuccaro et al. 2008; Jalayer et al. 2010). Here, we apply Eq. 10 to perils A and B for the case δ A|B, with γ = 0.3. Figure 6 shows the vulnerability curve linked to peril A conditional on the occurrence of events B 1 to B 31. This process is analogous to the increased vulnerability to earthquake shaking due to the occurrence of a volcanic eruption with ash fall (Zuccaro et al. 2008; Selva 2013).
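
A sketch of Eq. 10 for the case δ A|B, with γ = 0.3 and the peril A parameters of Sect. 3.1.1, is given below; the intensity values in the final comment are illustrative only.

```python
# Sketch of the conditional vulnerability curve (Eq. 10) for the case delta_{A|B}.
import numpy as np
from scipy.special import erfc

def conditional_damage(iota_j, iota_i, mu_j=np.log(5.0), sigma_j=0.4, gamma=0.3):
    """Mean damage ratio of event j (peril A) given the intensity iota_i of trigger i (peril B)."""
    mu_cond = mu_j - gamma * np.log(iota_i)      # conditional lognormal mean, mu_{j|i}
    return 0.5 * erfc(-(np.log(iota_j) - mu_cond) / (sigma_j * np.sqrt(2.0)))

# The larger iota_i is, the larger the damage ratio for the same iota_j (Fig. 6), e.g.
# conditional_damage(iota_j=4.0, iota_i=1.0) < conditional_damage(iota_j=4.0, iota_i=8.0)
```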

Fig. 6

Vulnerability curve of peril A conditional on the occurrence of peril B. The curve shifts to higher damage for the same hazard intensity ι A as the intensity ι B increases (Eq. 10)

3.2 Definition of extremes

Extreme events may be defined as events that are “rare, severe and outside the normal range of experience of the system in question” (Bier et al. 1999). This definition, however, assumes that extreme events are somewhat anomalous. In this line of reasoning, the quantification of extremes would make them normal, i.e., non-extreme. In the present study, extreme events are simply defined as low-probability–high-consequences events, whether they seem normal or abnormal. Here, extremes do not refer only to individual events, but also to groups of events of which only the overall impact is considered. Therefore, the definition of an event remains blurry and depends on the level considered in the system (i.e., an event may be composed of sub-events and a meta-event of events). Extremes are then categorized into individual events (Sect. 3.2.1) and multiple events, i.e., coinciding events or triggered chains of events (Sect. 3.2.2).

In recent years, anomalous events, or outliers, have been repackaged into fancier animals, some of which made their way into popular culture. Taleb (2007) coined the term “black swan” to describe rare events that in principle cannot be anticipated. Another popular term is “perfect storm”, which refers to an event resulting from a rare combination of circumstances (Paté-Cornell 2012). Finally, the concept of “dragon king” introduced by Sornette and Ouillon (2012) holds that outliers are due to a physical mechanism not represented in the tail of the distribution considered. These different terms are not used in the present study.

3.2.1 Individual extreme events: concept of heavy tail

Let’s first consider the role of individual events on the overall risk. Figure 7 shows the metric λ i Λ i . (loss over Δt) for events i characterized by their long-term occurrence rate λ i and loss Λ i (Table 2). A related metric is the annual average loss (AAL). As described in Sect. 3.1.1, peril A dominates the risk at higher frequencies, while peril B dominates at lower frequencies. The highest loss over period Δt is due to event A 15 (at λ = 0.0039) for peril A and to event B 21 (at λ = 0.0010) for peril B. We see that although we consider most of the risk in our stochastic event set, we miss a non-negligible part below λ = 10−4. It is common practice in insurance industry to only consider risk scenarios with a return period that does not exceed 1,000 years (Smolka 2006). In our example and with Δt = 1 year, we would miss most of the risk using such an arbitrary choice (grey area in Fig. 7). This illustrates the concept of heavy tail from which extremes are populated. This tail is however bounded whether by the maximum possible event size (e.g., concept of earthquake maximum magnitude in seismic hazard assessment) or by the exposure (i.e., loss saturation).

Fig. 7

Metric λ i Λ i (loss over Δt) for events i characterized by their long-term occurrence rate λ i and loss Λ i (Table 2). Using an arbitrary minimum rate in risk assessment, e.g., λ ≥ 10⁻³, may yield an underestimation of the risk. Rare unknown perils could also contribute to overall risk, showing the potential instability of the risk process, as any added information on possible extreme events could significantly, if not dramatically, alter the risk measure. Here, an ad hoc λ i Λ i distribution is shown to illustrate the potential role of an unknown peril

Rare unknown perils could also contribute to overall risk, showing the potential instability of the risk process, as any added information on possible extreme events could significantly, if not dramatically, alter the risk measure (Fig. 7). The level of knowledge on extreme individual events is directly linked to the length of the available records (Smolka 2006). New approaches, such as the study of myths (Piccardi and Masse 2007) or of odd geomorphological structures (e.g., Scheffers et al. 2012) within the scope of scientific imagination, should help improve these records and reassess the overall risk. This is, however, beyond the scope of the present study.

3.2.2 Coinciding events and triggered chains of events

Implementation and evaluation of coinciding events and triggered chains of events are the main purpose of this work. Coinciding events are independent events that occur in clusters by chance. Such behaviour is in principle trivial to treat (if all events are known—see the previous section), since it emerges naturally from the Poisson process. This is illustrated in Fig. 8, where the distribution of the number of event occurrences k per simulation for simulation set S 0 (homogeneous Poisson process, null hypothesis H 0) is shown in light grey. With the total event rate λ tot = ∑λ = 0.97 (Table 2), we find, for instance, that the probability of having k = 5 events occurring during the same simulation is Pr(k = 5) ~ 0.003. Large sets of coinciding events are, however, controlled by moderate-size events with short return periods. More damaging events are rarer, and their random co-occurrence even rarer.
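
This value can be verified directly from the Poisson distribution with λ tot = 0.97 and Δt = 1, e.g.:

```python
# Sanity check of the coincidence probability under the null hypothesis H0.
from scipy.stats import poisson
p5 = poisson.pmf(5, 0.97)   # ~0.003, probability of k = 5 independent events in one simulation
```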

Fig. 8

Probability distribution of the number of events k per simulation showing the migration of risk to lower-probability–higher-consequences meta-events when hazard interactions are considered. Definition of hypotheses H 0 and H 1 is given in Table 4

Chains of events are the result of interaction processes, which means that their clustering is non-random. Here, we consider chains of events by adding knowledge to the system, i.e., by implementing the generic interaction processes described in Sects. 2.2 (hazard contribution) and 2.3 (risk contribution). We test four different hypotheses H 1 to H 4 (four simulation sets S 1 to S 4), as defined in Table 4. These what-if scenarios are as follows: primary peril interactions between events of type A and B (H 1); primary peril interactions and time-variant vulnerability linked to the clustering of events of type A and B (H 2); primary and secondary peril interactions (including A → C → D → E chains) and time-variant vulnerability (H 3); and finally, primary and secondary peril interactions, time-variant vulnerability and time-variant exposure (i.e., no reconstruction) (H 4). All these hypotheses are based on the data and processes defined in Sect. 3.1. Results are discussed below.

Table 4 List of tested hypotheses H

3.3 Emergence of extremes

3.3.1 Risk migration

In hypothesis H 1, events from perils A and B interact following the rules described previously (Fig. 3; Eqs. 6–7). The number of events per simulation is compared in Fig. 8 to the one expected under the null hypothesis H 0. Using the Akaike information criterion (AIC), we find that the number of event occurrences k per simulation for simulation set S 1 (in dark grey) is better described by the negative binomial distribution than by the Poisson distribution, which indicates over-dispersion (i.e., clustering of events). Here, we use a coupling factor f = 0.1 (see Sect. 3.1.2 for the definition of f), which gives the index of dispersion (variance/mean ratio) ϕ = 4.7; values 0 < ϕ < 1 represent under-dispersion and ϕ > 1 over-dispersion. A higher f yields a higher ϕ, more coupling yielding more clustering. Let's note that one could directly simulate risk scenarios by sampling from the negative binomial distribution instead of the Poisson distribution in step 1 of the MCM and by not applying steps 2–3 (see Sect. 2.1). This approach applies when clustering is not due to event interactions but to a higher-level process. The negative binomial distribution is frequently used in storm modelling, for instance (Mailier et al. 2006; Vitolo et al. 2009). This is, however, not tested in the present study.

In Fig. 8, the probability of having k = 5 events is Pr(k = 5) ~ 0.04 in set S 1, in contrast with Pr(k = 5) ~ 0.003 in set S 0. For k = 7, Pr(k = 7) ~ 0.02 in set S 1, while Pr(k = 7) ~ 0.0004 in set S 0. With N sim = 10⁵, we find a maximum number of events per simulation k max = 7 in set S 0 and k max = 41 in set S 1. For hypothesis H 3, in which the domino effect A → C → D → E is added (Fig. 3), the index of dispersion increases to ϕ = 7.0, and we get Pr(k = 5) ~ 0.05, Pr(k = 7) ~ 0.03 and k max = 49 in set S 3. This demonstrates that risk migrates to lower-probability–higher-consequences events when hazard interactions are considered. Here, we talk about meta-events, the term consequence being defined as the aggregated loss over the k events of the cluster (assuming, at first, a homogeneous distribution of losses Λ over the k events).
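
The over-dispersion diagnostic can be reproduced from the per-simulation event counts with the sketch below; the moment-based negative binomial fit and the guard for the under-dispersed case are simplifying assumptions, not the fitting procedure used for Fig. 8.

```python
# Sketch of the clustering diagnostic of Sect. 3.3.1: index of dispersion and AIC
# comparison of Poisson vs. negative binomial fits to the event counts per simulation.
import numpy as np
from scipy import stats

def dispersion_and_aic(counts):
    counts = np.asarray(counts, dtype=float)
    mean, var = counts.mean(), counts.var()
    phi = var / mean                                   # index of dispersion (>1: clustering)
    aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(counts, mean).sum()
    p = mean / var if var > mean else 0.999            # guard: near-Poisson if under-dispersed
    n = mean * p / (1.0 - p)                           # method-of-moments negative binomial
    aic_nb = 2 * 2 - 2 * stats.nbinom.logpmf(counts, n, p).sum()
    return phi, aic_pois, aic_nb                       # lower AIC indicates the better model
```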

3.3.2 Risk amplification

As defined in Sect. 3.1.2, the time advance or delay ΔT ij of events j due to a given trigger i increases with the trigger intensity ι (for perils A and B—hypothesis H 1). Therefore, the rarer the event is, the stronger is the associated clustering and the higher is the aggregated loss. Similarly, the higher the intensity ι of a trigger is, the higher is the probability of a larger secondary event (for perils C, D and E; Fig. 5—hypothesis H 3). Finally, the higher the intensity of peril B is, the higher is the loss due to peril A (Fig. 6—hypothesis H 2). This is illustrated in Fig. 9, which shows the mean aggregated losses (Fig. 9a) and the median aggregated losses (Fig. 9b) observed in simulations where the largest event from peril A is A i. We find that, by adding more information on the dynamic risk process, losses tend to be amplified with increasing risk. We can fit the increase in aggregated losses by power-law relationships with exponents 2.9 and 3.6 for the mean and median aggregated losses, respectively. In comparison, the power-law exponent is 2.7 for individual event losses (with Λ i ~ 3 × 10⁻⁵ i^2.7). This indicates a phenomenon of risk amplification for low-probability–high-consequences events.

Fig. 9

Aggregated losses as a function of maximum event size A i showing the amplification of risk at greater risk levels. Definition of hypotheses H 1 to H 3 is given in Table 4. a Mean aggregated losses; b median aggregated losses

Let’s finally remark that aggregated losses saturate to ∑Λ = Ε 0 = 1 in hypothesis H 4 if the exposure is not reconstructed after a loss. Moreover, the renewal process of event repeats limits the number of occurrences of any given event (Figs. 3, 4; hypotheses H 1 and H 3). Such processes counteract risk amplification and avoid the emergence of exploding chain reactions. Since the data and processes defined in Sect. 3.1 were carefully selected to be representative of existing perils and interactions based on an extensive survey of the literature, our results (risk migration and amplification) are believed to represent some common characteristics of multi-risk. It is evident that both aspects strongly depend on site conditions and that only the analysis of real sites will permit to determine the real impact of multi-risk. We have demonstrated that the proposed framework could be used for such a task.

3.3.3 A multi-risk metric: the risk migration matrix

We can evaluate the role of multi-risk in the emergence of extremes using two different standard metrics: the exceedance probability (EP) curve (e.g., Grossi et al. 2005; Smolka 2006) and the risk matrix (Cox 1998; Krausmann et al. 2011). Both are different representations of the relationship between event frequency and event loss (in the present study, between frequency of meta-events and aggregated loss). Figure 10 shows EP curves of the five different simulation sets (Table 4). We see the progressive increase in the area below the curve and the tail becoming fatter when more information on the risk process is added. We also see the case of a bounded EP at ∑Λ = Ε 0 = 1. However, such a metric provides only limited information on the extreme events that populate the tail of the EP curve.

Fig. 10

Exceedance probability (EP) curves for the five hypotheses H 0 to H 4 defined in Table 4

In contrast, the risk matrix provides a more visual representation of the risk, although the colour code, from green (minimum risk) to red (maximum risk) (Krausmann et al. 2011), remains subjective (Fig. 11, top left). We populate the risk matrix with the risk scenarios or risk paths generated in the different simulation sets. Each simulation is characterized by an aggregated loss, which represents one risk scenario. The number of times N the same aggregated loss is observed gives the frequency N/N sim of this risk scenario. Here, aggregated losses are binned into 0.01 cells in the log scale (i.e., 501 different values in the loss interval 10⁻⁴ ≤ Λ ≤ 10). Each risk scenario is then represented by a point in the risk matrix (Fig. 11). Using a smaller bin would shift all values to lower frequencies, but the observed patterns would remain the same. This approach should therefore be considered as semi-quantitative.

Fig. 11

Risk migration matrices showing how risk migrates as a function of frequency and aggregated losses when new information is added to the system. Risk increase is represented in red and risk decrease in blue. Risk scenarios are represented by points, in white for the first hypothesis and in black for the second. See text for details

To better evaluate the role of multi-risk in the emergence of extremes, we introduce the notion of the risk migration matrix, which is shown in Fig. 11. It is defined as the difference in the density of risk scenarios observed between two hypotheses. To avoid a pixellated result, the densities are first calculated using a Gaussian kernel, here with a standard deviation large enough to focus on the first-order migration patterns. The right column shows the case H i − H 0, and the left column shows the case H i − H i−1, with i the hypothesis number (Table 4). Risk scenarios of the first and second hypotheses are represented by white and black points, respectively. An increase in risk is represented in red and a decrease in blue. The proposed approach allows us to visualize how the risk migrates as a function of frequency and aggregated losses when new information is added to the system. From H 0 to H 3, we see a progressive shift of the risk to higher frequencies and higher losses (from yellow to red), indicating that the risk is underestimated when interactions at the hazard and risk levels are not considered. The shift is particularly pronounced in the case of domino effects with A → C → D → E. Finally, the risk migration matrix H 4 − H 3 shows how time-variant exposure yields a saturation of losses. Feedback from civil protection stakeholders also showed that a risk matrix view might be preferable to the use of loss curves for communicating multi-risk results. It is in the context of this feedback that the concept of the risk migration matrix was developed (Komendantova et al. 2014).
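
A sketch of the construction of the risk migration matrix is given below, taking as input the arrays of aggregated losses per simulation for two hypotheses; the grid resolution and kernel width are illustrative assumptions, not the values used for Fig. 11.

```python
# Sketch of the risk migration matrix (Sect. 3.3.3): difference of Gaussian-smoothed
# densities of risk scenarios, each scenario being a (log loss, log frequency) point.
import numpy as np
from scipy.ndimage import gaussian_filter

def scenario_points(agg_losses, n_sim):
    """Bin aggregated losses into 0.01 log-cells and return (log10 loss, log10 frequency)."""
    agg_losses = np.asarray(agg_losses, dtype=float)
    agg_losses = agg_losses[agg_losses > 0]              # ignore zero-loss simulations
    edges = np.arange(-4.0, 1.0 + 0.01, 0.01)            # 1e-4 <= loss <= 10
    counts, _ = np.histogram(np.log10(agg_losses), bins=edges)
    keep = counts > 0
    log_loss = 0.5 * (edges[:-1] + edges[1:])[keep]      # bin centres
    log_freq = np.log10(counts[keep] / n_sim)            # scenario frequency N/N_sim
    return log_loss, log_freq

def risk_migration_matrix(losses_h_a, losses_h_b, n_sim, grid=100, sigma=3.0):
    """Smoothed scenario density of hypothesis b minus hypothesis a (>0 where risk increases)."""
    extent = [[-4.0, 1.0], [np.log10(1.0 / n_sim), 0.0]]
    densities = []
    for losses in (losses_h_a, losses_h_b):
        x, y = scenario_points(losses, n_sim)
        h, _, _ = np.histogram2d(x, y, bins=grid, range=extent)
        densities.append(gaussian_filter(h, sigma=sigma))
    return densities[1] - densities[0]
```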

4 Applicability to real-world conditions

We have presented a novel, generic and flexible multi-risk framework, which implements coinciding events, triggered chains of events, time-variant vulnerability and time-variant exposure. While the sequential MCM is a well-established method for modelling complex systems, it is still rarely used as the basis of a multi-risk assessment tool. Figure 12 shows the difference between standard risk modelling (e.g., Grossi et al. 2005) and the newly proposed approach (Sect. 2). Application of the approach to real test sites will require two improvements: (1) a data switch, with the transition from generic to real data and processes, which will be the subject of a companion manuscript by the same authors: “The Quantification of Low-Probability–High-Consequences Events: Part II. Guidelines to Multi-Risk Assessment Based on the Virtual City Concept”, and (2) an integration of the engineering models that are in use in existing risk tools (e.g., moving from a single-damage state equation (Eq. 3) to more realistic multi-damage states, as described for instance in Lagomarsino and Giovinazzi 2006). This second requirement is non-trivial, not only due to the different modelling structure, but also due to the use here of a stochastic event set instead of an aggregated hazard. Aggregated hazard is used, for instance, in standard seismic risk assessment (e.g., Cornell 1968) and in its recent multi-risk extensions (Marzocchi et al. 2012; Selva 2013). Nonetheless, moving to an event-based sequential simulation approach seems appropriate to model dynamic risk processes and extremes.

Fig. 12

Structure difference between standard risk modelling (following Grossi et al. 2005) and the newly proposed multi-risk approach. MCM refers to the sequential MCM defined in Sect. 2

Testing of the proposed multi-risk framework has been made possible thanks to the heuristic method and the definition of generic perils and processes, which is based on the idea of “scientific imagination” (Kameda 2012; Paté-Cornell 2012). We have shown that the hazard correlation matrix, by adding knowledge of potential interaction processes, allows for the capture of low-probability–high-consequences events. In particular, we have shown the role of risk migration and of risk amplification in their occurrence. The present work should be seen as a proof-of-concept, as we did not attempt to fully resolve the difficult problem of extremes. We only considered a selected number of possible interactions, and while the chains of events that emerge in the system may seem obvious, adding more perils and more interactions will yield more complex risk patterns. We thus recommend a brick-by-brick approach to the modelling of multi-risk, to progressively reduce epistemic uncertainties. A more realistic modelling of low-probability–high-consequences events will also require the consideration of additional aspects, such as uncertainties (e.g., Barker and Haimes 2009), domino effects in socioeconomic networks (e.g., Adachi and Ellingwood 2008; Buldyrev et al. 2010) and long-term processes (Grossi and Kunreuther 2005; Smolka 2006), such as climate change (Garcin et al. 2008), infrastructure ageing (Rao et al. 2010) and exposure changes (Bilham 2009). While the concepts developed in the present study suggest the theoretical benefits of multi-risk assessment, identifying their real-world practicality will require the application of the proposed framework to real test sites.