1 Introduction

Earthquakes are known to cluster in space and time due to stress redistributions in the Earth’s crust (e.g., King 2007). The impact of this clustering on building damage is nonlinear, as the capacity of a structure degrades with increased damage (e.g., Polese et al. 2013; Iervolino et al. 2014). Such physical interactions at both hazard and risk levels are expected to lead to risk amplification toward the tail of the risk curve (e.g., Mignan et al. 2014), which relates to the concepts of extreme event and tail fattening (e.g., Weitzman 2009; Sornette and Ouillon 2012; Foss et al. 2013).

Performance-based seismic assessment consists of quantifying the response of a structure to earthquake shaking using decision variables, such as damage or economic loss. Such a procedure is described in the benchmark Pacific Earthquake Engineering Research (PEER) method, summarized by Cornell and Krawinkler (2000). Aftershock probabilistic seismic hazard analysis was added to the PEER method in recent years (Yeo and Cornell 2009), as was damage-dependent vulnerability (Iervolino et al. 2014). However, these approaches express earthquake clustering analytically, with the temporal component defined from the Omori law (see the limits of this law in Mignan (2015, 2016)) and with an ad hoc spatial component. In particular, they do not consider the coupling of large earthquakes on separate fault segments that is observed in nature.

This coupling can be the association of a great mainshock and its largest aftershock when both events occur on distinct fault segments, such as the 2010 Mw 7.1 Canterbury, New Zealand, mainshock and its 2011 Mw 6.3 Christchurch aftershock (Zhan et al. 2011). There is also the case of successive large earthquakes occurring on neighboring fault segments within days or tens of days of each other. Well-known examples include the 2004–2005 Mw 9.0–8.7 Sunda megathrust doublet (Nalbant et al. 2005; Mignan et al. 2006), the 1999 Mw 7.4–7.1 Izmit and Düzce North Anatolian doublet (Parsons et al. 2000) and the 1811–1812 Mw 7.3–7.0–7.5 New Madrid Central United States triplet (Mueller et al. 2004). In contrast to aftershock statistics, in which the largest aftershock is about one magnitude unit below the mainshock magnitude (Båth 1965), clusters of large earthquakes with similar magnitudes are relatively rare but have a high damage potential.

Here, we quantify the expected impact of the spatiotemporal clustering of large earthquakes on seismic risk, considering the additional role of vulnerability increase. By "large", we refer to events that occur on distinguishable (and known) fault segments, i.e., roughly those with magnitudes greater than 6. Combining explicit interactions between hazardous events with dynamic vulnerability and exposure is the main feature of the generic multi-risk (GenMR) framework (Mignan et al. 2014; Matos et al. 2015). The present work takes advantage of the GenMR framework's capability to cope with heterogeneous risk processes and describes its conceptual application. We consider as underlying physical processes (1) the Coulomb stress transfer theory (e.g., King et al. 1994; King 2007; Nalbant et al. 2005; Parsons et al. 2000; Mueller et al. 2004; Zhan et al. 2011; Toda et al. 1998; Parsons 2005) and (2) repeated reduction of building ductility capacity (e.g., Iervolino et al. 2014) based on simple relationships between interstory drift and spectral acceleration (e.g., Baker and Cornell 2006). While Coulomb stress transfer is well established, other processes could be considered, such as fluid migration (e.g., Miller et al. 2004). The choice of the underlying physical processes is independent of the GenMR modeling structure.

For illustration purposes, we consider the thrust fault system of northern Italy and a generic building stock composed of fictitious low-rise buildings of different performances. Note that other fault systems could have been laid below our generic building stock; we chose that of northern Italy because the dataset is readily available and detailed. Moreover, the analyzed region recently encountered a doublet of magnitude M ~ 6 earthquakes (the 2012 Emilia-Romagna seismic sequence; Anzidei et al. 2012) with the second event yielding significantly more damage, partly due to buildings rendered more vulnerable by the first shock (the number of homeless people rose from 5000 to 15,000 between the two events; Magliulo et al. 2014). Coulomb stress transfer has already been applied with success to describe the clustering of large earthquakes in Italy, including the 2012 cluster (e.g., Cocco et al. 2000; Ganas et al. 2012). The aim of this study is to provide an overview of the combined effects of earthquake clustering and damage-dependent fragility on seismic risk, in particular on the shape of the seismic risk curve. The method and the risk results apply in principle to any region subject to multiple active faults. The paper is methodological in nature and not a risk study of the Emilia catastrophe, which would require a more elaborate engineering approach to dynamic vulnerability.

2 Method

2.1 Large earthquakes clustering by Coulomb stress transfer

2.1.1 Coulomb stress transfer theory

The phenomenon of earthquake interaction is well established with the underlying process described by the theory of Coulomb stress transfer (e.g., King et al. 1994). In its simplest form, the Coulomb failure stress change is

$$\Delta \sigma_{\text{f}} = \Delta \tau + \mu^{\prime } \Delta \sigma_{\text{n}}$$
(1)

where Δτ is the shear stress change, Δσ n the normal stress change and μ′ the effective coefficient of friction. Failure is promoted if Δσ f > 0 and inhibited if Δσ f < 0 (see King (2007) for a review). This is not to be confused with rupture propagation due to dynamic stress changes, which may lead to greater earthquakes (Mignan et al. 2015a).

Coulomb stress transfer is generally not considered in seismic hazard assessment except occasionally in time-dependent earthquake probability models where the “clock change” effect of a limited number of historical earthquakes is included (Toda et al. 1998; Field 2007; Field et al. 2009). The conditional probability of occurrence of an earthquake is then expressed through a non-stationary Poisson process as

$$\Pr \left( {\Delta t} \right) = 1 - \exp \left( { - N} \right)$$
(2)

where N is the number of events expected during Δt. Following Toda et al. (1998),

$$N = \lambda \Delta t + \lambda A_{t}$$
(3)

The first term represents the permanent stress change (so-called clock change) with

$$\lambda = \frac{1}{{\frac{1}{{\lambda_{0} }} - \frac{{\Delta \sigma_{f} }}{{\dot{\tau }}}}}$$
(4)

where λ 0 is the rate prior to the interaction, Δσ f the stress change and \(\dot{\tau }\) the stressing rate. The second term of Eq. 3 represents the transient stress amplification (Dieterich 1994)

$$A_{t} = t_{a} \log \left( {\frac{{1 + \left( {\exp \left( { - \frac{{\Delta \sigma_{f} }}{A\sigma }} \right) - 1} \right)\exp \left( { - \frac{\Delta t}{{t_{a} }}} \right)}}{{\exp \left( { - \frac{{\Delta \sigma_{f} }}{A\sigma }} \right)}}} \right)$$
(5)

This transient phenomenon is described by Δσ f, the constitutive parameter Aσ and the aftershock duration t a = Aσ/\(\dot{\tau }\) (Dieterich 1994) (see Parsons (2005) for a review).
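For concreteness, Eqs. 2–5 can be evaluated in a few lines. The sketch below is a minimal Python illustration with parameter values of our own choosing; treating the singular clock-change case (denominator of Eq. 4 non-positive) as certain failure is our assumption for display purposes, and the natural logarithm is assumed in Eq. 5.

```python
import numpy as np

def conditional_probability(dsig_f, tau_dot, A_sigma, lam0=1e-3, dt=1.0):
    """Pr(dt) from Eqs. 2-5 (Toda et al. 1998; Dieterich 1994).
    Stresses in bar, rates in 1/yr."""
    denom = 1.0 / lam0 - dsig_f / tau_dot        # Eq. 4 denominator
    if denom <= 0.0:
        return 1.0                               # clock advance exceeds waiting time (assumption)
    lam = 1.0 / denom                            # Eq. 4: permanent rate change
    t_a = A_sigma / tau_dot                      # aftershock duration
    gamma = np.exp(-dsig_f / A_sigma)
    A_t = t_a * np.log((1.0 + (gamma - 1.0) * np.exp(-dt / t_a)) / gamma)  # Eq. 5
    N = lam * dt + lam * A_t                     # Eq. 3
    return 1.0 - np.exp(-N)                      # Eq. 2

print(conditional_probability(dsig_f=0.5, tau_dot=1e-3, A_sigma=0.1))  # ~0.17
```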

2.1.2 Sensitivity analysis

The parameter set \(\theta = \{{\Delta {\sigma_{f}}}, \dot{\tau },A\sigma \}\) is defined over the intervals 10−3 ≤ Δσ f ≤ 1 bar, 10−4 ≤ \(\dot{\tau }\) ≤ 10−1 bar/year and 10−2 ≤ Aσ ≤ 10 bar for sensitivity analysis. These intervals are representative ranges of known parameter variations (e.g., King 2007; Catalli et al. 2008; Toda et al. 1998). Figure 1 shows the influence of each one of the parameters on the conditional probability Pr(Δt = 1 year, λ 0 = 10−3 year−1, θ) (Eq. 2) averaged over the ranges of the two remaining free parameters. Δσ f represents the relative local triggering (static stresses decreasing with the inverse of the cubic distance), while \(\dot{\tau }\) controls the absolute regional triggering (being related to the tectonic context).

Fig. 1
figure 1

Sensitivity of the mean conditional probability Pr (Eq. 2) to different values of the parameter set \(\theta = \{ \Delta \sigma_{f} ,\dot{\tau },A\sigma \}\) with Δt = 1 year and λ 0 = 10−3 year−1 fixed. For the present application (see Sect. 3), we fix Aσ = 0.1 bar and test \(\dot{\tau }\) = {10−4, 10−3, 10−2} bar/year to represent strong, medium and weak clustering

We find that the parameter Aσ has a relatively limited influence on probability changes compared to Δσ f and \(\dot{\tau }\), which show opposite effects. A strong earthquake clustering requires a low stressing rate (region-dependent) and/or a high stress change (perturbing earthquake very close to the target fault) (e.g., Parsons 2005). These characteristics remain similar for different values of λ 0. The role of Coulomb stress transfer on seismic risk is investigated in the application to the thrust fault system of northern Italy (see Sect. 3).
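A minimal sketch of this averaging exercise, reusing `conditional_probability` from Sect. 2.1.1, is given below; the 20-point log-spaced grids are an arbitrary resolution choice on our part.

```python
import numpy as np

# Mean Pr as a function of dsig_f, averaged over the tau_dot and A_sigma
# intervals (cf. Fig. 1); conditional_probability as defined in Sect. 2.1.1.
dsig_grid = np.logspace(-3, 0, 20)     # 1e-3..1 bar
tdot_grid = np.logspace(-4, -1, 20)    # 1e-4..1e-1 bar/yr
Asig_grid = np.logspace(-2, 1, 20)     # 1e-2..10 bar

mean_pr = [np.mean([conditional_probability(d, t, a)
                    for t in tdot_grid for a in Asig_grid])
           for d in dsig_grid]
```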

2.1.3 GenMR implementation

GenMR simulates N sim multi-risk scenarios based on a variant of the Markov chain Monte Carlo method (Mignan et al. 2014). Each simulation generates a time series in the interval Δt = [t 0, t max] in which events are drawn from a non-stationary Poisson process. It requires as input (1) an n-event stochastic set with identifier Ev i and long-term recurrence rate λ 0(Ev i ) and (2) an n × n hazard correlation matrix (HCM) with fixed conditional probabilities Pr(Ev j | Ev i ) or time-variant conditional probabilities Pr(Ev j |Η(t) = {Ev(t 1), Ev(t 2), …, Ev(t)}), Η being the history of event occurrences up to time t (i.e., process memory). Hazard intensities and damages are introduced in Sect. 2.2.

Let us denote by λ(EQ j , t k ) the non-stationary rate of target event EQ j at the occurrence time t k of the kth event in the time series, by λ 0(EQ j ) = λ(EQ j , t 0) the long-term rate of EQ j , and set Η(t 0) = {Ø}. Due to the accumulation of permanent stress changes after each earthquake occurrence, a summing iteration of Eq. 4 yields

$$\lambda \left( {{\text{EQ}}_{j} ,t_{k} } \right) = \frac{{\lambda \left( {{\text{EQ}}_{j} ,t_{0} } \right)}}{{1 - \lambda \left( {{\text{EQ}}_{j} ,t_{0} } \right)\sum\nolimits_{i = 1}^{k} {\frac{{\Delta \sigma_{f} \left( {{\text{EQ}}_{i} \left( {t_{i} } \right),{\text{EQ}}_{j} } \right)}}{{\dot{\tau }\left( {{\text{EQ}}_{j} } \right)}}} }}$$
(6)

with Δσ f(EQ i (t i ), EQ j ) the stress change on EQ j due to the event EQ i occurring at time t i and \(\dot{\tau }\)(EQ j ) the stressing rate on the receiver fault of EQ j . Combining Eqs. 2 and 3, we obtain the time-variant HCM with conditional probability of occurrence

$$\Pr \left( {{\text{EQ}}_{j} |{\text{EQ}}_{i} \left( {t_{k} } \right),\Delta t} \right) = 1 - \exp \left\{ { - \lambda \left( {{\text{EQ}}_{j} ,t_{k} } \right)\left[ {\Delta t + A_{t} } \right]} \right\}$$
(7)

The HCM for EQ–EQ interactions (hereafter referred to as HCMEQ–EQ) thus depends solely on the matrix Δσ f(EQ i , EQ j ), the parameter set \(\theta = \{ \dot{\tau },A\sigma \}\) and the history of event occurrences Η defined by the summation term in Eq. 6. Since a ratio \(\Delta \sigma_{\text{f}} /\dot{\tau }\) ~ 50:1 is required to significantly skew occurrence probabilities with confidence greater than 80–85 % (Parsons 2005), we only consider Δσ f(EQ i , EQ j ) values that fulfill this condition.

In any given simulated time series (Fig. 2), the occurrence time of independent events is drawn from the uniform distribution with t ∈ [t 0, t max]. If EQ j occurs due to EQ i following Eq. 7, its occurrence time is fixed to t j  = t i  + ε with ε ≪ Δt. If t j  > t max, the event is excluded from the time series. A small ε represents temporal clustering within a time series. Its choice has no effect on the results of the present study, since dynamic vulnerability depends on the number of earthquakes in a cluster and not on their time interval (see Sect. 2.2). Temporal processes, such as repairs, are not included (i.e., non-resilient system).
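The time-series logic just described can be sketched as follows. This is a toy two-event system in which `pr_hcm` is a hypothetical placeholder standing in for the HCM of Eq. 7; secondary cascades (triggered events triggering further events) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
lam0 = {"EQ1": 1.0e-3, "EQ2": 2.0e-3}         # long-term rates (1/yr), illustrative
t0, t_max, eps = 0.0, 1.0, 1e-6               # annual time series, eps << dt

def pr_hcm(ev_i, ev_j):
    return 0.05                                # placeholder for Pr(EQj|EQi), Eq. 7

events = []                                    # independent events: Poisson count, uniform times
for ev, rate in lam0.items():
    for _ in range(rng.poisson(rate * (t_max - t0))):
        events.append((rng.uniform(t0, t_max), ev))

# first-generation triggered events; sorted() snapshots the list, so appends
# below are not themselves re-iterated (no cascades in this sketch)
for t_i, ev_i in sorted(events):
    for ev_j in lam0:
        if ev_j != ev_i and rng.random() < pr_hcm(ev_i, ev_j):
            if t_i + eps <= t_max:             # otherwise excluded from the series
                events.append((t_i + eps, ev_j))
```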

Fig. 2
figure 2

Examples of two simulation sets S 0 and S 1, representing no interaction (null hypothesis H 0) and interactions (H 1), respectively. Each simulation set is composed of N sim time series (or risk scenarios). The second time series is here empty to illustrate the fact that large earthquakes are rare and that many simulation-years “see” no earthquake (for northern Italy, the average return period of large earthquakes is c. 77 years). Modified from Mignan et al. (2014)

Let us now define the null hypothesis H 0 (simulation set S 0) as the case where there is no interaction and the hypothesis H 1 (simulation set S 1) as the case where earthquakes interact with each other (Fig. 2). If Δt ≪ 1/λ 0 in simulation set S 0, time series with more than one earthquake would be much rarer than time series with only one event (i.e., Poisson process). As a consequence, the potential for clock delays (or removal of events) would be much lower than for clock advances (or additions of events) in S 1. With S 1 likely to produce more earthquakes than S 0, the seismic moment rate would not be conserved. If the sum of moment rates \(\sum\nolimits_{i} {\dot{M}_{0i} } = \sum\nolimits_{i} {M_{0i} \lambda_{0i} }\) (Hanks and Kanamori 1979) in S 1 is not in the 3-sigma range of the natural fluctuations observed in S 0, we correct λ 0(Ev i ) of the stochastic event set, such that

$$\lambda_{0}^{'} = \lambda_{0} \frac{{\hat{\lambda }\left( {S_{0} } \right)}}{{\hat{\lambda }\left( {S_{1} } \right)}}$$
(8)

with \(\hat{\lambda }\) the estimated rate in a given simulation set. Simulation sets S 0 and S 1 are then regenerated with the modified stochastic event set. This action is repeated until the 3-sigma rule (i.e., regional seismic moment rate conservation) is satisfied (see example in Sect. 3.2). \(\lambda_{0}^{\prime }\) here represents the rate of trigger earthquakes, which is lower than the rate of trigger and triggered earthquakes combined.
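A minimal sketch of the Eq. 8 correction is given below, assuming the rate estimates \(\hat{\lambda }\)(S 0) and \(\hat{\lambda }\)(S 1) have already been measured from the two simulation sets; the numerical values are hypothetical.

```python
import numpy as np

def rescale_rates(lam0, rate_hat_S0, rate_hat_S1):
    """Eq. 8: deflate the long-term rates so that the interacting set S1
    conserves the regional seismic moment rate; in practice this is repeated
    until the 3-sigma test on the normalized moment rate is passed."""
    return np.asarray(lam0) * rate_hat_S0 / rate_hat_S1

lam0 = [1.0e-3, 5.0e-4, 2.0e-3]                   # hypothetical trigger rates (1/yr)
lam0_prime = rescale_rates(lam0, 0.012, 0.0144)   # e.g., S1 produced 20% more events
```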

2.2 Damage-dependent seismic fragility

2.2.1 Generic building characteristics and damage assessment

We consider a fictitious low-rise building with height H b and fundamental period

$$T_{\text{b}} = c_{1} H_{\text{b}}^{{c_{2} }}$$
(9)

with parameters c 1 = 0.075 and c 2 = 3/4 (for moment-resistant reinforced concrete structures), H b in meters and T b in seconds (see review by Crowley and Pinho 2010). The low-rise building is subjected to the spectral acceleration S a(T b) due to earthquake occurrences. We define the maximum interstory drift ratio ΔEQ as

$$\Delta_{\text{EQ}} = \exp \left( {a + b\log \left( {S_{\text{a}} \left( {T_{\text{b}} } \right)} \right)} \right)$$
(10)

with a and b empirical parameters (e.g., Baker and Cornell 2006).

We describe the generic capacity curve of the fictitious low-rise building by its stiffness K, yield strength Q and ductility capacity μ Δ (Fig. 3a). Further, we define the mean damage δ as a function f of the drift ratio (or shear deformation) Δ

$$\delta \left( \Delta \right) = \begin{cases} \left( {\Delta /\Delta_{\text{y}} } \right)^{3} & \Delta < \Delta_{\text{y}} \\ 1 + \left( {n_{\text{DS}} - 1} \right)\dfrac{\Delta - \Delta_{\text{y}} }{\Delta_{\max } - \Delta_{\text{y}} } & \Delta_{\text{y}} \le \Delta \le \Delta_{\max } \\ n_{\text{DS}} & \Delta > \Delta_{\max } \end{cases}$$
(11)

where n DS is the number of damage states, Δ y  = Q/K the yield displacement capacity and Δmax = Δ y μ Δ the maximum plastic displacement capacity. The relationship between δ and Δ is assumed linear within the plasticity (or ductility capacity) range. Within the elasticity range, we assume that δ decreases faster toward zero by following a power-law behavior. We fix the number of damage states (DS) at n DS = 5, with DS1 to DS5 representing insignificant, slight, moderate, heavy and extreme damage, respectively (Fig. 3b). Equation 11 indicates that DS1 is most likely at Δ = Δ y and DS5 at Δ = Δmax (e.g., FEMA 1998). We here assume that extreme damage corresponds to building collapse and that the building has a perfectly elasto-plastic behavior.

Fig. 3
figure 3

Damage assessment method: a generic capacity curve of a fictitious building; b matching damage states (Eq. 11); c derived fragility curves (Eq. 12)

We then generate fragility curves from the cumulative binomial distribution

$$\Pr \left( { \ge {\text{DS}}_{k} |\Delta_{\text{EQ}} } \right) = \sum\limits_{j = k}^{{n_{\text{DS}} }} {\frac{{n_{\text{DS}} !}}{{j!\left( {n_{\text{DS}} - j} \right)!}}\left( {\frac{{\delta \left( {\Delta_{\text{EQ}} } \right)}}{{n_{\text{DS}} }}} \right)^{j} \left( {1 - \frac{{\delta \left( {\Delta_{\text{EQ}} } \right)}}{{n_{\text{DS}} }}} \right)^{{n_{\text{DS}} - j}} }$$
(12)

for each damage state DS k with 0 ≤ k ≤ n DS (e.g., Lagomarsino and Giovinazzi 2006) (Fig. 3c). Equation 12 represents the uncertainty on the damage state for a given drift ratio, which relates to the concept of imprecise probability (e.g., Caselton and Luo 1992). Note that other formulations could have been used (e.g., for Italy, Faccioli et al. 1999; Dolce et al. 2003; Rasulo et al. 2015).
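Equations 11 and 12 translate directly into code. The sketch below is a minimal Python version, with the cumulative binomial of Eq. 12 written as a survival sum over damage states and an illustrative medium-performance building (Δ y  = 0.01, μ Δ = 4).

```python
import numpy as np
from math import comb

def mean_damage(drift, dy, dmax, n_ds=5):
    """Eq. 11: mean damage as a function of the drift ratio."""
    if drift < dy:
        return (drift / dy) ** 3
    if drift <= dmax:
        return 1.0 + (n_ds - 1) * (drift - dy) / (dmax - dy)
    return float(n_ds)

def pr_exceed_ds(k, drift_eq, dy, dmax, n_ds=5):
    """Eq. 12: probability of reaching damage state DS_k or worse."""
    p = mean_damage(drift_eq, dy, dmax, n_ds) / n_ds
    return sum(comb(n_ds, j) * p**j * (1.0 - p)**(n_ds - j)
               for j in range(k, n_ds + 1))

# one fragility-curve point for DS >= 3 (dy = 0.01, mu = 4 -> dmax = 0.04)
print(pr_exceed_ds(3, drift_eq=0.02, dy=0.01, dmax=0.04))
```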

2.2.2 Concept of damage-dependent fragility

Numerous structural engineering studies deal with damage-dependent fragility (e.g., Polese et al. 2013; Iervolino et al. 2014—and references therein). Most of those studies include extensive nonlinear dynamic analyses and numerical simulations of idealized two- or three-dimensional buildings. The scope of GenMR, as its name indicates, is not to provide an engineering-based platform for actual site-specific multi-risk assessment but a generic framework for a general understanding of hazard interactions and of other dynamical processes of the risk assessment chain. Here, damage-dependent seismic fragility must be viewed from that overarching perspective where transparency is key and a given degree of abstraction is required (Mignan et al. 2014, 2016a, 2016b; Komendantova et al. 2014; Liu et al. 2015; Matos et al. 2015). Albeit simplified, the procedure of deriving damage-dependent seismic fragility is not incompatible with engineering approaches and when integrated with the GenMR approach can in fact provide a blueprint for future region- or site-specific applications.

Conceptually, the capacity of a structure degrades with increased damage. We here consider, as the source of degradation, the decrease in the plasticity range (Δ y , Δ max,2 = Δ max,1 − Δ r ) due to the residual drift ratio

$$\Delta_{r} = \begin{cases} 0 & {\text{for}}\;\Delta_{\text{EQ}} \le \Delta_{\text{y}} \\ \Delta_{\text{EQ}} - \Delta_{\text{y}} & {\text{for}}\;\Delta_{\text{EQ}} > \Delta_{\text{y}} \end{cases}$$
(13)

This process yields a shift of the fragility curves toward lower Δ values (i.e., increased vulnerability). This is illustrated in Fig. 4, where the evolution of fragility curves per damage state is shown for different pre-damage levels. The solid curves represent the latent vulnerability curves, which are only altered for a damage state equal to or greater than DS2 (DS1 produces no residual drift on average). The role of changes in stiffness and building resonance period (not included in this study) is discussed in Sect. 3.3.

Fig. 4
figure 4

Damage-dependent fragility curves per damage state (different plots) for different pre-damage levels (different curves). The residual drift ratio (Eq. 13) is here directly defined from the pre-damage level, as shown by the evolution of the ductility capacity range in the top left plot

2.2.3 Sensitivity analysis

We investigate the role of repeated earthquake shaking on the damage time series of generic low-rise buildings of different structural performances. We define low, medium and high levels of seismic performance, corresponding to increasing plastic displacement capacity, by the ductility capacity values μ Δ = {2, 4, 6}, respectively. We analyze the N-time repeat of an earthquake producing a constant ΔEQ = 1.1Δ y , which represents insignificant damage (DS1) for the initial building (for any tested μ Δ value). We consider the stochastic damage process described by the random variables

$$\widetilde{\text{DS}}\left( {t_{i} } \right)\sim\,{\text{Bin}}\left( {n_{\text{DS}} ,\frac{{\delta \left( {\Delta_{\text{EQ}} ,\Delta_{\hbox{max} } \left( {t_{i - 1} } \right)} \right)}}{{n_{\text{DS}} }}} \right)$$
(14)

and

$$\tilde{\Delta }_{\hbox{max} } \left( {t_{i} } \right)\sim\Delta_{\hbox{max} } \left( {t_{i - 1} } \right) - \left( {f^{ - 1} \left( {\widetilde{\text{DS}}\left( {t_{i} } \right)} \right) - \Delta_{\text{y}} } \right)$$
(15)

where t i is the occurrence time of the ith earthquake with 1 ≤ i ≤ N, Δmax(t 0) the initial maximum plastic displacement capacity and δ = f(Δ) (Eq. 11). Results from 10,000 Monte Carlo simulations are shown in Fig. 5. Firstly, the median curves (solid black curves) indicate that the building of poor performance (μ Δ = 2) is the most prone to damage amplification, while no amplification is observed on average for the buildings of medium and high performances (μ Δ = {4, 6}) for N ≤ 10. Secondly, the first and third quartiles (dashed curves) indicate that the results are highly variable across simulations due to the stochasticity of the damage process, as described by the binomial distribution. Let us note that the repeat of a multitude of events producing insignificant damage (here ΔEQ = 1.1Δ y ) is expected in aftershock and induced seismicity sequences (e.g., Iervolino et al. 2014; Mignan et al. 2015b). In the case of large earthquake clustering, N is expected to remain small but with events more likely to directly produce moderate to high damage, thereby facilitating damage amplification (see Sect. 3).
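A compact version of this Monte Carlo experiment (Eqs. 13–15) is sketched below, reusing `mean_damage` from Sect. 2.2.1; the inversion f −1 uses the linear branch of Eq. 11, our simplifying assumption for damage states DS1 and above.

```python
import numpy as np

rng = np.random.default_rng(1)

def damage_series(mu, n_eq=10, dy=0.01, n_ds=5):
    dmax = mu * dy                              # initial plastic capacity
    drift = 1.1 * dy                            # constant demand (DS1 for intact building)
    path = []
    for _ in range(n_eq):
        p = mean_damage(drift, dy, dmax, n_ds) / n_ds
        ds = rng.binomial(n_ds, p)              # Eq. 14: stochastic damage state
        if ds >= 1:                             # Eq. 15: invert linear branch of Eq. 11
            drift_ds = dy + (ds - 1) * (dmax - dy) / (n_ds - 1)
            dmax = max(dy, dmax - (drift_ds - dy))   # Eq. 13: residual drift
        path.append(ds)
    return path

paths = np.array([damage_series(mu=2) for _ in range(10000)])
median_path = np.median(paths, axis=0)          # cf. solid black curves of Fig. 5
```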

Fig. 5
figure 5

Sensitivity of damage to different building performances (low, medium and high, i.e., ductility capacity values μ Δ = 2, 4 and 6, respectively) for the n EQ-time repeat of an earthquake producing a constant ΔEQ = 1.1Δ y , which represents insignificant damage (DS1) for the initial building (for any tested μ Δ value). Gray curves represent the different simulations where the damage state is drawn from the Binomial distribution (Eq. 14). Black solid and dashed curves represent the median and first and third quartiles, respectively

2.2.4 GenMR implementation

For each realization of a stochastic event EQ i in GenMR, a spatial footprint of the hazard intensity \(\tilde{I}\) is generated such that

$$\tilde{I}\left( {x,y} \right)\sim10^{{\log_{10} I\left( {x,y} \right) + {\text{Norm}}\left( {0,\sigma_{I} } \right)}}$$
(16)

with I the median of the expected hazard intensity and σ I its standard deviation in the log10 scale (see example of ground motion prediction equation in Sect. 3.1.3). The footprint is computed on the exposure grid (x, y), such that a hazard intensity value is attributed to each building location of the considered portfolio. The damage state \(\widetilde{\text{DS}}\left( {x,y} \right)\) is then computed from Eq. 14 with ΔEQ computed from Eq. 10. This means that three stochastic processes are considered in GenMR: the earthquake occurrence (non-stationary Poisson), the hazard intensity (lognormal) and the damage state (binomial).

Each location of coordinates (x,y) represents one building with the four attributes T b, Δ y , μ Δ and Δ r . The first three parameters remain constant over time, while the fourth is a function of ΔEQ(EQ i ) (Eq. 13). For each stochastic event EQ i (t k ), the loss is defined as the total number N DS4+ of buildings with damage state DS4 or DS5 in the exposure grid. Any given simulated time series is thus defined by a list of events with identifier EQ i , time of occurrence t ∈ [t 0, t max] and produced loss N DS4+ (the parent earthquake, if any, is provided as metadata).
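The footprint-to-damage chain (Eqs. 16, 10 and 14) can be sketched as follows. The grid, median field and sigma are placeholders; a single intra-event residual is applied to the whole footprint as a crude stand-in for a spatially correlated ground motion field, and `mean_damage` is reused from Sect. 2.2.1.

```python
import numpy as np

rng = np.random.default_rng(7)

Sa_med = np.full((100, 100), 300.0)         # placeholder median Sa(Tb) field (cm/s^2)
sigma_log10 = 0.25                          # placeholder intra-event sigma (log10)

eta = rng.normal(0.0, sigma_log10)          # one residual per event footprint (simplification)
Sa = 10 ** (np.log10(Sa_med) + eta)         # Eq. 16

a, b = -3.2, 1.0                            # Eq. 10 calibration (see Sect. 3.1.3)
drift = np.exp(a + b * np.log(Sa / 981.0))  # Eq. 10, Sa converted from cm/s^2 to g

p = np.vectorize(mean_damage)(drift, 0.01, 0.04) / 5.0
ds_map = rng.binomial(5, p)                 # Eq. 14 at every building location
```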

3 GenMR application

We provide an application of the proposed procedure tailored to a region in northern Italy by considering available seismogenic faults and a generic building portfolio. It shall be noted that the application is an illustrative example that does not consider the true building stock of the region; such an inclusion would require information on the built environment beyond the scope of the present investigation. However, the use of low-rise buildings with various seismic performance levels (i.e., low, medium and high) provides a suitable starting point, as they represent the majority of the Italian residential building stock. Additional building typologies could be added for more complex analyses if exposure data are available. Thus, given inherent limitations (see Sect. 3.3), this application remains theoretical; a summary of its main procedural steps and elements is presented in the next sections.

3.1 Inputs

3.1.1 Stochastic event set

We define a set of n = 30 stochastic events representing characteristic earthquakes on idealized straight fault segments. These segments are simplified versions of the 20 shallow thrust composite faults defined in the 2013 European Seismic Hazard Model (hereafter ESHM13) for northern Italy. This model represents the latest seismic hazard model for the European–Mediterranean region (Woessner et al. 2015; Basili et al. 2013; Giardini et al. 2013). Table 1 lists the ESHM13 identifier, slip rate \(\dot{s}\), dip, rake and maximum magnitude M max of the 20 ESHM13 composite sources as well as the length L, width W, characteristic magnitude M char and long-term rate λ 0 = λ(t 0) of the 30 stochastic events. Figure 6 shows the map of northern Italy and the correspondence between the stochastic events EQ i and the ESHM13 sources. Only lateral triggering is considered on these simplified fault geometries (Sect. 3.1.2). Potential interactions in the deeper parts of the crust are not included.

Table 1 Stochastic earthquake set
Fig. 6
figure 6

Surface projection of the 20 ESHM13 shallow thrust composite faults in northern Italy (numbered ITCxxxx). The fault traces were simplified to series of straight lines for Coulomb stress transfer calculations. Stochastic earthquakes are defined as the characteristic earthquakes hosted by the 30 straight segments (numbered 1–30). The smaller ~10-km-wide patches represent the spatial resolution used for Coulomb stress transfer calculations before averaging. Colors represent the stress changes Δσ f(EQ1, EQ j ) due to event EQ1 (gray segment) on all other segments (i.e., hosts of EQ j )

M char is derived from the seismic moment M 0 [dyn.cm]

$$\log M_{0} = c + d\,M_{\text{char}}$$
(17)

with c = 16.05, d = 1.5 (Hanks and Kanamori 1979) and

$$\log A = - 13.79 + 0.87\log M_{0}$$
(18)

with A = LW [m2] (Yen and Ma 2011; see Stirling et al. (2013) for a review). It follows that M char ∈ [6.1, 6.6] in the present case.

The rate λ 0 is derived from the long-term slip rate \(\dot{s}\) and fault displacement u following Wesnousky (1986)

$$\lambda_{0} \left( {{\text{EQ}}_{i} } \right) = \frac{{u\left( {{\text{EQ}}_{i} } \right)\dot{s}\left( {{\text{EQ}}_{i} } \right)}}{{\left( {\mathop \sum \nolimits_{k = 1}^{k = n} u\left( {{\text{EQ}}_{k} } \right)} \right)^{2} }}$$
(19)

weighted by the number n of stochastic events EQ k possible on the same ESHM13 source (Table 1). It yields the total characteristic earthquake rate \(\sum {\lambda_{0} = 0.013}\), or on average one M char earthquake every ~77 years somewhere in northern Italy. The fault displacement u (also used in stress transfer calculations) is obtained from

$$u = \frac{{M_{0} }}{\mu LW}$$
(20)

with the shear modulus μ = 3.2 × 1011 dyn/cm2 (Aki 1966).
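The parameter chain of Eqs. 17–20 is easily reproduced; the sketch below inverts Eq. 18 for the seismic moment of a given fault segment (CGS units as in the text; the segment dimensions are illustrative only).

```python
import numpy as np

MU = 3.2e11                                    # shear modulus (dyn/cm^2), Aki (1966)

def characteristic_event(L_km, W_km):
    """Eqs. 17-18 (Hanks and Kanamori 1979; Yen and Ma 2011) and Eq. 20."""
    A_m2 = (L_km * 1e3) * (W_km * 1e3)         # rupture area A = LW (m^2)
    log_M0 = (np.log10(A_m2) + 13.79) / 0.87   # Eq. 18 inverted -> M0 in dyn*cm
    M_char = (log_M0 - 16.05) / 1.5            # Eq. 17
    u_cm = 10**log_M0 / (MU * (L_km * 1e5) * (W_km * 1e5))   # Eq. 20, lengths in cm
    return M_char, u_cm / 100.0                # slip u in m

def char_rate(u_i, s_dot_i, u_same_source):
    """Eq. 19 (after Wesnousky 1986): u in m, s_dot in m/yr -> rate in 1/yr."""
    return u_i * s_dot_i / np.sum(u_same_source) ** 2

M_char, u = characteristic_event(L_km=20.0, W_km=10.0)   # ~6.2, ~0.4 m
```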

3.1.2 HCMEQ–EQ

The HCMEQ–EQ is built from the matrix of Coulomb stress changes Δσ f(EQ i , EQ j ), here computed using the Coulomb 3 software (Lin and Stein 2004; Toda et al. 2005, 2011). The inputs to the software are the effective coefficient of friction μ′ = 0.4, the fault segment characteristics (Fig. 6; Table 1) and the earthquake slip u [m] (Eq. 20). Δσ f [bars] is computed on ~10-km-wide dislocation patches and then averaged over the full fault segments. Figure 6 shows as an example the spatial distribution of Δσ f(EQ1, EQ j ), which indicates that triggering occurs principally at the tips of the trigger segment, on segments of similar strike and mechanism (all reverse).

The impact of Δσ f on clustering patterns and of \(\dot{\tau }\) on clustering levels is investigated in Sect. 3.2.1. It should be noted that the maximum stress changes, computed on segments closest to a rupture, rarely exceed 1 bar. The stresses released on ruptured segments are of the order of several bars. For central Italy, Catalli et al. (2008) obtained 7 × 10−5 ≤ \(\dot{\tau }\) ≤ 7 × 10−3 bar/year based on seismicity rates and Aσ = 0.4 bar. For the present application, we fix Aσ = 0.1 bar and test \(\dot{\tau }\) = {10−4, 10−3, 10−2} bar/year to represent strong, medium and weak clustering (Fig. 1). Loose constraints on the regional value of \(\dot{\tau }\) (e.g., Catalli et al. 2008) do not allow us to determine which clustering regime is the most likely in northern Italy.

3.1.3 Generic building characteristics and damage assessment

We consider a generic model of low-rise buildings with height H b ~ 3.8 m and fundamental period T b ~ 0.2 s (Eq. 9). A yield displacement capacity Δ y of 0.01 is adopted as a reference value for reinforced concrete buildings (e.g., Panagiotakos and Fardis 2001). We test three seismic performance levels for the low-rise buildings (low, medium and high), represented by the ductility capacity values μ Δ = {2, 4, 6} (e.g., FEMA 1998), which lead to plastic displacement capacities Δmax = {0.02, 0.04, 0.06}. Let us note that a low ductility capacity is expected in the existing historic building portfolio of northern Italy (and of Europe in general). Using different ductility capacities allows us to approximate variations observed in structures of different ages (residential houses from the 1960s to present) and different performance levels (e.g., industrial buildings).

The damage to the building is evaluated from the interstory drift ratio (Eq. 11 or 14), itself estimated from the spectral acceleration S a(T b) produced by the earthquake (Eq. 10). We fix the parameters of Eq. 10 to b = 1 (e.g., Baker and Cornell 2006) and a = log(Δ y ) + log(μ Δ) − log(S a,collapse(T b, μ Δ)) = −3.2, assuming that a spectral acceleration S a,collapse = 1 g leads to the collapse (DS5) of buildings of moderate performance (μ Δ = 4). This is a strong assumption, which controls the overall level of damage estimated in this study. However, it provides a reasonable and transparent calibration of damage for our structural model of the fictitious low-rise buildings. Let us note that a = −3.2 is close to existing values (e.g., Baker and Cornell 2006). For μ Δ = 2 or 6, our calibration leads to S a,collapse = 0.5 g and 1.5 g, respectively.

We compute S a(T b) for each stochastic event using the ground motion prediction equation (GMPE) of Akkar and Bommer (2010)

$${\log \left( {S_{\text{a}} } \right) = b_{1} + b_{2} M + b_{3} M^{2} + \left( {b_{4} + b_{5} M} \right)\log \sqrt {R_{{j{\text{b}}}}^{2} + b_{6}^{2} } + b_{7} S_{S} + b_{8} S_{A} + b_{9} F_{N} + b_{10} F_{R} }$$
(21)

with S a in cm/s2, M = M char the earthquake magnitude (Table 1), R jb the distance to the fault surface trace in km, S S  = 1 and S A  = 0 (soft instead of stiff soil) and F N  = 0 and F R  = 1 (reverse instead of normal faulting). The fitting parameters b 1–10, which depend on period T, are given in Akkar and Bommer (2010). We computed S a(T b) on a regular grid of generic buildings spaced every 0.01° (~1 km) in longitude and latitude in northern Italy.
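In code, Eq. 21 reads as below. The coefficients b 1–10 are period-dependent and must be taken from the tables published in Akkar and Bommer (2010); the zeros in the sketch are placeholders only, not real values, and log10 is assumed.

```python
import numpy as np

def akkar_bommer_2010_log_sa(M, R_jb, b, SS=1.0, SA=0.0, FN=0.0, FR=1.0):
    """Eq. 21 functional form; returns log10(Sa) with Sa in cm/s^2.
    b = (b1, ..., b10): period-dependent coefficients from the published tables."""
    b1, b2, b3, b4, b5, b6, b7, b8, b9, b10 = b
    return (b1 + b2 * M + b3 * M**2
            + (b4 + b5 * M) * np.log10(np.sqrt(R_jb**2 + b6**2))
            + b7 * SS + b8 * SA + b9 * FN + b10 * FR)

b_placeholder = (0.0,) * 10                  # NOT real coefficients
log_sa = akkar_bommer_2010_log_sa(M=6.3, R_jb=15.0, b=b_placeholder)
```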

Figure 7 shows the hazard and damage footprints for EQ30 and for the three building performances. The left column presents the median estimates, while the right column presents stochastic realizations (or scenarios), which include the spatial correlation of the ground motion fields (Eq. 16) and uncertainties at the damage levels (Eq. 14). In Eq. 16, σ I is defined as the intra-event sigma, as provided in Akkar and Bommer (2010). The use of intra-event sigma prevents the inflation of the ground motion fields, which in turn might introduce bias to the risk estimates (Jayaram and Baker 2009). Only stochastic versions are considered in GenMR. The role of damage-dependent fragility on damage footprints is investigated in Sect. 3.2.2.

Fig. 7
figure 7

Hazard and damage spatial footprints of event EQ30. The stochastic version of hazard represents the spatial correlation of the ground motion fields defined by the intra-event sigma (Eq. 16). The stochastic version of damage represents imprecise probabilities described from a binomial distribution (Eq. 14) (on top of the random ground motion field). Low, medium and high building performances correspond to ductility capacity values μ Δ = 2, 4 and 6, respectively

3.2 Results

3.2.1 Hazard characteristics of earthquake clustering

Four simulation sets are produced, each composed of N sim = 106 simulations. The simulation set S 0 represents the null hypothesis H 0 that earthquakes are independent, following a Poisson process. The simulation sets S 1, S 2 and S 3 represent the hypothesis H 1 that earthquakes interact with each other following the Coulomb stress transfer theory, with \(\dot{\tau }\) = {10−4, 10−3, 10−2} bar/year, respectively.

We verify that the total seismic moment rate \(\dot{M}_{0}\) is conserved in all simulation sets by first evaluating the natural variation of \(\dot{M}_{0} \left( {H_{0} } \right)\) in 1000 iterations of S 0. The resulting normalized distribution \(\dot{M}_{0} \left( {H_{0} } \right)/\sum\nolimits_{i} {M_{0i} \lambda_{0i} }\), with λ 0i the long-term expected rate of EQ i (Table 1), is shown in Fig. 8a. We then consider that the total seismic moment rate is conserved if the values \(\dot{M}_{0} \left( {S_{1} } \right)\), \(\dot{M}_{0} \left( {S_{2} } \right)\) and \(\dot{M}_{0} \left( {S_{3} } \right)\) normalized by \(\sum\nolimits_{i} {M_{0i} \lambda_{0i} }\) remain within ±3σ(H 0) = 0.025 (dotted lines in Fig. 8a). For S 1 and S 2, for which this is not the case, the long-term rate \(\lambda_{0}^{\prime }\) is corrected by removing the implicit role of interactions from λ 0 (Eq. 8) (dashed lines in Fig. 8a). This assumes that the λ 0 values derived from slip rates in ESHM13 represent the long-term behavior of earthquakes in northern Italy and include the phenomenon of clustering.

Fig. 8
figure 8

Earthquake clustering statistics: a distribution of the natural variation of the normalized total seismic moment rate \(\dot{M}_{0} \left( {H_{0} } \right)/\sum\nolimits_{i} {M_{0i} \lambda_{0i} }\) in 1000 iterations of simulation set S 0 where earthquakes are independent. The long-term rate is corrected to \(\lambda_{0}^{\prime }\) by removing the role of interactions from λ 0 (Eq. 8) for sets S 1 and S 2 in order for the normalized seismic moment rate to remain within ±3σ(H 0) = 0.025 (dotted lines) (i.e., seismic moment rate conservation); b distribution of the number of earthquakes per annual simulation, best fitted by a Poisson process for H 0 (no interactions) and by a negative binomial process for H 1 (earthquake clustering). Numbers correspond to the amount of cases in 106 simulations

Figure 8b shows the distribution of the number of earthquakes N per year for simulation sets S 0 and S 1. The distribution is best fitted by a Poisson process for H 0 and by a negative binomial process for H 1, with index of dispersion \(\phi = {\text{Var}}\left( {n_{\text{EQ}} } \right)/\overline{{n_{\text{EQ}} }} = \left\{ {1.38,1.07,1.01} \right\}\) for sets S 1, S 2 and S 3, respectively (combining maximum likelihood estimation and the Akaike information criterion). The values taken by ϕ confirm that the degree of clustering (or over-dispersion) increases with decreasing stressing rate. This represents an instance of hazard migration (or hazard clustering), as described at the abstract level by Mignan et al. (2014). In the case of strong clustering (S 1), earthquake doublets become relatively common in comparison with the null hypothesis of no earthquake interaction. In rare cases, earthquake triplets and even quadruplets are also observed. Only results from S 1 (\(\dot{\tau }\) = 10−4 bar/year) are considered in the rest of this study to illustrate the potential impact of earthquake clustering on seismic risk.
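The clustering diagnostic of Fig. 8b can be sketched as follows, with a method-of-moments negative binomial fit standing in for the maximum likelihood estimation used in the text (the moment fit is only valid when the counts are over-dispersed).

```python
import numpy as np
from scipy import stats

def dispersion_index(counts):
    return np.var(counts) / np.mean(counts)    # phi > 1 indicates clustering

def aic_poisson_vs_nbinom(counts):
    mu, var = np.mean(counts), np.var(counts)
    aic_pois = 2 * 1 - 2 * np.sum(stats.poisson.logpmf(counts, mu))
    p = mu / var                               # moment fit; requires var > mu
    r = mu * p / (1.0 - p)
    aic_nb = 2 * 2 - 2 * np.sum(stats.nbinom.logpmf(counts, r, p))
    return aic_pois, aic_nb                    # smaller AIC wins

counts = np.random.default_rng(3).poisson(0.013, 10**6)  # stand-in S0-like counts
print(dispersion_index(counts))
```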

Let us note that clusters correspond to combinations of individual ruptures on a same fault or on several neighbor faults (i.e., where stress increases are the strongest). Some of the longest observed chains of earthquakes exemplify this characteristic with EQ12 → EQ13 → EQ14 → EQ6, EQ25 → EQ26 → EQ5 or EQ4 → EQ3 → EQ29 (see Fig. 6).

3.2.2 Risk characteristics of earthquake clustering (including dynamic vulnerability)

We now analyze additional simulation sets, which include loss values defined by the metric N DS4+ for low, medium and high building performances (μ Δ = {2, 4, 6}, respectively). Hypothesis H 1 represents the case of strong earthquake clustering with constant vulnerability. Hypothesis H 2 also includes strong earthquake clustering but adds dynamic vulnerability. All simulation sets are compared to their respective null hypotheses H 0, in which earthquakes are independent and building fragility remains constant.

Figure 9 shows seismic risk curves with losses defined by the metric N DS4+. They are computed for the hypotheses H 0 (dotted curves), H 1 (dashed curves) and H 2 (solid curves) and for the three building performances (low, medium and high represented in brown, red and orange, respectively). Independently of the building performance, an increase is observed at the tail of the risk curve for both H1 and H2, compared to H 0. The increase is more notable for earthquake clustering than for dynamic vulnerability. However, the impact of damage-dependent fragility increases with increasing losses, in agreement with the concept of risk amplification (Mignan et al. 2014). This tail fattening is representative of extreme seismic catastrophes, which in the present case would be large earthquake clusters combined with nonlinear vulnerability increase.
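For reference, a risk curve like those of Fig. 9 is obtained empirically by ranking per-simulation losses; the heavy-tailed array below is a stand-in for actual GenMR output, used only to make the snippet self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.pareto(2.0, 10**6) * 10.0         # stand-in annual N_DS4+ losses
thresholds = np.logspace(0, 4, 50)
annual_pr_exceed = [(losses >= x).mean() for x in thresholds]  # Pr(N_DS4+ >= x)
```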

Fig. 9
figure 9

Seismic risk curves defined as the annual probability of exceeding a given N DS4+ threshold, N DS4+ being the number of fictitious buildings experiencing damage state DS4 (heavy damage) or DS5 (extreme damage or collapse). Results are shown for the three building performances (low, medium and high) and for the three hypotheses H 0 (no interaction), H 1 (earthquake clustering) and H 2 (earthquake clustering and damage-dependent fragility). The tail of the risk curve fattens from H 0 to H 2 due to the addition of multi-risk processes

This tail fattening, although non-negligible, increases losses in the present example by at most about two thirds compared to the Poisson hypothesis for a fixed return period. It means that the absolute impact of earthquake clustering remains limited in our generic application, even when the clustering level is high and building performance poor. Let us note that the increase in risk is here compared to the Poisson case, where clusters of events can occur randomly. This case is rarely considered in seismic risk analysis (e.g., Bazzurro and Luco 2005), although random clusters are known to be fat-tail phenomena (see Foss et al. (2013) for a review). Compared to 1-event occurrences only, losses can almost triple when dynamic effects are considered, while they only double if earthquakes are independent and damage static. These values should, however, be taken with caution. Overall, a detailed hazard and risk assessment that would include earthquake clustering seems unwarranted for common buildings. Only for critical infrastructures do the dynamic effects start to have an impact on seismic risk (i.e., at the tail). The present results are generic in nature, and only site-specific analyses will allow us to investigate to what extent this tail fattening is significant for specific portfolios.

The relative impacts of earthquake clustering and dynamic vulnerability are illustrated in Fig. 10, where damage maps with and without damage-dependent fragility are compared in the case of the triplet EQ12 → EQ13 → EQ14. To improve visualization, we show the mean damage δ evolution of moderate performance buildings without spatial hazard correlation. We first see that earthquake clustering has the strongest impact, since it roughly multiplies the expected damage by the number of earthquakes in the cluster, compared to the damage due to a single event in the null hypothesis. While the increase in damage due to damage-dependent vulnerability is clear from Figs. 9 and 10, it remains a limited phenomenon localized at fault segment intersections (where hazard footprints overlap).

Fig. 10
figure 10

Median damage maps with and without dynamic vulnerability in the case of the triplet EQ12 → EQ13 → EQ14 for moderate performance buildings (μ Δ = 4). Earthquake clustering has widespread effects, while dynamic vulnerability has more local effects (at fault intersections)

It is the clustering process, as illustrated in Fig. 11, that explains the unusual step-like structure of the seismic risk curves of Fig. 9. The usual concave shape observed in log plots is characteristic of a homogeneous process where risk is due to an N-event cluster (or N-cluster) with N constant (note again that N = 1 in standard seismic risk analyses). This is best described in the Poisson case, where we observe "standard" risk curves for the 1-event and 2-event systems, separated by a transitional risk curve. This transitional phase of the risk curve corresponds to the loss range where 1-event and 2-event clusters are mixed. This pattern is observed systematically, with the dynamic risk curve showing additional jumps for the transitions between 2- and 3-event clusters and between 3- and 4-event clusters.

Fig. 11
figure 11

Risk curve shape analysis. The step-like structure of the risk curves is explained by transitional phases between N-clusters of different N. The main transition is highlighted by vertical dotted lines in both Poisson and dynamic cases

3.3 Limitations

In the analysis of earthquake interactions, we assumed a simplified fault geometry (Fig. 6), a constant-displacement slip model and an effective coefficient of friction μ′ = 0.4. Average fault characteristics (strike, rake, dip) were taken from the ESHM13 database. King (2007) showed that small errors in dip and strike, such as those introduced by a simplified fault geometry, do not significantly impact the stress values except in the near field. Zhan et al. (2011) observed that different slip models only result in significant differences in the near field. Near-field analysis is important for the study of the distribution of small aftershocks but less so for the study of fault segment coupling. Parsons (2005) investigated the role of numerous parameters, including the friction coefficient, rake, dip and slip model, on earthquake probability estimates. The author concluded that stress transfer modeling carries significant uncertainty. In agreement with Parsons (2005), we only considered the case \(\Delta \sigma_{\text{f}} /\dot{\tau }\) ≥ 50:1 to limit the analysis to significant stress changes.

For seismic hazard assessment, the attribution of magnitudes and rates was based on standard methods (Aki 1966; Hanks and Kanamori 1979) and the GMPE selection on recent and well-established results (Akkar and Bommer 2010). However, we solely considered characteristic earthquakes (M > 6.0), while classical probabilistic seismic hazard assessment (PSHA) often considers the Gutenberg–Richter law or a combination of both Gutenberg–Richter and characteristic models (e.g., Field et al. 2009; Basili et al. 2013). Moreover, the choice of a different GMPE would yield different outcomes, as would a different sigma (i.e., total rather than intra-event), since ground motion uncertainties (both epistemic and aleatory) are leading factors in seismic hazard assessment (e.g., Mignan et al. 2015b; Marzocchi et al. 2015). Our choices were made for the sake of simplicity and transparency. They limit the number of scenarios considered, provide a clear constraint on fault segment limits and avoid cumbersome computations involving floating ruptures (within and across fault segments) and logic trees. Note that while the Gutenberg–Richter power law naturally leads to a fat tail of the seismic risk curve, consideration of earthquake interactions and dynamic vulnerability would make this tail fatter.

Other simplifications concern the structural damage analysis, including damage calibration and the derivation of damage-dependent fragility curves. Damage calibration (parameters a and b in Eq. 10) is highly approximate but remains reasonable in the context of a fictitious low-rise building with generic characteristics. The impact of our choice is clearly illustrated in the damage maps of Fig. 7. The definition of fragility curves is based on the fundamental link between building capacity curve and expected damage, in agreement with existing recommendations (e.g., FEMA 1998; Lagomarsino and Giovinazzi 2006). Damage-dependent vulnerability is here controlled by the decrease in the ductility range, which is known to be the main process of structure deterioration (e.g., Iervolino et al. 2014). However, other parameters, such as the stiffness and the strength of the building, also evolve with increasing damage (FEMA 1998; Polese et al. 2013). One could, for instance, consider the decrease in stiffness K′ = λ K K with the generic reduction factor λ K  = 1.2 − 0.2δ, which can be derived from the FEMA (1998) guidelines. This material property change (additional cracks) additionally influences the effective period of the building

$$T_{\text{b}}^{\prime } = T_{\text{b}} \sqrt {\frac{K}{{K^{\prime } }}}$$
(22)

(FEMA 1998). Due to the non-monotonic behavior of S a(T b), the influence of stiffness reduction on cumulated damage is non-trivial. We consider that using the reduction in ductility capacity as the sole engineering demand parameter is reasonable (e.g., Iervolino et al. 2014) for the purpose of illustrating the concept of dynamic vulnerability. Building-specific applications would, however, require more advanced structural analyses (e.g., Polese et al. 2013).
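For completeness, the stiffness-and-period effect just discussed (Eq. 22 with the FEMA (1998)-derived factor λ K = 1.2 − 0.2δ) would read as the minimal sketch below.

```python
import numpy as np

def effective_period(T_b, delta):
    """Eq. 22: period lengthening from stiffness reduction K' = lam_K * K,
    with lam_K = 1.2 - 0.2*delta (delta = mean damage, 1 <= delta <= 5)."""
    lam_K = 1.2 - 0.2 * delta
    return T_b * np.sqrt(1.0 / lam_K)

print(effective_period(T_b=0.2, delta=3.0))    # ~0.26 s
```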

4 Conclusions

We have described how to combine the spatiotemporal clustering of large earthquakes and dynamic vulnerability in the generic multi-risk (GenMR) framework to quantify seismic risk. With an illustrative application to the thrust fault system of northern Italy, we have shown that consideration of these two physical processes yields a fattening of the tail of the seismic risk curve (Fig. 9). The relative impact of clustering alone is on average more important than that of dynamic vulnerability, since earthquake clustering on neighboring fault segments significantly extends the spatial hazard footprint, while dynamic vulnerability has localized effects at the intersections of fault segments. Our results are in agreement with the general aspects of multi-risk (e.g., Mignan et al. 2014). With the risk curve being populated with more extreme scenarios when interactions at the hazard and risk levels are considered, its tail becomes fatter.

While earthquake clustering is likely to impact spatially extended infrastructures or distributed portfolios, damage-dependent vulnerability is more likely to impact elements localized at fault intersections (Fig. 10). This shows the need for the definition of an exposure topology (e.g., local versus extended, system connectivity) to clarify which risks are most relevant to each exposure class. Although obvious here, defining topologies of exposure, but also of multi-risk processes, should help us better understand and mitigate increasingly complex risks, which are defined from multiple hazard interactions and from other dynamical processes in the system at risk. The observation of transitional phases in the dynamic risk curves (Fig. 11) also highlights the increasing complexity of the processes in play and the non-trivial behavior of risk. Although well documented in the case of random processes (e.g., Foss et al. 2013), this concept should be generalized to the broader context of multi-risk processes.