16.1 Introduction

At the 1st European Conference on Earthquake Engineering and Seismology, held in Geneva in 2006, Norman Abrahamson presented a keynote paper on “Seismic hazard assessment: problems with current practice and future developments” (Abrahamson 2006). Abrahamson reviewed areas within the practice of probabilistic seismic hazard assessment (PSHA) that needed improvement and made recommendations on the direction that future research in PSHA should take. In this paper I take inspiration from Abrahamson, but focus instead on the practice and development of probabilistic seismic risk assessment (PSRA), i.e. the estimation of the probability of damage and loss, for distributed buildings.

The main components of a PSRA for buildings are the hazard model (which provides the probability of levels of ground shaking), the exposure model (the location and characteristics of buildings) and the physical vulnerability models (which provide the probability of loss, conditional on the level of ground shaking). An exposure model describes the distribution of assets (e.g. buildings) within the region and might include the location, structural/non-structural characteristics, built area, replacement cost (new), contents value, business interruption cost and number of occupants (day/night). The buildings are grouped into building classes according to their common structural/non-structural characteristics, and a physical vulnerability function is developed for each building class. Vulnerability functions for structures provide the probability of loss or loss ratio (the loss as a percentage of the value, e.g. the repair cost divided by the replacement cost), conditional on a level of input ground motion (Fig. 16.1), and can be derived from empirical, analytical or expert-opinion-based methods, or a combination of these methods (hybrid) (see e.g. Calvi et al. 2006; Rossetto et al. 2014). In empirical and expert-opinion-based vulnerability modelling it is common to separate the damage distribution that is conditional on the ground motion (i.e. the fragility function) from the loss distribution that is conditional on the damage (i.e. the damage-loss model). In analytical vulnerability modelling, fragility functions are developed considering both the nonlinear response (in terms of parameters such as inter-storey drift) that is conditional on the input ground motion, and the damage state that is conditional on the nonlinear response. Aspects related to the application of each of the components of a PSRA are discussed in more detail herein, starting with the hazard model in the following section.
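To make the later discussion concrete, the following minimal sketch shows one common way of representing such a vulnerability function in code: a set of discrete intensity measure levels, each with a mean loss ratio and a coefficient of variation. All names and numerical values here are illustrative assumptions, not a published model; later sketches in this chapter reuse mean_loss_ratio.

```python
# Minimal sketch of a discrete vulnerability function: at each intensity
# measure level (IML, here PGA in g) we store the mean loss ratio and the
# coefficient of variation of the loss ratio. Values are purely illustrative.
import numpy as np

PGA_LEVELS = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])       # IMLs (g)
MEAN_LR = np.array([0.01, 0.05, 0.20, 0.45, 0.70, 0.85])    # mean loss ratios
COV_LR = np.array([1.5, 1.2, 0.8, 0.5, 0.3, 0.2])           # CoV of loss ratio

def mean_loss_ratio(pga):
    """Interpolate the mean loss ratio at arbitrary PGA levels."""
    return np.interp(pga, PGA_LEVELS, MEAN_LR)
```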

Fig. 16.1 Example of a physical vulnerability function, where the intensity measure type on the x axis is Peak Ground Acceleration (PGA) and the mean and distribution of loss ratio are shown at discrete levels of PGA

16.2 Ground-Motion Modelling

16.2.1 Scenario-Based Hazard/Risk Assessment

Abrahamson (2006) summarised both deterministic and probabilistic approaches to hazard assessment, and outlined many of the misunderstandings related to these two approaches. Abrahamson’s focus was on hazard input for design and assessment, whereas herein we are interested in the hazard input for risk assessment of distributed assets. Nevertheless, the key message that Abrahamson put forward – that both deterministic and probabilistic approaches result in probabilistic statements about the ground motion – is also of relevance for risk assessment.

In fact, the use of the term “deterministic” in current hazard and risk assessment practice is misleading as it implies that there is no uncertainty involved in the process. On the contrary, it is just the event characteristics (magnitude, location, style of faulting etc.) that are commonly modelled as deterministic, whereas the ground motion as well as the damage and loss estimation all involve uncertainties. Furthermore, it is not necessarily the case that the event characteristics are deterministic (for example, the location may have an uncertainty associated with it), and it would be possible to model both aleatory and epistemic uncertainties related to the event as part of the assessment. For this reason, it is perhaps better to use the term “scenario-based” risk assessment, rather than deterministic risk assessment.

In a site-specific design project the current practice in “deterministic” hazard assessment is to select a certain number of standard deviations (i.e. epsilon) above or below the median ground motion for the design seismic actions (Abrahamson 2006), but in a scenario risk assessment of distributed assets (e.g. buildings, people, infrastructure), which can be useful for emergency planning as well as risk communication and awareness, the epsilon should not be modelled as fixed across the region of interest. Figure 16.2 shows the natural aleatory variability in ground motions with distance that can be observed from two different earthquakes, together with the median attenuation from both events (thick black line) and the median attenuation from each event (thin black lines). Each event has an inter-event residual (δe,1 or δe,2), which is given by the difference between the median curve for both events and the median curve for the specific event; this variability arises due to differences in the source mechanics of the events, such as the stress drop. Within a given event, each site j where ground motions have been observed has a different intra-event residual (δa,1j or δa,2j), which arises due to the varying path characteristics from the source to the site. Many researchers (e.g. Wang and Takada 2005; Goda and Hong 2008; Jayaram and Baker 2009; Esposito and Iervolino 2011) have shown that the intra-event residuals at two different sites for a given event are correlated as a function of their separation distance: the greater the distance, the lower the correlation between the residuals. Hence, when modelling distributed ground motions for a future potential scenario earthquake, the inter-event residual/epsilon for the event should be sampled and then combined (through SRSS) with the intra-event residual/epsilon at each site, which should be obtained by employing a model of spatial correlation of the intra-event residuals (see e.g. Crowley et al. 2008 for a summary of this process). Figure 16.3 shows examples of ground-motion distributions, or fields, based on different assumptions: median ground motion everywhere, uncorrelated ground-motion residuals, and spatially correlated ground-motion residuals.
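As a concrete illustration of this process, the sketch below simulates one ground-motion field for a scenario event: a single inter-event residual is sampled for the whole event, and spatially correlated intra-event residuals are sampled using an exponential correlation model. The correlation model and its 10 km correlation length are assumptions chosen for illustration (in the spirit of Jayaram and Baker 2009), not a recommendation.

```python
# Sketch of simulating one spatially correlated ground-motion field for a
# scenario event. The exponential correlation model and the 10 km correlation
# length are illustrative assumptions.
import numpy as np

def simulate_field(ln_median, tau, phi, coords, corr_length_km=10.0, rng=None):
    """ln_median: median ln(IM) at each site from a GMPE;
    tau, phi: inter- and intra-event standard deviations (ln units);
    coords: (n_sites, 2) array of site coordinates in km."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    corr = np.exp(-3.0 * dist / corr_length_km)      # intra-event correlation
    chol = np.linalg.cholesky(corr + 1e-10 * np.eye(n))
    eta = tau * rng.standard_normal()                # one inter-event residual
    eps = phi * (chol @ rng.standard_normal(n))      # correlated intra-event
    return np.exp(ln_median + eta + eps)             # IM values (e.g. PGA in g)
```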

Fig. 16.2 Spatial variability from two different earthquake events (Bommer and Stafford 2008)

Fig. 16.3 Example of simulated ground-motion fields (PGA in g), based on the median ground motion (left), one realization of uncorrelated ground-motion residuals (centre) and one realization of spatially correlated ground-motion residuals (right) (From Silva et al. 2014a)

For the estimation of the loss to all assets in the exposure model, the damage/loss assessment should be based on a simulation of all possible ground-motion fields that could occur: the event should be repeated many times, sampling across the full inter- and intra-event variability, and the mean and standard deviation of the total damage/loss across all simulations can then be estimated, as sketched below.
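A hedged sketch of this repetition, reusing simulate_field and the illustrative mean_loss_ratio from the earlier sketches (and, for brevity, neglecting the vulnerability uncertainty discussed later):

```python
# Sketch: repeat the scenario event many times, each time sampling a new
# inter-event residual and a new spatially correlated intra-event field,
# then aggregate the portfolio loss. Vulnerability uncertainty is neglected
# here for brevity; only the mean loss ratio is applied.
import numpy as np

def scenario_loss_statistics(ln_median, tau, phi, coords, values, n_sims=1000):
    rng = np.random.default_rng(42)
    totals = np.empty(n_sims)
    for i in range(n_sims):
        pga = simulate_field(ln_median, tau, phi, coords, rng=rng)
        totals[i] = np.sum(values * mean_loss_ratio(pga))
    return totals.mean(), totals.std()   # mean and std of the total loss
```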

Nevertheless, in practice, scenario-based risk assessments are frequently based on ground motions with a fixed epsilon (often taken as 0 or +1) applied at all sites. Such an approach assumes the unrealistic scenario of full spatial correlation of the ground-motion residuals. When epsilon is taken as +1 everywhere, the assumption being made is that the shaking at each location has just a 16 % probability of being exceeded, and the joint probability of occurrence of this level of ground motion at all sites will be extremely low. The resulting damage/loss thus also has an extremely low probability of occurrence, and its usefulness for communicating risk or preparing for emergency situations is questionable.

Even when the damage/loss is required at just a single location, the use of the median or even the mean ground motion should be avoided, as the resulting damage/loss will often (though not always) be an underestimation of the damage/loss that would be expected, on average, should the event be repeated many times. An underestimation of damage/loss is expected when the ground motion is concentrated over the range that leads to loss ratios of less than 50 % (from the vulnerability function), though the opposite may occur if the ground motions are concentrated over the range leading to loss ratios greater than 50 %. Figure 16.4 shows an example of the mean loss based on the median ground motion (a) and the mean loss and standard deviation of loss based on the ground motion with aleatory variability (b).

Fig. 16.4 Mean loss based on the median ground motion (a) and the mean loss and standard deviation of loss based on the full aleatory variability of ground motion (b) (Silva 2013)

In order to estimate the mean damage/loss at a single site, an alternative procedure can be employed which does not require the added complication of separating the inter- and intra-event ground-motion variability and simulating the ground motions, as described previously. Instead, at the chosen location, one should combine the probability of occurrence of each intensity measure level (IML), obtained by integrating the probability density function of ground motion based on the total aleatory variability, with the mean loss ratio from the vulnerability function at each IML, and sum across all IMLs. Due to the lognormal distribution of ground-motion variability and the nonlinear vulnerability function, the mean loss at the mean ground motion will not be the same as the mean loss considering the full range of potential ground motions at the site; in the example given in Fig. 16.5, the former is 0.098 and the latter (as shown in the workings of Table 16.1) is 0.105. Although the difference is not pronounced in this example, it can be larger and will depend on the specific ground-motion distribution and vulnerability function.
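The sketch below implements this single-site numerical integration under stated assumptions: a lognormal distribution of PGA at the site, the illustrative mean_loss_ratio defined earlier, and a truncation of the integration at 3 g (beyond which the occurrence probability is assumed negligible).

```python
# Sketch of the single-site integration: discretise the lognormal distribution
# of PGA into intervals, take the probability of occurrence of each interval,
# and weight the mean loss ratio at the interval midpoint. The 3 g truncation
# of the upper tail is an assumption made for simplicity.
import numpy as np
from scipy import stats

def mean_loss_single_site(median_pga, sigma_ln, n_intervals=60):
    edges = np.linspace(1e-3, 3.0, n_intervals + 1)          # PGA bins (g)
    dist = stats.lognorm(s=sigma_ln, scale=median_pga)
    p_occ = np.diff(dist.cdf(edges))            # P(IML falls in each interval)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return np.sum(p_occ * mean_loss_ratio(mids))   # Σ P(IML) × mean LR | IML
```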

Fig. 16.5 Illustrative figure of the variability in ground motion (in this case PGA) at a given site, showing how this probability distribution should be integrated at intervals to get the probability of occurrence, and combined with the mean loss ratios from the vulnerability function

Table 16.1 Estimation of the mean loss ratio based on the example shown in Fig. 16.5

In this example a numerical integration of the ground-motion variability with the mean loss ratio has been used, but since the vulnerability function could also have an analytical form, an analytical integration is also possible, based on the following formula:

$$ \overline{LR} = \int_{0}^{\infty} \left( LR \,|\, IML \right) \, f_{IML}\left( IML \,|\, \mu_{IML}, \sigma_{IML} \right) \, dIML $$

where $LR\,|\,IML$ stands for the conditional loss ratio for a given intensity measure level (IML), and $f_{IML}(IML\,|\,\mu_{IML},\sigma_{IML})$ stands for the conditional probability density function of the ground motion given a mean intensity measure level ($\mu_{IML}$) and associated standard deviation ($\sigma_{IML}$).

The standard deviation of the loss can also be estimated, by combining the probability density functions of the loss ratio and of the ground shaking through the total probability theorem (more details are given in Crowley et al. 2010).

16.2.2 Probabilistic Hazard/Risk Assessment

In a fully probabilistic risk assessment, where all possible and relevant deterministic earthquake scenarios are considered together with all possible levels of ground motion, there are two commonly applied approaches in practice: one based on the outputs of a PSHA (i.e. using the rate or probability of exceedance of a set of IMLs) and the other based on simulated ground-motion fields from scenario events (which can either represent the full set of potential ruptures, or a reduced set of scenarios, each with an associated probability of occurrence). The choice between the two methods depends on the application, and in particular on whether or not there is a need to robustly model the standard deviation of damage/loss across the full set of assets. If the main output of interest is the annual expected/average value of damage/loss, if the risk at a single site is required, or if a comparative analysis of the risk at different sites is required, then the outputs of classical PSHA (i.e. Cornell 1968; McGuire 1976) can be employed.

In this approach, a PSHA is carried out for the region, leading to hazard maps for a given intensity measure type (e.g. spectral acceleration at 1 s) at a number of return periods. The use of PSHA hazard maps is appropriate for site-specific risk assessment and for maps which present the comparative risk at different sites, but a frequent error made in practice is to use a single hazard map and to report that the damage/loss at each site has the same return period/probability of exceedance as the hazard map from which it was derived. The problem with such an approach is that it ignores the uncertainty in the vulnerability assessment (e.g. from the fragility functions and the damage-loss conversion). As shown previously in Fig. 16.1, the probability of exceeding a specific loss value is conditional on a number of different intensity measure levels; from the hazard curve one can obtain the probability of occurrence of those intensity measure levels, and by multiplying the two we obtain a number of unconditional probabilities of exceeding the loss value, which are then summed to get the total probability of exceeding the loss value. The loss value is then plotted against its respective probability of exceedance to produce a so-called loss exceedance curve (Fig. 16.6).
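A minimal sketch of this calculation is given below, assuming a lognormal distribution of loss ratio at each IML (parametrised by the mean and coefficient of variation, as in the earlier illustrative vulnerability function); the arrays passed in are assumptions for illustration.

```python
# Sketch of building a loss exceedance curve: for each candidate loss value,
# sum P(loss > value | IML) × P(occurrence of IML) over all IMLs from the
# hazard curve. Lognormal loss-ratio distributions are an assumption.
import numpy as np
from scipy import stats

def loss_exceedance_curve(p_occ_iml, mean_lr, cov_lr, loss_values):
    """p_occ_iml: probability of occurrence of each IML (from the hazard
    curve); mean_lr, cov_lr: loss-ratio mean and CoV at each IML;
    loss_values: loss ratios at which the curve is evaluated."""
    sigma_ln = np.sqrt(np.log(1.0 + cov_lr**2))        # lognormal sigma
    median = mean_lr / np.sqrt(1.0 + cov_lr**2)        # lognormal median
    poe = np.zeros_like(loss_values, dtype=float)
    for s, m, p in zip(sigma_ln, median, p_occ_iml):
        poe += p * stats.lognorm(s=s, scale=m).sf(loss_values)
    return poe   # probability of exceedance for each loss value
```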

Fig. 16.6 Loss exceedance curve

An event-based approach to probabilistic risk assessment is required when the mean and standard deviation of the total, aggregated, loss to a spatially distributed portfolio of assets is to be estimated. By modelling each event separately we are able to model the spatial correlation of ground motions, as discussed previously. The way in which the ground-motion aleatory variability is spatially modelled affects the standard deviation of the loss; neglecting site-to-site ground-motion correlation leads to systematic underestimation of large, rare losses and overestimation of smaller but more frequent ones (see e.g. Crowley and Bommer 2006; Park et al. 2007; Weatherill et al. 2013). Monte Carlo simulation is generally employed to simulate the seismicity of the next one hundred thousand years or so (see e.g. Pagani et al. 2014); for each event a spatially correlated field of ground motion is simulated, and the resulting damage/loss is estimated by combining this field with the exposure and vulnerability models (see e.g. Crowley and Bommer 2006; Silva et al. 2013a).

However, when different intensity measure types are used in the model (e.g. for the vulnerability functions of different assets) then they need to be cross-correlated (also known as spectrally correlated). Baker and Cornell (2006) looked at the cross-correlation between the residuals of spectral accelerations (i.e. the difference between the spectral acceleration from a record at a given period and the spectral acceleration predicted for that record using a ground-motion prediction equation) at different periods using a number of records and found that they were neither uncorrelated (Fig. 16.7a) nor fully correlated (Fig. 16.7b), but featured a correlation that varied as a function of the inter-period difference. Application of the model leads to simulated spectra like those shown in Fig. 16.7c, which are seen to be highly realistic when compared with real spectra with similar characteristics (Fig. 16.7d). It should be noted that it is not just the intra-event variability of different intensity measures that is cross-correlated but also the inter-event variability (see e.g. Goda and Atkinson 2009).
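The sketch below samples cross-correlated epsilons at two periods. The correlation function used is a simple stand-in that decays with the difference in log-period, chosen only to illustrate the mechanics; it is not the published Baker and Cornell (2006) model.

```python
# Sketch of sampling cross-correlated residuals (epsilons) of spectral
# acceleration at two periods. The correlation function is an illustrative
# assumption, not the Baker and Cornell (2006) coefficients.
import numpy as np

def inter_period_rho(t1, t2, decay=0.3):
    """Correlation decaying with the difference in log-period (assumption)."""
    return np.exp(-decay * abs(np.log(t1 / t2)))

def sample_cross_correlated_eps(t1, t2, n_samples, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    r = inter_period_rho(t1, t2)
    cov = np.array([[1.0, r], [r, 1.0]])
    return rng.multivariate_normal(np.zeros(2), cov, size=n_samples)
```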

Fig. 16.7 Comparison of simulated spectra with no inter-period correlation (a), full inter-period correlation (b) and modelled inter-period correlation (c) with real spectra (d) (From Baker and Cornell 2006)

When simulating spatial distributions of ground motion for loss assessment, if the cross-correlation, for example between the spectral acceleration at 0.3 s (used for the vulnerability function of a low-rise building type) and that at 1.0 s (used for a mid-rise building type), is not modelled, and each ground-motion field is simulated independently, the impact of the spatial correlation is eroded when the combined damage/loss to both building types is estimated. Weatherill et al. (2013) show that the impact of spatial correlation on the total loss to a heterogeneous portfolio is minimal when cross-correlation is not modelled (Fig. 16.8), but that when both spatial correlation and cross-correlation are accounted for, the impact on the losses at low probabilities of exceedance can be significant. However, it is noted that the portfolio selected by Weatherill et al. (2013) was highly heterogeneous and included building types with a very wide range of periods of vibration; should the portfolio be clustered around a narrower range of periods of vibration, then the inclusion or not of spatial correlation (even without cross-correlation) will have a significant effect on the resulting losses, as has been shown in other studies (e.g. Crowley et al. 2008).

Fig. 16.8 Comparison of the impact of spatial correlation (blue curve) and combined spatial and cross-correlation (green and red curves) on the total loss to a heterogeneous portfolio (Weatherill et al. 2013)

16.3 Fragility and Vulnerability Modelling

16.3.1 Issues Related to Commonly Used Intensity Measure Types

The use of macroseismic intensity continues to be a popular choice for fragility and vulnerability modelling, especially when the latter is based on observed damage and loss data. One of the main reasons for this lies in the volume of macroseismic intensity data that is available following an event, which allows us to constrain the level of shaking and thus reduce the uncertainty in an empirical vulnerability model. It is furthermore frequently argued that the use of macroseismic intensity leads to more reliable damage/loss estimates as it is possible to carry out an internal consistency check. However, there are still a number of shortcomings in using macroseismic intensity in risk assessment. The previous section discussed developments in the modelling of spatially correlated ground motion for the loss assessment of distributed portfolios; although state-of-the-art Intensity Prediction Equations are still being developed (e.g. Allen et al. 2012), there are currently few, if any, models of the spatial correlation of the residuals of macroseismic intensity. Furthermore, even when good data on the site conditions within a given area are available, the impact of site amplification on macroseismic intensity is still generally modelled in an empirical manner without explicit modelling of the uncertainties.

The use of instrumental intensity measures in vulnerability modelling is required when analytical modelling of the response of structures is employed. In this case the explicit nonlinear behaviour of structures of a given class under accelerograms with differing characteristics is evaluated. However, many analytical vulnerability models developed today do not propagate all the uncertainties: from the variability in the capacity of the structures of a given class (due to varying geometrical, material and design detailing properties), to the variability in the response to records with the same intensity measure level (i.e. record-to-record variability), to the variability in the damage limit state thresholds (e.g. in the values of inter-storey drift that would lead to collapse), to the uncertainties in the conversion of damage to loss (e.g. uncertainty in the cost of repairing buildings that are extensively damaged). Although these uncertainties might not necessarily be robustly and explicitly modelled at every stage of the vulnerability function derivation, an attempt should be made to include them, even just through engineering judgement. This is an area that vulnerability modellers will need to focus on further in the future.

One of the most widespread methodologies for scenario-based risk assessment includes the use of the capacity spectrum method (see e.g. Freeman et al. 1975), as proposed in ATC 40 (ATC 1996) and implemented in the HAZUS software (FEMA 2003). In this methodology the median nonlinear response of the buildings of a given class is estimated by combining the capacity curve with a response spectrum, and then fragility functions based on this nonlinear response parameter provide the damage distribution (see Fig. 16.9).

Fig. 16.9 Application of the capacity spectrum method in HAZUS (FEMA 2003)

In the original HAZUS method the spectral ordinates at 0.3 and 1.0 s are estimated, and then the full response spectrum is obtained by applying a code spectral shape. With the use of a fixed spectral shape, the specific spectral characteristics of the event under consideration are not accounted for, and given that a code spectral shape attempts to reproduce a uniform hazard spectrum, enveloping both low-magnitude nearby events and high-magnitude distant events (see Fig. 16.10), the response spectrum used may be unrealistic. An improvement on this practice is to use a scenario spectrum from a ground-motion prediction equation appropriate for the region and scenario. However, this modelling decision is not without its drawbacks, as a fixed epsilon (defined in Sect. 16.2), generally taken as zero, is frequently applied in practice and thus cross-correlation is ignored. Instead, as mentioned previously, a large number of cross-correlated scenario spectra should be simulated and used in the scenario risk analyses, after which the mean and standard deviation of damage/loss can be estimated. An alternative to using ground-motion prediction models for simulating realistic ground motions (with spatially cross-correlated intensity measures) would be to use physics-based methods for modelling the fault rupture and wave propagation (and associated uncertainties), leading to a number of synthetic records at the sites in question (see e.g. Atkinson 2012).

Fig. 16.10 Schematic sketch of a uniform hazard spectrum at a given return period in which the contributions to hazard at the shorter and longer periods come from different sources (Reiter 1990)

When the capacity spectrum method (or any other nonlinear static procedure, NSP) is used in PSHA-based risk assessment, as has been done in many applications (e.g. in the LESSLOSS project as described in Spence 2007; in the RISK-UE project, as described in Mouroux and Le Brun 2006) and software (see e.g. Crowley et al. 2010), the uniform hazard spectrum (UHS) at a number of different return periods needs to be employed. The problems with this approach are that, again, the spectral shape is unrealistic and all spectral ordinates are assumed to be fully correlated. A vector-based PSHA analysis (e.g. Bazzurro and Cornell 2002), where the joint probability of exceedance of spectral acceleration at multiple periods is estimated, would need to be employed to address these issues. However, applying such a method to the full response spectrum might not be feasible and it would most probably be simpler to revert to a Monte Carlo event-based approach (as mentioned earlier in Sect. 16.2).

There are other issues with the use of NSPs in risk assessment, which include bias and uncertainty in the nonlinear response (due to the assumptions on the elongation of the period of vibration and the equivalent viscous damping of the structural system, which often do not have an associated uncertainty) and underestimation of the record-to-record variability (see e.g. Pinho et al. 2013; Silva et al. 2013b). Hence, the use of vulnerability functions based on nonlinear dynamic analysis and derived in terms of elastic scalar intensity measures would both simplify the hazard modelling required in the risk assessment (at least for homogeneous portfolios, as discussed in Sect. 16.2) and avoid issues of response bias and underestimation of uncertainties. The main price that is paid with the use of dynamic analysis is the computational demand, which is much higher when many structures and records are considered. Should there thus be a desire to improve the computational efficiency, NSPs could instead be used (provided the increased uncertainties and bias are both accounted for), but it is nevertheless recommended that they are used to develop scalar intensity measure-based vulnerability functions, to simplify the hazard modelling requirements (see e.g. Silva et al. 2014b).

The elastic scalar intensity measure that is most commonly applied is the spectral acceleration at the fundamental period of the structure. However, as discussed previously, different structures in the portfolio will have different periods of vibration and thus with the use of such an intensity measure type there will be a need to model vector quantities of ground motion. In order to avoid this, one option could be to use a fixed period of vibration (e.g. 0.5 s) for all buildings in the portfolio. This avoids the need to model spectral correlation, but has the drawback that the chosen period may not be the most efficient for all the building types in the exposure model. The primary advantage of an efficient intensity measure is that it should require fewer numerical analyses to achieve a desired level of confidence in the nonlinear response (Mackie and Stojadinovic 2005). Hence, it is to be expected that the use of an inefficient intensity measure type would increase the uncertainty in the vulnerability functions. A comparison of the loss exceedance curves that are produced for a heterogeneous portfolio with vulnerability models based on efficient (structure-dependent) intensity measures and cross-correlation of the ground motion should be made against the curves obtained with vulnerability functions based on a fixed intensity measure type and no cross-correlation, to assess whether the increased simplicity of the analysis is penalised by an increased uncertainty in the final loss.
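One simple way to quantify efficiency in such a comparison, sketched below under the common assumption of a linear relationship between log response and log IM, is the dispersion of the regression residuals: the lower the dispersion, the more efficient the candidate intensity measure.

```python
# Sketch of measuring the "efficiency" of a candidate intensity measure from a
# suite of nonlinear analyses: regress ln(response) on ln(IM) and report the
# residual dispersion. A linear model in log space is an assumption.
import numpy as np

def im_efficiency(im_values, response_values):
    """Returns the dispersion (std of residuals, ln units); lower = better."""
    x, y = np.log(im_values), np.log(response_values)
    slope, intercept = np.polyfit(x, y, 1)       # ln(resp) ≈ a + b·ln(IM)
    return np.std(y - (intercept + slope * x))
```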

16.3.2 Correlation of Vulnerability Uncertainty

When vulnerability functions for a class of structures are used in a regional risk assessment, the uncertainty needs to be sampled from the loss distribution (see Fig. 16.1). The question which then arises is whether all the buildings of a given typology within the region will respond better or worse than average, and thus whether there is a correlation in this uncertainty. For example, after the Northridge earthquake in 1994 a previously unknown design deficiency in the connections of steel structures was observed, which led to a correlation in the response of the buildings of this class; and in Turkey, after the 1999 Kocaeli earthquake, there was a case where all but one of the mid-rise concrete frame buildings in the same complex collapsed. Currently, however, it is generally not possible to do more than estimate the losses both with and without vulnerability uncertainty correlation, and more research is needed to better constrain this correlation; in the meantime, a useful practice is to run the risk model with both assumptions to obtain bounds on the expected losses.
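The sketch below implements this bounding exercise under a simple assumption of a truncated normal loss-ratio distribution: full correlation shares one residual across all buildings of the class, while no correlation samples an independent residual per building; the mean total loss is similar in both cases, but the standard deviation differs strongly.

```python
# Sketch of bounding the effect of vulnerability-uncertainty correlation on
# the total loss of one building class. A normal loss-ratio distribution,
# truncated to [0, 1], is an illustrative assumption.
import numpy as np

def total_loss_samples(mean_lr, sigma_lr, values, n_sims, correlated,
                       rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n_bldgs = len(values)
    if correlated:   # one shared residual per simulation (full correlation)
        eps = np.repeat(rng.standard_normal((n_sims, 1)), n_bldgs, axis=1)
    else:            # independent residual per building (no correlation)
        eps = rng.standard_normal((n_sims, n_bldgs))
    lr = np.clip(mean_lr + sigma_lr * eps, 0.0, 1.0)
    return (lr * values).sum(axis=1)   # total loss for each simulation
```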

16.3.3 Epistemic Uncertainty

Finally, a practice that has recently become more common is the use of logic trees to model the epistemic uncertainties in vulnerability modelling (e.g. Molina et al. 2010). However, this practice is not widespread and more research is needed in order to bring it to the level of maturity found in the use of logic trees within PSHA studies. For example, the recent European hazard modelling project SHARE (www.share-eu.org) used a state-of-the-art methodology for developing the ground-motion logic tree that combined expert judgement with the use of strong ground-motion data for the selection, ranking and weighting of models (Delavaud et al. 2012). Although the data available for testing vulnerability models are sparse, initiatives such as the GEM Global Earthquake Consequences Database (which is collecting damage and loss data for a number of building typologies around the world) will help improve the potential for data-driven guidance on vulnerability model selection.

16.4 Exposure Modelling

There are two main types of exposure models: building-by-building and aggregated. In the latter case the buildings with the same structural/non-structural characteristics (taxonomy) are aggregated within the boundaries of a given area, which is often a zip code, administrative area or grid cell, and relocated to a single location (either because the locations of the individual buildings are unknown, or to increase the computational efficiency of the model). This is the most common type of exposure model (e.g. Crowley et al. 2010; Campos Costa et al. 2009; Erdik et al. 2003), but it is also the one that raises the most risk modelling difficulties.

As discussed in Bazzurro and Park (2007), when all of the buildings are relocated and aggregated, the same intensity measure level is input to the vulnerability model, which means that full correlation of the ground motion is assumed for these buildings. In reality, however, these buildings would be distributed across the zip code/grid cell and would thus be subject to spatially variable ground motion. Furthermore, all of these buildings will have the same sample of uncertainty in the vulnerability model applied to them, further correlating the loss of these building types. If we know the number of buildings that have been aggregated, we can avoid the latter correlation by sampling a number of vulnerability residuals equal to the number of buildings at the given location and estimating the loss for each building separately, after which the statistics for the building typology can be estimated, as sketched below.
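A minimal sketch of this per-building sampling, assuming the same truncated normal loss-ratio model used in the previous sketch:

```python
# Sketch of avoiding artificial vulnerability correlation at an aggregated
# site: sample one loss-ratio residual per aggregated building rather than a
# single residual for the whole aggregated asset.
import numpy as np

def aggregated_site_loss(n_buildings, value_per_building, mean_lr, sigma_lr,
                         rng=None):
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal(n_buildings)      # one residual per building
    lr = np.clip(mean_lr + sigma_lr * eps, 0.0, 1.0)
    return np.sum(value_per_building * lr)      # loss aggregated at the site
```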

There are at least two options for dealing with the induced ground-motion correlation due to the aggregation of buildings: random disaggregation of the buildings within the aggregation area, or modification of the ground-motion aleatory variability (see e.g. Stafford 2012). The former approach is straightforward but significantly increases the computational demands of the analysis, especially when there are millions of assets in the model. The latter approach, described in Stafford (2012), reduces the variance of the ground motion when it is taken to represent the average over a given area, rather than the ground motion at a single point (which is the case for distributed assets), following the recommendations of Vanmarcke (1983). More investigation is needed to compare the losses and computational performance of these two approaches, together with the case that simply ignores this induced correlation, thus adding to the studies and conclusions of Bazzurro and Park (2007). The availability of more building-by-building exposure models (so-called “ground truth” models), such as those that can be produced with the tools developed by the Global Earthquake Model, will allow the impact of various exposure aggregation assumptions to be further investigated.
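The intuition behind the variance reduction can be illustrated with a short sketch: under a spatial correlation model (here the same illustrative exponential model as before), the variance of the average of unit-variance residuals over a cell equals the mean of the correlation matrix, which is below one and decreases as the cell grows relative to the correlation length.

```python
# Sketch of the variance reduction for a spatially averaged ground motion:
# Var(mean of eps) = (1/n²) Σ_ij ρ_ij = mean(ρ) for unit-variance residuals.
# The exponential correlation model is the same illustrative assumption as
# in the earlier field-simulation sketch.
import numpy as np

def variance_reduction(coords, corr_length_km=10.0):
    """coords: (n, 2) points (km) discretising the cell; returns the factor
    by which the intra-event variance is reduced for the cell average."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    corr = np.exp(-3.0 * dist / corr_length_km)
    return corr.mean()
```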

In practice, exposure models do not generally feature uncertainties, even though they are usually developed with poor data and a large number of assumptions, and are arguably the most uncertain component of the risk model. For large regions these models are often a combination of population and building census data (where the latter might actually refer to dwellings rather than buildings, and often does not include the necessary structural information on the buildings), statistics on the average characteristics of dwellings/buildings in the region, expert judgement on replacement costs per square metre, and so on. The assignment of uncertainty to exposure models, as well as of any correlations in that uncertainty, is certainly an area that would benefit from increased research attention.

16.5 Conclusions

This paper has looked at many commonly applied modelling assumptions in the seismic risk assessment of portfolios of distributed buildings. One of the main points that should be clear is that developments in ground-motion modelling, in particular those related to the correlation of aleatory variability, have an impact on the way in which exposure and vulnerability models are treated in risk modelling. Furthermore, the correlated uncertainties in the vulnerability and exposure models require more attention in future regional risk modelling research.

A number of research questions that require further investigation have been raised herein:

  • Is the penalty for simplifying the intensity measures in vulnerability models too high in terms of the associated uncertainties in the losses?

  • How can we define the correlation of vulnerability uncertainty within a given building class?

  • Can we apply lessons learned from data-driven ground-motion prediction equation logic tree modelling to vulnerability models?

  • How should we deal with the induced ground-motion correlation of aggregated buildings in exposure models, and what is the impact of ignoring it?

  • How can we attempt to model the uncertainties in exposure models?

Hence, although the practice of seismic risk assessment is well established, there are still a number of areas that require further research and exploration by the present and next generations of risk modellers.