Seismic fatality risk evaluation framework for existing buildings in Switzerland

The Swiss code SIA 269/8 for the seismic evaluation of existing structures is risk-based. It relies on the compliance factor αeff, defined as the ratio of the seismic capacity of an existing structure to the required seismic capacity of a hypothetical identical new structure, to indirectly evaluate the casualty risk posed by the existing structure. While it is convenient to have a direct relationship between the casualty risk and the degree of code compliance of the existing structure, it is debatable whether the contributions of the seismic hazard, structural performance and occupancy to the casualty risk should be merged into a single quantity, as this conflicts with the multi-component nature of seismic risk. This paper presents a graphical framework to evaluate the seismic fatality risk for existing buildings. The framework is based on the PEER Center performance-based earthquake engineering methodology to provide a transparent view of the contributions of different sources of hazard and risk. Starting from a seismic hazard model, probabilistic seismic demand, damage and loss models are introduced, taking the uncertainty into account at each step. The final product of the proposed framework, a loss curve representing the (annual) exceedance rate of fatality counts, enables engineers and building owners to make better, risk-based decisions. The proposed framework is demonstrated for an existing building in Switzerland. Notably, in addition to the SIA 269/8 evaluation method, this demonstration compares eight hybrid approaches differentiated by the structural assessment technique (nonlinear static pushover vs. incremental dynamic analysis), the treatment of uncertainty in the demand model (record variability only vs. record variability and epistemic uncertainty) and the solution strategy (closed-form vs. numerical) to find a practical compromise between accuracy and computational effort.
A comparison of the obtained results shows that the SIA 269/8 evaluation method underestimates the fatality risk for the presented building and identifies a practical alternative based on the proposed graphical framework.


Introduction
In Switzerland, earthquakes are the natural hazard with the highest potential to cause catastrophic impacts (Bachmann 2002; Federal Office for Civil Protection 2020), such as fatalities, injuries, loss of function and monetary losses. In light of past earthquakes, death tolls arise mainly from structural collapses, which are responsible for 75% of fatalities (Coburn et al. 1992). In an effort to assess and mitigate earthquake-related losses, SIA 269/8 (SIA 2017), the code for the seismic evaluation of existing structures in Switzerland, prescribes a risk-based method via the compliance factor αeff, defined as the ratio of the capacity of an existing structure to the code demand for an identical new structure, to indicate the degree of code compliance of the existing structure. Within this approach, SIA 269/8 accepts a higher casualty risk for existing structures than for new ones, justifying their lower code compliance. To this end, a risk curve defines the relationship between the compliance factor and the unit casualty risk (UCRSIA), the latter defined as the annual probability of death of an individual who, statistically, occupies the structure at all times (Fig. 1).
The seismic retrofit decision-making procedure implemented in SIA 269/8 consists of two steps. First, if the unit casualty risk (individual risk) is higher than 10^-5 (αeff < 0.25), a seismic retrofit is mandatory at any cost to ensure the minimum acceptable level of life safety (Fig. 1) (Schneider 2000). This required safety level must be provided regardless of the hazard exposure time, which is the structure's remaining service life. For unit casualty risk levels smaller than 10^-6 (αeff > 1.0), on the other hand, a seismic retrofit may be waived as the associated compliance factor indicates full compliance of the existing structure with the design code for new structures (Fig. 1). Compliance factors of existing structures often lie between these boundaries (αeff between 0.25 and 1.0), requiring further cost/benefit considerations concerning the feasibility and the extent of the seismic retrofit measures. Hence, the second SIA 269/8 step addresses this issue, aiming to significantly decrease the casualty risk (both individual and collective) within feasible economic limits. This feasibility check is defined as the commensurability of the seismic retrofit. It is a cost/benefit analysis considering the number of occupants (collective risk) and the remaining service life of the structure, with the actuarially equivalent value of a saved human life taken to be 10 million Swiss francs. The efficiency of the seismic retrofit is measured as the ratio of the annual casualty risk reduction in Swiss francs to the annualized costs of the seismic retrofit over the remaining service life of the structure. Retrofit measures with an efficiency ratio larger than one are defined to be commensurate, and the corresponding retrofit is then compulsory to mitigate the casualty risk. Efficiency ratios smaller than one indicate high retrofit costs without a considerable risk reduction; thus, the current seismic casualty risk level is accepted following SIA 269/8.
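The two-step decision logic described above can be sketched in code. This is a minimal illustration, not part of SIA 269/8 itself: the thresholds (unit casualty risk of 10^-5 and 10^-6, αeff bounds of 0.25 and 1.0, 10 million Swiss francs per statistically saved life) are taken from the text, while the function names and the simple division used to annualize the retrofit cost (discounting omitted) are our own simplifications.

```python
# Minimal sketch of the two-step SIA 269/8 retrofit decision logic.
# Thresholds follow the description in the text; function names and the
# undiscounted cost annualization are illustrative simplifications.

def retrofit_decision(alpha_eff: float) -> str:
    """Step 1: classify the compliance factor alpha_eff."""
    if alpha_eff < 0.25:              # unit casualty risk above 1e-5
        return "retrofit mandatory"
    if alpha_eff > 1.0:               # unit casualty risk below 1e-6
        return "no retrofit required"
    return "commensurability check"   # step 2: cost/benefit analysis

def efficiency_ratio(annual_risk_reduction_lives: float,
                     retrofit_cost_chf: float,
                     remaining_service_life_yr: float,
                     value_per_life_chf: float = 10e6) -> float:
    """Step 2: annual casualty risk reduction in CHF over the annualized
    retrofit cost. Ratios larger than one mean the retrofit is commensurate
    and therefore compulsory."""
    annual_benefit_chf = annual_risk_reduction_lives * value_per_life_chf
    annualized_cost_chf = retrofit_cost_chf / remaining_service_life_yr
    return annual_benefit_chf / annualized_cost_chf
```

For example, a retrofit costing 50,000 CHF over a 40-year remaining service life that reduces the annual fatality risk by 2·10^-4 statistical lives yields an efficiency ratio of 1.6 and would therefore be commensurate.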
Theoretical principles regarding the evolution of SIA 269/8 are given by Wenk (2014), Vogel (2005), Vogel and Kölz (2005), Lestuzzi (2012, 2013) and Kölz (2017, 2019). The application of the SIA 269/8 method is demonstrated for a collection of Swiss buildings by Wenk (2008).
Evidently, the utmost goal of SIA 269/8 is to protect the life and limb of the individuals exposed to seismic hazards under an efficient allocation of limited financial resources for seismic retrofit. The direct relationship between the compliance factor and the unit casualty risk aims to reach an individual and collective safety goal using a practical procedure readily usable by structural engineers without employing explicit probabilistic approaches. However, this simplicity trades off against the capability of a single compliance factor to accurately and transparently estimate the casualty risk because the contributions of the seismic hazard and the structural performance to the casualty risk are not directly traceable. As an example, a modern building in a high seismicity location having the same compliance factor as a more vulnerable building in a low seismicity location would represent an equal unit casualty risk from the perspective of SIA 269/8, as the contributions of the seismic hazard and the structural performance are merged into a single quantity, the compliance factor. However, the actual unit casualty risk may differ widely due to its multi-component nature. Furthermore, the shape of the risk curve in Fig. 1 is based on the fitting of compliance factor-unit casualty risk data pairs stemming from a macroseismic approach (Vogel and Kölz 2005; Kölz and Duvernay 2005). For a collection of benchmark buildings, Jamali and Kölz (2015) and Michel et al. (2010) have shown that the unit casualty risk computed using a probabilistic approach may be much higher than that estimated via the risk curve in SIA 269/8, primarily if the compliance factor is computed using displacement-based assessment methods. Tsiavos et al. (2021) have performed a compliance-based risk assessment for structures subjected to different seismicity levels to emphasize the effect of the location-dependency on the collapse risk.
While the practical features arising from the graphical nature of the compliance-based SIA 269/8 method are desirable to reach a broader acceptance by practicing engineers, careful consideration should be taken to minimize the bias stemming from the oversimplified nature of the SIA 269/8 method. Legal issues regarding the application of SIA 269/8 and the safety and cost/benefit criteria in Merkblatt 2018 (SIA 2004) (predecessor of SIA 269/8) have been addressed by Braune and Heunert (2019) and Duvernay et al. (2006), respectively.
One method to assess the economic and social consequences induced by earthquakes is the Pacific Earthquake Engineering Research Center's performance-based earthquake engineering (PEER PBEE) methodology (Porter 2003; Günay and Mosalam 2013; Moehle and Deierlein 2004), which quantifies the safety of engineered structures using metrics that are available to seismologists, engineers, stakeholders and decision-makers. The framework starts with determining the probabilistic definition of the ground motion features, referred to as the intensity measure (IM). The next step is to calculate the structural response using a structural model and extract an engineering demand parameter (EDP) in terms of deformations, accelerations, or other metrics of the structural response to the seismic actions of the magnitude defined by the IM. After performing the structural analysis, the EDPs are related to damage measures (DM) describing the physical condition of the structure. The final product of the PEER PBEE methodology is the loss curve, describing the exceedance rate of a decision variable (DV), denoted λ(DV), utilized to make risk management decisions. Typical quantities used as decision variables are the repair or replacement costs in monetary units, downtime, or fatalities. Due to its decoupled nature, the probabilities and the uncertainties in hazard, structural, damage, and loss analysis are considered explicitly in a rigorous manner within the PEER PBEE methodology.
The mathematical expression of the PEER PBEE framework is a triple integral based on the total probability theorem:

λ(DV) = ∫∫∫ G(DV|DM) |dG(DM|EDP)| |dG(EDP|IM)| |dλ(IM)|    (1)

In general terms, GX|Y(x|y) denotes the complementary cumulative distribution of X conditioned on Y, and its differential dGX|Y(x|y) corresponds to the probability density function of X conditioned on Y. While the PEER PBEE integral allows the analyst to decouple the assessment procedure into smaller portions, its solution is not straightforward, and its application is largely limited to academia. Closed-form solutions on a graphical basis have been put forward by Mackie and Stojadinovic (2006) and Mander et al. (2012), among others, to solve the PEER PBEE framing integral with the aim of reaching widespread adoption.
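For a concrete reference point, the triple integral can be evaluated numerically by discretizing the IM, EDP and DM domains and summing the conditional probability masses. The sketch below assumes lognormal interim models with constant dispersion (the same assumptions adopted later in this paper); all medians, dispersions and grids are illustrative placeholders, not calibrated values.

```python
import numpy as np
from scipy.stats import norm

def ccdf(x, median, beta):
    """Complementary lognormal CDF G(x): P[X > x | median, beta]."""
    return norm.sf(np.log(np.asarray(x, dtype=float) / median) / beta)

def peer_loss_curve(dv_grid, im_grid, lam_im, edp_grid, dm_grid,
                    med_edp, beta_edp, med_dm, beta_dm, med_dv, beta_dv):
    """Discretized PEER PBEE integral:
    lambda(dv) ~ sum G(dv|dm) dG(dm|edp) dG(edp|im) |dlambda(im)|.
    med_* are callables returning the conditional median of the next variable;
    lam_im is the exceedance rate evaluated on im_grid."""
    lam_dv = np.zeros(len(dv_grid))
    d_lam = -np.diff(lam_im)                        # hazard mass per IM bin
    im_mid = np.sqrt(im_grid[:-1] * im_grid[1:])    # log-midpoints of the bins
    edp_mid = np.sqrt(edp_grid[:-1] * edp_grid[1:])
    dm_mid = np.sqrt(dm_grid[:-1] * dm_grid[1:])
    for im, dli in zip(im_mid, d_lam):
        p_edp = -np.diff(ccdf(edp_grid, med_edp(im), beta_edp))
        for edp, pe in zip(edp_mid, p_edp):
            p_dm = -np.diff(ccdf(dm_grid, med_dm(edp), beta_dm))
            for dm, pd in zip(dm_mid, p_dm):
                lam_dv += ccdf(dv_grid, med_dv(dm), beta_dv) * dli * pe * pd
    return lam_dv
```

With power-law medians and small dispersions the result converges to the closed-form solutions recalled later in the paper; here it mainly serves to make the structure of Eq. (1) explicit.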
In this paper, we propose a PEER PBEE-based graphical framework for seismic fatality risk assessment as an alternative to the risk-based seismic evaluation approach in SIA 269/8. Thanks to the closed-form solution of the PEER PBEE total probability integral, the uncertainties related to the conditional dependence of the exceedance rate of fatalities given earthquake intensity levels are estimated without the need to integrate the PEER PBEE equation numerically. To this end, the Fourway (Mackie et al. 2006) graphical method to integrate the PEER PBEE total probability integral is extended to include building occupancy and fatality rate calculations in an effort to make the proposed framework easier to adopt in engineering practice. To facilitate the proposed graphical framework, it is assumed that each interim model is lognormally distributed and the associated conditional dispersion is constant.
Before we go into the interim steps of the proposed framework, the reader should note the terminology, starting with the description of earthquake-related losses. The unit casualty risk in SIA 269/8 (UCRSIA), which is calculated by combining the occurrence probabilities of seismic events of EMS intensities with the associated damage states and fatality rates (Vogel and Kölz 2005), refers to deaths solely (Wenk and Beyer 2014), even though the term "casualty" tends to refer to the loss of life or severe injury. The decision variable implemented herein does not address injuries: only the term fatality is used. Further clarification is required regarding the decision variables in SIA 269/8 and this work: while the former focuses, among other things, on the unit casualty risk UCRSIA and the societal risk linked with the compliance factor αeff, the latter uses the number of fatalities nfatal to make retrofit-related decisions. Moreover, to avoid confusion between the two methods, the graph in Fig. 1 is referred to as the risk curve, while the PEER PBEE integral outcome is termed the loss curve. The term aleatory randomness refers to record-to-record variability, while the term epistemic uncertainty refers to the uncertainty regarding the material, model and failure parameters. Finally, even though damage models usually refer to the physical condition of structures, this study designates fatality rates as damage measures resulting from structural damage.
The remainder of this paper is divided into the following sections. The proposed framework to evaluate the seismic fatality risk posed by an existing building is presented first. An important element of the proposed framework is the set of possible combinations of tools to assess the seismic performance of the structure, represent the associated uncertainties, and compute the seismic fatality risk. Next, a typical Swiss post-war unreinforced masonry wall residential building is used in a case study to exemplify the proposed framework. The results of the case study are the basis for the comparisons and discussion presented in the next section. The final section presents the conclusions on the accuracy and effort involved in using the proposed framework and recommendations for practice.

Fatality risk evaluation framework
This section lays out the probabilistic background of the interim models behind the fatality risk evaluation framework, namely the probabilistic seismic demand, probabilistic damage and probabilistic loss models, with a primary focus on the structural assessment techniques (incremental dynamic analysis vs. static pushover), the treatment of uncertainties (aleatory only vs. aleatory and epistemic), and the strategy to solve the PEER PBEE integral (closed-form vs. numerical). Combinations of these techniques yield a total of eight routes to apply the proposed framework and create a loss curve for the decision variable, which illustrates the mean annual exceedance rate of fatalities. Each presented route benefits from task parallelization, which is illustrated in the case study in the following sections. A generic parallelization scheme is presented in the appendix.

Seismic hazard model
Seismic hazard at the site of interest is described by the (annual) exceedance rate (frequency) of an intensity measure (IM), denoted λ(im). In structural engineering applications, the 5% damped spectral pseudo-acceleration at the fundamental period of the structure (hereafter referred to as spectral acceleration), Sa(T1, 5%), is widely accepted as an optimal intensity measure due to its capability to interface between structural behavior and seismicity. The adopted hazard model SUIhaz2015 was developed by the Swiss Seismological Service (Wiemer et al. 2016) and formed the foundation of the revised response spectra in the Swiss code SIA 261 (SIA 2020). The SUIhaz2015 hazard data in terms of hazard curves (or uniform hazard spectra) is available online, allowing automated access from software such as Matlab.
For a more practical definition of the seismic hazard in structure-specific risk evaluations and to better match the fundamental period of the structure, the hazard curves at predefined oscillation periods provided in the SUIhaz2015 model were interpolated via cubic spline in log-log space onto a finer resolution. While the benchmark SUIhaz2015 model refers to a reference rock with a shear-wave velocity of Vs30 = 1100 m/s, soil amplification factors to adjust the seismic hazard to soil conditions other than the reference rock, provided in Danciu and Fäh (2017), are implemented into the framework.
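The interpolation step can be sketched as follows. The (im, rate) support points are illustrative, not actual SUIhaz2015 values, and the soil adjustment is reduced here to a single amplification factor applied to the IM axis, which is a simplification of the site-response factors of Danciu and Fäh (2017).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative hazard-curve support points: Sa(T1) in g, annual exceedance
# rate. A real application would fetch the SUIhaz2015 data instead.
im = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
rate = np.array([2e-2, 6e-3, 1.5e-3, 3e-4, 5e-5, 6e-6])

# Cubic-spline interpolation on log-log axes onto a finer IM resolution.
spline = CubicSpline(np.log(im), np.log(rate))
im_fine = np.geomspace(0.07, 1.6, 200)
rate_fine = np.exp(spline(np.log(im_fine)))

# Simplified soil adjustment (assumption): an amplification factor f_soil
# scales the IM axis, i.e., the soil-site hazard at im equals the
# reference-rock hazard at im / f_soil.
f_soil = 1.4
rate_soil = np.exp(spline(np.log(im_fine / f_soil)))
```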
For the closed-form solution of the PEER PBEE integral, a common method for the approximation of the hazard curve is to conduct a power-law fitting (linear in log space) with the slope parameter k and the intercept k0:

λ(im) = k0 (im)^(-k)    (2)

The required fitting can be performed globally, considering the entire hazard curve, locally for each damage state (Vamvatsikos and Cornell 2004), or within an intensity measure range of interest. As an example of a local fit, Mackie and Stojadinovic (2007) and Jalayer and Cornell (2009) consider an intensity measure domain bracketed by 2% and 10% exceedance in 50 years, corresponding to return periods of 2475 and 475 years, respectively, as the seismic hazard levels considered in many building codes. In countries subjected to low to moderate hazard levels, such as Switzerland, longer return periods may be required so that the structure experiences extensive damage and the total risk range is covered (Michel et al. 2010). A related study scrutinizes the site-dependency of safety levels reached with the typical 475-year return period, expecting that the safety level of structures designed according to this return period will be lower in low to moderate seismicity regions than in high seismicity regions. On top of that, the number of fatalities strongly correlates with the building population, which means that the fitting range of the hazard curve may vary significantly. In other words, the hazard level causing one fatality is much lower in densely populated buildings compared to a building with fewer occupants.
Naturally, the hazard curve approximation through a global fitting with a single pair of shape parameters will be limited to a narrow range at best. The decision fragility curve representing the conditional dependence of the decision variable (DV) given IM can deliver information regarding the validity of the selected fitting range. However, such an operation requires an iterative procedure and is time-consuming. Hence, this study employs a bin-based approach to create a vector of shape parameter pairs to approximate the hazard curve, presented in more detail in the case study.
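A least-squares power-law fit of Eq. (2) in log-log space, and its bin-based variant, can be sketched as follows; the sample hazard points in the usage below are illustrative.

```python
import numpy as np

def powerlaw_fit(im, rate):
    """Least-squares fit of lambda(im) = k0 * im**(-k) in log-log space.
    Returns (k0, k)."""
    slope, intercept = np.polyfit(np.log(im), np.log(rate), 1)
    return np.exp(intercept), -slope

def binned_powerlaw_fit(im, rate, n_bins):
    """Bin-based variant: one (k0, k) pair per consecutive block of points,
    yielding a vector of shape-parameter pairs that tracks the curvature of
    the hazard curve instead of a single global pair."""
    blocks = np.array_split(np.arange(len(im)), n_bins)
    return [powerlaw_fit(im[b], rate[b]) for b in blocks]
```

On an exactly power-law hazard curve every bin recovers the same pair; on a real, curved hazard curve the per-bin slopes k increase with intensity, which is what motivates the bin-based approach over a single global fit.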

Probabilistic seismic demand model
A probabilistic seismic demand model (PSDM) correlates the seismic hazard IM to the structural EDPs (response quantities of interest) using a structural response model that can capture the structure's linear and nonlinear behavior. The propagation of uncertainties is crucial when linking the EDPs to the IM: uncertainty arises from the random nature of the ground motions, the limited knowledge of the material characteristics, and the assumed simplifications in the mathematical model of the structure. With a focus on nonlinear dynamic analysis, we present several possibilities to link EDPs to IMs, characterized by different complexity levels of treating the uncertainties.
Widely recognized as the most accurate technique, nonlinear dynamic analysis offers realistic results for assessing the dynamic behavior of the structure, accounting for the variability in its response due to the random nature of the ground motions employed. Following the selection of ground motions in line with the seismic hazard characteristics of the location of interest, the structural model is subjected to multiple ground motions, each of them characterized by an IM. After each analysis, a representative EDP is extracted. Cloud analysis, incremental dynamic analysis (IDA) and stripe analysis are the most common approaches to derive the median (or mean) relationship between EDP-IM pairs to define a PSDM. While the efficiency of dynamic analysis methods to accurately capture the structural behavior is discussed elsewhere (Mackie and Stojadinovic 2005), IDA (Vamvatsikos and Cornell 2002a) has proven to be a practical method to create a PSDM and lies at the core of this study for two reasons: First, the incremental scaling of ground motions allows a detailed understanding of the structural response, including the record-specific variability, on the dynamic equivalent of the familiar nonlinear static pushover (SPO) curve; and second, an SPO-based tool called SPO2IDA (Vamvatsikos and Cornell 2006) can mimic the dynamic behavior of the structure by transforming the SPO results to IDA curves when running dynamic analysis is not a feasible option. Thanks to the plug & play nature of the interim models, the PSDM can easily be replaced with SPO-based, computationally "cheaper" assessment alternatives.
An efficient algorithm to perform a multi-record IDA is the hunt & fill tracing (Vamvatsikos and Cornell 2004), where each of the NGM ground motions is stepwise scaled NRPR times (RPR: runs per record). These accelerograms excite the building model, aiming to bracket the structural collapse and reach a sufficient resolution of the IDA curve. Once the relationship between the EDP-IM pairs is known, the PSDM is created through a least-squares single-line or piecewise linear regression, assuming that the EDPs follow a power-law relationship in natural log space. The EDP data are assumed to have a lognormal distribution when conditioned on IM, meaning that the conditional mean of EDP given IM is linear in log space. The relationship between the median EDP (denoted ÊDP) and the IM is described by the regression coefficients A and B:

ÊDP = A (IM)^B    (3)
Under the assumption of constant conditional dispersion, the standard deviation of the PSDM due to record-to-record variability, βR,EDP|IM, termed the conditional dispersion of EDP given IM, is defined as the standard deviation of the natural logarithms of the demand model residuals (Mackie and Stojadinovic 2003):

βR,EDP|IM = [Σ (ln EDPi - ln ÊDPi)^2 / (N - 2)]^(1/2)    (4)

where N is the number of EDP-IM pairs. If multi-record IDA is performed on a deterministic mean-parameter structural response model (abbreviated as the mean-parameter model hereafter) without addressing the epistemic uncertainty, the record-to-record variability βR,EDP|IM solely represents the dispersion of the PSDM, βPSDM. The significant computational demand linked with nonlinear dynamic analysis and issues regarding the interpretation of IDA results are the main barriers standing in the way of its implementation in the daily engineering routine. Hence, it is natural to investigate the performance of the SPO in rapid fragility assessment tools. The vital connection between the static (SPO) and dynamic (IDA) assessment approaches is the SPO2IDA tool. It is based on an R-μ-T (strength reduction factor - ductility - fundamental period) relation in normalized coordinates, followed by a transformation into EDP-IM pairs [Eqs. (5) and (6)]. Assuming the EDP of choice is the maximum interstory drift ratio θmax, the final approximation of the IDA curves is generated using a one-to-one mapping between θroof and θmax, followed by the calculation of the regression coefficients A and B using the maximum interstory drift ratios of the median IDA approximation [Eq. (3)].
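The PSDM regression of Eqs. (3)-(4) can be sketched as follows; the EDP-IM pairs would come from the IDA (or SPO2IDA) runs, and the single-line variant shown here omits the piecewise option.

```python
import numpy as np

def fit_psdm(im, edp):
    """Fit Eq. (3), EDP_hat = A * IM**B, by least squares in natural log
    space, and estimate the record-to-record dispersion of Eq. (4) as the
    standard deviation of the log residuals (N - 2 degrees of freedom for the
    two regression coefficients). Returns (A, B, beta_r)."""
    B, ln_a = np.polyfit(np.log(im), np.log(edp), 1)
    residuals = np.log(edp) - (ln_a + B * np.log(im))
    beta_r = np.sqrt(np.sum(residuals**2) / (len(im) - 2))
    return np.exp(ln_a), B, beta_r
```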
As the SPO2IDA tool does not deliver the individual IDA approximations but only their fractiles (i.e., 16%, 50% and 84%), further considerations are required regarding the definition of the PSDM dispersion. In their SPO-based fragility function tool SPO2FRAG, Baltzopoulos et al. (2017) adopt an IM-based approach to estimate the variability via the logarithmic distance between the 50% and the 16% fractiles for each limit state. The main advantage of the IM-based approach is that an IM value is available for each EDP level, while the reverse is not guaranteed for extensive damage states due to the flattening of the IDA curves. Still, we require an EDP-based approach because the dispersion of the demand model is measured as the EDP variability conditioned on IM. The proposed solution to overcome this issue is to populate a cloud of random EDP-IM pairs having the sample statistics of the fractile SPO2IDA approximations and to measure the PSDM dispersion through the standard deviation of the residuals in Eq. (4).
So far, the mean-parameter model used within the PSDM only considered the record-to-record variability without taking the uncertainty in the material, modeling and failure parameters into account. Dolsek (2009) presented a method for the complete coverage of both aleatory and epistemic uncertainty, where a Monte Carlo simulation with Latin Hypercube Sampling (LHS) is utilized to create NLHS realizations of a structure, followed by a multi-record IDA for each structure realization, and demonstrated the proposed method on a planar structure. However, for a large 3D model, such a consideration is still beyond the practical applicability limits due to the extensive computational cost: the required number of nonlinear dynamic runs is NLHS × NGM × NRPR.
An alternative method proposed by Fragiadakis and Vamvatsikos (2010) allows the approximation of response statistics based on Monte Carlo simulations with LHS, where the structural behavior is investigated through SPO analysis, followed by SPO2IDA for a fast evaluation of the variability. Assuming that the mean-parameter building model provides the overall median response and the random variable sampling adds variability around the median (Fragiadakis and Vamvatsikos 2010), only the median fractiles of the SPO2IDA output are required for the estimation of the epistemic uncertainty of the PSDM, βU,PSDM, and the dispersion between the median SPO2IDA curves is assumed to arise due to the epistemic uncertainty only. Contrary to Vamvatsikos and Fragiadakis (2009), we adopt the EDP-based method in accordance with our fatality risk estimation framework, as our PSDM requires EDPs conditioned on IMs. The standard deviation of the PSDM due to epistemic uncertainty is then

βU,PSDM = [Σ (ln EDPi,50% - mean(ln EDP50%))^2 / (NLHS - 1)]^(1/2)    (7)

where mean(ln EDP50%) is the mean of the natural logarithms of the median EDPs, and ln EDPi,50% is the natural logarithm of the median EDP of the i-th sample over the range of EDPs of interest.
The dispersion of the PSDM associated with the record-to-record variability, βR,EDP|IM, is extracted either from the IDA or from the SPO2IDA analysis using Eq. (4). The total dispersion of the PSDM due to both aleatory and epistemic uncertainty can then be combined as in Eq. (8) (Cornell et al. 2002; Jalayer 2003):

βPSDM = [βR,EDP|IM^2 + βU,PSDM^2]^(1/2)    (8)
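Eqs. (7) and (8) amount to a few lines of code; the per-realization median EDPs would come from running SPO2IDA on each of the NLHS sampled models.

```python
import numpy as np

def beta_u_psdm(median_edps):
    """Eq. (7): dispersion of the natural logs of the per-sample median EDPs
    (sample standard deviation over the N_LHS model realizations)."""
    ln_med = np.log(np.asarray(median_edps, dtype=float))
    return np.sqrt(np.sum((ln_med - ln_med.mean())**2) / (len(ln_med) - 1))

def beta_psdm(beta_r, beta_u):
    """Eq. (8): total PSDM dispersion, combining the aleatory
    (record-to-record) and epistemic contributions by the square root of the
    sum of squares."""
    return float(np.hypot(beta_r, beta_u))
```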

Probabilistic damage model
Probabilistic damage models (PDM) predict the complementary cumulative distribution of a damage measure (DM) conditioned on the engineering demand parameter (EDP). In this study, the selected DM is the fatality rate, defined as the ratio of the number of fatalities to the total number of occupants in a building at the time of the earthquake, conditioned on the maximum interstory drift θmax, defined as the EDP. The dependent variable of the PDM describing the earthquake-induced rate of fatalities can be obtained from a variety of sources, such as statistical data from historical seismic events or international data based on expert opinion. Such data rest upon observations of the correlation between the fatalities and the damage states of buildings after earthquakes. Naturally, fatality observations are related and documented with respect to qualitative damage states such as "partial collapse" or "total collapse" and not necessarily with respect to EDPs. The apparent challenge is twofold: First, the observer-dependent, vague (qualitative and non-uniform) definition of damage states impedes a uniform comparison of fatality rates found in the literature, and second, even in the case of a uniform nomenclature, damage states still require translation into engineering terms for the implementation of the proposed damage model.
Often, it is assumed that the death tolls initiate at an extensive damage state, reaching their maximum value at total collapse. A direct comparison among the casualty rates stemming from the literature is challenging due to the non-uniform definition of the damage states (near-collapse, extensive damage, partial collapse), casualty classification scales (sudden death, life-threatening injuries) and structure types. Nevertheless, we summarize a compilation of the expected values for structure-specific fatality rates given structural damage states in Table 1, as found in the literature (e.g., Jaiswal and Wald 2010), where the damage state at which the death tolls initiate is termed "near-collapse." The second obstacle to creating the PDM remains the numerical translation of near-collapse and collapse into EDPs. While the EDPs causing collapse are computed from the numerical non-convergence of the structural response model (the EDP values become infinite for marginal changes in IM, indicated by the flatlining of an IDA curve), the definition of the EDPs at near-collapse is a challenging task. Several attempts have been made to translate the damage states into numerical values for their quantitative identification. For the life-safety limit states, Penna et al. (2013) investigated the performance of various criteria, including the degradation of the maximum base shear of an SPO curve, the percentage area of piers achieving a maximum drift value and the significant decrease in the slope of the average IDA curve. Vamvatsikos and Cornell (2004) defined the collapse prevention state at 20% of the elastic slope on the IDA curve, corresponding to the softening of the curve toward the numerical non-convergence. Another limit state-based definition is proposed by Beyer et al. (2019) for unreinforced masonry walls, where the ultimate drift ratio is set to 4/3 of the drift ratio at significant damage.
Motivated by these studies, we compare two EDP candidates to identify the EDP at the near-collapse damage state: the EDP at 20% of the elastic slope of the IDA curve, and an interstory drift equal to 3/4 of the maximum interstory drift at collapse, designated as the initiation of the IDA flatline. Whichever of these two EDPs occurs first is then defined as the EDP associated with near-collapse.
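The two-criteria rule can be implemented directly on a (median) IDA curve; the IDA points in the usage below are illustrative, and the curve is assumed to be traced from the origin up to its flatline.

```python
import numpy as np

def near_collapse_edp(edp, im):
    """Near-collapse EDP: the first-occurring (smaller) of
    (a) the EDP where the local IDA slope drops to 20% of the elastic slope,
    (b) 3/4 of the maximum EDP at collapse (initiation of the IDA flatline).
    edp, im: arrays tracing one IDA curve from the origin to the flatline."""
    slopes = np.diff(im) / np.diff(edp)
    elastic_slope = slopes[0]
    softened = np.nonzero(slopes <= 0.2 * elastic_slope)[0]
    edp_20pct = edp[softened[0] + 1] if softened.size else np.inf
    edp_flatline = 0.75 * edp[-1]
    return min(edp_20pct, edp_flatline)
```

For an IDA curve that softens gradually, criterion (a) governs; for a curve that stays nearly elastic until a sudden flatline, criterion (b) does.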
Following the definition of DMs and EDPs, the probabilistic damage model is created as follows:

GDM|EDP(dm|edp) = 1 - Φ[(ln dm - ln D̂M(edp)) / βDM|EDP]    (9)

where Φ denotes the standard normal cumulative distribution function, D̂M(edp) the median damage measure given the EDP, and βDM|EDP the associated constant conditional dispersion.

Probabilistic loss model
A probabilistic loss model (PLM) addresses the consequence by linking the damage measures (DM) to the decision variables (DV), the latter defined as the number of fatalities, nfatal, in this study. Given the continuous nature of both DM and DV, their median relationship and the corresponding dispersion define the probabilistic loss model. However, before facilitating the PLM, it is necessary to create a population model representing the number of occupants at the time of the earthquake, npop, capable of estimating the number of fatalities in buildings exposed to a seismic event. While SIA 269/8 estimates the number of occupants through occupancy-specific units (Table 2), it is also possible to create a population model based on the footprint of the building (Seligson 2008). As we aim for SIA 269/8 compatibility, we chose to define our probabilistic loss model based on the SIA 269/8 occupancy model in Table 2. The total population npop is determined as the product of the occupancy rate in persons (OR) and the number of occupancy units (nunit). If the (expected) number of occupants and the fatality rate are known, the number of fatalities is computed per Eq. (10), followed by the PLM for the median relationship in Eq. (11):

nfatal = npop · DM    (10)

n̂fatal = npop · D̂M    (11)
Regardless of the employed population model, the occupancy fluctuates over time as a matter of course, which needs to be taken into account through the dispersion of PLM. For example, in a residential building, it is natural to expect a higher population overnight than during the daytime. One possibility to treat the time-based variation of the population is to consider multiple scenarios, such as the maximum peak population or an equivalent number of occupants statistically occupying the building at all times, as defined in FEMA P-58 (Seligson 2008). For the treatment of the population fluctuation, SIA 269/8 defines the occupancy rates in terms of the expected number of occupants within upper and lower boundaries without specifying the associated confidence level (Table 2).
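The occupancy and fatality computation of Eqs. (10)-(11), and one hedged way to turn the SIA 269/8 occupancy bounds into a PLM dispersion, are sketched below. Reading the bounds as 16%/84% fractiles of a lognormal occupancy distribution is our assumption, since SIA 269/8 does not specify a confidence level.

```python
import numpy as np

def n_population(occupancy_rate, n_units):
    """Expected occupants: n_pop = OR * n_unit (SIA 269/8 occupancy model)."""
    return occupancy_rate * n_units

def n_fatalities(n_pop, fatality_rate):
    """Eq. (10): expected fatalities given the damage measure (fatality rate)."""
    return n_pop * fatality_rate

def beta_plm_from_bounds(lower, upper):
    """Assumed PLM dispersion: if the SIA occupancy bounds are read as the
    16%/84% fractiles of a lognormal distribution, the log-standard deviation
    is half the logarithmic distance between them. This fractile reading is
    an assumption, as the code states no confidence level for the bounds."""
    return 0.5 * np.log(upper / lower)
```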

Solution strategies
Once the interim probabilistic models are created, the PEER PBEE total probability integral [Eq. (1)] can be employed to couple the models and obtain the (annual) rate of the decision variable of interest. Despite the decoupled nature of the problem, the numerical solution of the PEER PBEE integral requires a programmable computer. However, a graphical tool named Fourway, created by Mackie and Stojadinovic (2006), provides a simple method for estimating the conditional dependence of a decision variable in closed form without needing to solve the PEER PBEE integral numerically. This tool lies at the core of this study. As introduced earlier, the main assumptions for its application are the lognormality of the probabilistic models and their constant dispersion over the considered range of independent variables.
The median decision variable in closed-form is derived from the probabilistic interim models and presented in Eq. (12). The dispersion of the decision variable conditioned on intensity measure is defined in Eq. (13) (Mackie et al. 2006).
Equivalent to the closed-form solution, it is also possible to obtain the median and the fractile curves of the decision variable graphically via Fourway, whose application is presented in Fig. 2 for a generic case without consideration of upper and lower fractiles. Starting from the right upper pane, the user traces the interim models in a clockwise direction. For the median relationship, a vertical arrow is drawn starting from the PSDM in the right upper pane (Quadrant I) and intersecting the PDM in the right lower pane (Quadrant II), followed by an orthogonal arrow intersecting the PLM in the left lower pane (Quadrant III). Finally, the median relationship between the decision variable and the intensity measure is derived in the left upper pane (Quadrant IV) from the intersection of the last arrow with the median DV-IM curve. The probabilistic models provided so far, including the decision variable conditioned on the intensity measure, still require a solution of the PEER PBEE integral to obtain the loss curve for the decision variable of interest: either through a closed-form solution derived by Mackie and Stojadinovic (2007) [Eq. (14)], or through the numerical integration of the PEER PBEE integral [Eq. (1)] by combining the interim models based on the total probability theorem. While the PSDM, PDM and PLM are identical for both the closed-form and numerical solutions, the latter can deliver the conditional dependence of the decision variable through decision fragility surfaces, which serves as a validation of the closed-form solution.
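Because each interim model is a log-linear (power-law) fit, the Fourway chaining can be reproduced in a few lines. The sketch below, valid only under the lognormality and constant-dispersion assumptions, uses the regression coefficients reported later for the case study (Table 7); the dispersion combination is a first-order propagation of the form used by Mackie et al. (2006):

```python
import math

# Interim log-linear models, coefficients from the case study (Table 7)
A, B = -5.17, 1.20   # PSDM: ln(EDP) = A + B ln(IM)
C, D = 20.13, 4.79   # PDM:  ln(DM)  = C + D ln(EDP)
E, F = 3.16, 1.00    # PLM:  ln(DV)  = E + F ln(DM)

def median_dv(im):
    """Chain the three log-linear models: median DV given IM."""
    ln_edp = A + B * math.log(im)
    ln_dm = C + D * ln_edp
    return math.exp(E + F * ln_dm)

def im_at_dv(dv):
    """Invert the chain: IM at which dv is the median decision variable."""
    ln_dm = (math.log(dv) - E) / F
    ln_edp = (ln_dm - C) / D
    return math.exp((ln_edp - A) / B)

def beta_dv_im(b_psdm, b_pdm, b_plm):
    """First-order dispersion of DV|IM propagated through the chain."""
    return math.sqrt(b_plm**2 + F**2 * b_pdm**2 + (F * D)**2 * b_psdm**2)

print(im_at_dv(1.0))  # ≈ 1.29 g, the case-study IM at one median fatality
```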

Outline of the proposed framework
The application of the proposed framework is presented for various combinations of the structural assessment technique used to create the PSDM (IDA vs. SPO-based), the treatment of uncertainties (aleatory only vs. aleatory and epistemic) and the solution strategy (closed-form vs. numerical) used to obtain the loss curve. Eight different routes through the framework are detailed in Table 3 and presented as follows: C1 requires a single SPO analysis with the SPO2IDA tool applied on top of the capacity curve to obtain the PSDM. As the structural model is the mean-parameter model, no epistemic uncertainties are addressed. C2 has the same PSDM as C1, departing from it only regarding the treatment of epistemic uncertainties. To this end, Monte Carlo simulation with Latin Hypercube Sampling (LHS) is used to create N_LHS realizations of the building model with varying material, modeling and failure parameters. The C3 route employs a multi-record incremental dynamic analysis (IDA) using N_GM ground motion accelerograms, each scaled N_RPR times, to create a dynamic equivalent of the SPO curve. Finally, C4 has the same PSDM as C3 but addresses the epistemic uncertainty based on N_LHS realizations of the building model, in analogy to the relationship between C1 and C2. Routes N1-N4 share identical interim models with C1-C4; the only difference is that the conditional dependence of the decision variable and the loss curve are obtained by numerical integration, as opposed to the closed-form solutions of C1-C4. For the sake of completeness, the flowchart in Fig. 3 presents the eight different routes to apply the framework. Once the PSDM is created, the PDM and PLM in each branch follow the same routine.

Building specifications
A 4-story unreinforced masonry (URM) residential building in Switzerland (Fig. 4, Table 4) was selected to demonstrate the seismic fatality risk evaluation framework described in the previous sections. A comprehensive investigation of the case study building was previously conducted by Karbassi and Lestuzzi (2014) using a sophisticated applied element model to verify the risk curve in SIA 269/8. The following URM mechanical property values, detailed in Table 5, are used to create a mean-parameter structural response model of the building: compressive strength f_m, elasticity modulus E, shear modulus G and shear strength τ_0. Two additional parameters are required for nonlinear dynamic analyses: G_c, governing the stiffness reduction after damage initiation, and the softening parameter β, governing the slope of the softening branch.

Structural response model
Various methods, including equivalent frames and simplified or explicit micro-models based on planar or 3D finite elements, have been developed to assess the seismic safety of URM structures. One of them is the macroelement approach implemented in Tremuri (Lagomarsino et al. 2013), which employs equivalent frame modeling and nonlinear analysis methods to accurately compute the seismic response of a URM structure with distinctly less computational effort than refined models, offering an optimal balance between accuracy and cost. The building model was created in the commercial version of the Tremuri software. For both nonlinear static and dynamic analyses, the refined macroelement model formulated by Penna (2014) was employed to account for the axial elongation due to rocking. Local failure modes, such as the out-of-plane URM wall failure mechanism, were assumed to be prevented and were not modeled so that the structural behavior could be assessed on a global scale.

Table 4 lists the main characteristics of the studied YVR14 building, with the following notes: (a) for the sake of comparison with other studies, the seismic hazard is defined for Basel instead of the original location of the building; (b) the first seismic regulations in Switzerland were introduced in SIA 160 (1970); (c) the range of wall thickness (150-180 mm) is defined following Jamali and Kölz (2015); due to the lack of more precise information, the façade walls are assumed to be thicker than the inner walls.

Seismic hazard
The selected location of the building, the city of Basel, is exposed to some of the highest seismicity in Switzerland. The hazard curve taken from the SUIhaz2015 study was interpolated to match the building's spectral acceleration at the fundamental period T_1 = 0.243 s. The selected location is assumed to be on reference rock with a shear wave velocity of V_s30 = 1100 m/s. The resulting hazard curve was then fitted to estimate the slope and intercept parameters, k and k_0, respectively. To this end, the hazard curve was divided into intensity measure bins, and a local fit was performed for each bin, yielding step functions of the parameters for an accurate estimation of the hazard curve. Finally, the parameters were smoothed via a third-order polynomial fit, as shown in Figs. 5 and 6. The interpolation of the raw hazard curve and the corresponding fitted curves via the step function and the polynomial fit showed good agreement over the entire hazard region, thanks to the high resolution of the fitting (Fig. 7).
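The per-bin power-law fit λ(Sa) = k_0 · Sa^(−k) reduces to a linear regression in log-log space. A minimal sketch, with hypothetical hazard samples standing in for the interpolated SUIhaz2015 curve:

```python
import numpy as np

# Hypothetical hazard samples (Sa in g, annual exceedance rate); the real
# values would come from the interpolated SUIhaz2015 hazard curve
sa = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
lam = np.array([2e-2, 5e-3, 9e-4, 1.2e-4, 1.1e-5, 7e-7])

def fit_bin(sa_bin, lam_bin):
    """Local power-law fit lambda = k0 * sa^(-k) over one IM bin (log-log)."""
    slope, intercept = np.polyfit(np.log(sa_bin), np.log(lam_bin), 1)
    return -slope, np.exp(intercept)  # (k, k0)

# Piecewise fit over overlapping 3-point bins; the resulting (k, k0) step
# functions could then be smoothed with a low-order polynomial as in the paper
params = [fit_bin(sa[i:i + 3], lam[i:i + 3]) for i in range(len(sa) - 2)]
for (k, k0), s in zip(params, sa):
    print(f"bin starting at Sa={s:.2f} g: k={k:.2f}, k0={k0:.2e}")
```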

Probabilistic seismic demand model
The maximum interstory drift ratio was selected as the EDP to correlate the seismic hazard (IM) to the structural response. Using the Tremuri macroelement model capable of performing nonlinear dynamic and nonlinear static analysis, incremental dynamic and nonlinear static pushover based probabilistic seismic demand models are presented next.

Incremental dynamic seismic response analysis
The demand model is created via nonlinear dynamic macroelement simulation (Newmark integration method) with the Tremuri software through a multi-record IDA study. The ground motions used in this study were selected from Karbassi and Lestuzzi (2014) to represent the seismic hazard in Switzerland and are documented in the appendix. A total of 492 nonlinear dynamic analyses were performed, with N_GM = 41 ground motions, each scaled N_RPR = 12 times via the hunt & fill algorithm to achieve a high resolution of the IDA curves. Through the selection of a relatively large number of ground motions, the underprediction of the model dispersion was avoided and the bias was minimized (Mackie and Stojadinovic 2005; Vamvatsikos and Cornell 2004). For the efficiency of the dynamic analysis, a Matlab-based Tremuri routine was created to preprocess the building models, allowing a fully automated process without user interaction, and integrated into the IDA bundles created by Vamvatsikos. Both horizontal components of the records were used simultaneously, with the highest spectral acceleration component oriented parallel to the building's transverse direction, while the other component acts along the longitudinal direction, in accordance with the applied element model presented in Karbassi and Lestuzzi (2014). The vertical components of the ground motions were neglected. The scaling factors within the hunt & fill algorithm were defined based on the transverse axis of the building, and the identical factors were also applied to the longitudinal axis.
As commonly applied in structural engineering, the maximum interstory drift ratio θ_max and the spectral acceleration S_a(T_1, 5%) were defined as the EDP and the IM, respectively. At this stage, a few notes on the analysis setup are necessary. At the preliminary stages, it was observed in a few cases that Tremuri failed to finish the nonlinear dynamic runs for ground motions with a high number of timesteps. This problem was eliminated by truncating the corresponding ground motions, as documented in the appendix. A further problem was observed in some cases where Tremuri successfully finished the analysis but failed to write the output to a text file due to the large amount of nodal output data, resulting in an out-of-memory error. Even though it was possible to open the binary file containing the analysis results in the research version of Tremuri, such an operation required user interaction after each run and conflicted with automating the entire procedure. To overcome this problem, we decided to evaluate the interstory drifts based on the average displacement of stories in order to reduce the data size. Given the diaphragm action due to the reinforced concrete slabs and a preliminary evaluation of the nonlinear static pushover analysis results, the nodal displacements were found to be approximately constant within the story levels.
The resulting IDA curves and their fractiles are presented in Figs. 8 and 9, respectively. To summarize the IDA curves, EDP stripes at given IM levels were used to describe the probability distribution of the EDP for the considered IM levels. Given the reasonably continuous and monotonic nature of the single-record IDA curves, only a marginal difference was observed between the fractiles conditioned on IM and those conditioned on EDP.
As we intend to employ the number of fatalities in the decision-making, the PSDM was fitted within the entire damage domain excluding collapse, while simultaneously aiming to avoid a regression model governed by EDP-IM pairs close to the origin (Jalayer et al. 2015). Given the observation that the fractile IDAs approximately follow the "equal displacement rule" and our narrow range of interest (fatalities exceeding one), a single-line least-squares regression fit was employed to obtain the PSDM. Mai et al. (2017) provide a comprehensive review of breakpoint selection for bilinear regression models, which falls outside the scope of this study for practical reasons. The resulting regression coefficients were found to be A = −5.17 and B = 1.20, with a model dispersion of β_PSDM = 0.55.
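The single-line fit is an ordinary least-squares regression of ln(EDP) on ln(IM), with the dispersion taken as the standard deviation of the log residuals. A sketch on synthetic EDP-IM pairs (generated here for illustration; the case study used the 41 × 12 IDA results):

```python
import numpy as np

# Synthetic EDP-IM pairs standing in for the IDA stripes (IM in g, EDP =
# maximum interstory drift ratio); generated around the reported fit
rng = np.random.default_rng(1)
im = np.exp(rng.uniform(np.log(0.2), np.log(2.0), 200))
edp = np.exp(-5.17 + 1.20 * np.log(im) + rng.normal(0.0, 0.55, im.size))

# Single-line least-squares fit in log space: ln(EDP) = A + B ln(IM)
Bfit, Afit = np.polyfit(np.log(im), np.log(edp), 1)
resid = np.log(edp) - (Afit + Bfit * np.log(im))
beta_psdm = resid.std(ddof=2)  # model dispersion = std of log residuals
print(Afit, Bfit, beta_psdm)
```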

Nonlinear static pushover seismic response analysis
The computationally most demanding part of the proposed framework is the generation of the probabilistic seismic demand model, as it requires the evaluation of the dynamic behavior of the building model. The excessive computational effort associated with nonlinear dynamic analysis emerges due to the analysis duration and the postprocessing of results. Hence, it is beneficial to exploit SPO-based assessment alternatives to speed up the computation time, which will be demonstrated in the following.
Before going into the SPO analysis details, a side note regarding the definition of displacements is worth mentioning: As described previously in Sect. 3.4.1, it was not always possible to extract the nodal output after a dynamic analysis due to an out-of-memory error caused by the large output file size for ground motion inputs with many timesteps. Therefore, the IDA-based PSDM was created based on the average displacement of 3D nodes at floor levels, automatically assigned to wall intersections by Tremuri. To allow a meaningful comparison between the IDA- and the SPO-based PSDM, the EDPs defined in this section also refer to the average displacement of 3D nodes at floor levels despite the availability of nodal results. A preliminary analysis of the sensitivity of drift ratios to the displacement criterion (individual nodes vs. average) showed that employing average displacements promises good accuracy, as expected, since the nodal displacements assigned to the same floor level differ only marginally due to the presence of rigid floor diaphragms in the case study building.
As the ground motion component with the larger spectral acceleration within the IDA is oriented parallel to the building's transverse direction, and the same direction also governs the ground motion scaling factors, the transverse direction was selected for applying the load patterns within the SPO analyses. Hence, with a focus on the transverse direction of the case study building, 12 SPO analyses (number of analysis steps: 500, error tolerance: 0.001) were conducted on the mean-parameter model with complete coverage of the combinations of load pattern (uniform or first-mode shape), load direction (positive or negative) and torsional effects (no eccentricity, positive eccentricity, or negative eccentricity). The selection of the governing pushover curve for further use poses a significant challenge, as it strongly affects the accuracy of the probabilistic seismic demand model approximation. Assuming that collapse is triggered following a most-damaging, least-energy path and that the corresponding "worst-case" load pattern is most likely to mimic the dynamic behavior of the MDOF model (Vamvatsikos and Cornell 2002b), the analysis with ID 23 (negative transverse direction, inverse triangular distribution, positive eccentricity) yielded the pushover curve with the lowest compliance factor, equally representing the highest code demand and the lowest displacement capacity. The extracted pushover curve, illustrating the roof drift ratio and the base shear, was then approximated by a quadrilinear fit (elastic-hardening-negative-residual) using the algorithm proposed by De Luca (2013) and is ready to be fed into SPO2IDA (Fig. 10).
The SPO2IDA outputs, which are the dimensionless R-μ-T relations, are transformed into spectral acceleration and roof drift ratio pairs belonging to the MDOF system through Eqs. (5) and (6). The roof drift ratio at yield can be defined at the apparent yield point of the quadrilinear approximation of the pushover curve or based on other drift criteria concerning the damage state of structural elements of the global building model. For this study, the roof drift ratio at yield was defined as the value causing the first pier to yield in the corresponding pushover step. The spectral acceleration at yield S_a^yield [Eq. (5)] can be estimated without performing nonlinear dynamic analysis, based on a response spectrum analysis, using the ratio of yield force to effective modal mass, or through elastic response history analysis (Fragiadakis and Vamvatsikos 2010). For the case study building, the simplified yield force to effective modal mass ratio gives a spectral acceleration at yield of 0.324 g. As the IDA results of the mean-parameter model were already available, the spectral acceleration at yield S_a^yield was also calculated as the average spectral acceleration at yield of the multi-record IDA for verification purposes and found to be 0.333 g. As the SPO results are available for both θ_roof and θ_max, the final approximation of the IDA curves was generated using a one-to-one mapping, yielding the resulting SPO2IDA fractiles in Fig. 11.
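The transformation in Eqs. (5) and (6) is a simple rescaling of the dimensionless SPO2IDA coordinates by the yield quantities. In the sketch below, the yield spectral acceleration is the case-study estimate, while the yield roof drift ratio is an illustrative placeholder:

```python
# Eqs. (5) and (6): rescale dimensionless SPO2IDA coordinates (strength
# ratio R, ductility mu) by the yield quantities of the MDOF model
sa_yield = 0.324      # g, yield force / effective modal mass (case study)
theta_yield = 0.0012  # roof drift ratio at first pier yield (placeholder)

def to_mdof(R, mu):
    """Return (Sa in g, roof drift ratio) for one SPO2IDA fractile point."""
    return R * sa_yield, mu * theta_yield

sa, theta = to_mdof(2.0, 3.5)  # e.g. one point on a fractile IDA curve
```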
The 16% and 50% fractile SPO2IDA curves slightly underestimated the spectral acceleration at collapse, while the capacity is overestimated for EDPs close to the origin. A combined load pattern or an adaptive pushover algorithm would likely provide better estimates. However, for the purposes of our study, we employ the SPO2IDA approximations in Fig. 11, as the logarithmic regression for the PSDM is created within a close-to-collapse EDP range, where the approximate IDA curves gave reasonably good estimates. The corresponding regression coefficients of the demand model were found to be A = −5.31 and B = 1.90, with a record-to-record variability of β_R,EDP|IM = 0.58.

Probabilistic damage model
Following the postprocessing of the IDA results, the EDPs associated with fatalities due to collapse were obtained from the apparent initiation of the flatline of each single-record IDA curve. For the extraction of the EDPs leading to the near-collapse damage state, the 20% slope and the 3/4 rules were applied, with whichever occurred first governing the criterion. For only five out of 41 ground motion records did the 20% slope criterion dictate the EDP at near-collapse; one of them is illustrated in Fig. 12 for the record ESMD 199.
While SIA 269/8 does not explicitly mention them, Karbassi and Lestuzzi (2014), whose study served to verify the SIA 269/8 method, set the fatality rates for near-collapse and collapse to DM^NC_50% = 0.02 and DM^C_50% = 0.10, respectively; these values were also adopted for the case study building. Due to the lognormality assumption of the PDM, the fatality rate does not vanish for very low EDPs, meaning that the model assigns nonzero probabilities starting from the origin of the EDP axis. To avoid this, it is possible to introduce a cutoff by setting the fatality rates assigned to EDPs below near-collapse to zero. However, for the closed-form solution of the case study building, such a choice would not affect the exceedance rate of one fatality, which is defined as the variable to be employed in the decision-making procedure. In addition, omitting a cutoff in the numerical integration is further justified, given that EDPs below near-collapse can still lead to fatalities through the failure of nonstructural elements. Following the homoskedasticity assumption of the PDM, the model dispersion is constant, meaning that the variability of the fatality rate is identical conditioned on both the collapse and near-collapse damage states. The constant dispersion of the damage model, β_PDM, is defined based on the collapse limit state for two reasons: First, there is a broader body of literature on fatality rates given collapse than on those given damage states before collapse. Second, quantitatively capturing the collapse state is less vague than near-collapse, as the latter often involves the subjective judgment of the observer. The lower bound for the fatality rate due to the collapse of unreinforced masonry buildings in Table 1 is assumed to represent the 16% fractile fatality rate at collapse (DM^C_16%) and is set to 0.06. The dispersion β_PDM is calculated from the distance between the 16% and 50% fractiles per Eq. (15). An alternative approach would be to define the dispersion through half the distance between the 16% and 84% fractiles; upper bounds for the fatality rates, however, are too widespread, particularly for unreinforced masonry buildings, making their definition as DM^C_84% difficult.
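With the adopted fractile values, the dispersion of Eq. (15) reduces to the log-distance between the 50% and 16% fractiles:

```python
import math

dm_c_50 = 0.10  # median fatality rate given collapse (adopted value)
dm_c_16 = 0.06  # assumed 16% fractile (lower-bound literature value)

# Eq. (15): dispersion from the distance between the 16% and 50% fractiles
beta_pdm = math.log(dm_c_50) - math.log(dm_c_16)
print(round(beta_pdm, 3))  # → 0.511
```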
The median relationship was created through a linear regression on the 41 EDP pairs associated with near-collapse and collapse. The resulting probabilistic damage model and the associated model uncertainty are illustrated in Fig. 13. The resulting regression coefficients were found to be C = 20.13, D = 4.79.

Probabilistic loss model
Given the seismic fatality rates, the last missing step before obtaining the conditional dependence of the decision variable is the loss model describing the number of fatalities (DV) conditioned on the fatality rates (DM), which correlates with the building-specific number of occupants present at the time of the earthquake. To create the probabilistic loss model, the relationships between the median fatality rate DM_50% and the median number of fatalities DV_50% at collapse and near-collapse, denoted with superscripts "C" and "NC," respectively, are required. As the floor plan documentation was available, the number of rooms was treated deterministically, meaning that the fluctuation of the population, modeled through the occupancy rate, represents the dispersion of the probabilistic loss model, β_PLM. In this manner, the lower and upper bounds of the residential occupancy rate in Table 2 were taken as the 16% and 84% fractiles, as they are interpreted as lower and upper limits within some (unknown) confidence interval. The fluctuation of the population was measured from half the distance between the 84% and 16% fractile occupancy rates in log space [Eq. (16)].
The total number of residential occupancy units (rooms) n unit was extracted from the provided floor plan as 68 (Fig. 4). The median occupancy rate OR 50% derived from the 16% and 84% fractiles of the lognormally distributed population model was calculated as 0.346. After substituting the median number of fatalities and the fatality rates in Eq. (10), linear regression on the DV-DM pairs yielded the model coefficients E = 3.16 and F = 1.00. The resulting probabilistic loss model is shown in Fig. 14.
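The lognormal population model can be recovered directly from the two fractile occupancy rates. In this sketch, the bounds of 0.2 and 0.6 persons per room are assumptions chosen to reproduce the reported median of 0.346; the actual SIA 269/8 Table 2 values should be used in practice:

```python
import math

# Residential occupancy-rate bounds (persons per room) interpreted as the
# 16% and 84% fractiles; the values below are illustrative assumptions
or_16, or_84 = 0.2, 0.6
n_unit = 68  # number of rooms extracted from the floor plan

# Lognormal median and dispersion from the two fractiles [Eq. (16)]
or_50 = math.sqrt(or_16 * or_84)
beta_plm = 0.5 * (math.log(or_84) - math.log(or_16))

n_pop_median = or_50 * n_unit  # median occupants at the time of the event
print(round(or_50, 3), round(beta_plm, 3), round(n_pop_median, 1))
```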

Conditional dependence of the decision variable
Once the probabilistic demand, damage and loss models are defined, the conditional dependence of the decision variable given the intensity measure (decision fragility) is obtained through the combination of the PSDM, PDM and PLM via the closed-form solution in Eqs. (12) and (13). Equivalent to the closed-form solution introduced earlier, the median and the fractile curves are obtained graphically by tracing the interim models in a clockwise direction starting from the PSDM. For example, using the median curves in Fig. 15, the spectral acceleration representing the number of fatalities exceeding one is obtained as 1.29 g. For the estimation of the second moments, the dispersion in Quadrant III changes from the 16% to the 84% fractile (or vice versa) as an approximation, which was proposed and tested by Mackie and Stojadinovic (2006). The need for such an approximation arises because the dispersion in Quadrant IV is conditioned on IM, not DV. The graphical method presented in Fig. 15 illustrates the case study example; its implementation, however, can be carried over to any structure type or any decision variable as long as the underlying assumptions are maintained. By adjusting the regression coefficients and the conditional dispersion, their effect on the decision variable can be tracked visually. As a possible scenario, the Fourway plot can be used to decide whether more ground motions need to be employed to reduce the dispersion of the demand model, aiming to decrease the dispersion of the decision variable conditioned on the intensity measure. Where dynamic analyses are not feasible, SPO-based alternatives can be used to create a demand model thanks to the plug & play nature of the Fourway plot, allowing an efficient allocation of computational resources.

Latin hypercube sampling for epistemic uncertainty quantification
Optionally, the uncertainty in the probabilistic demand model associated with the material, modeling and failure parameters can be coupled with the record-to-record variability for both the SPO- and IDA-based PSDM. For the case study building, geometric uncertainty (e.g., story height, wall thickness, size of openings) is deliberately not addressed, as it would require updating the mass and the stiffness matrix for each realization of the model, which would need to be processed one by one using the GUI of Tremuri, demanding user interaction and hence making automation quite difficult.
The following random variables regarding the material, modeling and failure parameters are defined for the uncertainty quantification: compressive strength f_m, elasticity modulus E, shear modulus G, shear strength τ_0, ultimate shear drift δ_shear and ultimate rocking drift δ_rocking (Table 6). Two aspects need further consideration in the random sampling: First, unrealistic parameter pairs need to be avoided (e.g., a low elasticity modulus combined with a high shear modulus, or vice versa); second, parameters with very low or high values sampled from the tails of the distributions might lead to non-convergence of the structural response model. For the first, a multivariate Gaussian copula was defined to assign dependence between the random variables using the UQLab software (Marelli and Sudret 2014). To this end, a strong correlation between f_m, E and G was imposed with a linear correlation coefficient of 0.7, while a moderate correlation of 0.5 was defined between the shear and rocking ultimate drift limits, as proposed by Tomić et al. (2021). For the second, each distribution was truncated to its 90% confidence interval.
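A minimal NumPy/SciPy sketch of this sampling scheme (standing in for UQLab): stratified uniforms per dimension, a Gaussian copula imposed via the Cholesky factor, and truncated lognormal marginals. The medians and dispersions are illustrative placeholders, and mixing the LHS columns through the Cholesky factor slightly relaxes the strict stratification:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d = 500, 6  # realizations x parameters: f_m, E, G, tau_0, d_shear, d_rock

# Gaussian-copula correlation matrix: 0.7 between f_m, E and G; 0.5 between
# the shear and rocking drift limits; tau_0 left uncorrelated
R = np.eye(d)
R[0, 1] = R[1, 0] = R[0, 2] = R[2, 0] = R[1, 2] = R[2, 1] = 0.7
R[4, 5] = R[5, 4] = 0.5

# Latin Hypercube: one stratified uniform sample per dimension
strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
u = (strata + rng.random((n, d))) / n

# Impose the copula correlation in standard-normal space
z = stats.norm.ppf(u) @ np.linalg.cholesky(R).T
u_corr = stats.norm.cdf(z)

# Truncated lognormal marginals: keep only the central 90% of each
# distribution to avoid extreme, non-convergent parameter values
medians = np.array([5.9, 3500.0, 1400.0, 0.20, 0.006, 0.010])  # illustrative
betas = np.full(d, 0.3)                                        # illustrative
samples = medians * np.exp(betas * stats.norm.ppf(0.05 + 0.9 * u_corr))
```

Each row of `samples` would then define one Tremuri model realization for the subsequent SPO analyses.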
Through a script created in Matlab that allows parallel processing, N_LHS = 500 building realizations were created in Tremuri, followed by an SPO analysis for each realization using the same load pattern (analysis ID: 23) as for the mean-parameter model. The computed capacity curves are presented in Fig. 16. Next, using the capacity curves as input, the median and fractile SPO2IDA curves were generated (Fig. 17) to estimate the variability, which was found to be β_U,PSDM = 0.16 using Eq. (7).

Case study results and discussion
This section starts with assessing the accuracy of the employed equivalent frame model by comparing the analysis results in terms of spectral accelerations to those stemming from the more sophisticated applied element model used in Karbassi and Lestuzzi (2014). For the comparison of the results from each seismic fatality risk assessment route (Table 3, Fig. 3), the accuracy of the loss evaluations is presented and discussed next. The loss curve stemming from the IDA-based risk evaluation on the mean-parameter model via the numerical solution of the PEER PBEE integral (Route N3) serves as the benchmark used to evaluate the accuracy of the remaining routes. Furthermore, the computational cost associated with each route is documented to facilitate a systematic presentation of the findings. Finally, the unit casualty risk obtained through the numerical integration of the PEER PBEE integral, UCR_num, is compared to UCR_SIA estimated via the SIA 269/8 compliance-factor approach. Karbassi and Lestuzzi (2014) performed a comprehensive structural assessment of the same case study building investigated in this study. However, their modeling tools, risk metrics and definition of fragility functions differ from ours. Nevertheless, our results in terms of median accelerations at collapse were found to deviate only marginally from theirs. To be precise, they performed 74 nonlinear dynamic analyses using a sophisticated applied element model and found a median spectral acceleration at collapse of 1.44 g. In our study, the IDA with 41 ground motions (selected from the same set as in Karbassi and Lestuzzi (2014)) of an equivalent frame model created in Tremuri delivered a median spectral acceleration at collapse of 1.42 g, showing satisfactory agreement with the applied element model.
As our fragility functions are EDP-based, contrary to the damage states employed in Karbassi and Lestuzzi (2014), a direct comparison for limit states other than collapse is not possible. Still, assuming their damage state DS4 definition corresponds to the EDP at the near-collapse damage state in this study, the median spectral acceleration at the near-collapse damage state was 1.33 g in Karbassi and Lestuzzi (2014), while it was found to be 1.13 g in our case study.
The regression coefficients of the interim models PSDM, PDM and PLM are presented in Table 7. While both the closed-form and the numerical solution share identical interim models, they deviate in calculating the conditional dependence of the decision variable. The PSDMs created through IDA approximately follow the "equal displacement rule" (B ≈ 1, noting that the fundamental vibration period of the case study building model is T_1 = 0.243 s), justifying a single-line regression fit combined with our narrow range of interest (fatalities exceeding one). For the collapse limit state, the IDA approximation via SPO2IDA was found to perform reasonably well, estimating a median spectral acceleration at collapse of 1.25 g, while the IDA-based assessment yielded 1.42 g for the same limit state. However, when lower EDP levels were considered, SPO2IDA overestimated the capacity of the case study building, notably in the 84% fractile curve of EDP given IM (Fig. 11). As the selected decision variable n_fatal mainly emerges at higher EDPs between near-collapse and collapse, the narrow fitting range for the PSDM is assumed to suffice for our application.
The median spectral acceleration associated with the number of fatalities exceeding one was found to be 1.17 g and 1.29 g for SPO-and IDA-based PSDM, respectively (Fig. 18a,  b).
For both the SPO- and IDA-based assessment techniques, the epistemic uncertainty related to the material, modeling and failure parameters affected the PSDM only marginally, meaning that applying the framework to a mean-parameter building model would suffice in our case study. The slight contribution of the epistemic uncertainty to the interim models, illustrated in Fig. 18c and d, is hardly distinguishable from their counterparts resulting from a mean-parameter model in Fig. 18a and b, respectively. As addressed by Rota et al. (2010), among others, the random nature of the ground motions represents the most significant uncertainty source. Nevertheless, it is worth mentioning that the epistemic uncertainty was only addressed on the median SPO2IDA approximations in a simplified manner. Still, given the high record-to-record variability, the epistemic uncertainty is not likely to have a dominant effect on the overall uncertainty. Equivalent to the closed-form solution given in Eqs. (12) and (13), the graphical representation in Fig. 18 provides transparency in the seismic fatality risk evaluation process, where the user can clearly identify the effect of each risk component on the decision variable, including the uncertainties associated with the interim probabilistic models. Starting from the PSDM in the right upper pane, Fig. 18 illustrates the estimation of the median relationship between the DV (number of fatalities) and the IM (spectral pseudo-acceleration at the fundamental period of the structure) by tracing the arrows from the right upper pane (Quadrant I) to the decision fragility in the left upper pane (Quadrant IV).
One of the important aspects of the results is the accuracy of the loss curves obtained using the different routes (Table 3, Fig. 3), as the seismic retrofit decisions are based on the fatality risk. To this end, the loss curves (decision hazard curves) representing the mean annual exceedance rate of fatalities, solved via the "exact" numerical and the closed-form solutions, are shown in Fig. 19 (loss curves for the YVR14 building computed using the routes listed in Table 3 and shown in Fig. 3). For the numerical solution, Eq. (1) is employed, while for the closed-form solution, the decision fragilities from Quadrant IV of the graphical framework are combined with the hazard curve. The computed mean annual rates of exceeding one fatality (n_fatal ≥ 1) and a comparison to the benchmark route N3 are illustrated in Fig. 19 and listed in the right-most column of Table 8. In addition, the following is observed:
• The epistemic uncertainty did not significantly contribute to the seismic fatality risk.
• The IDA-based fatality risk in closed form (C3, C4) yields approximately 40% higher values than that computed by numerical integration (N3, N4).
• The deviation between the closed-form (C1, C2) and numerical integration (N1, N2) of the SPO-based routes is only 10%, with the closed-form solutions being slightly more conservative.
• If only the closed-form solutions are compared, the SPO-based loss curves (C1, C2) are approximately 30% less conservative than the IDA-based loss curves (C3, C4).
• Compared to their IDA-based counterparts (N3, N4), the SPO-based numerical assessments (N1, N2) are less conservative.
• The mean annual exceedance rate of one fatality is within the same order of magnitude for each of the eight routes.
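The two solution strategies compared above can be contrasted in a short sketch. For a lognormal decision fragility with median η and dispersion β, and a hazard curve fitted as λ(IM) = k0·IM^(-k), the closed-form mean annual rate of exceedance is k0·η^(-k)·exp(k²β²/2), which the snippet checks against direct numerical integration of Eq. (1). All parameter values are hypothetical, not the fitted values of the case study.

```python
import math
import numpy as np

# Mean annual rate of exceeding one fatality via (i) numerical integration
# of the PEER PBEE integral and (ii) the closed-form solution valid for a
# lognormal decision fragility and a power-law hazard fit. All parameter
# values are HYPOTHETICAL, not the fitted values of the case study.
norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

eta, beta = 1.2, 0.45        # fragility median (g) and dispersion
k0, k = 1.2e-5, 2.8          # hazard fit: lam(IM) = k0 * IM**(-k)

im = np.linspace(0.01, 8.0, 4000)
frag = norm_cdf(np.log(im / eta) / beta)     # P(n_fatal >= 1 | IM)
haz = k0 * im ** (-k)                        # annual exceedance rate of IM

# lam(DV) = integral of P(DV | IM) * |d lam(IM) / d IM| dIM  (trapezoidal)
integrand = frag * np.abs(np.gradient(haz, im))
lam_num = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(im)))

# Closed form: hazard at the fragility median, amplified for the dispersion
lam_cf = k0 * eta ** (-k) * math.exp(0.5 * (k * beta) ** 2)

print(f"numerical: {lam_num:.3e}  closed-form: {lam_cf:.3e}")
```

With a single power-law hazard fit the two agree closely; the 10-40% deviations reported above stem from the bin-wise hazard fit and the nonlinearity of the actual hazard curve.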
As already mentioned, for the closed-form solution in this study, the hazard curve was fitted using intensity measure bins to capture its nonlinearity, at the cost of a large vector of fitting parameters k and k0, one pair per intensity measure bin, as shown in Figs. 5 and 6. To test the validity of the alternative, more practical fit with constant k and k0 parameters, the deaggregation of the fatality risk delivers valuable insight into the contribution of each hazard level to the fatality risk. Once the numerical integration is performed, Eq. (17), adapted from Eq. (1) and Eads et al. (2013), is utilized to illustrate the risk deaggregation contributing to the rate of fatalities exceeding one, n_fatal ≥ 1. For Basel, the spectral acceleration associated with the 2475-year return period is 0.43 g, which contributes only 2.3% to the total risk of fatalities exceeding one, confirming the observation that significantly longer return periods are required to cause structural damage in countries with low to moderate seismicity. At the same time, this finding justifies our bin-based hazard curve fitting instead of bracketing the intensity measures to the code-typical return periods between 475 and 2475 years. Table 8 summarizes the computing time required to create the PSDMs for the various framework routes, benefiting from parallel processing to accelerate the structural assessment, for which an illustrative scheme is presented in the appendix. The duration of a single SPO analysis may seem somewhat lengthy; it is caused by the choice of analysis parameters, mainly the low error tolerance desired for the case study building. For the C2 and N2 routes, the required SPO analyses benefit from high load-balancing efficiency, as each building realization is created a priori, allowing each of the N_LHS realizations to be assigned to an available CPU core immediately.
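The bin-wise fitting described above can be illustrated with a short sketch: a curved (in log-log space) hazard curve is fitted once globally and once per intensity measure bin, each fit of the form λ(IM) = k0·IM^(-k). The hazard curve below is synthetic; the real SUIhaz2015-based curve and its bin fits are shown in Figs. 5 and 6.

```python
import numpy as np

# Bin-wise power-law fit lam(IM) = k0 * IM**(-k) of a hazard curve, versus
# a single global (k0, k) pair. The hazard curve is SYNTHETIC, built with
# deliberate curvature in log-log space to mimic a real hazard curve.
im = np.geomspace(0.05, 2.0, 60)
lam = 2e-4 * im ** -(1.8 + 0.6 * np.log(im / 0.05))   # curved in log-log

def power_fit(x, y):
    """Least-squares fit of ln y = ln k0 - k * ln x; returns (k0, k)."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(intercept), -slope

k0_g, k_g = power_fit(im, lam)                        # one pair, whole range
bins = np.array_split(np.arange(im.size), 6)          # six IM bins
bin_fits = [power_fit(im[i], lam[i]) for i in bins]   # one (k0, k) per bin

err_glob = np.max(np.abs(k0_g * im ** (-k_g) / lam - 1.0))
err_bins = max(np.max(np.abs(k0b * im[i] ** (-kb) / lam[i] - 1.0))
               for (k0b, kb), i in zip(bin_fits, bins))
print(f"max misfit: global {err_glob:.1%}, per-bin {err_bins:.1%}")
```

The per-bin misfit is far smaller than that of the global fit, which is why the bin-based fitting was preferred despite the larger parameter vector.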
Depending on the number of timesteps of the ground motion accelerogram and the applied scaling factor, a single nonlinear dynamic run took between 20 min and 3 h, with an average duration of 1 h over all dynamic runs. The total duration is further increased by 35% due to the inefficient load balancing between the CPU cores. An equivalent confidence level can likely be achieved with fewer ground motions, meaning the total duration of a multi-record IDA study could be reduced further, perhaps to an overnight analysis. Still, the increase in speed achieved via parallel processing is remarkable.
IDA-based routes (C3 and N3) appear to be the most efficient, assuming one can benefit from parallel processing. However, despite the good agreement between the loss curves, the overestimated capacity at close-to-origin EDP levels within routes C1 and C2 seems problematic if one considers property loss in monetary units on top of the fatality risk for retrofit decision-making, as such a consideration requires EDP levels before the near-collapse limit state.
In contrast to the final product of the graphical framework presented herein, the rate of fatalities exceeding one, SIA 269/8 uses the unit casualty risk UCR_SIA as its risk metric. Furthermore, SIA 269/8 facilitates seismic retrofit decisions by mapping the computed compliance factor α_eff to UCR_SIA (Fig. 1). To evaluate the capability of the compliance factor to estimate the unit casualty risk for the case study building, a comparison between UCR_SIA and the numerically derived unit casualty risk UCR_num is made using Eq. (18). To this end, fatality rates ranging from 0.02 (dm_nc) to 0.10 (dm_c) have been assigned to the EDPs (taken from route N3) between the damage states near-collapse and collapse, respectively, followed by their integration over the entire range of IM.
A preliminary analysis estimating the associated unit casualty risk UCR_SIA of the case study building is presented next. First, the seismic hazard is defined assuming seismic zone 3b and soil class A according to SIA 261 (SIA 2020) to reflect the seismic microzonation information of the location as closely as possible. The SPO analysis and the resulting capacity curve were truncated at the first attainment of 0.4% maximum interstory drift, regardless of the URM wall failure mechanism (SIA 2017). Then, an N2 method-based (Fajfar 1999) SPO analysis found the compliance factor to be higher than 1.0 (specifically, the force-based compliance factor was 1.15 and the displacement-based compliance factor was 3.05), meaning that the case study building complies with the SIA codes for new structures. According to SIA 269/8, the associated unit casualty risk UCR_SIA for a code-compliant building is 10^-6, while the numerically derived value UCR_num has been found to be six times higher (Fig. 20). Conversely, if the comparison is made through the compliance factors, the numerically derived unit casualty risk of 6.44·10^-6 corresponds to a compliance factor of 0.34 according to the SIA 269/8 risk curve (Fig. 20), even though the case study building has a compliance factor higher than 1.0.
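The numerical construction of UCR_num described above can be sketched as follows: a fatality rate of 0.02 between the near-collapse and collapse damage states and 0.10 beyond collapse is weighted by the corresponding damage-state probabilities and integrated against the hazard curve. Every parameter value below (damage-state medians, dispersion, hazard fit) is a hypothetical placeholder for the route-N3 quantities, not a value from the paper.

```python
import math
import numpy as np

# Sketch of a numerically derived unit casualty risk UCR_num: fatality
# rates of 0.02 (near-collapse, dm_nc) and 0.10 (collapse, dm_c) weighted
# by damage-state probabilities and integrated over IM. All parameter
# values are HYPOTHETICAL placeholders for the route-N3 quantities.
norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

med_nc, med_c, beta = 0.9, 1.4, 0.45   # damage-state medians (g), dispersion
k0, k = 1.0e-5, 2.6                    # hazard fit: lam(IM) = k0 * IM**(-k)

im = np.linspace(0.01, 8.0, 4000)
p_nc = norm_cdf(np.log(im / med_nc) / beta)   # P(DM >= near-collapse | IM)
p_c = norm_cdf(np.log(im / med_c) / beta)     # P(DM >= collapse | IM)

# Expected fatality rate given IM: 0.02 between NC and collapse, 0.10 beyond
fat_rate = 0.02 * (p_nc - p_c) + 0.10 * p_c

dlam = np.abs(np.gradient(k0 * im ** (-k), im))
integrand = fat_rate * dlam
ucr_num = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(im)))
print(f"UCR_num ~ {ucr_num:.2e} per year")
```

The result is an annual probability of death for an individual permanently occupying the building, i.e., the same quantity that SIA 269/8 tabulates as UCR_SIA.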

Conclusions
The Swiss code SIA 269/8 for the seismic evaluation of existing structures prescribes a methodology using the compliance factor α_eff, defined as the ratio of the seismic capacity of an existing structure to the safety requirement for a hypothetical identical new structure. SIA 269/8 links the compliance factor to the annual probability of death of an individual who, statistically seen, occupies a structure at all times, defined as the unit casualty risk UCR_SIA. This metric decides whether a seismic retrofit is prescribed to mitigate the associated casualty risk. Despite the practicality of the compliance-based approach, its capability as a risk metric is questionable, as the contributions of the seismic hazard, structural performance and building occupancy are merged into a single quantity, conflicting with the multi-component nature of seismic risk. Furthermore, the shape of the risk curve in Fig. 1 is based on fitting compliance factor-unit casualty risk data pairs stemming from a macroseismic approach, raising doubts about its application to various building types. Indeed, the case study presented herein shows that the SIA 269/8 compliance factor is not capable of representing the unit casualty risk for an individual occupying a building at all times: the associated UCR_SIA has been found to underestimate the numerically computed unit casualty risk UCR_num by at least a factor of six. Even assuming that the SIA 269/8 approach could precisely assess the unit casualty risk for any structure type using the compliance factor alone, a further step would be required to assess the collective risk considering the actual building population.
As an alternative to the compliance-based approach in SIA 269/8, this paper has presented a novel graphical framework for the fatality-based seismic risk assessment of existing buildings based on the PEER PBEE equation [Eq. (1)] without the need for numerical integration. Under the assumptions of lognormality of the interim probabilistic models and constant dispersion (homoskedasticity), the uncertainties associated with the probabilistic demand, damage and loss models are reflected in the decision variable, the number of fatalities n_fatal, which can be used to make seismic retrofit decisions. The novelties of the presented graphical framework include: the transparency allowing an engineer to monitor the contribution of different sources of risk to the computed fatality risk; the practicability in terms of compatibility with the SIA 269/8 occupancy model; and the flexibility regarding the interchangeability of the probabilistic interim models. The following conclusions are drawn on the basis of this study:
• The proposed graphical framework to evaluate the seismic fatality risk posed by an existing structure is transparent. It separates the sources of seismic fatality risk (seismic hazard, structural performance and occupancy) and clearly shows how much each contributes to the seismic fatality risk.
• The proposed framework is practical. We investigated eight different routes to implement it and showed that the IDA-based risk evaluation on the mean-parameter model via the closed-form solution (Route C3) is a good compromise between the effort invested in modeling and calculation and the obtained accuracy when compared to its counterpart via the numerical solution (Route N3). Thus, we recommend Route C3 for practical applications, specifically in combination with parallel processing. However, in cases where running dynamic analyses is not feasible, the SPO-based application on the mean-parameter model via the closed-form solution (Route C1) still promises decent seismic fatality risk estimates, as shown for the case study building.
• The probabilistic models incorporated into the framework are compatible with the current SIA codes. To be precise, the seismic hazard for any location in Switzerland is directly implemented from the SUIhaz2015 model, upon which the response spectra in SIA 261 (SIA 2020) rest, eliminating the need for the zone-specific SIA 261 response spectra. The damage and loss models, employing fatality rates and building population, respectively, are founded on SIA 269/8 and the pioneering studies behind it. However, the interim models are easily replaceable with other models thanks to the plug & play nature of the proposed framework.
• The plug & play nature of the proposed framework allows the treatment of the interim models at various complexity levels (e.g., SPO vs. IDA, mean-parameter vs. epistemic uncertainty), yielding a smaller fatality risk as a reward for taking the most comprehensive framework route.
• The SPO-based PSDM needs to be interpreted cautiously, as only a narrow EDP range showed good agreement with the IDA-based demand model. Still, the resulting loss curves representing the mean annual exceedance rate of fatalities are within the same order of magnitude for each framework route. Further work needs to be conducted on the definition of the most-damaging SPO load pattern and the applicability of SPO2IDA to unreinforced masonry structures to approximately mimic their dynamic behavior. A preliminary study regarding the selection of the "worst-case" load pattern has shown that the load pattern associated with the highest code demand and the lowest displacement capacity is the best candidate among the considered load patterns to mimic the dynamic behavior of the case study building in regions close to collapse. An adaptive pushover load pattern, falling outside the scope of this study, may deliver more accurate results regarding the approximate dynamic behavior. Furthermore, SPO2IDA has mainly been applied to reinforced concrete and steel structures.
Hence, its applicability to unreinforced masonry buildings requires more research.
• The epistemic uncertainty associated with the material, modeling and failure parameters affected the demand model and the decision variable only marginally, meaning that the record-to-record variability due to the random nature of ground motions represented the most significant source of uncertainty for the case study building.
• SIA 269/8 sets a significant milestone by stepping from the customary code concept to risk-based procedures. However, compared to the fatality risk obtained through the numerical integration of the PEER PBEE equation, the compliance-based SIA 269/8 approach has been found to underestimate the risk by at least a factor of six for the case study building. Thus, the proposed framework is an improvement over the SIA 269/8 approach.
• When coupled with SPO analysis, the application of the proposed graphical tool does not require more effort than the compliance-based SIA 269/8 approach and promises, in addition, a better understanding of the fatality risk components and the uncertainty surrounding them. It therefore has the potential to reach broad acceptance among practicing engineers without sacrificing the practical graphical nature of the SIA 269/8 approach.
• Even assuming that the SIA 269/8 approach were fully capable of assessing the unit casualty risk for any structure type using the compliance factor alone, subsequent decisions regarding the necessity of a seismic retrofit are taken based on the risk exposure over the remaining service life of the structure, whose declaration may create legal loopholes open to abuse. Legally seen, following a performance assessment, a building owner or decision-maker might be tempted to declare a shorter remaining service life than anticipated in order to bypass a retrofit, since the declaration of the remaining service life has no legally binding effect.
Alternatively, the decision of whether a seismic retrofit is prescribed to mitigate the fatality risk can be reflected in a performance objective, such as the acceptable annual exceedance rate of a given number of fatalities. Still, the concept of remaining service life is needed to address the financial extent of the seismic retrofit in terms of cost/benefit considerations.
Let us assume a hunt & fill-based IDA scenario where eight ground motions (N_GM = 8), each scaled seven times (N_RPR = 7), are employed. For the sake of simplicity, every dynamic run associated with a given ground motion is assumed to take the same amount of time, regardless of the applied scaling factor. The total number of single dynamic runs amounts to 56, requiring a total CPU time of 126 h if handled on a single CPU core one after another (Fig. 21). However, employing the most straightforward partitioning technique based on the hunt & fill algorithm, with single records on each of the four CPU cores, drastically reduces the time required to run a multi-record IDA (Fig. 22). To be precise, CPU1 finishes the first single-record (GM1) IDA with seven dynamic runs after three and a half hours and is ready to pick up the next ground motion, GM5. After 21 h, CPU1 remains idle until the completion of the multi-record IDA, as the remaining records are assigned to CPU2, CPU3 and CPU4. Such inefficiency can be minimized by assigning dynamic runs at the single-analysis level, but this requires communication between the CPU cores. Even with the demonstrated most straightforward partitioning, parallel processing accelerates the assessment procedure by a factor of three (126 h / 42 h) for the presented hypothetical set of ground motions.
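The partitioning arithmetic above can be reproduced in a few lines. The per-record run times below are hypothetical placeholders (the actual values are shown in Figs. 21 and 22), chosen only so that the quoted totals are matched: 126 h serially, 42 h in parallel, and CPU1 idle from hour 21.

```python
# Wall-clock estimate for record-level partitioning of a multi-record IDA
# over four CPU cores. The per-run durations (hours) are HYPOTHETICAL
# placeholders, chosen only to reproduce the quoted totals: 126 h serial,
# 42 h parallel, CPU1 (records GM1 and GM5) idle from 21 h onward.
N_RPR = 7                                   # dynamic runs per record
N_CORES = 4
run_time = [0.5, 2.5, 3.0, 3.5,             # GM1..GM4, hours per single run
            2.5, 1.5, 2.0, 2.5]             # GM5..GM8

serial = N_RPR * sum(run_time)              # one core, runs back to back

# Core i processes whole records i, i + 4, ... one after another; no
# communication between the cores is required for this static scheme.
core_busy = [N_RPR * sum(run_time[i::N_CORES]) for i in range(N_CORES)]
wall_clock = max(core_busy)

print(f"serial {serial:.0f} h, parallel {wall_clock:.0f} h, "
      f"speed-up {serial / wall_clock:.0f}x")
```

Because whole records are assigned to cores, the makespan is set by the slowest core; run-level scheduling would shorten it further at the cost of inter-core communication.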

Conflict of interest
The authors certify that they have no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.