The proposed methodology aims to determine optimal protective strategies by integrating four elements: parameterizations of the various protective measures, physics-based storm surge models that evaluate these strategies, estimates of the damage caused by storm-induced flooding, and a quantification of the success of each protective strategy. Input from professional stakeholders and members of at-risk communities is incorporated throughout so that the resulting optimal solution is realistic.
The conceptual layout of the proposed methodological framework is depicted in Fig. 3. Its four basic components are:
1. New protective strategy: Each iteration starts with the formulation of a new protective strategy, which is based on the evaluation of previous protective strategies as well as random perturbations. A protective strategy may consist of multiple protective measures implemented at different geographic (spatial) locations and at different times.
2. Simulated flooding: The extent of the storm-induced flooding over the area under consideration is estimated, accounting for the various protective measures of the corresponding protective strategy. This is accomplished using the available simulation tools (the computationally inexpensive GIS-based model or the more accurate GeoClaw model).
3. Flooding damage assessment: Using the extent of the storm-induced flooding over the area under consideration, the resulting damage/loss is estimated.
4. Suitability of protective strategy: Using the estimated damage/loss for the current iteration and the corresponding damage/loss of previous iterations, the relative suitability or effectiveness of the current protective strategy is assessed with respect to those of previous iterations. The cost of implementing the protective strategies and stakeholder feedback are taken into account.
Key aspects of the proposed methodological framework are described in further detail in the following.
Parameterization of protective strategies
Each protective strategy at a specific iteration will include a set of different protective measures that will be appropriately parameterized. There are various possible protective measures such as large-scale storm barriers, seawalls, levees, artificial islands and reefs, restoration of wetlands, sand dunes, sealing of individual components of the infrastructures (e.g., installing water-tight door/hatches at subway station entrances and ventilation openings), raising individual components of the infrastructure to larger heights, relocation of individual components of the infrastructure or of entire communities to higher ground away from their current location, etc.
For example, for the case of a seawall, the corresponding parameterization includes: (1) the location and length of the seawall and (2) its height. The parameterization allows for spatially and temporally varying location, length, and height of the seawalls. The temporal dependence allows the seawall to be built to a certain height at a certain time and for its height to be increased at a later time if needed. The construction cost, including labor wages, will be provided by stakeholders and informed by previous similar projects. A critical consideration when building the original wall will be to design it so that it can carry the additional weight resulting from a future increase in its height. Parameterizations can be developed in a similar manner for all the other protective measures considered.
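As an illustration, a seawall parameterization along these lines could be represented as in the following sketch; the class and field names are illustrative assumptions, not the data structures actually used in the framework.

```python
# A minimal sketch of how a single protective measure (a seawall) could be
# parameterized with spatial and temporal dependence.  All names are
# hypothetical and intended only to illustrate the idea.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class HeightIncrease:
    """A planned raising of the wall at a later time."""
    year: int                 # year within the planning horizon when the raise is completed
    added_height_ft: float    # additional crest height added at that time


@dataclass
class SeawallSegment:
    """One seawall segment within a protective strategy."""
    alignment: List[Tuple[float, float]]   # (lon, lat) polyline giving location and length
    initial_height_ft: float
    construction_year: int
    staged_raises: List[HeightIncrease] = field(default_factory=list)

    def height_at(self, year: int) -> float:
        """Crest height in a given year (zero before the wall exists)."""
        if year < self.construction_year:
            return 0.0
        h = self.initial_height_ft
        for planned in self.staged_raises:
            if year >= planned.year:
                h += planned.added_height_ft
        return h
```

An entire protective strategy would then simply be a collection of such parameterized measures, each with its own spatial footprint and construction schedule.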
Storm-induced flood models
Previous approaches to calculating storm-induced flood damage have relied either on a computationally inexpensive model capable of performing a large number of calculations but with limited accuracy (lacking a robust and complete description of the flood), or on a highly accurate model that, because of its computational cost, can only be used for a limited number of calculations. One of the main innovations of the proposed methodology is the combination of both types of model: the parameter space is first narrowed down using the computationally inexpensive model, and the more accurate but more expensive model is then used to determine the final solution.
In the proposed methodological framework, the computationally inexpensive model is the GIS-based flood model GISSR (GIS-based Subdivision-Redistribution Simulation; Miura et al. 2021) described in Sect. 3.2.2. The computationally expensive but more accurate model is GeoClaw, which solves the shallow water equations to provide a dynamic simulation of a body of water's response to a storm. It is described in Sect. 3.2.3.
Future storms and sea level rise
Both models require a series of inputs. The first input is an ensemble of storms, over a prescribed time frame, against which each protective strategy will be tested. One major challenge in analyzing the susceptibility of New York City's ICI to a weather-related exogenous event is that few such events have taken place. The ensemble of storms is therefore defined probabilistically, using the record of historical storms that have hit the New York City area as well as a synthetic ensemble of physically realizable storms, as was done in Lin et al. (2012). A specific maximum storm surge height can be simulated for each storm using a probabilistic model based on the historical data. The simulation for the proposed optimization methodology uses a modified beta distribution function for the future peak storm surge estimations in Lower Manhattan, NY. The details of this modified beta distribution model will be provided in an upcoming paper. The uncertainties in the occurrence and intensity of future storms over the prescribed period of time are addressed by simulating a large number of storm sequences over that period. Each storm sequence contains a different number of storms, with different occurrence times and corresponding intensities. At the end, a histogram of the overall damage/loss over the prescribed period of time can be established.
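The following sketch illustrates, under stated assumptions, how one such storm sequence could be generated: storm arrivals follow a Poisson process and peak surges are drawn from a bounded beta-type distribution. The annual rate, the support of the surge distribution, and the beta shape parameters below are placeholders only, since the modified beta model mentioned above will be detailed in an upcoming paper.

```python
# Hedged sketch of generating one synthetic N-year storm sequence.
import numpy as np

rng = np.random.default_rng(seed=0)

STORM_RATE_PER_YEAR = 0.3                     # assumed mean annual rate of damaging surge events
SURGE_LOWER_FT, SURGE_UPPER_FT = 3.0, 20.0    # assumed support of the peak surge distribution
BETA_A, BETA_B = 2.0, 5.0                     # placeholder shape parameters


def simulate_storm_sequence(n_years: int):
    """Return (occurrence_times, peak_surges) for one N-year realization."""
    n_storms = rng.poisson(STORM_RATE_PER_YEAR * n_years)
    times = np.sort(rng.uniform(0.0, n_years, size=n_storms))
    # Peak surge above the contemporaneous sea level; sea level rise (discussed next)
    # is added on top of these surges.
    surges = SURGE_LOWER_FT + (SURGE_UPPER_FT - SURGE_LOWER_FT) * rng.beta(
        BETA_A, BETA_B, size=n_storms)
    return times, surges
```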
The second input is sea level rise. Given a time frame for evaluation defined at the outset of the optimization process, the specific amount of sea level rise at a specific time in the future will be determined using a probabilistic model. This sea level rise will be combined with the height of a storm surge. For example, the sea level rise projections for New York City shown in Table 1 (Horton et al. 2015; Gornitz et al. 2019) can be added to the generated storm surge data. It should be noted that variations in sea level rise may alter the coastal configuration and the storm surges themselves, but this is beyond our current modeling capabilities.
Table 1 Sea level rise projections (2000–2004 is the sea level baseline)
GIS-based dynamic flood level models
The initial iterations in the optimization process in Fig. 3 will use the GIS-based system GISSR developed by the authors (Miura et al. 2021; Jacob et al. 2011). The GISSR system includes a very detailed digital elevation model (DEM) with a resolution of 1 ft, building footprints with essential building information [including height, area, value (asset), usage, and basement data] in New York City, complete description of transportation systems (above and below ground), etc. Layering these features together with flood height data will enable us to determine the overall height of water at the location of each and every infrastructure component in the city.
The simulation only needs topographical data, surge data (time history of water level along the coastline), and protective measures, if any, as inputs. The simulation can estimate the amount of flooding when the surge exceeds the height of the protective measures. Representative results from two different GIS systems regarding the flooding of the infrastructure systems above and below ground in New York City are shown in Figs. 4 and 5 (Jacob et al. 2011; Miura et al. 2021).
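For illustration only, the sketch below shows a highly simplified, "bathtub"-style comparison of a peak water level against a protection crest height and a digital elevation model. It is not the GISSR subdivision-redistribution algorithm, but it indicates the type of inputs (topography, surge, protective measures) the model operates on.

```python
# Toy illustration (not GISSR): flag flooded DEM cells for a given coastal water level.
from typing import Optional

import numpy as np


def flooded_depth(dem_ft: np.ndarray,
                  peak_water_level_ft: float,
                  protection_crest_ft: Optional[float] = None) -> np.ndarray:
    """Return an approximate water depth per DEM cell.

    If a protective measure is present and not overtopped, no water enters.
    Otherwise every cell below the water level is assumed flooded; hydraulic
    connectivity and volume redistribution are ignored in this sketch.
    """
    if protection_crest_ft is not None and peak_water_level_ft <= protection_crest_ft:
        return np.zeros_like(dem_ft)
    return np.maximum(peak_water_level_ft - dem_ft, 0.0)
```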
The GeoClaw model
Once the optimization process in Fig. 3 has sufficiently narrowed down the space of protective strategies using the fast GIS-based model GISSR, the GeoClaw model will be used to calculate time-dependent, storm-induced flooding with high accuracy. GeoClaw implements the finite volume, wave-propagation numerical methods described in LeVeque (2002) and is part of the Clawpack software suite (Mandli et al. 2016). An example of GeoClaw's capabilities is shown in Fig. 6.
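To indicate the class of methods involved, the toy sketch below advances the one-dimensional shallow water equations with a simple Lax-Friedrichs finite-volume update. It is not GeoClaw, which uses higher-order Godunov-type wave-propagation schemes, adaptive mesh refinement, and real bathymetry and storm forcing, but it illustrates the underlying finite-volume approach.

```python
# Toy 1D shallow-water solver (Lax-Friedrichs finite volume), for illustration only.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)


def swe_flux(h, hu):
    """Physical flux of the 1D shallow water equations."""
    u = np.divide(hu, h, out=np.zeros_like(h), where=h > 1e-8)
    return np.array([hu, hu * u + 0.5 * G * h ** 2])


def lax_friedrichs_step(h, hu, dx, cfl=0.45):
    """Advance (h, hu) by one explicit time step; return updated fields and dt."""
    u = np.divide(hu, h, out=np.zeros_like(h), where=h > 1e-8)
    max_speed = np.max(np.abs(u) + np.sqrt(G * np.maximum(h, 0.0)))
    dt = cfl * dx / max_speed
    F = swe_flux(h, hu)
    h_new, hu_new = h.copy(), hu.copy()
    for q, q_new, Fq in ((h, h_new, F[0]), (hu, hu_new, F[1])):
        # Lax-Friedrichs update on interior cells.
        q_new[1:-1] = 0.5 * (q[:-2] + q[2:]) - dt / (2.0 * dx) * (Fq[2:] - Fq[:-2])
    # Crude outflow boundaries: copy the neighboring interior cells.
    for q_new in (h_new, hu_new):
        q_new[0], q_new[-1] = q_new[1], q_new[-2]
    return h_new, hu_new, dt


# Dam-break-style initial condition: a surge-like step in water depth.
x = np.linspace(0.0, 1000.0, 401)
h = np.where(x < 500.0, 4.0, 1.0)
hu = np.zeros_like(h)
t = 0.0
while t < 30.0:
    h, hu, dt = lax_friedrichs_step(h, hu, dx=x[1] - x[0])
    t += dt
```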
The overall simulation model
The GISSR model and the GeoClaw model can each calculate the extent of flooding from a specific storm with varying levels of accuracy and computational efficiency. However, the overall methodological framework and the associated optimization scheme depicted in Fig. 3 do not depend on the outcome of a single specific storm. Instead, they depend on the outcome of the entirety of possible storms over a prescribed period of time. This is accomplished in the following way.
Assuming that the prescribed period of time under consideration is N years, one simulation consists of determining the number and occurrence times of all storms within these N years (a Poisson model can be used for this purpose). Then, a specific maximum storm surge height is simulated for each and every storm in this period of N years using the modified beta distribution model mentioned earlier. This simulation process over the period of N years is repeated a large number of times M (e.g., \(M=1000\)). For each of these M times, the number and specific times of occurrence of the storms within the N years will be different. The corresponding maximum storm surge heights will also be different. Eventually, statistics will be derived from the M simulations performed over the prescribed period of N years. For example, statistics of the overall damage/loss over the N years can be established (mean, standard deviation, and even a rough estimate of its PDF if M is large enough).
The following section describes how to compute the damage/loss for a specific storm event. Adding the damage/loss from all storms within the N years of one of the M simulations provides the corresponding overall damage/loss. This is repeated M times to determine M values of the overall damage/loss over N years, after which the computation of the corresponding statistics is straightforward. The resulting very large number of generated storms ensures that rare events with very large storm surges are captured.
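A schematic outline of this outer Monte Carlo loop is sketched below. Here `simulate_storm_sequence` stands in for the storm-sequence generator and `loss_for_storm` for the GISSR/GeoClaw flooding and damage chain; both are placeholders.

```python
# Hedged sketch of the M-repetition Monte Carlo loop over N-year storm sequences.
import numpy as np


def n_year_loss_statistics(n_years, m_simulations, simulate_storm_sequence, loss_for_storm):
    """Repeat the N-year storm-sequence simulation M times and summarize the losses."""
    totals = np.empty(m_simulations)
    for m in range(m_simulations):
        times, surges = simulate_storm_sequence(n_years)
        # Overall damage/loss for this realization: sum over every storm in the N years.
        totals[m] = sum(loss_for_storm(t, s) for t, s in zip(times, surges))
    return {
        "mean": totals.mean(),
        "std": totals.std(ddof=1),
        "samples": totals,  # can be binned into a histogram / rough PDF if M is large enough
    }
```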
Flooding damage assessment
Using appropriate fragilities (e.g., Hazus 2018), the loss for every component of the infrastructure can be computed (including structural damage, damage to contents, and loss of use) for a given storm with known water height at any location within the geographical area considered (using GISSR or GeoClaw). The losses are then added over all infrastructure components inside the geographical area considered to establish the overall loss from this specific event. The interconnections of different infrastructures will be considered. For example, a power failure can trigger interruptions in subway service, street traffic lights, communications (especially wireless), water supply to tall buildings, and many other systems. The interconnections can continue beyond this level. For example, the loss of street traffic lights can lead to traffic chaos preventing emergency vehicles from reaching their assigned destinations, the loss of wireless communications can negatively affect the coordination of emergency services, and so on.
This damage assessment procedure requires a detailed description of all components of the infrastructure in the geographic area under consideration. This information has to include the following for each component of the infrastructure: exact geographic location, critical elevations (elevations of openings from where water can enter and flood the structure), description of the structure (materials, form, use, etc.), fragility of the structure as a function of the height of water at its location, value of the structure and of its contents, per diem cost resulting from loss of use of the structure, etc. Note that here, cost is associated with the loss and subsequent repair of the infrastructure, not with the cost of the protective measures, which is accounted for separately. It should be pointed out that both above-ground and below-ground infrastructures are accounted for.
Damage functions for physical loss
The first step in flooding damage assessment translates water height to percent loss of a structure using damage functions/fragilities. Water heights at the location of every structure are computed using GISSR or GeoClaw. This step will also be informed by the stakeholder interviews so that first-hand knowledge of weaknesses, strengths, and interconnections of the infrastructure is included as accurately as possible.
Figure 7 shows typical damage functions providing damage percentage as a function of flood height for different types of structures. They are provided by Hazus, which was developed by the Federal Emergency Management Agency (FEMA) of the Department of Homeland Security. Damage functions are available for a variety of building classes, including various residential and commercial building types, utilities, factories, theaters, hospitals, nursing homes, churches, etc. It should be mentioned here that the Hazus damage functions/fragilities have been developed for the static type of flooding resulting from rainfall. The storm surge type of flooding considered in this study is somewhat different, as it involves wave action close to the coastline. This is accounted for through appropriate modifications of the Hazus damage functions. Another unique characteristic of this study, centered on New York City, is the prevalence of tall and very tall buildings. Appropriate damage functions will be developed for such buildings, which usually sustain flood damage that is only a small percentage of their overall value. Finally, the authors have developed damage functions/fragilities for the underground transportation system in New York City, including direct damage and loss of use. The total damage/loss in a target area \(C_{dmg}\) related to physical loss is computed from:
$$\begin{aligned} C_{dmg} = \sum _{i}^{N} a_i D_i(h_i) \end{aligned}$$
(1)
where N is the total number of buildings in the area, \(a_i\) is the total value/asset of building i, and \(D_i\) is the percentage of the total replacement cost associated with the flood height \(h_i\) observed at the location of building i. The flood height \(h_i\) is computed by subtracting the critical elevation of building i from the water surface elevation at its location.
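A direct transcription of Eq. (1) is sketched below; the building records and damage functions are illustrative placeholders for the GIS building database and the (modified) Hazus fragilities.

```python
# Hedged sketch of Eq. (1): total physical damage over all buildings in the area.
def physical_damage(buildings, water_surface_elevation_ft):
    """C_dmg = sum_i a_i * D_i(h_i).

    Each building record is assumed to carry:
      value_usd              -- total value/asset a_i,
      critical_elevation_ft  -- elevation at which water begins to enter,
      damage_function        -- callable mapping flood depth (ft) to a loss fraction D_i.
    """
    total = 0.0
    for b in buildings:
        # Flood height at the building: water surface elevation minus its critical elevation.
        h_i = max(water_surface_elevation_ft - b["critical_elevation_ft"], 0.0)
        total += b["value_usd"] * b["damage_function"](h_i)
    return total
```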
Economic loss assessment
Damage/loss due to suspended business operations during the restoration period will be considered if the building has commercial areas and did not collapse (buildings with over 50% damage are considered collapsed). Hazus (2018) provides damage functions for two types of economic loss: inventory loss and income loss. The total inventory loss \(C_{inv}\) is computed as:
$$\begin{aligned} C_{inv} = \sum _{i}^{N_{inv}} D_i(h_i) A_i(h_i) S_i B_i. \end{aligned}$$
(2)
where \(N_{inv}\) is the total number of commercial/industrial buildings/occupancies dealing with inventories, \(A_i\) is the floor area at and below the flood height \(h_i\), \(S_i\) denotes the annual gross sales for occupancy i, and \(B_i\) is the business inventory which is a percentage of gross annual sales. This applies to retail trade, wholesale trade, and industrial facilities. The total income loss \(C_{inc}\) is computed from:
$$\begin{aligned} C_{inc} = \sum _{i}^{N_{com}} (1-f_i) A_i(h_i) I_i d_i(h_i). \end{aligned}$$
(3)
where \(N_{com}\) is the total number of buildings/occupancies with commercial areas, \(f_i\) is the income recapture factor for occupancy i, \(I_i\) is the income per day for occupancy i, and \(d_i\) is the loss-of-function time for the business in days. When a storm destroys more than 50% of a building, the building is considered collapsed and will be demolished rather than repaired.
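For concreteness, Eqs. (2) and (3) can be transcribed as follows; the field names mirror the symbols defined above, and the occupancy records are illustrative placeholders for the Hazus and GIS data.

```python
# Hedged sketch of Eqs. (2) and (3).  Each occupancy record is assumed to carry:
#   D: damage function (flood height -> fraction),
#   A: flooded floor area at and below the flood height (callable of h),
#   S: annual gross sales,  B: business inventory fraction of gross sales,
#   f: income recapture factor,  I: income per day,
#   d: loss-of-function time in days (callable of h).
def inventory_loss(inventory_occupancies, h):
    """Eq. (2): C_inv = sum_i D_i(h_i) A_i(h_i) S_i B_i."""
    return sum(o["D"](h) * o["A"](h) * o["S"] * o["B"] for o in inventory_occupancies)


def income_loss(commercial_occupancies, h):
    """Eq. (3): C_inc = sum_i (1 - f_i) A_i(h_i) I_i d_i(h_i)."""
    return sum((1.0 - o["f"]) * o["A"](h) * o["I"] * o["d"](h)
               for o in commercial_occupancies)
```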
Overall damage assessment
Following the damage assessment methods described in Sects. 3.3.1 and 3.3.2, overall damage cost estimates can be computed as a function of flood height. The overall cost estimates include physical damage loss [Eq. (1)], inventory loss [Eq. (2)], and income loss [Eq. (3)]. For example, Fig. 8 displays representative damage cost estimates in Lower Manhattan for different levels of flood height using Eqs. (1) to (3).
Assessment of suitability of a specific protective strategy
The iterative process depicted in Fig. 3 is based on the assumption of a prescribed time horizon of N years (N can be 20, 50, 100 years or any other number). The first step is to calculate the overall losses over the N years from all possible storms during this period, without any protective strategy implemented. We denote these losses by \(L_{no}\) (losses are considered in a statistical sense in this section as \(L_{no}\) is computed M times from M different simulations over the period of N years).
The basic requirement for any protective strategy is that the sum of its implementation (construction) cost \(L_{co}\) and the overall losses \(L_{ps}\) incurred with the strategy in place be less than \(L_{no}\):
$$\begin{aligned} L_{co} + L_{ps} < L_{no}. \end{aligned}$$
(4)
If Eq. (4) is not satisfied for a specific protective strategy, then this strategy is unacceptable (since doing nothing has a lower overall cost).
During the iterative optimization process shown in Fig. 3, a large number of different protective strategies are considered (a new strategy at every iteration). If the sum of the implementation cost and overall losses of the protective strategy at iteration (i) is less than that of iteration (\(i-1\)), the protective strategy at iteration (i) becomes the temporary optimum. Otherwise, the protective strategy at iteration (\(i-1\)) remains the temporary optimum and a new protective strategy is tested against it. This procedure is expressed as:
$$\begin{aligned}&\hbox {If }(L_{co} + L_{ps})_{(i)} < (L_{co} + L_{ps})_{(i-1)}\hbox {, the protective strategy of iteration }(i)\hbox { becomes}\nonumber \\&\quad \hbox {the new temporary optimum and a new protective strategy is tested against it}. \end{aligned}$$
(5a)
$$\begin{aligned}&\hbox {If }(L_{co} + L_{ps})_{(i)} \ge (L_{co} + L_{ps})_{(i-1)}\hbox {, the protective strategy of iteration }(i-1)\hbox { remains}\nonumber \\&\quad \hbox {the temporary optimum, the strategy of iteration }(i)\hbox { is discarded, and a new protective}\nonumber \\&\quad \hbox {strategy is tested against that of iteration }(i-1). \end{aligned}$$
(5b)
The iterations continue until \((L_{co} + L_{ps})\) stabilizes without further reduction possible in subsequent iterations. It should be mentioned that as with \(L_{no}\), \(L_{ps}\) is considered in a statistical sense.
Stakeholder identification of pertinent metrics will be incorporated into the assessment through the social science component of the method. Knowledge from stakeholder interviews and community meetings will be included by adjusting the weighting of various components or adding new criteria as appropriate. For example, stakeholders might identify critical areas that should not flood and therefore should be weighted more heavily. Considering multiple time horizons will allow the evaluation of the long-term success of each protective strategy (Hancilar et al. 2014; Lopeman et al. 2015).
Finally, it is important to mention that the methodology includes an upper limit for the implementation cost of the optimal protective strategy:
$$\begin{aligned} L_{co} \le \text {upper limit of budget (dollars)} \end{aligned}$$
(6)
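The acceptance logic of Eqs. (4) to (6) can be summarized in the following sketch; `propose_new_strategy` and `evaluate_strategy` are placeholders for the perturbation step of Fig. 3 and the GISSR/GeoClaw loss computation, respectively.

```python
# Hedged sketch of the iterative acceptance logic of Eqs. (4)-(6).
def optimize(propose_new_strategy, evaluate_strategy, L_no, budget, max_iterations):
    """Iteratively search for a protective strategy minimizing L_co + L_ps.

    evaluate_strategy(strategy) is assumed to return (L_co, L_ps), with L_ps
    estimated statistically over the M simulated N-year storm sequences.
    """
    best_strategy, best_total = None, L_no  # "do nothing" is the initial benchmark, Eq. (4)
    for _ in range(max_iterations):
        candidate = propose_new_strategy(best_strategy)
        L_co, L_ps = evaluate_strategy(candidate)
        if L_co > budget:               # Eq. (6): implementation cost may not exceed the budget
            continue
        total = L_co + L_ps
        if total < best_total:          # Eq. (5a): candidate becomes the temporary optimum
            best_strategy, best_total = candidate, total
        # Eq. (5b): otherwise the candidate is discarded and the current optimum is kept
    return best_strategy, best_total
```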
Stakeholder interactions and feedback
As part of developing this method, local stakeholders who have technical and empirical first-hand knowledge about ICI in New York City have been interviewed to elicit their understandings and perceptions of how critical infrastructure is impacted by storm surges, and how these impacts are amplified through infrastructure interdependencies. Findings from these interviews provide information that goes beyond the strictly technical understanding that usually informs protective strategy planning. By engaging stakeholders and integrating their knowledge, adaptation options to protect coastal infrastructure will be attuned to the particular, contextual risks that transportation, the power grid, emergency services, and other components of the infrastructure face from storm surge hazards. Moreover, by examining how storm surge intersects with infrastructure, and how interdependencies operate within complex infrastructure systems, from multiple perspectives, a deeper understanding of this complexity is gained than any single perspective could provide. In this way, the optimization methodological framework will rely not only on GISSR and GeoClaw modeling but also on the wealth of experiential knowledge about infrastructure interdependencies held by local stakeholders, particularly their experiences with Hurricane Sandy, which devastated communities and the infrastructure systems on which they rely.
Complementing the stakeholder interviews, the method also includes participating in community meetings with local groups who are actively pursuing coastal community resilience activities. The community meetings will be an opportunity to reach out to selected communities with results of the optimization framework and assess it according to community needs and priorities.
Stakeholder interviews
Stakeholder interviews were conducted in the initial stages of the project so that the interview results could inform the optimization framework from the early stages of its development. The stakeholders interviewed were selected based on their relevance to the project and their connection to NYC's ICI, with additional stakeholders identified through recommendations from the initial interviewees following a purposive snowball sampling technique (Burawoy 1998). Following standard social science practice, interviews were recorded and transcribed for analysis. To ensure that the interviews yield insight relevant to the operation and goals of the optimization framework, the research team collaborated iteratively to create the interview protocol and the coding scheme used to guide analysis, thus including the computer modeling and infrastructure experts in the social science methodologies.
Our current implementation of the method includes interviews conducted in two phases. In phase one, at the beginning of the project, ten stakeholders were interviewed to elicit their mental models of storm surge impact on ICI in New York City. Mental models are causal beliefs about how the world works, including complex relationships within specific processes (Morgan et al. 2002; Jones et al. 2011; de Bruin and Bostrom 2013; Lazrus et al. 2016). To characterize stakeholders' mental models of storm surge impacts and critical infrastructure interdependencies, stakeholders were asked to describe storm surge risks to infrastructure in general, and the questioning was then progressively narrowed to follow up on and home in on specific elements of their mental models, following Lazrus et al. (2016) and Morss et al. (2015). Interviewees also viewed a sample of initial GeoClaw model simulations and provided feedback on their understanding and trust of the model information and its potential utility. In phase two, once initial modeling results informed by the phase one interviews became available, the same set of stakeholders was re-interviewed, this time to assess how well the optimization framework captures their perspectives and to identify key areas for improvement of the optimization methodology. The authors who developed the physical models also joined the second set of interviews and discussed the models and simulations directly with the stakeholders to obtain their feedback.
Initial findings from the interviews have already informed the development of the optimization framework, and they will continue to do so as the modeling and optimization are performed. First, they are informing the types of critical infrastructure and infrastructure interdependencies that we are including in the modeling. Second, among the stakeholders interviewed, there is a great diversity of familiarity with storm surge models, and thus some stakeholders are much more fluent in interpreting the model simulations than others. We have made adjustments to how the model is presented with this in mind. For example, in order to make the simulations more legible to this audience, we have noted the importance of increasing the geographic resolution of the model so that stakeholders can more easily identify key features of the New York area. One example is the Rockaway Peninsula, which is a key geographic feature of the New York landscape and thus is important to resolve for stakeholders to become oriented in the simulations.
Community meetings
Another component of the method is participating in community meetings in the final phases of the project, as an opportunity to share results of the optimization framework, learn where it may need to be adjusted to conform to community values, and explore social acceptability. For example, if the optimization framework points to coastal solutions that may hinder crucial subsistence, recreational, or other cultural activities, the research team will understand where flexibility and options need to be included in the framework. Communities in which these meetings will be held will be identified using the initial results of the optimization framework itself: where the risks of sea level rise are greatest and any existing protective/adaptation measures are inadequate to address them. Conversations during the community meetings will be carefully guided to manage expectations based on the optimization framework results and to empower local community members to make adaptive decisions that meet economic, social, and cultural priorities.
Optimization component
The optimization component of the iterative scheme shown in Fig. 3 involves the selection of the new protective strategy to be evaluated in the next iteration, as described in Sect. 3.4. This constitutes a significant challenge due in part to the scale and scope of the underlying problem, but primarily due to deep nonlinearities in the model and uncertain, noisy data. A large number of iterations is expected to be necessary to identify the optimal protective strategy. In the beginning, and for the majority of iterations, the computationally efficient GISSR model will be used. The GeoClaw model will be used at the end to provide high accuracy (but at a higher computational cost).
As discussed above, the optimization scheme includes a constraint: there is an upper limit of the budget available to implement the optimal protective strategy as indicated in Eq. (6). The optimal protective strategy will therefore be different with different budgetary considerations.