Background on the Waas case
To illustrate the outlined approach, we use a hypothetical case, called ‘the Waas’. The case is based on the Waal, a river reach in the Rhine delta of the Netherlands. The river and floodplain are highly schematized, but have realistic characteristics. The river is bounded by embankments, and the floodplain is separated into five dike rings. A large city is situated on higher grounds in the southeast part. The remaining area contains smaller villages, greenhouses, industry, conservation areas, and pastures. In the future, climate change and socio-economic developments may increase the pressure on the available space and the potential future damages, so actions are needed. We use an IAMM (for details, see Haasnoot et al. 2012) implemented in PCRaster (van Deursen 1995). The model was checked for internal consistency and plausibility of the outcomes by expert judgment.
The analysis takes into account uncertainties related to climate change, land use, system characteristics, and the effects of policy actions (Table 1). The effects of different climate change scenarios are considered through changes in river discharge (see Haasnoot et al. 2012 for details). Uncertainties in the cause-effect relations for the fragility of dikes and economic damage functions are taken into account by putting a bandwidth of plus and minus ten percent around the default values; for each experiment, we randomly pick a value in this interval and update the default values accordingly.
Table 1 Overview of the uncertainties
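The bandwidth sampling described above can be sketched as follows. This is a minimal illustration, not the actual IAMM code; the parameter names are hypothetical stand-ins for the dike fragility and damage-function defaults.

```python
import random

def perturb_defaults(defaults, bandwidth=0.1, rng=None):
    """Multiply each default value by a factor drawn uniformly from
    [1 - bandwidth, 1 + bandwidth], i.e. plus or minus ten percent.

    `defaults` maps parameter names to default values; the names used
    below are illustrative, not taken from the case study."""
    rng = rng or random.Random()
    return {name: value * rng.uniform(1 - bandwidth, 1 + bandwidth)
            for name, value in defaults.items()}

# Each computational experiment draws a fresh perturbation.
defaults = {"dike_fragility": 1.0, "damage_coefficient": 2.5}
sampled = perturb_defaults(defaults, rng=random.Random(42))
```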
Table 2 provides an overview of the 20 policy options that are explored. The actions include flood prevention measures, such as heightening the dikes, strengthening the dikes, and giving room for the river, and flood mitigation actions, such as upstream collaboration, evacuation preparation, alarms, additional embankments around cities, houses on stilts, etc. The actions can be combined into sequences: adaptation pathways. In order to govern the activation of the next action on a pathway, we use a simple rule-based system. Every five years, the results of the system in terms of casualties and economic damages are evaluated and classified into no event, small event, large event, and extreme event. We activate a new action if, in the previous five years, an event of the pre-specified level has been encountered. The choice of a step size of five years is motivated by runtime constraints and prior experience with this particular case.
Table 2 Overview of policy actions
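The rule-based activation mechanism can be sketched as follows. This is an illustrative sketch only: the classification thresholds are hypothetical, not the case study's actual values.

```python
# Event classes, ordered from least to most severe, as in the text.
SEVERITY = ["no event", "small event", "large event", "extreme event"]

def classify_event(casualties, damage):
    """Classify a five-year evaluation period into one of four event
    classes. The thresholds are illustrative assumptions."""
    if casualties == 0 and damage == 0:
        return "no event"
    if casualties < 10 and damage < 100:
        return "small event"
    if casualties < 100 and damage < 1000:
        return "large event"
    return "extreme event"

def rule_fires(observed_event, trigger_level):
    """A rule activates the next action on the pathway when the observed
    event is at least as severe as the rule's pre-specified level."""
    return SEVERITY.index(observed_event) >= SEVERITY.index(trigger_level)
```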
Formulating the optimization problem
We can now formulate the optimization problem that we are trying to solve.
Minimize \( F\left({l}_{p,r}\right)=\left({f}_{costs},\ {f}_{casualties},\ {f}_{damage}\right) \)

where \( \begin{array}{ll}{l}_{p,r}=\left[{p}_1,\ {p}_2,{p}_3,{r}_1,{r}_2\right]\hfill & \forall p\in P;\ \forall r\in R\hfill \\ {}{f}_i\left({y}_i\right)={\tilde{y}}_i\cdot \left( IQR\left({y}_i\right)+1\right)\hfill & \hfill \\ {}i\in \left\{ costs,\ casualties,\ damage\right\}\hfill & \hfill \end{array} \)
subject to \( \begin{array}{l}{c}_{damage}:{\tilde{y}}_{damage}\le 50000\hfill \\ {}{c}_{casualties}:{\tilde{y}}_{casualties}\le 1000\hfill \end{array} \)
where \( {l}_{p,r} \) denotes a policy pathway, \( {p}_m \) is a policy action, P is the set of policy actions as specified in Table 2, \( {r}_n \) is a rule, R is the set of rules as discussed in the previous section, \( {y}_i \) is the set of outcomes for outcome i across a set of scenarios, \( {\tilde{y}}_i \) is the median value for \( {y}_i \), and IQR is the interquartile range for \( {y}_i \). So, each of the three outcomes of interest (costs, casualties, and damages) is defined as the median value multiplied by the interquartile range plus one. That is, we try to simultaneously minimize the median outcome and the dispersion around the median. This is very similar to risk discounting (Rosenhead et al. 1973). The minimization problem is subject to two constraints. For flood damage, the median of the total flood damage over 100 years should not exceed 50,000 million Euro. For casualties, the median of the total number of casualties over 100 years should not exceed 1,000 people.
The robustness metric requires evaluating each candidate pathway using many computational experiments. We used the same set of experiments for all evaluations. If we were to regenerate experiments for every iteration of the algorithm, the performance of the candidate solutions could vary slightly from one generation to the next, introducing the additional problem of optimizing noisy performance indicators. We return to this point in the discussion. An analysis of 10 randomly generated pathways revealed that the robustness metric for the three indicators stabilized at around 150 experiments. Therefore, in what we present below, we use 150 experiments, generated using Latin Hypercube sampling.
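Latin Hypercube sampling can be illustrated with a minimal pure-Python sketch; in practice a library routine (e.g. from SciPy) would typically be used instead. The sketch generates points on the unit hypercube, which would then be rescaled to the uncertainty ranges of Table 1.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Generate a Latin Hypercube sample on the unit hypercube: each
    dimension is split into n_samples equal strata, and each stratum is
    used exactly once per dimension, in a random order."""
    rng = rng or random.Random()
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

# 150 experiments over an illustrative four-dimensional uncertainty space.
experiments = latin_hypercube(150, 4, rng=random.Random(1))
```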
We implemented the algorithm in Python (van Rossum 1995), using DEAP, a library for genetic algorithms (Fortin et al. 2012). We ran the optimization for 50 generations, with a population size of 50, a mutation rate of 0.05, and a crossover rate of 0.8. The assessment of convergence was based on tracking the changes to the set of non-dominated solutions over the generations. Runtime was roughly one week on a six-core Intel Xeon workstation with hyper-threading. To assess the robustness of the results to randomization, we ran the optimization problem three times using different random seeds. The resulting set of non-dominated solutions was identical across these three runs. The population size was established by experimenting with population sizes in the range 10–100. A population size of 50 appeared to offer a balance between computational time and the need to maintain diversity.
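The convergence assessment rests on Pareto dominance. DEAP provides this internally; the following is a minimal stand-alone illustration of how a set of non-dominated objective vectors can be extracted and tracked across generations, not the DEAP implementation itself.

```python
def dominates(a, b):
    """a dominates b when a is no worse on every objective and strictly
    better on at least one (all three objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(scores):
    """Filter a population's objective vectors down to the Pareto set;
    convergence can be assessed by comparing this set across generations."""
    return [s for s in scores
            if not any(dominates(other, s) for other in scores if other != s)]

# Illustrative (cost, casualties, damage) robustness scores.
front = non_dominated([(1, 2, 2), (2, 1, 1), (2, 2, 2)])
```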
Results
Figure 2 shows the resulting scores for the non-dominated pathways, giving insight into the trade-offs among the best solutions. In total, we identified 74 unique non-dominated pathways. However, a substantial number of these pathways have identical scores on the three criteria and hence do not show up clearly in the figure. Remember that we aim to minimize the three outcomes, so the lower the score, the better. As can be seen, low scores on flood damage robustness and casualty robustness co-occur with high scores on cost robustness. Conversely, high scores on both flood damage and casualties co-occur with low scores on cost robustness. There are, however, a few solutions that combine a relatively low score on cost robustness with modest scores on casualty and flood damage robustness. There is no strong trade-off between damage and casualties: high values on one correspond to high values on the other.
Figure 3 shows the resulting Adaptation Pathways Map and associated scorecard. To arrive at this figure, we took several steps. First, for several families of actions it is not possible to scale back; for example, it is not possible to lower a dike. Therefore, if a less severe action is preceded by a more severe action, the less severe action is ignored. Second, we analyzed the timing of the adaptation tipping points. This showed that some pathways contain actions that are hardly ever or never reached; we removed these actions from the pathways. Third, we ignored differences in adaptation tipping points arising from differences in rules. Fourth, we used the normalized values for the robustness scores, as shown in Fig. 2. These steps produced 11 unique pathways.
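The first post-processing step can be sketched as a small filter over a pathway. The action names, family grouping, and severity ranks below are hypothetical; the case study's actual coding of action families is in Table 2.

```python
def clean_pathway(pathway, family, rank):
    """Drop an action when a more (or equally) severe action of the same
    family occurs earlier in the pathway, since such actions cannot be
    scaled back (e.g. a dike cannot be lowered). `family` and `rank`
    are illustrative lookup tables."""
    cleaned, seen = [], {}  # seen: family -> highest severity so far
    for action in pathway:
        fam = family[action]
        if fam in seen and rank[action] <= seen[fam]:
            continue  # superseded by an earlier, more severe action
        seen[fam] = rank[action]
        cleaned.append(action)
    return cleaned

# Hypothetical example: a 1 m dike raise makes a later 0.5 m raise moot.
family = {"dike+1m": "dike", "dike+0.5m": "dike", "stilts": "mitigate"}
rank = {"dike+1m": 2, "dike+0.5m": 1, "stilts": 1}
result = clean_pathway(["dike+1m", "dike+0.5m", "stilts"], family, rank)
```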
It is worth noting that this set of pathways contains either very expensive solutions (the dike raising pathways) that are also very effective in reducing casualties and damage, or very cheap solutions with substantially higher casualties and damages. Moreover, we see that the resulting pathways contain at most two actions. We suggest that this is primarily due to the robustness metric. Recall that we searched for solutions that have both a low median value and a low interquartile distance. This implies that we have searched for solutions whose costs are quite similar across the set of 15 scenarios. For costs, this is a debatable way of defining robustness. A major argument for adaptation pathways is that they can potentially reduce costs. That is, the distribution of costs of an attractive flexible solution across a range of scenarios is skewed towards the lower side. This negative skewness is not considered by the robustness metric we used.