Introduction

Climate change poses several new challenges for traditional risk management. This is particularly the case for long-lived decisions with high stakes and high sunk (irreversible) costs, such as major infrastructure, building developments and spatial planning. In such cases, the decisions we make today can be highly sensitive to our assumptions about future risk. If long-term climate change is not accounted for appropriately, decisions made now could lock in greater costs, higher residual risks or wasted investments down the line.

Of course, in this type of decision, climate change is just one additional factor in what can already be a ‘complex’ decision problem, given the broad and long-term implications, the multiple stakeholders and objectives involved (economic, social, environmental), differing stakeholder values, intangible (non-monetary) impacts, value trade-offs, and multiple sources of risk and uncertainty (Keeney 1982).

In this paper, we focus on the implications of climate change for decisions concerning major infrastructure projects. We take the example of the Thames Estuary 2100 (TE2100) project and draw lessons for risk management more broadly. The Thames Estuary area was identified in the UK’s Foresight report on Future Flooding (Evans et al. 2004) as one where future flood risk is probably the most serious in Britain. The objective of the TE2100 project was to provide a plan to manage tidal flood risk in London and the Thames Estuary over the next 100 years. This example is relevant because it was one of the first major infrastructure projects to explicitly address the issue of deep uncertainty in climate projections throughout the planning process.

This paper is timely given the significant public and private investments in infrastructure and buildings that countries including the UK plan to make over the coming decade, to replace ageing infrastructure, create homes for expanding populations and to stimulate economic growth (HM Treasury 2011). The UK’s National Climate Change Risk Assessment (Defra 2012) identified several important risks related to infrastructure, including the UK’s flood protection systems, buildings, roads and railways, water supply infrastructure, drainage systems and energy infrastructure. Several other studies have highlighted the need to account for climate change in such decisions today; for example, Wilby et al. (2011) highlight the potential risks to new nuclear sites (located on the coast), which have lifetimes in excess of 150 years and where the risks of a failure could be disastrous. This was also reflected in the UK’s National Infrastructure Plan in 2011 (HM Treasury 2011). From our analysis of the TE2100 project, we hope to draw lessons that are relevant to these types of major infrastructure investments, as well as more broadly.

We suggest that for major infrastructure, climate change poses two key challenges. The first is the ‘non-stationarity’ of risk: traditionally, risk management decisions have assumed that the climate is stationary and so have taken a backward-looking approach, using historical observations to estimate risk (Hallegatte 2009). But climate risks can no longer be assumed to be stationary. The first decade of the twenty-first century was the warmest in the instrumental record globally; since the middle of the twentieth century, global mean temperatures have risen at a rate of 0.13 °C per decade and sea levels by 7 mm per decade (Solomon et al. 2007). Looking forward, we are already committed to further warming and sea level rise (Solomon et al. 2009; Nicholls and Lowe 2004). Indeed, the Intergovernmental Panel on Climate Change (IPCC) concluded that, without action to reduce greenhouse gas (GHG) emissions, global average surface temperatures could reach around 2–7 °C above pre-industrial levels by the end of the century, with some land areas seeing considerably greater warming (Meehl et al. 2007). This is expected to have significant implications for many forms of climate risk, for example, altering the characteristics of climate extremes, such as storms, droughts and heavy rainfall.

Several authors have noted that the non-stationarity of climate risk calls for a new forward-looking, long-term paradigm in risk management (Hallegatte 2009). If long-term risks are not accounted for in decision-making today, the lifetime and value for money of many long-lived, climate-sensitive investments could decline. Where those investments affect the vulnerability of local people, such as water supply infrastructure, urban planning or sea defences, not considering climate change could mean putting people at greater risk. For example, if storm surge risk increased, then a flood defence designed for the current climate would need to be replaced or retrofitted before the end of its design lifetime to maintain its standard of protection. Fankhauser et al. (1999) report that for many long-lived infrastructure and planning decisions, it is cheaper and easier to account for long-term trends upfront in decisions today, rather than retrofitting later.

The second challenge is that the scale and direction of future changes in risk are often difficult and sometimes impossible to predict on many time-scales of interest. Yet common decision-making frameworks, such as cost-benefit analysis and expected utility analysis, assume that the future is known or has quantifiable uncertainty (HM Treasury 2003; Morgan et al. 1999). Haasnoot et al. (2013) describe how decision makers in many domains, including water management, have traditionally made this assumption and consequently developed static ‘optimal’ plans based on a single ‘most likely’ future; if the future turns out to be different from the hypothesised future, such plans are likely to fail.

In the case of climate change (as with many long-term trends), the uncertainty in risk grows at longer prediction lead times (Hawkins and Sutton 2009; Cox and Stephenson 2007). Beyond a decade (or less in some cases), uncertainties cannot be adequately quantified (Footnote 1) for some key real-world adaptation decisions, given the limitations in current general circulation models (GCMs) (Pielke et al. 2012; Oreskes et al. 2010; Knutti et al. 2010; Stainforth et al. 2007). The current generation of climate models is based on laws of physics and chemistry and has been shown to be able to replicate many aspects of the present-day climate. For example, the IPCC Fourth Assessment Report assessed the performance of multiple models against observations of the atmosphere, ocean and cryosphere (Randall et al. 2007). Generally, it is found that climate models can reproduce many aspects of the near present-day mean state and variability, and some aspects of large-scale climate change when driven by appropriate radiative forcings. Climate models have known limitations (Knutti et al. 2010), resulting from an incomplete understanding of all of the processes involved in the real climate, and from our inability to represent sufficient process understanding in computationally affordable models (Stainforth et al. 2007). For example, models perform less well at simulating small-scale features in some parts of the world (Randall et al. 2007). For instance, many climate models have underestimated the frequency of winter blocking events over western Europe, which is particularly important for credibly simulating the statistics of some types of impacts (Murphy et al. 2009). Further uncertainty is added when the projections are downscaled from the hundreds of kilometres used in current global models to the few tens of kilometres used in regional climate models (Wilby et al. 2009).

Many authors working in climate change adaptation now describe this situation as ‘deep’ uncertainty (Lempert et al. 2003; Groves et al. 2008a, b; Ranger et al. 2010; Oreskes et al. 2010), a term analogous to Knightian uncertainty (Knight 1921) or ambiguity (Gilboa et al. 2009). Gilboa et al. (2009) and HM Treasury (2003) show that traditional decision analysis tools like cost-benefit analysis (CBA) and multi-criteria decision analysis (MCDA) can give misleading results in such conditions. For example, Hall (2007) warns that improper consideration of uncertain climate information in planning could lead to maladaptation; that is, taking too much, too little or the wrong types of action. When dealing with major projects, like public infrastructure or urban planning, this becomes a serious problem, potentially exposing society to much greater risks, wasted investments or unnecessary retrofit costs (Ranger and Garbett-Shiels 2012).

Research over the next few years is likely to increase the spatial resolution of climate models, which should improve the representation of large-scale climate and the simulation of local-scale features. Additionally, model improvements are being targeted at better representing processes currently crudely treated in models, or processes that are not included at all, such as permafrost. But it is likely to be some time before the uncertainty in climate projections is fully quantified and/or narrowed, particularly for more difficult-to-model components like extremes (including storm surges). Thus, we argue that there is a need for further research on how to make good decisions with the existing information. This means drawing on the existing body of literature on decision-making under uncertainty (for example, see reviews by Gilboa et al. 2009 and Walker et al. 2013) and conducting new research to apply this to climate change adaptation. The TE2100 project provides a unique practical case study to contribute to this important area of research.

A secondary goal of this paper is to draw lessons on the needs from climate science and modelling to support adaptation. Over the past 10 years, a significant stream of investment has been committed to producing probabilistic climate projections, following on from the seminal study of Murphy et al. (2004). In the UK, a pertinent example was the UK Climate Projections 2009 (Murphy et al. 2009), which was the first to attempt to provide probabilistic projections of future climate to users. We compare this with the approach used by TE2100, which implies a somewhat different direction in climate research and was necessitated by the pragmatic need to cope with the known limitations in modelling extreme sea level changes. This discussion is timely given the current activities to gear up meteorological agencies to deliver “climate services” for adaptation, including, for example, the World Meteorological Organisation’s (WMO) ‘Global Framework for Climate Services’ (WMO 2012).

The following section introduces TE2100 and the specific issues it faced in dealing with uncertain climate projections. Sections 3, 4, 5 and 6 then explore four key innovations of TE2100. Section 7 then discusses the application of these innovations to other adaptation decisions and what they may imply for the needs from future climate science. We note that the TE2100 project also included other innovations, including its extensive MCDA analysis, which has been documented by Penning-Rowsell et al. (2013). We draw on this work to give context, but the discussion focuses on managing deep uncertainty in future climate risk.

The Thames Estuary 2100 Project

The TE2100 Project began in 2002 and was a resource-intensive, 6-year-long project led by the UK Environment Agency, costing around £16 million. As well as its work on managing risks from climate change, it included detailed hazard and risk modelling, engineering analyses, CBA and MCDA analyses (Penning-Rowsell et al. 2013) and stakeholder consultation. The resulting plan was issued for consultation in April 2009 (Footnote 2). The final plan was issued in November 2012 (EA 2012a) and provided a strategy for managing tidal flood risk throughout the Thames Estuary, from Teddington in the west to Shoeburyness in the east.

The largest component of the plan was the strategy to upgrade or replace the Thames Barrier, which would cost between £1.6 billion and £5.3 billion depending on the option selected. The Barrier is a moveable structure that spans more than 500 m across the Thames and protects London from a storm surge from the North Sea (Fig. 1). Surge tides occur when a band of low pressure moves across the Atlantic towards the British Isles, causing the sea beneath it to rise above the normal level and creating a ‘hump’ of water. As the low-pressure system moves down the east coast of England, the hump grows higher as it is squeezed between the coastlines of the UK and mainland Europe and then funnels up the Thames Estuary (EA 2012a). Strong northerly winds can further increase the height of the surge. A surge tide entering the Thames Estuary can increase water levels by 1–3 m and can be a major flood threat, especially if it coincides with a ‘spring’ tide, when tide levels are much higher than normal.

Fig. 1 The Thames Barrier (bottom of photo). Source: Environment Agency

The impacts of an unmitigated storm surge flood in London would be disastrous in terms of lives lost, property damaged and economic disruption. For example, the TE2100 project estimated that 1.25 million residents, over 500,000 homes, 40,000 commercial and industrial properties, key government buildings, 400 schools, 16 hospitals, 4 World Heritage sites, and 55 km² of designated habitat sites are located on the Thames flood plain (EA 2012a). The last time central London was flooded by a storm surge was in 1928. In 1953, a major surge affected the eastern part of the Estuary, causing extensive damage and loss of life. The Thames Barrier was built in response and opened in 1984. The system was originally designed to last to 2030 (Footnote 3). It is one part of an extensive flood management system, comprising eight other major barriers, more than 330 km of floodwalls and embankments and 36 industrial gates (EA 2012a).

Today, the Barrier provides a high standard of protection (estimated at better than 1-in-1,000 years), but this standard will fall as sea levels rise (EA 2009). The broader system, which is more than 25 years old in places, is also beginning to deteriorate and will come to the end of its useful life between 2030 and 2060 (EA 2012a), requiring major investments in replacement or repair. The TE2100 project aimed to examine whether and when the whole flood management system might need to be modified and to provide a forward plan to 2100. The plan needed to consider not only growing hazards due to climate change and ageing infrastructure, but also the rising economic value and population at risk throughout the Estuary. Value for money was the central objective of the plan (Penning-Rowsell et al. 2013), though the plan also needed to meet a range of economic, social, cultural and environmental objectives (EA 2012a), which were captured through its Strategic Environmental Assessment (EA 2012b) and MCDA (Penning-Rowsell et al. 2013) processes.

The TE2100 plan needed to account for a wide range of uncertainties, for example, over the valuation of non-monetary impacts and property values; these were captured in the analysis through sensitivity testing the outcomes of the CBA and MCDA (Penning-Rowsell et al. 2013). But a key challenge for TE2100 was how to manage the ‘deep’ uncertainty over the scale of future increases in extreme water levels in the Estuary.

It was clear that the large scale and irreversibility of the potential investments in London’s flood management system, the high risks associated with failure, and the long lifetimes and lead times of the infrastructure together meant that the ‘optimal’ strategy to 2100 was likely to be highly sensitive to assumptions about future extreme water levels. It was also clear that there is deep uncertainty over future extreme water levels in the Thames Estuary. In particular, the IPCC AR4 made clear that current GCM-based projections are likely to underestimate global sea level rise, due to known missing processes in those models, such as the dynamics of ice sheets (Solomon et al. 2007). There was also known to be deep uncertainty over the response of North Atlantic storm tracks to future warming (Lowe et al. 2009). In light of these facts, the TE2100 project set out to design a plan that was as robust as possible: “adaptable to change and remain[ing] fit for purpose throughout its 100 year lifetime” (EA 2012a). To develop such a plan, we suggest that the TE2100 project adopted four main innovations:

  • Firstly, the ‘decision-centred’ planning process, which, while common in project appraisal (e.g. HM Treasury 2003), is not typically adopted in the applied literature on climate change adaptation.

  • Secondly, the ‘narrative scenarios’, which went beyond the outputs from GCMs and aimed to explore the plausible range of long-term extreme water levels using evidence from additional sources, such as observations of the distant past.

  • Thirdly, the ‘adaptation pathways’ approach, which facilitated the explicit consideration of timing and sequencing of adaptation options to maintain flexibility while keeping risk below acceptable levels.

  • Finally, the use of defined ‘decision-points’ to guide implementation, triggered by observations of ten key indicators.

The separation of these four innovations is somewhat artificial, as they are interdependent and ran in parallel during the TE2100 project. Each is described in turn in the following sections.

Finally, we note that the innovations discussed in this paper came about through a long and active process of learning, collaboration and development during the project. In particular, the project built on work by the UK Climate Impacts Programme (for example, Willows and Connell 2003), which was further developed by the Environment Agency in collaboration with the ESPACE (European Spatial Planning Adapting to Climate Events) initiative. The project was supported through collaboration and knowledge sharing with partners across North West Europe. Stakeholder engagement, including novel online consultations, was also critical throughout the project.

The ‘decision-centred’ planning process

Most of the literature on climate risk assessment and adaptation planning has been ‘science-first’ (also known as ‘top-down’ in Pielke et al. 2012 or ‘science-based’ in Gregory et al. 2012). This means that the risk assessment or decision analysis focuses on scientific analyses of the risks and options, in this case multi-decadal projections from GCMs, which are ‘downscaled’ to provide local projections and then fed into some form of impacts model (e.g. a hydrological model or other response function) to give some estimate of impact. This information is then used to identify and appraise options. This approach is manifest across large parts of the literature on climate change (Pielke et al. 2012; Dessai and Hulme 2007), including, for example, the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC; Solomon et al. 2007; Parry et al. 2007), the Stern Review on the Economics of Climate Change (Stern 2007) and the UK National Climate Change Risk Assessment (Defra 2012), as well as many academic studies, for example, Ranger et al. (2011).

The TE2100 project reversed this approach, placing an understanding of the characteristics of the decision problem (the objectives and values of stakeholders, trade-offs, constraints and decision criteria), the vulnerability of the system and the options themselves at the heart of the analysis. This approach is increasingly recommended across the academic literature on climate change adaptation, though it is given a variety of names, for example, a decision-centric or decision-analytic approach (Brown et al. 2011), policy-first (Ranger et al. 2010, Fig. 2), bottom-up (Pielke et al. 2012), assess-risk-of-policy (Dessai and Hulme 2007), or risk management approach (Willows and Connell 2003). While there may be subtle differences between these approaches (see Brown et al. 2011 for discussion), fundamentally they each frame the analysis as a choice between options to reduce vulnerability, predicated on a strong understanding of the decision problem itself, rather than focussing on climate projections. In this paper, we will refer to the approach as ‘decision-centred’, but this is not intended to exclude the other framings in the literature.

Fig. 2 Example of a ‘policy-first’ approach from Ranger et al. (2010). Other analogous examples are available, for example, Willows and Connell (2003) and Dessai and Hulme (2007)

While this approach is relatively new in the literature on climate change adaptation, it is rooted in long-established theories and practice in decision analysis and operational research. For example, it is consistent with the ROAMEF (Rationale, Objectives, Appraisal, Monitoring, Evaluation and Feedback) cycle recommended by the HM Treasury Green Book (HM Treasury 2003) and the discourse on ‘value-focused’ decision-making (Keeney 1996). Indeed, the steps outlined by Ranger et al. (2010) (Fig. 2) are fundamentally the same as the ROAMEF cycle and the standard steps of decision analysis outlined by Keeney (1982) and Gregory et al. (2012).

Table 1 shows the steps of the planning process in the TE2100 project, mapped in the context of the process outlined in Ranger et al. (2010), given in Fig. 2. Clearly, the characteristics of the decision and the system are first and foremost in this process and frame the later analyses. The vulnerability analysis (step 1b, Table 1) focussed on identifying key thresholds in the vulnerability of the system, including the limits of protection of the current flood management system, engineering limits on the current barrier, and the ‘limit to adaptation’—the level of sea level rise at which it would be very difficult to continue to protect London in its current form (requiring some retreat).

Table 1 The planning process of TE2100 with reference to steps in Ranger et al. (2010) (Fig. 2)

In the TE2100 project, climate projections entered the decision analyses at a later stage than in a typical science-first approach and in different ways, for example:

  1. A preliminary ‘worst-case’ scenario of future extreme water levels was developed [H++ (first guess)] and introduced in step 1b, to inform the identification of options, and in step 1c to create the boundaries on the route-map (Fig. 5).

  2. A ‘central-case’ scenario was used in step 2a for the detailed CBA and MCDA, including the risk analysis to assess quantifiable uncertainties.

  3. A refined set of scenarios was then used in step 2a to explore the sensitivity of the CBA and MCDA to ‘deep’ uncertainties over climate change.

  4. The refined scenarios were used to scope the timing of decision points in step 2b.

A lesson from this is that the long-term scenarios were designed specifically to inform particular decisions. Importantly, the scenarios were not used to optimise the decision, but instead to help the decision maker see where the strategy is robust to uncertainty and where the plan is vulnerable.

Figure 3 compares science-first and decision-centred approaches in simplified terms. A growing number of studies have noted the advantages of this ‘decision-centred’ approach over the ‘science-first’ approach (Pielke et al. 2012; Gregory et al. 2012; Brown et al. 2011; Ranger et al. 2010; Wilby and Dessai 2010; Dessai and Hulme 2007; Willows and Connell 2003); for example:

Fig. 3 Comparison of a ‘science-first’ and ‘decision-centric’ process, with the size of the bubbles indicating the emphasis of that step (in terms of time or resources) within the process. Source: adapted from Dessai and Hulme (2007)

  • A decision-centred approach encourages a decision maker to begin at the level of the problem itself, rather than the analysis being defined by the outputs from models (Gregory et al. 2012). This prevents the analysis from becoming too narrow and losing focus on the context and the choice to be made (Pielke et al. 2012). It also encourages the decision maker to consider the wider context, including the social, economic and environmental context (the ‘contextual vulnerability’, Füssel 2007), and the interactions with other risks, pressures and priorities. In many cases, this broader context will have a significant influence on the options identified and their perceived benefits. A science-first approach could lead to outcomes that are misleading, too narrowly focussed, less able to cope with broader uncertainties, or inappropriate (Pielke et al. 2012).

  • A decision-centred approach can encourage greater involvement of stakeholders earlier in the process, to better understand the contextual vulnerability and the decision criteria, and to identify options. It recognises that “social considerations and ethics and the quality of the dialogue play important roles in shaping… choices” (Gregory et al. 2012, p. 3). As well as ensuring the robustness and relevance of the decision analysis, stakeholder engagement is also crucial for gaining buy-in for the resulting plan (Willows and Connell 2003). Stakeholder engagement was an important component of the TE2100 project (EA 2009).

  • A decision-centred approach can also reduce the resource-intensity of the analysis (Ranger et al. 2010; Brown et al. 2011). A science-first approach is founded on scientific modelling and consequently the focus is on ensuring that the modelling is as comprehensive and robust as possible. But uncertainties ‘balloon’ throughout the process (Carter et al. 2007; Jones 2000), meaning that the appraisal of options can become impracticable (Wilby and Dessai 2010). In the decision-centred approach, detailed analyses are used only where necessary to compare specific attributes of options, and so analyses tend to be more streamlined, targeted and less sensitive to uncertainties.

Developing decision-relevant sea level rise scenarios

The known uncertainties in extreme water levels (Sect. 2) called for a radically different approach to producing scenarios. Historically, scientists have aimed to develop ‘best-guesses’ of future climate for a given pathway of future emissions based on the ‘best available’ GCMs and regional climate models (RCMs). The result has been a series of scenarios based on different models, as evident in the 1990–2007 Assessment Reports of the IPCC. However, the nature and limitations of these models mean that they may present an incomplete picture of the range of possible futures—an “irreducible lower bound on the range of climate uncertainty” (Brown et al. 2012). For example, the climate models share common structures and assumptions and so are not independent (Masson and Knutti 2011) and may provide only a limited sampling of the uncertainties (Knutti et al. 2010; Oreskes et al. 2010; Stainforth et al. 2007; Morgan 2003). Lempert et al. (2006) suggest that climate models can at best be seen as partial scenario-generators. This is at odds with the requirement to map the range of plausible outcomes (Parson 2008), which is inherent in most decision-analytical tools including scenario planning (Van der Heijden 2005) and robust decision-making (Lempert et al. 2003).

In the mid-2000s, scientists began developing approaches that aimed to better explore and quantify the range of uncertainty (for example, the perturbed-physics ensemble approach, Murphy et al. 2004). This led on to the development of the first probabilistic climate projections, which provided PDFs (probability density functions) for a range of climate variables, such as rainfall and surface temperatures. The rationale was that PDFs were seen as a good approach to communicating uncertainty that is compatible with traditional tools for managing uncertainty (UKCIP 2006). For this reason, this approach was embodied in the UK Climate Projections 2009 (UKCP09, Murphy et al. 2009). A problem with this approach was that the PDFs were highly conditional on the experimental design that produced them and provided only a limited sampling of the uncertainties (Knutti et al. 2010). In other words, there are potentially unquantified secondary uncertainties in the PDFs. In such cases, expected utility analysis (EUA) and similar standard tools are not suitable (Morgan et al. 1999; Gilboa et al. 2009).

UKCP09 itself recognised that this is a particular problem in the case of future projections of extreme water levels. It stated that “knowledge gaps in our understanding of marine processes … mean that current models may not simulate the full range of possible futures” (Lowe et al. 2009, Footnote 4).

A number of studies have suggested alternative approaches, including using a more decision-centric approach to develop more ‘decision-relevant’ sets of scenarios or PDFs based on GCMs and RCMs (for example, Lempert et al. 2006; Whetton et al. 2012). We suggest that this represents an advance, but this type of study is often still based on climate models and so may still be subject to unquantified uncertainties. Ranger and Niehörster (2012) and Brown et al. (2012) develop a framework for going beyond GCMs to produce a set of narrative scenarios that use empirical models and expert judgement to complement outputs from GCMs. They show that this produces a much wider range of scenarios than would be generated by GCMs alone.

Recognising the need to quantify the plausible range of future extreme water levels in the Thames Estuary (Parson 2008; Morgan 2003; Van der Heijden 2005) and the limitations of climate models, TE2100 commissioned work to gain a better understanding of the effects of climate change on storm surge, sea level rise and river flow (Footnote 5), and to develop a series of plausible narrative scenarios for extreme water levels during the twenty-first century. This analysis ran in parallel with the options development within TE2100 and involved the Met Office Hadley Centre and the Proudman Oceanographic Laboratory (Footnote 6).

The approach taken was to develop more extreme (but unlikely) narrative scenarios that attempted to capture plausible ‘worst-case’ (Footnote 7) estimates of the influence of physical processes known to be missing from current GCMs, in particular, the loss of ice sheets (Meehl et al. 2007). This so-called ‘High++’ scenario would be used for sensitivity testing the robustness of adaptation strategies to uncertainty.

Initially, the High++ (first guess) scenario was set at 4.2 m by 2100 (Footnote 8) and was based on expert judgement of the combined maximum plausible increase in water levels from all known sources (EA 2009). This expert judgement of the science available at the time drew on observations of sea level rise and its main contributors, paleoclimatic data and knowledge of the underlying physical processes. No account was taken of correlations between the terms. This value was used in the initial identification of adaptation options (Sect. 3).
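To illustrate how such an expert-judgement scenario can be assembled, the short Python sketch below simply sums plausible upper-bound contributions to the increase in extreme water levels; the component names and values are hypothetical placeholders rather than the figures used by TE2100, and, as in the project’s first-guess estimate, correlations between the terms are ignored.

    # Minimal sketch of a 'worst-case' (H++-style) scenario built by summing
    # plausible upper-bound contributions to extreme water level rise by 2100.
    # Component names and values are illustrative assumptions, not TE2100 figures.
    worst_case_contributions_m = {
        "thermal_expansion": 0.8,       # hypothetical upper bound
        "ice_sheet_dynamics": 2.0,      # process poorly captured by GCMs
        "glaciers_and_ice_caps": 0.4,
        "local_land_movement": 0.3,
        "storm_surge_change": 0.7,
    }

    # Simple summation: correlations between the terms are ignored, as in the
    # first-guess H++ estimate described above.
    h_plus_plus_first_guess = sum(worst_case_contributions_m.values())
    print(f"Illustrative H++ (first guess): {h_plus_plus_first_guess:.1f} m by 2100")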

The central scenario for extreme water levels was set at 90 cm by 2100, based on the Flood and Coastal Defence Appraisal Guidance issued by the UK Department for Environment, Food and Rural Affairs (Defra 2006). This level was consistent with the upper bound of the ‘likely’ range of projections in UKCP09 (Lowe et al. 2009).

The High++ (first guess) scenario was later revised downwards based on new modelling of the effects of climate change on local sea level rise and storm surge generation, alongside a deeper evaluation of the upper and lower bounds of potential sea level rise during the century, based on paleoclimatic data (EA 2009) and expert judgement of both model and observational evidence. The revised High++ scenario was 2.7 m by 2100.

Decision analysis and the “adaptation pathways” approach

Coastal infrastructure projects have historically adopted the approach of optimising the design against the best available probabilistic information on extreme water levels over the lifetime of the project (e.g. MAFF 1999). However, the high stakes involved in the TE2100 project, coupled with the level of uncertainty in climate change projections, motivated a new approach to risk management.

Recent years have seen a growing number of tools and guidance on managing uncertainty in long-term climate risk projections; this includes, for example, Ranger et al. (2010), Wilby and Dessai (2010), HM Treasury and Defra (2009), Groves et al. (2008a, b), Dessai and van der Sluijs (2007) and Willows and Connell (2003). These approaches highlight the benefits of building a more ‘robust’ risk management strategy, that is, one that performs adequately well against a set of decision criteria under a wide range of possible future states of the world, rather than adopting the more traditional approach of optimising a strategy to a particular risk level (Dessai et al. 2009; Lempert et al. 2003; Lempert and Schlesinger 2000). Indeed, the UK Government’s guidance on Accounting for the Effects of Climate Change (HM Treasury and Defra 2009) advocates this type of approach where there is unquantifiable uncertainty.

Walker et al. (2013) suggest that there are four main ways of building a robust plan:

  • Resistance: planning for the worst-case scenario

  • Resilience: planning to ensure that whatever occurs, the system can recover

  • Static robustness: reducing vulnerability for the largest possible range of scenarios

  • Dynamic robustness (or flexibility): building plans that can be changed over time as more is learnt or as conditions change.

Walker et al. suggest that the first approach is likely to be the most costly and the lowest value for money, and may be vulnerable to surprises (Fig. 4). The second approach accepts a short-term productivity loss, but focuses on recovery. ‘Static’ robustness is analogous to traditional scenario planning (Walker et al. 2013); a drawback of this type of approach is that it sets long-term actions now based on current understanding and so could leave plans vulnerable if this understanding changes (Walker et al. 2013; Van der Heijden 2005). Conversely, dynamic robustness (also known as managed adaptive, Fig. 4) commits only to short-term actions, reducing risk iteratively and laying a framework to guide future actions that promotes flexibility (Haasnoot et al. 2013; Ranger et al. 2010). The dynamic robustness approach was recommended by UK government guidance (Defra 2006; HM Treasury and Defra 2009) and was adopted by TE2100.

Fig. 4 Illustration of the evolution of risk within an iterative risk management approach. Source: EA (2012a)

The TE2100 project built in flexibility through a combination of three methods (here categorised after Fankhauser et al. 1999):

  • Firstly, so-called ‘low-regret’ measures are implemented in the near term. Low-regret measures are those that reduce risk immediately and cost-efficiently under a wide range of climate/sea level rise scenarios. They are called ‘low-regret’ rather than ‘no-regret’ because they may involve some opportunity cost. For TE2100, the low-regret option is improving (raising) existing defences (Table 3). Improving existing defences around the Estuary will cost-beneficially reduce residual risks, while leaving open the option to scale up action in the future. This ‘buys time’ to monitor and learn before making a major investment.

  • Secondly, incorporating ‘structural’ flexibility, that is, engineering in flexibility so that infrastructure can be adjusted or enhanced in the future at minimal additional cost. For example, the current Thames Barrier can be over-rotated to cope with greater than expected sea level rise. This category could also include safety margins, where infrastructure is over-engineered to cope with greater than expected change; this approach is effective where the marginal cost is low. Another approach considered by TE2100 was the purchase of land on which to build infrastructure in the future (EA 2012a, b).

  • Thirdly, pathway flexibility. TE2100 adopted a dynamic adaptive planning approach (also known as iterative risk management, adaptive management or managed adaptive) where plans are implemented iteratively and are designed to be adjusted over time as more is learnt about the future. In this way, flexibility is built into the long-term strategy—the timing of new interventions and the interventions themselves can be changed over time.

TE2100 utilised an innovative approach to constructing a dynamic adaptive strategy known as the ‘Adaptation Pathways’ approach, also known as the ‘route-map’ or ‘decision pathways’ approach (Fig. 5). This approach helps the decision maker to identify the timing and sequencing of possible pathways of adaptation over time under different scenarios. Each pathway incorporates a package of individual measures. For example, the ‘route-map’ (Fig. 5) can indicate how measures can be implemented iteratively over time to maintain risk below target levels cost-effectively (Fig. 4), while keeping open options to manage future risks.

Fig. 5 High-level options and pathways developed by TE2100 (on the y-axis) shown relative to threshold levels of increase in extreme water level (on the x-axis). For example, the blue line illustrates a possible ‘route’ where a decision maker would initially follow HLO2 then switch to HLO4 if sea level was found to increase faster than predicted. The sea level rise shown incorporates all components of sea level rise, not just mean sea level

Using the route-map, it is possible to define ‘decision points’ (for example, in terms of observations of mean sea level) at which an action no longer meets the specified decision criteria (Kwadijk et al. 2010) and one would either take additional action or switch to an alternative pathway (Sect. 6, and light blue arrow in Fig. 5). Thus, rather than taking a one-off decision now about a ‘best’ option, the approach encourages the decision maker to postulate ‘what if’ scenarios and to take a more flexible approach. It also encourages a decision maker to consider under what conditions a plan will fail and to design actions to guard against this, including preparing for actions that might be triggered later (Haasnoot et al. 2013). The strategy is designed such that in the early period, when uncertainties are at their highest, there is virtually no cost to switching.

Haasnoot et al. (2012, 2013) and Walker et al. (2013) describe the evolution of the adaptation pathways approach from scenario planning, and place it in the context of the broader family of adaptive planning approaches (alongside ‘adaptive policymaking’, Walker et al. 2001, ‘adaptation tipping points’, Kwadijk et al. 2010, and ‘assumption-based planning’, Dewar et al. 1993). An important innovation of this family of approaches over ‘static robustness’ was the introduction of ‘sign-posts’ and ‘triggers’, under which the plan would be adjusted or refined (Walker et al. 2001; Kwakkel et al. 2010).

Interpretation of the route-map

TE2100 identified four possible packages of measures, referred to as ‘High-Level Options’ (HLO1, 2, 3a, 3b and 4, shown as arrows in Fig. 5). Each HLO consists of a pathway or route through the century that can be adapted to the rate of change that we experience. For example, in the first pathway (HLO1), a sea level rise of around 20–30 cm would require raising other smaller flood defences around the Thames Estuary to increase their lifetime. Next, if sea level rise reached around 60–70 cm, the current Thames Barrier could be over-rotated and interim defences (high walls upstream of the Barrier) restored. If sea levels rose to around 80–90 cm, then the current Thames Barrier would need to be improved, as well as raising downstream defences. A simple sketch of this pathway logic is given below.
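The following minimal sketch shows one way such a pathway could be represented in code; the thresholds are taken loosely from the indicative ranges quoted above for HLO1, and the helper simply returns the interventions whose thresholds have been crossed by an observed increase in extreme water level. It illustrates the route-map logic only and is not the Environment Agency’s planning tool.

    # Illustrative encoding of one adaptation pathway (loosely based on HLO1).
    # Thresholds are indicative values (metres of increase in extreme water level).
    HLO1_PATHWAY = [
        (0.25, "Raise smaller flood defences around the Estuary"),
        (0.65, "Over-rotate the existing Thames Barrier and restore interim defences"),
        (0.85, "Improve the existing Thames Barrier and raise downstream defences"),
    ]

    def interventions_triggered(observed_rise_m, pathway=HLO1_PATHWAY):
        """Return the interventions whose thresholds are reached by the observed rise."""
        return [measure for threshold, measure in pathway if observed_rise_m >= threshold]

    # Example: with 0.7 m of observed rise, the first two interventions are triggered.
    print(interventions_triggered(0.7))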

Together, the high-level options were designed to span the estimated plausible range of increases in extreme water levels in the Thames by 2100 (Sect. 4). For example, HLO1 culminates in improving the current Thames Barrier and is appropriate for up to around 2.3 m of sea level rise. This HLO would be sufficient given current “most probable” estimates of future sea level rise in the Thames. Under the H++ (first guess) scenario of sea level rise of 4.2 m, a new barrage would need to be constructed (HLO4).

Figure 5 illustrates that, given the deep uncertainty in future extreme sea levels, it would be risky to commit to one pathway today based on current ‘best available’ projections. But it also shows that the pathways do not diverge until sea level rise reaches over 50 cm, and even then, it is possible to switch between pathways at low cost. For example, the blue line in Fig. 5 illustrates a route between HLO2 and HLO4. This flexibility comes about due to the very high standard of protection today in the Thames Estuary, and the robustness of the existing flood protection systems. The current Thames Barrier is expected to remain viable until around 2070, which means that implementation of a replacement must begin soon after 2050 (EA 2012a, b). All four HLOs remain open to consideration and a decision between pathways need only be made in the future, when there should be better information as a result of continued monitoring and improvements in sea level modelling.

Formal options appraisal in TE2100

The formal options appraisal in TE2100 followed on from the adaptation pathways analysis. The core of this analysis was traditional CBA and MCDA to assess a set of four possible options (with sub-options, Table 2), which align with the four HLOs (with some refinements). The analytical methods used are consistent with the Environment Agency’s Flood and Coastal Risk Management Appraisal Guidance (FCRM-AG) (EA 2010). These analyses are reported in detail in Penning-Rowsell et al. (2013) and the technical report of TE2100 (EA 2009). The appraisal included a range of monetised factors (e.g. property at risk, agricultural land use, risk to life, technical risk) and non-monetised factors (e.g. water quality and quantity, sense of community, recreation, habitats and biodiversity). Here, we provide an overview of some of the key components of this analysis relevant to managing deep uncertainty.

Table 2 Options appraised in the TE2100 Project (Penning-Rowsell et al. 2013)

The appraisal demonstrated that continuing to protect London is highly cost-beneficial. The options appraisal concluded that the “do minimum” (DM) option has the highest benefit-to-cost ratio (every £1 of cost giving £88 of benefit to society), but that there is a strong marginal case for moving beyond this (Penning-Rowsell et al. 2013). Based on the central projection, option 3.2 (a new barrier at Long Reach from 2070) emerges as the preferred option, closely followed by option 1.4 (improve existing system). Using the High++ scenario for extreme water levels does affect the ranking of long-term options (Table 3), but does not affect the immediate decisions over flood risk management. However, the analyses also showed that even before 2050, the residual risk would increase under the higher sea level rise scenarios. For these reasons, the TE2100 project concluded that, given the scale of the investments needed after 2050, the long lead times for implementation and the uncertainties, it is vital to monitor the situation and carefully plan the adaptive measures needed in the medium term. A stylised sketch of this kind of scenario sensitivity test is given after Table 3.

Table 3 Ranking of options (Table 2) for different scenarios (Penning-Rowsell et al. 2013)
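To make the idea of testing option rankings against scenarios concrete, the sketch below ranks a few options by benefit-to-cost ratio under a central and a High++-style scenario; all names and values are invented placeholders and do not reproduce the TE2100 appraisal results.

    # Illustrative sensitivity test: rank options by benefit-to-cost ratio under
    # two scenarios. All values are made-up placeholders, not TE2100 results.
    appraisal = {
        # option: (benefit under central, benefit under High++, cost), in £bn
        "Improve existing system":   (20.0, 14.0, 2.0),
        "New barrier at Long Reach": (24.0, 22.0, 3.0),
        "New barrage":               (25.0, 24.0, 6.0),
    }

    def ranking(scenario_index):
        """Options sorted by benefit-to-cost ratio (highest first) for a scenario."""
        return sorted(appraisal,
                      key=lambda option: appraisal[option][scenario_index] / appraisal[option][2],
                      reverse=True)

    print("Central ranking:", ranking(0))
    print("High++ ranking: ", ranking(1))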

The decision process: monitoring and decision points

The route-map (Fig. 5) not only lays out the options, but also provides information on when and how decisions should be made. Crucially, it is used to identify a set of decision points (Fig. 6), triggering specific options or pathways, conditional on observations of sea level rise and other indicators. Together, this framework, incorporating the route-map, decision points and a monitoring system, forms the basis of a 40-year investment plan detailing a complete decision process for upgrading the tidal flood management system.

Fig. 6 Schematic diagram of the thresholds, lead times and decision points approach

The decision points were derived as follows. For each adaptation option, TE2100 assessed the key threshold of climate change at which that option would be required (for example, the extreme water level, as illustrated in Fig. 5) and the lead time needed to implement that option. From this, one could estimate the decision point to trigger that implementation, in terms of an indicator value, such as the observed extreme water level, with an uncertainty range caused by the uncertainty in the rate of sea level rise. This is illustrated in Fig. 6. Relative sea level rise and the peak surge tide level are the critical indicators, but eight other indicators are also used (Table 4, EA 2009). Figure 7 gives examples of two of the indicators (1 and 5), demonstrating the upward trends on top of strong natural year-to-year variability in indicator values.
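The derivation of a decision point can be sketched as a simple back-calculation: given the threshold level of change at which an option is needed, the lead time required to implement it, and a range of assumed rates of rise, one can estimate when the option must be triggered. The numbers below are illustrative assumptions only, not TE2100 values.

    # Sketch of deriving a decision point: the year by which an option must be
    # triggered so that it is in place before its threshold is reached.
    # All numbers are illustrative assumptions.

    def year_threshold_reached(threshold_m, rate_m_per_year, base_year=2010, base_level_m=0.0):
        """Year when the indicator is projected to reach the threshold at a given rate."""
        return base_year + (threshold_m - base_level_m) / rate_m_per_year

    def decision_point(threshold_m, lead_time_years, rate_m_per_year, **kwargs):
        """Trigger year = year the threshold is reached minus the implementation lead time."""
        return year_threshold_reached(threshold_m, rate_m_per_year, **kwargs) - lead_time_years

    threshold_m = 0.9        # e.g. rise in extreme water level requiring a new intervention
    lead_time_years = 20     # assumed time needed to plan and build the intervention

    # Uncertainty in the rate of rise translates into a range of decision points.
    for label, rate in [("low", 0.004), ("central", 0.008), ("high", 0.015)]:
        print(f"{label} rate: trigger decision around {decision_point(threshold_m, lead_time_years, rate):.0f}")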

Fig. 7 Example indicators. a Thames Barrier tidal/fluvial dominated closures 1982–2012. b Thames (Tower Pier) mean tide levels 1934–2011

Table 4 Indicators used in TE2100 (EA 2009)

On current central projections, the initial decision point, at which the pathways (or HLOs) would diverge, is expected to come around 2050. If this were the case, this decision would be made with the benefit of an additional 40 years of knowledge about climate change and sea level rise. However, if monitoring reveals that water levels (or other indicators) are increasing faster (or slower) than predicted under current central projections, then decision points would be brought forwards (or put back) to ensure that decisions are made at the right time to allow an effective and cost-beneficial response. This creates an uncertainty on the timing of the decision point that can be estimated based on the range of projections, as shown in Fig. 6.
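One way such monitoring could inform timing is sketched below: the observed rate of rise is estimated from noisy annual indicator values with a simple linear fit and compared with the rate assumed in the central projection, with a faster observed rate arguing for bringing decision points forward. This is an illustrative sketch using synthetic data, not the method used by the Environment Agency.

    import numpy as np

    # Synthetic annual mean tide levels (metres, relative): an assumed underlying
    # trend of 3 mm/yr plus year-to-year variability, standing in for real records.
    rng = np.random.default_rng(0)
    years = np.arange(1990, 2021)
    levels = 0.003 * (years - years[0]) + rng.normal(0.0, 0.03, size=years.size)

    # Estimate the observed rate of rise with a simple linear fit.
    observed_rate, _ = np.polyfit(years, levels, deg=1)

    projected_rate = 0.003  # rate assumed in the central projection (illustrative)

    if observed_rate > projected_rate:
        print(f"Observed rate {observed_rate * 1000:.1f} mm/yr exceeds the projection: "
              "consider bringing decision points forward.")
    else:
        print(f"Observed rate {observed_rate * 1000:.1f} mm/yr is at or below the projection: "
              "decision points can stand or be put back.")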

The decision process is designed to respond to adjustments in any of these indicators and to updates to projections of extreme water levels, but the effectiveness of the plan will depend on a continuing process of regular review. The plan sets the review period as at least once every 10 years, with a major review in 2050 (EA 2012a, b).

Also critical to its success is the implementation of an ongoing monitoring system. The UK already has a history of constructing some of the longest climate datasets in the world (e.g. the Central England Temperature record dates back to c. 1650). However, there are some essential characteristics of future monitoring that we can suggest based on the needs of TE2100. Firstly, it is important to avoid breaks in the records, so long-term planning of monitoring, and of its funding, is needed. Secondly, once data are collected, they need to be safely stored and distributed, ideally by being made available to all who could benefit. One approach is through national data hubs. Good practice involves making available not only the data but also details of any tools or methodologies used to translate raw measurements into a final observational product, so that the entire process is as transparent as possible. There may also be opportunities to use climate models more cleverly to help plan an optimal set of observations, since it is not feasible to measure everything as frequently as we might like.

A further research priority is to conduct analyses to link the operational indices with longer term sea level and storm surge projections. The Environment Agency is currently working with European counterparts and experts (the Istorm network) to address this.

Discussion

We have identified four key innovations in the Thames Estuary 2100 project that enabled it to develop a long-term flood management plan for London that should be robust to the deep uncertainty in long-term projections of extreme water levels. Flexibility and iterative planning are core to the approach.

Haasnoot et al. (2012, 2013) and Walker et al. (2013) discuss the advantages and disadvantages of the dynamic adaptive planning approach. A clear advantage is that the plan is scenario neutral; this means that decisions do not require information about the likelihood of different future scenarios. Yet, unlike in the ‘resistance’ approach (Walker et al. 2013), this does not mean adapting to the worst-case now, but instead means maintaining flexibility to cope with it if necessary in the future. A further advantage is that it gives clear information on the effectiveness and timing of options, enabling analysts to assess under what conditions and on what timescale a plan could fail (Haasnoot et al. 2012). It explicitly recognises that adaptation over time will be determined not only by what can be anticipated today, but also what is observed and learnt in the future (Yohe 1990). In addition, the route-map can help a decision maker to identify opportunities, potential low-regret measures and pathways that can lead to lock-in (Haasnoot et al. 2013). The approach ensures that whatever short- to medium-term plan is adopted, it is set in a framework that will not be maladaptive if climate change progresses at a rate that is different from what is predicted to be “the most probable” today. For example, Fig. 5 demonstrates that the refinement of the High++ scenario from the High++ (first guess) (Sect. 4) would have made no difference to the short-term investment strategy.

The adaptation pathways approach is more robust not only to climate change, but also to other sources of risk and uncertainty, including broader scenario uncertainties (e.g. socioeconomic) and uncertainties resulting from a lack of data. Key thresholds can be postulated and challenged through sensitivity analyses. For example, in TE2100, critical success thresholds for flood storage were assumed and then subject to more detailed modelling, which eventually led to a decision not to adopt the flood storage route. If a potential threshold is found to be critical, it can be monitored and researched further so that the route-map can be adjusted through time. As long as the pathways account for such potential surprises and learning, allowances for adjustment can be planned in.

A potential disadvantage of all robustness-based approaches is that they can lead to greater overall costs or a productivity trade-off. For example, delaying a much-needed public infrastructure project, like a sea defence, could leave people exposed to storm surges in the interim period. Alternatively, as in TE2100, it could mean making costly repairs to older infrastructure to extend its lifetime. To assess this potential trade-off, TE2100 appraised the different High Level Options using CBA and MCDA approaches (Penning-Rowsell et al. 2013). In this case, the appraisal showed that improving the older flood defences would cost-effectively ‘buy-time’ before it is necessary to make a more irreversible decision, such as the building of an expensive new barrier. Thus, a delay on a major investment decision is justified in this case, because the benefits of such an investment are highly uncertain today. The downside of flexibility is small.

To some extent, this flexibility is available for TE2100 due to its specific situation. Firstly, the current Thames Barrier was designed to provide a high standard of protection to 2030 and so adaptation is not urgent. Secondly, cost-effective low-regret measures are available. The lifetime of the existing flooding management system can be extended cost-efficiently by raising other defences and this effectively ‘buys-time’, allowing time to monitor and learn to gain additional information to make an improved long-term decision. This might not be the case in other circumstances, for example, where there are high costs of delay and there are no available ‘low-regret’ options. Ranger and Garbett-Shiels (2012) speculate that this situation may be relatively uncommon, particularly in low-income countries where standards of risk management are lower and consequently many investments in risk reduction will be ‘low-regret’.

Haasnoot et al. (2012) suggest that a weakness of the adaptation pathways approach is the complexity of the analysis and any simplifications that must be made to make the analysis tractable. Haasnoot et al. (2012) and Walker et al. (2013) discuss the complex scenario modelling and decision-making tools that may accompany this type of approach (including exploratory modelling and analysis, scenario discovery, robust decision making, robust optimisation and info-gap). This was not the experience in TE2100. Here, the adaptation pathways approach was used as a simple tool. The options to build in flexibility were clear and scenario-testing of the CBA and MCDA analyses was sufficient to justify the flexible approach (Penning-Rowsell et al. 2013). However, the TE2100 project may have benefitted from the fact that there was one dominant driver of long-term change relevant to the decision and that the economic case for protecting London is so great that each of the options was effectively ‘low-regret’.

Despite this, we propose that the four innovations identified in this paper would be applicable to a broad range of climate risk management decisions, particularly long-term, climate-sensitive decisions where multiple options are available. This will include many types of decisions, particularly long-lived infrastructure, but also sectoral planning, natural resource management, urban development and land-use planning (Hallegatte 2009). Indeed, Haasnoot et al. (2013) find that similar adaptation pathways approaches have been used in other cases, including coastal flood risk management in the Netherlands (Kwadijk et al. 2010), water management in river deltas (Haasnoot et al. 2013) and lakes (Brown et al. 2011), environmental management (Williams and Brown 2012), urban transport (Marchau et al. 2008) and airport strategic planning (Kwakkel et al. 2010). The New York City Panel on Climate Change has recommended flexible pathways and decision thresholds as a core part of how New York City will approach climate change adaptation (Rosenzweig et al. 2010). In addition, the Chair of the Delta Programme in the Netherlands similarly recognised the need for a more dynamic, adaptive approach, “a new way of planning, which we call adaptive delta planning… to maximise flexibility; keeping options open and avoiding lock-in”, to cope with long-term uncertainties including climate (Footnote 9). This type of approach is also recommended by UK government guidance (Defra 2006; HM Treasury and Defra 2009). Wilby et al. (2011) discuss the application of this approach to the design of new nuclear sites in the UK and identify several ways of building flexibility into strategies, including setting aside land within the site footprint, safety margins and modular design to allow low-cost retrofit, upgrade and replacement.

We propose that the approach can also be relevant to smaller-scale decisions, particularly where there are limits on resources and data availability. For example, an advantage of this approach is that it need not take a lot of time or intensive study. A route-map and set of decision points can come from a higher-level assessment using expert and stakeholder judgement. Importantly, it is a learning decision process and this means that plans can be refined iteratively, incorporating new data over time.

Wilby et al. (2011) highlight that a wider adoption of this approach will require greater clarity from regulators. We suggest that this is not just relevant for nuclear sites (as in Wilby et al. 2011) but for all long-term infrastructure with regulatory oversight, including water infrastructure, buildings (through building codes and planning frameworks) and flood risk management. Increasing the adoption of this approach will also likely require training and capacity building, as well as the supply of a set of generic narrative scenarios of the future. In the UK, such a set of scenarios could complement UKCP09.

Finally, we have shown that the TE2100 project took a different approach to its use of climate science and modelling to inform its adaptation strategy. Rather than focusing on only using outputs from the ‘best available’ GCMs and RCMs, it recognised the limitations of these models and the need to gain a fuller understanding of the range of plausible future risk. It drew on observations of the past, combined with a more detailed process understanding and expert judgement to generate a set of narrative scenarios of the future, which aimed to capture the plausible range. Learning from this example, we suggest that a more widespread adoption of dynamic adaptive planning approaches across society will require a different direction in future climate research, including:

  • Firstly, a key to the success of this approach is monitoring of the climate and key indicators of change so that change can be detected and any necessary alterations to plans put in place given sufficient time. We suggest that such monitoring be a key priority for investments in improved information.

  • Secondly, understanding future risks requires a better understanding of the past, particularly the causes of variability and trends in the key indicators. In the TE2100 project, this included paleoclimatic evidence on sea levels (EA 2009).

  • Thirdly, a change in emphasis in climate modelling, toward developing the physical understanding of models and real climate processes, to better bound the scale of potential future change. Within this, there is a need for a fuller exploration of the range of uncertainties in projections, including more investigation of the most extreme potential impacts (for example, collapse of ice sheets and the Atlantic meridional overturning circulation).

  • Fourthly, developing new ‘climate service’ products along two lines: firstly, improving the ‘best guess’ models, and secondly, building a set of complementary narrative scenarios using a broader range of evidence that explore the plausible range of uncertainty on this ‘best guess’.

  • Finally, “climate services” need to recognise that a key part of the service is to communicate the information effectively to those needing to use it for decision-making, including encouraging a more active dialogue between scientists and users to better communicate the limitations of current information and how it should be applied within decision processes.

In the UK, the implementation of these priorities already benefits from the establishment of a national climate capability as part of the current Met Office Climate Programme, which provides leadership on the development of key climate science infrastructure, such as climate model improvements. Such a capability involves a close working relationship between a range of providers, including the Met Office and the university sector. A national capability of this kind can provide the basic tools on which new climate advice can be founded.

Additionally, there is a need to recognise where improvements in climate science might be targeted. One aspect is to place more emphasis on bringing together expertise and datasets from observational studies with those of model development, testing and projection. A second aspect involves targeted research to better understand and simulate the key physical processes that are currently missing from models. This includes small-scale processes (e.g. cloud processes that are smaller than the resolution of current models), as well as key missing large-scale earth system processes, such as the loss of carbon from melting permafrost. Both of these inclusions will require improvements in high-performance computing power, but at a cost that is modest compared to the benefits provided by helping to avoid either under- or over-adaptation. However, even with extra resolution and complexity providing better models, it will remain prudent to supplement model results with evidence from other sources, as was done in TE2100.

Conclusions

Climate change brings several new challenges for traditional climate risk management. In this paper, we focus on two of these: the non-stationarity of risk and the deep uncertainty in future climate risk projections. The Thames Estuary 2100 project was one of the first major infrastructure projects to explicitly recognise and address the challenge of deep uncertainty in future climate risk throughout the decision process. We find that the project placed the concept of robustness at the heart of its plan. In particular, it developed an iterative, learning decision process that cost-effectively reduces risk today while avoiding foreclosing future options. We identify four specific innovations in the project: firstly, the decision-centred planning process; secondly, the development of a set of plausible narrative sea level rise scenarios against which the robustness of plans can be tested; thirdly, the ‘Adaptation Pathways’ approach, which encourages the user to take a more flexible approach to adaptation; and finally, the use of decision points, which ensures that the responses to changing climate risks are timely and cost-effective. We suggest that these innovations should be applicable to a broad range of decisions, particularly major infrastructure, buildings and spatial planning. We also draw several conclusions for future climate science and modelling to better inform adaptation. We find that monitoring should be a key priority for future investment. In addition, we suggest that there are significant benefits in developing a better understanding of the uncertainty in models and generating narrative scenarios to explore this range of uncertainty.