Sources and implications of deep uncertainties surrounding sea-level projections
Long-term flood risk management often relies on future sea-level projections. Projected uncertainty ranges, however, diverge widely as a result of different methodological choices. The IPCC has condensed this deep uncertainty into a single uncertainty range covering 66% probability or more. Alternatively, structured expert judgment summarizes divergent expert opinions in a single distribution. Recently published uncertainty ranges derived from these “consensus” assessments differ by up to a factor of four. This may result in overconfidence or overinvestment in strategies to cope with sea-level change. Here we explore possible reasons for these different interpretations. This is important for (i) the design of robust strategies and (ii) the exploration of pathways that may eventually lead to some kind of consensus distributions that are relatively straightforward to interpret.
1 Introduction
Sea-level rise (SLR) poses substantial risks for many low-lying, coastal areas around the world (Nicholls and Cazenave 2010; Katsman et al. 2011). Even a minor increase may considerably increase the frequency of harmful floods (Tebaldi et al. 2012). The management of the associated risks requires a sound understanding of potential local SLR, including low probability, high impact events (Kopp et al. 2014; De Vries et al. 2014; Grinsted et al. 2015).
Local SLR can significantly deviate from the global signal due to non-oceanic effects such as subsidence and post-glacial rebound, and due to oceanic effects (Slangen et al. 2012). Changes in ocean circulation, heterogeneous density changes, and mass-loss of large ice bodies (e.g., affecting the gravity field, Earth’s rotation, and lithospheric flexure) may cause distinct spatial patterns. Local SLR projections thus require a separate treatment of these major components, including thermosteric expansion and mass loss of ice sheets, ice caps, and glaciers.
Sea-level projections are deeply uncertain (Hulme et al. 2009; Ranger et al. 2013; Applegate et al. 2015; Oppenheimer et al. 2016). Deep uncertainty occurs when experts and/or decision makers “do not know or cannot agree upon the system model relating actions to consequences or the prior probabilities on key parameters of the system model” (Lempert and Collins 2007). Experts disagree on the best methods to assess the uncertainties (Church et al. 2013; Moore et al. 2013), and their subjective probability estimates of SLR are widely divergent (Horton et al. 2014).
When confronted with deep uncertainties, analysts have to make a choice (see, for example, Budescu et al. 2014). One option is to ignore the deep uncertainties, i.e., to present a single pdf, perhaps accompanied by a disclaimer that severe changes outside the presented range are possible. From a decision-making perspective, this alternative seems less favorable because decision makers tend to act differently when confronted with deep uncertainties or imprecise probabilities rather than with well-defined probabilities (e.g., Ellsberg 1961; Budescu et al. 2014). A second option is to try to achieve consensus and condense the deep uncertainties into a single “consensus” distribution. A third option, for example if a consensus appears not (yet) possible, is to present decision makers with the key aspects of this deep uncertainty.
Structured expert judgments can be useful in case of ambiguity, disagreeing models, and lack of empirical evidence (Oppenheimer et al. 2007; Aspinall 2010; IAC 2010; Mastrandrea et al. 2010; Bamber and Aspinall 2013; Horton et al. 2014). According to Cooke and Goossens (2008), “structured expert judgment refers to an attempt to subject the decision process to transparent methodological rules, with the goal of treating expert judgments as scientific data in a formal decision process.” Expert agreement is not the objective of structured expert judgment. Rather, it intends to explore the range of views and to help build a political or rational consensus (Cooke and Goossens 2008). Rational consensus can be achieved by means of a method that is defined and agreed-on before eliciting the experts (Cooke and Goossens 2008; Aspinall 2010). Whether and how to combine expert opinions is, however, non-trivial (Morgan and Keith 1995). As a consequence, the reliability of structured expert elicitations is often questioned (e.g., Keith 1996; Church et al. 2013; Gregory et al. 2014; Clark et al. 2015).
As an alternative to structured expert judgment, the IPCC’s Fifth Assessment Report (hereafter, AR5) presents a likely range. According to the Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties, a likely outcome means that “the probability of this outcome can range from ≥66% (fuzzy boundaries implied) to 100% probability” (Mastrandrea et al. 2010). The likely range explicitly builds on the agreed-on, current state of knowledge. Aiming for scientific rigor and consistency with literature, the IPCC authors have chosen not to account for poorly understood mechanisms, like the collapse of the marine-based sectors of the Antarctic ice sheet, in the likely range (Church et al. 2013; Gregory et al. 2014). This range has been criticized for being overconfident and ignoring semi-empirical model studies (Kerr 2013; Rahmstorf 2013; Grinsted 2014). However, two recent studies seem fairly consistent (Mengel et al. 2016; Kopp et al. 2016) and many local projections (partly) rely on AR5 and its model ranges (e.g., Kopp et al. 2014; De Vries et al. 2014; Grinsted et al. 2015).
Here, we explore the main reasons for the different interpretations of exactly the same information on SLR. This insight may be useful for designing strategies to cope with the deep uncertainties surrounding sea-level projections. In the longer run, eliciting the reasons for these divergent projections may help reduce ambiguities or help build rational consensus. To demonstrate and quantify these effects, we first explore how interpreting a given uncertainty in projections as representing differing likelihoods impacts probabilistic projections. Next, we discuss the potential role of structured expert judgment. Finally, we explore how structured elicitation might be utilized to reduce the current deep uncertainties and help to better inform risk and decision analyses.
2 The interpretation of the IPCC’s likely range
KNMI14 (Van den Hurk et al. 2014) implicitly interprets the likely range as the 90% probability range, i.e., it assumes that sea-level rise will fall within this range with 90% probability. Many other studies (e.g., Kopp et al. 2014) interpret the likely range as the 66% probability range. The rationale given by KNMI14 is to be methodologically consistent with AR5 and internally consistent within KNMI14, and in this way to provide a widely accepted and actionable common framework for climate change adaptation in the Netherlands. The 66% probability interpretation is typically neither explicitly motivated nor referenced. It can nevertheless make sense, for example if the objective is to produce wide (conservative) uncertainty ranges. From a robust decision-making perspective, conservative projections may be preferable to overconfident ones (see, for example, the discussions in Herman et al. 2015 and Bakker et al. 2016).
The likely range (i.e., spanning 66% probability or more) gives no clear guidance on how to estimate higher quantiles, such as the decision-relevant 1:100 level (Kopp et al. 2014). The chosen interpretation of likely and the assumed distribution function largely determine the extrapolation. Yet, the scientific foundation for this methodological choice is largely unclear.
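The effect of the two interpretations can be sketched numerically. The snippet below assumes, purely for illustration, that the likely range is the central interval of a normal distribution; the 0.52–0.98 m range is AR5's global mean SLR projection for 2100 under RCP8.5, while the normality assumption and the extrapolation to the 99th percentile are ours.

```python
from statistics import NormalDist

def extrapolate_q99(low, high, prob):
    """Treat [low, high] as the central `prob` interval of a normal
    distribution and extrapolate to the 99th percentile."""
    z = NormalDist().inv_cdf(0.5 + prob / 2.0)  # half-width in standard units
    mu = 0.5 * (low + high)
    sigma = (high - low) / (2.0 * z)
    return NormalDist(mu, sigma).inv_cdf(0.99)

# AR5 likely range of global mean SLR in 2100 under RCP8.5: 0.52-0.98 m
for prob in (0.66, 0.90):
    print(f"likely read as {prob:.0%} -> 99th percentile = "
          f"{extrapolate_q99(0.52, 0.98, prob):.2f} m")
```

Under these assumptions, reading likely as a 66% interval gives a 99th percentile of about 1.31 m, versus about 1.08 m for the 90% reading: the narrower interpretation of likely widens the decision-relevant tail by roughly a quarter of a meter.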
3 Expert elicitation and the role of ice sheets
Some have attributed the deep uncertainties surrounding sea-level projections to the response of the large ice sheets (Church et al. 2013; Bamber and Aspinall 2013). For instance, simulated and elicited projections of the Antarctic ice sheet contribution for the twenty-first century range from a few centimeters of global sea-level drop (Church et al. 2013) to a drastic rise of several meters, implied by an almost complete disintegration of the West Antarctic ice sheet (WAIS) (Pollard et al. 2015).
Probabilistic statements on high ice-sheet contributions are, however, controversial (Gregory et al. 2014; Clark et al. 2015). Different approaches result in widely diverging projections, as shown in Fig. 1. For example, KNMI14 (red bars) applies, in line with AR5, physically reasoned/modeled upper limits as proposed by Katsman et al. (2011), whereas Perrette et al. (2013) (cyan bars) provide much larger uncertainty ranges based on a semi-empirical approach. Alternatively, some studies apply expert judgments (e.g., Grinsted et al. 2015; Kopp et al. 2014), notably those elicited by Bamber and Aspinall (2013, hereafter BA13). Yet, different combination rules result in large differences too.
Grinsted et al. (2015) explicitly use the BA13 expert consensus to estimate the uncertain contribution of the “ignored” processes and add the BA13 estimates to the AR5’s likely range. Kopp et al. (2014), on the other hand, acknowledge the scientific consensus of AR5 and only use BA13’s estimates for the higher quantiles. Aiming for a smooth extrapolation, they fit a log-normal distribution to BA13’s expert consensus and scale this distribution down to match the AR5’s likely range. As a consequence, the 90% probability range of Kopp et al. (2014) is less than half of the range of Grinsted et al. (2015) (Fig. 1).
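The fit-and-rescale step can be sketched as follows. The quantile values are placeholders (not BA13’s or AR5’s actual numbers), and fitting the log-normal to just two quantiles is a simplification of the procedure of Kopp et al. (2014).

```python
from statistics import NormalDist
import math

# Placeholder expert-consensus quantiles (m) for an ice-sheet contribution
q50, q95 = 0.14, 0.60

# Fit a log-normal: ln(X) ~ Normal(mu, sigma), matching the two quantiles
mu = math.log(q50)
sigma = (math.log(q95) - mu) / NormalDist().inv_cdf(0.95)

def lognorm_q(p):
    return math.exp(NormalDist(mu, sigma).inv_cdf(p))

# Rescale linearly so the central 66% interval matches a placeholder
# 'likely' range, while preserving the fitted tail shape beyond it
lo_target, hi_target = 0.04, 0.21
lo_fit, hi_fit = lognorm_q(0.17), lognorm_q(0.83)
scale = (hi_target - lo_target) / (hi_fit - lo_fit)
shift = lo_target - scale * lo_fit

def scaled_q(p):
    return shift + scale * lognorm_q(p)

print(f"rescaled 99th percentile: {scaled_q(0.99):.2f} m")
```

By construction the rescaled distribution reproduces the target likely range exactly at the 17th and 83rd percentiles; everything beyond that range is inherited from the expert-based tail, which is precisely where the methodological choices diverge.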
In a critical assessment of BA13’s elicitation, De Vries and Van de Wal (2015) propose a third approach. Concerned about the weighting of experts and a too large influence of outlier opinions, they suggest using a median estimate (the red dots), a method previously applied by Horton et al. (2014). Median pooling has been shown to be especially powerful for a large group of experts with relatively little overconfidence (Park and Budescu 2015; Gaba et al. 2016), and when the intended decision is not driven by the tail behavior of the uncertainty (Hora et al. 2013). Otherwise, the median approach may result in overconfident projections (Park and Budescu 2015).
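A toy comparison of mean and median pooling of elicited quantiles illustrates the outlier sensitivity at stake; the numbers are invented solely for this purpose.

```python
import statistics

# Hypothetical 95th-percentile estimates (m) from five experts;
# the last expert is a deliberate outlier
expert_q95 = [0.3, 0.4, 0.45, 0.5, 2.0]

mean_pool = statistics.mean(expert_q95)      # pulled upward by the outlier
median_pool = statistics.median(expert_q95)  # robust to the outlier
print(f"mean pool: {mean_pool:.2f} m, median pool: {median_pool:.2f} m")
```

Here the mean pool (0.73 m) is dominated by the single outlier while the median pool (0.45 m) is not; by the same token, median pooling discards exactly the tail information that matters when decisions hinge on extremes.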
4 Conclusions and discussion
We have shown how different, all arguably reasonable, interpretations of the imprecise information of the IPCC can result in widely divergent and deeply uncertain sea-level projections. Approaches to address this problem by eliciting and combining (subjective) information from experts have provided useful insights, but still result in deeply uncertain projections.
The examples illustrate that the construction of a consensus estimate from divergent expert assessments can be subject to considerable structural (and deep) uncertainty. This is consistent with the previous assessment that there is no “objective basis for combining expert opinion” (Keith 1996). Given this deep uncertainty, many (e.g., Keith 1996; Keller et al. 2008; Lempert et al. 2012) have argued that a robust strategy, i.e., one that performs well over a wide range of plausible futures/views (Lempert et al. 1996), may be preferable to an optimal strategy. Yet, depending on the applied decision criterion, the assessed robustness of a strategy can critically hinge on the range of views considered. Thus, robust strategies can also be very sensitive to outlier opinions and to the way divergent expert assessments are aggregated (or not).
Many studies are silent on the aspect of deep uncertainty, for example by providing a single probability density function. Ignoring deep uncertainty in this way may lead to inconsistent decisions, as decision makers’ preferences often change when confronted with it (Ellsberg 1961; Budescu et al. 2014). Improving the communication of deep uncertainty, e.g., by providing multiple plausible pdfs, can help to inform the design of more robust risk management strategies. Effective communication of deep uncertainties, however, depends strongly on the decision context. Therefore, an efficient representation requires a tight interaction between decision analysts, scientists, and decision makers.
We thank Hylke de Vries, Bob Kopp, Roger Cooke, and Michael Oppenheimer for their valuable insights and comments. This work was partially supported by the National Science Foundation through the Network for Sustainable Climate Risk Management (SCRiM) under NSF cooperative agreement GEO-1240507, and the Penn State Center for Climate Risk Management. We thank Kelsey Ruckert for coding assistance. Any conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
References
- Bakker AMR, Wong TE, Ruckert KL, Keller K (2016) Sea-level projections accounting for deeply uncertain ice-sheet contributions. Nat Sci Rep (in review; http://arxiv.org/abs/1609.07119)
- Church JA, Clark PU, Cazenave A, Gregory JM, Jevrejeva S, Levermann A, Merrifield MA, Milne GA, Nerem RS, Nunn PD, Payne AJ, Pfeffer WT, Stammer D, Unnikrishnan AS (2013) Sea level change. In: Stocker TF et al (eds) Climate change 2013: the physical science basis. Contribution of working group I to the fifth assessment report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 1137–1216
- Cooke RM (1991) Experts in uncertainty: opinion and subjective probability in science. Environmental ethics and science policy series. Oxford University Press, New York
- Gaba A, Tsetlin I, Winkler RL (2016) Combining interval forecasts. Working paper 2014/58/DSC, INSEAD. doi:10.2139/ssrn.2519007
- Grinsted A (2014) AR5 sea level rise uncertainty communication failure. http://www.glaciology.net/Home/MiscellaneousDebris/ar5sealevelriseuncertaintycommunicationfailure/, visited on 2015-12-15
- Herman J, Reed P, Zeff H, Characklis G (2015) How should robustness be defined for water systems planning under change? J Water Res Plan Manag 141(10):04015012. doi:10.1061/(ASCE)WR.1943-5452.0000509
- Inter Academy Council (IAC) (2010) Climate change assessments: review of the processes and procedures of the IPCC. Royal Netherlands Academy of Arts and Sciences, Amsterdam
- Katsman CA, Sterl A, Beersma JJ, van den Brink HW, Church JA, Hazeleger W, Kopp RE, Kroon D, Kwadijk J, Lammersen R, Lowe J, Oppenheimer M, Plag HP, Ridley J, von Storch H, Vaughan DG, Vellinga P, Vermeersen LLA, van de Wal RSW, Weisse R (2011) Exploring high-end scenarios for local sea level rise to develop flood protection strategies for a low-lying delta—the Netherlands as an example. Clim Chang 109(3–4):617–645. doi:10.1007/s10584-011-0037-5
- Lempert R, Sriver RL, Keller K (2012) Characterizing uncertain sea level rise projections to support investment decisions. Tech. Rep. CEC-500-2012-056, California Energy Commission
- Mastrandrea MD, Field CB, Stocker TF, Edenhofer O, Ebi KL, Frame DJ, Held H, Kriegler E, Mach KJ, Matschoss PR, Plattner GK, Yohe GW, Zwiers FW (2010) The guidance notes for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. https://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf, visited on 2016-05-08
- Park S, Budescu DV (2015) Aggregating multiple probability intervals to improve calibration. Judgment Decis Mak 10(2):130–143
- Rahmstorf S (2013) Sea level in the 5th IPCC report. http://www.realclimate.org/index.php/archives/2013/10/sea-level-in-the-5th-ipcc-report/, visited on 2015-12-15
- Van den Hurk B, Siegmund P, Klein Tank A (2014) KNMI’14: climate change scenarios for the 21st century—a Netherlands perspective. Sci Rep WR2014-01, KNMI, De Bilt, the Netherlands. http://www.knmi.nl/bibliotheek/knmipubWR/WR2014-01.pdf, visited on 2016-05-08
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.