Bridging uncertainty concepts across narratives and simulations in environmental scenarios
Uncertainties in our understanding of current and future climate change projections, impacts and vulnerabilities are structured by scientists using scenarios, which are generally in qualitative (narrative) and quantitative (numerical) forms. Although conceptually strong, qualitative and quantitative scenarios have limited complementarity due to the lack of a fundamental bridge between two different concepts of uncertainty: linguistic and epistemic. Epistemic uncertainty is represented by the range of scenarios and linguistic variables within them, while linguistic uncertainty is represented by the translation of those linguistic variables via the fuzzy set approach. Both are therefore incorporated in the models that utilise the final quantifications. The application of this method is demonstrated in a stakeholder-led development of socioeconomic scenarios. The socioeconomic scenarios include several vague elements due to heterogeneous linguistic interpretations of future change on the part of stakeholders. We apply the so-called ‘Centre of Gravity’ (CoG) operator to defuzzify the quantifications of linguistic values provided by stakeholders. The results suggest that, in these cases, uniform distributions provide a close fit to the membership functions derived from ranges of values provided by stakeholders. As a result, the 90 or 95% intervals of the probability density functions are similar to the 0.1 or 0.05 degrees of membership of the linguistic values of linguistic variables. By bridging different uncertainty concepts (linguistic and epistemic uncertainties), this study offers a substantial step towards linking qualitative and quantitative scenarios.
Keywords: Participatory socioeconomic scenarios · Fuzzy sets · Linguistic and epistemic uncertainties · Narratives and environmental models
The drivers of climate change within socio-ecological systems, such as land-use change and greenhouse gas emissions, alter anthropogenic and climatic pressures in the systems (Schröter et al. 2005; Folke 2006; Holman et al. 2016). The interconnectedness of these drivers adds uncertainty to our understanding of recent climate behaviour and future climate change projections, impacts and vulnerabilities. Such uncertainties have often been accommodated using scenarios to systematically answer “what-if” questions (van Ittersum et al. 1998; Zurek and Henrichs 2007; van Vuuren et al. 2012). Unlike predictions and forecasts, scenarios do not imply a probability or likelihood (van Vuuren et al. 2012). Instead, scenarios have been defined as “plausible descriptions of how the future may develop, based on a coherent and internally consistent set of assumptions about key relationships and driving forces” (Alcamo et al. 2005). As such, scenarios can be in quantitative (numerical) and qualitative (narrative) forms (Bamberger 2000; Philcox et al. 2010).
The complementarity of qualitative and quantitative scenarios is considered a potential strength in addressing complex problems (Vermeulen et al. 2013) since the so-called story and simulation (SAS) approach (Alcamo et al. 2006) became mainstream in scenario development (Kok 2009). SAS consists of a ten-step approach aimed at developing and translating (often stakeholder-led) narratives into (often scientist-led) model quantifications, iterating and revising them until they are linked (van Vliet et al. 2010). SAS yields credible, plausible and innovative scenarios because of the inclusion of expert models combined with other creative elements introduced by stakeholders (Alcamo and Henrichs 2008). The co-production also ensures consistency between stakeholders and model results (Kemp-Benedict 2012; Schweizer and Kriegler 2012). The final scenarios are more relevant and legitimate for end-users as stakeholders can identify their views (i.e. stakes) in the narratives.
Although conceptually strong, operationalising SAS has issues. Alcamo (2008b) already identified two SAS pitfalls: the ‘reproducibility’ and ‘conversion’ problems. The ‘reproducibility’ problem exists because assumptions and mental models are not explicit when a scenario narrative is developed, whereas the ‘conversion’ problem exists because narratives cannot be directly translated into quantifications. Moreover, the distinction between the two problems is often not straightforward. For example, fuzzy cognitive maps (Kosko 1986), recently applied by Kok (2009) and van Vliet et al. (2010), ‘map’ variables and connections by assigning a weight to each connection. In the literature, this method has been described as improving the structure and reproducibility of qualitative scenarios (Alcamo 2008a) and has been applied as a conversion tool between qualitative and quantitative scenarios (Mallampalli et al. 2016). Nevertheless, most state-of-the-art studies have tended to focus on addressing the ‘reproducibility’ problem, as evidenced by the proliferation of systematic stakeholder-based modelling (Voinov and Bousquet 2010), whilst studies focusing on the ‘conversion’ problem show methodological trade-offs between model compatibility, stakeholder expertise and development of narratives (Mallampalli et al. 2016). Hence, further studies addressing the ‘conversion’ problem are urgently needed. However, in order to tackle the ‘conversion’ problem in SAS, there is a need to take a step back, i.e. to better understand the gaps in knowledge within both qualitative and quantitative scenarios separately, before combining them (van Vliet et al. 2010). Scenario narratives integrate imagination in strategic thinking and combine short-term preoccupations in long-term planning with analytical thinking and creative visioning (Rasmussen 2005).
Because narratives provide “holistic views” of the future and transcend the sum of single parts (Rasmussen 2005), it becomes very complex to reduce narratives to a selection of model variables. Whilst acknowledging this gap, we argue that narratives can be bridged to models, while maintaining their original characteristics.
The linguistic and epistemic sources of uncertainty remain separate until narratives are translated to produce quantifications. Therefore, narratives and models are treated as two separate products (a circle and square in Fig. 1). Even though methods have been created to translate narratives into quantification, a clear operational link between narratives and models is still lacking (Houet et al. 2016). Currently, two main approaches address the SAS ‘conversion’ problem systematically and transparently: a Bayesian reasoning approach, as outlined by Kemp-Benedict (2010), and a fuzzy sets based approach, as outlined by Kok et al. (2014) and Alcamo (2008b). Bayesian approaches are frequently used as they allow stakeholder input (prior distributions) to be refined (to produce posterior distributions) through confrontation with data (e.g. Apostolakis (1990) and Van der Sluijs (2007)). Kemp-Benedict (2010) uses Bayesian statistics to propose a direct quantification of narratives in terms of how much they differ from a reference or historical data set. The Bayesian statistics approach tackles the ‘conversion’ problem by converting qualitative elements directly into the desired model input, without extra data processing. Notwithstanding that statistical approaches (both frequentist and Bayesian) structure uncertainty due to unavoidable randomness and imperfect knowledge, we argue that including this in participatory (stakeholder) scenario development poorly covers the full spectrum of uncertainty. Even a systematic and transparent approach such as Kemp-Benedict’s (2010) is not universally applicable: assumptions are quantified in a priori distributions, and we question whether starting from such assumed distributions is realistic when the participatory setting is characterised by diverging stakeholder expertise.
The real challenge for SAS is to account for vagueness when narratives are developed in participatory settings, especially because stakeholder engagement “has become almost a ‘must’” (Voinov and Bousquet 2010). The socioeconomic scenarios that we analyse in this paper are to be understood in this context; they are co-developed by scientists and stakeholders. The socioeconomic scenarios, therefore, include vague or imprecisely defined terms and elements within the narratives, due to the heterogeneity of assumptions, e.g. from a wide range of viewpoints and expertise (Mallampalli et al. 2016). One approach to overcome these issues is to combine stakeholder and other expert opinions to define numerical ranges with associated levels of confidence and probability density functions that introduce extra assumptions (Schoemaker 1991; Refsgaard et al. 2007; Brown et al. 2014). Such an approach, however, has two deficiencies: it requires a (generally opaque) mixture of inputs from two distinct groups, i.e. modellers and stakeholders, and it ignores any outliers and asymmetries within the stakeholder-generated input data. The fuzzy sets approach, in contrast, relies on the existing structure of the stakeholder data to generate probability density functions. Consequently, the method prioritises fidelity and transparency, with the important advantage that final outputs can be directly linked to stakeholders’ inputs. In addition, fuzzy sets allow for weighting on the basis of stakeholders’ professed confidence levels.
In this paper, the objective is to develop and apply an objective approach to translate the linguistic uncertainty in narratives into epistemic uncertainty of model input. Thereby, we address the ‘conversion’ problem when linking narratives with models. After developing the participatory scenarios, we assign the concepts of linguistic uncertainty to the narratives and epistemic uncertainty to model quantifications of the scenario study. We describe the operationalisation of these concepts by first translating the vagueness of narratives to linguistic variables and fuzzy sets and secondly deriving probability density functions from these to generate model input. This paper complements earlier studies on fuzzy sets, e.g. Alcamo (2008b) and Kok et al. (2014), that focus on results only. The discussion addresses our choice to ‘bridge’ linguistic and epistemic uncertainties, rather than attempting to reduce them.
Design and methods
Developing participatory scenarios
The socioeconomic scenarios analysed in this paper are developed within the EU-funded IMPRESSIONS project (Harrison et al. in review). The objective of the scenarios is to provide the context for understanding future impacts, adaptation and vulnerability to climate change at different scales in Europe. Because of their geographic breadth and their narrative and quantitative character, the IMPRESSIONS scenarios can be applied to test the fuzzy sets methodology and compare the results across heterogeneous stakeholder groups. The IMPRESSIONS scenarios were intended to directly inform numerical inputs to a European-scale model known as the Integrated Assessment Platform (IAP) (Harrison et al. 2015), with the scenario-specific default values and uncertainty ranges of model inputs being derived from the stakeholder inputs. The quantifications derived from stakeholder input explicitly recognise both linguistic and epistemic uncertainty, in that the stakeholders intended the ranges (and the scenarios within which they occur) to allow for epistemic uncertainties in the quantities described (section “Producing narratives and quantifications in a participatory scenario process”), while the fuzzy-set and probabilistic interpretation accounts for linguistic uncertainty (sections “Measuring vagueness using fuzzy sets” and “Probabilistic interpretation of vagueness”).
The results discussed in this paper are based on scenario quantification for five case studies covering Europe and Central Asia: European-scale scenarios (Europe) (Kok et al. in review); Central Asian scenarios (Central Asia); national-scale scenarios for Scotland (Scotland); river basin-scale scenarios for Iberia (Iberia); and municipality-level scenarios for Hungary (Hungary). For the Central Asia, Iberia and Hungary case studies, heterogeneous groups of stakeholders were engaged. For Europe and Scotland, scientists acted as stakeholders to modify a set of existing stakeholder-developed scenarios (Harrison et al. 2013, 2015). In this paper, we refer to these scientists as ‘expert stakeholders’.
Producing narratives and quantifications in a participatory scenario process
Stakeholders were invited to produce narratives and key quantifications in a 2-day workshop within each case study. Stakeholders were selected to cover a wide range of expertise on different sectors and to have different age, country, and educational backgrounds (see Gramberger et al. 2015, and Supplementary Material 1 for details on the stakeholder selection process). All scenario products were produced by stakeholders as the result of different workshop processes that alternated brainstorming sessions in groups with plenary discussions, called the STIR approach (Gramberger et al. 2015). The approach is designed such that both narratives and key input quantifications become intrinsically connected by having the same mix of facilitators and stakeholders producing both scenarios and quantifying key drivers. Such an approach is fundamental for consistent co-production of both narratives and quantifications (we refer to Supplementary Material 2 for all steps in the co-production of narratives and quantifications).
The workshop process consists of three main components. In the first component, stakeholders are guided to list, discuss and select key uncertainties relevant to all scenarios and further develop narratives for each individual scenario. In the second component, narratives are discussed in groups (group exercise), where stakeholders are asked to provide qualitative (linguistic) trends for key variables in a written questionnaire (an example of the group exercise questionnaire is provided in Supplementary Material 3). The variables’ descriptions are presented to stakeholders without rephrasing the modellers’ wording to avoid misinterpretation. The third component consists of an individual exercise (an example of the individual exercise questionnaire is provided in Supplementary Material 3), where each stakeholder provides their personal opinion on what quantitative ranges represent those qualitative trends.
Given inevitable time constraints and the importance of completing all steps within the workshop, the maximum number of variables that could be quantified by stakeholders was limited to three or four. These variables were selected based on two criteria. First, the variables had to reflect the expertise of most of the invited stakeholders and nest well among the key issues for the case study. Second, the variables should relate to model input parameters that were among the most sensitive in the model. For the Europe case study, the results of a full sensitivity analysis of the IAP (Kebede et al. 2015) were available to support the choice of sensitive variables. In addition, stakeholders provided qualitative guidance to inform the quantification by the modellers of a much wider range of socioeconomic variables used within the impact models. For example, stakeholders provided trends for four capitals (human, social, manufactured, and financial) of resource availability (Porritt 2007) to be applied to model vulnerabilities to climate change (Dunford et al. 2015). This paper only analyses the trends for the stakeholder-quantified variables; therefore, analysis of capital trends is excluded.
Together with trends and quantifications of variables, stakeholders were asked to indicate their confidence when quantifying each variable, to obtain qualitative information on their professed confidence in the quantifications. The question asked of both stakeholders and expert stakeholders after the quantification of each variable was: ‘How confident are you for the quantification you provide for this variable based on your background knowledge (0–10)? (0 = not confident; 10 = very confident)’. We classified the data into four categories to analyse a qualitative ‘confidence index’. The sample was not suited to statistical inference: firstly, the sample size was too limited due to resource constraints; secondly, the motives behind confidence levels cannot be assessed; thirdly, a psychological and cultural analysis of the stakeholders’ backgrounds is beyond the scope of this analysis. We therefore analysed the confidence index qualitatively. The analysis of this index follows one of the main assumptions introduced in this study, i.e. that incorporating a subjective level of confidence could alter the value or weight of a quantification (Van der Sluijs 2007).
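The four-category classification of the 0–10 confidence scores can be sketched as a simple binning function. The paper does not state the category boundaries, so the cut-points and labels below are illustrative assumptions only:

```python
def confidence_category(score: float) -> str:
    """Map a self-reported 0-10 confidence score to one of four qualitative
    categories. The cut-points and labels here are assumed for illustration;
    the study reports four categories but not their exact boundaries."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 2:
        return "not confident"
    if score <= 5:
        return "somewhat confident"
    if score <= 8:
        return "confident"
    return "very confident"
```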
In the next section, we explore how to proceed with quantification of a given variable using the ranges provided by stakeholders during a facilitated workshop. We use the variable ‘Change in food imports from 2010’ for the Europe case study as an example to follow the methodological steps and to illustrate the results of other linguistic variables.
Addressing the ‘conversion’ problem: quantification of narratives
Measuring vagueness using fuzzy sets
Vagueness in stakeholder-driven scenarios exists because each stakeholder could have a different interpretation of the linguistic term ‘high increase’ in food imports compared to 2010. To measure this vagueness, each stakeholder was asked to define what he/she personally meant by ‘high increase’ by providing a numerical range. The analysis of ranges derived from stakeholder values assumes that each stakeholder has a different, but equally valid, interpretation of the same statement due to different backgrounds, beliefs, knowledge, and so on.
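One common way to aggregate such ranges into a fuzzy membership function is to count, at each point of a numerical grid, how many stakeholder ranges cover that point and normalise the result to [0, 1]. The paper's exact aggregation rule is not reproduced here; the sketch below, with hypothetical ranges for 'high increase' in food imports, illustrates the principle:

```python
import numpy as np

def membership_from_ranges(ranges, grid):
    """Build a fuzzy membership function from stakeholder ranges.

    Each (lo, hi) pair is one stakeholder's numerical interpretation of a
    linguistic value such as 'high increase'. Membership at a grid point is
    the number of ranges covering that point, normalised so that the most
    widely shared values have membership 1."""
    counts = np.zeros_like(grid, dtype=float)
    for lo, hi in ranges:
        counts += (grid >= lo) & (grid <= hi)
    peak = counts.max()
    return counts / peak if peak > 0 else counts

# Hypothetical ranges (% change in food imports from 2010) for 'high increase'
ranges = [(20, 50), (30, 60), (25, 45), (40, 80)]
grid = np.linspace(0, 100, 1001)  # 0.1 % resolution
mu = membership_from_ranges(ranges, grid)
```

Points supported by many overlapping stakeholder ranges receive high membership, so the function directly reflects where interpretations agree and where they diverge.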
We define the analysed variable as a ‘linguistic variable’ with a ‘linguistic value’. A linguistic value is the vague analogue of a numerical value: the imprecise, non-numerical value of a linguistic variable (Zadeh 1975a). The linguistic variable and the model variable are given the same meaning to reduce the risk of misinterpretation; that is, the model variable is presented to stakeholders as a linguistic variable, without reducing or simplifying the meaning of the original model variable.
The CoG provides the basis for converting the qualitative changes into quantitative changes for each scenario, as described below. The quantified changes are then run in the IAP. The uncertainty around these changes is handled as set out below.
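The CoG operator itself is the standard membership-weighted mean of the fuzzy set's support. A minimal sketch (the triangular membership function here is illustrative, not taken from the study's data):

```python
import numpy as np

def centre_of_gravity(grid, mu):
    """Defuzzify a membership function via the Centre of Gravity (CoG):
    the membership-weighted mean of the fuzzy set's support."""
    total = mu.sum()
    if total == 0:
        raise ValueError("cannot defuzzify an empty fuzzy set")
    return float((grid * mu).sum() / total)

# Illustrative triangular membership peaking at 30 over the support [10, 50]
grid = np.linspace(0, 100, 1001)
mu = np.clip(1 - np.abs(grid - 30) / 20, 0, 1)
cog = centre_of_gravity(grid, mu)  # symmetric set, so the CoG is its peak
```

For asymmetric membership functions (e.g. when stakeholder ranges cluster towards one end), the CoG is pulled towards the bulk of the membership rather than the peak, which is why it is a reasonable single ‘default’ summary of diverging interpretations.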
Probabilistic interpretation of vagueness
The scenario quantification in the IAP recognises both epistemic uncertainty and the validity of each stakeholder’s perspective on future changes through the derivation of probability density functions for each model input. For each linguistic value of each linguistic variable, the CoG represents the single output of a fuzzy set. We define the CoG as the default value of the membership function. The variation in values around this default value is taken to define the linguistic uncertainty, which must be taken into account in subsequent analysis (modelling) steps. Generally, this can be achieved by representing the variation as a probability density function (PDF), allowing the form and range of the stakeholders’ quantification to be retained, and also allowing parameter sampling for rigorous sensitivity or uncertainty analyses. Ideally, the form of the PDF will be derived from the frequency of values suggested by stakeholders, with Gaussian and uniform distributions offering particularly useful and contrasting alternatives. However, in many cases, the appropriate function is not clear, either because of an asymmetric or multimodal frequency, or because inadequate frequency data are available. In these cases, it may be preferable to define discontinuous probabilities that minimise the need for additional assumptions to be made. In the case of the IAP, input parameter values were assigned different (linguistic) ranges: a ‘credible’ range within which the ‘default’ value occurs and a wider ‘possible’ range. Assuming a probability distribution around the CoG, we define, for each linguistic variable, a ‘default’, a ‘credible’ and a ‘possible’ range. Beta distributions are often chosen in the literature when non-normality is assumed due to their flexibility and limited ranges (Brown et al. 2014); however, they are not universally appropriate.
For consistency with previous work, we use beta-distributions here, but fitted distributions all had a low alpha and beta (between 1 and 2), suggesting that uniform distributions may have provided alternative adequate fits.
This scaled CoG value was defined as the modal or ‘default’ value, with the 90% range taken to define the ‘credible range’ and the 95% range taken to define the ‘possible range’. This choice is based on previously selected confidence ranges with the IAP (Brown et al. 2014) and on expert judgement on the distributions fitted to the data obtained by stakeholders, but other choices would have been equally possible.
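Under these conventions, deriving the ‘default’ value and the ‘credible’ and ‘possible’ ranges from stakeholder values can be sketched as below. The helper function and its fallbacks are our own illustration, not the IAP code; a beta distribution is fitted on the scaled [0, 1] support, in the spirit of Brown et al. (2014):

```python
import numpy as np
from scipy import stats

def default_and_ranges(values, lo, hi):
    """Fit a beta distribution to stakeholder values bounded on [lo, hi]
    and derive a 'default' value (the distribution mode, falling back to
    the mean when the fit is near-uniform), a 'credible' (90%) range and
    a 'possible' (95%) range, mirroring the IAP input conventions."""
    x = (np.asarray(values, dtype=float) - lo) / (hi - lo)  # scale to [0, 1]
    a, b, _, _ = stats.beta.fit(x, floc=0, fscale=1)        # fix the support
    mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else a / (a + b)
    rescale = lambda pair: tuple(lo + v * (hi - lo) for v in pair)
    return (lo + mode * (hi - lo),
            rescale(stats.beta.interval(0.90, a, b)),   # 'credible' range
            rescale(stats.beta.interval(0.95, a, b)))   # 'possible' range

# Hypothetical stakeholder values for a variable bounded on [0, 100]
default, credible, possible = default_and_ranges(
    [25, 30, 35, 40, 45, 50, 55, 60], lo=0, hi=100)
```

When the fitted alpha and beta both approach 1, the beta distribution degenerates towards a uniform distribution, which is consistent with the near-uniform fits reported above.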
Addressing the ‘conversion’ problem: quantification of scenario trends and input to the integrated assessment platform
Analysis of assumptions in quantifications of scenario trends
To test whether uniformity could be the best assumption, we accounted for and qualitatively analysed the performance of the ‘confidence’ index that the stakeholders and expert stakeholders provided with their quantification of the linguistic values for each single linguistic variable (Supplementary Material 5). Three possible interpretations of the results were considered: (a) stakeholders, aware of their insufficient background knowledge, provide ‘very unlikely’ values and a low ‘confidence’ index or (b) stakeholders provide what an expert would consider a ‘reasonable or realistic’ estimate and are either ‘confident’ or ‘less confident’ or (c) stakeholders under- or overstate their expertise.
The qualitative analysis of the ‘confidence’ index supports our choice of a uniform-like probabilistic representation of likely values for two reasons. Firstly, we observed similar patterns in the ‘confidence’ index across the same case study rather than for the same variable. For example, confidence is generally lower for Europe and Scotland case studies compared to Central Asia, Hungary and Iberia. Stakeholders and expert stakeholders all tended to provide a similar confidence level independent of the variable, and thus independent of background knowledge. Secondly, we found no obvious correlation between ‘reasonable’ quantifications and the ‘confidence’ index for stakeholders or expert stakeholders: low confidence may indicate lack of knowledge in cases where unrealistic ranges were provided, or may indicate a critical attitude when reasonable ranges were provided. With the information available, however, we cannot infer whether, and how much, cultural and other personal factors (such as a critical attitude or understanding of the exercise) played a role in the ‘confidence’ index.
Our analysis demonstrates and applies a method for translating vagueness into probabilities that offers a dual benefit: it improves current practice when operationalising the story-and-simulation approach, while also improving statistical and fundamental understanding of how uncertainties are perceived and dealt with.
To this end, we addressed linguistic and epistemic uncertainties by ‘bridging’ them rather than attempting to reduce them, because the narratives provide ‘holistic views’ of the future that the models could not fully capture. In our approach, both narratives and models remain ‘black boxes’ (see the circle and square in Fig. 1) throughout the analysis: neither the linguistic uncertainties of the narratives nor the epistemic uncertainties of the modelling are reduced, but ‘bridged’. Effective methods do exist to unravel the individual ‘black boxes’. These methods have the advantage of adding structure to either narratives (van Vliet et al. 2012) or to both narratives and models (see, for example, the cross-impact balance approach for narratives in Schweizer and Kriegler (2012) or fuzzy cognitive mapping to link narratives and models in van Vliet et al. (2010)). However, these methods do not yet address the different uncertainties, and are either less transparent or too complicated to carry out in stakeholder workshops.
We chose to represent the stakeholders’ fuzzy numbers with uniform-like distributions for input in the IAP to avoid adding further assumptions and unintended interpretation. However, we provide a description of stakeholder ranges to enable a qualitative comparison with direct quantification of stakeholder-led narratives by impact modellers (Fig. 6). This analysis shows that linguistic values in all case studies lead to different PDFs. In the Hungarian case study, Gaussian probabilities could approximate distributions reasonably well, in most cases, once the ‘0’ values (or zero uncertainty) were removed. We interpreted this result as supporting the idea that stakeholders themselves can ‘bridge’ linguistic and epistemic uncertainty within the fuzzy sets approach. Stakeholders may provide reasonable ranges and could substitute expert judgement from impact modellers, at least for selected variables.
In sections “Introduction” and “Design and methods”, we introduced the assumption that direct quantification of stakeholder-led narratives by impact modellers could add ‘assumptions on assumptions’, and fail to faithfully translate uncertainties, if impact modellers rely solely on their own ad hoc judgements (Mallampalli et al. 2016) or misinterpret stakeholders’ reasoning and opinions. Impact modellers are well experienced in addressing epistemic/aleatory uncertainty, but address linguistic uncertainty less systematically (Regan et al. 2002). We tested this assumption by qualitatively comparing trends of stakeholders’ and expert stakeholders’ ranges across case studies. An alternative approach would have been to make a single assumption about interpreting all variable ranges in terms of PDFs, or a different assumption for each variable; however, after preliminary screening of the results, the limited number of participants and variables, together with the different types of variables analysed, ruled out a direct comparison among similar variables. Alternatively, a quantitative analysis could be used to validate stakeholder-led quantifications from a modelling output perspective. For example, Monte Carlo sampling from the PDFs generated for sensitive variables such as GDP, population and food imports (Kebede et al. 2015; Brown et al. 2014) could be performed to analyse how both stakeholder-led input uncertainty around the CoG and PDFs generated by direct modeller quantification of the narratives propagate through impact models.
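Such a propagation analysis could be sketched as follows. The driver names, ranges and the toy response function are placeholders for the real IAP inputs and model; only the Monte Carlo mechanics are the point:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Placeholder PDFs for sensitive drivers: uniform over hypothetical
# stakeholder-derived ranges (the real ranges would come from the workshops)
drivers = {
    "gdp_growth_pct":   stats.uniform(loc=1.0, scale=2.0),  # 1-3 %/yr
    "population_mln":   stats.uniform(loc=480, scale=60),   # 480-540 million
    "food_imports_pct": stats.uniform(loc=20, scale=40),    # +20 to +60 %
}

def toy_impact_model(gdp, pop, imports):
    """Stand-in for the IAP: any scalar response to the sampled drivers."""
    return 0.5 * gdp + 0.001 * pop + 0.02 * imports

n = 10_000
samples = {name: dist.rvs(size=n, random_state=rng)
           for name, dist in drivers.items()}
outputs = toy_impact_model(samples["gdp_growth_pct"],
                           samples["population_mln"],
                           samples["food_imports_pct"])
# The spread of outputs summarises how input uncertainty propagates
out_5, out_95 = np.percentile(outputs, [5, 95])
```

Running the same sampling once with stakeholder-led PDFs and once with modeller-derived PDFs would allow the two resulting output distributions to be compared directly.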
We have also assumed that, even if probability-based quantification is highly appropriate given the inevitable approximations, there is no further information that can be introduced to define the form of the PDF. In contrast, mainstream alternatives generally structure both narrative and quantification assumptions using probabilistic methods based on Bayesian statistics, such as Bayesian networks (e.g. Henriksen and Barlebo 2008) and Bayesian reasoning (Kemp-Benedict 2013). We concluded that both methods would have been either incomplete or misleading in our analysis. Bayesian-based methods can structure uncertainties more transparently with prior and posterior distributions that quantify how assumptions change as information is acquired. However, even such methods can be problematic if (1) little agreement exists on the source of the data, especially in data-scarce case studies such as Central Asia, and (2) participants have different expertise. Our qualitative analysis of the ‘confidence’ index also shows that stakeholders’ and scientists’ (expert stakeholders’) assumptions may be very different and difficult to predict. Even in ‘expert stakeholder’ participatory contexts, extra assumptions need to be minimised.
We suggest further consolidating these results with a quantitative uncertainty analysis from the IAP output perspective, and with efforts to strengthen the input from a stakeholder workshop perspective. The stakeholders appreciated the usefulness of this exercise and generally agreed that stakeholders can help modellers in quantifying key scenario drivers. However, some stakeholders found the exercise difficult, and this may have resulted in the generation of ‘outliers’. Nonetheless, ‘outliers’ were included in the quantification of the CoG because we interpreted them as extreme values or ranges consciously provided by the stakeholders; we limited ourselves to excluding physically impossible values. The most important exception was the quantification of GDP for the Central Asia case study, where extreme growth trends resulted in two very extreme scenarios. When faced with a choice between their quantification and model-led trends, stakeholders chose the (very extreme) trends resulting from their quantification. In this case, a compromise was made by applying the stakeholder-provided trend at the upper limit of what the models could represent. As the confidence index analysis also suggests, however, there could be different but equally legitimate reasons for stakeholders to provide their trends. To reduce ‘outliers’ in future exercises, we suggest improving the participatory process at the root of this quantification by better adapting questionnaires and processes to the stakeholders involved in the workshop, e.g. by including a formal stakeholder-mapping exercise prior to the workshops to address uncertainty about the representativeness of stakeholders.
The fuzzy set method has been recognised as a simple and transparent method (Alcamo 2008b; Kok et al. 2014; Houet et al. 2016) applicable from global to local case studies, despite room for further improvement from stakeholder engagement perspectives. This analysis, based on the assumption that stakeholder values are the best available (or at least better than some poorly defined combination of stakeholder and modeller inputs), is the first to show that stakeholders can provide reasonable ranges and that these can be used without adding further assumptions. Ongoing studies applying stakeholders’ quantifications with fuzzy sets within impact models (e.g. Li et al. 2017) will further explore this potential in different modelling environments, aiming at a universally accepted tool to produce quantifications from narratives. Even though a formal validation step may lead to changes in the method, the core steps described are transparent and can be reproduced by practitioners in the field in any workshop setting.
At an epistemological level, this analysis contributes to enhanced dialogue and understanding between modeller-led and local, stakeholder-led communities, and to the linkage of qualitative and quantitative approaches, by bridging the different uncertainty concepts (linguistic, epistemic and aleatory uncertainties) addressed by their research questions. We therefore did not simplify the relevant uncertainties (e.g. by conflating fuzzy logic and probabilities), but created a common, systematic language between the two communities. We further hope that our research will draw more attention to fundamental issues around different sources of uncertainty in participatory scenario development.
The research was financially supported by the IMPRESSIONS project, funded by the European Union’s Seventh Framework Programme for research, technological development and demonstration under Grant Agreement Number 603416. We are very thankful to Prof Rik Leemans for his supervision and comments on draft versions of the manuscript. We are also very thankful to Dr Evert-Jan Bakker, Prof Gerard Heuvelink and Arturo Torres Matallana (MSc) for their insights on statistics and the insightful comments of several anonymous reviewers.
- Alcamo J (2008a) The SAS approach: combining qualitative and quantitative knowledge in environmental scenarios. In: Alcamo J (ed) Environmental futures: the practice of environmental scenario analysis, volume 2, 1st edition. Amsterdam, pp 123–150
- Alcamo J (2008b) Drawbacks of SAS and a way forward. In: Alcamo J (ed) Environmental futures: the practice of environmental scenario analysis, volume 2, 1st edition. Amsterdam, pp 141–146
- Alcamo J, Henrichs T (2008) Towards guidelines for environmental scenario analysis. In: Alcamo J (ed) Environmental futures: the practice of environmental scenario analysis, volume 2, 1st edition. Amsterdam, pp 13–35
- Alcamo J, van Vuuren DP, Rosegrant M, Alder J, Bennett E, Lodge D, Masui T, Morita T, Ringler C, Sala O, Schulze K, Zurek M, Eickhout B, Maerker M, Kok K (2005) Methodology for developing the MA scenarios. In: Carpenter SR, Pingali PL, Bennett EM, Zurek MB (eds) Ecosystems and human well-being: scenarios 2. Island Press, Washington, pp 145–172
- Alcamo J, Kok K, Busch B, Priess JA, Eickhout B, Rounsevell M, Rothman DS, Heistermann M (2006) Searching for the future of land: scenarios from the local to global scale. In: Lambin E, Geist H (eds) Land-use and land-cover change. Global change - the IGBP series. Springer-Verlag, Berlin, pp 137–155
- Cornelissen AMG, van den Berg J, Koops WJ, Grossman M, Udo HMJ (2001) Assessment of the contribution of sustainability indicators to sustainable development: a novel approach using fuzzy set theory. Agric Ecosyst Environ 86:173–185. https://doi.org/10.1016/S0167-8809(00)00272-3
- Harrison P, Holman I, Cojocaru G, Kok K, Kontogianni A, Metzger M, Gramberger M (2013) Combining qualitative and quantitative understanding for exploring cross-sectoral climate change impacts, adaptation and vulnerability in Europe. Reg Environ Chang 13:761–780. https://doi.org/10.1007/s10113-012-0361-y
- Harrison PA, Jaeger J, Frantzeskaki N, Berry P (in review) Understanding high-end climate change: From impacts to co-creating integrated and transformative solutions: An introduction to the IMPRESSIONS project. Regional Environmental Change (this Special Issue)
- Houet T, Marchadier C, Bretagne G, Moine MP, Aguejdad R, Viguié V, Bonhomme M, Lemonsu A, Avner P, Hidalgo J (2016) Combining narratives and modelling approaches to simulate fine scale and long-term urban growth scenarios for climate adaptation. Environ Model Softw 86:1–13. https://doi.org/10.1016/j.envsoft.2016.09.010
- Kebede AS, Dunford R, Mokrech M, Audsley E, Harrison PA, Holman IP, Nicholls RJ, Rickebusch S, Rounsevell MDA, Sabaté S, Sallaba F, Sanchez A, Savin C, Trnka M, Wimmer F (2015) Direct and indirect impacts of climate and socio-economic change in Europe: a sensitivity analysis for key land- and water-based sectors. Clim Chang 128:261–277. https://doi.org/10.1007/s10584-014-1313-y
- Kemp-Benedict E (2013) Going from narrative to number: indicator-driven scenario quantification. In: Recent developments in foresight methodologies. Springer, pp 123–131. https://doi.org/10.1007/978-1-4614-5215-7_8
- Kok K, Pedde S, Gramberger M, Harrison PA, Holman IP (in review) From Shared Socioeconomic Pathways to European socioeconomic scenarios for climate change research. Regional Environmental Change (this Special Issue)
- Mallampalli VR, Mavrommati G, Thompson J, Duveneck M, Meyer S, Ligmann-Zielinska A, Druschke CG, Hychka K, Kenney MA, Kok K, Borsuk ME (2016) Methods for translating narrative scenarios into quantitative assessments of land use change. Environ Model Softw 82:7–20. https://doi.org/10.1016/j.envsoft.2016.04.011
- O'Hagan A, Buck CE, Daneshkhah A, Eiser JR, Garthwaite PH, Jenkinson DJ, Oakley JE, Rakow T (2006) Fundamentals of probability and judgement. In: Uncertain judgements: eliciting experts' probabilities. John Wiley & Sons, Ltd, Chichester, pp 1–24. https://doi.org/10.1002/0470033312.ch1
- Regan HM, Colyvan M, Burgman MA (2002) A taxonomy and treatment of uncertainty for ecology and conservation biology. Ecol Appl 12:618–628. https://doi.org/10.1890/1051-0761(2002)012[0618:ATATOU]2.0.CO;2
- Schröter D, Cramer W, Leemans R, Prentice IC, Araújo MB, Arnell NW, Bondeau A, Bugmann H, Carter TR, Gracia CA, de la Vega-Leinert AC, Erhard M, Ewert F, Glendining M, House JI, Kankaanpää S, Klein RJT, Lavorel S, Lindner M, Metzger MJ, Meyer J, Mitchell TD, Reginster I, Rounsevell MDA, Sabaté S, Sitch S, Smith B, Smith J, Smith P, Sykes MT, Thonicke K, Thuiller W, Tuck G, Zaehle S, Zierl B (2005) Ecosystem service supply and vulnerability to global change in Europe. Science 310:1333–1337. https://doi.org/10.1126/science.1115233
- Vermeulen SJ, Challinor AJ, Thornton PK, Campbell BM, Eriyagama N, Vervoort JM, Kinyangi J, Jarvis A, Läderach P, Ramirez-Villegas J, Nicklin KJ, Hawkins E, Smith DR (2013) Addressing uncertainty in adaptation planning for agriculture. Proc Natl Acad Sci 110:8357–8362. https://doi.org/10.1073/pnas.1219441110
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.