Biodiversity and Conservation

Volume 25, Issue 7, pp 1285–1300

Selecting appropriate methods of knowledge synthesis to inform biodiversity policy

  • Andrew Pullin
  • Geoff Frampton
  • Rob Jongman
  • Christian Kohl
  • Barbara Livoreil
  • Alexandra Lux
  • György Pataki
  • Gillian Petrokofsky
  • Aranka Podhora
  • Heli Saarikoski
  • Luis Santamaria
  • Stefan Schindler
  • Isabel Sousa-Pinto
  • Marie Vandewalle
  • Heidi Wittmer
Open Access
Original Paper

DOI: 10.1007/s10531-016-1131-9

Cite this article as:
Pullin, A., Frampton, G., Jongman, R. et al. Biodivers Conserv (2016) 25: 1285. doi:10.1007/s10531-016-1131-9

Abstract

Responding to different questions generated by biodiversity and ecosystem services policy or management requires different forms of knowledge (e.g. scientific, experiential) and knowledge synthesis. Additionally, synthesis methods need to be appropriate to policy context (e.g. question types, budget, timeframe, output type, required scientific rigour). In this paper we present a range of different methods that could potentially be used to conduct a knowledge synthesis in response to questions arising from knowledge needs of decision makers on biodiversity and ecosystem services policy and management. Through a series of workshops attended by natural and social scientists and decision makers we compiled a range of question types, different policy contexts and potential methodological approaches to knowledge synthesis. Methods are derived from both natural and social sciences fields and reflect the range of question and study types that may be relevant for syntheses. Knowledge can be available either in qualitative or quantitative form and in some cases also mixed. All methods have their strengths and weaknesses and we discuss a sample of these to illustrate the need for diversity and importance of appropriate selection. To summarize this collection, we present a table that identifies potential methods matched to different combinations of question types and policy contexts, aimed at assisting teams undertaking knowledge syntheses to select appropriate methods.

Keywords

Evidence-based policy · Biodiversity policy · Decision-making · Ecosystem services · Knowledge brokerage · Evidence synthesis · Knowledge transfer

Introduction

There is an increasing demand from multiple policy sectors of society for the process of policy making to be informed by the best available knowledge. Knowledge is generated and communicated in diverse formats and its volume is increasing rapidly, presenting significant challenges in searching, collating and synthesising relevant information in a form that is credible, reliable and legitimate from a decision maker’s perspective (Cash et al. 2003; Sarkki et al. 2013). This is also the case for knowledge on biodiversity and ecosystem services (and their interactions with other sectors, interests or needs). Management of biodiversity and ecosystem services generates a broad spectrum of knowledge, from traditional or indigenous knowledge to experimental and science-based understanding. In turn, the knowledge needs of decision makers reflect this spectrum, and methods of knowledge gathering and synthesis are dependent on the types of uncertainty faced and the social context in which the decision needs to be made (e.g. Breckon and Dodson 2016).

The diversity of approaches to knowledge synthesis is manifest in the diversity of questions generated by the policy and broader decision-making communities (practice and management) (e.g. Sutherland et al. 2006). The challenge is to provide up-to-date synthesis for decision makers, both directly concerned with biodiversity and ecosystem services as well as from other sectors that might affect or enter into conflict with biodiversity conservation or with ensuring ecosystem services. The synthesis should be precisely tailored to the request (see Livoreil et al. 2016), in accordance with the policy agenda (which often requires limited time windows and budgets), ensuring its legitimacy from both policy-maker and wider stakeholder perspectives. An important challenge is to ensure that concerns from different sectors are adequately taken into account and an agreement on a common knowledge base is reached to avoid having stakeholders with opposing interests presenting opposing evidence or knowledge. This challenge, which applies to the policy process in general, is particularly demanding for environmental and biodiversity policy—owing to the inter-disciplinary and complex nature of the opinions and interests involved (Nesshöver et al. 2016).

Synthesis methodologies are ex-ante assessments that do not generate any new empirical data but seek to identify, compile and combine relevant knowledge from various sources, so that it is understandable as a single unit and readily available for decision makers who want to draw on the best available evidence. Methods of synthesis vary according to the type of question, the type of knowledge sought and the policy context (e.g. stage of the policy cycle and timeframe). Selection of an appropriate method can be crucial to the inclusion of appropriate knowledge in the decision-making process. Frustration over knowledge flow can occur both from the policy community, when knowledge is not provided in the necessary time window for decision making, and from the scientific community, when knowledge syntheses are apparently ignored or seen as irrelevant to current evidence needs (Dietz and Stern 1998; Owens 2005; Sharman and Holmes 2010; Jordan and Russel 2014).

In this paper, we aim to present an initial illustrative decision matrix tailored for biodiversity and ecosystem service knowledge that can be developed through future iterations. The matrix’s objective is to provide guidance on the selection of appropriate methods of synthesis for a diversity of questions that may be posed by policy makers; as far as we are aware, it is the first effort of this kind applied to environmental knowledge synthesis.

Methods

A workshop on ‘method selection for providing evidence to policy on biodiversity and ecosystem services’ was convened in Frankfurt, Germany, in January 2014 to develop the decision matrix (hereafter referred to as the Frankfurt Workshop). Participants were invited who had expertise in knowledge synthesis methods from both the social and natural sciences in the field of biodiversity and ecosystem services. Participants included six academic researchers from the KNEU project and ten external researchers. All participants were selected for their expertise in policy-relevant research and/or knowledge synthesis methodology (see Table 3 in Appendix 1 for a full list of participants and their affiliations). The workshop considered a range of question types (see below) with respect to knowledge needs to inform decision making, arising from three previous workshops convened by the KNEU project in different regions of Europe (hereafter referred to as regional workshops) (Carmen et al. 2015). In these initial regional workshops a broad range of policy makers, stakeholders and scientists had been asked to formulate questions and specify knowledge needs, broadly following the methods of Sutherland et al. (2006): initial collection of questions followed by discussion and prioritisation sessions. The outcomes then informed the Frankfurt Workshop in terms of identifying the spectrum of potential requests submitted to a knowledge synthesis process.

Questions arising from the three regional workshops were classified by the Frankfurt Workshop participants into different types with regard to the evidence sought, as follows:

I. Seeking better understanding of an issue (including predictions and forecasting):
  1. Seeking greater understanding or predictive power (e.g. how does green infrastructure contribute to human well-being?)
  2. Scenario building to analyse future events (e.g. how will the risk of flooding change under different climate change scenarios?)
  3. Horizon scanning (e.g. what will be the most significant novel threats to biodiversity in 2050?)
  4. Seeking understanding of changes in time and space (e.g. how has the distribution and abundance of rabies in fox populations changed in the last 10 years?)

II. Identifying appropriate ways and means of realising certain decisions:
  5. Seeking measures of anthropogenic impact (e.g. what is the impact of wind farm installations on bird populations?)
  6. Seeking measures of the effectiveness of interventions (e.g. how effective are marine protected areas at enhancing commercial fish populations?)
  7. Seeking appropriate methodologies and associated trade-offs (e.g. what is the most reliable method for monitoring changes in carbon stocks in forest ecosystems?)
  8. Seeking optimal management (e.g. what is the optimal grazing regime for maximizing plant diversity in upland meadows?)

III. Improving understanding of possibilities and boundaries for decision-making:
  9. Assessing public opinion and perception (e.g. is there public support for badger culling in the UK?)
  10. Seeking people’s understanding and providing definitions (e.g. how do different people or groups understand ecosystem services?)
At the Frankfurt Workshop, participants were asked to identify and describe, based on their experience, possible policy contexts in which any question may arise and to characterize these in terms of the constraints they might imply for the choice of synthesis method. Through a series of breakout sessions the workshop participants were subsequently asked to consider the suitability of different knowledge synthesis methodologies for each type of question, for each policy context identified. Candidate methodologies were contributed by the workshop participants during the workshop; they therefore represent the collective experience of those assembled rather than a comprehensive list. Some methodologies were not included because their purpose is not primarily knowledge synthesis (see “Discussion and conclusion” section).

For the purpose of selecting suitable methodologies a prior (theoretical) assumption was made at the workshop that an existing synthesis would not be available to the decision maker and a new synthesis would therefore have to be conducted and a methodology selected. Methodologies differ widely in their ‘robustness’ as measured by their transparency (the extent to which all actions and decisions can be reported), rigour (the effort expended to minimise error in the findings), repeatability (the extent to which the method can be repeated by a third party) and susceptibility to bias (the extent to which the methodology addresses and reduces potential for bias in the findings) (Gough et al. 2012). In constructing the matrix, participants selected more robust methodologies in policy scenarios where these characteristics become more important (e.g. ‘high risk of serious consequences if wrong conclusion is reached’). The most rigorous methodologies were always selected when the policy context allowed.

Workshop participants mapped methods against types of questions and against contextual factors to indicate how well methods are suited to inform each type of question and how well they are expected to perform in the different contexts. Participants subsequently used this process to specify the most promising methods for each combination of question type and policy context, based on the participants’ expertise and knowledge of the synthesis methods. A table that could be used directly as a decision-support tool by anyone considering the commissioning or conduct of a knowledge synthesis was drafted during the workshop and modified through subsequent discussion among the authors of this paper (all but three of whom were workshop participants).

Results

The policy contexts identified in the Frankfurt Workshop and described in the following list are not exhaustive but they are examples of factors that might influence choice of synthesis methodology. We recognise that they are potentially overlapping and interrelated:

Time constraints

The timeframe over which policy decisions need to be made (the policy window) can sometimes be very short (days to weeks). This places limits on the knowledge synthesis that can be achieved or encourages forms of synthesis that can be conducted and updated rapidly.

Financial resource constraints

Alongside time constraints there are always financial constraints, and knowledge-synthesis methods may be confined to low-cost options.

Controversy caused by conflicts of evidence

Knowledge needs may arise in relation to a disagreement over the interpretation and implications of the current evidence, or its robustness in terms of the variability of existing results. This may require transparent, rigorous, independently conducted (by actors perceived by stakeholders in the conflict to have no vested interest in the outcome) and inclusive synthesis methods that minimise susceptibility to bias by engaging key actors in designing research questions and discussing conclusions on the basis of evidence.

Controversy caused by conflicts of values and/or interests

Knowledge needs may differ according to vested interests (legitimate or otherwise) in the outcome and/or a fundamental difference in values and beliefs on the part of two or more groups. This may require substantial stakeholder engagement to generate an acceptable question (or questions) as well as transparent, rigorous and inclusive synthesis methods that provide reliable evidence regarded as legitimate by the parties involved.

Serious and/or unacceptable consequences of making the wrong decision

Where the consequences of making a wrong decision are regarded as a high risk to a decision maker they may require transparent, rigorous and independently conducted synthesis methods that minimise susceptibility to bias and/or clearly state what the potential biases are, thus providing a clear audit trail to justify the decision.

Diversity of knowledge and information

Where the question demands the synthesis of a high diversity of different types of knowledge and/or many different perspectives need to be included, it may require inter- and transdisciplinary methods and approaches that are able to structure diversity, identify commonalities and differences, or rank alternatives.

High uncertainty

In situations where there is significant uncertainty or variability in knowledge, methods may be required that seek to examine sources of uncertainty and variability in results, and synthesise knowledge taking such uncertainty and/or variability into account. Such methods would provide a best estimate of the truth together with statements of confidence in that estimate.
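Where quantitative study results are available, the idea of a best estimate accompanied by a statement of confidence can be made concrete with fixed-effect, inverse-variance pooling of study estimates. The sketch below is purely illustrative and is not a method proposed by the workshop; the study values and the `pooled_estimate` helper are hypothetical.

```python
import math

def pooled_estimate(estimates, variances):
    """Fixed-effect (inverse-variance) pooling of independent study estimates.

    Returns the pooled mean and a 95% confidence interval: a best estimate
    of the truth together with a statement of confidence in that estimate.
    """
    weights = [1.0 / v for v in variances]      # more precise studies weigh more
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    se = math.sqrt(1.0 / total)                 # standard error of the pooled mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Three hypothetical effect estimates with differing precision (variances)
mean, ci = pooled_estimate([0.40, 0.55, 0.30], [0.04, 0.01, 0.09])
```

The narrower a study's variance, the more it pulls the pooled mean toward its own estimate, and the width of the interval communicates the uncertainty that remains after synthesis.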

The identified methodologies are drawn from the natural and social sciences and all have been applied to some extent to support decision making in environmental and other related sectors. Definitions are provided in Table 1, together with notes on suitability explaining why each method might be chosen for a particular combination of question and policy context.
Table 1

Description of knowledge synthesis methods considered in this paper (in no particular order)

Synthesis method

Overview

Specific needs, resource requirements and limitations

Key advantages and suitability

Expert consultation

A range of experts is asked to provide their knowledge and/or opinion on the question, possibly supporting it with selected published literature. This can be done in meetings or by individual consultation

Requires expertise available at short notice over a few days; of restricted use in conflict situations with contested knowledge claims, with a risk of advocacy science (see Ehrmann and Stinson 1999). The process is susceptible to biased or random selection of experts

Relatively quick and inexpensive approach for well-defined, uncontested policy problems

Expert elicitation/consultation with Delphi process

In the Delphi method, a coordination team or a facilitator designs a questionnaire which is sent to a group of experts. The expert evaluations are then summarized by the coordination team and the summary is sent to the respondents, who are given the opportunity to reconsider their original answers in the light of the responses by others. The assumption is that the expert opinions gradually converge as the experts consider the various aspects of the problem and learn from one another. The process is stopped after completing a pre-defined number of rounds, or on reaching a sufficient level of consensus. The mean or median scores of the final rounds are used as the result of the process (Linstone and Turoff 2002). Divergent opinions, should they persist, are usually made explicit

There are different types of Delphi-based inquiry. Policy Delphi is designed to explore a pressing policy issue and support subsequent decision making. Structural Model Delphi aims to explore causal relationships and build a systems understanding or model. Trend Model Delphi is designed to initiate discussion on trends and potential future developments. Can take longer than expert consultation, so the resources required are greater

Compared to expert consultation, a slightly more systematic and rigorous approach, which usually makes the process of reaching a result more transparent as well as recording divergent opinions and their reasoning. Several rounds usually lead to a more thorough reflection of different issues and perspectives than a single meeting or separate interviews (Mukherjee et al. 2015)

Causal chain analysis (CCA) & Bayesian belief networks (BBN)

CCA establishes relationships between different factors by flow diagrams depicting causal linkages. Often conducted jointly with experts and other stakeholders. Can serve as a first step to more elaborate modelling approaches and/or reviews of evidence on the different causal links identified, e.g. systematic reviews.

Bayesian belief networks are a specific derivation of the CCA technique that estimate probabilities and how they propagate through the causal chain. The key feature of BBNs is that they enable us to model and reason about uncertainty. The BBN forces the assessor to expose all assumptions about the impact of different forms of evidence and hence provides a visible and auditable dependability or safety argument (for details see Fenton and Neil 2012)

Relies entirely on the expertise of the participants involved, can provide a structured overview of issues involving a broad range of different factors in relatively short time spans, does not per se provide evidence and usually does not adequately capture complex interactions

Can be used to reach agreement on the different issues and factors involved and thus help to frame or scope a problem, particularly if different types of knowledge including experiential and highly context-specific and/or place- based knowledge is relevant. Usually needs to be combined with other methods to provide an overview of the knowledge acquired on the different causal links (Uusitalo 2007)

Systematic review (SR)

Highly structured and standardised protocol-driven process for synthesising evidence; methods specified a priori; can minimise bias and optimise precision of quantitative outcomes; different approaches used for quantitative or qualitative synthesis. Always includes an extensive search for all relevant evidence and critical appraisal of the included evidence; may quantitatively combine evidence to improve precision (Gough et al. 2012; CEE 2013)

Requires a focused question that can be clearly specified as a set of specific inclusion criteria and sufficient evidence (studies) that address these criteria. Team activity. Based on an a priori peer-reviewed protocol. As such, usually takes months rather than days or weeks to complete

Well-regarded as a robust method for synthesising evidence on specific outcomes, so standard methods are available. Defensible approach for contentious topics. Could potentially:

  • Increase precision

  • Minimise bias

  • Ensure transparency

  • Facilitate stakeholder involvement (e.g. in a priori protocol development)

  • Clarify uncertainty

A major advantage is the possibility to update the review without repeating the whole process

Suitable for independent measurement of effectiveness of an intervention when there is dispute among stakeholders, sufficient quantitative evidence and sufficient time (months) to complete the review. See http://www.environmentalevidence.org/completed-reviews

Systematic map (SM)

Highly structured and standardised protocol-driven process for mapping evidence similar to SR with extensive search but differing primarily in being able to answer a broader question than SR, and not aiming to quantitatively combine evidence; may not include extensive critical appraisal; useful for summarising state of the art for a particular question, possibly to identify or prioritise questions for later SR (Gough et al. 2012; CEE 2013)

Can answer specific or broad questions so long as the question is structured well enough to enable the setting of practical inclusion criteria. Team activity. Based on an a priori peer-reviewed protocol. As such, usually takes months rather than days or weeks to complete. Main limitations are that outcomes are synthesised in less detail than in SR, often as a classification, and may not be appraised critically

Possibly more flexible than SR, as it can accommodate a broader question and cover a relatively wide range of outcomes, albeit in less detail and with less rigour. Provides a systematic (defensible) way of establishing the “state of the art”. Could also potentially:

  • Ensure transparency

  • Facilitate stakeholder involvement

  • Provide a basis for more focused question development or prioritisation for future SR

A major advantage is the possibility to update the map without repeating the whole process

Suitable when addressing a question of impact or effectiveness when there are multiple possibilities in subject, intervention/exposure and outcomes such that a full synthesis of all elements of the question would not be feasible. See http://www.environmentalevidence.org/completed-reviews

Focus groups

Structured discussion of an issue by a group of people, purposively selected usually to involve different stakeholders and/or potentially differing perspectives of an issue at hand. The joint discussion allows participants to consider and react to arguments put forward by other participants so it allows examination of group dynamics and opinion formation (Orvik et al. 2013)

Requires highly skilled facilitation, particularly in situations involving conflicting interests or values. All relevant perspectives should be represented and encouraged to be expressed during discussion

Is useful in situations where evidence is not readily available, different types of knowledge are relevant and/or issues are controversial or where the exact question or knowledge need is not yet clearly identified. Can be combined with other methods, e.g. conducted with a group of experts, or a causal chain analysis can be part of a focus group (Orvik et al. 2013)

Discourse field analysis (DFA)

Discourse field analysis is a structured method for investigating conflicts and alliances among different knowledge holders or stocks of knowledge when discourses are emerging (“input level”, cf. Ottow 2002). Aim is to identify systematically the key issues and actors, and the latter’s location within a discourse field; distinguishing between certain and uncertain knowledge, and determining which knowledge claims are points of conflict between different groups in society and the sciences is a major perspective. In this regard, the focus is on arguments, procedures or putative facts that are seen as correct or true by the actors under analysis, rather than on which are true. The outcome is a picture of the discourse landscape with all its contradictions. It emphasises the negotiating processes that take place within a discourse field, i.e. DFA differs from a discourse analysis as a method in social sciences that traces the interaction of knowledge and power at the “outcome level” in order to show how power is exercised in a society through discourses, e.g. question of rules, controls or exclusion (Foucault 1971)

Can address highly contested issues in complex situations and includes different types of knowledge (scientific, technical, professional, everyday knowledge). Time and labour needed depends on the complexity of the issue; works best with written resources, but can be enhanced with interviews for acquiring tacit knowledge

Addresses the following questions:

  • Which issues are high on the agenda of public discussion? Who are the central actors carrying out this discussion, what is their position within the discourse, and in which context do they move?

  • Which knowledge claims concerning problematic cause-effect relationships count as certain and which as uncertain?

  • Which societal needs for action can be deduced from the discourse, and which research needs (for both the natural and the social sciences)?

  • Are there new, emergent issues, and who are their protagonists?

  • Can one find examples that show to what practical action (implementation projects) a societal discourse leads? (Lux 2010)

Multi criteria analysis (MCA)

MCA is a methodology for supporting complex decision-making situations with multiple and conflicting objectives that decision-makers and other stakeholder groups value differently. The basic idea of MCA methods is to evaluate the performance of alternative courses of action (e.g. management or policy options) with respect to criteria that capture the key dimensions of the decision-making problem (e.g. ecological, economic and social sustainability), involving human judgment and preferences (Belton and Stewart 2002). MCA methods are rooted in operational research and support for single decision-makers but recently the emphasis has shifted towards multi-stakeholder processes to structure decision alternatives and their consequences, to facilitate dialogue on the relative merits of alternative courses of action, thereby enhancing procedural quality in the decision-making process (Mendoza and Martins 2006)

  • Usually requires expertise in decision analysis software

  • Possibly limited representativeness (only a small group of stakeholders is usually involved)

  • Some criteria, such as cultural heritage or provisioning services vital for sustenance, might not be amenable to trade-offs (though some MCA methods can also address these so-called lexicographic preferences)

  • Open to manipulation if not used in a participatory and transparent way

Well suited to trade-off situations with multiple decision-making criteria, and to knowledge synthesis processes characterized by incomplete information, because MCA methods can accommodate a mixed set of both quantitative and qualitative data, including scientific and local knowledge

Can combine information about the impacts of alternative courses of action with information about the relative importance of evaluation criteria for different stakeholders.

Deliberative-analytic methodology which can support participatory processes and transparent decision making.

Can be combined with other knowledge synthesis methods (e.g. Systematic reviews, Delphi, focus groups) (Gregory 2000; Belton and Stewart 2002)

Joint fact finding (JFF) and double sided critique (DSC)

JFF is an emerging strategy for experts, decision makers and key stakeholders from opposing sides of an issue to work together to resolve or narrow factual disputes over public policy issues, including environmental issues. In JFF, the participants jointly determine the questions to be addressed and the best process for gathering information, and they also review the preliminary results of the process, including policy implications, before the results are presented to decision makers (Ehrmann and Stinson 1999)

DSC is a similar approach that allows dual description beyond naturalization or culturalization. Thus, the opposing sides highlight the shortcomings of the other argumentation and methodological approach in order to better identify where disagreement lies and with which approaches it could be addressed (Bateson 2002; Bergmann et al. 2012)

The dialogue is usually assisted by a professional facilitator or mediator. Resources are needed to carry out e.g. reviews or hire experts, and in some (rare) cases even to carry out new empirical research. JFF processes are often lengthy, depending on the need to summarise evidence, and they require commitment and sustained involvement from the participants

Suitable for building common ground in highly contentious issues, promoting reflective policy learning, and even resolving persistent disputes (Innes and Connick 1999)

The list is not exhaustive but provides some guidance on the range of methods available
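The iterative structure of the Delphi process described in Table 1 can be sketched in a few lines of code. This is a minimal illustration only, assuming a numeric 1–9 scoring scale, the median as the group summary, and a hypothetical interquartile-range threshold as the consensus rule; real Delphi studies also feed qualitative reasoning back to participants between rounds.

```python
import statistics

def delphi_round(scores):
    """Summarise one Delphi round: group median and interquartile range."""
    med = statistics.median(scores)
    q1, _, q3 = statistics.quantiles(scores, n=4)
    return med, q3 - q1

def run_delphi(rounds, iqr_threshold=1.0):
    """Stop after the pre-defined number of rounds, or earlier once the
    spread of opinion falls below the consensus threshold; the final
    round's median is reported as the result."""
    for scores in rounds:
        med, iqr = delphi_round(scores)
        if iqr <= iqr_threshold:
            break  # sufficient consensus reached
    return med, iqr

# Hypothetical expert scores (1-9 importance scale) over three rounds
rounds = [
    [2, 5, 7, 9, 4, 6],  # round 1: wide disagreement
    [4, 5, 6, 7, 5, 6],  # round 2: opinions narrow after feedback
    [5, 5, 6, 6, 5, 6],  # round 3: near-consensus
]
result, spread = run_delphi(rounds)
```

With these illustrative scores, the third round meets the spread threshold, so the process stops and reports that round's median; a persistent large spread would instead be reported as explicit divergence.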

Table 2 presents an example matrix of methodologies that might be suitable for different question types in different policy contexts. It is not exhaustive and represents the initiation of what could be a more extensive effort to provide guidance in this area. We found that for many combinations of questions and policy constraints there is more than one possible method. The matrix suggests appropriate methods according to the most prominent constraint characterizing a particular decision setting. For some constraints, e.g. a very restricted timeframe, the possible choice of method is limited to expert consultation or causal chain analysis for practically all types of knowledge need (first row). In the case of controversy over evidence, the choice of appropriate method depends much more on the type of knowledge need (third row). In practice, it is likely that several of these characteristics or constraints will apply in any given situation, so further elaboration of the context (e.g. the nature of the evidence sought, qualitative or quantitative, and/or the estimated amount of evidence available) would be necessary to make a final decision. For example, in the context of controversy or conflicts of evidence, there may also be secondary contexts, such as time and resource constraints, that would shift the balance of choice toward more rapid methods.
Table 2

A decision matrix highlighting evidence synthesis methodologies that may be helpful (based on authors’ consensus opinion) for informing environmental decision making for different combinations of questions and key policy contexts (N/A indicates combinations considered unlikely to arise)

Time constraints—short policy window
  Seeking greater understanding or predictive power: expert consultation, causal chain analysis
  Scenario building: expert consultation
  Horizon scanning: expert consultation
  Seeking understanding of changes in time and space: expert consultation
  Seeking measures of anthropogenic impact: expert consultation, causal chain analysis
  Seeking measures of effectiveness of interventions: expert consultation, causal chain analysis
  Seeking appropriate methodologies: expert consultation
  Seeking optimal management: expert consultation
  Public opinion and/or perception: expert consultation
  Seeking peoples’ understanding of an issue: expert consultation

Financial resource constraints
  Seeking greater understanding or predictive power: expert consultation, causal chain analysis
  Scenario building: expert consultation
  Horizon scanning: expert consultation
  Seeking understanding of changes in time and space: expert consultation
  Seeking measures of anthropogenic impact: expert consultation, causal chain analysis
  Seeking measures of effectiveness of interventions: expert consultation, causal chain analysis
  Seeking appropriate methodologies: expert consultation
  Seeking optimal management: expert consultation
  Public opinion and/or perception: expert consultation
  Seeking peoples’ understanding of an issue: expert consultation

Controversy/conflicts of evidence
  Seeking greater understanding or predictive power: systematic review/map, joint fact finding
  Scenario building: N/A
  Horizon scanning: expert consultation with Delphi process
  Seeking understanding of changes in time and space: systematic review, Delphi process
  Seeking measures of anthropogenic impact: systematic review, joint fact finding
  Seeking measures of effectiveness of interventions: systematic review, joint fact finding
  Seeking appropriate methodologies: systematic review/map, joint fact finding
  Seeking optimal management: systematic review, multi-criteria analysis
  Public opinion and/or perception: systematic review, discourse field analysis
  Seeking peoples’ understanding of an issue: systematic review, discourse field analysis

Controversy/conflicts of values or interest
  Seeking greater understanding or predictive power: focus groups, systematic review/map
  Scenario building: N/A
  Horizon scanning: expert consultation with Delphi process
  Seeking understanding of changes in time and space: systematic review, joint fact finding
  Seeking measures of anthropogenic impact: focus groups, systematic review
  Seeking measures of effectiveness of interventions: focus groups, systematic review
  Seeking appropriate methodologies: focus groups, systematic review, double-sided critique
  Seeking optimal management: systematic review, multi-criteria analysis
  Public opinion and/or perception: systematic review, discourse field analysis
  Seeking peoples’ understanding of an issue: systematic review, discourse field analysis

Serious or unacceptable consequences of making the wrong decision
  Seeking greater understanding or predictive power: systematic review
  Scenario building: N/A
  Horizon scanning: expert consultation with Delphi process
  Seeking understanding of changes in time and space: systematic review
  Seeking measures of anthropogenic impact: systematic review
  Seeking measures of effectiveness of interventions: systematic review
  Seeking appropriate methodologies: systematic review
  Seeking optimal management: systematic review
  Public opinion and/or perception: systematic review, discourse field analysis
  Seeking peoples’ understanding of an issue: systematic review, discourse field analysis

Diversity of knowledge/information
  Seeking greater understanding or predictive power: focus groups, joint fact finding
  Scenario building: N/A
  Horizon scanning: expert consultation with Delphi process
  Seeking understanding of changes in time and space: systematic map, joint fact finding
  Seeking measures of anthropogenic impact: focus groups, systematic map
  Seeking measures of effectiveness of interventions: focus groups, systematic map
  Seeking appropriate methodologies: systematic map, double-sided critique
  Seeking optimal management: systematic map
  Public opinion and/or perception: N/A
  Seeking peoples’ understanding of an issue: N/A

High levels of uncertainty
  Seeking greater understanding or predictive power: expert consultation, causal chain analysis, Bayesian belief network
  Scenario building: N/A
  Horizon scanning: expert consultation with Delphi process
  Seeking understanding of changes in time and space: systematic review
  Seeking measures of anthropogenic impact: systematic review, expert consultation, causal chain analysis, Bayesian belief network
  Seeking measures of effectiveness of interventions: systematic review, expert consultation, causal chain analysis, Bayesian belief network
  Seeking appropriate methodologies: expert consultation, Bayesian belief network, systematic review
  Seeking optimal management: systematic review
  Public opinion and/or perception: N/A
  Seeking peoples’ understanding of an issue: N/A

Users should identify the type of question to be addressed and the context in which they are operating.
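To make the lookup concrete, the matrix can be thought of as a mapping from (policy context, question type) pairs to candidate synthesis methods. The sketch below is a hypothetical illustration, not part of the paper: it encodes only a few cells of Table 2, with shortened labels of our own choosing, and represents N/A combinations as `None`.

```python
# Hypothetical sketch of the Table 2 decision matrix as a lookup table.
# Only a few illustrative cells are included; the labels are shortened
# paraphrases of the row and column headings, not the paper's wording.
DECISION_MATRIX = {
    ("time constraints", "greater understanding"):
        ["expert consultation", "causal chain analysis"],
    ("controversy of evidence", "greater understanding"):
        ["systematic review/map", "joint fact finding"],
    ("controversy of evidence", "horizon scanning"):
        ["expert consultation (Delphi process)"],
    ("high uncertainty", "effectiveness of interventions"):
        ["systematic review", "expert consultation",
         "causal chain analysis", "Bayesian belief network"],
    # N/A cells (combinations considered unlikely to arise) map to None.
    ("diversity of knowledge", "public opinion"): None,
}

def suggest_methods(context, question_type):
    """Return the suggested synthesis methods for a (context, question)
    pair, or None when the combination is N/A or not covered here."""
    return DECISION_MATRIX.get((context, question_type))
```

In practice several contexts usually apply at once, so such a lookup could only seed the scoping discussion described below, not replace it.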

The selection of synthesis methods will have to consider the requirements and constraints of each method highlighted in Table 1 above in relation to the policy context. For example, while systematic review is a very robust knowledge synthesis methodology that can provide highly transparent and reliable results, for many specific knowledge requests there will not be sufficient evidence available to justify one. Constraints such as insufficient or disputed evidence (e.g. as determined by a ‘quick scoping’ of the literature; Defra 2015) might in many cases make it necessary to resort to joint fact finding or double-sided critique, while time and resource constraints and a short policy window could justify expert consultation or focus groups.

Discussion and conclusions

The Frankfurt Workshop participants agreed that the diversity of knowledge needs reflected in the range of questions identified by participants in KNEU project workshops requires a diversity of synthesis methodologies. This was confirmed by the formative evaluation of the knowledge synthesis prototype and trial assessments conducted during the KNEU project (Carmen et al. 2015; Schindler et al. 2016).

We recommend the decision matrix (Table 2), which suggests appropriate evidence synthesis methodologies for different types of questions and key policy contexts, as guidance (not prescription) for those considering commissioning or undertaking a knowledge synthesis to meet their evidence needs and inform their decision making. The classification of questions can help specify what exactly is required to meet knowledge needs and inform policy making at a given stage in policy development or step in the policy cycle. In the experience of the KNEU project, many of the questions formulated by contributors from the policy community combine several aspects and dimensions and are thus unsuitable for straightforward knowledge synthesis. Hence, a thorough scoping process, in which requesters and experts iteratively negotiate the scope, scale and synthesis methodology, is of paramount importance to maximize the quality, credibility and relevance of the output (see Livoreil et al. 2016; Schindler et al. 2016). Similarly, the list of contextual factors can serve as a good starting point for specifying the policy context in which the knowledge need arises. Once the type of question is clearly identified and the context specified, the table suggests which methods might be most useful, though the specificities of each case should be considered (including, inter alia, what kind of knowledge can be accessed and how, and the level of stakeholder involvement required to resolve potential controversies and conflicts of interest).

The reliability for decision making of the outcomes provided by such methodologies will always depend on how well they are executed. Conforming to the highest standards of these methodologies, including making explicit their potential limitations and built-in biases, is crucial to providing a credible synthesis. There is likely to be a relationship between time and financial constraints and the potential reliability of a methodology. For example, keeping all other factors (the expertise and performance of the people involved) constant, a quick, low-budget synthesis based on simple expert consultation is likely to be less reliable than expert elicitation via the Delphi method, which would take longer and cost more (Sutherland and Burgman 2015). A similar comparison could be made between rapid, less structured and less comprehensive literature reviews and systematic reviews. Furthermore, some problem situations require independently conducted syntheses to reduce susceptibility to bias (Pullin and Stewart 2006), while others require participatory, deliberative and reflective inquiry in which different interpretive frames, and the biases engendered in them, are critically probed and pitted against each other (Saarikoski 2002, 2007).

In some cases we have identified more than one method and some can be used in combination (e.g. expert consultation and systematic review). In other cases the synthesis method enables further analysis such as cost-benefit evaluation or may enable more accurate modelling of scenarios. These possibilities were considered in the Frankfurt workshop but methods that were not considered to strictly meet our definition of knowledge synthesis were omitted from the table. For example, adaptive management is an approach that might be used for many policy issues (Salafsky et al. 2001), which includes the iterative combination of knowledge synthesis (most often using collaborative methodologies, such as participatory knowledge production and/or multi-criteria analysis; e.g. Pahl-Wostl 2007; Méndez et al. 2012) with the generation of new knowledge through the selection, application and monitoring of policies or management strategies (e.g. Walters 1986; Gunderson and Light 2006). It aims at identifying flexible solutions that are resilient to errors and uncertainty (i.e., it treats policies as experiments; Walters and Hilborn 1978; Lee 1993). Hence, while the initial phase of collaborative adaptive management represents a specific type of knowledge synthesis, such an approach extends well beyond the time span of the types of questions addressed here.

In the context of a broader mechanism for biodiversity knowledge synthesis (see Livoreil et al. 2016), the type of matrix shown in Table 2 might be used alongside the scoping process all the way up to agreeing on a methodological protocol. It could help structure the discussion between requesters of a synthesis from the policy community and a knowledge co-ordinating body (i.e. the organisation or individual(s) that would put into place the commissioning and conduct of the synthesis). Such discussions would consider the policy context, the knowledge needs and structure of the question, and would agree on one or more appropriate methodologies.

We recommend that this matrix be further developed, with the inclusion of additional questions and methods in the future. Experiences with using different methods could be documented systematically to start a learning process that might also help to develop more standardized procedures in knowledge synthesis. In the medium term we hope the matrix will stimulate systematic exchange on knowledge synthesis methods and combinations of methods used.

The discussions leading to these results were set in the context of issues involving biodiversity and ecosystem services, but the authors are convinced that most of the reasoning outlined above also applies in other policy areas related to, or having an influence on, the environment. Our suggestions may even be helpful for knowledge synthesis in decision making more generally.

Acknowledgments

This work and the workshop it is based on were funded by the KNEU project within the 7th Framework Programme of the European Commission (Contract No. 265299).

Copyright information

© The Author(s) 2016

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Andrew Pullin (1)
  • Geoff Frampton (2)
  • Rob Jongman (3)
  • Christian Kohl (4)
  • Barbara Livoreil (5)
  • Alexandra Lux (6, 7)
  • György Pataki (8)
  • Gillian Petrokofsky (9)
  • Aranka Podhora (10)
  • Heli Saarikoski (11)
  • Luis Santamaria (12)
  • Stefan Schindler (13, 14)
  • Isabel Sousa-Pinto (15)
  • Marie Vandewalle (16)
  • Heidi Wittmer (17)

  1. Centre for Evidence-Based Conservation, Bangor University, Bangor, UK
  2. Southampton Health Technology Assessments Centre (SHTAC), Faculty of Medicine, University of Southampton, Southampton, UK
  3. Jongman Ecology, Wageningen, The Netherlands
  4. Institute for Biosafety in Plant Biotechnology, Julius Kühn-Institut, Quedlinburg, Germany
  5. Fondation pour la Recherche sur la Biodiversité, Paris, France
  6. ISOE – Institute for Social-Ecological Research, Frankfurt am Main, Germany
  7. Senckenberg Biodiversity and Climate Research Centre (BiK-F), Frankfurt am Main, Germany
  8. Department of Decision Sciences, Corvinus Business School, Corvinus University of Budapest and Environmental Social Science Research Group (ESSRG), Budapest, Hungary
  9. Department of Zoology, Biodiversity Institute, University of Oxford, Oxford, UK
  10. Leibniz Centre for Agricultural Landscape Research, Müncheberg, Germany
  11. Environmental Policy Centre, Finnish Environment Institute, Helsinki, Finland
  12. Doñana Biological Station (EBD-CSIC), Seville, Spain
  13. Division of Conservation Biology, Vegetation and Landscape Ecology, University of Vienna, Vienna, Austria
  14. Department of Biodiversity and Nature Conservation, Environment Agency Austria, Vienna, Austria
  15. Interdisciplinary Centre for Marine and Environmental Research (Ciimar) and Department of Biology, Faculty of Sciences, University of Porto, Porto, Portugal
  16. Department of Conservation Biology, UFZ – Helmholtz Centre for Environmental Research, Leipzig, Germany
  17. Department of Environmental Politics, UFZ – Helmholtz Centre for Environmental Research, Leipzig, Germany
