Selecting appropriate methods of knowledge synthesis to inform biodiversity policy

Responding to the different questions generated by biodiversity and ecosystem services policy or management requires different forms of knowledge (e.g. scientific, experiential) and of knowledge synthesis. In addition, synthesis methods need to be appropriate to the policy context (e.g. question type, budget, timeframe, output type, required scientific rigour). In this paper we present a range of methods that could potentially be used to conduct a knowledge synthesis in response to questions arising from the knowledge needs of decision makers in biodiversity and ecosystem services policy and management. Through a series of workshops attended by natural and social scientists and decision makers, we compiled a range of question types, different policy contexts and potential methodological approaches to knowledge synthesis. The methods are derived from both the natural and social sciences and reflect the range of question and study types that may be relevant for syntheses. Knowledge may be available in qualitative or quantitative form, and in some cases as a mixture of both. All methods have their strengths and weaknesses, and we discuss a sample of these to illustrate the need for diversity and the importance of appropriate selection. To summarize this collection, we present a table that matches potential methods to different combinations of question types and policy contexts, aimed at assisting teams undertaking knowledge syntheses to select appropriate methods.


Introduction
There is an increasing demand from multiple policy sectors of society for the process of policy making to be informed by the best available knowledge. Knowledge is generated and communicated in diverse formats and its volume is increasing rapidly, presenting significant challenges in searching, collating and synthesising relevant information in a form that is credible, reliable and legitimate from a decision maker's perspective (Cash et al. 2003; Sarkki et al. 2013). This is also the case for knowledge on biodiversity and ecosystem services (and their interactions with other sectors, interests or needs). Management of biodiversity and ecosystem services generates a broad spectrum of knowledge, from traditional or indigenous knowledge to experimental and science-based understanding. In turn, the knowledge needs of decision makers reflect this spectrum, and methods of knowledge gathering and synthesis depend on the types of uncertainty faced and the social context in which the decision needs to be made (e.g. Breckon and Dodson 2016).
The diversity of approaches to knowledge synthesis is manifest in the diversity of questions generated by the policy and broader decision-making communities (practice and management) (e.g. Sutherland et al. 2006). The challenge is to provide up-to-date synthesis for decision makers, both those directly concerned with biodiversity and ecosystem services and those from other sectors that might affect, or come into conflict with, biodiversity conservation or the securing of ecosystem services. The synthesis should be precisely tailored to the request (see Livoreil et al. 2016), in accordance with the policy agenda (which often imposes limited time windows and budgets), ensuring its legitimacy from both policymaker and wider stakeholder perspectives. An important challenge is to ensure that concerns from different sectors are adequately taken into account and that agreement on a common knowledge base is reached, to avoid stakeholders with opposing interests presenting opposing evidence or knowledge. This challenge, which applies to the policy process in general, is particularly demanding for environmental and biodiversity policy owing to the interdisciplinary and complex nature of the opinions and interests involved.
Synthesis methodologies are ex-ante assessments that do not generate any new empirical data but seek to identify, compile and combine relevant knowledge from various sources, so that it is understandable as a single unit and readily available for decision makers who want to draw on the best available evidence. Methods of synthesis vary according to the type of question, the type of knowledge sought and the policy context (e.g. stage of the policy cycle and timeframe). Selection of an appropriate method can be crucial to the inclusion of appropriate knowledge in the decision-making process. Frustration over knowledge flow can occur both in the policy community, when knowledge is not provided within the time window necessary for decision making, and in the scientific community, when knowledge syntheses are apparently ignored or seen as irrelevant to current evidence needs (Dietz and Stern 1998; Owens 2005; Sharman and Holmes 2010; Jordan and Russel 2014).
In this paper, we aim to present an initial illustrative decision matrix tailored for biodiversity and ecosystem service knowledge that can be developed through future iterations. The matrix's objective is to provide guidance on the selection of appropriate methods of synthesis for a diversity of questions that may be posed by policy makers and, as far as we are aware, is the first effort of this kind applied to environmental knowledge synthesis.

Methods
A workshop on 'method selection for providing evidence to policy on biodiversity and ecosystem services' was convened in Frankfurt, Germany, in January 2014 to develop the decision matrix (hereafter referred to as the Frankfurt Workshop). Participants were invited who had expertise in knowledge synthesis methods from both the social and natural sciences in the field of biodiversity and ecosystem services. Participants included six academic researchers from the KNEU project and ten external researchers, all selected for their expertise in policy-relevant research and/or knowledge synthesis methodology (see Table 3 in Appendix 1 for a full list of participants and their affiliations). The workshop considered a range of question types (see below) with respect to knowledge needs to inform decision making, arising from three previous workshops convened by the KNEU project in different regions of Europe (hereafter referred to as the regional workshops) (Carmen et al. 2015). In these initial regional workshops a broad range of policy makers, stakeholders and scientists had been asked to formulate questions and specify knowledge needs, broadly following the methods of Sutherland et al. (2006): initial collection of questions followed by discussion and prioritisation sessions. The outcomes then informed the Frankfurt Workshop by identifying the spectrum of potential requests that could be submitted to a knowledge synthesis process.
Questions arising from the three regional workshops were classified by the Frankfurt Workshop participants into different types with regard to the evidence sought, as follows:

I. Seeking better understanding of an issue (including predictions and forecasting):
1. Seeking greater understanding or predictive power (e.g. how does green infrastructure contribute to human well-being?)
2. Scenario building to analyse future events (e.g. how will the risk of flooding change under different climate change scenarios?)
3. Horizon scanning (e.g. what will be the most significant novel threats to biodiversity in 2050?)
4. Seeking understanding of changes in time and space (e.g. how has the distribution and abundance of rabies in fox populations changed in the last 10 years?)

II. Identifying appropriate ways and means of realising certain decisions:
5. Seeking measures of anthropogenic impact (e.g. what is the impact of wind farm installations on bird populations?)
6. Seeking measures of the effectiveness of interventions (e.g. how effective are marine protected areas at enhancing commercial fish populations?)
7. Seeking appropriate methodologies and associated trade-offs (e.g. what is the most reliable method for monitoring changes in carbon stocks in forest ecosystems?)
8. Seeking optimal management (e.g. what is the optimal grazing regime for maximizing plant diversity in upland meadows?)

III. Improving understanding of possibilities and boundaries for decision-making:
9. Assessing public opinion and perception (e.g. is there public support for badger culling in the UK?)
10. Seeking people's understanding and providing definitions (e.g. how do different people or groups understand ecosystem services?)

At the Frankfurt Workshop, participants were asked to identify and describe, based on their experience, possible policy contexts in which any question may arise, and to characterize these in terms of the constraints they might imply for the choice of synthesis method.
Through a series of breakout sessions the workshop participants were subsequently asked to consider the suitability of different knowledge synthesis methodologies for each type of question and each policy context identified. Candidate methodologies were contributed by the participants during the workshop; they therefore represent the collective experience of those assembled and are not a comprehensive list. Some methodologies were not included because their primary purpose is not knowledge synthesis (see the "Discussion and conclusions" section).
For the purpose of selecting suitable methodologies a prior (theoretical) assumption was made at the workshop that an existing synthesis would not be available to the decision maker and a new synthesis would therefore have to be conducted and a methodology selected. Methodologies differ widely in their 'robustness' as measured by their transparency (the extent to which all actions and decisions can be reported), rigour (the effort expended to minimise error in the findings), repeatability (the extent to which the method can be repeated by a third party) and susceptibility to bias (the extent to which the methodology addresses and reduces potential for bias in the findings) (Gough et al. 2012). In constructing the matrix, participants selected more robust methodologies in policy scenarios where these characteristics become more important (e.g. 'high risk of serious consequences if wrong conclusion is reached'). The most rigorous methodologies were always selected when the policy context allowed.
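The selection principle described above — always choose the most robust methodology that the policy context allows — can be sketched as a small filter-and-rank step. The method names, robustness scores and durations below are invented placeholders for illustration only, not values produced by the workshop:

```python
# Illustrative sketch: pick the most robust synthesis method that fits
# the context's time constraint. Scores and durations are hypothetical
# placeholders, not figures from the Frankfurt Workshop.
METHODS = {
    # name: (robustness score 0-10, typical duration in weeks)
    "expert consultation": (3, 1),
    "Delphi process": (5, 6),
    "systematic map": (8, 26),
    "systematic review": (10, 52),
}

def select_method(max_weeks):
    """Return the most robust method completable within max_weeks, or None."""
    feasible = [(score, name) for name, (score, weeks) in METHODS.items()
                if weeks <= max_weeks]
    if not feasible:
        return None
    # max over (score, name) tuples picks the highest robustness score.
    return max(feasible)[1]
```

For example, with a two-month policy window (`select_method(8)`) only the quicker methods are feasible and the Delphi process wins on robustness, whereas a year-long window admits a full systematic review.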
Workshop participants mapped methods against types of questions and against contextual factors to indicate how well the methods are suited to inform each type of question and how well they are expected to perform in the different contexts. Participants then used this mapping to specify the most promising methods for each combination of question type and policy context, based on their expertise and knowledge of the synthesis methods. A table was drafted during the workshop and modified through subsequent discussion among the authors of this paper (all but three of whom were workshop participants), so that it could be used directly as a decision support tool by anyone considering the commissioning or conduct of a knowledge synthesis.

Results
The policy contexts identified in the Frankfurt Workshop and described in the following list are not exhaustive but they are examples of factors that might influence choice of synthesis methodology. We recognise that they are potentially overlapping and interrelated:

Time constraints
The timeframe over which policy decisions need to be made (the policy window) can sometimes be very short (days to weeks). This places limits on the knowledge synthesis that can be achieved or encourages forms of synthesis that can be conducted and updated rapidly.

Financial resource constraints
Alongside time constraints there are always financial constraints, and knowledge synthesis methods may be confined to low-cost options.

Controversy caused by conflicts of evidence
Knowledge needs may arise in relation to a disagreement over the interpretation and implications of the current evidence, or its robustness in terms of the variability of existing results. This may require transparent, rigorous, independently conducted (by actors perceived by stakeholders in the conflict to have no vested interest in the outcome) and inclusive synthesis methods that minimise susceptibility to bias by engaging key actors in designing research questions and discussing conclusions on the basis of evidence.

Controversy caused by conflicts of values and/or interests
Knowledge needs may differ according to vested interests (legitimate or otherwise) in the outcome and/or a fundamental difference in values and beliefs on the part of two or more groups. This may require substantial stakeholder engagement to generate an acceptable question (or questions) as well as transparent, rigorous and inclusive synthesis methods that provide reliable evidence regarded as legitimate by the parties involved.

Serious and/or unacceptable consequences of making the wrong decision
Where the consequences of making a wrong decision are regarded as a high risk, a decision maker may require transparent, rigorous and independently conducted synthesis methods that minimise susceptibility to bias and/or clearly state what the potential biases are, thus providing a clear audit trail to justify the decision.

Diversity of knowledge and information
Where the question demands the synthesis of a highly diverse set of knowledge types, and/or many different perspectives need to be included, inter- and transdisciplinary methods and approaches may be required that are able to structure diversity, identify commonalities and differences, or rank alternatives.

High uncertainty
In situations where there is significant uncertainty or variability in knowledge, methods may be required that seek to examine sources of uncertainty and variability in results, and synthesise knowledge taking such uncertainty and/or variability into account. Such methods would provide a best estimate of the truth together with statements of confidence in that estimate.
The identified methodologies are drawn from the natural and social sciences, and all have been applied to some extent to support decision making in environmental and other related sectors. Definitions are provided in Table 1, together with explanations of their suitability, to indicate why they might be chosen for a particular combination of question and policy context. Table 2 presents an example matrix of methodologies that might be suitable for different question types in different policy contexts. It is not exhaustive and represents the initiation of what could be a more extensive effort to provide guidance in this area. We found that for many combinations of questions and policy constraints there is more than one possible method. The matrix suggests appropriate methods according to the most prominent constraint characterizing a particular decision setting. For some constraints, e.g. very restricted available time, the possible choice of method narrows to expert consultation or causal chain analysis for practically all types of knowledge needs (first row). In the case of controversy over evidence, the choice of appropriate method depends much more on the type of knowledge need (third row). In practice, it is likely that several of these characteristics or constraints will apply in any given situation, so that further elaboration of the context (e.g. the nature of the evidence sought, qualitative, quantitative and/or the estimated amount of evidence available) would be necessary to make a final decision. For example, in the context of controversy or conflicts of evidence, there may also be secondary contexts, such as time and resource constraints, that would shift the balance of choice toward more rapid methods.
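The matrix logic described here can be pictured as a lookup keyed on the dominant policy constraint and the question type. The sketch below is a hypothetical miniature: the entries are paraphrased from the examples discussed in the text, not copied from Table 2, and the fallback row stands in for "practically all types of knowledge needs":

```python
# Minimal sketch of the decision matrix as a lookup table.
# Keys are (policy context, question type); values are candidate
# synthesis methods. Entries are illustrative only, not Table 2 itself.
MATRIX = {
    ("very restricted time", "any"):
        ["expert consultation", "causal chain analysis"],
    ("controversy of evidence", "effectiveness of interventions"):
        ["systematic review", "joint fact finding"],
    ("controversy of evidence", "public opinion and perception"):
        ["focus groups", "discourse field analysis"],
}

def suggest_methods(context, question_type):
    """Suggest candidate methods; fall back to the context's 'any' row."""
    return (MATRIX.get((context, question_type))
            or MATRIX.get((context, "any"), []))
```

As in the text, a severe time constraint returns the same quick methods for any question type, while controversy over evidence returns different methods depending on the knowledge need.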
The selection of synthesis methods will have to consider the requirements and constraints of each method highlighted in Table 1 in relation to the policy context. For example, while systematic review is a very robust knowledge synthesis methodology and can provide highly transparent and reliable results, for many specific knowledge requests there will not be sufficient evidence available to justify a systematic review. Constraints such as insufficient or disputed evidence (e.g. as determined by a 'quick scoping' of the literature; Defra 2015) might in many cases make it necessary to resort to joint fact finding or double-sided critique, while time and resource constraints and a short policy window could justify expert consultation or focus groups.

Expert consultation
Limitations: Relies on expertise being available at short notice over a few days; restricted use in conflict situations with contested knowledge claims, with a risk of advocacy science (see Ehrmann and Stinson 1999); process susceptible to biased/random selection of experts.
Strengths: Relatively quick and inexpensive approach for well-defined, uncontested policy problems.

Expert elicitation/consultation with Delphi process
Definition: In the Delphi method, a coordination team or a facilitator designs a questionnaire which is sent to a group of experts. The expert evaluations are then summarized by the coordination team and the summary is sent to the respondents, who are given the opportunity to reconsider their original answers in the light of the responses by others. The assumption is that the expert opinions gradually converge as the experts consider the various aspects of the problem and learn from one another. The process is stopped after a pre-defined number of rounds has been completed or a sufficient level of consensus has been reached. Delphi is designed to initiate discussion on trends and potential future developments.
Limitations: Can take longer than expert consultation, and the resources required are therefore greater.
Strengths: Compared to expert consultation, a slightly more systematic and rigorous approach, which usually makes the process of reaching a result more transparent, as well as recording divergent opinions and their reasoning. Several rounds usually lead to a more thorough reflection of different issues and perspectives than a single meeting or separate interviews (Mukherjee et al. 2015).

Systematic map
Definition: Highly structured and standardised protocol-driven process for mapping evidence, similar to SR with extensive search, but differing primarily in being able to answer a broader question than SR and in not aiming to combine evidence quantitatively; may not include extensive critical appraisal; useful for summarising the state of the art for a particular question, possibly to identify or prioritise questions for later SR (Gough et al. 2012; CEE 2013).
Limitations: Can answer specific or broad questions so long as the question is structured well enough to enable the setting of practical inclusion criteria. A team activity based on an a priori peer-reviewed protocol; as such, it usually takes months rather than days or weeks to complete. The main limitations are that outcomes are synthesised in less detail than in SR, often as a classification, and may not be appraised critically.
Strengths: Possibly more flexible than SR, as it can accommodate a broader question and cover a relatively wide range of outcomes, albeit with less detail and rigour. Provides a systematic (defensible) way of establishing the 'state of the art'. Could also ensure transparency, facilitate stakeholder involvement, and provide a basis for more focused question development or prioritisation for future SR. A major advantage is the possibility to update the map without repeating the whole process. Suitable when addressing a question of impact or effectiveness when there are multiple possibilities in subject, intervention/exposure and outcomes, such that a full synthesis of all elements of the question would not be feasible. See http://www.environmentalevidence.org/completed-reviews (Orvik et al. 2013).

Discourse field analysis (DFA)
Definition: Discourse field analysis is a structured method for investigating conflicts and alliances among different knowledge holders or stocks of knowledge when discourses are emerging ('input level', cf. Ottow 2002). The aim is to identify systematically the key issues and actors, and the latter's location within a discourse field; a major perspective is distinguishing between certain and uncertain knowledge and determining which knowledge claims are points of conflict between different groups in society and the sciences. In this regard, the focus is on arguments, procedures or putative facts that are seen as correct or true by the actors under analysis, rather than on which are true. The outcome is a picture of the discourse landscape with all its contradictions. It emphasises the negotiating processes that take place within a discourse field; i.e. DFA differs from discourse analysis as a method in the social sciences, which traces the interaction of knowledge and power at the 'outcome level' in order to show how power is exercised in a society through discourses, e.g. questions of rules, controls or exclusion (Foucault 1971).

Multi-criteria analysis (MCA)
Definition: Evaluates the performance of alternative courses of action (e.g. management or policy options) with respect to criteria that capture the key dimensions of the decision-making problem (e.g. ecological, economic and social sustainability), involving human judgment and preferences (Belton and Stewart 2002). MCA methods are rooted in operational research and support for single decision makers, but recently the emphasis has shifted towards multi-stakeholder processes that structure decision alternatives and their consequences and facilitate dialogue on the relative merits of alternative courses of action, thereby enhancing procedural quality in the decision-making process (Mendoza and Martins 2006).
Limitations: Usually requires expertise in decision-analysis software. Possibly limited representativeness (usually only a small group of stakeholders is involved). Some criteria, such as cultural heritage or provisioning services vital for sustenance, might not be amenable to trade-offs (though some MCA methods can also address these so-called lexicographic preferences). Allows manipulation if not used in a participatory and transparent way.
Strengths: Feasible for addressing trade-off situations with multiple decision-making criteria. Suited to knowledge synthesis processes characterized by incomplete information, because MCA methods allow a mixed set of both quantitative and qualitative data, including scientific and local knowledge. Can combine information about the impacts of alternative courses of action with information about the relative importance of evaluation criteria for different stakeholders. A deliberative-analytic methodology which can support participatory processes and transparent decision making. Can be combined with other knowledge synthesis methods (e.g. systematic reviews, Delphi, focus groups) (Gregory 2000; Belton and Stewart 2002).

Joint fact finding (JFF) and double-sided critique (DSC)
Definition: JFF is an emerging strategy for experts, decision makers and key stakeholders from opposing sides of an issue to work together to resolve or narrow factual disputes over public policy issues, including environmental issues. In JFF, the participants jointly determine the questions to be addressed and the best process for gathering information, and they also review the preliminary results of the process, including policy implications, before the results are presented to decision makers (Ehrmann and Stinson 1999). DSC is a similar approach that allows dual description beyond naturalization or culturalization: the opposing sides highlight the shortcomings of the other side's argumentation and methodological approach in order to better identify where disagreement lies and with which approaches it could be addressed (Bateson 2002; Bergmann et al. 2012). The dialogue is usually assisted by a professional facilitator or mediator.
Limitations: Resources are needed to carry out e.g. reviews or hire experts, and in some (rare) cases even to carry out new empirical research. JFF processes are often lengthy, depending on the need to summarise evidence, and they require commitment and sustained involvement from the participants.
Strengths: Suitable for building common ground on highly contentious issues, promoting reflective policy learning, and even resolving persistent disputes (Innes and Connick 1999).

The list is not exhaustive but provides some guidance on the range of methods available.
Biodivers Conserv
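The Delphi feedback loop described above (summarize the group's answers, feed the summary back, let experts revise, stop after a pre-defined number of rounds or on sufficient consensus) can be sketched in a few lines. The fixed-weight revision rule modelling how experts move toward the group summary is purely an illustrative assumption, not part of the method itself:

```python
import statistics

def delphi_rounds(initial_estimates, max_rounds=3, consensus_spread=0.1, weight=0.5):
    """Illustrative sketch of the Delphi feedback loop.

    Each round, the coordination team summarizes the group's estimates
    (here: the median) and feeds the summary back; experts then revise
    their answers toward it (here: a simple fixed-weight adjustment, an
    assumption made for the sketch). The process stops after a
    pre-defined number of rounds or once the spread of opinions falls
    below `consensus_spread`.
    """
    estimates = list(initial_estimates)
    for round_no in range(1, max_rounds + 1):
        summary = statistics.median(estimates)
        # Experts reconsider their answers in light of the group summary.
        estimates = [e + weight * (summary - e) for e in estimates]
        spread = max(estimates) - min(estimates)
        if spread < consensus_spread:
            break
    return round_no, summary, estimates
```

Running this with a divergent panel shows the convergence assumption at work: the spread of estimates shrinks each round until the consensus threshold halts the process, mirroring the pre-defined stopping rules described in the table.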

Discussion and conclusions
The Frankfurt Workshop participants agreed that the diversity of knowledge needs reflected in the range of questions identified by participants in the KNEU project workshops requires a diversity of synthesis methodologies. This was confirmed by the formative evaluation of the knowledge synthesis prototype and trial assessments conducted during the KNEU project (Carmen et al. 2015; Schindler et al. 2016). We recommend the decision matrix (Table 2), which suggests appropriate evidence synthesis methodologies for different types of questions and key policy contexts, as guidance (not prescription) for those considering commissioning or undertaking a knowledge synthesis to meet their evidence needs and inform their decision making. The classification of questions can help specify what exactly is required to meet knowledge needs and inform policy making at a given stage in policy development or step in the policy cycle. In the experience of the KNEU project, many of the questions formulated by contributors from the policy community combine several aspects and dimensions and are thus unsuitable for straightforward knowledge synthesis. Hence, a thorough scoping process, in which requesters and experts iteratively negotiate the scope, scale and synthesis methodology, is of paramount importance to maximize the quality, credibility and relevance of the output (see Livoreil et al. 2016; Schindler et al. 2016). Similarly, the list of contextual factors can serve as a good starting point to specify the policy context in which the knowledge need arises. Once the type of question is clearly identified and the context specified, the table provides suggestions as to which methods might be most useful, though the specificities of each case should be considered (including, inter alia, what kind of knowledge can be accessed and how, and the level of stakeholder involvement required to resolve potential controversies and conflicts of interest).
The reliability for decision making of the outcomes provided by such methodologies will always depend on how well they are executed. Conforming to the highest standards of these methodologies, including making explicit their potential limitations and built-in biases, is crucial to providing a credible synthesis. There is likely to be a relationship between time and financial constraints and the potential reliability of a methodology. For example, keeping all other factors (the expertise and performance of the people involved) constant, if a quick and low-budget synthesis is required and simple expert consultation is employed, this is likely to be less reliable than expert elicitation using the Delphi method, which would take longer and cost more (Sutherland and Burgman 2015). A similar comparison could be made between rapid, less structured and less comprehensive literature reviews and systematic reviews. Furthermore, some problem situations require independently conducted syntheses to reduce susceptibility to bias (Pullin and Stewart 2006), while other situations require participatory, deliberative and reflective inquiry in which different interpretive frames, and the biases engendered in them, are critically probed and pitted against each other (Saarikoski 2002, 2007). In some cases we have identified more than one method, and some can be used in combination (e.g. expert consultation and systematic review). In other cases the synthesis method enables further analysis, such as cost-benefit evaluation, or may enable more accurate modelling of scenarios. These possibilities were considered in the Frankfurt Workshop, but methods that were not considered to strictly meet our definition of knowledge synthesis were omitted from the table. For example, adaptive management is an approach that might be used for many policy issues (Salafsky et al. 2001); it includes the iterative combination of knowledge synthesis (most often using collaborative methodologies, such as participatory knowledge production and/or multi-criteria analysis; e.g. Pahl-Wostl 2007; Méndez et al. 2012) with the generation of new knowledge through the selection, application and monitoring of policies or management strategies (e.g. Walters 1986; Gunderson and Light 2006). It aims at identifying flexible solutions that are resilient to errors and uncertainty (i.e., it treats policies as experiments; Walters and Hilborn 1978; Lee 1993). Hence, while the initial phase of collaborative adaptive management represents a specific type of knowledge synthesis, such an approach extends well beyond the time span of the types of questions addressed here.
In the context of a broader mechanism for biodiversity knowledge synthesis (see Livoreil et al. 2016), the type of matrix shown in Table 2 might be used alongside the scoping process all the way up to agreeing on a methodological protocol. It could help structure the discussion between requesters of a synthesis from the policy community and a knowledge co-ordinating body (i.e. the organisation or individual(s) that would put into place the commissioning and conduct of the synthesis). Such discussions would consider the policy context, the knowledge needs and structure of the question, and would agree on one or more appropriate methodologies.
We recommend that this matrix be further developed, with the inclusion of additional questions and methods in the future. Experiences with using different methods could be documented systematically to start a learning process that might also help to develop more standardized procedures in knowledge synthesis. In the medium term we hope the matrix will stimulate systematic exchange on knowledge synthesis methods and combinations of methods used.
The discussions leading to these results were set in the context of issues involving biodiversity and ecosystem services, but the authors are convinced that most of the reasoning outlined above also applies in other policy areas related to, or having an influence on, the environment. Our suggestions might even prove helpful for knowledge synthesis for decision making in general.