14.1 Introduction

There are many pressing questions and challenges in landscape ecology that have important consequences for sustainable resource management and the conservation of biodiversity. Given the spatial and temporal scopes and the resulting complexity of these issues, many landscape ecologists struggle to provide evidence-based solutions. This is especially apparent when we rely exclusively on the traditional approaches and data employed in the natural sciences to understand broad-scale phenomena that have interacting ecological and human elements. By exploring alternative ways to address the limitations of conventional observational and experimental methods, the authors of this book have used expert knowledge to complement poor data or replace missing empirical data, to cope with complexity that confounded the design and conduct of empirical studies, and to solve problems that required the coupling of knowledge generation with management or conservation decision making. The innovative and diverse array of methods illustrated in this book transcends our work in landscape ecology, providing tools and promoting insights that will be relevant within many other subdisciplines of applied ecology (Kuhnert et al. 2010; Orsi et al. 2011). Furthermore, the perspectives and breadth of studies these authors have presented support the growing consensus that the application of expert knowledge is no longer the domain of a few maverick ecologists working at the margins of methodological inquiry. The number of expert-based studies has more than doubled in the past 10 years, and expert knowledge is serving as a credible foundation for many of the most pressing and complex debates in applied ecology (e.g., O’Neill et al. 2008).

For many, their first foray into the collection and application of expert knowledge is a response to the challenge of having little or no empirical data to guide management and conservation decisions (e.g., Drew and Collazo, Chap. 5; Doyon et al., Chap. 10; Keane and Reeves, Chap. 11). When investigating a new research area or developing a decision-support tool, we direct our initial efforts towards identifying the relevant body of theory and collecting or incorporating empirical data. Often, however, we encounter data gaps that would limit the precision, accuracy, or applicability of the study or product, and we struggle with funding or time constraints that prevent the collection of empirical data to support our efforts. While wrestling with such challenges, we recognize that the professional experience and knowledge of our colleagues could potentially address many of the gaps in the theoretical or empirical knowledge. It is at these crossroads that we make the decision either to formally incorporate expert knowledge into our efforts or to initiate an empirical research program. If we decide on the former approach, a departure from the methods of spatial data collection and analyses that are familiar to landscape ecologists, the research becomes a study focused on the human subject – an area in which most scientists lack experience, and which lies outside our comfort zone. The case studies and discussions in this book will better prepare landscape ecologists to consider whether and how to incorporate expert knowledge in our research, and will better equip us to practice rigorous methods when eliciting this knowledge.

By presenting a diversity of projects, both theoretical and practical, this book offers insights that will allow ecologists to anticipate the potential applicability, advantages, and pitfalls of expert knowledge. All of the lead authors are landscape ecologists who have found themselves dependent on expert knowledge to supplement, complement, or even replace empirical data (Table 14.1). They have shared their experiences, both successes and failures, wrestling with how to elicit expert knowledge in a manner that meets scientific standards of transparency and repeatability. In this chapter, we will synthesize some of the common themes that have emerged from their experiences and highlight opportunities for further research and development (Table 14.2).

Table 14.1 Summary of the objectives and methods for eliciting expert knowledge
Table 14.2 Summary of the chapter authors’ experiences from case studies, and their suggestions for advancing the collection and application of expert knowledge

14.2 What We Learned

14.2.1 Broad Application and Acceptance of Expert Knowledge

Expert knowledge can no longer be considered a fringe or secondary information resource. Although there is a long track record for the application of expert knowledge in natural resource management, we are now observing a greater level of respect for this approach because of the increasing degree of rigor (Sutherland 2006). The broader acceptance and resulting scrutiny provided by the scientific community is as encouraging as the growth in application of expert knowledge. Elicitation and expert knowledge are now valid areas of investigation for researchers in the natural sciences. Recent studies, for example, have considered the existence of bias and uncertainty in knowledge (Johnson and Gillingham 2004; Czembor and Vesk 2009), the ability to generalize knowledge to different landscapes or time periods (Doswald et al. 2007; Murray et al. 2009), the merits and drawbacks of expert knowledge relative to empirical data (Johnson and Gillingham 2005; Pullinger and Johnson 2010), and effective practices for eliciting knowledge (Kuhnert et al. 2010). Also, such studies are being published in the most highly respected ecological journals (e.g., Low-Choy et al. 2009; Murray et al. 2009; Aspinall 2010). These are exciting and worthwhile investigations with broad application to pressing issues in landscape ecology, such as conserving threatened species and understanding the effects of climate change (O’Neill et al. 2008; Wilson et al. 2011).

The case studies presented in this book provide insights and guidance on the application of expert knowledge to specific policy and management challenges. Drawn from communities of landscape ecologists in Australia, Canada, and the United States, the authors have illustrated the application of expert knowledge across such diverse fields as wildlife management (Drew and Collazo, Chap. 5; McNay, Chap. 7; Johnson et al., Chap. 8), conservation biology (Moody and Grand, Chap. 6), risk and vulnerability assessment (Kappel et al., Chap. 13), land use planning (Williams et al., Chap. 12), forest landscape succession and modeling (Drescher and Perera, Chap. 9; Doyon et al., Chap. 10), and fire ecology (Keane and Reeves, Chap. 11). In addition, these chapters provide the reader with an overview of a wide range of elicitation methods (Table 14.1). Other authors focused less on the specific uses of expert knowledge, and instead reported on the development of more effective methods to elicit and understand the uncertainty inherent in expert knowledge (Low-Choy et al., Chap. 3; Drescher et al., Chap. 4).

Chapter authors have used expert knowledge to address gaps in the available empirical data, characterize the full state of knowledge of a given system, and expedite the delivery of a decision-support tool or of management guidance where the collection of empirical data would be impractical (e.g., for future states or events, for very large and variable landscapes, when rapid decisions are necessary). For example, McNay (Chap. 7) used expert knowledge to parameterize a predictive model of seasonal habitat use by woodland caribou. Although he had access to a considerable amount of empirical data, the key drivers of seasonal distribution and future habitat were complex and interrelated in ways that were not fully understood. In this situation, expert knowledge appeared to be the best basis for forming hypotheses and for integrating and parameterizing the available knowledge to produce predictive models. In comparison, Drew and Collazo (Chap. 5) had no empirical data to describe the distribution of the King Rail. They relied exclusively on experts to develop a set of complex and interacting hypotheses to describe the habitat relationships of this bird species and to guide the collection of empirical data.
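To make the parameterization step concrete, the sketch below shows how expert-elicited conditional probabilities can populate a single node of a Bayesian belief network of the general kind used in these two chapters. The variables, states, and probabilities are our own illustrative assumptions, not values from either study.

```python
# A minimal sketch (all names and numbers hypothetical) of an expert-elicited
# conditional probability table parameterizing one belief-network node.

# Expert-elicited conditional probability table: P(use | forage, risk).
cpt = {
    ("high_forage", "low_risk"):  0.85,
    ("high_forage", "high_risk"): 0.40,
    ("low_forage",  "low_risk"):  0.30,
    ("low_forage",  "high_risk"): 0.05,
}

# Expert beliefs about the parent variables for a given landscape unit.
p_forage = {"high_forage": 0.6, "low_forage": 0.4}
p_risk = {"low_risk": 0.7, "high_risk": 0.3}

# Marginalize over the parent states to predict P(habitat use).
p_use = sum(cpt[(f, r)] * p_forage[f] * p_risk[r]
            for f in p_forage for r in p_risk)
print(f"P(habitat use) = {p_use:.3f}")  # 0.519 with these numbers
```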

In several chapters, expert knowledge was used to prioritize conservation or land use objectives, particularly when managers believed that inaction while awaiting better empirical data was not an option. Moody and Grand (Chap. 6) worked with experts to identify focal bird species that would be representative of broader faunal associations and used these species to guide regional conservation efforts in rapidly changing landscapes. Kappel et al. (Chap. 13) worked with a large number of experts to identify marine ecosystems vulnerable to key drivers of change. However, some chapter authors, including Keane and Reeves (Chap. 11), expressed concern over the exclusive reliance on expert knowledge as a substitute for empirical methods, especially when rigorous methods were not applied during the elicitation process.

14.2.2 Investigating Expert Knowledge and Developing Rigorous Methods

An overarching theme that characterized all of the chapters was the recognition that elicitation should promote and support transparent and repeatable methods and provide for an assessment of uncertainty and bias in results. Although experts have long contributed their knowledge to support modeling and planning projects, past applications to the natural sciences had limited utility or acceptance because of non-repeatable and poorly developed methods (Sutherland 2006). Too often, elicitation simply involved an open invitation to discuss a particular subject with little to no development of the approach, documentation of the participants or the elicitation process, or use of rigorous methods (Johnson and Gillingham 2004). Poor research design often results in knowledge that has little internal consistency or external validity, and this has harmed the credibility of experts as information resources and active participants in science-based decision making.

Working properly with experts is not necessarily a simple or inexpensive process from either a time or a financial perspective. Drescher et al. (Chap. 4) allocated 12 months to prepare for the elicitation and Kappel et al. (Chap. 13) invited 199 experts to participate in a survey of the effects of 58 human stressors on 15 marine ecosystems. Many activities occur during the design and implementation of expert-based studies. Common recommendations suggest allocating enough time to refine the research questions, identify and characterize the experts, draft the elicitation questions, test and revise (i.e., pilot) the elicitation process and materials before collecting data, develop a strategy to motivate and maintain participation through what is often a long and demanding process, and assess the uncertainty and perhaps even the validity of the elicited knowledge (Low-Choy et al. 2009; Knol et al. 2010). The increased emphasis on design and planning reflects a growing understanding and appreciation of expert bias, the chance of miscommunication, a variety of types of error, and participant burnout. By delving into the elicitation literature and collaborating with colleagues in the social sciences, landscape ecologists are discovering that many of these potential problems can be anticipated and mitigated through proper study design.

Careful attention to detail is required to develop effective and acceptable approaches for elicitation and for reporting the results. As is the case with the collection of empirical data, researchers and practitioners must develop methods that meet a high standard of scientific rigor. Chapter authors demonstrated a range of techniques that can be used to elicit and formally document expert knowledge. Some used computer-based methods to record the expert’s knowledge (Low-Choy et al., Chap. 3; Drescher et al., Chap. 4), whereas others used more generic survey tools that included questionnaires (Doyon et al., Chap. 10; Moody and Grand, Chap. 6) or focus groups (McNay, Chap. 7; Table 14.1). Low-Choy et al. (Chap. 3) described innovative elicitation software that allowed the experts to relate their knowledge to a specific landscape and continually evaluate the consistency and logical validity of their responses. This work, in particular, highlighted the recent methodological advances that have increased the rigor of eliciting expert knowledge.

The literature provides some guidance on best practices, potential biases, and the general steps used for elicitation (Kadane and Wolfson 1998; Low-Choy et al. 2009; Knol et al. 2010; Kuhnert et al. 2010). Although these past works are a useful starting point, the studies in this book highlight the apparent need for an elicitation process and a method of analysis that meets the specific objectives of a project. Not by design but by chance, we find that a diversity of methods was adopted in the studies described in this book. This variation is likely representative of the range of approaches available and the creativity being exercised by researchers and practitioners who are working with experts and applying their knowledge to answer difficult questions and solve difficult problems in landscape ecology. We believe strongly that the elicitation method should be crafted to meet study objectives; however, the existence of such a large number of approaches suggests the need for further research to improve our understanding of these methods and provide stricter guidance on the best elements to use in a given elicitation process.

Although the chapters in this book differ in the problem being studied, the geography of the study area, and the elicitation process (Table 14.1), there is nonetheless a set of consistent steps for developing a transparent and repeatable method for collecting and applying the expert knowledge. We present those steps in a generic framework (Fig. 14.1) that can serve as a starting point for inexperienced landscape ecologists who are interested in planning projects focused on expert knowledge or that include an element of expert knowledge. McBride and Burgman (Chap. 2) and the references therein expand on those steps with more detailed guidance.

Fig. 14.1

A generic framework for study design and the elicitation of expert knowledge. The grey arrow represents linked processes for certain measurement techniques, the dotted arrow represents a process not appropriate for all study designs, and the dashed arrows represent feedback mechanisms that should be used in the presence of excessive uncertainty or weak validation

Previous researchers have sometimes overlooked the need to clearly define the characteristics of an “expert” (e.g., Petit et al. 2003; Van der Lee et al. 2006); most chapter authors were therefore careful to develop and document a clear definition of the “expert” and the domain expertise required to meet the project objectives. However, these definitions varied among studies; we found definitions based on the number of years of experience in a particular discipline or professional duty or study area, and definitions based on an index of expertise, such as the number of publications on a relevant subject. Other researchers have used even less direct measures of expertise, including membership in expert panels or committees (O’Neill et al. 2008).

Once defined, the experts must be sought out and invited to participate in the study. Authors in this book often used informal peer-nomination processes, such as recognition by colleagues or professional acquaintances. More formal approaches included chain referral (“snowball”) sampling, in which the initial group of experts identified by the research team nominated additional participants (Chap. 8). One group of authors (Kappel et al., Chap. 13) used computer databases such as Google Scholar to search for individuals who met their predefined definition of expertise. Such tools might be especially useful when a large pool of experts is required across a number of domains of knowledge.
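The sketch below illustrates the chain-referral logic in its simplest form. The nominate() function and the toy referral network are hypothetical stand-ins for whatever nomination records a project actually keeps; they are not drawn from any chapter's protocol.

```python
# A minimal sketch of chain-referral ("snowball") recruitment under the
# assumption that each recruited expert can nominate further candidates.
from collections import deque

def snowball_sample(seed_experts, nominate, max_experts=30):
    """Recruit experts by chain referral, starting from a seed pool."""
    recruited = set(seed_experts)
    frontier = deque(seed_experts)
    while frontier and len(recruited) < max_experts:
        expert = frontier.popleft()
        for nominee in nominate(expert):
            if len(recruited) >= max_experts:
                break
            if nominee not in recruited:
                recruited.add(nominee)
                frontier.append(nominee)
    return recruited

# Toy referral network: each expert names colleagues with relevant expertise.
referrals = {"A": ["B", "C"], "B": ["C", "D"], "C": [], "D": ["E"], "E": []}
print(sorted(snowball_sample(["A"], lambda e: referrals.get(e, []))))
# -> ['A', 'B', 'C', 'D', 'E']
```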

Chapter authors reported a wide range of techniques for collecting, analyzing, and in some cases evaluating the reliability and uncertainty of expert knowledge. We found approaches with a relatively long track record in the elicitation literature, such as the analytical hierarchy process (Chap. 8), as well as project-specific computer-based applications (Chap. 4). Some approaches for collecting knowledge were more generic and were potentially less sensitive to the biases and sources of imprecision inherent to expert knowledge. These methods included the use of facilitated focus groups and structured questionnaires (Table 14.1). Low-Choy et al. (Chap. 3) discussed the application of an innovative software tool, Elicitator, for collecting and analyzing expert knowledge. This tool allowed the experts to explore their assumptions and the logical consistency of the knowledge they provided when describing the distribution and habitat requirements of plants or animals. The techniques for knowledge collection and analysis were sometimes coupled, as in Chap. 3, but were sometimes discrete. For example, the analytical hierarchy process and Elicitator integrated the processes by which expert knowledge was collected and analyzed. In contrast, the Bayesian belief networks of McNay (Chap. 7) and of Drew and Collazo (Chap. 5) were built using very different methods for eliciting prior probabilities from their respective expert participants.
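For readers unfamiliar with the analytical hierarchy process, the sketch below shows its core computation: one expert's pairwise comparisons become priority weights, and a consistency ratio flags illogical judgments. The criteria and comparison values are illustrative assumptions, not data from Chap. 8.

```python
# A minimal AHP sketch: priority weights from a Saaty-style pairwise
# comparison matrix, plus the standard consistency check.
import numpy as np

# Pairwise comparisons for three hypothetical habitat criteria
# (forage, cover, road avoidance); A[i, j] = importance of i relative to j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights are the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI, where RI = 0.58
# is Saaty's random index for n = 3; CR < 0.10 is conventionally acceptable.
n = A.shape[0]
cr = ((eigvals.real[k] - n) / (n - 1)) / 0.58

print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```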

Throughout the elicitation process, the research team should continuously verify the logic and consistency of the method, the elicitation scores, and the preliminary results. This can be accomplished through in-progress questionnaires or diagnostic tools that elicit process-related feedback from the experts and other project participants (e.g., research assistants, facilitators). The final step in the elicitation process is an assessment of the validity and uncertainty of the elicited expert knowledge. Uncertainty has a number of specific dimensions, as discussed by McBride and Burgman (Chap. 2), but generally represents the degree of variation in the answers elicited from a pool of experts as well as the resulting range in predictions or guidance provided by expert-based models or decision-support tools. Although there are some useful applications of consensus-based approaches for elicitation and decision making in landscape ecology, these approaches do not identify inter-expert variance, and there is growing agreement that this uncertainty should be documented rather than suppressed (Aspinall 2010). Validation compares the experts’ individual or aggregate responses against some measure of truth when predictive accuracy is required. Verification and uncertainty assessment are inherent to all expert-based processes and indeed to all empirical studies; however, validation is not always required or feasible (Fig. 14.1).
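As a minimal illustration of documenting rather than suppressing inter-expert variance, the sketch below reports the spread of a pool of responses alongside their central tendency; the suitability scores are hypothetical.

```python
# A minimal sketch of summarizing inter-expert variation instead of
# collapsing responses to a single consensus value. Scores are hypothetical
# elicitation responses to one question, on a 0-1 suitability scale.
import statistics

responses = {"expert_1": 0.70, "expert_2": 0.55, "expert_3": 0.90,
             "expert_4": 0.60, "expert_5": 0.25}

scores = list(responses.values())
summary = {
    "median": statistics.median(scores),          # central tendency to report
    "stdev": round(statistics.stdev(scores), 3),  # inter-expert spread
    "range": (min(scores), max(scores)),          # full disagreement envelope
}
print(summary)  # report the spread alongside, not instead of, the estimate
```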

A major advance highlighted in this book was the increased effort to identify, quantify, and account for the uncertainty inherent to elicited knowledge (e.g., Chap. 6; Chap. 8; Chap. 9). Several chapter authors noted the important distinction between aleatory uncertainty (i.e., uncertainty inherent in the nature of the system being studied) and epistemic uncertainty (i.e., uncertainty inherent in expert knowledge of the system). Recognizing this difference and its significance allowed them to develop methods to reduce epistemic uncertainty, primarily by paying much closer attention to the unique experience and judgments of individual experts (Chap. 9). A number of case studies also attempted to provide some means to formally evaluate the reliability of expert input both during and after the elicitation process (Chap. 5; Chap. 9; Chap. 10). Providing timely feedback to experts can allow them to correct their own responses (Murray et al. 2009). Techniques and tools are also available to improve the internal consistency between an expert’s knowledge of system components and their expectations of the overall system behavior (Chap. 3).

Empirical data, where available, were used to validate the accuracy of expert knowledge; McNay (Chap. 7), Johnson et al. (Chap. 8), and Drescher and Perera (Chap. 9) made such comparisons. These authors assumed that the empirical data were obtained for situations similar to those on which the experts based their knowledge and that they were also precise and unbiased. In other chapters, cross-validation with empirical data was either unnecessary or impossible. For example, Drew and Collazo (Chap. 5) used expert knowledge to generate hypotheses about bird distributions and to design a population monitoring strategy. The data collected through annual monitoring were subsequently used to update and refine the model rather than to validate the model. Williams et al. (Chap. 12) and Kappel et al. (Chap. 13) used experts to address questions that focused on integrated socioeconomic and ecological relationships and that considered many criteria, some of which were qualitative. There is no set of empirical observations that can serve to assess such complex or future processes, but model plausibility and internal consistency can nonetheless be verified using independent reviewers and other expert groups. Also, monitoring and active adaptive management experiments can both provide validation for expert-based decisions or predictions, but only once those data are collected.
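The sketch below illustrates one simple form such a comparison can take: a rank correlation between hypothetical expert suitability scores and observed site use. It shows the general idea only, not the method of any particular chapter, and a rank correlation is just one of several reasonable agreement measures.

```python
# A minimal validation sketch: Spearman rank correlation between expert
# scores and empirical observations, via the classic d^2 formula (valid
# here because the toy data contain no ties).

def ranks(values):
    """Rank values 1..n, smallest first (no tie handling needed here)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho: 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(x)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

expert_scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
observed_use = [12, 9, 11, 6, 4, 7, 3, 2, 1, 0]  # e.g., detections per site
print(f"rho = {spearman(expert_scores, observed_use):.2f}")  # rho = 0.95
```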

The general steps for study design (Fig. 14.1) provide a robust starting framework, but more importantly, suggest that the elicitation and use of expert knowledge requires the same level of forethought and methodological rigor as empirically based studies. Indeed, many of the chapter authors implicitly or explicitly advocate for the development of better practices that can be used when eliciting and applying expert knowledge. Johnson et al. (Chap. 8) make such a plea when they report the results of a study that failed to adhere to any of the steps described in Fig. 14.1. To support such an approach, Low-Choy et al. (Chap. 3) provide a method and tool whose structure explicitly supports the use of good practices and that guards against many of the biases encountered during elicitation.

Beyond developing a defensible process for elicitation and meeting the direct objectives for using expert knowledge, a number of chapters highlighted methods designed to collect and analyze metadata that described the expert participants (Chap. 5; Chap. 8; Chap. 13). These ancillary data about the experts facilitated subsequent assessment of the elicited knowledge. Capturing the professional identity of the individuals allows modelers to explore the range and variability of the group’s collective experience (Doswald et al. 2007). The research team can use these metadata to explore the reasons for outlier opinions, propose alternative hypotheses based on different groupings of each expert’s unique perspectives or domains of experience, and assess the representativeness of the collected knowledge relative to the application setting.
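The sketch below shows one minimal way to structure such metadata so that individual responses remain attributable and can be grouped by background; the record fields and values are hypothetical illustrations, not the schema of any chapter's study.

```python
# A minimal sketch of an expert-metadata record that keeps responses
# attributable to individuals and groupable by professional background.
from dataclasses import dataclass, field

@dataclass
class ExpertRecord:
    expert_id: str
    years_experience: int
    domain: str                      # e.g., "wetland birds", "fire ecology"
    region: str                      # geographic extent of direct experience
    responses: dict = field(default_factory=dict)  # question id -> value

records = [
    ExpertRecord("E01", 22, "wetland birds", "coastal plain", {"q1": 0.80}),
    ExpertRecord("E02", 6, "wetland birds", "piedmont", {"q1": 0.30}),
]

# Group responses by region to ask whether an apparent outlier reflects a
# genuinely different landscape rather than an elicitation error.
by_region = {}
for r in records:
    by_region.setdefault(r.region, []).append(r.responses["q1"])
print(by_region)  # {'coastal plain': [0.8], 'piedmont': [0.3]}
```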

14.2.3 Used Wisely, Experts Offer Valuable Contributions

Expert knowledge has often been thought of as temporary or substitute data for situations or questions in which empirical data are lacking. Although this is a valid and important use of expert knowledge, there are some applications in which expert knowledge may be more useful than empirical data. For example, experts offer many advantages for the modeling of complex systems, hypothesis generation, and reaching consensus decisions for management and conservation actions (see Fig. 1.1 in Chap. 1). In particular, the use of experts from a range of domains can reveal key aspects of a situation that were not known to the researchers. Landscape ecology addresses questions that pertain to broad spatial and temporal domains with many interacting cross-scale processes and elements. Such complex relationships can be difficult to quantify and understand using empirical data collected through traditional experimental designs (Hargrove and Pickering 1992). If elicited carefully, expert knowledge can offer a broader geographic and temporal perspective than the typical 1- to 2-year studies that form the backbone of most empirical ecological and environmental data. Furthermore, experts can debate issues of environmental variability and data representativeness and can formulate hypotheses that conform to their broader combined experience so as to direct future investigations. However, as several chapter authors reiterate, the choice of experts for informing any of these processes is critical and should not be left to chance or opportunity.

Although we have emphasized the advantages of eliciting and using expert knowledge for applications in landscape ecology, expert knowledge is not without error, bias, and inaccuracy. Furthermore, expert knowledge is not a solution for all problems or an answer to all questions when empirical data are lacking: if there are no experts, there is no expert knowledge. Such was the finding of Kappel et al. (Chap. 13), who had too few experts to document the vulnerability of some marine ecosystems. Drew and Collazo (Chap. 5) also highlighted instances where the limited number of experts drawn from a narrowly defined domain (federal wildlife refuges) was not always sufficiently informative of the landscapes or species to be modeled. Where knowledge is lacking, experts might begin to contribute their opinions. The difference between expert knowledge and expert opinion (see Chap. 1) is not always obvious; even the authors within this book intermixed these terms, and experts themselves are not always aware of the limits of their knowledge until they are asked to quantify their degree of certainty. When elicitation focuses on a participant’s domain of expertise, such that they reference events or processes that have occurred within their direct personal experience, then knowledge can be documented. However, when participants must extrapolate beyond their domain of expertise, whether in time, space, or subject, their knowledge (by definition) incorporates more characteristics of conjecture, hypothesis, and opinion. This is not to say that expert opinion is of no value. Where direct knowledge is lacking, experts may still provide an educated and useful opinion on a particular question. However, landscape ecologists should carefully distinguish between knowledge and opinion in their analyses. This distinction is especially important because uncertainty will likely be much higher when based on opinion rather than knowledge. As an example, opinion may be ineffective for parameterizing quantitative models in which precision and accuracy are important to direct conservation activities (Chap. 8), but may be useful for developing hypotheses about ecological relationships (Chap. 5) or in risk analyses for future or unobserved events (Chap. 13).

14.3 Our Recommendations for Landscape Ecologists

Methods to collect and apply expert knowledge to questions and problems in landscape ecology are rapidly evolving. The contributions in this book demonstrate that the elicitation and use of knowledge of ecological systems and processes has moved from an ad hoc practice to a formalized and rigorous set of defensible methods. As with any scientific endeavor, however, there is room for refinement, improvement, and innovation (Table 14.2). Furthermore, the authors represented here constitute only a small portion of the researchers who are directly applying expert knowledge within the field of landscape ecology. We strongly suspect that many landscape ecologists remain unaware of the importance of rigor in the design of an elicitation study and of the basic elements of the process that are identified in Fig. 14.1. Also, although expert knowledge continues to play an important and growing role in the application of landscape ecological principles, its value remains ambiguous and its use remains contentious among the broader community of ecologists. To improve this situation, we have provided some guidance for best practices and have suggested areas of further research that will be necessary to improve the science of elicitation and the practice of application when developing studies or projects premised on expert knowledge (Table 14.2).

14.3.1 Become Informed: Review the Literature Prior to Eliciting Knowledge

Several authors in this book identified gaps in their own training, which left them unprepared for the complexity of designing, facilitating, and interpreting results from expert elicitations. Most people trained in the natural or life sciences have very little exposure to the assumptions that underlie research on human subjects and to the methods used to collect and apply expert knowledge. There is a wealth of existing literature, however, that provides well-founded guidance on defensible best practices for eliciting and using expert knowledge. Some of this work has focused on ecological applications (e.g., Low-Choy et al. 2009; Kuhnert et al. 2010), but other types of practitioners and academic disciplines have a longer history of using expert knowledge well. Thus, we urge the uninformed reader to explore the literature on statistics, health sciences, business, policy sciences, and psychology (Kadane and Wolfson 1998; Aspinall 2010; Knol et al. 2010). Many of the authors in this book have benefited from working directly with colleagues in the social sciences who might not have fully grasped the subject of their studies, but who understood very well the general process for effective elicitation. Just as we might seek out help from a colleague with advanced training in statistics, we must be open to the opportunities that experts in elicitation can provide, even if these experts are found in fields of study with few links to ecology. This message was delivered by McBride and Burgman (Chap. 2) and others (Table 14.2), who argued that improving the application of expert knowledge within landscape ecology will require greater awareness of the tools that are available, as well as the skills to select and tailor these tools to meet the needs of a given project.

For those wishing to learn more, the chapters and citations in this book identify many useful resources. Though this book is not a how-to manual for eliciting expert knowledge, each chapter offers valuable recommendations for motivating expert participants, improving communication, minimizing bias (Chap. 3; Chap. 4), documenting uncertainty (Chap. 6; Chap. 8; Chap. 9), and evaluating the accuracy of expert knowledge (Chap. 7; Chap. 8; Chap. 9). We have the following recommendations to improve the level of awareness and the capacity for self-learning by ecologists who are interested in applying expert knowledge to questions and problems in landscape ecology:

  • Publication of special issues in journals of applied ecology to highlight and promote the effective and proper use of expert knowledge.

  • Formal recognition of points of contact for both practitioners who have used expert knowledge and persons with expertise in eliciting expert knowledge. This “community of practice” would serve as a forum for discussing and guiding methods and for mentoring ecologists who are interested in applying expert knowledge.

  • Development of a textbook or best practices manual that focuses on the most current methods for effectively working with ecological experts and eliciting expert knowledge. This text would consider all elements of gathering and using expert knowledge and would focus on the methodological hurdles or problem areas most likely to confront applied ecologists (Fig. 14.1). We suspect, however, that such a discipline-focused text is premature. We recognize that over the past 10 years ecologists have made significant progress in appreciating the complexity of expert-based studies and in applying better methods of elicitation. Also, there is a substantial literature from other disciplines that can direct ecologists in the development and proper application of methods, and indeed, in understanding the nature of expert knowledge (Cooke 1991; Meyer and Booker 1991; O’Hagan et al. 2006; Collins and Evans 2007). However, considering the unique challenges faced by landscape ecologists, principally the interacting effects of spatial and temporal scale as well as process heterogeneity, we recommend the further refinement and testing of new and innovative methods (e.g., Chap. 3) and additional case studies and applications. Such work would allow a better understanding of the sources of bias and uncertainty inherent to landscape ecology and provide for a stronger foundation for a discipline-specific text. The increasing rate of publications in this area suggests that the science of expert knowledge, as applied to landscape ecology, may mature to a sufficient level to support such a text over the next 3–5 years.

14.3.2 Expand the Available Toolsets to Support Rigorous Elicitation of Knowledge

Authors in this book have demonstrated considerable innovation in developing methods that are effective for eliciting expert knowledge. Some notable advances of particular relevance to applications in landscape ecology include the improved integration of statistical analysis and GIS data within the elicitation process (e.g., Chap. 3). Such spatially explicit approaches will be more intuitive to landscape ecologists, making the knowledge reporting less abstract (Chap. 6). Despite these advances, however, all authors in this book concur that refinement and development of elicitation methods is a key area in need of further research (Table 14.2).

We noted considerable variation in the number of experts employed across the studies in this book and in previous research (e.g., from one expert in Seoane et al. 2005 to 58 in Chap. 13), and with the exception of certain knowledge areas in which no experts were identified, there was little justification of sample size. The social science literature provides some guidance on the best number of experts for an elicitation, but this advice is based on observations of group dynamics and biases (Aspinall 2010). In ecology, it seems likely that heterogeneity in the environments or ecological processes for which experts are knowledgeable, as well as variation in knowledge among expert participants, will affect recommendations for the minimum number of participants needed to meet the requirements for statistical rigor (Chap. 5). Also, the number of experts involved in a project is likely to require trade-offs among the breadth of the knowledge domain, the availability or number of experts working in that domain, and the effort and expense necessary to identify and recruit experts and to elicit their knowledge. Regardless of the practical limitations that confound the issue of sample size, some guidance is needed on when the use of too few experts threatens the validity of a study’s conclusions.
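The statistical intuition behind such guidance can be made concrete with a back-of-the-envelope calculation: if expert estimates were independent with a common spread, the standard error of their pooled mean would shrink with the square root of the number of experts. Shared training and correlated biases erode this gain in practice, so the sketch below (with an arbitrary illustrative sigma) marks an optimistic upper bound, not a sample-size rule.

```python
# A back-of-the-envelope sketch of how pooled-estimate precision scales with
# the number of experts, under the (optimistic) independence assumption.
import math

sigma = 0.20  # assumed inter-expert standard deviation of an elicited value
for n in (1, 3, 5, 10, 20):
    se = sigma / math.sqrt(n)  # SE of the pooled mean if errors were independent
    print(f"n = {n:2d} experts -> SE of pooled mean ~ {se:.3f}")
```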

Authors in this book were nearly unanimous in reporting the need to accurately identify and characterize expertise as well as the need for methods that better record and incorporate uncertainty during the elicitation process (Table 14.2). Clearly, useful expert knowledge is premised on identifying the correct pool of experts and differences in suitability among experts within that pool, but this step is often overlooked or is based on ad hoc criteria. Furthermore, there is less guidance on how to identify good experts a priori relative to the assessment and weighting of expert knowledge that occurs during and after elicitation. More research into the implications of mis-specifying the definition of an expert and of involving too few experts is clearly warranted. We have the following recommendations for further study:

  • Development and testing of elicitation tools that allow experts to document their knowledge using an easy-to-understand, transparent, and repeatable process. These tools should accommodate all contexts of expert knowledge (Table 14.1, Chap. 1), including problem synthesis, hypothesis building, and model parameterization. Such tools should have inherent mechanisms that guard against bias and inconsistent logic or that allow experts to test for and correct such problems (e.g., Chap. 3).

  • Development of tools or elicitation strategies that better match the spatial and temporal experience and knowledge of an expert to the proposed questions. For many problems in landscape ecology, we are seeking knowledge that informs our understanding of large-scale processes across landscapes or regions, whereas the experts may be more familiar with patch-level phenomena. Currently, we have a limited understanding of the implications of such scale mismatches or how to scale up expert knowledge.

  • Studies to understand the implications of the definition of an expert, the linkages between this definition and the problem being studied, the number of experts involved in a study, and the uncertainty in expert knowledge. A broader definition of the expert might yield a larger sample of experts, but would also introduce a greater breadth of uncertainty into the expert knowledge. The implications of such decisions for the efficiency and reliability of the elicited knowledge should be investigated.

14.3.3 Continue to Critically Evaluate and Test Expert Knowledge

Ideally, every expert-based project should incorporate a critical analysis of the elicitation methods, the information elicited, and the reliability of the resulting decision-support products. Expert knowledge can be cross-validated against other expert sources (Chap. 13) or empirical data (Chap. 7; Chap. 8). However, there is no formal guidance for practitioners or researchers as to when validation of expert knowledge is necessary and, when it is necessary, how best to proceed with the validation. We suspect that the methods and rules for validation will prove to be project-specific, but some perspective on the scope of the available methods is necessary to help researchers understand this problem and choose an appropriate solution. Although validation may not be necessary or even possible in all cases, evaluation and verification of the elicitation process should be a structured component of all projects that depend on expert knowledge. Further research and debate are required to define the necessity and expected outcomes of validation.

Understanding the variation in expert knowledge and the possible biases that underlie this variation appears to be an area of increasing interest in the ecological literature (Doswald et al. 2007; Hurley et al. 2009; Chap. 13). The collection and analysis of detailed quantitative information from individual experts represents a large advance over approaches that capture only aggregate components of knowledge (e.g., consensus results). There are likely strong links between the selection of experts and the resulting uncertainty in knowledge. Thus, by characterizing individual experts and maintaining the ability to distinguish their personal responses within the pool of elicited information, we can better understand the sources of uncertainty (Chap. 8; Chap. 11). There remains, however, much room for innovation and further refinement of methods to evaluate the knowledge gathered through elicitation. Uncertainty is a familiar concept to the practitioners and researchers who apply expert knowledge. Indeed, uncertainty was well categorized (e.g., Chap. 2) or was at least recognized within many of the chapters. Despite the recognition of uncertainty as a pivotal concept in evaluating and applying expert knowledge, approaches for documenting uncertainty remain largely ad hoc. The authors represented in this book were nearly unanimous in reporting that the science of eliciting and using expert knowledge would be improved if the elicitation methods directly categorized, measured, and incorporated the uncertainty inherent in the knowledge (Table 14.2). Achieving this goal would require consistent and standardized measures of uncertainty in knowledge and a better understanding of the sources of this uncertainty, especially in the context of expert selection, and would require guidance on how best to manage and accommodate uncertainty. Specific recommendations include:

  • Establish guidelines to characterize the reliability of expert knowledge. Recognizing that there is a broad range of applications and associated requirements for the precision and accuracy of knowledge as well as levels of involvement by experts, such guidelines would support the judicious application of expert knowledge to a given problem or question. An assessment system that positions the elicited information along a spectrum ranging from opinion to knowledge would be particularly valuable.

  • Develop better methods and a consistent measurement scale for quantifying the degree of uncertainty in knowledge both among experts and within an individual’s elicited responses. In theory, this measurement scale would partition uncertainty into the three main types: aleatory (due to the system’s inherent complexity), epistemic (due to limitations of the expert’s knowledge), and linguistic (due to the inherently subjective nature of the words an expert uses to describe their knowledge). Such divisions would also improve the elicitation process. In addition, the quantification of uncertainty would allow researchers to weight the individual responses to account for their relevance in the context of a specific application or question; one simple weighting scheme is sketched below.
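As a deliberately simple example of the weighting idea in the second recommendation, the sketch below pools hypothetical expert estimates by inverse-variance weighting, so that responses with lower stated uncertainty contribute more. It is one of many possible schemes, not a prescribed method.

```python
# A minimal sketch of inverse-variance weighting of expert responses.
# Estimates and self-reported standard deviations are hypothetical.
estimates = [0.60, 0.40, 0.75]  # each expert's elicited value
stdevs = [0.05, 0.20, 0.10]     # each expert's stated uncertainty

weights = [1 / s ** 2 for s in stdevs]  # lower uncertainty -> higher weight
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
print(f"uncertainty-weighted estimate = {pooled:.3f}")  # 0.619
```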

Finally, through the diversity of the projects, the rigor of the different methods, and the insights of the authors, this book illustrates the exciting and valuable progress that is currently being made in the application of expert knowledge to answering the questions and solving the problems faced by landscape ecologists. Although there remains much room for innovation and improvement, the potential value of expert knowledge that is collected using a rigorous study design is high. This value is likely to be increasingly evident both in the short term, as a stop-gap measure when data and formal knowledge are insufficient to support management decisions, and in the long term, as a way to complement and supplement empirical data and formal knowledge. This compilation of learning and experience suggests that there are few bounds to the effective and reliable application of expert knowledge. Where experts are available and a proper method is employed, neither the expert’s discipline, the geography of the study area, nor the subject of study should prevent advancement of our understanding or the development of solutions to the complex problems faced by landscape ecologists. For these reasons, we are confident that applications of expert knowledge in landscape ecology will continue to expand and that the science of eliciting and using expert knowledge will continue to improve.