Environmental Management, Volume 47, Issue 2, pp 161–172

The Route to Best Science in Implementation of the Endangered Species Act’s Consultation Mandate: The Benefits of Structured Effects Analysis

Authors

  • Dennis D. Murphy, Department of Biology, University of Nevada
  • Paul S. Weiland, Nossaman LLP

DOI: 10.1007/s00267-010-9597-9

Cite this article as:
Murphy, D.D. & Weiland, P.S. Environmental Management (2011) 47: 161. doi:10.1007/s00267-010-9597-9

Abstract

The Endangered Species Act is intended to conserve at-risk species and the ecosystems upon which they depend, and it is premised on the notion that if the wildlife agencies that are charged with implementing the statute use the best available scientific information, they can successfully carry out this intention. We assess effects analysis as a tool for using best science to guide agency decisions under the Act. After introducing effects analysis, we propose a framework that facilitates identification and use of the best available information in the development of agency determinations. The framework includes three essential steps—the collection of reliable scientific information, the critical assessment and synthesis of available data and analyses derived from those data, and the analysis of the effects of actions on listed species and their habitats. We warn of likely obstacles to rigorous, structured effects analyses and describe the extent to which independent scientific review may assist in overcoming these obstacles. We conclude by describing eight essential elements that are required for a successful effects analysis.

Keywords

Endangered species, Consultation, Effects analysis, Best available science

Introduction

Shortly after taking office in 2009, President Obama proclaimed that it was his intention “to restore science to its rightful place in the Endangered Species Act.” Both environmentalists and scientists were buoyed. The new administration clearly intended to signal an end to an era when political appointees apparently manipulated the analyses of staff scientists, while at the same time reaffirming the central role of science in decision-making under the Act. One of the Bush administration’s late-term actions had been to amend the regulations that implement the interagency consultation provisions of the federal Endangered Species Act (ESA 1973); these regulations require the U.S. Fish and Wildlife Service and National Marine Fisheries Service to conduct effects analyses and issue decisions in the form of scientifically informed biological opinions. The Bush administration action substantially reduced the circumstances in which federal agencies were required to consult with the federal wildlife agencies when agency actions may affect threatened or endangered species. Reduced consultation raised fears of increased impacts on federally protected plants and animals and the habitats that support them.

Many observers presume that the current Administration’s commitment to a return to pre-Bush era implementation of the ESA will be a return to meeting the intent of the United States Congress—that agency decisions respecting imperiled species again will be informed by reliable scientific information. But, it is worth asking whether realizing pre-Bush era implementation of the Act will be enough to accomplish the current President’s intent. Were decisions under the Act in the 1990s and earlier actually guided by the best available science as required by law? More specifically, did the federal wildlife agencies actually utilize the best available science to inform the decisions that they made under the interagency consultation provisions of the Act? And, with a return to broader consultation requirements, can we count on the agencies to now and in the future use good science to inform their decisions?

We’re not so sure. Congress plainly intended that knowledge from science should inform ESA implementation. The spare language of the statute sets a high bar for the use of science. In making determinations and findings under the Act, federal agencies must “use the best available scientific and commercial data.” Reliable information drawn from science needs not only to be cited, amassed, and then presented by the wildlife agencies; it actually needs to be “used,” that is, exercised to inform the required interagency consultation decision-making process, which starts with assembling reliable information and proceeds to agency decisions. Jasanoff (1990) describes the process by which regulatory agencies take scientific knowledge and assess and analyze it in making regulatory determinations as a “trans-scientific” activity. Under the interagency consultation provisions of the ESA, the obligatory trans-scientific step is referred to as effects analysis. The U.S. Forest Service refers to its version of the exercise as consistency review. In applications under the authority of the U.S. Environmental Protection Agency (EPA) and other agencies concerned principally with human health and safety, this activity is called risk assessment. But we contend that, while the federal wildlife agencies muster reliable information, and often the best information available, those agencies less frequently employ that information in a rigorous, structured effects analysis.

The effects analysis concept is the essential element that is necessary to fulfill Congress’ best science mandate under the ESA. Although the wildlife agencies have explicitly required themselves to conduct effects analysis in making consultation determinations (U.S. Fish and Wildlife Service and National Marine Fisheries Service 1998), they follow no consistent approach and too often leave essential steps unaddressed. We offer a description of an effects analysis framework that is applicable in the context of interagency consultation for protected species and also more broadly in conservation planning. We warn of the potential for recurring errors by staff biologists from the federal wildlife agencies (and, to an extent, academia) in conducting effects analyses, errors that can readily compromise the application of the best available science to efforts to protect the species most at risk of extinction. We then turn to the role of independent science review in interagency consultation to, among other things, avert the errors described. And, we describe eight essential elements that are required for a successful effects analysis. We believe that to meet the current administration’s intent for good science to guide conservation actions under the federal Endangered Species Act, a new commitment to rigorous, structured effects analysis is essential.

The Effects Analysis Concept

The purpose of the Endangered Species Act is to conserve at-risk species and the ecosystems upon which they depend. The law includes provisions for listing species as either threatened or endangered, and provides mechanisms for protecting and, ultimately, recovering such species. One mechanism is interagency consultation, which is a process mandated by provisions in section 7 of the Act. Those provisions require all federal agencies, in consultation with and with the assistance of the federal wildlife agencies, to insure that any action authorized, funded, or carried out by such agency is not likely to jeopardize the continued existence of any listed species or result in the destruction or adverse modification of critical habitat of such species. Actions subject to consultation are varied and numerous, ranging from operation of a major water project by the Bureau of Reclamation that affects a migratory route used by steelhead (Oncorhynchus mykiss), to construction of a new highway interchange subsidized by the Federal Highway Administration that affects a roost used by Indiana bats (Myotis sodalis), to filling of wetlands on private land authorized by the U.S. Army Corps of Engineers that affects rearing sites used by the California tiger salamander (Ambystoma californiense).

The federal wildlife agencies have promulgated regulations to implement the ESA’s consultation provisions that require evaluation of the effects of any proposed action undertaken by a federal agency on a listed species or the designated critical habitat of that species (Department of the Interior and Department of Commerce 2009). The regulations further define the effects of the action as “the direct and indirect effects of an action on the species or critical habitat, together with the effects of other activities that are interrelated or interdependent with that action, which will be added to the environmental baseline.” Evaluation of the effects of federal agency action that has the potential to harm a listed species is the focus of an effects analysis. The ultimate purpose of the effects analysis is to inform the determination of the Fish and Wildlife Service or the National Marine Fisheries Service as to whether a proposed action is either likely or unlikely to jeopardize the continued existence of a listed species or result in the destruction or adverse modification of its critical habitat.

The effects analysis begins with an assessment of the status of an affected species and its habitat in order to ascertain the environmental baseline, which encompasses the past and present impacts of all other actions and environmental stressors within the action area that affect the listed species and its habitat. The next step is to assess the effects of the proposed action against the backdrop of the environmental baseline. The effects of the action cannot be evaluated in a vacuum; the environmental baseline will, in almost all cases, materially influence whether the effects of the action are likely to jeopardize the species’ continued existence or result in the destruction or adverse modification of its critical habitat. When the federal wildlife agencies determine the baseline environmental conditions that affect a species and the effects of the action on that species, they must do so consistent with applicable legal requirements, including the best available science requirement.

Requirement to Use the Best Available Data

The interagency consultation provisions of the ESA state that action agencies (such as the Army Corps of Engineers and the Forest Service) and the federal wildlife agencies must “use the best scientific and commercial data available” in fulfilling their respective requirements. The requirement derives from a predecessor to the ESA, the Endangered Species Conservation Act of 1969, which directed the Secretary of the Interior to make species listing decisions on the basis of the best scientific and commercial data available. When Congress enacted the modern Endangered Species Act in 1973, and amended the Act in 1978, it imported the standard into listing and other provisions in the new statute, including the interagency consultation provisions. Unfortunately, there is limited legislative history that might further inform an understanding of the requirement.

The federal wildlife agencies have not issued regulations that interpret the requirement to use the best scientific and commercial data available, but in 1994 they did issue a policy statement on information standards under the Endangered Species Act (Department of the Interior and Department of Commerce 1994a). This guidance document states that it is the policy of the federal wildlife agencies, among other things, to
  • require biologists to evaluate all scientific and other information that will be used to prepare biological opinions and incidental take statements to ensure that such information is reliable, credible, and represents the best scientific and commercial data available;

  • gather and impartially evaluate biological, ecological, and other information that disputes official positions, decisions, and actions proposed or taken by the federal wildlife agencies during their implementation of the Act; and

  • require biologists to document their evaluation of information that supports or does not support a position being proposed as an official agency position on an interagency consultation, in reliance on the best available comprehensive, technical information regarding the status and habitat requirements of a species throughout its range; and, to the extent consistent with the use of the best scientific and commercial data available, use primary and original sources of information as the basis for recommendations in making a determination of whether a federal action is likely to jeopardize a listed species or destroy or adversely modify critical habitat.

There are a number of other federal laws, regulations, and policies that should inform an understanding of the requirement to use the best scientific and commercial data available. Two in particular are pertinent. The first is the Administrative Procedure Act (APA), which provides parties affected by final agency actions with a means to seek judicial review of those actions (APA 1946). In addition, it requires that a reviewing court set aside agency action that is “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” The second is the Information Quality Act (IQA), which was enacted in 2001 as a rider to an appropriations act (IQA 2001). The Office of Management and Budget issued guidance to federal agencies pursuant to the IQA to ensure the “quality, objectivity, utility, and integrity” of information disseminated by those agencies to the public (OMB 2002). The federal wildlife agencies, in turn, issued their own information quality guidelines. Among other things, the standards in the APA and IQA emphasize the importance of transparent decision-making to allow affected individuals and reviewing courts to determine that federal agencies have considered the full record before them and have made agency determinations based upon the data, analyses, and findings in that record.

In a number of articles on the subject, both attorneys and scientists have set forth perspectives on the scope of the requirement to use the best scientific and commercial data that are available (for example, Brennan and others 2002, Doremus 2004, Ruhl 2004, Sullivan and others 2006). Some scholars have suggested that the requirement to use the best scientific and commercial data available in the Act is meaningless because the APA’s scope of review subsumes the requirement (for example, Ruhl 2004). But a number of federal courts, including the U.S. Supreme Court, have concluded that the ESA’s best scientific data provisions establish procedural and substantive legal requirements (see Bennett v. Spear, 520 U.S. 154 (1997)). While some legal advocates and judges may conflate the APA and the “best scientific data” requirement of the ESA, they are nonetheless distinct.

A plausible interpretation of the requirement to use the best scientific and commercial data available is offered by Doremus (2004), who states that “the best available science mandate was generally intended to ensure objective, value-neutral decision making by specially trained experts.” This may be perceived as a bit naïve in light of the fact that implementation of the Endangered Species Act involves reconciling competing values; hence, science cannot provide exclusively objective answers to conservation planning questions. But, the fact that values enter into the decision-making process under the ESA at some juncture does not provide a basis for straying from application of the scientific method as the means to gather and assess information. Instead, it provides grounds for establishing a process that can serve to parse out technical or scientific issues from policy considerations. That seems to be the intent of the effects analysis process as it is invoked in the federal wildlife agencies’ consultation handbook (U.S. Fish and Wildlife Service and National Marine Fisheries Service 1998). The effects analysis framework discussed below builds upon a body of work developed over three decades, which critiques contemporary approaches to the analogous process of risk assessment, argues for more reliable use of the best scientific and commercial data available and, ultimately, offers steps toward realizing transparent and defensible decision-making in the context of interagency consultation.

Effects Analysis Framework

The process for completing an effects analysis is set forth in general terms in the Endangered Species Act, the regulations regarding interagency consultation, and the consultation handbook. But these materials do not provide a sufficiently detailed roadmap for obtaining data and analysis regarding listed species and their habitats, environmental stressors, and projected environmental changes. Nor does the handbook describe how to use that information to make quantified predictions of the ecological costs and benefits of the proposed action and attendant conservation measures (and, where appropriate, alternative actions).

More comprehensive guidance is available that describes the analogous process of risk assessment, a standard decision-making tool in the implementation of a number of federal laws enacted to protect human health and the environment (Carroll and others 1996). Risk assessment has received a great deal of critical attention (for example, Sunstein 2002, National Academy of Public Administration 1995) and has been the subject of three committee reports from the National Research Council (NRC 1983, 1994, 2009). The most recent report, Science and Decisions, describes the process of evaluating the effects of environmental disturbances or assessing risks from environmental stressors as a framework, with risk assessment providing the bridge between research, in which “scientific knowledge and diverse types of information on specific threats” are developed, and “risk management activities [that] are undertaken by regulatory agencies” to minimize those threats (NRC 2009, p. 30). Although the description of risk assessment uses a different nomenclature than is applied in the context of the ESA, the risk assessment and effects analysis processes share similar attributes and functions. As such, the risk assessment framework described by the NRC and EPA can inform the effects analysis process undertaken under the Endangered Species Act.

In introducing the concept of risk assessment, the NRC notes that research findings can only rarely, if ever, directly inform decision-making (NRC 2009). Instead, such findings must be interpreted. Risk assessment is a method for interpreting research findings and evaluating the relative merit of various options available to decision-makers. The Environmental Protection Agency has devised a framework for cumulative risk assessment in meeting its mandates for environmental protection (EPA 2003). In brief, the purpose of that framework is to assess combined risks posed by aggregate exposure to multiple agents or stressors (EPA 2003, NRC 2009). This may be contrasted with the single-chemical risk assessment approach that the NRC and EPA developed in the 1970s and early 1980s (EPA 2003).

In its first report on risk assessment, released in 1983, the NRC advocated for risk assessment as a framework that would allow decision-makers to assess “complex and uncertain, and often contradictory scientific information” derived from research (NRC 2009, p. 30). This framework may be applied in the context of effects analysis in support of species and habitat conservation through the step-wise approach advocated by the NRC (see Fig. 1).
Fig. 1 The effects analysis framework

A precursor to effects analysis is referred to in the risk assessment literature as the problem formulation phase (EPA 2003). In the context of interagency consultation, this is the stage at which the proposed action is defined and the action area is delineated. The scope of the proposed action and action area determines which listed species and designated critical habitat may be affected by the action and establishes sideboards on the extent of effects (for example, by delineating the portion of the historical and present range of the species that falls within the action area). At the problem formulation phase, EPA recommends development of a conceptual model that represents relationships and pathways (EPA 2003, pp. 24–27). Development of such models is commonplace in conservation planning, and can be appropriate in the course of interagency consultation. As a general rule, it is appropriate for the action agency—rather than the apposite federal wildlife agency—to undertake this phase of the process.

This problem formulation stage, which precedes the actual effects analysis, has both a policy component and a featured role for scientists. It is plainly appropriate to expressly incorporate policy considerations when defining the proposed action. Societal values and needs, and input from a range of stakeholders, will often be instrumental in defining the proposed action. Under the Act’s consultation process, the federal “action agency” defines the proposed action. Often, the action agency will work cooperatively with an applicant to define the proposed action, and other affected parties—including states, tribes, and interested stakeholders—may contribute to the process of defining the proposed action (particularly in the context of regional conservation planning). At the same time, the judgment of scientists—informed by scientific data, analyses, and study results—will be instrumental in ascertaining the scope of the action area. Likewise, scientists should play a dominant, if not exclusive, role in the development of a conceptual model that describes the ecology of the target species, essential environmental stressors that affect the habitat of the species, and the likely effects of the proposed action on both. The combination of input from scientists and policy considerations elicited from public officials and affected stakeholders is analogous to the combination of expert input and policy considerations in the problem formulation phase of risk assessment (EPA 2003, p. 10).

The first step in the effects analysis process itself is the collection of reliable scientific information, which includes relevant data, pertinent analyses, and findings that accompany those analyses (see column two of Fig. 1). Logistical limitations, particularly the actual scarcity of individuals of many listed species that are the subjects of effects analyses, often inhibit the ability of scientists to engage in hypothesis testing using a rigorous experimental design. Even in circumstances where data sets are relatively rich, there are significant information gaps and limitations to inference that constrain the reliability of available data in application to management decision-making. But the availability and quality of standing information cannot be ascertained until this first step is completed. The scientific data, analyses, and findings that will be used to inform the effects analysis should be vetted with scientists to identify and select the information that is pertinent, reliable, and sufficiently robust to populate the models that will be used to analyze project-related costs and benefits to species and their habitats, and to the public.

The second step is to catalog and select among models that will be used to integrate existing data and analyses in order to describe the baseline conditions and effects of the proposed action on relevant species and their respective habitats. At this juncture, it is imperative to assess critically the quality and applicability of existing data, analyses, and associated findings (both by assessing discrete data sets, analyses, and findings themselves and by assessing synthetic data and analyses pertaining, for example, to the effects of predation on the abundance of a targeted species), and to acknowledge uncertainties and present confidence intervals around findings that are made. The publication of data and analyses on a targeted species and its habitat in scientific journals does not mean that such information is necessarily applicable in conservation planning—and a lack of publication of such data and analyses does not mean that that information is not applicable. Critical assessment of the appropriateness of the underlying data sets and the methods or tools used to analyze those data sets must be carried out through an independent and rigorous process. During that process, decision-makers should consider both the reliability of the information and its pertinence to management planning, and acknowledge key uncertainties and variability in the ecosystem.
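
The call to acknowledge uncertainty and present confidence intervals can be made concrete with a small, purely illustrative sketch. The annual survey counts, the helper function, and the bootstrap settings below are assumptions invented for the example; they are not drawn from any agency analysis or any particular species.

```python
import numpy as np

# Hypothetical annual survey counts for a listed species (illustrative only).
counts = np.array([220, 205, 190, 198, 172, 160, 155, 149])

# Year-over-year growth rates (lambda_t = N_{t+1} / N_t).
lambdas = counts[1:] / counts[:-1]

def geometric_mean(x):
    """Geometric mean growth rate, the usual summary for multiplicative growth."""
    return float(np.exp(np.mean(np.log(x))))

point_estimate = geometric_mean(lambdas)

# Nonparametric bootstrap: resample the observed growth rates with replacement
# to express how uncertain the point estimate is given so few observations.
rng = np.random.default_rng(seed=1)
boot = np.array([
    geometric_mean(rng.choice(lambdas, size=lambdas.size, replace=True))
    for _ in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"estimated mean growth rate: {point_estimate:.3f}")
print(f"95% bootstrap interval: ({lo:.3f}, {hi:.3f})")
```

Reporting the interval alongside the point estimate, rather than the point estimate alone, is one simple way an effects analysis can "daylight" how much the available data actually constrain a finding.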

The third step in effects analysis links scientific data and model results to resource management options in an assessment of the ecological costs and benefits of the proposed action and, where appropriate, alternative planning opportunities. Transparency is critical at the point when available scientific information is synthesized and linked to determinations. This is where the best available science is actually “used” to substantiate defensibly the determinations made by the federal wildlife agencies in identifying the causes of species declines, the role of the proposed action in those declines, and the sufficiency of measures coupled with the proposed action to offset or counter those declines.

Following completion of the effects analysis, the federal wildlife agencies must make affirmative decisions regarding effects on the species and its habitat, together with regulatory options that are appropriate in light of those decisions. At this interpretive, post-effects analysis stage of the process, policy considerations appropriately and explicitly are incorporated into decision-making; in contrast, every effort should be made to eliminate such considerations during the three-step effects analysis stage of the process. This stage of the process is analogous to risk characterization as described in the risk assessment literature (EPA 2003; NRC 2009). Transparency is critical in step three of the effects analysis to allow the action agency, applicant, and other interested parties to understand how agency biologists have synthesized information and linked that information to determinations. Transparency when arriving at agency decisions is also important to allow those same parties to comprehend the respective roles of the effects analysis and policy considerations in agency decision-making.

If effects analysis as described herein has a ring of familiarity to the conservation scientist, it ought to. What Beissinger (2002) describes as “the cornerstone of conservation science” is for all intents and purposes effects analysis—it is population viability analysis (PVA), fairly described as “essentially a risk-assessment methodology applied to the issue of species extinction” (Shaffer and others 2002). Clearly, the outright extinction of a targeted species is not the only operational outcome toward which demographic data can be applied to assess the effects of environmental stressors. More generally, PVA is the analytical response to the need for policy guidance under the ESA. It has been more than two decades since conservation biologists fully recognized that PVA is “a process of risk analysis… where hazards are identified, risks are considered, and a model is developed” (Ralls and others 2002). So it is vexing to observe that the effects analysis-population viability analysis nexus remains rarely acknowledged by the wildlife agencies. PVAs use time-series population data, set in the context of a life-cycle model where appropriate and available; they establish probabilities for species survival outcomes under varying conditions—more or less habitat, arranged in different configurations, with diverging resource conditions and qualities. As such, PVAs generate exactly the information that policy makers need to assist them in differentiating between alternative determinations, regulatory actions, and management responses. Shaffer and others (2002) assert that the lack of detailed population data for most taxa of conservation concern is the major limitation to the meaningful application of PVA in solving conservation problems. A greater and overarching limitation is the failure by federal wildlife agencies to recognize formally that PVA, in one form or another, is called for as the appropriate means to analyze the effects of an action on species of concern in virtually all formal consultations under the ESA.
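
As a purely illustrative sketch of the kind of output a PVA can hand to decision-makers, the fragment below runs a minimal count-based viability simulation under two hypothetical scenarios and reports quasi-extinction probabilities. The demographic parameters, the lognormal growth model, the quasi-extinction threshold, and the function name are assumptions for the example, not values or methods from any actual consultation, and a real analysis would rest on species-specific life-cycle data.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def quasi_extinction_probability(n0, mean_growth, sd_growth, threshold=50,
                                 years=50, replicates=10_000):
    """Fraction of simulated trajectories that fall below a quasi-extinction
    threshold within the projection horizon (count-based PVA, lognormal growth)."""
    extinct = 0
    for _ in range(replicates):
        n = n0
        for _ in range(years):
            # Annual growth rate drawn from a lognormal distribution.
            n *= rng.lognormal(mean=np.log(mean_growth), sigma=sd_growth)
            if n < threshold:
                extinct += 1
                break
    return extinct / replicates

# Hypothetical scenarios: baseline habitat versus habitat affected by the action.
baseline = quasi_extinction_probability(n0=400, mean_growth=1.00, sd_growth=0.15)
with_action = quasi_extinction_probability(n0=400, mean_growth=0.97, sd_growth=0.18)

print(f"P(quasi-extinction | baseline):        {baseline:.2f}")
print(f"P(quasi-extinction | proposed action): {with_action:.2f}")
```

Comparing such probabilities across baseline and action scenarios is precisely the sort of quantified contrast that can link an effects analysis to a jeopardy determination.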

Recurring Errors in Effects Analyses

While the federal wildlife agencies often gather the best scientific and commercial data available, the far more difficult task they face is to employ available data and analyses to complete rigorous, defensible effects analyses using a structured approach, such as that described above. The translation of data and analytical results from the scientific literature into agency decisions can be a problematic exercise. Scientists who present findings in the scientific literature cannot possibly anticipate all future applications of that knowledge, and, as such, rarely provide clear guidance to those who might apply their findings to management planning and future conservation actions. No rule set has been adopted to guide agency staff as they translate data and analytical results into agency decisions. Thus, as agency staffers draw conclusions from original studies, interpreting the scientific findings of others in the process, there is a risk that data and the analyses derived from them may be misinterpreted and misapplied. This may lead to misinformed policy decisions and management plans. Below we identify a number of errors that the federal wildlife agencies may commit, or obstacles that they may fail to overcome, when conducting effects analyses.

The first four types of errors we describe here involve misapplication of reliable scientific information in the context of effects analyses. The first of these is incomplete presentation of available information, which can lead to conclusions that would not be drawn if the complete information base had been considered. It is impossible to know whether missing information is the result of oversight or of purposeful omission from the presentation. Although this circumstance may seem readily avoidable, the dispersed nature of available information can make it difficult for authors of an assessment document to ensure comprehensive presentation of information. Whether the incomplete presentation of available data results from the fact that the agency does not have the data in its possession, has the data but fails to present it, or has the data but fails to analyze or otherwise use it, this can be a serious problem. Incomplete information does not only under-inform; it can lead to unsupportable conclusions, and can even result in a biased outcome that can misguide management responses.

One example of incomplete presentation of available information involves a National Marine Fisheries Service biological opinion (1998) for the North Pacific Fishery Management Council’s Fishery Management Plan. The Fishery Management Plan regulates the North Pacific groundfish commercial fisheries, and the biological opinion analyzed the effects of the Plan on the Steller sea lion and other listed marine mammals. Environmental groups subsequently challenged the biological opinion on a number of grounds. In Greenpeace v. National Marine Fisheries Service, 80 F. Supp. 2d 1137 (W.D. Wash. 2000), upon review of the biological opinion, the court held that it contained “no meaningful analysis” of the effects of the Fishery Management Plan on critical habitat. The court went on to state that the biological opinion did not include such basic information as the estimated level of fishing in designated critical habitat. The court concluded that the necessary data were available but not analyzed, and found that the biological opinion was “heavy” on general background information, but failed to utilize available information to assess the effects of the Fishery Management Plan on the listed species.

The second type of error is the misinterpretation of findings from published research, and the third type is misrepresentation of available scientific findings. Oftentimes, upon post hoc review of an agency decision, it may be impossible to determine whether an error falls within the former or the latter of these two categories. One example of an error that could be either appears in the National Marine Fisheries Service biological opinion (2009) for continued operations of the Central Valley and State Water projects in California, which provide water supplies for approximately two-thirds of the State’s population. In the biological opinion, the Service relied on a study by Vogel (2004) to support the conclusion that a reduction in water export pumping reduces the number of salmon that leave the mainstem San Joaquin River and enter the southern portion of the Sacramento-San Joaquin Delta. In The Consolidated Salmonid Cases, 2010 U.S. Dist. LEXIS 54937 (May 18, 2010), the court held that it was not “rational” or “scientifically justified” for the Service to rely on Vogel (2004) to support that conclusion, because Vogel (2004) concluded that, based on the data analyzed and results obtained, it was not possible to explain why some fish move from the mainstem of the San Joaquin River to the south Delta. This sort of factual error in presentation is problematic, because it can only be discerned by reviewing the agency decision in combination with the record of data and analytical results that support that decision.

A fourth type of error in effects analysis is inappropriate emphasis. This is a recurring problem in effects analyses. It arose, for example, in a Fish and Wildlife Service biological opinion (2001) for operation of the Klamath Project on the California-Oregon border. The Project affects fish in the Klamath River basin, including the federally listed shortnose and Lost River suckers. When the Bureau of Reclamation consulted with the Fish and Wildlife Service regarding the effects of Project operations on the listed fish species, the Service determined that Project operations would jeopardize the species and proposed a reasonable and prudent alternative that included minimum water levels for Upper Klamath Lake. In a subsequent review of the biological opinion, the NRC Committee on Endangered and Threatened Fishes in the Klamath River Basin (2002) explained that the Fish and Wildlife Service imposed minimum water levels to address concerns regarding water quality and shoreline spawning habitat. But the Committee concluded that available data did not support the contention that maintaining higher lake levels would have the hoped-for water quality benefits, or would reduce dewatering of spawning areas and thereby improve survival of the suckers in their early life stages. Scientific information available at the time the Service completed its biological opinion did not support the agency’s emphasis on water levels as a means to improve water quality and increase spawning habitat and thus advance the welfare of the species (NRC 2002).

The remaining three types of errors are frequently easier to identify than the interpretive errors described above. The fifth common error is the mistaken presumption that conclusions presented as part of an empirical study are scientifically valid if the study appears in a peer-reviewed scientific journal. One recent circumstance in which this error arose was in the Fish and Wildlife Service’s response to an IQA appeal associated with its biological opinion for continued operations of the Central Valley Project and State Water Project (U.S. Fish and Wildlife Service 2008). In its response, the Service stated that it “accepts the peer review processes of scientific journals and thus, the scientific validity of the paper’s conclusions.” (U.S. Fish and Wildlife Service 2009, p. 10.) But the appearance of technical information in a peer-reviewed journal does not render it scientific or valid; its appearance in print only indicates that the information has met that journal’s criteria for publication, including having satisfied the journal’s peer review process. This is not to say that the fact that a paper is published in a scientific journal is irrelevant; instead, it is one of a number of factors that an agency may consider when evaluating data, analytical results, and findings presented in that paper. Federal wildlife agency guidelines require those agencies to ensure that all information used to prepare biological opinions is reliable and credible and constitutes the best scientific and commercial data available; and, that includes information from peer-reviewed journals (U.S. Fish and Wildlife Service and National Marine Fisheries Service 1998, Department of the Interior and Department of Commerce 1994a).

A sixth type of error results from the mistaken view that if one increases the quantity of data, analyses, or references presented in an effects analysis, the document will become increasingly more robust and defensible. It is too frequently the case that effects analyses consist principally of vast amounts of aggregated data and analyses, presented without sufficient interpretation, and followed by lengthy reference lists that seem designed to impress the lay audience. Such analyses will not properly inform agency determinations and likely will foreclose evaluation of such determinations by interested parties. That said, legal advocates and agency spokespersons commonly cite to the length of an agency decision or the number of references that accompany that decision as evidence of the quality of that decision, and courts, in some circumstances, have failed to discern that such contentions are fallacious. For example, in River Runners for Wilderness v. Martin, 574 F.3d 723, 747 (9th Cir. 2009), the court cited the fact that an Environmental Impact Statement had more than 500 references as a basis to uphold that document. The federal wildlife agencies should avoid the temptation to try to convince the public or a reviewing court that the analysis is comprehensive by loading an effects analysis with data, analyses, and scientific references, a practice that can actually diminish the focus and quality of their work.

A seventh type of error is involvement of research scientists in the process of formulating effects analyses, making affirmative regulatory determinations, and defending those determinations in subsequent litigation. Involving research scientists in these activities will necessarily require them to advocate a particular outcome or position and, as a result, place their credibility as impartial scientists at risk (Mills and Clark 2001). It also undermines the effects analysis process, whereby those experts involved in conducting the effects analysis are tasked with critical assessment and integration of standing data and analyses, as well as related findings. This was one reason that then-Secretary of the Interior Bruce Babbitt established the short-lived National Biological Survey (Wagner 1999).

Role of Independent Scientific Review

Independent scientific review is one tool that the federal wildlife agencies may use to identify and resolve the most prevalent recurring errors in effects analyses described above. More generally, independent scientific review may result in more robust and defensible effects analyses than the federal wildlife agencies would produce absent such review (Meffe and others 1998). There is widespread support for independent scientific review as a means to improve federal agency decisions (for example, Sunstein 2002), and the Office of Management and Budget (OMB) issued “peer review” guidance in December 2004 requiring independent review of important scientific information by qualified specialists prior to dissemination of that information (OMB 2005). OMB has stated that such review “involves the review of a draft product for quality by specialists in the field who were not involved in producing the draft” (OMB 2005, p. 2665).

The federal wildlife agencies have developed a cooperative policy for peer review that applies to listing rules and recovery plans (Department of the Interior and Department of Commerce 1994b). But the policy is brief and general, and the Fish and Wildlife Service has been criticized for implementing it in a manner that calls into question the independence and objectivity of the review process (Government Accountability Office 2003, Ruhl 2005). And, in any case, it does not apply to interagency consultation. As a result, the federal wildlife agencies have used independent review in an ad hoc manner as a tool to assess biological opinions and their effects analyses.

Although the federal wildlife agencies have not previously provided formal policy guidance regarding independent review of biological opinions, there are certain prerequisites for independent scientific review that should be considered obligatory in order to assure that the product of the review process is rigorous and widely perceived as both objective and legitimate. Because the prevailing practice is for the federal wildlife agency that is itself the subject of the review to specify the scope of the review (also known as the task statement or charge), the agency must take due care to avoid defining the scope of review in a manner that impedes the ability of the reviewers to evaluate the biological opinion in its entirety. The scope of review can be limited, but generally it should not be limited by providing only a portion of the biological opinion to the reviewers or by articulating a scope of review that encourages the reviewers to focus on specific aspects of the biological opinion at the expense of assessing other aspects of the document. As a general matter, it will be appropriate for a federal wildlife agency seeking review to simply request review of the biological opinion in toto (together with the record materials that support that document), rather than limiting review or steering the reviewers toward certain aspects of the document. If the agency provides only a portion of the biological opinion and its effects analysis, or limits review of the document via its task statement, it will subject itself to claims of bias. And while some may contend that the federal wildlife agencies are composed of technocrats who neutrally implement the law, both the relevant political science literature (for example, Lowi 1979) and history (for example, Department of the Interior Office of the Inspector General, undated) reveal that such a contention cannot withstand scrutiny.

The reviewers of biological opinions must be given adequate time and resources to fulfill their task. Too often, reviewers are given insufficient time to conduct a rigorous, independent review. One significant problem that arises when a group of reviewers has insufficient time is that it will tend to rely on a subset of reviewers within the group. This can result in a review that lacks rigor, objectivity, and legitimacy. One way to avoid the problem of insufficient time to conduct a review is to incorporate independent review into the consultation schedule from the outset. This can be accomplished through early coordination by the wildlife agency with the action agency (and applicant where applicable). Just as important, the wildlife agency must provide time for serious consideration of the review and responses to the input provided by reviewers. Failure to provide adequate time for review and for responses to reviewer input may also lead to claims of bias.

Finally, the federal wildlife agencies should establish a protocol for selecting reviewers. That protocol must include a demonstrated firewall between persons involved in the selection process and persons involved in preparation of the biological opinion subject to review, as well as explicit selection criteria. The past practice of the Fish and Wildlife Service of allowing the scientists responsible for listing and critical habitat decisions to select the persons to review those decisions has elicited criticism for the obvious reason that it “invites charges of manipulation” (Ruhl 2005, p. 427). In addition to specifying who may and may not be involved in selection of reviewers, the protocol should identify selection criteria. In its Final Information Quality Bulletin for Peer Review, OMB establishes four criteria by which to evaluate and select reviewers: expertise, balance, independence, and conflict of interest (OMB 2005). The first of these criteria is essential, because if the reviewers lack requisite expertise then the review is a futile undertaking. The remaining criteria are informed in substantial part by the policy of the National Academies on committee composition and balance and conflicts of interest (National Academies 2003). The criterion of balance places emphasis on the need to impanel a committee that represents a diversity of scientific perspectives. Independence is critical, particularly if the biological opinion subject to review is expected to have significant natural resource management consequences. External experts are less likely to be subjected to influence, whether intentional or not (NRC 1998). And, avoiding conflicts of interest is necessary, because a conflict can impair the objectivity of reviewers or call into question the legitimacy of the review by creating a perception of bias.

Provided these prerequisites for independent scientific review are fulfilled, input from experts can provide a valuable tool to improve decision-making. That said, independent review is no substitute for preparation of rigorous and defensible biological opinions by agency staff. Agency staff must have the resources and authority to integrate science transparently into the obligatory agency assessment process of effects analysis and then into the agency’s ultimate decision document, the biological opinion. If agency staff do not have adequate expertise, lack sufficient resources, or are otherwise not up to the task of conducting effects analyses (for example, due to bias), then no amount of expert independent scientific review will remedy such structural issues.

Essential Elements for Successful Effects Analysis

To pass through the process of effects analysis, and in doing so to bring the best available science to bear in consultation under the Endangered Species Act, requires a transparent exercise that includes the following elements. The first two of these fall within the problem formulation phase described in the effects analysis framework and therefore precede the preparation of an effects analysis; but they are nonetheless critical to conservation planning, including in the context of interagency consultation. The next five elements are introduced through the three sequential steps in the effects analysis framework (see Fig. 1). The outcome, agency action, must be linked through the effects analysis to the data and analytical results that are selected to inform the process.
  1. Problem formulation and scoping—A planning group—typically consisting of the federal action agency, the applicant (if any), and, in appropriate circumstances, other parties such as states, tribes, and interested stakeholders—needs to clearly articulate the proposed action that is intended to be addressed using an effects analysis. That group must describe how existing conditions threaten the targeted species and options for altering those conditions to benefit the species. It must do the same with respect to the proposed action. It should identify tractable, alternative management options that would accommodate the proposed action and might contribute to halting population declines or reversing them. A description of the outcomes of these tasks should be presented in a biological assessment and should then inform the effects analysis.

  2. Conceptual model of the system—Planners working with technical experts must agree on a conceptual model that identifies and describes the targeted resources, how covered species are affected by environmental stressors, ecological linkages among them, and how the targeted species are likely to respond to potential restoration or mitigation actions. Species-specific conceptual models are necessary to guide analysis of effects of actions on each target species. Opportunities for developing combined conservation responses for multiple species may be identified from the models; and the potential to use available information for one species to guide conservation actions targeting one or more others—a surrogate approach—can be validated in part by comparing conceptual models.

  3. Decision-making framework—Planners must identify a framework for making regulatory decisions, in this case the decision as to whether the proposed action is likely to jeopardize the continued existence of the target species or result in destruction or adverse modification of designated critical habitat of that species. Implementation of this framework necessarily involves characterizing the relationship between the target species and the effects of the proposed action, as well as the effects of all other actions (and stressors acting) on that species. It also involves a description of how the findings regarding the effects of the action and the effects in the baseline will be linked to regulatory decisions. The decision-making framework may be different for each target species, because each species differs in its response to existing conditions, individual stressors, and potential management responses. The decision-making framework should describe the modeling tools that will be used to address each target species.

  4. Identification of reliable scientific information—Guided by input from technical experts, the data derived from research and monitoring that will be used in the effects analysis, including inferences drawn from other species and other locations, must be identified. The pertinence of data or other information to the effects analysis must be explained. The reasons why any candidate information that might seem pertinent was rejected for use in the effects analysis should also be explained. All candidate quantitative information that is potentially useful to planning should be considered before planners defer to use of qualitative information, observations, and best-judgment defaults.

  5. Description of assumptions—Simply acknowledging that pertinent information needed to carry out some aspect of the effects analysis is lacking is not sufficient. An explicit description of the implications of key uncertainties that confront the analysis is necessary, along with the assumptions (or defaults) that will be used in lieu of essential information. Key shortcomings associated with data variability must be made explicit. For example, the implications of the use of target species data derived from monitoring surveys that were designed for other taxa must be addressed. Uncertainty and variability analysis should reflect the need to consider both in comparative evaluation of management action options. It is not appropriate in any circumstance to rely on an assumption when pertinent information is available and would foreclose the need to make the assumption.

  6. Identification of analytical tools and approaches—The effects analysis modeling approach or approaches must be clearly described, and the reasons the selected models were chosen, and alternatives were not, should be made available. The rationale and justification for use of off-the-shelf modeling tools must be stated clearly. The action agency and applicant (as well as other interested parties, where appropriate) should be given the opportunity to contribute input to model development, model parameterization, and interpretation of model runs and outcomes, and must be involved in a transparent process that connects effects analysis outputs with candidate management actions.

  7. Elements and attributes of the analysis—Agency personnel working with technical experts must describe the relationship between target species and the proposed action in terms of the nature and magnitude of risk associated with existing and potential future conditions, and the expected benefits to the targeted species of alternative management options. Agency personnel must articulate fully how the decision-making framework will serve to link available information to policy outcomes via an assessment that accounts for the proposed action and offers alternative conservation actions and quantifies the attendant costs and benefits of those actions.

  8. Agency determinations—The basis for agency determinations that are made using the products of the effects analysis and selected conservation actions must be transparent in order to allow stakeholders to trace those determinations and actions back through the effects analysis to the apposite scientific information.

Each of the steps to a determination by the wildlife agencies is essential and confers technical adequacy and legal defensibility on the decision outcome.

Conclusions

If the Fish and Wildlife Service and National Marine Fisheries Service are to fulfill President Obama’s stated intent and Congress’ vision of science in implementing the federal Endangered Species Act, those agencies need to revisit the process by which they access, assimilate, and then exercise technical information in the development of policy determinations. A new level of attention must be paid to the operative verb in Congress’ direction to the wildlife agencies to “use the best available scientific and commercial data” to inform their conservation directives. The “trans-scientific” processes necessary to make the linkage between science inputs and policy outcomes can take advantage of well-developed tools, including population viability analysis, but the federal wildlife agencies must adhere to a defensible step-down approach that acquires reliable knowledge and employs it in modeling exercises that assess the ecological costs and benefits of the proposed action and attendant conservation measures and, where appropriate, alternative actions.

Acknowledgment

Support for this research was provided by the Center for California Water Resources Policy and Management.

Copyright information

© Springer Science+Business Media, LLC 2010