1 Introduction

Since its inception in 1988, the Intergovernmental Panel on Climate Change (IPCC) has worked with the growing recognition that uncertainty is pervasive in our understanding of the climate system: what drives climate change, what will determine its future course, and what influence it will have on important social and ecological aspects of our world. It is not news that the IPCC has struggled, with varying degrees of success, in its efforts to describe these uncertainties and to judge the confidence with which it can offer its major conclusions. Richard Moss and Stephen Schneider (Moss and Schneider 2000) were the lead authors of IPCC’s first attempt to provide some guidance for authors in this regard, during the preparation of the Third Assessment Report (the TAR). A second guidance document was created by an author team headed by Martin Manning and Rob Swart (Manning et al. 2004) after an expert meeting to support the Fourth Assessment Report (AR4), and yet another version was produced after another expert meeting by a cross-working group team (Mastrandrea et al. 2010) as chapter authors assembled to begin their work on the Fifth Assessment Report (AR5). This most recent attempt, informed by the history of previous assessments, is the point of departure for the papers in this special issue of Climatic Change.

AR5 authors must do their work in a world marked by several recent, major changes in the climate change landscape that present both new challenges and new opportunities. First, the InterAcademy Council (IAC 2010) review of the IPCC singled out the treatment of uncertainty and, among other things, called for improvement in the way IPCC describes and communicates uncertainty, with particular emphasis on increased consistency across working groups so that conclusions become more comparable and more credible. In so doing, the review made only vague reference to a need to uncover and answer some fundamental “client” questions: who reads the reports, and with whom are authors intended to communicate?

A second major change arises from a fundamental conclusion in the Synthesis Report of the AR4, which focused attention on risk management. In language that was unanimously approved in the final plenary of the Fourth Assessment process, governments (the primary clients of IPCC reports) asserted that “Responding to climate change involves an iterative risk management process that includes both adaptation and mitigation and takes into account climate change damages, co-benefits, sustainability, equity, and attitudes toward risk” (IPCC 2007b, p. 22). Indeed, Mastrandrea et al. (2010) note that risk is, at its core, likelihood times consequence; they also concur that member countries of the IPCC have thereby put themselves on record as wanting future assessments to cover not only highly likely outcomes (i.e., high-confidence conclusions), but also less likely outcomes, presumably those with demonstrably high potential consequences. This will be an extra challenge for the AR5 and for subsequent assessments, but it is also an enormous opportunity. Assessments cannot be alarmist, but they must henceforth push scientists beyond their comfort zones in framing conclusions that will adequately inform decision-makers about the full range of potential risks, particularly those decision-makers who worry about how to adapt and/or how to mainstream climate risk into their other decisions. This means characterizing and reporting the extreme tails of distributions, even in the many cases where the literature describes such outcomes only as “not implausible”; it must be recognized, however, that this puts an extra burden on IPCC authors. They must attempt to view the world through the prism of decision makers, taking into consideration at least the broad outlines of the full suite of approaches that policy-makers, or those who advise them, may adopt in evaluating risk.
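To make the arithmetic behind this framing concrete, a minimal sketch (our own illustration, not a formula prescribed by the IPCC guidance) writes aggregate risk as expected consequence over possible outcomes:

```latex
% Risk as likelihood times consequence, aggregated over outcomes i
% p_i: probability of outcome i;  c_i: consequence (e.g., damages) of outcome i
R \;=\; \sum_{i} p_{i}\, c_{i}
```

Because the consequence term can grow much faster than the probability term shrinks in the tail, a low-likelihood outcome can still dominate the sum; this is exactly why confining an assessment to high-confidence conclusions can understate the risks that decision-makers most need to understand.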

Finally, it has become clear to most observers that the sources of uncertainty are not confined to our evolving understanding of how the climate system works. Within that sphere, many conclusions are now widely accepted even though many of the details of specific sources of risk are still sketchy. Uncertainty about how our socio-political-economic systems will evolve may be even greater, because such changes are in many ways nearly impossible to envision (see Hawkins and Sutton 2009, 2010). That is to say, decisions over the next two or three decades affecting this larger context may influence the climate of 2100 and beyond in ways that are at least as significant as the implications of even the major current scientific uncertainties, such as climate sensitivity and long-term ice-sheet stability. There are, of course, multiple relationships between these two categories of uncertainty that cannot be ignored. Should we choose to do little to abate emissions over the next few decades, the upper tails of our current emissions scenarios will become more likely, and that in turn will make the upper tails of the vulnerability distributions more likely. Coming to grips with this realization and its nuances is yet another challenge for the AR5, and it will surely require increasing attention not only from Working Groups II and III (WGII and WGIII) but also from Working Group I (WGI). The basic climate science assessed by WGI must be transmitted to WGII and WGIII in a form appropriate for the risk management framework that is needed to grapple fully with this dynamic socioeconomic context.
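One compact way to see this interaction (our notation, offered only as an illustrative sketch) is to decompose the probability of a severe impact across emissions scenarios:

```latex
% Law of total probability across emissions scenarios s:
% weak near-term abatement raises P(s) for high-emissions scenarios,
% which raises the probability of exceeding any impact threshold x.
P(\text{impact} > x) \;=\; \sum_{s} P(\text{impact} > x \mid s)\, P(s)
```

Holding the conditional terms fixed, shifting weight toward high-emissions scenarios necessarily fattens the upper tail of the impact distribution, which is the point made above.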

2 Description of the special issue

This Special Issue provides an opportunity for a wide-ranging discussion of IPCC’s past and possible future approaches to the evaluation, characterization, and communication of uncertainty. Authors who were invited to contribute to this collection of papers approached their assignments from a variety of perspectives. Some, like Richard Moss, Michael Mastrandrea, and Katharine Mach, were intimately involved in producing the guidance documents; their contributions describe the objectives of these documents and offer some introspective considerations of past experience and what we might expect in AR5. Others, like Kristie Ebi, Gian-Kasper Plattner, Ottmar Edenhofer, Thomas F. Stocker, Christopher B. Field, and Patrick R. Matschoss, are playing key roles as working group co-chairs or members of associated technical support units in the AR5 process; they, as well as one of us (G. Yohe), were involved in developing the AR5 guidance document, and their contributions describe their aspirations and concerns as the AR5 authors set to work. Still others, like Granger Morgan and Baruch Fischhoff, articulate weaknesses and strengths in IPCC guidance efforts from an extraordinarily experienced and informed vantage point: that of research into uncertainty judgment and communication. Meanwhile, Marcus King and Sherri Goodman use their experience with the defense and national security communities to describe an approach to communicating and coping with profound and unique types of risk and uncertainty. James Risbey, Roger Jones, Roger Pielke, Jr., Rachael Jonassen, and Judith Curry have already contributed to the literature, discussions, and evaluations of IPCC practices and procedures with regard to judging and communicating uncertainty. Pielke and Jonassen offer an empirical evaluation of uncertainty language in the AR4, while Risbey, Jones, and Curry suggest “ignorance” as another category of confidence, not one that brings the process to a complete standstill, but one that best describes the state of affairs in some circumstances. Humility, they would all argue, would be a virtue. Brenda Ekwurzel and Peter Frumhoff have worked from IPCC documents to try to communicate with broader audiences in language that is more accessible than the dense prose that IPCC prefers; their paper describes some of the challenges they have faced and the opportunities they have enjoyed. John Sterman and Robert Socolow represent users of that information from within the broader research community; they express some frustration in interpreting summary statements from previous assessments and offer suggestions for reducing that burden. Finally, Richard Tol, who has been an IPCC participant for many years and has thought seriously about the structure and efficiency of the entire enterprise, offers an analogy between a standard natural monopoly in economic theory and the IPCC in practice vis-à-vis providing climate information to the international community. It allows him to offer some stark but constructive hypotheses and some novel and intriguing remedies.

Each contribution was reviewed externally and within the editorial structure of Climatic Change for accuracy in its portrayal of historical context and underlying science, but every author has been allowed to express his or her own opinion about how well the IPCC in general and the uncertainty guidance documents in particular have served IPCC’s various clients—readers of the full assessments, readers of the technical summaries, and readers who have confined their attention to the overarching summaries for policymakers and synthesis reports. It is our hope that this collection of opinions, informed by a range of perspectives and experiences within and from outside the IPCC, will be of some help as author teams begin their work on the AR5 in earnest.

3 The editors’ views

Speaking as editors of this journal and longtime IPCC authors, we open this issue with our own view of IPCC’s treatment of uncertainty. We have been deeply involved in IPCC over several assessments and Special Reports, including the current Fifth Assessment. Our willingness to volunteer repeatedly should be taken as evidence that we offer the following criticisms and suggestions because we hold the IPCC and our fellow authors in high regard. Indeed, we are firm in our belief that the IPCC is a unique and valuable institution that can continue to break new ground in providing the world with information critical to responding to climate change. We organized this special issue to collect some provocative thoughts, and we did so this year in the hope that those thoughts will help IPCC authors complete their appointed tasks. It is, to be sure, too late to influence the official guidance document, but we hope that many of the ideas articulated by the contributors here can nonetheless aid the process and make the AR5 a better document, even if this assistance manifests itself only in more open and engaged discussion among author teams. We also hope that these papers will contribute constructively to the next guidance update for the AR6, as well as to intervening assessments conducted by, for example, the author team being assembled for the next National Climate Assessment in the United States.

That said, we begin our own personal commentary by reporting our belief that IPCC is confronted with four unresolved problems with regard to uncertainty as it moves forward with the AR5:

  1. Who is its audience?

  2. What information does it wish to impart to them?

  3. What language should it use to do so?

  4. And at what level of detail should it report its results?

Answering the first question would make it much easier to address the other three. But to some extent, the very structure of IPCC reports obstructs a ready solution. The separation of Summaries for Policymakers (SPM’s) from chapters, with a Technical Summary (TS) providing a bridge, already recognizes that policy-makers, other decision-makers, and many other users of the reports frankly do not want to read all of the details; even the most expert of us is selective in what he or she reads. That said, none of it can be dispensed with, because the full texts of these reports, as well as the fundamental conclusions highlighted in the SPM’s, are a critical factor in setting the global research agenda for the years between assessments. Furthermore, no part can be structurally changed without changing the rest. The SPM’s are bound by the same set of principles as the more detailed chapters.

Paragraph 2 of the amended Procedures Guiding IPCC Work (IPCC 2007a) is pertinent to the other three problems. It states that

“The role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation.” (emphasis added; http://ipcc.ch/pdf/ipcc-principles/ipcc-principles.pdf)

These Procedures challenge authors to be “objective” (reporting conclusions that, according to Webster’s 7th New Collegiate Dictionary, pertain to “a sensible world” and are “observable and verifiable especially by scientific methods”), presumably in contrast to being “subjective” (again according to Webster, “relating to knowledge as conditioned by personal mental characteristics”). Authors are also frequently reminded by their working group co-chairs (to a degree which varies across Working Groups) that they are constrained by a list of unwritten rules. Here we paraphrase four that, from our experience, are among the most troublesome:

  1. IPCC shall perform no original research;

  2. IPCC shall focus on consensus and the rationale behind its construction (see Paragraph 10 of the amended Procedures Guiding IPCC Work, discussed below);

  3. IPCC shall avoid subjective judgments wherever possible (see above); and

  4. IPCC shall depend solely on informal interactions among groups of scientists (authors) to develop its findings and avoid using formalized approaches to making judgments about what is known and about uncertainty.

These restrictions are, to some degree, mutually exclusive, so trying to apply them simultaneously leads to the very problems that have engendered criticism of IPCC assessments. For example, it is logically impossible to avoid subjectivity when developing conclusions from the deliberations of small groups of experts. The biases inherent in such activities, and attempts to overcome them, are so well documented that, by ignoring this literature (Morgan and Henrion 1990; see also Morgan, this issue), IPCC itself offends the principles on which its work supposedly stands. One rejoinder to this critique is that the multi-layered review process guards against such bias and mitigates the degree to which it is reflected in the assessments. Another is that different chapters have different author teams, and so their biases differ and their influence on the summarizing documentation of the SPM’s is ameliorated. Governmental collaboration in developing the SPM provides a final backstop, to be sure; but we doubt that either governmental or peer review suffices, because both are largely limited to repairing material already heavily filtered by the authors. Rather, because each of the strictures fundamentally affects the treatment of uncertainty, we agree with many of the authors who have contributed to this special issue, and with others, when they argue that the basic aspects of the IPCC’s approach to evaluating and communicating uncertainty need to be reconsidered.

4 The unavoidable role of subjective judgment

The restriction against original research is inherently impossible to uphold while at the same time avoiding subjective judgments. The reality is that the literature is full of gaps, and it is simply not possible to provide a meaningful assessment without occasionally attempting to fill them. Even the act of organizing an assessment frequently creates a new conceptual framing of an important issue or topic, so a new assessment is, in itself, a new contribution to the literature. Recall the following examples:

  1. The identification of the determinants of adaptive capacity was a critical part of Chapter 18 of the contribution of Working Group II to the TAR (IPCC 2001), which provided an organizing frame for a significant amount of post-2001 research into adaptation (e.g., Yohe and Tol 2002).

  2. The development of criteria for identifying “key vulnerabilities” was a major contribution from Chapter 19 of the contribution of Working Group II (WGII) to the AR4 (IPCC 2007c). Those criteria supported the continued emphasis on “Reasons for Concern” (themselves developed during the creation of the TAR) in the WGII-AR4-SPM and subsequently in the AR4 Synthesis Report (IPCC 2007b); and they have organized subsequent work by those looking to identify where “dangerous anthropogenic interference with the climate system” might be, or might already have been, detected (Smith et al. 2009; Yohe 2010).

  3. The partial (and largely unsuccessful) attempt by WGI to estimate potential changes in the dynamical ice sheet contribution to sea level rise in the AR4 (IPCC 2007d). The final result has confused users and scientists alike and has confounded the application of AR4 sea level rise scenarios to subsequent assessments of risk to coastal zones around the world for years (Oppenheimer et al. 2007; Nicholls et al. 2011).

The IPCC’s development of the SRES scenarios provides a fourth (and perhaps the leading) example (IPCC 2000). The outcome left IPCC appearing to be twisted into a pretzel. On the one hand, IPCC’s making projections of how the socio-economic future of the planet might evolve over 100 years (as opposed to assessing scenarios composed by others) certainly constituted doing “new research”. On the other hand, it did make such projections (because others had not expended the time or effort to do so). Since then, nearly every impact and adaptation study published has anchored itself on one or another of the four underlying storylines, in large measure because they had the IPCC brand attached. But in trying at least to acknowledge restriction #1 against doing original research, IPCC took the unsatisfying approach of presenting the SRES scenarios without assigning probabilities to any of them.

IPCC has recently endorsed the independent description of four new scenarios that are defined by levels of long-term radiative forcing from anthropogenic sources (the so-called Representative Concentration Pathways, RCP’s). The thought is that these new alternatives will anchor a new generation of impact studies along pathways that can be tied more closely (but still very loosely) to alternative mitigation futures, but only if the generating pathways and impact contexts can be tied to supporting socio-economic development pathways (the proposed, so-called SDP’s) so as to make the picture self-consistent. That an infinite number of SDP’s could produce any of the four RCP’s (except perhaps the lowest one) is a problem that has yet to be solved. IPCC has not been integrally involved in the development of either the RCP’s or, should they materialize, the SDP’s; but IPCC has been the motivating force behind their creation, and its leadership certainly anticipates that they will play the same role in subsequent assessments as the SRES trajectories did in past efforts. In short, we wonder why IPCC should shy away from creating new science (and thereby inhibit that creative enterprise) when it is obvious that it cannot do its work (appropriately) without doing new science.

5 The issue of consensus

To many, notably including Risbey and Curry in this special issue, the emphasis on consensus is the most troublesome limitation of IPCC assessment processes (for a general critique of the consensus approach to science, see Moore and Beatty 2010). Achieving consensus is, to be clear, one of the major objectives of IPCC activities. Paragraph 10 of the amended Procedures Guiding IPCC Work, for example, states that “In taking decisions, and approving, adopting and accepting reports, the Panel, its Working Groups and any Task Forces shall use all best endeavors to reach consensus” (http://ipcc.ch/pdf/ipcc-principles/ipcc-principles.pdf). The paragraph continues by noting that “for approval, adoption and acceptance of reports, differing views shall be explained and, upon request (by countries participating in the approval plenaries), recorded (in publicly available documentation that is maintained by the IPCC Secretariat). Differing views on matters of a scientific, technical or socio-economic nature shall, as appropriate in the context, be represented in the scientific, technical or socio-economic document concerned,” but it is certainly the case that all participants in IPCC assessments would like this to be the exception rather than the rule. To our memory, such documentation has never been required, at least so far as SPM’s are concerned.

Drawing the boundaries between consensus and disagreement is an activity so subjective that it ipso facto violates restriction #3 (Moore and Beatty 2010). As IPCC, in a search for objectivity in uncertainty assessment, has turned more to describing uncertainty in terms of the characteristics of ensembles of model outcomes, the deficiency in such an approach (its exclusion or limited treatment of systemic, structural uncertainty in models) has become increasingly apparent to the community (Winsberg 2010; Knutti et al. 2008; Goldstein and Rougier 2009). The exercise of subjective judgment in the comparison of ensemble outcomes to observational and paleoclimatic data provides a critically important means to augment the criteria internal to the model world. Indeed, there are examples in IPCC reports of willingness to acknowledge the importance of expert (subjective) judgment, if on a limited basis (e.g., the discussions of climate sensitivity, detection and attribution, and climate and weather extremes in the WGI report, and the assessment of response strategies in the WGII report of the AR4; see also Knutti and Hegerl 2008 for further details on the role of expert judgment in estimating climate sensitivity). But more salient is IPCC’s reluctance to incorporate the inevitable process of subjective judgment coherently into its assessments of uncertainty and, absent this coupling, IPCC’s tendency, particularly in SPM’s, to resort to emphasizing ensemble means rather than fully describing the range of views. In some cases, neither ranges of views nor consensus judgments are reported, leaving decision makers at a loss. Outstanding examples of the latter include, in addition to the examples of SRES and ice sheets above, authors’ avoidance of any estimates of carbon cycle feedbacks involving tundra or methane hydrate reservoirs, and avoidance of estimates of the degree to which adaptive capacity would actually be implemented under particular circumstances.
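The contrast between an ensemble mean and a fuller description of the range can be made concrete with a deliberately artificial sketch (the numbers below are synthetic and stand in for, say, a multi-model projection of some impact metric; nothing here reproduces an actual IPCC ensemble):

```python
# Hypothetical sketch: why reporting only an ensemble mean can hide
# exactly the tail information a risk-management framing requires.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "multi-model ensemble": a clustered majority plus a few
# structurally different members that populate a heavy upper tail.
ensemble = np.concatenate([
    rng.normal(loc=2.0, scale=0.4, size=18),  # clustered majority
    rng.normal(loc=4.5, scale=0.8, size=3),   # divergent members
])

mean = ensemble.mean()
p05, p50, p95 = np.percentile(ensemble, [5, 50, 95])

print(f"ensemble mean:       {mean:.2f}")
print(f"median (p50):        {p50:.2f}")
print(f"5th-95th percentile: {p05:.2f} to {p95:.2f}")
print(f"largest member:      {ensemble.max():.2f}")
# The mean alone says nothing about the upper tail, and nothing at all
# about structural uncertainty shared by every member of the ensemble.
```

The final comment matters: even a full percentile range describes only the spread within the ensemble, not the systemic, structural uncertainty discussed above, which is precisely where subjective expert judgment has to enter.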

Two proposals have been advanced repeatedly for beginning to address the problem of creating, defending and communicating consensus results as well as departures from the consensus. The degree to which IPCC, through its working group leadership structure, resisted these proposals during the AR4 process is unsettling, given that the scientific communities from which IPCC authors are drawn are supposed to think analytically about the world as a whole. Apparently, this dictum does not extend to reflexive consideration within the IPCC process as it performs its assessments. Such reflexivity is entirely normal in social sciences, and increases, rather than decreases, the rigor of and confidence in the associated findings.

The first proposal calls for relaxing the focus on consensus and instead putting at least as much effort into presenting the full range of expert judgments. We and others have gone so far as to suggest that the consensus on many key aspects of climate change is already well known to governments; and we agree with Socolow in this issue, and with others, when they argue that the value added by an assessment lies in displaying the range of views. The most complete way to do so would be to present not only the range of views in the community, but also the range of views within the assessment group, and perhaps even to remove the mask of anonymity that cloaks our deliberations. Other ideas to increase transparency about the full spectrum of beliefs have surfaced, including opening author deliberations to scholars of decision-making and/or the media.

The second urges that all Working Groups forgo the fiction that expert deliberations are entirely objective and that arriving at judgments by deliberation within what are usually small subgroups is the only permissible approach to assessment. Formal expert elicitation (Morgan and Henrion 1990; Morgan, this issue) has been proposed again and again (for example, before the first uncertainty guidance, in the report of the Aspen workshop; see Moss 2011 and Hassol 1996), and it is troubling that the IPCC has repeatedly declined to explore its value. After all, there is only a sparse literature on the efficacy of IPCC’s favored approach (see below), in comparison to the relatively extensive scholarly literature on formalized elicitation of judgments. This observation raises an important question in our minds: why is IPCC so tied to a method whose value remains largely speculative, given how little it has been subject to scholarly study? Perhaps it is because the same procedures have been adopted by many other assessments in many other contexts; the National Academy of Sciences, for example, produces consensus documents from diverse committees and panels that are subjected to expert review by selected external scholars (and it is perhaps noteworthy that NAS reports occasionally feature signed dissents on particular points). Yet, anecdotal information aside, there are very, very few studies of the internal operation of such panels (LaMont 2010), and for evaluating the effectiveness of IPCC’s approach, the community has only the very limited literature matching its projections over the past 20 years to actual climate outcomes (Rahmstorf et al. 2007; Pielke 2008). Furthermore, it is unlikely that formalized elicitation as currently practiced is the only, or even the best, available alternative method for assessing expert knowledge. IPCC should be encouraging research into such approaches (much as it encourages research into new emissions scenarios, for example) rather than turning its back on them. If those who do such research had a client as large and visible as IPCC, progress might occur quickly.
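For readers unfamiliar with what formalized elicitation actually produces, the following toy sketch combines hypothetical elicited quantiles from three imaginary experts using equal-weight linear opinion pooling; it illustrates one standard aggregation technique from that literature and is not a description of any IPCC procedure:

```python
# Toy sketch of formal expert elicitation output: each (hypothetical)
# expert supplies quantiles for an uncertain quantity, and the judgments
# are combined by equal-weight linear opinion pooling.
import numpy as np

# Elicited 5th, 50th, and 95th percentiles (arbitrary units).
elicited = {
    "expert_A": {0.05: 0.8, 0.50: 2.0, 0.95: 3.5},
    "expert_B": {0.05: 1.5, 0.50: 2.8, 0.95: 6.0},
    "expert_C": {0.05: 0.2, 0.50: 1.2, 0.95: 2.4},
}

def pooled_cdf(x: float) -> float:
    """Average each expert's piecewise-linear CDF at x (equal weights)."""
    total = 0.0
    for quantiles in elicited.values():
        probs = np.array(sorted(quantiles))
        values = np.array([quantiles[p] for p in probs])
        total += float(np.interp(x, values, probs, left=0.0, right=1.0))
    return total / len(elicited)

# Pooled judgment that the quantity exceeds 3.0 (arbitrary threshold).
print(f"pooled P(X > 3.0) = {1.0 - pooled_cdf(3.0):.2f}")
```

The point of the exercise is not the particular pooling rule but the traceability: each expert's judgment, and the way judgments are combined, is explicit and open to scrutiny, which is exactly what informal deliberation within small author subgroups does not provide.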

6 Who is the target?

These proposals about how better to conduct assessments raise the question of who the audience is. The combination of deeply detailed chapters, read comprehensively by only a self-selecting group of interested scholars and practitioners, coupled to a single SPM, which is finalized via a largely separate, unusual, and sometimes controversial process, underscores what seems to some observers to be an inherent contradiction embedded in IPCC. We hold a different view. IPCC is designed to satisfy two distinct constituencies, and it has implemented (in the form of plenary approval of the SPM’s) a brave and unprecedented experiment in whether this is possible. The difficulty arises not in the attempt to communicate with these clients but in shying away from the full implications of that attempt.

First, IPCC needs to decide if it really intends to communicate with the general public or, more accurately, publics. If so, then it needs to recognize that scientists are not communications experts and that their intended messages are likely to be overwhelmed by the interpretations, misunderstandings, and spins of the inevitable intermediaries. If IPCC is going to take public communication seriously, then it should incorporate experts into the process of developing the message. This means involving social psychologists (Pidgeon and Fischhoff 2011) and others with expertise in cognition, learning, and the interpretation of information deeply within the assessment machinery, because the language spoken via the SPM and press releases has genetic links back to the chapters. This suggestion immediately raises the difficult issue of how far the evaluation of uncertainty and the communication of uncertainty are separable. To the extent that they are one and the same process, cognitive expertise is an essential element of assessment.

Alternatively, IPCC could decide that governments are its only plausible clients. At least they (in the form of their chosen representatives) are intimately involved in the process of developing the SPM, and they thereby gain an understanding of the meaning of the details that the general public does not. Not only are government delegates involved in editing and approving the SPM, but government scientists are integral to the entire process. It could be that IPCC can dispense with the idea of embedding cognitive experts in the process and instead leave the business of messaging entirely to governments and the many non-governmental intermediaries who work at the interface between the scientific communities and the public. But as of now, IPCC tries to do both and is not entirely successful at either.

7 Conclusion: we’re scientists, so let’s experiment

Tol (in this collection of papers) has suggested that IPCC’s effective monopoly over global climate assessment is unhealthy for its relations with the general public while limiting its utility to policy makers. The international climate change regime is undergoing a transformation into what some have called a “regime complex”, with diversification of institutional roles away from the UNFCCC and toward other bodies like the Major Economies Forum, the World Bank, the Montreal Protocol and, potentially, the WTO (Keohane and Victor 2010). Will the same happen to IPCC? The advantage of centralizing the assessment role is the communication value of a monolithic message. The disadvantage is the ossification and eventual irrelevance of a unique and singularly important institution. It would be better for IPCC to begin to experiment under conditions it can control than to wait for its influence to wane because it is no longer up-to-date in its approach to assessment. Some of the ideas that spring to mind, in order of their degree of departure from current practice, include:

  1. Fully reporting differences of judgment within working group chapters, including identifying specific views with particular experts;

  2. Establishing competing assessment teams within IPCC, as well as using other approaches developed for intelligence analysis (Tetlock 1991; Goodman and King, this issue). Full reporting on the divergent outcomes of such teams would be one way to satisfy the need to go beyond the emphasis on consensus, but only a limited number of alternative teams could be deployed on only a handful of questions, or IPCC would be overtaxed;

  3. In a twist on the Tol proposal, bidding out certain questions to groups affiliated with IPCC yet operating under rules which each would propose, in order to provide an optimum combination of uniformity and diversity under IPCC’s tent.

As IPCC and its supporting community continue to tinker with the process, though, it is important to get it right. In the climate world, where coping with uncertainty is a way of life, expressing conclusions carefully is essential. Scientific knowledge will evolve, and conclusions will change. Sometimes new knowledge will strengthen existing conclusions. Other times, new knowledge will weaken confidence in accepted wisdom. And sometimes errors will be made (in the assessments as well as in the underlying science). Ultimately, any set of adjustments in how IPCC treats uncertainty and otherwise conducts its business must attain not only the approval of governments but also, just as important, the active and engaged cooperation of the participating scientists. Such an effort will require that our community bring as much enthusiasm and originality to the task of increasing the transparency, comprehensibility, and utility of its assessments as we bring to our research.