Three dilemmas in the integrated assessment of climate change
These three dilemmas embody the hardest, most important, and most enduring problems of doing assessment well. None admits simple, obvious solutions. Each can be managed better or worse for any particular assessment endeavor, but doing better requires clear understanding of the purpose of the endeavor. What ways of combining different pieces of disciplinary knowledge, of making projections, and of pursuing policy relevance are more or less appropriate will differ, depending on whether a project seeks to characterize uncertainties and gaps in knowledge; to advise a particular policy choice; to support dialog among policy actors; or to facilitate inquiry into relevant values or goals. Evaluation of the relative emphasis, the methods, and the process of an assessment can only be done relative to some such purpose.
Of course, some pitfalls may be so serious as to thwart any purpose, as Risbey et al.'s discussion of the global modeling movement reminds us. The global models' most obvious pitfalls — inadequate treatment of uncertainty, neglect of economic adjustment, excessive confidence in predictions — have largely been seen and avoided by the current assessment community (though there may be more to be learned even here). But on the subtler questions of how assessment or modeling can contribute most usefully to policy, little progress has been made since the 1970s. Consequently, though assessment has advanced in many ways since then, IA remains at risk of suffering the same fate as the global models: a cycle of early enthusiasm, followed by a reaction of frustration and excessive, undeserved rejection.
Current endeavors in IA have made substantial contributions to identifying and prioritizing knowledge needs, but fewer to informing specific policy choices. Further progress cannot be guided by a single canonical view of what assessment should be and do, but will proceed incrementally down multiple paths. Several paths currently appear promising: analytic approaches to better represent multiple actors, diverse preferences, and multiple valued outcomes; better representation and application of uncertainty, including diverse expert opinion; novel methods to link assessment with policy communities; and broader participation in assessment teams, with explicit focus on negotiating and elaborating pragmatic, viable critical standards. Risbey et al.'s call to develop institutions for critical reflection, mutual learning, and self-improvement will be crucial in making and evaluating the progress down these paths.
Morgan and Dowlatabadi's checklist for desiderata of IA is a good starting point for a conversation about assessment standards, to which I would propose a few extensions and elaborations. First, there should be not just multiple assessments, but multiple assessment projects using diverse collections of methods and approaches. Second, assessment projects should explore novel methods for connecting their work with the policy community. Third, the approach should be iterative not just within each project, but across assessment projects and between them and the policy community. Fourth, assessors should not be embarrassed by, or seek to disguise, results that are merely illustrative, non-authoritative, and suggestive; these should be acknowledged as such, and the vigorous questioning and critique that will come, including partisan critique, accepted. Do not seek to avoid criticism by mumbling.

An important limit to this checklist approach is suggested, though, by the way various writers have groped to define assessment standards by analogy to other domains, revealing how limited is our understanding of how to evaluate assessment. Risbey et al. refer to ‘connoisseurship’, as if assessment is like fine wine; Clark and Majone (1985) refer to artistic criticism, as if assessment is like opera singing. If these analogies are appropriate, then pursuing a single set of critical standards for assessment is at least premature, possibly erroneous. Rather, there should be a diversity of approaches, perhaps so broad that no single set of criteria for excellence could be defined. The pragmatic middle way between the too-limiting application of a single set of standards and an anarchic refusal to evaluate will have to be negotiated, defined, and improved incrementally.
- Clark, William C. and Majone, Giandomenico: 1985, ‘The Critical Use of Scientific Inquiries with Policy Implications’, Science, Technology, and Human Values 10 (3), 6–19.
- Glantz, Michael H., Robinson, Jennifer, and Krenz, Maria E.: 1985, ‘Recent Assessments’, in Kates, Robert W., Ausubel, Jesse H., and Berberian, Mimi (eds.), Climate Impact Assessment: Studies of the Interaction of Climate and Society, SCOPE 27, International Council of Scientific Unions, Wiley, Chichester.
- Grobecker, A. J., Coroniti, S. C., and Cannon Jr., R. H.: 1974, The Report of Findings: The Effects of Stratospheric Pollution by Aircraft, DOT-TST-75-50, US Department of Transportation, Climatic Impact Assessment Program, National Technical Information Service, Springfield, VA.
- Guetzkow, Harold (ed.): 1962, Simulation in Social Science: Readings, Prentice-Hall, Englewood Cliffs, NJ.
- Morgan, M. G. and Dowlatabadi, H.: 1996, ‘Learning from Integrated Assessment of Climate Change’, Climatic Change 34, 3–4.
- Risbey, J., Kandlikar, M., and Patwardhan, A.: 1996, ‘Assessing Integrated Assessments’, Climatic Change 34, 3–4.
- Wynne, Brian and Shackley, Simon: 1995, ‘Environmental Models: Truth Machines or Social Heuristics?’, The Globe 21, 6–8.