
The Value of Information in a Risk Management Approach to Climate Change


Abstract

The standard economic approach to analyzing the climate change problem has been to search for efficient abatement policies. The massive uncertainties and the possibility of cataclysmic climate damages, however, suggest that a risk management approach is more appropriate. This shifts the policy question to how much risk of catastrophe society is willing to accept. Intuitively, this change in focus may shift our information needs, and the needs should be assessed through a value-of-information analysis. Such calculations should allow for examination of how improved information alters the probability of exceeding a given policy target, incorporate rigorous expert judgment for determining beliefs and quantifying uncertainties, and highlight what scientific information is most valuable for a policymaker attempting to keep the probability of catastrophic climate impacts below a set threshold. We discuss how Bayesian belief nets can be useful tools for this type of analysis.


Notes

1.

    Much ink has been spilled over whether we should choose according to the principle of maximizing expected utility. We assume for the present discussion that the decisionmaker is a rational agent in the sense of Savage (1954). A rational agent’s preferences can always be decomposed into a unique probability over states of the world and a utility over consequences, unique up to positive affine transformation, such that preferences are represented by expected utility.

2.

    The conditional probability Prob(cs = 1.5 | LO) = 0.75, and similarly Prob(cs = 1.5 | HI) = 0.3. Solve Prob(cs = 1.5) = 0.66 = Prob(cs = 1.5 | LO)Prob(LO) + Prob(cs = 1.5 | HI)(1 − Prob(LO)) to find Prob(LO) = 0.8.

3.

    This model is meant simply to demonstrate the approach, and in that sense, the distributions chosen are somewhat arbitrary. We model the climate sensitivity as Beta distributed on [1, 15] with parameters (2, 24). The damage and production function exponent are both modeled as uniform variables, the first over [1, 3], and the second over [0.2, 0.4]. Part of the appeal of the BBN approach is that these distributions can be altered and the effects on output examined explicitly.
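    For readers who want to experiment with these inputs, here is a minimal Monte Carlo sketch of the three marginals as stated above (Python with NumPy); the tail-probability query at the end is our own illustration, not part of the chapter’s model:

```python
# A sketch of the toy model's marginal distributions, as stated in this note.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

cs = 1 + 14 * rng.beta(2, 24, n)      # climate sensitivity: Beta(2, 24) rescaled to [1, 15]
damage_exp = rng.uniform(1, 3, n)     # damage-function exponent: Uniform[1, 3]
prod_exp = rng.uniform(0.2, 0.4, n)   # production-function exponent: Uniform[0.2, 0.4]

# The kind of tail question a risk manager would put to the net
# (thresholds chosen arbitrarily for illustration):
for t in (2, 3, 5):
    print(f"P(climate sensitivity > {t}): {np.mean(cs > t):.3f}")
```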

4.

    I thank Molly Macauley for a long series of extensive discussions on the value of information; she should be held blameless for mistakes I persist in making. I am also grateful to Roger Cooke and Carolyn Kousky for discussions of issues raised in their chapter; they too are without responsibility for my errors.

5.

    A related issue is the distinction between data, the raw material an instrument gathers, and information, the useful knowledge produced when those data are given some theoretical interpretation in the chain between their initial gathering and their potential use.

6.

    The full quote from Brand (1985, 49) is instructive: “On the one hand, information wants to be expensive, because it is so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.” A quarter of a century later, this is still true.

7.

    A virtue of having health insurance purchased through businesses or the government rather than individually is that it creates an information firewall that keeps individual knowledge of health status from creating an adverse selection problem in health insurance markets, where relatively healthy people choose not to purchase insurance, increasing the expected costs of insuring those who do purchase it, perhaps to the point where the insurance market disappears altogether. When employees can choose their health care provider, that firewall comes down, and the costs of insurance to all can increase (Cutler and Reber 1998).

8.

    None of the formal constructions in this section are novel.

9.

    Finkel (this volume) points out that one can view this as the expected avoided cost of error or “regret” from choosing x* instead of x(w).

10.

    This distinction has implications for whether case studies on the value of particular pieces of information really can tell us very much about whether it was valuable to invest in the ability to acquire that information. Such case studies tend to be valuable to the degree that the benefit of the information, U(x(w), w) − U(x*, w), is relatively constant over the set of w that one is likely to observe. For example, if (a) one knows that a disease might break out in one out of N areas but one does not know which area will be the one, (b) stopping the disease would have the same benefit regardless of where it broke out, and (c) preventive measures would not be effective absent knowing where the breakout is, then a study showing the benefits of detection will be informative regarding the value of determining which area will have the outbreak. Whether assumptions along these lines are generally valid could be a useful question to study.

11.

    It may be that p(w, z), the probability of w conditional on z, is learned through a process of Bayesian updating based on a prior π(w) over the climate parameter and h(z, w), the likelihood that evidence z would be observed were a particular climate value w valid:

    $$ p\left( {w,z} \right) = \frac{{h\left( {z,w} \right)\pi (w)}}{{\int\limits_W {h\left( {z,w} \right)\pi (w)dw} }}, $$

    so that, as the main text requires, p(·, z) integrates to one over W for each z.

References

  • Ackerman, F., & Heinzerling, L. (2002). Pricing the priceless: Cost-benefit analysis of environmental protection. University of Pennsylvania Law Review, 150, 1553–1584.

  • Arrow, K. (1959). Economic welfare and the allocation of resources for innovation (P-1856-RC). Santa Monica: RAND Corporation.

  • Brand, S. (1985, May). Keep designing: Discussions from the Hackers’ conference, November 1984. Whole Earth Review, 46, 44–55.

  • Brennan, T. (2010). The challenges of climate policy. The Australian Economic Review, 43, 225–239.

  • Cooke, R. M., & Kelly, G. N. (2010). Climate change uncertainty quantification: Lessons learned from the joint EU-USNRC project on uncertainty analysis of probabilistic accident consequence code (Resources for the Future Discussion Paper 10-29). Washington, DC: Resources for the Future.

  • Cutler, D., & Reber, S. (1998). Paying for health insurance: The trade-off between competition and adverse selection. Quarterly Journal of Economics, 113, 433–466.

  • Fankhauser, S., & Tol, R. S. J. (2005). On climate change and economic growth. Resource and Energy Economics, 27(1), 1–17.

  • Hanson, R. (2007). The policy analysis market: A thwarted experiment in the use of prediction markets for public policy. Innovations, 2(Summer), 73–88.

  • IPCC. (2007). Climate change 2007: Synthesis report. Contribution of Working Groups I, II and III to the fourth assessment report of the Intergovernmental Panel on Climate Change. Geneva: Intergovernmental Panel on Climate Change.

  • Keller, K., Yohe, G., & Schlesinger, M. (2008). Managing the risks of climate thresholds: Uncertainties and information needs. Climatic Change, 91, 5–10.

  • Lempert, R. J., Schlesinger, M. E., Bankes, S. C., & Andronova, N. G. (2000). The impacts of climate variability on near-term policy choices and the value of information. Climatic Change, 45, 129–161.

  • Malkiel, B. (2003). The efficient market hypothesis and its critics. Journal of Economic Perspectives, 17, 59–82.

  • Manne, A., & Richels, R. (1995). The greenhouse debate: Economic efficiency, burden sharing and hedging strategies. Energy Journal, 16(4), 1.

  • Nocera, J. (2009, January 2). Risk mismanagement. New York Times Magazine. http://www.nytimes.com/2009/01/04/magazine/04risk-t.html. Accessed 1 July 2011.

  • Savage, L. J. (1954). Foundations of statistics. New York: Wiley.

  • Sorkin, A. R. (2009, September 16). Taking a chance on risk, again. New York Times. http://dealbook.nytimes.com/2009/09/17/taking-a-chance-on-risk-again/. Accessed 1 July 2011.

  • von Hayek, F. (1945). The use of knowledge in society. American Economic Review, 35, 519–530.

  • Weitzman, M. (2007). A review of the Stern Review on the economics of climate change. Journal of Economic Literature, 45, 703–724.

  • Wighton, D. (2009, January 29). Efficient market hypothesis is dead—for now: Business editor’s Davos commentary. The (London) Sunday Times.



2.C Commentary: Valuing Information, Ascertaining Risk, and Setting the Target

In their contribution to this volume, Kousky and Cooke (hereafter KC) offer a method, Bayesian belief nets (BBNs), as a way to acquire the relevant stochastic information under a risk management approach to climate policy.Footnote 4 Under such an approach, the goal is to come up with a policy to meet a given target—for example, that the probability of a given level of warming is less than a tolerable maximum. KC present this in a unified framework, but I want to suggest that the three aspects of the discussion—valuing information, ascertaining risk, and setting the target—are separable issues, in the sense that the merits of the approaches to any of these aspects can be assessed independently of how we regard the others. To support this, I will look at these three aspects in turn and then conclude with some observations on the use of aggregate expected utility in setting climate policy, whether under a cost-benefit or a risk management framework.

2.C.1 The Value of Information: Some General Issues

That information is valuable is obvious; more interesting is why that requires investigation in a way that the value of hamburgers does not. Part of the problem is that markets may not work as well for information as they do for hamburgers. Three aspects of information bring this out. The first is a circular regress in the purchase of information. Applying to information the general principle that one needs to know about a product before one can formulate a willingness to pay for it implies that before one would buy information, one would need to be informed about what the information is (Arrow 1959, 10). But if one were already that well informed, why would one need to buy it?Footnote 5

Two other aspects are intriguing because they are somewhat contradictory. One is that information, apart from any physical medium in which it may be embodied (book, DVD) or communicated (theater, computer), meets the economist’s definition of a “public good.” In that definition, the consumption or possession of information is “nonrivalrous,” meaning that one person’s having it does not disable others from having or using it as well. Consequently, information should be available to all who place a positive value on it, which generally requires that the price be zero. This is the somewhat technical argument behind the aphorism “information wants to be free” (Brand 1985, 49).

However, if information is free, one has the problem of covering the cost of discovering and providing it. Until recently, information providers recovered these costs through a combination of embedding the information in tangible goods that were relatively costly to duplicate and intellectual property protections against unauthorized copying and resale. The ability to convert information into digital formats detached from tangible goods and easily copied on computers and transmitted through broadband networks has blown a substantial hole in these protections. The good news is that a lot of information is free, but the bad news is that the business models used to support information provision have become unsustainable.

In contrast, although information meets the economic definition of a public good, the value of information often depends on exclusivity—that it is not made available as a public good in practice, even if it could be in principle.Footnote 6 One obvious context is business strategy, where the value of information depends on the competitive advantage it conveys over relatively ignorant rivals. Information can be valuable in interactions with buyers or sellers, although in those cases one runs the risk that asymmetry in information between buyers and sellers could cause adverse selection, leading to a collapse of markets to the detriment of all. In such cases, the value of information can be negative: market participants as a whole would be better off without it.Footnote 7 But business is not the only context where information is valuable to the extent it is private. Academic researchers strive to limit access to information to protect the priority of discovery on which reputations depend. The second person to write “E = mc²” on a piece of paper is likely unknown, probably forever.

2.C.2 Modeling the Value of Information

Fortunately and understandably, KC focus not on the strategic value of climate information, but on its public good aspect. Moreover, they get around the paradox by invoking the assumption that the procurer of information knows that the information will resolve a question or reduce uncertainty regarding it, but does not know how that uncertainty will be resolved. They present this graphically, but it is useful to look at the value of information algebraically to see that their formulation applies not just to expected utility maximization but to risk management as well.Footnote 8 It also allows us to see that BBNs used to refine uncertainties could in principle be equally useful in either setting.

We face uncertainty about a parameter w reflecting sensitivity of climate to emissions. KC have w equal either to 1.5 or 5, but for notational convenience, we allow w to vary over a connected range W. Before gathering new information I, the likelihood that w takes a particular value is f(w), which as a probability distribution satisfies

$$ \int\limits_W {f(w)} dw = 1. $$

Absent I, a policymaker charged with maximizing expected utility would have to choose a policy x* to maximize

$$ \int\limits_W {U\left( {x,w} \right)f(w)} dw, $$

where U(x, w) is the utility from policy x if w is realized.

Following KC, assume first that I allows the policymaker to know the value of w before choosing x. If so, she would choose x(w) to maximize U(x, w) given w. Thus, prior to acquiring I, the expected utility the policymaker would achieve with I is

$$ \int\limits_W {U\left( {x(w),w} \right)\,f(w)} dw. $$

The value of information V at the time one decides to acquire it is the difference between the expected utility with the information and the expected utility without it:Footnote 9

$$ V = \int\limits_W {\left[ {U\left( {x(w),w} \right) - U\left( {x^*,w} \right)} \right]\,f(w)} dw. $$

Figure 2.1 in KC illustrates this formulation in the case where w can take one of only two values.
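As a sanity check on this formula, here is a minimal numerical sketch (Python with NumPy). The quadratic utility U(x, w) = −(x − w)² and the prior for w (borrowed from the toy distributions in note 3) are assumptions chosen purely for illustration, because with quadratic loss the answer is known in closed form: x* = E[w], x(w) = w, and V equals the variance of w.

```python
# Monte Carlo check of V for perfect information, under an assumed
# quadratic utility U(x, w) = -(x - w)**2. Then x* = E[w], x(w) = w,
# and V should equal the variance of w.
import numpy as np

rng = np.random.default_rng(0)
w = 1 + 14 * rng.beta(2, 24, 200_000)   # prior f(w): toy distribution from note 3

def U(x, w):
    return -(x - w) ** 2                # assumed utility, not KC's

x_star = w.mean()                       # maximizes expected utility absent I
V = np.mean(U(w, w) - U(x_star, w))     # E[U(x(w), w) - U(x*, w)]
print(f"V = {V:.4f}, variance of w = {w.var():.4f}")
```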

If V exceeds the expected cost of gathering the information—for example, launching an earth observation satellite, carrying out malaria detection tests, or advertising product characteristics—the investment in information gathering is worth the cost; if V is smaller than that expected cost, the information is not worth obtaining. V will tend to be larger the more U(x(w), w) exceeds U(x*, w) over ranges of w that are relatively likely—that is, where f(w) is relatively large. When U(x(w), w) is not much different from U(x*, w), either because decisionmakers cannot choose an x(w) much different from x* or because the choice does not affect U very much, or when the set of w values where the difference matters has a low ex ante likelihood of occurring (f(w) small), the value of information will not be great, and investments to procure it are less likely to be justified on cost-benefit grounds.

It is important to recognize that the value of information here is based on f(w), the likelihood the information will be valuable before the information is gathered. Consequently, a study showing that information happened to be valuable, given a particular realization of w after the information was gathered, does not prove that the information was worth gathering in the first place. If the ex ante chance of observing that value of w, f(w), was particularly small, the information investment may not have been worth making, even if it happened to turn out to be valuable. Similarly, an information investment could have been worth the cost even if the particular piece of information revealed turned out to have little effect on the decisionmaker’s choice or utility, if outcomes that could have been important were sufficiently likely at the time the investment in information gathering was made.Footnote 10

As KC point out in their second example, information may not enable a decisionmaker to identify w; it may give her only better information regarding the underlying distribution of w. To describe this, let the preinformation f(w) be given as the weighted average of a set of probability distributions p(w, z), where the preinformation probability of observing distribution p(w, z) is g(z) taken over a domain Z of distributions.

$$ f(w) = \int\limits_Z {p\left( {w,z} \right)g(z)dz}, $$

where

$$ \int\limits_Z {g(z)dz} = 1, $$

and for any z,Footnote 11

$$ \int\limits_W {p\left( {w,z} \right)dw} = 1. $$

Prior to learning z, the policymaker would choose x* that maximizes expected utility

$$ \int\limits_W {U\left( {x,w} \right)} \int\limits_Z {p\left( {w,z} \right)g(z)dz} dw. $$

After learning z, the policymaker can choose x(z) to maximize expected utility under the distribution p(w, z) that describes the postinformation uncertainty about w:

$$ \int\limits_W {U\left( {x(z),w} \right)} p\left( {w,z} \right)dw. $$

The value of information is, as above, the expected improvement from being able to choose after learning, in this case learning z (that is, which distribution applies) rather than w directly:

$$ V = \int\limits_Z {\int\limits_W {\left[ {U\left( {x(z),w} \right) - U\left( {x^*,w} \right)} \right]p\left( {w,z} \right)g(z)dw} dz}. $$

KC’s Fig. 2.2 illustrates this where there are just two potential distributions.
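A discrete version of this two-distribution case can be computed directly, wiring in the numbers from note 2 (w ∈ {1.5, 5}, Prob(LO) = 0.8, Prob(w = 1.5 | LO) = 0.75, Prob(w = 1.5 | HI) = 0.3); the quadratic utility is again our own assumption, made purely for illustration:

```python
# A discrete sketch of the partial-information case, using the numbers
# stated in note 2. Only the utility function is invented here.
import numpy as np

w = np.array([1.5, 5.0])                  # climate sensitivity values
g = {"LO": 0.8, "HI": 0.2}                # prior over distributions, g(z)
p = {"LO": np.array([0.75, 0.25]),        # p(w | z)
     "HI": np.array([0.30, 0.70])}

def U(x, w):
    return -(x - w) ** 2                  # assumed utility

f = sum(g[z] * p[z] for z in g)           # pre-information marginal f(w)
x_star = f @ w                            # optimal x without information
V = 0.0
for z in g:
    x_z = p[z] @ w                        # optimal x after learning z
    V += g[z] * (p[z] @ (U(x_z, w) - U(x_star, w)))
print(f"f(w=1.5) = {f[0]:.2f}, x* = {x_star:.2f}, V = {V:.3f}")
```

Here learning z shifts the optimal policy from x* toward the conditional mean of w given z, and V is the probability-weighted gain from those shifts.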

So far we have two uncertainties, the distribution of the climate variable w given the parameter z, and the distribution of the parameter z. Of course, there are other uncertainties as well, associated with how x will affect utility given w, such as how well the policy will work and what it will cost. One uncertainty is technological change that may reduce the cost of substituting away from fossil fuel use; another surrounds the ecological costs associated with extreme-case geoengineering. Let θ be a potentially multidimensional parameter reflecting these uncertainties over a range Θ distributed by h(θ, w), so that the “utility” U(x, w) for any policy under any realization of climate sensitivity is really an expected utility

$$ U\left( {x,w} \right) = \int\limits_\Theta {U\left( {x,w,\theta } \right)h\left( {\theta ,w} \right)d\theta }, $$

where, for any w,

$$ \int\limits_\Theta {h\left( {\theta ,w} \right)d\theta } = 1. $$

Substituting this into the expression for the value of information in the general case where information narrows but does not eliminate uncertainty regarding the climate parameter gives

$$ V = \int\limits_Z {\int\limits_W {\int\limits_\Theta {\left[ {U\left( {x(z),w,\theta } \right) - U\left( {x^*,w,\theta } \right)} \right]} p\left( {w,z} \right)g(z)h\left( {\theta ,w} \right)d\theta dwdz} }. $$

We could increase the formal complexity by looking at the value of information about θ that changes the degree of uncertainty regarding the costs and benefits of climate policy. However, this formulation is sufficient to illustrate that the methods for constructing the value of information, the use of Bayesian belief nets, and the choice between expected utility and risk management frameworks, can be regarded as independent issues.

2.C.3 Separating Value of Information, Risk Management, and BBNs

KC propose that climate policy be considered using a risk management framework, as distinguished, presumably, from an expected utility framework. KC (this volume) characterize the distinction in the following way:

A risk management approach asks what policy should be, given the large range of possible outcomes from that choice. This is quite distinct from asking what the optimal policy is under different assumptions of our uncertain variables.

Perhaps I’m wearing blinders acquired from decades of being an economist, but the source of “quite distinct” is not obvious. Both seem to be doing the same thing.

The essential difference may be that under risk management, rather than maximizing expected utility, the task is “to keep the probability of facing catastrophic damages to some determined low level,” or as said later, a “defined tolerable level.” If so, the utility or benefits of the outcome itself are essentially out of the calculation. Maximizing expected utility thus becomes a matter of minimizing the cost of meeting the risk management probability target. In the formulation above, then, we can substitute for expected utility (which was net of cost) the negative of cost.

To put it another way,

$$ U\left( {x,w,\theta } \right) = \overline U - C\left( {x,w,\theta } \right), $$

where \( \overline U \) is the utility achieved at the “defined tolerable level” and \( C\left( {x,w,\theta } \right) \) is the cost of implementing policy x with climate sensitivity w under circumstances θ. Substituting this into the expression for the value of information V above gives that value as the difference between the expected cost of acting before getting the information and the expected cost after we have learned more about the distribution of climate conditions:

$$ V = \int\limits_Z {\int\limits_W {\int\limits_\Theta {\left[ {C\left( {x^*,w,\theta } \right) - C\left( {x(z),w,\theta } \right)} \right]} p\left( {w,z} \right)g(z)h\left( {\theta ,w} \right)d\theta dwdz} }, $$

where x* is the choice of policy that minimizes expected costs prior to acquiring more information about the distribution of climate sensitivity.
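A sketch of this cost-effectiveness version, continuing the discrete example above: the policymaker picks the cheapest abatement level x that keeps the probability of catastrophe below a tolerable level α, and V is the expected cost saved by choosing after learning z. The catastrophe-probability function, the linear cost, and α = 0.05 are all assumptions for illustration; only the two-point distributions come from the chapter.

```python
# Risk-management framing: minimize cost subject to a catastrophe-probability
# constraint. pcat, the linear cost, and alpha are illustrative assumptions.
import numpy as np

w = np.array([1.5, 5.0])
g = {"LO": 0.8, "HI": 0.2}
p = {"LO": np.array([0.75, 0.25]), "HI": np.array([0.30, 0.70])}
f = sum(g[z] * p[z] for z in g)
alpha = 0.05                               # tolerable catastrophe probability

def cheapest_x(dist):
    # pcat(x, w) = (w / 15) * exp(-x): abatement x decays catastrophe risk.
    # Constraint: sum_w dist(w) * pcat(x, w) <= alpha; cost C(x) = x,
    # so the cheapest feasible x solves the constraint with equality.
    ew = dist @ w
    return max(0.0, np.log(ew / (15 * alpha)))

x_star = cheapest_x(f)                     # act before learning z
V = x_star - sum(g[z] * cheapest_x(p[z]) for z in g)   # expected cost saved
print(f"x* = {x_star:.3f}, V = {V:.3f}")
```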

In my interpretation of KC, the value of information shows up in the context of a cost-effectiveness test rather than as expected utility maximization. However, as a matter of formalism and, importantly, the underlying uncertainties, the factors determining the value of information are essentially the same. It may be that some values of θ would affect utility but not cost, so the relevant distribution function h(θ, w) could be less difficult to ascertain. But we still need to know h(θ, w), p(w, z), and g(z) to determine whether an effort to acquire information to determine z is worth the cost, whether we are operating under an expected utility framework or a risk management framework.

Similar considerations show that the Bayesian belief nets KC advocate would be relevant in an expected utility context. The purpose of a BBN is essentially to use the knowledge of a set of experts to learn the probabilities relevant to a particular policy. For expected utility, one could interpret this as using those experts to learn the distribution of effects on utility instead of costs (h(θ, w)). The expert information can also be used to determine which distribution of climate sensitivity p(w, z) we have, essentially by finding z and eliminating the stochastic step added by g(z). This suggests that the virtues of BBNs in improving information regarding risk, and whether setting one up is worth the cost, are at least qualitatively just as plausible in an expected utility framework as in a risk management framework. The expert survey methods KC propose are equally applicable in both circumstances, so their value is largely separate from whether one maximizes utility or minimizes the cost of reaching a maximum tolerable probability of a climate catastrophe.

2.C.4 Critiques Nonetheless

Establishing that methods for calculating the value of information and Bayesian belief nets could apply equally to expected utility and risk management doesn’t mean that risk management and expected utility are equally (in)valid ways of determining policy responses. Neither does applicability to expected utility theory insulate BBNs from critiques of risk management. The point of showing that the concepts are largely independent in principle is that it allows their merits to be assessed largely independently. Some brief observations on risk management, BBNs, and their interrelationship follow. I conclude with a critique of economic approaches to utility maximization in the climate context.

2.C.4.1 Risk Management

One can think of three justifications for using risk management over an expected utility approach. The first two are conventional arguments that justify cost-effectiveness over wealth maximization approaches generally. First, if the benefits of a policy are too difficult to quantify, one might simply evaluate policies in terms of how well they achieve a predetermined policy target. Quantification may be inherently difficult because the data on valuation are highly noisy. For the instructive case of the value of reducing mortality risk, the “value of a statistical life,” the underlying data on willingness to pay for incremental safety benefits are notoriously variable—but at least there are some markets or behavioral trails from which a willingness to pay may be inferred. Since the effects of climate change are global, nonexcludable (and thus outside markets), and far in the future, the present willingness to pay to mitigate it may be impossible to measure with any real confidence. One might be better off making a considered judgment regarding acceptable risk.

The second conventional argument for taking a risk management rather than an expected utility approach is that ethical considerations as well as economic factors determine the appropriate target, the instant example being the maximum tolerable probability of a sufficiently large-scale climate effect. I conclude below with some observations on the limits of the economic approach and, derivatively, the unavoidability of ethical considerations in the specific context of climate policy. In general, however, when lives or major changes to the social or physical environment hang in the balance, some may argue that policy responses ought not be determined solely by how much people might be willing to pay for them, even if that willingness could be measured accurately. One could set the level of tolerable risk using qualitative assessment, normative judgment, and communal deliberation (Ackerman and Heinzerling 2002) and then manage that risk by gathering information to seek out the most cost-effective programs.

A more recent argument for risk management and against expected utility, cited by KC, comes from Weitzman (2007). As I understand it, Weitzman’s argument rests on two premises. The first is that the distributions for climate events that one could statistically infer from the data, the f(w) in the models above, have fatter tails—more weight toward high climate sensitivity—than the underlying distributions might entail. He contrasts the t-distribution with the normal distribution in that regard. The second premise is that the appropriate form for modeling the utility of wealth, where wealth is affected by climate, entails assuming constant relative risk aversion (CRRA). These two premises together imply that the expected (dis)utility from climate change is −∞ and that any finite effort to alleviate it is justified.
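A small simulation conveys how the two premises interact (this is our own construction for illustration, not Weitzman’s model). With CRRA utility u(c) = c^(1−η)/(1−η), η > 1, and log consumption drawn from a fat-tailed Student-t, expected utility is −∞: the Monte Carlo running mean for the t case keeps sliding downward as rare extreme draws arrive, while the thin-tailed normal case converges.

```python
# CRRA utility with thin- vs fat-tailed log consumption (eta = 2, so
# u(c) = -1/c). For x ~ Student-t the expectation is -infinity, and the
# sample mean never settles; for x ~ normal it settles near -exp(0.5).
import numpy as np

rng = np.random.default_rng(0)
eta = 2.0

def mean_u(x):
    c = np.exp(x)                        # consumption
    return np.mean(c ** (1 - eta) / (1 - eta))

for n in (10**3, 10**5, 10**7):
    thin = mean_u(rng.normal(0.0, 1.0, n))
    fat = mean_u(rng.standard_t(3, n))   # t with 3 degrees of freedom
    print(f"n = {n:>8}: normal {thin:12.2f}   t(3) {fat:12.2f}")
```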

I lack the expertise to address Weitzman’s statistical premise, but the CRRA assumption is unsupported by theory and contradicted by experience. With regard to theory, CRRA is based on a quadratic Taylor series approximation to a utility function to model the willingness to pay to avoid variance in wealth. As such, it is constant only within a small distance around a target wealth level. Nothing suggests that one could extrapolate that approximation far outside such small variations, particularly to catastrophes. Were such extrapolation valid, individuals would regard the loss of life as sufficiently harmful to warrant arbitrarily large expenditures to limit mortality risk. That is, the observed value of a statistical life would be infinite—a prediction violated by almost everyone’s behavior almost every day. KC’s analysis does not rely on going this far, but the critique does undercut using Weitzman’s argument to justify a risk management rather than an expected utility approach.

I was surprised to see KC tout the advantages of “value-at-risk” (VaR) models for risk management. VaR has been taking a beating in the press, where its widespread adoption in assessing derivative portfolio risk has been blamed for the financial meltdown of 2008:

Given the calamity that has since occurred, there has been a great deal of talk, even in quant circles, that this widespread institutional reliance on VaR was a terrible mistake. At the very least, the risks that VaR measured did not include the biggest risk of all: the possibility of a financial meltdown. “Risk modeling didn’t help as much as it should have,” says Aaron Brown, a former risk manager at Morgan Stanley who now works at AQR, a big quant-oriented hedge fund. A risk consultant named Marc Groz says, “VaR is a very limited tool.” David Einhorn, who founded Greenlight Capital, a prominent hedge fund, wrote not long ago that VaR was “relatively useless as a risk-management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs. This is like an air bag that works all the time, except when you have a car accident.” Nassim Nicholas Taleb, the best-selling author of “The Black Swan,” has crusaded against VaR for more than a decade. He calls it, flatly, “a fraud.” (Nocera 2009)

VaR, by the way, is a horrible way to measure risk, as has been said again and again by economists, because it calculates the risk for only 99% of the time. As [Simon Johnson, a professor at Sloan School of Management at MIT] says, “VaR misses everything that matters when it matters.” Indeed, the VaR metrics obviously missed what led to what now has been dubbed the Great Recession. (Sorkin 2009)

I am not an expert in assessing financial risk, but I find one potential flaw of VaR, a shortcoming it shares with risk management generally: once the level of acceptable risk is determined, factors that might make the costs of unlikely events outside the range of acceptability become irrelevant to decisions, rather than ratcheting down acceptable probabilities as would happen with an expected utility approach. However, a reasonable response may be that the financial meltdown was the fault not of VaR but of its application. To paraphrase the National Rifle Association, one could say, “Models don’t kill economies, bankers kill economies.”
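A toy computation illustrates the “99% of the time” complaint (our own construction, not drawn from the articles quoted above): two return distributions can have nearly the same 99% VaR while hiding very different losses beyond it, precisely the region VaR ignores.

```python
# Two return distributions with nearly the same 99% VaR but very different
# tails beyond it. Parameters are arbitrary, tuned only so the VaRs roughly match.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal = rng.normal(0.0, 1.0, n)
# Mixture: benign on most days, but 0.5% of days draw a crash.
crash = np.where(rng.random(n) < 0.005,
                 rng.normal(-8.0, 2.0, n),
                 rng.normal(0.0, 0.9, n))

for name, r in (("normal", normal), ("crash-prone", crash)):
    var99 = -np.quantile(r, 0.01)       # 99% value-at-risk
    tail = -r[r <= -var99].mean()       # average loss beyond the VaR threshold
    print(f"{name:12s} VaR(99%) = {var99:5.2f}   mean loss beyond = {tail:5.2f}")
```

An expected-shortfall style measure, which averages over the tail, distinguishes the two cases; the VaR threshold alone does not.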

2.C.4.2 Bayesian Belief Nets

My initial reaction to KC’s BBN proposal was skepticism. My epistemological predisposition is that knowledge is something an individual acquires by examining the evidence and analyzing relevant theories, not by what appears from taking a poll. Of course, as KC go to some length to point out, BBN is more than mere poll taking. Underlying it is a process, not described in detail in their chapter, for treating expert opinions themselves as data amenable to Bayesian updating and maximum likelihood testing to reduce the uncertainty associated with a particular potential phenomenon, such as climate sensitivity or the costs of various policies.

Along with recognizing the statistical aspect of BBNs is the point that knowledge is routinely combined through institutional mechanisms to arrive at better estimates. Markets, certainly since von Hayek (1945), can be seen as information media in which disaggregated estimates of costs and value are combined to provide prices, the best estimates possible of marginal benefits and marginal costs. Asset markets—stocks and bonds, commodity futures and options, derivatives—similarly combine information regarding expectations of those benefits and costs to estimate their present values. The efficient market hypothesis is that those markets cannot be systematically beaten without unique information (Malkiel 2003), a view that has been under fire since the 2008 financial market meltdown (Wighton 2009). Variations on this theme involve prediction markets (Iowa Electronic Markets, Intrade) in which assets are created with payoffs based on the outcome of elections, legislation, wars, or other events. Such markets have been proposed, albeit controversially, to predict the likelihood of catastrophes or terrorist attacks (Hanson 2007).

We, or at least I, should also keep in mind a couple of things about the individual character of knowledge. Academically, material does not become accepted, nor is a lot of research funded, without peer review, itself a kind of collective expert assessment. More broadly, what all of us believe we know is far, far greater than what we have individually found out. From childhood to the present, we depend on teachers, books, colleagues, journals, libraries, and maybe even Wikipedia to tell us what we believe we “know.” Each of those sources is fundamentally a sort of “belief net.” That these sources lack the statistical foundation of BBNs counts in BBNs’ favor; the crucial point is that resistance to BBNs on the basis of their being collective rather than individual may be misplaced.

It does remain the case that a BBN need not be restricted to risk management settings. As noted above, a BBN can reduce uncertainties to improve policy choices under expected utility maximization as well. We do need to be careful to keep in mind that the value of a BBN is something that needs to be known ex ante. KC’s portrayal gives the impression that the value of a BBN is realized ex post. We need the assorted probability distributions described above to determine whether the costs of a BBN are worth undertaking. More on how one would make this ex ante assessment would be useful. One could imagine a staggered set of BBNs, where one undertakes a relatively low cost survey of a small number of experts to determine whether a full-blown BBN would be worth the costs.

2.C.4.3 Putting Them Together

The theme here is that value-of-information calculations, risk management versus expected utility, and the role of BBNs can all be assessed independently. Although that seems largely true, there is one sense in which BBNs combined with risk management could be problematic. As KC say, the goal under risk management is to find the best way to deal with a “defined tolerable level” of risk. This leaves open the question of how “tolerable level” comes to be defined. Under risk management, this is specifically not a matter of ascertaining individual willingness to pay, whatever that may be, and defining tolerable level as the point at which the revealed willingness to pay for further reductions in probability no longer exceeds the cost of those reductions.

If not, then how do we define the tolerable level? If the preferences of the general public cannot be ascertained or relied on, then the question of who gets to choose becomes unavoidable. If so, the BBN framework raises the possibility that the experts may interpret questions to be not about their best guess of a relevant value or probability distribution, but about their interpretation of “tolerable level.” This may happen unintentionally but could be problematic nonetheless.

For example, one might find a climate expert who is an avid bicyclist and vegan who finds air-conditioning oppressive. Another expert may love his Hummer and steaks and believe that air-conditioning is great, especially when one keeps the windows open so the house doesn’t smell musty. Those experts’ assessments regarding relevant probabilities of climate sensitivity and thus justification for incurring costs of policies may become difficult to disentangle from their own judgments about what costs are worth incurring in a society. Perhaps this is just a predictable observation from a paternalism-averse economist, but we probably need to be careful in framing BBN surveys to minimize the degree to which they become a forum for elites’ desires trumping the preferences of the public. Even if one has few qualms about letting elites make policy decisions, the political legitimacy of BBNs as an advisory tool may depend on limiting them to “just the facts.”

2.C.5 Expected Utility, Risk Management, and Climate Policy

We can conclude with an observation (Brennan 2010) that could cut in favor of the risk management approach, and less obviously but possibly Bayesian belief nets as well. When economists employ utility maximization models for policy, they—we—typically interpret “utility” in consumer surplus terms—that is, aggregate willingness to pay. The standard normative critique of this approach is that in aggregating surplus across everyone in an economy, it treats each dollar of net benefit as equal, whether the recipient is homeless or a billionaire. The justification for ignoring the normative consequences of the distribution of net benefits is that the winners could in principle compensate the losers, creating a situation where everyone is better off—a generally uncontroversial improvement. Someone other than the policy evaluator can decide explicitly or implicitly whether some other distribution of benefits is preferable.

However, there may be no compensation-in-principle in the case of climate change. If the benefits of climate policy are realized only by generations far in the future, this compensation would require that those in the future pay us for the sacrifices we make on their behalf. Since future output cannot be put into a time machine for delivery to us, such compensation may be impossible. If that is the case, the standard economic approach does not suffice; explicit moral assessment of our obligations regarding the welfare of future generations and environmental protections becomes paramount.

The inevitability of an explicitly normative dimension beyond efficiency to climate mitigation policy suggests we ought to focus not on maximizing expected utility but on determining an ethically tolerable level of risk of severe climate change to be met at least cost—the risk management approach that KC advocate. This would entail gathering information to reduce the relevant uncertainties associated with the costs of various means of meeting that objective. KC provide a framework to accomplish this, making their contribution to the climate policy community important.

Copyright information

© 2012 Springer Science+Business Media Dordrecht


Cite this chapter

Kousky, C., Cooke, R.M. (2012). The Value of Information in a Risk Management Approach to Climate Change. In: Laxminarayan, R., Macauley, M. (eds) The Value of Information. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4839-2_2
