Climate Science and Paleoclimatology
Although individual instrumental records from various places in North America and Europe are examined to determine trends, including the identification of the warmest and coldest years in the records, the main focus of this chapter is on proxy records. Proxy temperature records are constructed from tree ring, ice core and other paleoclimatic data using statistical methods that link the proxies to observed (instrumental) temperature data. The statistical methods are briefly discussed. Climate scientists use the proxy temperature reconstructions to show that global average temperatures remained constant for upwards of two millennia before rising dramatically beginning in the twentieth century – the temperature reconstructions effectively eliminate the Medieval Warm Period and the Little Ice Age or relegate them to local phenomena. This reconstruction became known as the hockey stick, with the long-run period of constant temperatures constituting the shaft and the recent dramatic upturn the blade of the stick. Along with a similar trend in the concentration of atmospheric carbon dioxide, the hockey stick is the key empirical evidence of global warming used by the IPCC. The criticism and defense of the hockey stick graph are discussed in detail, as are some of the other issues regarding the use of instrumental and proxy temperature reconstructions.
By the time of the 2007 Report of the Intergovernmental Panel on Climate Change (IPCC 2007), many commentators were confidently asserting that the scientific debate about the causes and perhaps even the potential future impacts of global warming had been settled, and that there was evidence indicating that recent decades were the warmest humans had ever seen (see Chap. 2). It was assumed that the vast majority of scientists agreed that human activities were overwhelmingly responsible for already observed global warming. This claim of an ‘overwhelming consensus’ among scientists is itself a non-scientific statement that has never been tested. The truth, of course, is that a ‘consensus’ does not and never has existed, and, even if there were a consensus among scientists, this does not imply the truth of the matter. The validity of scientific statements is not resolved by consensus or by a popular vote, although there are occasions when courts are asked to weigh the evidence and decide in favor of one side or the other because the unresolved issue has immediate policy implications (e.g., when a British court was asked to rule on the showing of An Inconvenient Truth in public schools).
- only 1% explicitly endorsed what Oreskes called the ‘consensus view’;
- 29% implicitly accepted it “but mainly focus[ed] on impact assessments of envisaged global climate change;”
- 8% focused on ‘mitigation;’
- 6% focused on methodological issues;
- 8% dealt “exclusively with paleo-climatological research unrelated to recent climate change;”
- 3% “reject[ed] or doubt[ed] the view that human activities are the main drivers of the ‘observed warming over the last 50 years’;”
- 4% focused ‘on natural factors of global climate change;’ and
- 42% did “not include any direct or indirect link or reference to human activities, CO2 or greenhouse gas emissions, let alone anthropogenic forcing of recent climate change.”
A more recent survey of the same database, but covering more recent years, showed that scientific opinion was shifting away from belief in catastrophic anthropogenic warming, not toward it (Schulte 2008), while a survey of climate scientists showed that the matter remains very much debated among them (Bray and von Storch 2007). Further, over 31,000 scientists, including over 9,000 with PhDs, signed the Global Warming Petition stating, “There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.” However, just as consensus does not resolve scientific disputes, neither does a petition.
During the latter part of November, 2009, hackers broke into the computers at the University of East Anglia in the United Kingdom, targeting in particular the University’s Climate Research Unit (CRU); or perhaps a whistleblower released the information. The CRU specializes in the study of past climate. Its research reconstructing historical temperatures from information based on tree rings, stalagmites in caves, ice cores, and lake sediment cores has come under increasing scrutiny, partly over controversy regarding the Medieval Warm Period (900–1300 AD) and, to a lesser extent, the Little Ice Age (1350–1850). The CRU supports the view that recent increases in temperatures are unprecedented by historical standards, that projected warming will be catastrophic, and that humans are responsible.3
The numerous documents and emails obtained from the CRU computers were posted anonymously on the internet, thereby providing a unique insight into the lengths to which scientists will go to protect their beliefs and data.4 Overall, the emails and other information posted on the web paint a negative picture of how climate science is done, and raise questions concerning the view that recent and projected temperatures are outside historical norms. In particular, as one high-profile weekly news magazine noted: The scientists “believe in global warming too much, and that their commitment to the cause leads them to tolerate poor scientific practice, to close themselves off from criticism, and to deny reasonable requests for data” (The Economist, 28 November 2009, p. 93).5 Several months after ‘climategate’ broke, Phil Jones, Director of the CRU, admitted in a BBC interview on February 13, 2010, that he did not believe “the debate on climate change is over” or that “the vast majority of climate scientists think” it is resolved.6
In one study, Anderegg et al. (2010) constructed a list of 903 names of people who were convinced by the evidence that anthropogenic climate change was happening as described by the IPCC, and a list of 472 names of those who were deemed not to be convinced by the evidence. The former group included 619 IPCC Working Group I authors (IPCC WGI 2007), while the latter included many people who opposed government action to mitigate climate change (and may have even been convinced by the evidence).7 Upon comparing the qualifications of the two groups, the authors found that the convinced scientists were more highly cited (and thus considered to have more climate expertise and prominence) than those in the unconvinced group. No statistical analysis was provided. What is most disconcerting about the analysis is the attempt to compare ‘apples and oranges’ – to compare experts on policy with those who essentially wrote the scientific case for anthropogenic warming. The policy experts deal with what is politically feasible in mitigating greenhouse gas emissions without destroying the fabric of society – and, in their view, the reduction in fossil fuel use that the convinced scientists propose as a solution is simply not possible (Gerondeau 2010; Levitt and Dubner 2009). The subject of reducing fossil fuel emissions is addressed in more detail in Chaps. 9, 10, 11, and 12.
The IPCC’s Working Group I scientists only know what historical temperatures have done and what might happen in a world of higher temperatures, but have no comparative advantage in predicting the extent of damages and social unrest/upheaval that global warming or attempts to mitigate it might cause. Here we are in a fuzzy arena where theology, philosophy and social science trump climate science.
It would appear that climate change or global warming is no longer about climate science, but about beliefs and politics. As ‘climategate’ has shown, climate science has deteriorated into a conflict rather than a debate between global warming alarmists and skeptics. It has become a matter of winning the hearts and minds of ordinary citizens, convincing them that climate change is either the greatest disaster ever to face humankind or a benign change in weather patterns that is well within what humans have experienced in the past several millennia. Indeed, it has recently been characterized by some as a religious debate (Nelson 2010; Sussman 2010; Wanliss 2010).
In this chapter we demonstrate that there has been anything but a ‘scientific consensus’ regarding just one aspect of the climate science, namely, the historical record. Evidence of controversy was already presented in Chap. 2, where we saw that there is disagreement among scientists about whether the instrumental record even signals a general warming that can be attributed to human greenhouse gas emissions, or whether the warming is an artefact of socioeconomic activities that affect local climates and thus local temperature measurements. Here we extend our examination of the claim made by researchers at the University of East Anglia’s CRU and their global collaborators that current temperatures are high by historical standards, that they have risen at a historically unprecedented rate, and that the rate of increase will be even faster in the future.
We begin in the next section by first examining those facts of global warming that might well be considered indisputable. To get our bearings, we then examine raw (unadjusted) temperature data from a number of weather stations, followed by a discussion of the paleoclimatic record. Finally, we combine the paleoclimatic data and the instrumental data in a discussion of the controversial ‘hockey stick’ diagram. In later chapters, we discuss projections of future temperature increases from climate models and some further controversies related to the science of climate change.
3.1 Indisputable ‘Facts’ of Global Warming?
1. Beyond dispute is the fact that the level of CO2 in the atmosphere has increased since the beginning of the Industrial Revolution, rising from about 270 ppm by volume (denoted ppmv or, more often, simply ppm) to nearly 400 ppm today. This is indicated in Fig. 3.1. The upward trend in CO2 has continued pretty much unabated and has in fact risen somewhat faster in recent years.
2. Temperatures have risen in the 150 or more years since the end of the Little Ice Age, around 1850 AD. This too is indisputable. The rise in temperatures is calculated to be roughly 0.07 °C per decade – about 0.7 °C over the past 100 years or, based on Chap. 2, about 1 °C over 130 years. This can be seen in Figs. 2.3 and 2.4.
3. Carbon dioxide is a greenhouse gas – it makes a contribution to global warming. This is not a point of disagreement. What is disputed is the extent of its contribution – the overall sum of the various positive (leading to further warming) and negative feedbacks caused by the initial CO2 forcing.
4. Also indisputable is the fact that human activities have contributed to this rise in atmospheric CO2 concentrations.
Everything else about global warming remains controversial, with peer-reviewed scientific papers providing evidence of the lack of consensus.
CO2 is one of the most important greenhouse gases, with other greenhouse gases generally measured in terms of their CO2 equivalence, denoted CO2e. For convenience, we will simply use CO2 to refer to carbon dioxide plus other greenhouse gases measured in terms of their CO2 equivalence. However, CO2 is a relatively minor greenhouse gas compared with water vapor.
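For concreteness, CO2 equivalence is computed by weighting each gas’s emissions by its global warming potential (GWP). The following minimal sketch uses the IPCC AR4 100-year GWP values for illustration; the function and the emission figures are our own, not drawn from this chapter.

```python
# 100-year global warming potentials (IPCC AR4 values, for illustration)
GWP = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2e(emissions_tonnes):
    """Convert a mix of greenhouse gas emissions (tonnes) to tonnes CO2e."""
    return sum(GWP[gas] * t for gas, t in emissions_tonnes.items())

# 100 t of CO2 plus 2 t of methane count as 100 + 2*25 = 150 t CO2e
print(co2e({"CO2": 100.0, "CH4": 2.0}))  # prints 150.0
```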
In terms of its relative contribution to the greenhouse effect, water vapor accounts for 95.00%, followed by CO2 (3.62%), nitrous oxide or N2O (0.95%), methane or CH4 (0.36%), and CFCs and miscellaneous gases (0.07%). However, once clouds are factored in, the contribution of water vapor to greenhouse warming may be less, varying between 66 and 85%, because clouds reflect the sun’s rays. In climate models, it is the enhanced greenhouse effect – the ‘forcing’ effect of CO2 in increasing water vapor in the atmosphere – that causes climate change of the magnitude found in the IPCC reports. Because warmer air causes more water to evaporate from the oceans, the initial CO2-induced warming is thought to lead to a greater amount of water vapor which, in turn, increases temperatures even more – a climate feedback. It is this climate feedback and whether other factors (e.g., cosmic rays) affect water vapor and cloud formation that is a source of disagreement (which is discussed further in Chap. 5).
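The feedback logic described above can be made concrete with the standard textbook amplification formula, in which a direct (no-feedback) warming is scaled by 1/(1 − f) for a net feedback factor f. This is a generic sketch of that arithmetic; the numerical values are purely illustrative and are not estimates from this chapter.

```python
def total_warming(direct_warming_c, feedback_factor):
    """Equilibrium warming after feedbacks: direct warming scaled by 1/(1 - f).

    A positive feedback factor (0 < f < 1) amplifies the direct CO2-induced
    warming (e.g., via added water vapor); a negative f dampens it.
    """
    if feedback_factor >= 1.0:
        raise ValueError("f >= 1 implies a runaway feedback")
    return direct_warming_c / (1.0 - feedback_factor)

print(total_warming(1.0, 0.5))    # amplified: 1 / (1 - 0.5) = 2.0
print(total_warming(1.0, -0.25))  # damped: 1 / 1.25 = 0.8
```

The disputed quantity in the text is precisely f: how strongly the initial CO2 forcing is amplified (or damped) by water vapor, clouds and other feedbacks.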
In the remainder of this chapter, we focus on the historical record to determine if global temperatures are already above those experienced in the past. Several important sources of disagreement are reviewed, the most important of which relates to the so-called ‘hockey stick’ – a graph showing temperatures to be flat for some 1,000 years and then rising sharply beginning around 1900. This controversy relates to paleoclimatic data and whether historical temperature reconstructions, or proxy data derived from tree ring data, ice core samples, lake-bed sediments and other sources, indicate that it was ever warmer than today, whether there is evidence of stable temperatures over the past two millennia (another version of the hockey stick), and whether CO2 and temperature go hand-in-hand over time (as there is some suggestion that CO2 lags temperature rise by 200–800 years as noted already in Chap. 1). Other controversies pertain to surface versus satellite temperature data and, more recently, whether ocean temperature data are more relevant. These issues relate to the means by which temperature and other weather data are collected, and the use of computer models to predict future climate change. The focus in this chapter, however, is only on the historical temperature record, primarily the paleoclimatic record.
It is important to recognize that instrumental records of precipitation, temperature and other weather data are available at a global level only after about 1850, and even then not for most regions (see Chap. 2). People recorded temperature and/or precipitation at various times before the 1800s, with the best historical record available likely being the Central England temperature record. However, there are insufficient systematic records to construct large-scale regional or global temperature averages prior to the mid to late 1800s, just as the Little Ice Age was ending. As we already saw (Fig. 3.1), instrumental measurements of atmospheric CO2 only began in 1958, although proxy measures are available from ice core samples for earlier years. The lack of instrumental records makes it difficult to say anything about current versus historical temperatures, for example, as the record of instrumental measurements is too short. Nonetheless, as discussed in Sect. 3.3, historical temperature proxy data can provide some indication regarding past climates.
3.2 Evidence from Individual Weather Stations
Consider the raw temperature records from two Canadian weather stations: Victoria, on the Pacific coast, and Edmonton, in the continental interior. The record indicates that temperatures in both locations have risen over the past nearly 130 years, with those in Victoria having risen only slightly and those in Edmonton by some 2 °C. This is not unexpected, given that the world was just coming out of the Little Ice Age and that interior continental regions generally warm more than coastal regions, which are moderated by the ocean’s heat sink.
A more interesting story is told when one takes the differences between the average maximum and minimum temperatures, which are plotted in Fig. 3.2b. In Victoria, the maximum temperature appears to be rising relative to the minimum, suggesting that summers are getting warmer. This supports the view that temperatures have been rising, even in Victoria. A more telling result is that of Edmonton, where the difference between average maximum and minimum temperatures has fallen, even while average temperatures have risen. This suggests that there may be an ‘urban heat island’ effect, which occurs whenever a weather station is located in an area experiencing urban growth. Thus, a weather station that is located in an open field early in the historical record slowly gets surrounded by increasingly dense urban, commercial and/or industrial developments. The heat given off by the surrounding buildings at night in the winter prevents temperatures from falling to levels experienced earlier in the record.8 Clearly, Edmonton weather station data exhibit a heat island phenomenon. This is to be expected for a location such as Edmonton – a rapidly growing industrial city at the heart of an oil and gas industry that began in the late 1940s.
It is difficult to discern an overall upward trend in temperatures from Fig. 3.3. Although regressing the average z-score on time provides evidence of a very slight upward trend of nearly 0.05 °C per decade (z = –0.1789 + 0.0045 × year, R2 = 0.0329), the trend will vary by the number and locations of the cities included in the average. However, it is also important to note that temperatures peak in 1997–1998 when there was a particularly strong El Niño event, without which temperatures may have remained flat for the period in question.
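The trend estimate quoted above comes from an ordinary least-squares regression of the average z-score on time. The sketch below shows that calculation with synthetic z-scores standing in for the station averages; the data are fabricated for illustration only, not the chapter’s series.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual average z-scores, 1926-2005 (synthetic data):
# a slight upward trend buried in year-to-year noise
years = np.arange(1926, 2006)
t = years - years[0]
z = -0.18 + 0.0045 * t + rng.normal(0.0, 0.5, t.size)

# OLS fit z = a + b*t; the slope b is in z-units per year,
# so 10*b gives the trend per decade
b, a = np.polyfit(t, z, 1)
r2 = np.corrcoef(t, z)[0, 1] ** 2

print(f"trend per decade: {10 * b:.3f}, R^2: {r2:.3f}")
```

With noise this large relative to the slope, the R² is tiny, which mirrors the weak explanatory power (R² = 0.0329) reported for the fitted trend in the text.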
The period since 1926, or even since 1850, is simply too short for us to determine whether current temperatures are higher than ‘normal’ – whether recent weather patterns and projected global warming are somehow outside the realm of human experience. To get a better feel for this, we need to investigate temperatures over several millennia.
As noted in Chap. 2, the most reliable temperature data probably come from the 1,221 high-quality weather stations that make up the U.S. Historical Climate Network (USHCN).9 Quite some sleuthing is required to determine annual averages for the contiguous 48 states directly from the USHCN data, which is why averages from other sources are generally used. NASA-GISS provides an average of U.S. surface temperature anomalies for the contiguous 48 states that has been cited by various commentators. These averages rely on USHCN data, but the raw temperatures are homogenized – adjusted in an attempt to remove non-climate influences. The adjusted data appear to change quite frequently, however (see below). One standard adjustment method employs population density; a more recent effort to remove non-climate sources of contamination adjusts observed surface temperatures using nighttime radiance (the amount of light emitted from various regions, as measured by satellite) for the period March 1996 through February 1997 (Hansen et al. 2010).
In Table 3.1, we provide NASA-GISS information regarding the 10 warmest years in the lower contiguous United States based on instrumental records. Rankings of the warmest years are provided for three different periods.10 Since data for the August 2007 report are available only through 2006, 2007 is not included in the earliest ranking given in the table. Notice that scientists have adjusted the data in ways that make more recent years appear warmer. Thus, the number of years from the past two decades that appear in the top 20 warm years has increased from 7 to 8 and finally to 11. In the May 2009 listing, 2007 is the 14th warmest year in the historical record, but it has moved up to tenth by the April 2010 listing. Based on data released in early 2011, GISS data show 2010 to be the warmest year on record, by 0.01 °C over 1998 (see Goddard 2011). While it may be true that the latest adjustments based on nighttime radiance are scientifically better than earlier adjustments, it seems odd that the most recent years are now showing up as among the warmest in the temperature record, contrary to global evidence presented in Fig. 2.6 and, importantly, averages based on raw temperature data for the U.S. presented in Fig. 3.3b.
Table 3.1 Ten warmest years based on average contiguous 48 U.S. surface air temperature anomalies (°C), 1880–2006; rankings as reported on three dates: August 20, 2007; May 27, 2009; and April 26, 2010.
[Table: Ten coldest and ten warmest January-February-March-April seasons in the past 500 years, Stockholm, Sweden; temperature anomalies in °C from the 1961–1990 average.]
Why are recent U.S. temperatures considered to be so warm compared to other years in the record? As pointed out in Chap. 2 and in the discussion above, climate scientists appear not to have been able to eliminate the contamination due to non-climatic or socioeconomic influences from the temperature record. This is not to suggest that global temperatures have not increased since the late 1800s, but rather that the recent decade may not have been the warmest ever. Certainly, the use of nighttime radiance is fraught with problems, including the fact that some jurisdictions illuminate their skies to a much greater extent than others, not because they are somehow richer, but because of political and historical factors pertaining to public lighting, sprawl, and so on. Further, it is difficult to wrap one’s head around the idea that radiance observations for 1996–1997 can be used to adjust temperature data going back several decades or more. Missing data at stations, changing station locations, spatial coverage, and varying record lengths across stations affect the temperature reconstructions. Given that these are problems for weather stations that are considered to be of the highest quality and that are located in the world’s richest country, one is left to speculate about the quality of data from weather monitoring stations elsewhere on the globe.
The main reason why recent temperatures appear to be the warmest on record in the NASA-GISS temperature reconstruction, however, concerns Arctic temperatures. Outside of satellite observations of temperatures (which are not used in the GISS reconstruction), there are very few weather stations in the north. Yet, Hansen and his colleagues extrapolate these limited observations to the entire Arctic. In 2010, therefore, the average temperature for the entire far north ranged from 4 to 6 °C above normal on the basis of observed temperatures in Nuuk, Greenland, and a couple of other northern stations. As Goddard (2011) points out, neither the satellite data nor the HadCRUT reconstructions come to a similar conclusion; the recent warm years are solely the result of incorrect procedures for averaging temperatures over a vast area based on extremely limited observations. The pitfalls of this were discussed in Chap. 2.
As noted in Chap. 2, the Berkeley Earth Surface Temperature project seeks to shed light on questions regarding the instrumental temperature record, including which year was the warmest. Even so, these efforts concern the warmest year of the past 130 or so, not of the last two millennia. We now turn to this issue.
3.3 Eliminating the Medieval Warm Period and Little Ice Age
The Intergovernmental Panel on Climate Change (IPCC), and thus much of the climate science community, takes the view that, although there is some variability within the system, on balance the Earth’s climate is generally in equilibrium and has been so for thousands of years. In this world, there are three things that can cause the climate to change. These forcings, as they are known, are volcanoes, solar cycles (sunspot cycles) and human activities. Volcanoes spew particulates into the atmosphere that reflect sunlight back into space, thereby resulting in global cooling. However, volcanic ash might fall on snow and ice, thereby reducing the reflectivity (or albedo) of the surface while absorbing heat, thus leading to warming. The overall impact depends on a variety of factors and the time frame considered. The 11-year sunspot cycles, on the other hand, are thought to have little impact on global temperatures (IPCC WGI 2007, pp. 476–479). Consequently, this leaves anthropogenic emissions of greenhouse gases as the IPCC’s main explanation for climate change.
Not surprisingly, the Medieval Warm Period (MWP), which is dated from about 900 to 1300 AD, created a problem for this view of past climate, as evidenced by the climategate emails. The MWP makes it difficult to accept the view that fossil fuel consumption (CO2), large-scale cattle rearing (methane), tropical deforestation (CO2) and other activities are responsible for climate change. The MWP stands in contrast to the notion that such human activities will cause temperatures to rise to levels never seen before. Clearly, the MWP was not the result of anthropogenic emissions of CO2 and other gases. It was a natural event, but one that is not explained in the IPCC account. If temperatures during the MWP were as high as or higher than those experienced thus far, there has to be something other than an anthropogenic forcing that accounted for this warm period.
The MWP also creates a dilemma for climate modelers. If the MWP was real, it would then be incumbent upon climate modelers to duplicate the Medieval Warming to demonstrate the veracity of their models. After all, information on atmospheric CO2 and other greenhouse gases, and aerosols and particulates, is available from such things as lake bed sediments and ice cores. Thus, climate modelers cannot claim that their models are to be trusted simply because they are based on scientific relationships (mathematical equations) when such models cannot reconstruct an event such as the MWP. If, on the other hand, the MWP was the result of extra-terrestrial forces (sunspots, cosmic rays, earth orbit, tilt of the earth, etc.) that cannot be taken into account by climate models, there is no reason why these forces cannot also explain current climate events (as discussed in Chaps. 4 and 5). The same is true if modelers find there is some non-extraterrestrial explanation previously not taken into account.
There is simply too much evidence for the Medieval Warm Period to ignore. It comes from historical writings – the Viking colonization of Greenland, grape growing in England, crop production at high elevations, and so on (e.g., see Diamond 2005; Fagan 2008; Ladurie 1971; Lomborg 2007; Plimer 2009, pp. 31–99). Yet, climate scientists and climate modelers have deflected criticism by arguing that the MWP was not a period of global warming, but, rather, a period of heterogeneous warming with some regions experiencing a burst of warming at the same time that others experienced a cool period.11 Backcasts of temperatures from climate models appear to confirm this position as various climate models’ simulated temperatures for the past millennium do not indicate extended periods where temperatures were ‘out of equilibrium,’ but, rather, confirm the notion of long-term equilibrium with average temperatures fluctuating slightly about the shaft of a ‘hockey stick’ (discussed below).12
The Little Ice Age also poses a problem for climate scientists because it provides a possible explanation for the warming observed in the instrumental record – any upturn in global temperatures would be expected once this period of ‘natural’ cooling came to an end. Indeed, the rise in temperatures seen in the instrumental record is not at all unexpected given that the instrumental record begins about the same time that the LIA ended. Some climate scientists argue that the LIA was confined only to Northern Europe,13 while the IPCC downplays the LIA, arguing that temperatures during this period were at most 0.1–0.3 °C cooler than normal (IPCC WGI 2007, p. 108). Contrary evidence to these views of the LIA is provided below, particularly in Fig. 3.6c.
In his exhaustive study of historical climate that includes records not employed by climate scientists, Ladurie (1971) was certainly convinced that the LIA was a global phenomenon that affected areas beyond Europe.14 In his study on the Little Ice Age, Brian Fagan (2000) relied on much anthropological evidence to indicate that the LIA impacted North America and places as far away as New Zealand.15 Khim et al. (2002) find evidence for both the MWP and LIA in sediment cores from the northern end of the Antarctic Peninsula. Plimer (2009, p. 74) notes that the “cold climate and glacier expansion in the Little Ice Age are documented from all continents and on major islands from New Zealand in the Southern Pacific Ocean to Svalbard in the Arctic Sea.” Likewise, in their historical temperature reconstructions for regions in China, Ge et al. (2010) find that recent warming has likely been exceeded in the past 1,000 or more years, the rate of recent warming was not unusual, and the observed warming of the twentieth century comes after an exceptionally cold period in the 1800s. This is confirmed by Ran et al. (2011), for example, who indicate that temperatures in the MWP exceeded those of the twentieth century and the first decade of the twenty-first century by at least 0.5 °C. These authors also conclude that solar radiation may have been an important forcing mechanism explaining past ocean temperatures.
Still, the MWP remained a thorn in the side of the IPCC for the reason mentioned above – climate science could not explain the warming period using the climate models upon which predictions of catastrophic warming are based. Therefore, climate researchers looked for a way to eliminate the Medieval Warm Period, and with it the Little Ice Age. In testimony before the U.S. Senate Committee on Environment & Public Works’ Hearing on Climate Change and the Media, on Wednesday, December 6, 2006, David Deming of the University of Oklahoma stated: “In 1995, I published a short paper in the academic journal Science. … The week the article appeared, I was contacted by a reporter for National Public Radio. He offered to interview me, but only if I would state that the warming was due to human activity. When I refused to do so, he hung up on me. I had another interesting experience around [this] time… I received an astonishing email from a major researcher in the area of climate change. He said, ‘We have to get rid of the Medieval Warm Period’.”17 Climate scientists found the evidence to eliminate the MWP and LIA in a Yale University PhD dissertation by Michael Mann (1998) and two follow-up papers by Mann et al. (1998, 1999), which are often referred to as MBH98 and MBH99.
Despite the work of Mann and his colleagues, the controversy about the MWP and LIA has not died down. As discussed below, McIntyre and McKitrick (2003, 2005a, b) and others found errors in MBH98 and MBH99, while climate scientists themselves were uneasy about the MBH conclusions. The following examples are documented in the climategate emails.18 Michael Mann confirmed the truth of David Deming’s Congressional testimony in a climategate email of June 4, 2003. In reference to an earlier statement or email by Jonathan Overpeck of the University of Arizona, who along with the CRU’s Keith Briffa was a lead coordinating author of the paleo-climate section of the Fourth Assessment Report (IPCC WGI 2007), Mann wrote: “… a good earlier point that peck [Overpeck] made w/ regard to the memo, that it would be nice to try to ‘contain’ the putative ‘MWP’.” Clearly, the ‘putative’ MWP and also the LIA remained a major problem for paleoclimate scientists even after MBH98 and MBH99, and some felt uneasy about attempts to suppress these periods in the historical record.
Phil Jones of the CRU subsequently argued in 2004 that neither the MWP nor the LIA could be denied, but that the MWP was “no way as warm” as the last two decades of the twentieth century and that no decade of the LIA averaged more than 1 °C below the 1961–1990 average global temperature, although this was based on “gut feeling, no science.” Further unease was expressed in an email written February 16, 2006 by Briffa: “Let us not try to over-egg the pudding … [as] there have been many different techniques used to aggregate and scale the data – but the efficacy of these [techniques] has not yet been established.” Yet, no objections or qualifiers were included in the Fourth Assessment Report’s conclusion that the most recent years were the warmest of the past 1,300.
Despite the statements in the previous paragraph, Jones later concluded on the basis of data from Greenland that the MWP was as warm as or warmer than anything seen recently. Yet, he and his coauthors concluded that current warming trends in Greenland “will result in temperature conditions that are warmer than anything seen in the past 1,400 years” (Vinther et al. 2010). This conclusion is unwarranted by the evidence and rooted solidly in the belief that temperature increases of the period from 1975 to 1998 (see Chap. 2) will continue indefinitely into the future. It also assumes that the temperature reconstructions of the past are accurate, something which we consider in the following discussions.
3.3.1 Analyzing Paleoclimatic Data
One way to compare current temperatures with past ones is through the use of proxy temperature data. Paleoclimatologists can infer past temperature records from ice cores, tree rings from long-lived trees, lake bed sediments, stalagmites in caves, and coral reefs. Tree rings have a 1–3 year temporal resolution, speleothems (stalagmites, stalactites and similar rock formations) may also be resolved annually, ice cores resolve information on a decadal scale, historical documents (and anthropological evidence) have temporal resolutions of 10–30 years, and lake sediments have resolutions at the decadal to century time scales.
Constructing a Temperature Proxy
A temperature proxy refers to a measure, such as tree-ring width, that is sufficiently correlated with temperature to enable the reconstruction of temperature records where instrumental data are not available. Consider tree-ring width as a temperature proxy: foresters have found that tree rings are wider when temperatures are warmer, provided precipitation is sufficient so that it does not unduly constrain growth. How do we obtain a temperature proxy based on a particular historical record of tree-ring widths? Two approaches are possible: estimate the response function (ring width as a function of temperature) and invert it, or invert the relationship at the outset and directly estimate the transfer function (temperature as a function of ring width).
Both approaches have their drawbacks. The first approach (estimate the response function and then invert it) leads to upward bias in the standard errors of the reconstruction – that is, the variances of the temperature reconstructions are larger than they should be. The second approach (invert the response function and then estimate the transfer function directly) leads to a downward bias in both the reconstructed temperatures (they are lower than they should be) and variances (which are too small relative to the actual variance in temperatures).
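As a rough illustration of the second (direct) approach, the following sketch calibrates a transfer function on synthetic data. Everything here is hypothetical – the response slope, noise levels, and the 100/50-year calibration/verification split are invented for illustration, not taken from any actual proxy study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 'instrumental period': 150 years of temperature anomalies (deg C)
temp = rng.normal(0.0, 0.5, 150)
# Hypothetical linear response: wider rings in warmer years, plus noise
ring_width = 1.0 + 0.8 * temp + rng.normal(0.0, 0.1, 150)

# Split the instrumental period into calibration and verification parts
calib, verif = slice(0, 100), slice(100, 150)

# Direct approach: regress temperature on ring width (the transfer function)
slope, intercept = np.polyfit(ring_width[calib], temp[calib], 1)

# Verification: compare predicted with observed temperatures on held-out years
pred = intercept + slope * ring_width[verif]
r = np.corrcoef(pred, temp[verif])[0, 1]
print(round(r, 2))
```

If the verification correlation is acceptable, the same fitted transfer function would then be applied to ring widths from the pre-instrumental period to produce the reconstruction.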
The response and the transfer functions are assumed to be linear. If they are nonlinear, problems arise. For example, suppose the functions are quadratic. Then, as temperatures rise, tree-ring width will first increase, but, as they rise further, tree-ring width will fall. In that case, one would observe the same tree-ring width at two different temperatures, one lower than the other. Likewise, one would predict two temperatures for each tree-ring width.
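The non-uniqueness under a quadratic response can be shown in a few lines. The coefficients below are purely illustrative (no real species responds with exactly these values); the point is only that inverting the response returns two candidate temperatures:

```python
import numpy as np

# Hypothetical quadratic response: ring width peaks at an optimal temperature.
# Coefficients chosen purely for illustration: w = a + b*T - c*T**2
a, b, c = 1.0, 0.6, 0.1   # width peaks at T = b / (2c) = 3

def width(temp):
    return a + b * temp - c * temp**2

w_obs = width(2.0)                 # the width actually produced at T = 2
# Inverting the response yields TWO candidate temperatures for this width
roots = np.roots([-c, b, a - w_obs])
print(sorted(roots))               # T = 2 and its mirror image about the peak
```

Without outside information there is no way to tell, from the ring width alone, which of the two roots is the temperature that actually occurred.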
There has also been some discussion in the literature as to which temperatures to employ – the local temperature or a global one. By relying on global temperatures, one is essentially assuming that trees “not responding to their own local temperature can nevertheless detect a signal in a wider temperature index” (Montford 2010, p. 47). This is difficult to accept, but has been assumed by some paleoclimatologists.
Finally, if the regression model (3.4) is able to predict temperatures from tree-ring widths for the verification period with reasonable (i.e., statistical) accuracy, then the tree ring data for the historical period for which we have no observed temperatures can be used to construct a ‘proxy’ temperature record – the reconstruction period. While we have discussed how this is done with tree rings, it can also be done using information from stalagmites, lake sediment boreholes, ice cores, et cetera. It is only necessary to find some measure that is a good proxy for temperature – that is, one strongly correlated with temperature. For example, the depth of each organic layer in a sediment might be indicative of higher growth during warm periods and less during cold ones. Likewise, the composition of dead organisms in a lake bed sediment (as opposed to the depth of an organic layer), or isotopes of various gases (or their ratios) in ice-core samples, might be highly correlated with temperatures, and thus can serve as temperature proxies. However, the resolution in these cases will not be annual.
Aggregating Temperature Proxies
Suppose one has 50 or more temperature series developed from various proxy data, such as tree ring widths, ice cores, et cetera. In addition, a researcher might include temperatures from the central England data series (Fig. 2.3), or the Swedish temperature data construction by Leijonhufvud et al. (2010). In essence, one might have numerous different series that represent various temperature reconstructions. The reconstructions are from different geographical locations and are likely of different length and resolution. If one were to plot the many reconstructed temperatures over time – or rather the deviations of temperature from some average temperature (either the series average or an exogenously chosen base; it does not matter which), known as temperature anomalies – one would get what has been referred to as ‘spaghetti graphs’: wiggly lines, some of which may take a definitive upward trend in the twentieth century and others not.
To make sense of all the spaghetti lines, and get something useful that might be an indicator of a global trend in temperatures, it is necessary to somehow combine the information from all of the different series. Of course, as we saw in Sect. 2.1 of the previous chapter, the easiest way to summarize the data is simply to average it (and we illustrate that below). However, averaging might obscure interesting and important things. For example, a large portion of the data series might indicate a sharp upward trend in temperatures in the twentieth century, while the remaining series indicate a gentle decline in temperature. If you look only at the average, the sharp uptick might be obscured or missed altogether. Principal component analysis is a well-known, long-standing statistical technique that teases out the most important trends by looking at patterns in all the data from the various temperature series.
Principal component analysis combines data series so that there are just as many principal components (linear combinations of the data series) as there are original data sets. However, the principal components (PCs) are ordered so that each successive component explains a decreasing share of the variation in the overall data. Each principal component is a linear combination of the spaghetti data sets, with the weights assigned to some of the data sets in a PC much greater than those of others; some data sets are assigned such a low weight that they are essentially ignored in that particular PC. Thus, a PC constitutes a weighted average of the various data series, with the first PC (PC1) constructed so as to explain the greatest underlying variability in the data. PC2 accounts for the greatest amount of the remaining underlying variability, and so on. The first PC might account for 80% or more of the variation between the various reconstructed temperatures, and the first three to five PCs might account for 95% or more of the observed differences between temperatures. In this fashion, 100 or more data series might be reduced to only a few.
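The mechanics can be sketched on synthetic data. Here twenty invented ‘proxy’ series share one underlying trend plus independent noise; the first principal component then absorbs most of the variation, with later components trailing off. The series, trend, and noise levels are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_series = 500, 20

# Twenty synthetic proxy series sharing one underlying 'climate' signal
# (here a simple warming-style trend) plus independent local noise.
signal = np.linspace(-1.0, 1.0, n_years)
X = signal[:, None] + rng.normal(0.0, 0.4, (n_years, n_series))

# PCA via the singular value decomposition of the column-centered matrix
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # share of total variance per component

print(np.round(explained[:3], 2))  # PC1 carries most of the variation
```

Each row of the right-singular-vector matrix gives the weights (loadings) that define one PC, which is what makes a PC a weighted average of the original series.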
With principal component analysis, it is important that the time step is the same for each of the spaghetti data series and that they each have the same number of observations. This is unlike the situation encountered in Sect. 2.1, where some observations were missing; in that case, it was possible to construct raw averages by simply dividing the sum of observations for a period by the number of data points for that period. For paleoclimatic spaghetti data series, the time step is generally annual and the period to be covered varies from several hundred to perhaps 2,000 years. Missing observations are typical, especially at the beginning and end of a data series because the lengths of the spaghetti data series vary significantly.
In the paleoclimatology literature, there are two sources of controversy that have not been adequately addressed because each involves value judgments. First, filling in gaps where information is missing can be done by linear interpolation, regression analysis using information from the other temperature reconstructions, or some combination of these approaches. This poses several problems: How can linear interpolation adequately address large gaps given that climate is inherently variable and nonlinear over short and long periods of time? If interpolation is not the sole method employed, does one use all the available temperature series or a subset that is based on proxy datasets geographically close to the one with observational gaps? How is the choice made?
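Linear interpolation of interior gaps is mechanically trivial, which is part of the worry: it silently imposes smooth, straight-line climate between observations. A minimal sketch, using invented values for a hypothetical eleven-year fragment:

```python
import numpy as np

years = np.arange(1000, 1011)
# A proxy temperature-anomaly series with interior gaps (np.nan = missing)
temp = np.array([0.2, np.nan, np.nan, 0.5, 0.4, np.nan, 0.1,
                 0.0, np.nan, np.nan, 0.3])

have = ~np.isnan(temp)
# Straight-line interpolation between the nearest observed neighbours
filled = np.interp(years, years[have], temp[have])
print(np.round(filled, 2))
```

Whatever actually happened in the missing years – a sharp spike, say – is replaced by a straight segment, so any short-lived variability inside a gap is lost by construction.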
Second, how does one fill in missing observations at the beginning or end of a temperature series? One method that has been employed, probably for convenience given the lack of scientific guidance on the issue, is to use the first temperature record to ‘in fill’ all of the missing years prior to that first observation, and to do the same at the end of the series using the final observation. Where this has been done, it has been a source of controversy, but no less so than some alternatives. Clearly, given that scientists believe temperatures to be increasing, the use of the average value to fill in missing temperatures at either end of the series will be avoided. However, replacing missing observations at the beginning of the series with the temperature of the first observation or temperatures from another series (or some combination of series) is fraught with the same objections as those raised concerning other method(s) used to fill gaps. Nonetheless, splicing temperatures from the instrumental record at the end of a proxy series, which has been done in some cases (see below), is considered to be invalid for obvious reasons.
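The endpoint choices described above are easy to state in code, which makes plain how arbitrary they are. The sketch below contrasts padding with the first/last observed value against filling with the series mean, on a hypothetical seven-year fragment (values invented for illustration):

```python
import numpy as np
import pandas as pd

# A short hypothetical anomaly series with missing years at both ends
s = pd.Series([np.nan, np.nan, 0.1, 0.3, 0.2, np.nan, np.nan],
              index=range(1900, 1907))

pad_ends = s.bfill().ffill()      # repeat first/last observed values outward
mean_fill = s.fillna(s.mean())    # fill with the series mean instead

print(list(pad_ends), list(mean_fill))
```

Neither choice is grounded in physics: padding assumes the climate was frozen at its first (or last) observed value, while mean-filling assumes the missing years were exactly average – and, as the text notes, each assumption pulls the reconstructed endpoints in a different direction.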
There is no simple way out of the dilemma. Yet, the choices that are made affect the conclusions one reaches about temperatures during the Medieval Warm Period and Little Ice Age relative to those in the late twentieth Century. This is reflected in the so-called ‘hockey stick’ controversy.
3.3.2 The Hockey Stick
Climate scientists have combined data from various proxies to derive a temperature graph that goes back more than 1,000 years. The graph shows temperatures to be flat for some 1,000 or more years, and then rising rapidly during the past century. The graph takes a hockey stick shape with the long flat part of the graph analogous to the stick’s shaft and the sharp uptick in the twentieth century analogous to the blade. The higher temperatures associated with the Medieval Warm Period and the lower temperatures of the Little Ice Age have essentially been eliminated by relegating them to regional phenomena. This has resulted in quite a bit of controversy, as discussed in this section and hinted at above. Although the controversy has been ably and helpfully discussed by Montford (2010), we provide an additional overview here.
MBH98 managed to reconstruct temperatures for the period 1400 to the present, while MBH99 were able to extend the reconstruction back to 1000 AD. Other paleoclimatic reconstructions of temperature go back two millennia, while, in some cases, both the CO2 content of the atmosphere and temperature reconstructions based on sediment and ice-core data go back several hundred thousand years. It should already be evident that attempts to reconstruct temperatures going back that far are fraught with uncertainty, and even CO2 measurements from ice cores are beset by problems related to, among other things, the age of the samples. Temperature reconstructions that go back 1,500–2,000 years are needed, however, if we wish clearly to identify the Medieval Warm Period from something other than historical writings.
Data from 71 series that go back for two millennia have been ‘collated’ by Fredrik Charpentier Ljungqvist (2009), a history student at Stockholm University in Sweden. He collected all of the paleoclimatic data series he could find in the literature that provided temperature information over the past two millennia – a total of 71 separate records. Ljungqvist could not obtain seven records that appear in the literature, discovered that one was an index as opposed to a proper temperature record (Record 19 from Galicia in Spain), and found 12 records for the Northern Hemisphere that were not tested to determine if there was a statistically significant temperature signal – rather, statistical significance was assumed because these series were considered to come from a trusted source. To be included, a series had to provide data points no more than a century apart. “All records with a sample resolution less than annual [had] been linearly interpolated to annual resolution” (Ljungqvist 2009). Subsequently, Ljungqvist’s data records were posted with the World Data Center for Paleoclimatology in Boulder, Colorado and the U.S. National Oceanic and Atmospheric Administration’s Paleoclimatology Program.20
It is difficult to determine how paleoclimatic data should be analyzed. As noted above, principal component analysis can be used to find combinations of various data series that account for the majority of the variability in the underlying data. Also as discussed above, this requires that one has observations for each year in each data series. Ljungqvist used a straight-line extrapolation between the years for which observations are available to fill in missing data, but he did not attempt to fill in missing observations at the beginning or end of a reconstructed temperature series. We use Ljungqvist’s data to discuss some of the problems associated with temperature proxies. Indeed, one must take care in working with paleoclimatic data and recognize that it comes with many qualifications. It is no wonder that some climate scientists, such as Keith Briffa (quoted above), are concerned about the types of conclusions one can draw.
To shed some light on the hockey stick controversy, we begin by examining the 71 data series in more detail.21 Three series are not public, 18 provide only the historical variation in temperatures assuming a standard normal distribution (i.e., z-scores are provided), and 50 series provide actual temperature data. Suppose we begin with the temperature data and take the average of the available observations for each year. The average temperature in any given year depends on the number of observations available for that year, and is influenced by the location – whether a data series comes from Antarctica, a temperate region or a tropical one. Thus, if an observation from Antarctica drops out in any given year while one from a tropical region remains or enters for that year, the average will be higher than warranted, and contrariwise should a tropical data point be missing while one from the Antarctic remains or enters.
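The distortion from unbalanced coverage is easy to demonstrate with a toy example. The three series and their values below are invented solely to show the arithmetic – one cold ‘polar’ series drops out in the final year and the raw average jumps even though no location warmed:

```python
import numpy as np

# Three hypothetical proxy/station series: one polar (cold), two tropical (warm)
#            yr1    yr2    yr3
polar = [-10.0, -10.0, np.nan]   # polar series drops out in year 3
trop1 = [ 25.0,  25.0,  25.0]
trop2 = [ 27.0,  27.0,  27.0]
X = np.array([polar, trop1, trop2], dtype=float)

# Average over whatever series happen to be available each year
yearly_mean = np.nanmean(X, axis=0)
print(yearly_mean)   # the year-3 'global' average jumps, yet nothing warmed
```

This is one reason anomalies (deviations from each series’ own mean) rather than raw temperatures are usually averaged: an anomaly series dropping out shifts the average far less than a raw series whose level differs from the others by tens of degrees.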
Given this proviso, a plot of average temperatures for the past 2,000 years was constructed (Fig. 3.5a). In this plot, temperatures are low in the early years, but they are also unusually low and volatile at the end of the period. The reason for this relates to missing data – a situation that is most acute at either end of the series. Indeed, there are between 41 and 45 observations for the first several decades in the series, the maximum number of 50 observations occurs for the period 133 AD to 1811, and then the available observations begin to taper off. By about 1980 there are fewer than 20 series for which there are data, falling to only 5 by 2000! The average temperature anomaly fluctuates so much after about 1950 that no definitive trend is observable. The paleoclimatic reconstructions using these series indicate that average global temperatures after the LIA are still well below those experienced during the Medieval Warm Period, and that recent temperatures may simply have been rising as the earth came out of the Little Ice Age.22
To get a better feel for what is happening, we re-specify the period on the horizontal axis to exclude the two extreme ends of the data series, providing the plot in Fig. 3.5c. That is, we only plot global average temperatures from paleoclimatic data for the period 550–1950, after which the number of available data series for constructing temperature averages drops off rapidly. (The period after 1950 is revisited below.) Note that the Medieval Warm Period and the Little Ice Age can now be readily identified in the figure. It is also clear from both Figs. 3.5a, c that, after the LIA, there was a rapid increase in average global temperatures determined from the paleoclimatic proxies.
The 71 data series considered above are clearly not the only ones available, even in the public domain. There are literally hundreds of different paleoclimatic data series on tree rings, ice cores, caves and so on, and these are available from the World Data Center for Paleoclimatology in Boulder, Colorado.24 Two examples are provided in Fig. 3.6. In Fig. 3.6a, we consider 17 proxy series from Jones et al. (1998).25 Again we constructed z-scores and averaged these for each year, but we had to use a 30-year moving average trend line to make sense of the data. Only upon doing so could we identify the Little Ice Age, while the Medieval Warm Period is more difficult to discern. Note that the number of series (observations) rises at the start of the LIA and then falls precipitously beginning around 1975, although 17 series is small to begin with.
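The smoothing step used for Fig. 3.6a can be sketched as follows. The series below is synthetic – a standardized (z-score) proxy with an imposed LIA-style depression of assumed size – and simply shows why a 30-year moving average is needed before features like the LIA become visible in noisy annual data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
years = range(1400, 2000)

# A noisy standardized proxy series (z-scores), purely synthetic
z = pd.Series(rng.normal(0.0, 1.0, len(years)), index=years)
z.loc[1550:1850] -= 0.5          # impose a hypothetical LIA-style cool spell

# A 30-year centered moving average smooths out year-to-year noise
smooth = z.rolling(30, center=True).mean()

# The depression is hard to see in the raw z-scores but clear after smoothing
print(round(smooth.loc[1560:1840].mean() - smooth.loc[1410:1540].mean(), 2))
```

The trade-off is that smoothing also suppresses genuine short-lived excursions, so the choice of window length is itself a judgment call.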
In Fig. 3.6b, we employ only 14 temperature-related proxy series due to Tom Osborn and Keith Briffa (2006). Given that these were already in a standardized form, we simply averaged them and did not need to employ a moving average trend to get a better sense of the data. The number of available series or observations declines beginning around 1960, which again corresponds with an increase in average global temperatures derived from proxies. Further, the total number of observations is small.
The reduction in the number of available proxy records after about 1960 (as indicated in Figs. 3.5 and 3.6) is itself an enigma. Much of the data is based on tree rings and other proxies for which information should be more readily available in recent years than in earlier ones. Why have these records not been updated? Failure to do so has been interpreted by some to constitute an effort to hide information, namely, that the proxy data show temperatures to be falling in recent years, contrary to the instrumental record. This is discussed further by Montford (2010).
Sorting out what paleoclimatic proxy data tell us about historical temperatures is as much an art as it is science, which, unfortunately, leaves room for researcher bias regarding how information is finally reported. For example, NOAA scientists provide a graph based on 837 individual borehole records. This graph indicates temperatures rising at an increasing rate from 1500 to 2000. There is no evidence of a LIA in this reconstruction as temperatures have marched steadily upwards. Further, the borehole temperature data track very closely the instrumental record of Jones et al. (2010) that begins in 1850.26 Yet, by eliminating the Little Ice Age, NOAA scientists find that average global temperatures have increased by only 1 °C since 1500. The graph based on NOAA borehole records strongly suggests that temperatures will continue to rise in the future, which is the point that NOAA wishes to make. Unfortunately, it also suggests bias on the part of the graph builders.
Climate scientists tend not to present data in the simplistic form indicated above. One reason, of course, is that it is difficult to reconcile data collected from lake-bed sediments in the tropics, for example, with ice-core samples from Antarctica; as explained above, dropping or adding an observation from one of these data series has a big upward or downward impact on the average. Thus, climate scientists will be selective in their use of data series, employ a variety of moving averages, rely on principal component analysis, and employ a variety of other statistical methods when summarizing and presenting proxy temperature information.
Despite the evidence in Figs. 3.5 and 3.6, and the inherent problems associated with temperature reconstructions from proxy information, some climate scientists persist in arguing that the record indicates that the Current Warm Period is somehow unusual from a historical perspective. Indeed, climate scientists persistently hold to the notion that the Earth’s climate had previously been in a stable equilibrium, but that human emissions of greenhouse gases subsequently disturbed this equilibrium. This view requires that temperatures remain flat for a millennium or more before rising rapidly over the past century. As noted earlier, this depiction of events is referred to as the hockey stick. Given the extent of the so-called hockey stick wars, let us consider the issue in somewhat greater detail as it involves both the paleoclimatic record and the instrumental record.
3.3.3 The Climate Hockey Wars
Compare Fig. 3.7 with Figs. 3.5a, c, and 3.6a. Clearly, the data available from the World Data Center for Paleoclimatology in Boulder, Colorado, do not lead to a graph anywhere close to the one in Fig. 3.7. Therefore, it is not surprising that the hockey stick view of the world has been controversial and proven wrong. And, as a result of pressure by bloggers, the figure in the UN report (Fig. 3.7 here) was quietly replaced by another figure (McMullen and Jabbour 2009, p. 5).28
MM discovered two problems with the MBH reconstructions. First, to make series compatible, principal component analysis requires that the data in each temperature series be standardized by constructing z-scores (subtracting from each data point the mean of the series in which it is found and dividing by the standard deviation of the series). With reconstructed data, however, it was necessary only to subtract means, as the paleoclimatic temperature reconstructions had already been standardized (Montford 2010, pp. 194–195); this is referred to as ‘centering the data.’ Rather than center the data using the mean of each series, MBH used the means of the calibration period. This ‘short-centering’ procedure was unusual and tends to bias outcomes towards patterns found in the more recent calibration period.
Second, along with short centering, the algorithm used by MBH leads to a temperature reconstruction that gives precedence to any series that indicates a strong upward or downward trend during the calibration period (or twentieth century). That is, if there was one temperature series that gave a hockey stick shape, this one series would dominate as long as no other series had a strong twentieth century downward trend. McIntyre and McKitrick demonstrated this by combining a single ‘hockey-stick’ series with numerous randomly constructed data series that exhibited no trend (i.e., were white noise series), and, using MBH’s algorithm, obtained the hockey stick shape associated with the single ‘hockey-stick’ series. Regardless of anything else going on, as long as one series with a strong twentieth century uptick in temperatures was included, one obtained the hockey stick shape using short centering and the MBH algorithm. Even if a number of series included a MWP and LIA, these disappeared because of short centering.
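A minimal version of this experiment can be run in a few lines. This is a sketch in the spirit of the MM demonstration, not their actual code: the series counts, the size of the imposed uptick, and the 100-year ‘calibration period’ are all assumptions chosen for illustration. Fifty trendless white-noise series plus one hockey-stick series are combined; under short centering, the first principal component loads overwhelmingly on the single hockey-stick series:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series = 1000, 50

# 49 trendless white-noise 'proxies' plus one hockey-stick series
X = rng.standard_normal((n_years, n_series))
hockey = n_series - 1
X[-100:, hockey] += np.linspace(0.0, 4.0, 100)   # sharp late-period uptick

def pc1_loadings(data, center):
    """Loadings of the first principal component after removing `center`."""
    _, _, vt = np.linalg.svd(data - center, full_matrices=False)
    return vt[0]

v_full = pc1_loadings(X, X.mean(axis=0))           # conventional centering
v_short = pc1_loadings(X, X[-100:].mean(axis=0))   # 'short centering'

# Under short centering, PC1 is dominated by the one hockey-stick series
dominant = int(np.argmax(np.abs(v_short)))
print(dominant == hockey, round(abs(v_short[hockey]), 2))
```

Intuitively, short centering leaves any series with a calibration-period trend far from its centering mean over the rest of the record, inflating its apparent variance, so the first PC gravitates to it regardless of what the other series do.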
The one proxy series that displays the uptick associated with the hockey stick is a tree ring series from bristlecone pines in the western United States collected by Graybill and Idso (1993) for the purpose of demonstrating the fertilization effect of rising atmospheric CO2 during the 1900s on tree growth. The authors specifically stated that the twentieth century growth in these trees was not accounted for by local or regional temperatures and was hypothesized to be the result of CO2 fertilization. Thus, it was surprising that this series was used in temperature reconstructions. Not only that, but all reconstructions leading to a hockey stick result included the Graybill-Idso bristlecone pine series, although in some cases the bristlecone pine series was hidden.
It is important to note that a variety of instrumental temperature records, such as the Central England series, and proxies constructed from tree-rings, lake-bed sediments and so on can somehow be combined to derive a historical reconstruction of global (or regional) temperatures. For example, the 71 series from Ljungqvist were used to provide some indication of past global temperatures (see Fig. 3.5). Indeed, there are now hundreds of ‘spaghetti’ series, one for each temperature proxy. And each proxy is derived from one or a few tree ring series, lake bed sediments, ice core samples, and so on. The simplest way to combine series, say for a region or supra-region (even global level), is to average them, as we did for the Ljungqvist series. However, this is a statistically crude method and, as noted earlier, a preferred statistical method is principal component analysis. A principal component analysis of the Ljungqvist series, for example, finds that no more than 18% of the total variation in temperatures can be explained by a single PC – by the first PC.
What this means is that their hockey stick shape is a rather unimportant pattern in the database, as would be expected since bristlecones are a couple of closely related species from a small area of the western USA. However, because they correlate well to temperature in the twentieth century, they dominate the calibration results and hence the reconstruction too (p. 327).
Needless to say, the hockey stick did not disappear without a fight. As a result of the controversy, two independent review panels were struck – one by the National Academy of Sciences and the other at the request of Congress. Both supported the MM analysis. The congressional review, commissioned by the U.S. House of Representatives at the request of Representatives Joe Barton and Ed Whitfield, was an independent evaluation by Edward Wegman, Chair of the National Academy of Sciences’ Committee on Applied and Theoretical Statistics (Wegman et al. 2006) – known as the Wegman Report. The Wegman Report reviewed the data and the statistical methods used by Mann (1998), MBH98 and MBH99, and by McIntyre and McKitrick (2003, 2005a, b), concluding in favor of McIntyre and McKitrick. With regard to the statistical analysis, Wegman and his colleagues found a deficiency in the way proxy and instrumental temperature data were analyzed: “A serious effort to model even the present instrumented temperature record with sophisticated process models does not appear to have taken place” (Wegman et al. 2006, p. 15).
In addition to the statistical evidence, the Wegman Report employed network analysis to examine relations among researchers. Wegman found that there were too few independent researchers looking into the historical temperature record, so much so that objectivity in the review process could not be guaranteed. Of course, the number one critique leveled at the Wegman Report was that it was not peer-reviewed!29 Among the recommendations of the Wegman Report were the following:
“Especially when massive amounts of public monies and human lives are at stake, academic work should have a more intense level of scrutiny and review… [A]uthors of policy-related documents, like the IPCC report, … should not be the same people as those that constructed the academic papers.
… federally funded research agencies should develop a more comprehensive and concise policy on disclosure… Federally funded work including code should be made available to other researchers.
With clinical trials for drugs and devices to be approved for human use by the FDA, review and consultation with statisticians is expected… evaluation by statisticians should be standard practice … [and] mandatory.
Emphasis should be placed on the Federal funding of research related to fundamental understanding of the mechanisms of climate change”.
These recommendations anticipated the climategate revelations by some 3 years, particularly as these relate to freedom of information requests.
3.3.4 The Hockey Stick Strikes Back
A new hockey stick result was published in the IPCC’s Fourth Assessment Report (IPCC WGI 2007) based on tree ring data from the Yamal Peninsula in Siberia by Keith Briffa and colleagues (Briffa et al. 2008; also Briffa et al. 1996; Briffa et al. 2001; Schweingruber and Briffa 1996). This result finds that the coldest year in the previous 1,200 years occurred during the MWP, but the chronology they use inexplicably adds core counts from the Polar Urals (1995) in the absence of Yamal core counts. The data do not indicate an increase in temperatures for the twentieth century until after 1990, when the available tree ring data collapse from samples with 30+ trees to 10 trees (1990) and then 5 trees (1995).
Finally, Mann et al. (2008) made an attempt to reconstruct the hockey stick without tree ring data. Earlier, Loehle (2007) had demonstrated that there was no ‘hockey stick’ in temperature reconstructions that excluded tree rings. Mann and his colleagues relied on data from four lake bed sediments in Finland collected by Mia Tiljander as part of her PhD dissertation research. The Tiljander proxies indicated an uptick in twentieth century temperatures, but this was attributed to a disturbance caused by ditch digging. Nonetheless, Mann et al. (2008) argued that they had demonstrated that this did not matter. They did this by first showing that they could get a hockey stick result without employing the Tiljander data. However, this version of the hockey stick included the bristlecone pine data. They then demonstrated that the hockey stick result was not due to bristlecone pines by removing them from the reconstruction; however, when they did this, they again put in the Tiljander proxies (Montford 2010, pp. 362–373)!
What is most disturbing about the hockey stick debate is the difficulty that independent researchers, such as MM, have had accessing data that are the basis of results published in peer-reviewed journals.30 After all, verification is a key element of any empirical research and journals generally have policies regarding data and the ability of others to verify results. If anything, on the face of it, the hockey stick debate might be considered a black eye for science. However, difficulty obtaining data that should be made available to other researchers has taken a back seat to the more pejorative ‘climategate’ revelations.
3.3.5 Hiding the Evidence?
One of the problems that climate scientists working with paleoclimatic data encountered was that their proxy records did not always coincide with the instrumental records. Some proxy records indicate that temperatures should have been declining rather than rising during recent decades, as indicated by instrumental observations. Since instrumental data are available for some 130 years, and since instrumental data are used to calibrate the temperature–proxy relationship, it is surprising to find that temperatures based on proxy data and the instrumental temperature data diverge for upwards of 40 years. Clearly, it is necessary to investigate this divergence further using the best available statistical methods, which might lead to a re-evaluation of the paleoclimatic record, as the proxy response function (3.1) and the temperature transfer function (3.3) would need to be re-estimated. However, climate scientists dismissed the divergence by attributing it to higher environmental pollution after 1960 (although evidence of this is lacking) and other factors, and chose to ‘hide the decline,’ as one climategate email put it.31
As an academic matter, scientists combine different types of data all the time for the purpose of extracting information and constructing statistical models. As long as the methods are clearly explained and the reader is given the information necessary to evaluate the quality of the calibration/fitting process, there is nothing wrong with this, and indeed it is often the path to important discoveries and progress. But in the case of the preparation of the WMO and IPCC diagrams, the problem is that readers were not told about the way different data sets were being trimmed and/or combined, hence materially adverse information was withheld from readers, thus exaggerating the quality of the statistical model (para 46, pp. 25–26).
What is disturbing is the way in which the authors of the IPCC Working Group I report (IPCC WGI 2007) deal with the criticisms leveled at the various reconstructions of historical temperatures using paleoclimatic proxy data. The IPCC continues to adhere to the ‘hockey stick’ story, although it features much less prominently and more subtly than in the previous (IPCC WGI 2001) report.34 What has been most frustrating in all of this is the reluctance of the climate scientists involved to make their data and methods available to other researchers, even if these researchers might have had a different view on the causes of global warming, and their failure to collaborate with researchers who have the necessary statistical expertise.35 In the meantime, the hockey stick story continues to dominate the pages of public documents.
Nonetheless, change is coming. In a recent paper, econometricians McShane and Wyner (2011) use time series analysis and the proxy data to predict temperatures found in the instrumental record. They find that the proxies climate scientists use are no better at predicting future temperatures than “random series generated independently of temperature.” Indeed, statistical models based on proxy data are unable to forecast or backcast high temperatures or rapid increases in temperature, even for in-sample predictions (i.e., where the prediction and estimation periods coincide). Upon reconstructing Northern Hemisphere land temperatures using the climate scientists’ proxies (i.e., the contentious data series), McShane and Wyner (2011) find a reconstruction similar to that of the climate scientists, but with a higher variance. They conclude that the recent high temperatures are not statistically different from those of earlier years.
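The style of validation McShane and Wyner employ can be sketched as follows. This is a simplified, synthetic stand-in (not their actual data, model, or code): regress a temperature series on a block of predictors, hold out the most recent years, and compare the holdout error obtained from weak ‘proxies’ with that from random series generated independently of temperature.

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 120, 10  # invented: 120 'years', 10 predictor series

# Synthetic 'instrumental' temperatures: trend plus noise
temp = np.linspace(-0.3, 0.5, n) + rng.normal(0.0, 0.15, n)

def holdout_rmse(X, y, n_test=30):
    """OLS fit on the early block; RMSE on the held-out recent block."""
    Xtr, Xte = X[:-n_test], X[-n_test:]
    ytr, yte = y[:-n_test], y[-n_test:]
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    beta, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    pred = np.column_stack([np.ones(len(Xte)), Xte]) @ beta
    return float(np.sqrt(np.mean((pred - yte) ** 2)))

# (a) weak 'proxies': mostly noise with a faint temperature signal
proxies = 0.3 * temp[:, None] + rng.normal(0.0, 1.0, (n, k))
# (b) random series generated independently of temperature
random_series = rng.normal(0.0, 1.0, (n, k))

print(holdout_rmse(proxies, temp), holdout_rmse(random_series, temp))
```

When the proxy signal is weak, the two holdout errors are of comparable size, which is the sense in which McShane and Wyner report that the proxies perform no better than independently generated random series.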
Other economists are finding similar problems with reconstructions of historical temperatures using proxy data (e.g., Auffhammer et al. 2010). Their work confirms the criticisms leveled at the hockey stick by McIntyre and McKitrick, Pielke et al. (2007), the Wegman Report, and others.
3.3.6 Temperatures and CO2 Before Time
The conclusion of the latest IPCC report is simple: “It is very unlikely that the twentieth-century warming can be explained by natural causes. The late twentieth century has been unusually warm. Palaeoclimatic reconstructions show that the second half of the twentieth century was likely the warmest 50-year period in the Northern Hemisphere in the last 1,300 years” (IPCC WGI 2007, p. 702). There are two problems. First, as pointed out in this chapter, there is no statistical evidence that the last 50 years of the twentieth century were the warmest of the past 1,300. Many peer-reviewed publications and commissioned reports call into question the temperature reconstructions that form the basis for the IPCC’s conclusion, arguing that the IPCC ignores the well-documented Medieval Warm Period, which may well have seen higher average global temperatures than those of the Current Warm Period. Indeed, a recent study by Blakeley McShane and Abraham Wyner (2011) found that recent temperatures were not statistically different from past temperatures, even when the IPCC’s paleoclimatic reconstructions were used as the basis for comparison. Second, if anthropogenic emissions of greenhouse gases resulted in the warm years of the latter part of the twentieth century, what anthropogenic sources resulted in the ‘cooling’ since 1998 whilst atmospheric concentrations of CO2 have continued to rise?
Paleoclimatic reconstructions of past climate are not necessary to make a scientific case for global warming. Rather, reconstructions such as the hockey stick are important only from a political standpoint, because, if it is possible to demonstrate that current temperatures are higher than those experienced in the past, it will be easier to convince politicians to fund research and implement policies to address climate change. However, the opposite may now have occurred. By hitching its wagon to the hockey stick, the IPCC may have harmed its credibility. As Montford (2010) points out: “What the Hockey Stick affair suggests is that the case for global warming, far from being settled, is actually weak and unconvincing” (p. 390). The implication for policymakers is that fears of global warming are likely overblown. This chapter and the previous one have shown how difficult it is to aggregate temperature data from various sources, determine an average global temperature, and construct historical global temperature averages from proxy data. Although instrumental and paleoclimatic temperature evidence are an important component in helping us understand climate change, they are only one part of the science.
On October 14, 1997, there was an El Niño Community Preparedness Summit in Santa Monica, California, to discuss the super El Niño event of that year. This El Niño led to the highest temperature on record in 1998 and relatively high but falling temperatures for several years thereafter (see Figs. 2.6 and 2.7 in Chap. 2). The invited speaker, Al Gore, predicted that, because of human emissions of CO2, there would no longer be La Niña events and that, according to his fellow scientists, El Niño events “would become permanent.”37
This incident illustrates what is truly sad about the state of climate science, namely, that truth has been sacrificed for political expediency. No climate scientist has denounced Al Gore and his fictional depiction of the future; none has questioned his membership among the scientific elite, although they are quick to question the credibility of scientists working outside the climate community (e.g., see Anderegg et al. 2010). None has cried foul when reports supporting peer-reviewed literature finding the hockey stick to be incorrect were castigated by environmentalists, and none has denounced the UNEP’s climate science report (McMullen and Jabbour 2009) for attributing every weather event (cyclones, drought, torrential rains, etc.) to anthropogenic global warming (a topic discussed further in Chap. 7). A major reason why many scientists have not spoken out is the trust placed in the analysis of paleoclimatic data.
There are many variants of the historical temperature graphs that can be built. Because of the ad hoc way in which data series are combined and graphs subsequently constructed (e.g., normality assumptions), none is truly representative of what the global climate was really like over the past several millennia. Is it even appropriate to use averages of various series, or should one let each series speak for itself? Should one employ principal component analysis? Are principal components useful if any one accounts for no more than 18–20% of the total variation in the data? That is, does it make sense to replace 50 data sets, for example, with 20 principal components? Does this shed further light on past climate?
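The principal-components question raised above can be made concrete with a minimal synthetic sketch (50 invented, standardized proxy series carrying a deliberately weak common signal; the numbers are illustrative, not real data):

```python
import numpy as np

rng = np.random.default_rng(2)

# 50 invented, standardized 'proxy' series over 500 'years'
n_years, n_series = 500, 50
X = rng.normal(0.0, 1.0, (n_years, n_series))
X += 0.4 * rng.normal(0.0, 1.0, (n_years, 1))  # weak common signal
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components via SVD of the standardized data matrix;
# squared singular values give each component's share of variance
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / (s**2).sum()

print(f"PC1 explains {explained[0]:.1%} of total variance")
print(f"first 20 PCs together explain {explained[:20].sum():.1%}")
```

Even with a genuine common signal present, the leading component here accounts for only a modest share of the total variance. A low explained-variance figure by itself therefore neither validates nor invalidates a reconstruction; it mainly signals that most of the variation in the proxies is something other than a common temperature signal, which is precisely why replacing 50 series with 20 components sheds limited further light on past climate.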
As pointed out by Marc Sheppard,38 and evident from Fig. 3.5c, the paleoclimatic proxies from tree rings tend to diverge from the instrumental (observational) record after about 1960, depending on the particular proxy construct. Assuming the instrumental record is reliable from 1880 to 2000, this implies that proxy temperature data and instrumental data give opposing results for 40 out of 120 years, or one-third of the time. There is no explanation of why this is the case. Some climate scientists have simply argued that the more recent tree-ring data are unreliable for some reason, often attributed to environmental pollution. As a result, the proxy data are dropped and the instrumental data put in their place, or the proxy data are ‘corrected’ to accord with the observed record. Although one cannot fault climate scientists for doing this as a stop-gap measure, it is necessary to acknowledge ignorance about the climate record. Until an explanation for the difference between the proxy data and the instrumental record is found, one cannot argue that the historical record provided by the proxy reconstruction is reliable.
Clearly, the analysis of paleoclimatic data leaves much open to interpretation, which results in a great deal of uncertainty about the human role in global warming and about what action, if any, governments should take to reduce greenhouse gas emissions. This wicked uncertainty needs to be taken into account in determining the costs and benefits of mitigating CO2 emissions, and in deciding whether mitigation is even an optimal policy. Before turning exclusively to economic issues, however, it is necessary to consider some further issues related to climate science, as these affect economic analysis and the conclusions one can draw from economic science.
The following erratum was printed in Science on January 21, 2005: The final sentence of the fifth paragraph should read “That hypothesis was tested by analyzing 928 abstracts, published in refereed scientific journals between 1993 and 2003, and listed in the ISI database with the keywords ‘global climate change’.”
Benny J. Peiser, Letter to Science, January 4, 2005, submission ID: 56001. Science Letters Editor Etta Kavanagh eventually decided against publishing even a shortened version of the letter that she requested because “the basic points of your letter have already been widely dispersed over the internet” (e-mail from Etta Kavanagh to Benny Peiser, April 13, 2005). Peiser replied: “As far as I am aware, neither the details nor the results of my analysis have been cited anywhere. In any case, don’t you feel that Science has an obligation to your readers to correct manifest errors? After all, these errors continue to be employed by activists, journalists and science organizations. … Are you not aware that most observers know only too well that there is absolutely ‘no’ consensus within the scientific community about global warming science?” The correspondence between Peiser and the editors of Science is at www.staff.livjm.ac.uk/spsbpeis/Scienceletter.htm (viewed April 11, 2011).
While supporting the view that current temperatures are unprecedented, the CRU now acknowledges that perhaps temperatures during the Medieval Warm Period were warmer than currently (see Vinther et al. 2010). This is discussed further below.
Emails from East Anglia University can be searched at http://www.eastangliaemails.com/ (viewed April 15, 2010). Overviews of many of the key (controversial) emails are available in a United States Senate Report (U.S. Senate Committee on Environment and Public Works 2010) and from Australian science writer Joanne Nova (2010). A recent (2010) book by Steven Mosher and Thomas W. Fuller, Climategate The CRUtape Letters (ISBN 1450512437; self published but available from Amazon.com), provides a history of the climategate emails that ties them to the scientific issues as they evolved.
The Economist is quite apologetic for the attitude of climate scientists, arguing that the scientific failings are typical practice. However, it fails to point out that more technical analyses of computer codes raise concerns about the crude methods used to link proxy temperature data from tree rings to observed (albeit also ‘adjusted’) data originating from weather stations; two of many interpretations are provided by Marc Sheppard (sinister) and John Graham-Cumming (apologetic) at (both viewed December 3, 2009): www.americanthinker.com/2009/11/crus_source_code_climategate_r.html and http://www.jgc.org/blog/2009/11/very-artificial-correction-flap-looks.html, respectively. The Economist’s bias was revealed in a lengthy article in the March 20, 2010 issue entitled “Spin, science and climate change” (pp. 83–86). Disconcertingly, the spin referred to detractors of catastrophic anthropogenic global warming, who the article suggests do not conduct peer-reviewed research but only operate through blogs, in contrast to those real scientists who do believe in human-driven global warming. For another perspective, see http://www.youtube.com/watch?v=U5m6KzDnv7k (viewed April 9, 2011).
See http://news.bbc.co.uk/2/hi/science/nature/8511670.stm (viewed February 16, 2010).
The current author appears to have been included among those unconvinced by the evidence. However, his reason for signing one of the documents used by Anderegg et al. (2010) related to Canada’s climate policies and not to the climate science (which he only began to investigate seriously in preparing the current book).
Since a body always gives off infrared radiation to its surroundings, buildings, pavement and so on contribute to higher temperatures during the daytime as well as nighttime (see Chap. 5). Thus, the heat island effect is not simply a nighttime phenomenon.
Data for stations are available at http://cdiac.ornl.gov/epubs/ndp/ushcn/access.html (viewed April 26, 2010).
In all three cases, the data are taken from http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt, as viewed August 20, 2007, May 27, 2009, and April 26, 2010. The middle observation is reported by Brian Sussman (2010, p. 58), with the others by the current author. Sussman does not report the temperature anomalies and there is no way to retrieve them from the internet location at which they are found.
See IPCC Working Group I (IPCC WGI 2007, pp. 466–474). The Working Group I (WGI) report is entitled ‘Climate Change 2007. The Physical Science Basis.’
See (IPCC WGI 2007, p. 479).
For example, Tom Pedersen, Director of the Pacific Institute for Climate Studies (PICS) at the University of Victoria in British Columbia, claims that the low temperatures of the LIA were local occurrences as opposed to a wider, global trend (personal communication, April 25, 2010).
Ladurie (1971) points to the advance and retreat of glaciers in North America and Greenland (pp. 99–107), records of flowering dates for the cherry blossom and other plants in Japan (p. 270), lake freezing dates in Japan (p. 272), evidence from giant cacti in Arizona (p. 40), and many examples from other regions, as support for the existence of the LIA outside Europe.
Fagan’s study is particularly instructive when it is contrasted with his Medieval Warm Period (Fagan 2008). The only clear conclusion is that warm weather is greatly preferred to cold, which is why the MWP is sometimes referred to as an ‘optimum.’ Certainly there were droughts and plagues of locusts, but evidence from various sources indicates that droughts, crop failure and yields were much worse during cold periods than warm ones (Fagan 2000, 2008; Idso and Singer 2009; Ladurie 1971; Plimer 2009, pp. 63–86).
Steve McIntyre discusses the origins of this figure (Fig. 7c in the IPCC report) in a May 9, 2008 blog at http://www.climateaudit.org.
The methods discussed here are described in more detail by Montford (2010, pp. 41–48) and Auffhammer et al. (2010).
NOAA’s World Data Center for Paleoclimatology also makes available hundreds of different ice-core, lake-bed sediment, coral reef and other paleoclimatic records at their website http://www.ncdc.noaa.gov/paleo/paleo.html, although it is sometimes difficult to determine what each record actually contains/means. Much of the data is in ‘raw’ form so it is still necessary to develop temperature or other proxies from it. The data are available at (viewed May 25, 2011): ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/ljungqvist2009/ljungqvist2009recons.txt.
See http://www.ncdc.noaa.gov/paleo/paleo.html (viewed April 17, 2010).
This accords with remarks by Phil Jones in a February 13, 2010 BBC interview, in which he indicated that the MWP was warmer than anything experienced recently.
This is similar to the temperature anomalies that we encountered in Chap. 2. There, for example, the CRUT3 temperature series constitute an anomaly about the 1961–1990 global average temperature. Here the average temperature of a proxy series is simply the average over all observations for that series.
Data available from http://www.climateaudit.info/data/jser.txt (viewed April 20, 2011).
See http://www.ncdc.noaa.gov/paleo/borehole/core.html (viewed April 26, 2010).
The graph of temperatures is nearly identical to NOAA’s temperature graph based on borehole data (see previous note).
See http://wattsupwiththat.com/2009/10/05/united-nations-pulls-hockey-stick-from-climate-report/ (viewed October 12, 2009). The book’s official website (http://www.unep.org/compendium2009/) was “under revision” as of October 12, 2009, but available again in February 2010. Interestingly, the graph that replaced the original figure (Fig. 3.7 in the text) begins in 1880 and goes to 2005, rather than the period 1000–2000. However, for the period after 1998, it continues to show temperatures increasing contrary to official data, as shown in Fig. 2.4 of the previous chapter.
It turns out, however, that the IPCC itself relied on non-peer reviewed material for a number of its assertions (see Chap. 5).
Many papers, correspondence and other documents relating to the hockey stick debate between MM and MBH can be found at http://www.climateaudit.org/?page_id=354 (viewed April 12, 2011). Also, the climategate controversy may have resulted in greater openness in the sharing of data and computer code.
See http://climateaudit.org/2009/12/10/ipcc-and-the-trick/ (viewed April 24, 2010) and http://climateaudit.org/2011/03/17/hide-the-decline-sciencemag/ (viewed April 12, 2011).
The graph was digitized by Stephen McIntyre in 2010 at the internet site indicated in the preceding note. Since then, data are more readily accessible from the internet, but one must still search to find the appropriate data and instructions regarding what the data mean. The data provided below are from Briffa et al. (1998), Jones et al. (1998) and Briffa (2000), and can be accessed via McIntyre’s climateaudit.org website.
In two videos on YouTube, Berkeley physics professor, Richard Muller, provides an excellent overview of the issue, as well as a scathing attack on the climate scientists responsible (see: http://www.youtube.com/watch?v=8BQpciw8suk and http://www.youtube.com/watch?v=U5m6KzDnv7k (viewed April 9, 2011)). It should be noted that Fig. 3.10 is not exactly the same as the figure in the IPCC report as it is only meant to demonstrate how the ‘trick’ (as it was described in the climategate emails) was implemented.
This is evident from the figures on pages 467–468, 475, 477 and 479 of the IPCC WGI (2007). Each of the figures still has temperatures rising rapidly during the 1900s and into the twenty-first century. The McIntyre-McKitrick critique of the hockey stick is summarily dismissed (IPCC WGI 2007, p. 466) with a reference to a paper by Wahl and Ammann (2007) that had not yet appeared and an incorrect reference to a paper by Wahl et al. in Science (2006) to which MM were not permitted to respond. The IPCC authors ignored the Wegman report and other research supporting MM. Indeed, one of the IPCC’s review editors (gatekeepers) believed the methods used to derive the hockey stick result were biased, giving statistically insignificant results; yet, he signed off on the paleoclimatic chapter, thereby agreeing that the hockey stick constituted a ‘reasonable assessment’ of the evidence (Montford 2010, pp. 446–447).
A colleague suggested that the CRU was simply so overwhelmed with requests to access the data under Freedom of Information (FOI) that they could not possibly respond to all such requests. This argument is specious because data, computer code, etc. could easily have been made available on the internet (available data are currently dispersed across various sites); further, climategate emails indicate that requests came before FOI became an issue and that the number of requests was not onerous. Indeed, climategate emails strongly suggest that there was a deliberate attempt to prevent ‘outsiders’ from accessing the data.
Source: http://cdiac.ornl.gov/trends/co2/contents.htm (viewed April 17, 2010).
Reported in the San Francisco Chronicle, October 15, 1997 at (as viewed October 19, 2009): http://icecap.us/index.php/go/joes-blog/metsul_special_report_to_icecap_al_gores_inconvenient_mistake/
http://www.americanthinker.com/2009/11/crus_source_code_climategate_r.html (viewed February 18, 2010).
- Auffhammer, M., Wright, B., & Yoo, S. J. (2010). Specification and estimation of the transfer function in paleoclimatic reconstructions. Paper presented at the Fourth World Congress of Environmental and Resource Economists, Montreal, QC.
- Bray, D., & von Storch, H. (2007). The perspectives of climate scientists on global climate change. GKSS 2007-11. Retrieved February 15, 2010, from http://dvsun3.gkss.de/BERICHTE/GKSS_Berichte_2007/GKSS_2007_11.pdf
- Briffa, K. R., Jones, P. D., Schweingruber, F. H., Karlén, W., & Shiyatov, S. G. (1996). Tree-ring variables as proxy-climate indicators: Problems with low-frequency signals. In P. D. Jones, R. S. Bradley, & J. Jouzel (Eds.), Climatic variations and forcing mechanisms of the last 2,000 years (pp. 9–41). Berlin: Springer.
- Briffa, K. R., Shishov, V. V., Melvin, T. M., Vaganov, E. A., Grudd, H., Hantemirov, R. M., Eronen, M., & Naurzbaev, M. M. (2008). Trends in recent temperature and radial tree growth spanning 2000 years across Northwest Eurasia. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1501), 2271–2284.
- Diamond, J. (2005). Collapse. New York: Viking.
- Fagan, B. M. (2000). The little ice age: How climate made history 1300–1850. New York: Basic Books.
- Fagan, B. M. (2008). The great warming: Climate change and the rise and fall of civilizations. New York: Bloomsbury Press.
- Gerondeau, C. (2010). Climate: The great delusion: A study of the climatic, economic and political unrealities. London: Stacey International.
- Goddard, S. (2011). The hottest year on record? SPPI reprint series (20 pp). Retrieved February 15, 2011, from http://scienceandpublicpolicy.org/images/stories/papers/reprint/the_hottest_year_ever.pdf
- Idso, C., & Singer, S. F. (2009). Climate change reconsidered: 2009 report of the Nongovernmental International Panel on Climate Change (NIPCC). Chicago: The Heartland Institute.
- IPCC. (1990). First assessment report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.
- IPCC WGI. (2001). Climate change 2001: The scientific basis. Working Group I contribution to the third assessment report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.
- IPCC WGI. (2007). Climate change 2007: The physical science basis. Working Group I contribution to the fourth assessment report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.
- Jones, P. D., Parker, D. E., Osborn, T. J., & Briffa, K. R. (2010). Global and hemispheric temperature anomalies – land and marine instrumental records. Trends: A compendium of data on global change. From http://cdiac.ornl.gov/trends/temp/jonescru/jones.html
- Ladurie, E. L. (1971). Times of feast, times of famine: A history of climate since the year 1000 (B. Bray, Trans.). London: George Allen & Unwin.
- Levitt, S. D., & Dubner, S. J. (2009). Super freakonomics: Global cooling, patriotic prostitutes, and why suicide bombers should buy life insurance. New York: Harper Collins.
- Lomborg, B. (2007). Cool it: The skeptical environmentalist’s guide to global warming. New York: Knopf.
- Mann, M. E. (1998). A study of ocean-atmosphere interaction and low-frequency variability of the climate system. Unpublished PhD dissertation, Faculty of the Graduate School, Yale University.
- McKitrick, R. (2010, February 26). Evidence submitted to the independent climate change email review (ICCER), Sir M. Russell, Chairman (80 pp). Retrieved April 15, 2010, from http://sites.google.com/site/rossmckitrick/McKitrick-ICCER-Evidence.pdf
- McMullen, C. P., & Jabbour, J. (Eds.). (2009). Climate change 2009: Science compendium. Nairobi: United Nations Environment Programme.
- Montford, A. W. (2010). The hockey stick illusion: Climategate and the corruption of science. London: Stacey International.
- Nelson, R. H. (2010). The new holy wars: Economic religion versus environmental religion in contemporary America. University Park: Pennsylvania State University Press.
- Nova, J. (2010). ClimateGate: Thirty years in the making. Retrieved February 9, 2010, from http://joannenova.com.au/global-warming/climategate
- Oreskes, N. (2004, December 3). The scientific consensus on climate change. Science, 306(5702), 1686.
- Osborn, T. J., & Briffa, K. R. (2006). Spatial extent of warm and cold conditions over the northern hemisphere since 800 AD. IGBP PAGES/World Data Center for Paleoclimatology Data Contribution Series #2006-009. Boulder: NOAA/NCDC Paleoclimatology Program.
- Pielke, R. A., Sr., Davey, C. A., Niyogi, D., Fall, S., Steinweg-Woods, J., Hubbard, K., Lin, X., Cai, M., Lim, Y.-K., Li, H., Nielsen-Gammon, J., Gallo, K., Hale, R., Mahmood, R., Foster, S., McNider, R. T., & Blanken, P. (2007). Unresolved issues with the assessment of multidecadal global land surface temperature trends. Journal of Geophysical Research, 112, D24S08. doi:10.1029/2006JD008229.
- Plimer, I. (2009). Heaven & earth. Global warming: The missing science. Ballan: Connor Court Publishing.
- Schweingruber, F. H., & Briffa, K. R. (1996). Tree-ring density networks for climatic reconstruction. In P. D. Jones & R. S. Bradley (Eds.), Climatic variations and forcing mechanisms (NATO Series, Vol. I 41). Berlin: Springer.
- Sussman, B. (2010). Climategate: A veteran meteorologist exposes the global warming scam. New York: WND Books.
- U.S. Senate Committee on Environment and Public Works. (2010). ‘Consensus’ exposed: The CRU controversy (84 pp). Retrieved April 15, 2010, from www.epw.senate.gov/inhofe
- Wanliss, J. (2010). Green dragon: Dominion not death. Burke: Cornwall Alliance.
- Wegman, E. J., Scott, D. W., & Said, Y. H. (2006). Ad hoc committee report on the ‘Hockey Stick’ global climate reconstruction. Retrieved October 16, 2009, from http://republicans.energycommerce.house.gov/108/home/07142006_Wegman_Report.pdf