# Global increase in record-breaking monthly-mean temperatures

## Authors

Coumou, D., Robinson, A. & Rahmstorf, S.

DOI: 10.1007/s10584-012-0668-1

Cite this article as: Coumou, D., Robinson, A. & Rahmstorf, S. Climatic Change (2013) 118: 771. doi:10.1007/s10584-012-0668-1

## Abstract

The last decade has produced record-breaking heat waves in many parts of the world. At the same time, it was globally the warmest decade since sufficient measurements began in the 19th century. Here we show that, worldwide, the number of local record-breaking monthly temperature extremes is now on average five times larger than expected in a climate with no long-term warming. This implies that on average there is an 80 % chance that a new monthly heat record is due to climatic change. Large regional differences exist in the number of observed records. Summertime records, which are associated with prolonged heat waves, increased by more than a factor of ten in some continental regions including parts of Europe, Africa, southern Asia and Amazonia. Overall, these high record numbers are quantitatively consistent with those expected for the observed climatic warming trend with added stationary white noise. In addition, we find that the observed records cluster both in space and in time. Strong El Niño years see additional records superimposed on the expected long-term rise. Under a medium global warming scenario, by the 2040s we predict the number of monthly heat records globally to be more than 12 times as high as in a climate with no long-term warming.

## 1 Introduction

In the last decade a large number of record-breaking extreme weather events have caused major damage to ecosystems and society (Coumou and Rahmstorf 2012; WMO 2011b) and raised the question as to what extent the extremes are due to climatic warming (Dole et al. 2011; Hoerling et al. 2007; Kysely 2010; Mann and Emanuel 2006; Mitchell et al. 2006; Schär et al. 2004; Stott et al. 2004; Trenberth and Shea 2006). For one type of extreme, heat waves, the link between their occurrence and global warming is qualitatively straightforward: higher global mean temperatures will inevitably lead to more and more-severe heat waves (Della-Marta et al. 2007b; IPCC 2007; Meehl and Tebaldi 2004). Here we address the quantitative question of how many of the currently observed record-breaking heat waves can be associated with climatic change, as compared to how many would have occurred in a climate with no long-term trend.

We focus on prolonged heat spells lasting for several weeks and scoring monthly heat records. Summertime monthly heat records document the most prolonged and therefore destructive heat waves (Founda and Giannakopoulos 2009; Kalkstein and Smoyer 1993; Karoly 2009; Smoyer 1998; Tan et al. 2006; UNEP 2004). Recent examples of such record-breaking heat waves include Europe 2003 (Schär et al. 2004; Stott et al. 2004), Greece 2007 (Founda and Giannakopoulos 2009), Australia 2009 (Karoly 2009), Russia 2010 (Barriopedro et al. 2011; Rahmstorf and Coumou 2011), Texas 2011 (WMO 2011a) and the continental U.S. 2012 (NOAA 2012). All had severe impacts on society, causing many heat-related deaths, massive forest fires or harvest losses (Coumou and Rahmstorf 2012; UNEP 2004; WMO 2011b). Mortality and morbidity rates are strongly linked to heat wave duration, with excess deaths increasing with each additional hot day (Kalkstein and Smoyer 1993; Smoyer 1998; Tan et al. 2006). Moreover, *record-breaking* extremes typically cause large impacts because society and ecosystems are not adapted to them. It is thus important to quantify (1) how the number of monthly temperature records is changing world-wide and (2) to what extent this can be explained by global warming.

Theory predicts that in time series with a slowly shifting mean, the expected number of records scales roughly linearly with the ratio of the long-term trend to the short-term variability (Ballerini and Resnick 1987; Franke et al. 2010; Glick 1978; Krug 2007; Rahmstorf and Coumou 2011; Redner and Petersen 2006; Wergen and Krug 2010). Thus, in data with large variability compared to trend, the climate-related increase will be relatively minor. Hence the current warming rate has so far increased the number of *daily* temperature records at individual weather stations only moderately (Meehl et al. 2009; Redner and Petersen 2006; Trewin and Vermont 2010; Wergen and Krug 2010). In contrast, in highly aggregated data like global mean temperature almost all recent records are due to the long-term climatic warming trend (Rahmstorf and Coumou 2011; Zorita et al. 2008), because the variability is small compared to the trend. In between these cases, monthly-mean temperatures typically have a standard deviation similar to the observed warming over the 20th century (Rahmstorf and Coumou 2011). Their distribution has thus typically shifted by roughly a standard deviation towards warmer temperatures, so a substantial increase in the number of monthly records can be expected (Benestad 2004; Wergen and Krug 2010). Here we quantify the number of heat records observed world-wide. We distinguish between ocean and land records and those occurring in cold versus warm seasons. Before discussing these observational results, we first review some fundamentals of record statistics.

## 2 Methods

### 2.1 Statistical model

In a time series of independent Gaussian random variables with a slowly shifting mean, the probability *P*_{r} that the value at time *t*_{n} sets a new record under a linear trend *μ* is (Ballerini and Resnick 1987; Rahmstorf and Coumou 2011):

\[ P_r(t_n) = \int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu t_n-\mu_0)^2}{2\sigma^2}} \prod_{i=1}^{n-1} \frac{1}{2}\left[1+\operatorname{erf}\!\left(\frac{x-\mu t_i-\mu_0}{\sigma\sqrt{2}}\right)\right] dx \qquad (1) \]

where *t*_{n} is the time of record *n*, *μ*_{0} the long-term mean value, *σ* the short-term variability, *x* denotes the value of the extreme and erf refers to the error function. When the normalized trend, defined as the ratio of the long-term linear trend *μ* over the short-term variability *σ*, is relatively small, this model can be linearized, giving the expression (Franke et al. 2010):

\[ P_r(n) \approx \frac{1}{n} + \frac{\mu}{\sigma}\, c_n \qquad (2) \]

where *c*_{n} is a dimensionless coefficient that depends only on *n* for Gaussian noise, and *σ* is defined as the standard deviation of the detrended time series. This equation shows that in a first-order approximation, the probability of a record-breaking extreme scales with the normalized trend (*μ*/*σ*). In the case of zero trend, Eq. 2 reduces to 1/*n*, i.e. the solution for record-breaking extremes of an independent and identically distributed (iid) time series, a solution which is thus independent of the underlying probability density function (pdf) (Benestad 2004; Krug 2007). The linearized model (Eq. 2) compares well to the original, full model for small trend, but it underestimates the number of extremes for large trend (Fig. 1). The benefit of Eq. 2 is that it provides direct insight into the first-order behavior of the number of records in a slowly shifting climate, and that it can be solved directly whereas Eq. 1 requires numerical integration. Note that the decomposition into trend (or “climate change”) and noise applies irrespective of the physical cause of the trend, so that the question of whether the trend is anthropogenic or natural is not addressed by this statistical approach.
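The numerical integration of the full model can be sketched in a few lines. The snippet below is our own illustration (function name and discretization are ours, not the authors' code): it evaluates the record probability for year *n* of a Gaussian series with normalized trend *c* = *μ*/*σ* per year, in units of *σ* with *μ*₀ = 0, and reduces to 1/*n* for zero trend.

```python
import numpy as np
from math import erf, sqrt, pi

def record_prob_full(n, c, npts=2001, lim=8.0):
    """Numerically integrate the full record-probability model for year n
    of a Gaussian series with normalized linear trend c = mu/sigma per year
    (temperatures in units of sigma, mu0 = 0, t_i = i)."""
    xs = np.linspace(-lim, lim, npts)
    # pdf of year n's value, centered on the trend line at t_n = n
    pdf = np.exp(-0.5 * (xs - c * n) ** 2) / sqrt(2.0 * pi)
    # product of the cdfs of years 1 .. n-1, each centered at c * i
    prod = np.ones_like(xs)
    for i in range(1, n):
        prod *= 0.5 * (1.0 + np.array([erf((x - c * i) / sqrt(2.0)) for x in xs]))
    return float(np.sum(pdf * prod) * (xs[1] - xs[0]))

p_iid = record_prob_full(40, 0.0)    # zero trend: analytic value is 1/40
p_warm = record_prob_full(40, 0.03)  # a typical continental normalized trend
```

With zero trend the integral is exactly 1/*n*, which provides a convenient check on the discretization.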

The model-expected number of records in the last decade, i.e. 2001–2010, is \( R_{\mathrm{mod}} = \sum_{n=N-9}^{N} P_r(n) \), where *N* is the total number of years in the time series. The record ratio *X*, i.e. the increase in the number of records compared to those expected in an iid time series, is then defined by \( X_{\mathrm{mod}} = R_{\mathrm{mod}} \big/ \sum_{n=N-9}^{N} \tfrac{1}{n} \) for the model and \( X_{\mathrm{obs}} = R_{\mathrm{obs}} \big/ \sum_{n=N-9}^{N} \tfrac{1}{n} \) for observations, with *R*_{obs} the actual observed number of records.
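The iid denominator in these definitions is easy to evaluate directly; a minimal sketch (the helper name is ours):

```python
def iid_expected_records(N, decade=10):
    """Expected number of records in the final `decade` years of an
    N-year iid series: the sum of 1/n for n = N - decade + 1 .. N."""
    return sum(1.0 / n for n in range(N - decade + 1, N + 1))

per_month = iid_expected_records(131)  # one calendar month, 1880-2010
per_gridpoint = 12 * per_month         # all 12 monthly series at a grid point
```

For the 131-year series this gives roughly 0.08 expected records per decade for each calendar month, i.e. close to one record per grid point per decade when the 12 monthly series are combined, consistent with the iid expectation quoted in Section 3.2.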

### 2.2 Data

We use the 131-year (1880–2010) combined land-ocean surface temperature dataset provided by NASA-GISS (Hansen et al. 2010). It provides monthly average surface temperature anomalies on a 2 × 2° global grid. Excluding polar regions outside 70° latitude because of sparse data coverage, we obtain roughly 12,500 grid points, for each of which we have 12 time series (one for each calendar month), giving a total of ~150,000 individual time series. The number of spatially independent time series will be much smaller, however, due to spatial correlation of neighboring grid points. This is only relevant for the computation of statistical significance (more on this later). Global mean temperatures for each month show the typical two-step global warming signal, with warming taking place in two distinct periods: some from the 1910s to the 1940s, and the majority from the 1970s until the present (see SOM Fig. 1a) (Hansen et al. 2010). While the warming trend varies substantially over the 131-year period, no significant time variation in the standard deviation *σ* of the non-linearly detrended data could be detected (see SOM Fig. 1b and c).

### 2.3 Serial correlation

Before analyzing actually observed records, we will show that the non-linearly detrended time series, containing the year-to-year natural variability for a specific month, are close to iid (*independent* and *identically distributed*). A time series is iid if each data-point in the time series has the same probability distribution as the others and all are mutually independent (Benestad 2004). This implies stationarity, with the pdf (and thus also the mean and the variance) not changing over time (Benestad 2004). We do not, however, make any assumptions about the shape of the pdf: for iid distributions the 1/*n* relationship holds regardless of the underlying pdf (Ballerini and Resnick 1987; Benestad 2003). Mutual independence implies that there is no serial correlation in the time series. For other climate data, however, serial correlation has been shown to be important (Bunde et al. 2005; Rybski et al. 2006; Zhu et al. 2010). We therefore calculated serial correlation in the non-linearly detrended GISS surface temperature data for each calendar month and tested to what extent it could affect our results. Over all continents and almost all ocean regions, the serial correlation lies between −0.2 and 0.2 for both winter and summer months (SOM Fig. 2). Only some localized ocean regions, mainly in the Southern Ocean, have a serial correlation outside this range due to the longer memory of the ocean, but even there it does not exceed 0.3–0.4. Such small serial correlation is what one would expect, as individual data points are spaced 1 year apart. To quantify the effect of serial correlation on the number of records, we ran a number of Monte Carlo experiments with different levels of serial correlation (from −0.9 to 0.9) and compared the generated number of records to those predicted by the 1/*n*-relationship.
This experiment showed that only for high levels of positive serial correlation (0.5 or larger) is the number of records statistically significantly different from that predicted by 1/*n* (SOM Fig. 3). For moderate positive serial correlation (<0.5) and negative serial correlation (as large as −0.9), the deviation from 1/*n* is small and *not* statistically significant: generally only a few percent, and ~10 % at most. This agrees with previous findings by Benestad (2003), who also showed that the 1/*n*-relationship holds for time series with moderate serial correlation. Our test shows that the non-linearly detrended time series have only limited serial correlation and that their number of records is thus well described by 1/*n*, i.e. that expected for an iid series. For the range of serial correlations observed, the difference in the number of records in the detrended time series as compared to the iid-expected number is negligible compared to the detected increase in observed record events (see Results section).
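A Monte Carlo test of this kind can be sketched as follows. This is our own illustrative setup (AR(1) noise with lag-1 autocorrelation `phi`; function name and parameters are ours, not the authors' exact experiment):

```python
import numpy as np

def mean_records_last_decade(phi, n_years=131, n_trials=20000, seed=1):
    """Average number of records in the last 10 years of detrended AR(1)
    series with lag-1 autocorrelation phi (unit-variance innovations)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_trials, n_years))
    x = np.empty((n_trials, n_years))
    x[:, 0] = eps[:, 0]
    for t in range(1, n_years):
        x[:, t] = phi * x[:, t - 1] + eps[:, t]
    running_max = np.maximum.accumulate(x, axis=1)
    rec = x[:, 1:] > running_max[:, :-1]  # record: exceeds all earlier values
    return rec[:, -10:].sum(axis=1).mean()

iid_expect = sum(1.0 / n for n in range(122, 132))  # iid prediction, ~0.079
r_uncorr = mean_records_last_decade(0.0)            # no serial correlation
r_moderate = mean_records_last_decade(0.3)          # moderate correlation
```

For zero and moderate positive correlation the simulated record counts stay close to the iid sum of 1/*n*, in line with the few-percent deviations reported above.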

## 3 Results

### 3.1 40-year records

The two-step global warming signal (see SOM Fig. 1) implies that the model assumption of a linear trend is reasonable only over the period 1971–2010. In a first step, we thus limit the dataset to the last 40 years, i.e. *N* = 40, to be able to compare model and observations. For each time series, we determine the linear trend and the standard deviation of the non-linearly detrended time series, and count the number of observed records that occurred in the last decade. Time series with similar normalized trends (*μ*/*σ*) are binned and averaged, since the model predicts that the number of extremes depends on *μ*/*σ*. For oceans, the most frequently observed normalized trend is ~0.02 standard deviations per year, whereas for continents it is ~0.03 (Fig. 1a and b). Here competing effects play a role: continents generally have both a larger trend and a larger variance. For such normalized trends the model’s record ratio, *X*_{mod}, is about 2 (i.e. a doubling of the number of 40-year records), which is indeed observed (Fig. 1c). Cooling occurred on ~12 % of the ocean surface and only ~3 % of the land surface. In these cooling regions, the observed number of ocean records is higher than expected from the full model, whereas the number of land records drops to near zero as predicted (Fig. 1c). The match between observed and model-expected records is also better for the continents than for the oceans for positive trends. This can be explained by the much longer memory of the oceans, which makes deviations from the model assumptions (i.e. random uncorrelated noise) more likely. Another reason for deviations from the simple model is the linear-trend assumption: if warming accelerates, this will increase the number of expected records compared to a steady warming with the same linear trend. Conversely, a decelerating warming will decrease their number.
Furthermore, the linear-warming model assumes Gaussianity of the data and stationarity of the variance, which may not hold everywhere. That these are not major issues, however, is shown by the fact that the observed increase in the number of record events is reasonably well explained by the model (Fig. 1c).
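The approximate doubling of 40-year records for a normalized trend of ~0.03 per year can be reproduced with a small Monte Carlo sketch (our own illustration of the record ratio *X* as defined in Section 2.1; the function name is ours):

```python
import numpy as np

def record_ratio(c, n_years=40, n_trials=40000, seed=2):
    """Record ratio X for the last decade of Gaussian series with
    normalized linear trend c (sigma = 1): mean number of observed
    records divided by the iid expectation sum(1/n)."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_years + 1)
    y = c * t + rng.standard_normal((n_trials, n_years))
    running_max = np.maximum.accumulate(y, axis=1)
    rec = y[:, 1:] > running_max[:, :-1]
    observed = rec[:, -10:].sum(axis=1).mean()
    iid = sum(1.0 / n for n in range(n_years - 9, n_years + 1))
    return observed / iid

x_no_trend = record_ratio(0.0)  # close to 1, the iid baseline
x_typical = record_ratio(0.03)  # roughly a doubling, as in Fig. 1c
```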

In the zonal mean, *X*_{obs} drops to 1 in regions that show no warming, i.e. to the value expected in iid time series. Over the continents, the number of 40-year records has roughly increased by a factor of two in near-Arctic regions and by a factor of three in the tropics. In the tropics the increase is primarily due to small short-term variability, in combination with a moderate trend. In near-Arctic regions, the high record occurrence is due to the exceptionally strong trend there. The oceans have seen a more moderate increase in extremes, typically by a factor of two, except for the subarctic region, where the number of records in the zonal mean is dominated by those occurring in the subpolar North Atlantic. Over the oceans larger discrepancies exist between the stochastic model and observations (discussed later). The seasonal variation in the number of records, both observed and predicted, is generally small (see SOM Fig. 5).

Combining records of all months shows that most continents have seen much more than a doubling in the number of 40-year records (Fig. 3e). For Africa, Amazonia, southern Asia and Eastern Europe, the observed record ratio *X*_{obs} is typically more than four. These high-impact regions are captured by the model (Fig. 3f), although the model results are smoother with less small-scale structure. North America, Northern Eurasia and Australia have seen a more moderate increase in records (Fig. 3e) as also predicted by the model (Fig. 3f). The observed record-patterns for boreal (Fig. 3a) and austral (Fig. 3c) summer are similar to those averaged for all months but some differences exist. Europe has seen many more summertime (Fig. 3a) than wintertime (Fig. 3c) records, something that is captured by the model. A similar seasonal behavior is observed in western North America. Finally, the central Eurasian region scored many wintertime records (Fig. 3c) something that is not expected from the model results (Fig. 3d). Over the continents, the model thus generally captures the most important large-scale patterns of observed records. The latter are however more spatially clustered, as most prominently seen in the seasonal figures. That this clustering is not seen in the model results is not surprising as it excludes any non-linear feedback mechanisms and only determines the increased probability of scoring new records due to background warming (more on this in the Discussion section).

Observed records over the oceans are also more spatially clustered, whereas the model produces smooth patterns. Over the North Atlantic the model predicts a smooth, regular increase in the number of records (Fig. 3f), whereas a tri-polar structure is observed in the data, with many records in the subtropical and subpolar regions and only few in the mid-latitudes (Fig. 3e). In the tropical eastern Pacific the model expects a small but non-zero number of records, but actually none are found (white region offshore Peru, Fig. 3e). We attribute these two discrepancies between model and observations to the North Atlantic Oscillation (NAO) and to El Niño variability, respectively, and discuss them in more detail below (see Discussion). The discrepancies between the stochastic model and observations are thus more pronounced over the oceans, likely because temperature anomalies there are less stochastic and more coherent in space and time.

### 3.2 131-year records

Compared with the 40-year analysis, the record ratio *X* is about three times larger.

In the global area-weighted average, the data show a five-fold increase in the number of record events. Whereas in iid time series the total number of monthly records in a decade is expected to be close to 1 (approximately 12 months multiplied by 10 years divided by 131 years), on average ~5 are observed. Continental regions with an exceptionally large number of 131-year records in the last decade are primarily located in the tropics and include East Africa, India and Amazonia (Fig. 4c). In these regions the record ratio *X*_{obs} is now more than 12, i.e. 12 times as many records as expected, or on average roughly one record-breaking warm month per year. For western European summers a similarly large record ratio is observed (Fig. 4a). The probability that the records set in these regions were due to the long-term warming rather than stationary variability is thus more than 90 % (i.e. (*X*_{obs}–1)/*X*_{obs} × 100 %). In the global average this probability is about 80 %, and it is generally even higher over continents (red regions in Fig. 4d). Note that this applies to the probability of a record being broken, irrespective of by what margin it is broken (see Discussion and (Otto et al. 2012)). Notable exceptions are the eastern U.S., Australia, Southeast Asia and Argentina, with probabilities ranging from about 10 to 50 % depending on the specific location.
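The attribution probability used here follows directly from the record ratio; a one-line sketch (the function name is ours):

```python
def attributable_fraction(x):
    """Probability that a record is due to the long-term trend rather
    than stationary variability, given the record ratio X: (X - 1) / X."""
    return (x - 1.0) / x

p_global = attributable_fraction(5.0)    # global mean, X ~ 5: gives 0.8
p_tropics = attributable_fraction(12.0)  # high-record regions, X > 12
```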

### 3.3 Time evolution

Figure 5 shows how the observed global-mean record ratio (*X*_{obs}) has evolved over time. The largest ratios are found in 1998, 1983 and 2010. The thick red curve shows *X*_{obs} smoothed, revealing a clear upward trend over the past 50 years.

To what extent can this time evolution be understood based on global mean temperature alone, disregarding any spatial patterns of climate change? To test this we use a stochastic model of an Earth with globally uniform warming plus added random variability. To compute the statistical uncertainty (blue band, Fig. 5) we need to account for spatial correlation of temperature anomalies, and we used the 1,200 km decorrelation radius of monthly mean station data found by Hansen and Lebedeff (1987). On this basis we estimate that the monthly temperature evolution has ~100 degrees of freedom (obtained by dividing the surface area of the Earth between 62°S and 72°N by the area of a circle with a 1,200 km radius). Hence our stochastic model is equivalent to an Earth with globally uniform warming plus 100 independent realisations of random white noise with constant standard deviation (taken as the average of that of the observed monthly series). One realisation of the *global-mean* evolution of records is thus the average over 100 independent Monte Carlo simulations. We produced 10,000 such global-mean realisations to determine the 5–95 % uncertainty range. In Fig. 5 we show this range for 5-year smoothed values, so that it can be directly compared to the observed heavy red line. This spatial correlation issue does not affect the expected value (blue line) but only the spread of the 5–95 % uncertainty range (blue band). As input we used the observed GISS global mean temperature up to the year 2010 in smoothed form (with interannual variability removed, as in Rahmstorf and Coumou (2011)) and a smooth global warming projection from 2011 onwards (based on emissions scenario RCP4.5, with warming of 0.7 °C from 2011 to 2040 (IPCC 2007)).
The model-expected value, *X*_{mod}, steeply increases after the 1970s due to the rapid rise in global temperature, reaching ~5 in the last decade, consistent with the observations. During the period 1980–2010 it increases by a factor of 3.1, while the absolute number of extremes increases by a factor of 2.4. The observed number of records is mostly within the blue expectation band and thus consistent with the stochastic model, with three notable peaks as exceptions: in the 1940s, the early 1980s and around 1998. The latter two coincide with the two extreme El Niño events of 1982/83 (Lukas et al. 1984) and 1998 (Picaut et al. 2002), which suggests that the number of extremes is largely described by the simple “warming + white noise” model, with additional extremes during strong El Niño events.
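The "uniform warming + white noise" construction can be sketched as follows. This is an illustration using a synthetic warming curve, not the smoothed GISS input used by the authors, and the function name is ours:

```python
import numpy as np

def global_record_ratio(gmt, n_dof=100, n_real=200, seed=3):
    """Record ratio X for the last decade, averaged over n_dof spatial
    degrees of freedom and n_real global realisations: each series is the
    common global-mean curve gmt (in units of the noise standard
    deviation) plus independent white noise."""
    rng = np.random.default_rng(seed)
    n_years = len(gmt)
    y = gmt + rng.standard_normal((n_real * n_dof, n_years))
    running_max = np.maximum.accumulate(y, axis=1)
    rec = y[:, 1:] > running_max[:, :-1]
    observed = rec[:, -10:].sum(axis=1).mean()
    iid = sum(1.0 / n for n in range(n_years - 9, n_years + 1))
    return observed / iid

x_flat = global_record_ratio(np.zeros(131))           # no warming: X ~ 1
x_ramp = global_record_ratio(np.linspace(0, 1, 131))  # 1-sigma warming ramp
```

Averaging over the spatial degrees of freedom narrows the spread of individual realisations without changing the expected value, which is the point made about the blue band versus the blue line above.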

This agreement allows us to estimate the future increase of extremes under a medium global warming scenario (see Fig. 5). By the 2040s, the expected record ratio has globally increased to more than 12.

## 4 Discussion

We have shown (1) that the number of records, relative to the number expected in an iid series, has increased over the 1880–2010 period and is now five times as high, and (2) that the number of records in the non-linearly detrended temperature time series closely follows the iid-expected value. This implies that the number of record-breaking heat extremes has on average increased to roughly 5 times the number expected in a climate with no long-term warming.

The rise in records has been especially steep over the last 40 years (Fig. 5), a period that saw an approximately linear increase in monthly global mean temperatures (SOM Fig. 1a). On average globally, the increase in records over this period can, to first order, be explained by a simple statistical model of linear warming plus Gaussian white noise (Fig. 1). Large regional variations exist in the observed number of records, something that is partly captured by the statistical model (Fig. 3). With the data analysis and statistical modeling presented here, one can quantify, to first order and on a super-regional scale, the probability that a heat record is due to climatic warming (irrespective of the physical cause of this warming). The discrepancies between observations and model could be due either to non-linear behavior in the data (as can result from coherent variability patterns), non-Gaussianity, or non-stationarity of the variance. Further research is needed to distinguish which of these factors are important in specific geographical regions. Here we discuss some possible mechanisms.

Positive NAO years are associated with anomalously cold sea surface temperatures over subtropical and subpolar regions in the North Atlantic while the mid-latitudes are relatively warm (Hurrell and Deser 2009). The NAO switched from a state dominated by positive indices in the 1990s to a state dominated by negative indices in the 2000s (Hurrell and Deser 2009). This cycle thus caused excess warming from the 1990s to 2000s in subtropical and subpolar regions and lesser warming in the mid-latitudes, causing the observed tri-polar structure of records in the North Atlantic. The large maximum of records in the northern Atlantic geographically matches the northern half of the subpolar gyre and could be related to its variability (Levermann and Born 2007).

The peak El Niño year of 1998 resulted in record sea surface temperatures in the tropical eastern Pacific offshore Peru (Su et al. 2001). Because the 1998 temperatures in this region were so extreme, they have not been surpassed since then. This explains why the observed number of records in the last decade in this region is below that expected by the stochastic model.

The many wintertime records observed in a band reaching across Eurasia, which are not expected by the model, may be related to variability in snow cover (McCabe and Wolock 2010) which would lead to deviations from the assumption of a simple linear trend over the past 40 years. These are three examples (NAO, El Niño, snow cover) of mechanisms leading to added complexity in some regions and thus to differences between observations and the simple stochastic model.

Individual heat waves are often associated with non-linear physical mechanisms that may not be related to the long-term temperature trend. Such processes include El Niño events, atmospheric blocking (Dole et al. 2011) and soil-moisture feedbacks (Schär et al. 2004). Attributing individual heat waves thus requires a thorough understanding of the underlying physics and cannot rely on statistical studies alone. However, these non-linear processes now occur on top of the slow but steady background warming. Thus, naturally occurring mechanisms such as El Niño can cause heat extremes that, in combination with background warming, turn into record-breaking extremes. The effect of climate change is to add extra warmth to heat extremes that would have fallen below record thresholds without the long-term warming, but now exceed them.

The statistical analysis presented here thus provides a first order, base-case analysis assessing the increase in extremes expected due to background warming alone. Therefore, during strong El Niño years like 1998 (Picaut et al. 2002) many more records occur than expected solely by the long-term warming (see Fig. 5). Similarly, such non-linear mechanisms can also cause extremes to cluster in space. For example, once a heat wave associated with a blocking high pressure system develops over a particular region (Dole et al. 2011), soil-moisture feedbacks might amplify the warming (Schär et al. 2004) resulting in a cluster of record-events. The efficiency of this mechanism is unrelated to the local warming trend but rather depends on the soil-moisture content. This may or may not show any long-term trend and depends primarily on precipitation and evaporation rates during the most recent months. Such processes, which are not included in the simple stochastic model used here, can thus explain why the observations show more spatial and temporal clustering than the model (see Fig. 3).

Monthly heat records occurring during summer months document the most prolonged, and therefore high-impact, heat waves. Our results show that over large continental regions, including parts of Europe, Africa, Amazonia and southern Asia, the number of summertime monthly heat records has increased strongly. This is consistent with statistical studies on the increase of heat wave intensity over the 20th century (Della-Marta et al. 2007a; Hansen et al. 2012; Kuglitsch et al. 2010). Recent theoretical and modeling studies focusing on individual heat waves, i.e. on Russia 2010 (Otto et al. 2012; Rahmstorf and Coumou 2011) and Europe 2003 (Stott et al. 2004), estimated that climatic change has increased the likelihood of these events about fourfold (“best estimate”) (Stott et al. 2004). Our results are consistent with these findings. They also corroborate recent modeling studies indicating that the tropics are especially vulnerable to experiencing record-breaking seasonal heat extremes in the next century (Diffenbaugh and Scherer 2011).

We show that climatic change has on average increased the occurrence probability of new warm monthly temperature records five-fold worldwide. Our statistical analysis does not consider the causes behind climatic change, but given the overwhelming evidence that global warming in the second half of the 20th century is anthropogenic (IPCC 2007), one may conclude that approximately 80 % of the recent monthly heat records would not have occurred without human influence on climate. Under a medium future global warming scenario this share will increase to more than 90 % by 2040.