Climate Dynamics, Volume 31, Issue 5, pp 573–585

Estimating present climate in a warming world: a model-based approach

Abstract

Weather services base their operational definitions of “present” climate on past observations, using a 30-year normal period such as 1961–1990 or 1971–2000. In a world with ongoing global warming, however, past data give a biased estimate of the actual present-day climate. Here we propose to correct this bias with a “delta change” method, in which model-simulated climate changes and observed global mean temperature changes are used to extrapolate past observations forward in time, to make them representative of present or future climate conditions. In a hindcast test for the years 1991–2002, the method works well for temperature, with a clear improvement in verification statistics compared to the case in which the hindcast is formed directly from the observations for 1961–1990. However, no improvement is found for precipitation, for which the signal-to-noise ratio between expected anthropogenic changes and interannual variability is much lower than for temperature. An application of the method to the present (around the year 2007) climate suggests that, as a geographical average over land areas excluding Antarctica, 8–9 months per year and 8–9 years per decade can be expected to be warmer than the median for 1971–2000. Along with the overall warming, a substantial increase in the frequency of warm extremes at the expense of cold extremes of monthly-to-annual temperature is expected.

Keywords

Climate change · Present climate · Climate normals · Probability distribution · CMIP3

1 Introduction

Weather services base their operational definitions of “present” climate on past observations. Commonly, a 30-year normal period such as 1971–2000 or 1961–1990 is used (World Meteorological Organization 1989). Where data availability allows, baseline periods extending further to the past may be used for some purposes, particularly when focusing on the statistics of extremes, which are difficult to estimate from short time series.

However, in a world with ongoing, presumably largely anthropogenic climate change (Hegerl et al. 2007), past statistics give a biased estimate of the present climate. A compelling example is the global mean surface air temperature: according to the analysis of Brohan et al. (2006), all six years in the twenty-first century (2001–2006) have been among the seven warmest since 1850.

The clear distinction between the present and the past in the time series of the global mean temperature reflects, in part, the relatively small interannual variability of this quantity. On smaller horizontal scales, larger variability makes the difference between the present and the past less obvious. However, model simulations indicate that the ongoing climate change is an important factor even on the regional scale, at least down to the decadal time scale. For example, Räisänen and Ruokolainen (2006) estimated that, in southern Finland, the decade 2011–2020 has a 95% chance of being warmer than the mean for 1971–2000.

Internal climate variability increases with decreasing time scale. Thus, the mean temperature of a given calendar month or even the whole year varies much more from one year to the next than the decadal mean temperature varies from decade to decade. It might appear likely that, in comparison with this wider range of variability, the effects of climate change would still be so small that past statistics would give a good approximation to the probability distributions that characterize interannual temperature variability in the present or near-future climate. However, as we will show in this paper, this is not the case. Consequently, we argue that reliance on past statistics alone is not the best way to define present climate for planning purposes, at least as far as temperature is concerned. Better estimates of the present temperature climate are obtainable by combining past observations with model simulations of anthropogenic climate change.

The differences between present and past climate are less obvious for precipitation, for which the signal-to-noise ratio between the expected anthropogenic changes and interannual variability is much lower than for temperature (Räisänen 2001; Harvey 2004). In principle, however, the need to update climate statistics to take into account ongoing climate changes also extends to precipitation and many other variables.

In this paper, we propose a “delta change” type method for estimating present and future climate. Past observations of temperature and precipitation are modified on the basis of model results and observed global mean temperature changes, with the aim of making them representative of altered climate conditions. From the distribution of these modified observations, probability distributions that characterize the present-day or future interannual variability of climate are estimated.

The method is tested in hindcast mode to find that, for temperature, it gives better estimates of the climate in 1991–2002 than are obtained from observations from the previous 30 years (1961–1990) alone. An application of the method to the present (2007) climate suggests that, as geographically averaged over land areas north of 60°S, about 70% of individual months and 85% of full years can now be expected to be warmer than the median for 1971–2000.

The model simulations and observational data used in this study are introduced in Sect. 2. Section 3 describes the delta change extrapolation method used for deriving estimates of present or future climate. The hindcast test, which includes a comparison between several methodological options, is presented in Sect. 4. In Sect. 5, the extrapolation method is used to study the climate at present (2007) and later in the twenty-first century. A summary of the main findings, with some concluding discussion, is given in Sect. 6.

2 Data sets

2.1 Observational data

The delta change method requires both observations of past climate and model simulations of climate change. The observational temperature and precipitation data used here are from the Climatic Research Unit (CRU) TS 2.1 data set (Mitchell et al. 2004). The CRU data, which have a native resolution of 0.5° in both latitude and longitude, are here aggregated to 2.5° × 2.5° grid boxes, which is representative of the typical resolution of current climate models. The CRU data set covers the years 1901–2002 and is available for all land areas excluding Antarctica. In this study, however, we mainly use the period beginning from 1961.
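The aggregation from the 0.5° grid to 2.5° × 2.5° boxes amounts to averaging 5 × 5 blocks of cells. The sketch below is a minimal illustration, not the authors' actual procedure (which the paper does not specify in detail); the function name is hypothetical, and the cos-latitude weighting and missing-value handling are our assumptions.

```python
import numpy as np

def aggregate_to_coarse(field, lats, block=5):
    """Aggregate a 0.5-degree lat-lon field to 2.5-degree boxes by
    cos(latitude)-weighted block averaging, ignoring missing cells (NaN)."""
    nlat, nlon = field.shape
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, nlon))  # area weights
    w = np.where(np.isnan(field), 0.0, w)                       # zero weight for missing data
    f = np.where(np.isnan(field), 0.0, field)
    # sum weights and weighted values over each block x block box
    ws = w.reshape(nlat // block, block, nlon // block, block).sum(axis=(1, 3))
    fs = (w * f).reshape(nlat // block, block, nlon // block, block).sum(axis=(1, 3))
    with np.errstate(invalid="ignore", divide="ignore"):
        out = fs / ws
    # boxes with no valid input cells remain missing
    return np.where(ws > 0, out, np.nan)
```

A box is reported as missing only when all 25 of its input cells are missing; partially observed boxes use the available cells.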

A time series of the observed evolution of the global mean surface air temperature is also needed. For this purpose, the analysis of Brohan et al. (2006) is used.

2.2 Model data

Data from 21 coupled atmosphere–ocean general circulation models participating in the World Climate Research Programme Third Coupled Model Intercomparison Project (CMIP3; Meehl et al. 2007a) are used. This subset (Table 1) includes all CMIP3 models for which data were available, at the time this research was carried out, for both the "20th Century Climate in Coupled Climate Models" (20C3M) simulations and the twenty-first century simulations based on the Special Report on Emissions Scenarios (Nakićenović et al. 2000) A1B scenario. A concatenation of these simulations gives, for each model, a time series covering at least the period 1901–2098.
Table 1

The models used in this study

Model             | Institution
CCSM3             | National Center for Atmospheric Research, USA
CGCM3.1 (T47)     | Canadian Centre for Climate Modelling and Analysis
CGCM3.1 (T63)     | Canadian Centre for Climate Modelling and Analysis
CNRM-CM3          | Météo-France
CSIRO-MK3.0       | CSIRO Atmospheric Research, Australia
ECHAM5/MPI-OM     | Max Planck Institute (MPI) for Meteorology, Germany
ECHO-G            | University of Bonn and Model and Data Group, Germany; Korean Meteorological Agency
FGOALS-g1.0       | Chinese Academy of Sciences
GFDL-CM2.0        | Geophysical Fluid Dynamics Laboratory, USA
GFDL-CM2.1        | Geophysical Fluid Dynamics Laboratory, USA
GISS-AOM          | Goddard Institute for Space Studies, USA
GISS-EH           | Goddard Institute for Space Studies, USA
GISS-ER           | Goddard Institute for Space Studies, USA
INM-CM3.0         | Institute for Numerical Mathematics, Russia
IPSL-CM4          | Institut Pierre Simon Laplace, France
MIROC3.2 (hires)  | Center for Climate System Research, National Institute for Environmental Studies and Frontier Research Center for Global Change, Japan
MIROC3.2 (medres) | Center for Climate System Research, National Institute for Environmental Studies and Frontier Research Center for Global Change, Japan
MRI-CGCM2.3.2     | Meteorological Research Institute, Japan
PCM               | National Center for Atmospheric Research, USA
UKMO-HadCM3       | Hadley Centre for Climate Prediction and Research / Met Office, UK
UKMO-HadGEM       | Hadley Centre for Climate Prediction and Research / Met Office, UK

Although parallel runs started from slightly different initial conditions are available for many of the models, only one 198-year time series per model is used in this study, with the exception of the cross-verification test described at the end of Sect. 4.2. The model results were interpolated to the same 2.5° × 2.5° latitude–longitude grid as the CRU data.

The model results are used for two purposes. First, they are used to estimate the evolution of the global mean temperature beyond the period covered by observations. In the long run, the magnitude of the global warming will depend substantially on the magnitude of greenhouse gas emissions (Meehl et al. 2007b). However, for the main focus of this study, that is, climatic conditions in the early twenty-first century, the choice among the SRES scenarios is unimportant.

Second, as described in Sect. 3.1, the simulations are used to find linear scaling relationships between the global mean temperature change and local climate changes. In model simulations, such scaling relationships tend to be quasi-independent of the details and magnitude of the forcing (e.g., Santer et al. 1990; Mitchell et al. 1999; Huntingford and Cox 2000; Mitchell 2003; Harvey 2004; Räisänen and Ruokolainen 2006; Meehl et al. 2007b), at least for scenarios dominated by increasing greenhouse gas concentrations. On the other hand, this linearity might fail in areas with substantial changes in aerosol forcing, even though heat transport in the climate system acts to smooth out the effect of geographical forcing differences (Boer and Yu 2003). Accounting for this potential problem would require a more sophisticated version of the delta change method than the one used here.

3 Delta change extrapolation

To estimate the probability distributions that characterize present or future climate, we follow a two-step procedure. First, past observations are extrapolated forward in time, with the aim of making them representative of altered climate conditions. Second, the desired probability distributions are derived from the distribution of these extrapolated observations.

The principle of the extrapolation is illustrated in Fig. 1, using December mean temperatures in Helsinki, Finland (60°N, 25°E) as an example. For each baseline observation, of which only some are shown in the interest of clarity, there are 21 extrapolation curves spreading gradually with time. These curves are derived from the results of the 21 CMIP3 models, using the simulated model-specific changes in local mean temperature and temperature variability together with the observed (to the present) and simulated (for the future) evolution of the global mean temperature. In this particular case, most of the models suggest a slight decrease in interannual variability with increasing greenhouse gas forcing. This leads in Fig. 1 to a slightly smaller increase in high than in low December mean temperatures, but the effect is subtle compared with the overall warming. The details of the extrapolation procedure are discussed in Sects. 3.1 and 3.2.
Fig. 1

Extrapolation of observed temperatures forward in time. The closed circles show observed December mean temperatures in Helsinki, Finland, in five individual years, and the dashed lines (21 for each of the 5 years) the extrapolated temperatures derived from these observations using the results of 21 climate models. The solid lines give the extrapolation based on all-model mean results. The vertical line is drawn at the year 2007

The probability distribution characterizing the climate in a given year (e.g., 2007) is derived from the extrapolated temperature or precipitation values originating from each yearly observation within the selected baseline period (e.g., 1971–2000). The observed (1971–2000) and extrapolated (for 2007) distributions of December mean temperature in Helsinki are shown in Fig. 2. In addition to the discrete distribution obtained directly from the observed or extrapolated temperatures, a continuous distribution based on an application of a Gaussian kernel (Sect. 3.4) is also computed. Such a smoothing has the advantage of reducing sampling errors in probability estimates, but it may also cause systematic errors. The verification study in Sect. 4 suggests that the smoothing is generally beneficial, that is, the decrease in sampling errors outweighs the systematic errors. On the whole, however, the findings reported in this paper are quite insensitive to the amount of smoothing applied.
Fig. 2

a The observed distribution of December mean temperatures in Helsinki in 1971–2000, and b the derived probability distribution for the year 2007. The bars show the discrete distribution and the solid lines a continuous distribution based on Gaussian smoothing with smoothing coefficient b = 0.5 in Eq. (10). In b, the dashed line shows the continuous distribution for 1971–2000 (same as the solid line in a). The vertical scale is given both for the probability density (on the left) and for the number of cases in each 1°C class within a 30-year period (on the right)

3.1 Regression equations

Let x(t) be the value of a variable, for example the mean temperature of a given calendar month at a given location, in year t. We write x(t) as
$$ x(t) = \bar{x}(t) + x'(t) $$
(1)
where \( \bar{x}(t) \) is the time-dependent expected value of x in year t and x′(t) is the deviation from this due to internal climate variability. The expected value refers here to the value expected under a given forcing history. For a climate model, the evolution of \( \bar{x}(t) \) could be determined, in principle, by first running the model a very large number of times with the same forcing but varying initial conditions, and then averaging x(t) over these simulations. In practice, such near-infinite ensembles of simulations are not available for models, still less for the real world, where only one realization of climate has been observed. Thus, the estimation of \( \bar{x}(t) \) and the statistics of x′(t) must rely on approximate methods.
In this study, we assume that \( \bar{x}(t) \) and the magnitude of variability, characterized by the expected absolute value of x′(t), change linearly with the expected value of the global mean temperature, \( \bar{G}(t) \):
$$ \bar{x}(t) = A + B\bar{G}(t) $$
(2)
and
$$ \overline{|x'|}(t) = C + D\bar{G}(t). $$
(3)
The model studies referred to in Sect. 2.2 support such a quasi-linear relationship for time mean climate changes under increasing greenhouse gas forcing. Although this linearity assumption has not been verified as thoroughly for changes in variability, Eq. (3) is a logical first approximation as long as anthropogenic changes can be regarded as a small perturbation to the climate system. Furthermore, although Eq. (3) is difficult to verify for individual models because changes in variability typically have a low signal-to-noise ratio, it appears to be supported by a multi-model mean analysis of the CMIP3 simulations. When averaged over the CMIP3 models, changes in interannual variability tend to increase quasi-linearly with the simulated global warming, but their geographical distribution remains nearly constant throughout the twenty-first century (not shown).

The shape of the probability distribution of x′(t) is assumed to be independent of external conditions. Some counter evidence of this has been found from analysis of daily climate variability in regional climate model simulations (Kjellström et al. 2007), but we would expect such changes to be a third-order effect for the topics studied here. In fact, for the results shown in this paper, even the changes in the magnitude of variability are much less important than the changes in the mean.

\( \bar{G}(t) \) is estimated here as a running 11-year mean of G. The assumption behind this somewhat arbitrary choice is that most of the interannual variability of the global mean temperature is unforced, whereas most of the variability on decadal and longer time scales is forced (e.g., Stott et al. 2000). We do not claim that this definition of \( \bar{G}(t) \) is by any means exact. There is no sharp spectral cut-off between forced and unforced variability in nature, and even if such a cut-off existed, a more sophisticated low-pass filter would approximate it better than a simple running mean. On the other hand, for most parts of the world, only a small fraction of the local interannual climate variability is linearly correlated with the interannual variations of the global mean temperature. This makes the derived statistics of x′(t) relatively insensitive to small errors in \( \bar{G}(t) \). Furthermore, the present choice of \( \bar{G}(t) \) requires only five years of data after the year t, which is an advantage when extrapolating past observations forward in time. It allows us to use the observed evolution of \( \bar{G}(t) \) almost up to the present, before substituting model simulations, in which the rate of global warming might be biased by model errors (see Sect. 3.2 below).
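The centered 11-year running mean used for the smoothed global mean temperature can be sketched as follows (a minimal NumPy illustration; the function name is hypothetical):

```python
import numpy as np

def running_mean_11(g):
    """Centered 11-year running mean of an annual global mean temperature series.
    The first and last 5 years, where the window is incomplete, are left as NaN,
    reflecting that the definition needs 5 years of data on each side of year t."""
    g = np.asarray(g, dtype=float)
    out = np.full_like(g, np.nan)
    # "valid" convolution: only windows fully inside the series
    out[5:-5] = np.convolve(g, np.ones(11) / 11.0, mode="valid")
    return out
```

In the paper the missing tail is filled by switching from observed to model-simulated global mean temperatures after the reference year, rather than by shortening the window.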

Although the regression coefficients in Eqs. (2) and (3) could in principle be estimated from observations, we derive them from model simulations, which span a longer period of time and a much larger range of global mean warming. The coefficients for the mean climate, A and B in Eq. (2), were estimated by regressing x(t) in the years 1906–2093 against the running 11-year mean of the simulated global mean temperature. The deviations x′(t) were then computed as the residuals from this regression. Finally, to estimate the coefficients C and D in Eq. (3), the absolute value of these deviations was regressed against the 11-year running mean global mean temperature. This was done separately for each of the 21 models.
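The two-stage fit can be written compactly as below (a minimal NumPy sketch with a hypothetical function name; in the paper this is done per model, grid box, and calendar month):

```python
import numpy as np

def fit_delta_coefficients(x, g_bar):
    """Fit Eqs. (2)-(3): x_bar(t) = A + B*G_bar(t) and |x'|(t) = C + D*G_bar(t).
    x and g_bar are aligned annual series (e.g., 1906-2093) for one model,
    one grid box, and one calendar month."""
    # Eq. (2): linear regression of x on the smoothed global mean temperature
    B, A = np.polyfit(g_bar, x, 1)          # polyfit returns [slope, intercept]
    resid = x - (A + B * g_bar)             # x'(t), the internal-variability part
    # Eq. (3): regress the absolute residuals on G_bar
    D, C = np.polyfit(g_bar, np.abs(resid), 1)
    return A, B, C, D
```

Using the residuals from the first regression as x′(t) follows directly from the decomposition in Eq. (1).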

3.2 Modification of observed time series

Next, the model-based regression coefficients were used for extrapolating observations. Let \( x_{\text{obs}}(t_0) \) denote the observed value of temperature or precipitation in year \( t_0 \), and let \( y_{\text{obs}}(t_0, t) \) be the modified value derived from this observation and representing the climate of the forecast year t. Furthermore, let \( \Delta\bar{G}(t_0, t) \) be the change in the 11-year running mean global mean temperature from year \( t_0 \) to year t. This is estimated as the sum of two terms:
$$ \Delta\bar{G}(t_0, t) = \Delta\bar{G}_{\text{obs}}(t_0, t_{\text{ref}}) + \Delta\bar{G}_{\text{sim}}(t_{\text{ref}}, t) $$
(4)
where the first term on the right gives the observed change from year \( t_0 \) to year \( t_{\text{ref}} \) and the second term the simulated change from year \( t_{\text{ref}} \) to year t. To minimize the effect of errors in model-simulated global mean temperature changes, \( t_{\text{ref}} \) is best chosen as late as the observational data allow. With observations of the global mean temperature available up to the year 2006, we use \( t_{\text{ref}} = 2001 \), except in the verification study in Sect. 4, where \( t_{\text{ref}} = 1985 \) is used.

The value \( y_{\text{obs}}(t_0, t) \) is derived from \( x_{\text{obs}}(t_0) \) in two steps. First, an intermediate value \( z_{\text{obs}}(t_0, t) \) is computed by only taking into account the estimated change in mean climate. Then, when these intermediate values have been calculated for all years \( t_0 \) within the baseline period, they are converted to \( y_{\text{obs}}(t_0, t) \) using the estimated changes in variability.

The conversion from \( x_{\text{obs}}(t_0) \) to \( z_{\text{obs}}(t_0, t) \) for temperature is simple:
$$ z_{\text{obs}}(t_0, t) = x_{\text{obs}}(t_0) + B\,\Delta\bar{G}(t_0, t). $$
(5)
The treatment of precipitation is slightly more complicated, because we follow the common assumption that models simulate relative changes in this variable more skillfully than absolute changes. Consequently, the equation
$$ z_{\text{obs}}(t_0, t) = x_{\text{obs}}(t_0)\,\frac{\bar{x}(t)}{\bar{x}(t) - B\,\Delta\bar{G}(t_0, t)} $$
(6)
is used. Thus, the observed precipitation in year \( t_0 \) is multiplied by the ratio between the expected value of the simulated precipitation in year t [estimated from Eq. (2)] and the corresponding expected value adjusted backwards for the global warming from \( t_0 \) to t.
To account for changes in variability, the mean of \( z_{\text{obs}}(t_0, t) \) over all \( t_0 \) within the baseline period is first computed. This value is denoted as Z(t). Then, each value \( z_{\text{obs}}(t_0, t) \) is replaced by
$$ y_{\text{obs}}(t_0, t) = Z(t) + R(t_0, t)\,\bigl(z_{\text{obs}}(t_0, t) - Z(t)\bigr). $$
(7)
For temperature, \( R(t_0, t) \) is simply the estimated ratio between the variability at time t and the variability at time \( t_0 \):
$$ R(t_0, t) = \frac{\overline{|x'|}(t)}{\overline{|x'|}(t) - D\,\Delta\bar{G}(t_0, t)} $$
(8)
where \( \overline{|x'|}(t) \) is obtained from Eq. (3). In the case of precipitation, however, the multiplication in Eq. (6) already scales the variability by the same factor as the mean precipitation. To avoid double counting, Eq. (8) is replaced for precipitation by
$$ R(t_0, t) = \frac{\overline{|x'|}(t)}{\overline{|x'|}(t) - D\,\Delta\bar{G}(t_0, t)}\,\frac{\bar{x}(t) - B\,\Delta\bar{G}(t_0, t)}{\bar{x}(t)}. $$
(9)
A potential pitfall in Eqs. (6), (8) and (9) is that, in rare circumstances, the ratios in these equations may become negative or approach infinity. To avoid this, the change factor of precipitation in Eq. (6) and the ratio \( R(t_0, t) \) in Eqs. (8) and (9) are constrained to the range 0.4–2.5.
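For temperature, the chain of Eqs. (5), (7) and (8) can be sketched as follows (a minimal NumPy illustration; the function name and argument layout are hypothetical, and the clipping bounds are the 0.4 and 2.5 limits stated above):

```python
import numpy as np

def extrapolate_temperature(x_obs, dG, B, D, abs_resid_t):
    """Delta-change extrapolation of baseline temperatures (Eqs. 5, 7, 8).

    x_obs       : baseline observations x_obs(t0), one value per year t0
    dG          : Delta G_bar(t0, t) for each t0, from Eq. (4)
    B, D        : regression coefficients from Eqs. (2)-(3)
    abs_resid_t : expected |x'| at the forecast time t, from Eq. (3)
    """
    # Eq. (5): shift each observation by the estimated mean-climate change
    z = x_obs + B * dG
    Z = z.mean()                                  # baseline-period mean of z
    # Eq. (8): variability ratio, clipped to [0.4, 2.5] to avoid blow-up
    R = abs_resid_t / (abs_resid_t - D * dG)
    R = np.clip(R, 0.4, 2.5)
    # Eq. (7): rescale the deviations around the shifted mean
    return Z + R * (z - Z)
```

The precipitation case would replace Eq. (5) with the multiplicative Eq. (6) and Eq. (8) with Eq. (9), but is otherwise analogous.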

3.3 Treatment of intermodel differences

As the coefficients A, B, C and D in Eqs. (2), (3) and \( \Delta\bar{G}_{\text{sim}}(t_{\text{ref}}, t) \) in Eq. (4) vary between the models, the same \( x_{\text{obs}}(t_0) \) gives slightly different values of \( y_{\text{obs}}(t_0, t) \) for each model (Fig. 1). Most of the results shown in this paper are based on a posteriori averaging over the 21 models, that is, the resulting probability distributions were averaged after applying the extrapolation procedure separately for each model. However, Sect. 5 also includes some comparison between the distributions based on the individual models, to study the effects of modeling uncertainty.

When testing the method in hindcast mode (Sect. 4), a few other methodological choices were also explored. As an alternative to the a posteriori averaging, A, B, C, D and \( \Delta\bar{G}_{\text{sim}}(t_{\text{ref}}, t) \) were averaged over the 21 models before applying Eqs. (4)–(9). As this choice eliminates the variation between the models, it produces narrower distributions for \( y_{\text{obs}}(t_0, t) \) than the a posteriori averaging, even though the difference is generally small. Furthermore, we also tested a version of the method in which changes in variability were neglected, that is, \( R(t_0, t) \) was set identically to one in Eq. (7).

3.4 Gaussian smoothing

As a post-processing step, but before combining the results from the 21 models, the discrete distributions obtained from Eqs. (2) to (9) were smoothed to a continuous distribution using a Gaussian kernel. Continuous probability distributions representing the observed climate were also derived in exactly the same manner.

The input for the smoothing consisted, for each model, of the N extrapolated observations \( y_{\text{obs}}(t_0, t) \), \( t_0 \in [t_1, t_1 + N - 1] \), where \( t_1 \) denotes the beginning and N the length of the baseline period, generally chosen as 30 years. For temperature, the probability density function was estimated as
$$ f(y) = \frac{1}{Nbs} \sum_{t_0 = t_1}^{t_1 + N - 1} G\!\left( \left( y - \left( M + \sqrt{\frac{N}{N-1}(1 - b^2)}\,\bigl(y_{\text{obs}}(t_0, t) - M\bigr) \right) \right) \Big/ (bs) \right) $$
(10)
where M and s are the mean and standard deviation of \( y_{\text{obs}} \), and b (0 < b ≤ 1) is a smoothing coefficient. G is the density function of the standard normal distribution. For b = 1, Eq. (10) returns a Gaussian distribution with the same mean (M) and standard deviation (s) as the sample of N original or modified observations. For smaller values of b, the mean and the standard deviation are unchanged, but the shape of the distribution follows the original discrete distribution more closely. With increasing b, sampling errors in the density function f(y) and in the cumulative distribution function decrease. On the other hand, overly strong smoothing may introduce systematic biases in the smoothed distribution, particularly near its tails, if the distribution of the input data differs significantly from normal. Based partly on hindcast verification (Sect. 4) and partly on subjective judgment, a coefficient of b = 0.5 is used in this study. An example of the effects of the smoothing is shown in Fig. 2.
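Eq. (10) shrinks each observation toward the sample mean before adding a Gaussian kernel of width bs, so that the smoothed distribution retains the sample mean and standard deviation. A minimal sketch (hypothetical function name; s is taken as the ddof = 1 sample standard deviation, which is what the sqrt(N/(N−1)) factor implies):

```python
import numpy as np

def smoothed_pdf(y_grid, y_obs, b=0.5):
    """Evaluate the Gaussian-kernel density of Eq. (10) on a grid of y values."""
    y_obs = np.asarray(y_obs, dtype=float)
    N = y_obs.size
    M, s = y_obs.mean(), y_obs.std(ddof=1)
    # shrink factor sqrt(N/(N-1) * (1 - b^2)) keeps mean M and std s exactly
    shrink = np.sqrt(N / (N - 1) * (1.0 - b ** 2))
    centers = M + shrink * (y_obs - M)
    # one standard-normal kernel of width b*s per (modified) observation
    u = (np.asarray(y_grid)[:, None] - centers[None, :]) / (b * s)
    G = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return G.sum(axis=1) / (N * b * s)
```

For b close to 1 this approaches a fitted normal distribution; for small b it approaches the discrete sample distribution.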

The distribution of monthly, and to some extent seasonal and annual, precipitation totals tends to be positively skewed, particularly in arid areas. To reduce or eliminate this skewness, the smoothing by Eq. (10) was applied to the square root of precipitation, rather than to precipitation itself.

4 Hindcast verification

To test the method, we applied it to hindcasting temperature and precipitation in the years 1991–2002. Two basic questions were addressed:
1. How well does the extrapolation perform, in comparison with the null alternative of using the observed distributions from the baseline period (1961–1990) as such?

2. How is the hindcast performance affected by methodological details such as (a) the Gaussian smoothing, (b) the treatment of intermodel differences, (c) the inclusion or exclusion of changes in variability, and (d) the choice of the baseline period?
To study the second question, several slightly different methods for calculating the baseline and extrapolated distributions were tested. The ones discussed in this paper are specified in Table 2. In Sect. 4.1 below, however, we will only use the version FULL6190C; discussion of the other versions is deferred to Sect. 4.2. In FULL6190C, the baseline was 1961–1990, the extrapolation method included changes in mean climate and variability, intermodel differences were taken into account as described in Sect. 3.3, and Gaussian smoothing was used with b = 0.5. The same smoothing was also used for estimating the quantiles of the baseline (1961–1990) climate in Sect. 4.1.
Table 2

The hindcast methods used in Table 4

Method          | Baseline  | Extrapolation of baseline observations                                             | Smoothing
(1) CRU6190D    | 1961–1990 | Baseline observations used without extrapolation                                   | None
(2) CRU6190C    | 1961–1990 | Baseline observations used without extrapolation                                   | b = 0.5
(3) FULL6190C   | 1961–1990 | Extrapolation including changes in mean and variability, separately for each model | b = 0.5
(4) SIMPLE6190C | 1961–1990 | Extrapolation using only multi-model mean changes in time mean climate             | b = 0.5
(5) CRU0190C    | 1901–1990 | Baseline observations used without extrapolation                                   | b = 0.5
(6) FULL0190C   | 1901–1990 | Extrapolation including changes in mean and variability, separately for each model | b = 0.5

In all cases, the evolution of the 11-year running mean global mean temperature was taken from observations up to the year 1985 and from models thereafter, to avoid using observational data that were not available in the year 1990. The forcing used in the simulations does not strictly fulfill this condition, since about half of the models included the Pinatubo eruption, which could not have been predicted in advance. However, verification statistics obtained using the subset of models that exclude volcanic forcing are almost identical to the statistics for the full 21-model ensemble (not shown).

4.1 Hindcasted versus observed frequency distributions

Figures 3a, b show the hindcasted and observed (CRU) frequencies of warm months in the years 1991–2002, where a “warm” month is defined as one warmer than the median for 1961–1990. As averaged over the land area north of 60°S and over the whole 12-year period, the observed frequency was 67%. In some tropical areas, where interannual temperature variability is modest and small changes in the mean temperature can therefore result in a large relative shift in the frequency distribution, up to over 90% of all months in 1991–2002 were warm. This tropical maximum in the frequency of warm months contrasts with the geographical distribution of the time mean temperature change from 1961–1990 to 1991–2002, which was largest in high-latitude North America and central Eurasia (not shown). On the other hand, the CRU analysis also shows some areas where the fraction of warm months was below 50%, particularly in parts of South America.
Fig. 3

Relative frequency of warm (warmer than the median for 1961–1990) and wet (precipitation above the median for 1961–1990) months in the years 1991–2002. Left frequencies predicted by the FULL6190C hindcast, right the CRU analysis

The average hindcast frequency of warm months is 66%, in good agreement with the CRU analysis. The geographical distribution is much smoother, but this is not unexpected: the averaging over 21 models and the derivation of the regression coefficients from long simulated time series virtually eliminate the effects of internal climate variability. Nonetheless, the hindcast and the observed distribution share common features, particularly the occurrence of the highest frequencies in the tropics. The spatial correlation between the two fields is 0.57.
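The warm-month statistic can be made concrete as follows (a schematic sketch for a single grid box, not the authors' code; the function name, array layout, and period arguments are hypothetical):

```python
import numpy as np

def warm_month_frequency(temps, years, base=(1961, 1990), test=(1991, 2002)):
    """Fraction of test-period months warmer than the baseline median,
    with the median computed separately for each calendar month.

    temps : array of shape (n_years, 12), monthly mean temperatures
    years : matching 1-D array of calendar years
    """
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    in_test = (years >= test[0]) & (years <= test[1])
    medians = np.median(temps[in_base], axis=0)   # one median per calendar month
    warm = temps[in_test] > medians               # boolean, shape (n_test_years, 12)
    return warm.mean()                            # fraction of warm months
```

In an unchanged climate this fraction would be close to 0.5 by construction; values well above 0.5, as in Fig. 3, indicate a warming shift relative to the baseline.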

Conclusions for precipitation (Fig. 3c–d) are different. The hindcast frequency of wet months is almost everywhere between 45 and 55%, reflecting the much lower signal-to-noise ratio of precipitation changes compared with temperature changes. The higher values in northern Greenland are an artefact: there, the lack of observations makes interannual precipitation variability in the CRU analysis unrealistically small (New et al. 2000), so the increase in precipitation inferred from the models results in too large a relative shift in the frequency distribution. The observed frequencies are more variable than the hindcast ones, presumably mostly due to internal climate variability but possibly also as a result of inhomogeneities in precipitation data. The spatial correlation between the hindcast and the observed distribution is negligible (r = 0.10).

The analysis is extended to other quantiles of the temperature and precipitation distributions in Table 3. For temperature, the agreement between the hindcast and the observed area and time mean frequencies is excellent. The frequency of very cold months (below the tenth percentile for 1961–1990) is in both cases near 5%, whereas the frequency of very warm months (above the 90th percentile) is slightly over 20%. Extremely warm months (above the 99th percentile) have a frequency of 4% in the hindcast and 5% in the CRU analysis. For annual mean temperatures, the decrease in variability with increasing averaging time scale makes the overrepresentation of high-end quantiles even larger than for the monthly means. In this case as well, the agreement between the hindcast and observations is very good, although the hindcast slightly underestimates the frequencies in the upper part of the distribution.
Table 3

Frequency distribution of local monthly and annual mean temperature and precipitation values in 1991–2002 relative to the quantiles for the period 1961–1990

Quantile range (%) | T, monthly (Hindcast / CRU) | T, annual (Hindcast / CRU) | P, monthly (Hindcast / CRU) | P, annual (Hindcast / CRU)
<10                | 4.7 / 5.0                   | 1.9 / 2.1                  | 10.2 / 8.7                  | 9.7 / 9.4
>50                | 66.1 / 67.0                 | 78.5 / 80.2                | 50.6 / 51.7                 | 51.4 / 51.3
>90                | 20.6 / 20.5                 | 33.5 / 35.5                | 10.4 / 10.1                 | 10.9 / 9.7
>99                | 4.1 / 4.9                   | 9.7 / 11.7                 | 1.2 / 1.7                   | 1.4 / 1.6

For each case, the first number gives the frequency (in percent) predicted by the hindcast and the second the value obtained from the CRU analysis. The frequencies are averaged over the land area covered by the CRU data set

For precipitation, the average hindcast and observed frequencies both remain close to the values expected in an unchanged climate. This reflects both the low signal-to-noise ratio of precipitation changes and the fact that the changes vary in sign between different parts of the world.

4.2 Continuous ranked probability score

To further evaluate the extrapolation method and to study the dependence of its performance on methodological details, we used the Continuous Ranked Probability Score (CRPS). As detailed in the Appendix, CRPS is an integral of the Brier score B over all choices of the threshold value used for defining B. A decrease in CRPS signifies an improvement of the probabilistic forecast. We evaluated CRPS for both monthly and annual temperature and precipitation, averaging the score temporally over the 144 months and 12 years in 1991–2002 and geographically over the land area covered by the CRU data set.
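For concreteness, a minimal CRPS for a discrete (sample-based) forecast distribution can be computed via the standard identity CRPS = E|X − y| − ½E|X − X′|, which is equivalent to integrating the Brier score over all thresholds. This sketch is not the paper's exact implementation, only an illustration of the score being averaged:

```python
import numpy as np

def crps_empirical(forecast_samples, obs):
    """CRPS of a discrete forecast distribution (given as samples x_i)
    against a single observation y, using
    CRPS = E|X - y| - 0.5 * E|X - X'|.
    A lower score indicates a better probabilistic forecast."""
    x = np.asarray(forecast_samples, dtype=float)
    term1 = np.abs(x - obs).mean()                         # E|X - y|
    term2 = 0.5 * np.abs(x[:, None] - x[None, :]).mean()   # 0.5 * E|X - X'|
    return term1 - term2
```

For a deterministic "forecast" (a single sample) the score reduces to the absolute error, which is why CRPS values can be reported in the physical units of the variable, as in Table 4.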

Several ways of making the probabilistic hindcast were tested (Table 2). The resulting CRPS values are given in Table 4 in normalized form, as the ratio to the score obtained when using the baseline observations from 1961 to 1990 as such, without any extrapolation or Gaussian smoothing (CRU6190D). The results are discussed below in terms of the questions posed at the beginning of Sect. 4.
Table 4

CRPS statistics for the methods defined in Table 2, as averaged over the years 1991–2002 and over the land area covered by the CRU data set

Method          | Temperature, monthly | Temperature, annual | Precipitation, monthly  | Precipitation, annual
(1) CRU6190D    | 1.0000 (0.8780°C)    | 1.0000 (0.4887°C)   | 1.0000 (0.4924 mm/day)  | 1.0000 (0.1894 mm/day)
(2) CRU6190C    | 0.9908               | 0.9898              | 0.9898                  | 0.9904
(3) FULL6190C   | 0.9167               | 0.7135              | 0.9904                  | 0.9935
(4) SIMPLE6190C | 0.9210               | 0.7184              | 0.9913                  | 0.9996
(5) CRU0190C    | 1.0308               | 1.1308              | 0.9867                  | 1.0070
(6) FULL0190C   | 0.9274               | 0.7481              | 0.9869                  | 1.0057

Values are expressed as ratios to the score obtained by using the discrete distribution from the observations for 1961–1990 as the forecast (CRU6190D)

1. Does the extrapolation method improve the probabilistic hindcast? Comparing lines 2 (CRU6190C) and 3 (FULL6190C) shows a clear improvement for temperature but not for precipitation. The improvement for annual mean temperatures (28%) is larger than that for monthly temperatures (7%). When 1901–1990 instead of 1961–1990 is used as the baseline, the relative improvement of the temperature hindcasts from the extrapolation becomes even larger (FULL0190C versus CRU0190C), although the absolute CRPS values are higher.
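The quoted improvement percentages follow directly from the CRPS ratios in Table 4, taking CRU6190C as the reference:

```python
# CRPS ratios (relative to CRU6190D), as listed in Table 4
crps = {
    "CRU6190C":  {"monthly": 0.9908, "annual": 0.9898},
    "FULL6190C": {"monthly": 0.9167, "annual": 0.7135},
}

for scale in ("monthly", "annual"):
    ref = crps["CRU6190C"][scale]
    new = crps["FULL6190C"][scale]
    improvement = 100.0 * (ref - new) / ref
    print(f"{scale}: {improvement:.0f}% improvement")
# → monthly: 7% improvement, annual: 28% improvement
```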

2a. Is the Gaussian smoothing useful? This was studied by comparing the CRPS values for smoothed and discrete distributions, both when using the baseline observations as such and when using the extrapolation method. A slight but systematic decrease in CRPS as a result of smoothing with b = 0.5 was found in both cases, as shown for the first of them on lines 1–2 (CRU6190C versus CRU6190D). This implies that the decrease in sampling errors achieved by the smoothing outweighs systematic biases possibly caused by the method. The smoothing was also tested with b = 1.0, that is, by using pure Gaussian distributions for temperature and for the square root of precipitation. The CRPS values were very similar to those obtained with b = 0.5 (not shown). Here, b = 0.5 is preferred to avoid the systematic errors that a stronger smoothing might cause in some cases near the tails of the distribution (although CRPS, as a measure designed to verify the distributions as a whole, is not very sensitive to such errors). However, the main results in this paper are virtually independent of the degree of smoothing applied.
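One plausible form of such a smoothing is a blend of the empirical quantiles with those of a fitted Gaussian, which reproduces the two limiting cases named in the text (b = 0 discrete, b = 1 pure Gaussian). This is a sketch under that assumption; the paper's exact formulation is given in its Appendix, and for precipitation the smoothing is applied to the square root of the values:

```python
import numpy as np
from statistics import NormalDist

def smoothed_quantiles(sample, probs, b=0.5):
    """Blend empirical quantiles with those of a Gaussian fitted to the
    sample. b = 0 reproduces the discrete (empirical) distribution,
    b = 1 a pure Gaussian; intermediate b reduces sampling noise while
    limiting systematic distortion of the tails."""
    x = np.asarray(sample, dtype=float)
    empirical = np.quantile(x, probs)
    gauss = NormalDist(x.mean(), x.std(ddof=1))
    fitted = np.array([gauss.inv_cdf(p) for p in probs])
    return (1.0 - b) * empirical + b * fitted
```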

2b, c. To study how important it is to take into account changes in variability and intermodel differences in time mean climate change, simpler extrapolation hindcasts were made by omitting one or both of these factors. When both were omitted (SIMPLE6190C), a slight deterioration in CRPS compared with FULL6190C occurred. This was essentially due to the neglect of intermodel differences: regardless of whether changes in variability were included, the resulting CRPS values agreed to within a few hundredths of a per cent for both temperature and precipitation (not shown). On the other hand, the CRPS values for SIMPLE6190C were, for temperature, much closer to those for FULL6190C than to those for CRU6190C. Thus, most of the skill of FULL6190C comes simply from the multi-model mean time mean warming, with only a slight additional improvement from including intermodel differences as a representation of modeling uncertainty.

2d. As an alternative to the 1961–1990 baseline, the longer baseline 1901–1990 was also tested because of its potential for reducing sampling errors. However, the CRPS statistics indicate that this was not beneficial, with the marginal exception of monthly precipitation (compare CRU0190C with CRU6190C and FULL0190C with FULL6190C). In particular, a relatively large increase in CRPS for annual mean temperature (and, to a lesser extent, monthly temperatures) is evident from CRU6190C to CRU0190C. Thus, the advantage of the increased sample size is outweighed by the fact that the temperatures in 1901–1960 were less representative of the near-present climate than those in 1961–1990. For the model-based extrapolation hindcast, the increase in CRPS with the longer baseline is much smaller. Thus, the climate change information from the models, together with the information on the observed global mean warming before 1985, nearly compensates for the poorer representativeness of the early observations.

Returning to question 1, the difference between temperature and precipitation in the FULL6190C versus CRU6190C contrast in CRPS is consistent with the results shown in Fig. 2 and Table 3. For temperature, the observed changes in the frequency distributions between 1961–1990 and 1991–2002 were substantial, and their general character and magnitude were captured well by the extrapolation method. By contrast, observed changes in precipitation climate were rather badly predicted by the extrapolation, which, in fact, marginally deteriorated CRPS for this variable. Several factors may contribute to this result:
  1. Changes in precipitation climate have thus far had a low signal-to-noise ratio, which makes them very difficult to predict.

  2. The regression coefficients between the local climate and the global mean temperature are derived from simulations in which the forcing is dominated by increasing greenhouse gas concentrations, particularly in the twenty-first century when the simulated warming is largest. However, the differences in climate between 1961–1990 and 1991–2002 were also affected by other types of forcing, particularly changes in anthropogenic aerosol loading and aerosols from volcanic eruptions. This issue may be particularly important for precipitation, which appears to be generally more sensitive to the nature and distribution of radiative forcing than temperature is (Hegerl et al. 2007).

  3. The response of precipitation to increasing greenhouse gas concentrations might differ between the models and the real world.

  4. Inhomogeneities in observational data might also play a role.
Although issues 2–4 would all warrant further research, the low signal-to-noise ratio alone means that the potential to improve estimates of present climate with model results is much lower for precipitation than for temperature. This was demonstrated in a perfect-model cross-verification test with the CCSM3 model, which has the largest number of realizations (seven) available for the 20C3M and A1B simulations. We chose, in turn, each of the seven simulations as a pseudo-truth, against which the probability distributions derived from the other six simulations were verified. The average cross-verified CRPS ratio between the hindcast obtained from the extrapolation method and the hindcast assuming an unchanged climate was 0.998 (0.988) for monthly (annual) precipitation. For comparison, a near-perfect method was also tested, in which the probability distribution of precipitation in a given year was inferred directly from the simulated precipitation in the other six ensemble members in that year and its four nearest neighbours. Even in this case the CRPS ratio was lowered only slightly, to 0.996 (0.984) for monthly (annual) precipitation. Thus, even in the absence of any other sources of error, the low signal-to-noise ratio of precipitation changes would have kept the improvement in CRPS very small.
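The structure of such a perfect-model test is a leave-one-out loop. The sketch below scores a single scalar quantity with an empirical CRPS; the actual test operates on full gridded monthly and annual distributions, so this is illustrative only:

```python
import numpy as np

def crps_empirical(samples, obs):
    """CRPS of a sample-based forecast via E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(samples, dtype=float)
    return np.abs(x - obs).mean() - 0.5 * np.abs(x[:, None] - x[None, :]).mean()

def perfect_model_crps(member_values):
    """Leave-one-out cross-verification: each ensemble member in turn
    serves as the pseudo-truth and is scored against the distribution
    formed by the remaining members; the average score is returned."""
    vals = np.asarray(member_values, dtype=float)
    scores = [crps_empirical(np.delete(vals, i), vals[i])
              for i in range(len(vals))]
    return float(np.mean(scores))
```

Taking the ratio of this score for the extrapolation hindcast and for the unchanged-climate hindcast gives the cross-verified CRPS ratios quoted in the text.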

5 Application to present and future climate

In this section, the extrapolation method is used to study the characteristics of present (around the year 2007) and future climates. The baseline period 1971–2000 is used, and the 11-year running mean global mean temperature follows the analysis of Brohan et al. (2006) up to the year 2001, after which it is inferred from the models. For the year 2007, this yields a 21-model mean global mean warming of 0.44°C relative to the mean for 1971–2000. The resulting 21-model mean changes in annual mean temperature and precipitation, obtained by multiplying the global mean temperature change with the regression coefficients for time mean climate change, are shown in Fig. 4.
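The core of the delta change step can be sketched as follows. The function and variable names are illustrative, and the simple linear stretching of anomalies is a simplified stand-in for the paper's exact treatment of variability changes:

```python
import numpy as np

def extrapolate_observations(obs, mean_coeff, std_coeff, global_dT):
    """Delta-change extrapolation of a baseline observation series for
    one grid box and calendar month.

    mean_coeff: model-derived change in the local time mean per degree
                of global mean warming (regression coefficient)
    std_coeff:  model-derived fractional change in interannual standard
                deviation per degree of global mean warming
    global_dT:  global mean warming since the baseline, e.g. 0.44 degC
                for 2007 relative to 1971-2000
    """
    obs = np.asarray(obs, dtype=float)
    base_mean = obs.mean()
    scale = 1.0 + std_coeff * global_dT    # stretch anomalies (variability)
    shift = mean_coeff * global_dT         # shift the time mean
    return base_mean + scale * (obs - base_mean) + shift
```

Applying this to each grid box, calendar month, and model yields the adjusted observation series from which the present-day probability estimates below are derived.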
Fig. 4

Changes in climatological mean values of (a) annual mean temperature (°C) and (b) precipitation (percent of the mean for 1971–2000) from 1971–2000 to 2007, as estimated with the extrapolation method. The figure is based on 21-model mean results

Where not stated otherwise, the probability estimates shown here are averaged over all 21 models used as the basis of the extrapolation. Figure 5 shows the resulting estimates of the relative frequency of warm (above the median for 1971–2000) and very warm (above the 90th percentile for 1971–2000) months and years in the present climate. The frequency map for warm months (Fig. 5a) is similar to that in Fig. 3a, although the values are slightly higher. On average, about 70% of all months in the present climate (i.e., 8.4 months per year) should be warmer than the median for 1971–2000, with the lowest fractions (typically 60–65%) over mid-to-high-latitude Eurasia and North America and southeastern South America, and the highest fractions (80–90%) in the equatorial tropics. Unsurprisingly, the corresponding numbers for the annual mean temperature are higher. The geographically averaged expected frequency of warm years in the present climate is 85%, with values exceeding 95% in parts of the tropics.
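The strong tropical response follows from simple Gaussian reasoning: for a mean warming Δ and interannual standard deviation σ, the expected fraction of warm months is Φ(Δ/σ). The σ values below are illustrative assumptions, not numbers from the paper:

```python
from statistics import NormalDist

def warm_frequency(mean_shift, sigma):
    """Probability that a Gaussian monthly temperature exceeds the
    baseline median after its mean has shifted by mean_shift, given
    interannual standard deviation sigma."""
    return NormalDist().cdf(mean_shift / sigma)

# The same warming gives a much larger frequency shift where
# interannual variability is small (tropics):
print(warm_frequency(0.5, 1.5))   # midlatitude-like sigma: ~0.63
print(warm_frequency(0.5, 0.3))   # tropical-like sigma:    ~0.95
```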
Fig. 5

Expected relative frequency of warm (above the median for 1971–2000) and very warm (above the 90th percentile for 1971–2000) months (left) and full years (right) in the present (around the year 2007) climate. The figure is based on 21-model mean results from the delta change extrapolation

If 1961–1990 instead of 1971–2000 is used for defining normal climate, these numbers grow larger. In comparison with this baseline, as a geographical average 74% of all months and 90% of full years are expected to be warm in the present climate.

The frequency maps for very warm (above the 90th percentile for 1971–2000) months and years (Fig. 5c, d) share largely the same geographical pattern as those for warm months. Spatially averaged, 23% of all months and 38% of full years should be “very warm” in the present climate. Considerably higher frequencies are expected in parts of the tropics.

The results in Fig. 5 represent frequencies averaged over 21 models. To illustrate the uncertainty associated with intermodel differences, the two maps in Fig. 6 show the intermodel range in the frequency of warm months. These maps were obtained by selecting, in each grid box, the models with the lowest and highest derived frequencies (thus, different models were selected in different grid boxes). Although the differences are not negligible, all the models point towards an increasing frequency of warm months over all land areas. For the model with the lowest geographically averaged fraction of warm months (GISS-AOM), the mean value is 66%. At the other extreme, data from MIROC3.2 (hires) suggest a frequency of 74%.
Fig. 6

Expected relative frequency of warm (above the median for 1971–2000) months in the present climate: a minimum and b maximum among the 21 models

In the future, with continued global warming, warm and very warm months and years are projected to become increasingly common at the expense of cold months and years, as shown for the A1B scenario in Fig. 7. Averaged geographically over all land areas excluding Antarctica and using the multi-model mean probabilities, only about 7% of all months and only 0.5% of full years are expected to be colder than the median for 1971–2000 around the year 2050. At the same time, very cold individual months (below the 10th percentile for 1971–2000) are expected to occur with an average frequency of only about 0.5%, whereas very cold years should have virtually disappeared. At least equally striking is the dramatic increase in the frequency of warm extremes. According to our calculation, more than half of all months should be warmer than the 90th percentile for 1971–2000 by about the year 2035, and warmer than the 99th percentile for 1971–2000 by about the year 2065. The same thresholds for the annual mean temperature are reached by about 2010 and in the early 2030s, respectively. Again, the most rapid relative shifts in the frequency distributions are expected in the tropics, where interannual temperature variability is small. Conversely, these area mean numbers slightly exaggerate the rate of change that can be expected in the midlatitudes.
Fig. 7

Changes in the frequency distribution of (a) monthly and (b) annual mean temperatures under the A1B scenario. The area below the bottommost line in a shows the fraction of months colder than the 10th percentile for 1971–2000, the area between this and the next line the fraction of months with temperature between the 10th and 50th percentiles for 1971–2000, and so on. The figure is based on 21-model mean results from the delta change extrapolation and the frequencies are averaged over all land areas excluding Antarctica

Changes in the statistics of monthly and annual precipitation are likely to be much more modest than those for temperature. In most parts of the world, the expected frequency of wet (precipitation above the median for 1971–2000) months in the present climate is between 45 and 55%, whereas the expected frequency of wet years generally varies between 40 and 60% (Fig. 8a, b). As noted in Sect. 4.1, the much higher values in northern Greenland are an artefact stemming from unrealistically small interannual precipitation variability in the CRU data set in this area.
Fig. 8

Expected relative frequency of wet (precipitation above the median for 1971–2000) months and years in the present climate (2007) and in the climate of the year 2050, under the A1B scenario. The figure is based on 21-model mean results from the delta change extrapolation

With the projected continuation of the global warming, the changes in precipitation statistics are also expected to grow larger with time (Fig. 8c, d). Around the year 2050, up to 65% of all months and up to over 80% of all years are projected to be wet in the northernmost parts of Eurasia and North America, while only about 35% of individual months and down to 25% of full years are projected to be wet in parts of Central America and the Mediterranean region. Nonetheless, the projected changes in the frequency distribution of precipitation from 1971–2000 to 2050 are mostly smaller than those for temperature from 1971–2000 to 2007.

6 Summary and discussion

When climate is changing, past observations tend to give a biased view of the actual present-day climate. Although relevant for climate changes of any origin, this issue has been made particularly acute by the ongoing anthropogenic global warming. In this paper, we have introduced a delta change type extrapolation method that aims to make past observations representative of present or future climate, using the observed evolution of the global mean temperature and model-simulated changes in time mean climate and interannual variability. Using this method, we have compared the observed distributions of monthly-to-annual temperature and precipitation in the period 1971–2000, which is commonly used for defining normal climate, with estimates of the actual present-day distributions, representing the climate around the year 2007.

Although the differences between the present and the recent past are still expected to be small for precipitation, they are more substantial for temperature. Our calculations suggest that, in the present climate, typically 8–9 months per year, and 8–9 years per decade, should be warmer than the median for 1971–2000. In the tropics, where interannual temperature variability is generally modest, an even larger fraction of warm months and years is expected, despite a smaller time mean warming there than in higher latitudes. With projected continued increases in the global mean temperature, these numbers will continue to increase. Along with the changes in the middle of the temperature distribution, the statistics of extremes are also affected: very cold (warm) months and years become much more (much less) unusual. All these features are already seen when comparing the observed temperatures between the years 1991–2002 and 1961–1990.

In a probabilistic verification of hindcasts for the period 1991–2002, the delta change extrapolation performed well for temperature, clearly beating the null alternative of using the observed distribution from the 1961–1990 baseline period as such. For precipitation, though, no improvement over the direct use of the baseline observations was found. This might partly reflect deficiencies in the extrapolation method and in the model simulations, and possibly also errors in the verifying observations. Even if acting alone, however, the low signal-to-noise ratio of precipitation changes would have kept the improvement achievable by the delta change extrapolation much smaller than it is for temperature.

Current projections of anthropogenic climate change indicate that the increase in global mean temperature will continue, at a rate similar to or greater than in the last few decades, throughout the twenty-first century (Meehl et al. 2007b). The issue discussed in this paper will therefore remain relevant even if climatic baseline periods are shifted forward in time when new observational data accumulate. Sooner or later, this will force weather services to rethink how they define their climatic normals. Will they continue to rely on past observations, or will they aim to adjust the observations for ongoing climate changes? As far as the normals are used for putting present weather conditions in the context of the past, the former choice is reasonable. However, if the purpose is to provide society with realistic estimates of the climate that can be expected at present and in the near future, these estimates should be based on the best available information. For temperature at least, this information is now obtained by combining past observations with model-based estimates of climate change, rather than from past observations alone.

Acknowledgments

We acknowledge the modeling groups for making their model output available as part of the WCRP’s CMIP3 multi-model dataset, the Program for Climate Model Diagnosis and Intercomparison (PCMDI) for collecting and archiving this data, and the WCRP’s Working Group on Coupled Modelling (WGCM) for organizing the model data analysis activity. The WCRP CMIP3 multi-model dataset is supported by the Office of Science, US Department of Energy. This research has been supported by the Academy of Finland (decision 106979) and by the ACCLIM project within the Finnish Climate Change Adaptation Research Programme ISTO.

References

  1. Boer GJ, Yu B (2003) Climate sensitivity and climate state. Clim Dyn 21:167–176

  2. Brier GW (1950) Verification of forecasts expressed in terms of probability. Mon Wea Rev 78:1–3

  3. Brohan P, Kennedy JJ, Harris I, Tett SFB, Jones PD (2006) Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850. J Geophys Res 111:D12106. doi:10.1029/2005JD006548

  4. Candille G, Talagrand O (2005) Evaluation of probabilistic prediction systems for a scalar variable. Q J R Meteorol Soc 131:2131–2150

  5. Harvey LDD (2004) Characterizing the annual-mean climatic effect of anthropogenic CO2 and aerosol emissions in eight coupled atmosphere-ocean GCMs. Clim Dyn 23:569–599

  6. Hegerl GC, Zwiers FW, Braconnot P, Gillett NP, Luo Y, Marengo Orsini JA, Nicholls N, Penner JE, Stott PA (2007) Understanding and attributing climate change. In: Solomon S et al (eds) Climate change 2007: the physical science basis. Cambridge University Press, London, pp 663–745

  7. Hersbach H (2000) Decomposition of the continuous ranked probability score for ensemble prediction systems. Weather Forecast 15:559–570

  8. Huntingford C, Cox PM (2000) An analogue model to derive additional climate change scenarios from existing GCM simulations. Clim Dyn 16:575–586

  9. Kjellström E, Bärring L, Jacob D, Jones R, Lenderink G, Schär C (2007) Variability in daily maximum and minimum temperatures: recent and future changes over Europe. Clim Change. doi:10.1007/s10584-006-9220-5

  10. Meehl GA, Covey C, Delworth T, Latif M, McAvaney B, Mitchell JFB, Stouffer RJ, Taylor KE (2007a) The WCRP CMIP3 multimodel dataset: a new era in climate change research. Bull Am Meteorol Soc 88:1383–1394

  11. Meehl GA, Stocker TF, Collins W, Friedlingstein P, Gaye A, Gregory J, Kitoh A, Knutti R, Murphy J, Noda A, Raper S, Watterson I, Weaver A, Zhao Z-C (2007b) Global climate projections. In: Solomon S et al (eds) Climate change 2007: the physical science basis. Cambridge University Press, London, pp 747–845

  12. Mitchell TD (2003) Pattern scaling: an examination of the accuracy of the technique for describing future climate. Clim Change 60:217–242

  13. Mitchell JFB, Johns TC, Eagles M, Ingram WJ, Davis RA (1999) Towards the construction of climate change scenarios. Clim Change 41:547–581

  14. Mitchell TD, Carter TR, Jones PD, Hulme M, New M (2004) A comprehensive set of high-resolution grids of monthly climate for Europe and the globe: the observed record (1901–2000) and 16 scenarios (2001–2100). Tyndall Centre Working Paper 55, 30 pp

  15. Nakićenović N, Alcamo J, Davis G, de Vries B, Fenhann J, Gaffin S, Gregory K, Grübler A, Jung TY, Kram T, La Rovere EL, Michaelis L, Mori S, Morita T, Pepper W, Pitcher H, Price L, Riahi K, Roehrl A, Rogner H-H, Sankovski A, Schlesinger M, Shukla P, Smith S, Swart R, van Rooijen S, Victor N, Dadi Z (2000) Emission scenarios. A special report of Working Group III of the Intergovernmental Panel on Climate Change. Cambridge University Press, London, 599 pp

  16. New M, Hulme M, Jones P (2000) Representing twentieth-century space-time climate variability. Part II: Development of 1901–96 monthly grids of terrestrial surface climate. J Climate 13:2217–2238

  17. Räisänen J (2001) CO2-induced climate change in CMIP2 experiments: quantification of agreement and role of internal variability. J Climate 14:2088–2104

  18. Räisänen J, Ruokolainen L (2006) Probabilistic forecasts of near-term climate change based on a resampling ensemble technique. Tellus 58A:461–472

  19. Santer BD, Wigley TML, Schlesinger ME, Mitchell JFB (1990) Developing climate scenarios from equilibrium GCM results. Report 47. Max-Planck-Institut für Meteorologie, Hamburg, 29 pp

  20. Stanski HR, Wilson LJ, Burrows WR (1989) Survey of common verification methods in meteorology. Research report 89-5, Atmospheric Environment Service Forecast Research Division, Canada

  21. Stott PA, Tett SFB, Jones GJ, Allen MR, Mitchell JFB, Jenkins GJ (2000) External control of 20th century temperature by natural and anthropogenic forcings. Science 290:2133–2137

  22. Wilks DS (1995) Statistical methods in the atmospheric sciences. Academic Press, New York, 467 pp

  23. World Meteorological Organization (1989) Calculation of monthly and annual 30-year standard normals. WMO-TD/No. 341, Geneva, 11 pp

Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  1. Department of Physics, Division of Atmospheric Sciences and Geophysics, University of Helsinki, Helsinki, Finland
