
Future Directions

Manfred Mudelsee

Abstract

What changes may the future bring to climate time series analysis? First we outline (Sections 9.1, 9.2 and 9.3) short-term objectives of “normal science” (Kuhn 1970), extensions of the previous material (Chapters 1, 2, 3, 4, 5, 6, 7 and 8). Then we take a chance (Sections 9.4 and 9.5) and look at paradigm changes in climate data analysis that may be brought about by strongly increased computing power (and storage capacity). Whether this technological achievement comes in the form of grid computing (Allen 1999; Allen et al. 2000; Stainforth et al. 2007) or quantum computing (Nielsen and Chuang 2000; DiCarlo et al. 2009; Lanyon et al. 2009), the assumption here is the availability of machines that are faster by a factor of, say, \( {{10}^{{12}}} \), within a mid-term period of, say, less than a few decades.


9.1 Timescale modelling

Climate time series consist not only of measured values of a climate variable but also of observed time values. Often the latter are unevenly spaced and influenced by dating uncertainties. Conventional time series analysis has largely ignored uneven and uncertain timescales; climate time series analysis has to take them into account.

The process that generated the times, \( \left\{ {{{t}_{X}}\left( i \right)} \right\} \) for univariate and also \( \left\{ {{{t}_{Y}}\left( j \right)} \right\} \) for bivariate series, depends on the climate archive. We have studied linear and piecewise linear processes for speleothem or sedimentary archives (Section 4.1.7) and nonparametric models for ice cores (Section 8.6.1). Such types of models are the basis for including uncertain timescales in the error determination by means of bootstrap resampling (\( \left\{ {t_{X}^{*}\left( i \right)} \right\} \) and also \( \left\{ {t_{Y}^{*}\left( j \right)} \right\} \)). In bivariate and higher-dimensional estimation problems, the joint distributions of the timescale processes are also important. See the example of the Vostok ice core (Section 8.6.1) with the coupled timescales for the ice and the gas.

Climate archive modelling should be enhanced in the future to provide accurate descriptions of uncertain timescales. Archive models should evidently include the physics of the accumulation of the archive. One may even think of physiological models describing the performance of humans in layer counting of regular sequences such as varves (Table 1.3). A second ingredient of climate archive modelling is statistical constraints, for example, a strictly monotonically increasing age–depth curve in a speleothem archive or an absolutely dated fixed point in a marine sediment core. An exemplary paper on climate archive modelling (Parrenin et al. 2007) studies the accumulation and flow in an ice sheet into which a core is drilled. The Bayesian approach may be suitable for combining the inputs from physics and statistical constraints (Buck and Millard 2004).
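To illustrate what such a timescale model implies for bootstrap resampling, the following minimal Python sketch draws timescale resamples for a speleothem-type archive. All numbers (tie-point depths, ages, dating errors) are hypothetical, and the piecewise-linear interpolation with rejection of age reversals is just one simple way to implement the monotonicity constraint discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dated tie points (depth in m, age in ka) with 1-sigma dating errors
tie_depth = np.array([0.0, 0.5, 1.0, 1.5])
tie_age = np.array([1.0, 12.0, 25.0, 41.0])
tie_err = np.array([0.3, 0.5, 0.8, 1.0])

sample_depth = np.linspace(0.0, 1.5, 151)  # depths of the measured proxy values

def timescale_resample():
    """Draw one timescale resample {t*(i)}: perturb the tie-point ages
    within their dating errors, enforce a strictly increasing age-depth
    relation (statistical constraint), interpolate piecewise linearly."""
    while True:
        age_star = tie_age + tie_err * rng.standard_normal(tie_age.size)
        if np.all(np.diff(age_star) > 0.0):  # reject age reversals
            return np.interp(sample_depth, tie_depth, age_star)

t_star = np.array([timescale_resample() for _ in range(2000)])
print("age uncertainty (1-sigma) along the core:", t_star.std(axis=0)[:5])
```

Each resample \( \left\{ {t^{*}\left( i \right)} \right\} \) could then enter a timescale bootstrap (e.g., timescale-ARB or timescale-MBB).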

9.2 Novel estimation problems

Chapters 2, 3, 4, 5 and 6 presented stochastic processes and estimation algorithms for inferring the fundamental properties of univariate climate processes in the climate equation (Eq. 1.2): trend, variability, persistence, spectrum and extremes. Chapters 7 and 8 studied bivariate processes: correlation and the regression relation between two univariate processes. We believe that these chapters cover the vast majority of application fields in the climate sciences.

However, in science there is always room for asking more questions; in a quantitative approach, this means attempting to estimate other climate parameters in the uni- or bivariate setting.

An obvious example of such a novel estimation problem is singular spectrum analysis (SSA), mentioned in the background material of Chapter 1. This decomposition method has so far been formulated only for evenly spaced, discrete time series. Interpolation to equidistance is no remedy because it biases the objectives of the decomposition (estimates of trend, variability, etc.). SSA formulations applicable to unevenly spaced records should therefore be developed.
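For reference, the evenly spaced SSA formulation is compact; the sketch below (standard embedding, SVD, diagonal averaging) shows what an unevenly spaced analogue would have to replace. The window length and the number of retained components are illustrative choices, not recommendations.

```python
import numpy as np

def ssa_reconstruct(x, window, k):
    """Basic SSA for an evenly spaced series: embed into the trajectory
    matrix, decompose via SVD, reconstruct from the k leading components
    by diagonal (Hankel) averaging."""
    n = len(x)
    m = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(m)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k, :]  # rank-k approximation
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(m):  # average back along the antidiagonals
        rec[j:j + window] += Xk[:, j]
        cnt[j:j + window] += 1.0
    return rec / cnt

t = np.arange(300)
x = 0.01 * t + np.sin(2 * np.pi * t / 50) \
    + np.random.default_rng(1).standard_normal(300)
trend_plus_cycle = ssa_reconstruct(x, window=60, k=2)
```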

Other novel estimation approaches are expected to come from nonlinear dynamical systems theory (Section 1.6). This field focuses more on data from controlled measurements or computer experiments and less on unevenly spaced, short paleoclimatic time series. A breakthrough, also with respect to SSA, may come from techniques for reconstructing the phase space at irregularly spaced points.

9.3 Higher dimensions

Climate is a complex, high-dimensional system, comprising many variables. Therefore it makes sense to study not only univariate processes (Part II), X, or bivariate processes (Part III), X and Y, but also trivariate processes, X, Y and Z, and so forth. A simple estimation problem for such high-dimensional processes is multivariate regression, mentioned occasionally in previous chapters (Sections 4.2 and 8.7),
$$ Y\left( i \right)={{\theta }_{0}}+{{\theta }_{1}}X\left( i \right)+{{\theta }_{2}}Z\left( i \right)+\cdots +{{S}_{Y}}\left( i \right)\cdot {{Y}_{{\text{noise}}}}\left( i \right). $$
(9.1)

The higher number of dimensions may also result from describing the climate evolution in the spatial domain (e.g., X is temperature in the northern, Y in the southern hemisphere). There is a variety of high-dimensional, spatial estimation problems: multivariate regression, PCA and many more (von Storch and Zwiers 1999: Part V therein).

As regards the bootstrap method, there is no obstacle in principle to performing resampling in higher dimensions. An important point is that resampling the marginal distributions, of X and Y and Z separately, is not sufficient; the joint distribution of (X, Y, Z), including the dependences among the variables, has to be resampled to preserve the original covariance structure. This requires adaptations of the moving block bootstrap (MBB) approach. A further point, which may considerably exacerbate the estimation as well as the bootstrap implementation, is unequal observation times. The sets
$$ \left\{ {{{t}_{X}}\left( i \right)} \right\}_{{i=1}}^{{{{n}_{X}}}},\left\{ {{{t}_{Y}}\left( j \right)} \right\}_{{j=1}}^{{{{n}_{Y}}}},\left\{ {{{t}_{Z}}\left( k \right)} \right\}_{{k=1}}^{{{{n}_{Z}}}} $$
(9.2)
need not be identical. Depending on the estimation problem and the properties of the joint climate data generating process (e.g., persistence times), the algorithm for determining \( {{\theta }_{0}},{{\theta }_{1}},{{\theta }_{2}} \), and so forth, has to be adapted. This is a step into new territory. An example from the bivariate setting is the “synchrony correlation coefficient” ( Section 7.5.2). A final point of complication from the move into higher dimensions is dependence among the timescale variables. Since this type of complication can occur already in two-dimensional problems ( Section 8.6.1), we expect it in higher dimensions as well. This challenge must be met by means of timescale modelling (Section 9.1).
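A minimal sketch of such a joint block resampling, under the simplifying assumption of identical observation times for all variables (the harder, unequal-times case of Eq. 9.2 is left open): whole rows are resampled in blocks, so that both serial dependence and cross-covariance survive.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_mbb(data, block_length):
    """Moving block bootstrap that resamples whole rows (X, Y, Z, ...)
    in blocks, preserving serial dependence (within blocks) and the
    joint distribution across variables (rows stay intact)."""
    n = data.shape[0]
    n_blocks = int(np.ceil(n / block_length))
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    rows = np.concatenate([np.arange(s, s + block_length) for s in starts])[:n]
    return data[rows]

# Toy trivariate series with AR(1) persistence and cross-correlation
n = 500
e = rng.standard_normal((n, 3))
x = np.zeros((n, 3))
for i in range(1, n):
    x[i] = 0.7 * x[i - 1] + e[i]
x[:, 1] += 0.5 * x[:, 0]  # couple Y to X

x_star = joint_mbb(x, block_length=25)
print(np.corrcoef(x.T)[0, 1], np.corrcoef(x_star.T)[0, 1])  # similar cross-correlation
```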

9.4 Climate models

Computer models render the climate system in the form of mathematical equations. The currently most sophisticated types, AOGCMs (Fig. 1.9), require the most powerful computers. Nevertheless, the rendered spatial and temporal scales are bounded by finite resolutions and finite domain sizes. The number of simulated climate processes is also limited.

The problem of a finite spatial resolution is currently tackled by means of using an AOGCM (grid size several tens to a few hundred kilometres) for the global domain and nesting into it a regional model or RM (grid size reduced by a factor \( \sim 20 \)) for a sub-domain of interest (say, Europe). The AOGCM “forces” the RM (Meehl et al. 2007; Christensen et al. 2007), that is, it prescribes the conditions at the boundaries of the sub-domain. Sub-grid processes, not resolved even by the RM (e.g., cloud processes) and therefore not explicitly renderable by the AOGCM–RM combination, can be implicitly included by employing inferred parametric relations (e.g., between cloud formation and temperature). The AOGCM–RM combination includes many variables, \( {{\left( {X,Y,Z,...} \right)}^{\prime }}\equiv X \), from the climate at grid points, and many parameters, \( {{\left( {{{\theta }_{0}},{{\theta }_{1}},{{\theta }_{2}},...} \right)}^{\prime }}\equiv \theta \), from the parameterizations (Stensrud 2007) and other model equations. For convenience of presentation, we consider the climate variable vector, X, and the climate model parameter vector, θ.

Our premise of a future “quantum boost” by a factor \( \sim {{10}^{{12}}} \) can make regionalization dispensable and let more realistic AOGCMs (grid size several tens to a few hundred metres) become calculable, with computing times reduced from, say, a year to less than a month. Regarding the sophistication of a climate model, the increased computing power can also be utilized for including processes from the fields of biology and economics (greenhouse gas emissions (Moss et al. 2010) and “climate engineering” measures). Indeed, a finer spatial grid does require more processes to be explicitly included. Regarding the temporal scale, the boost should allow the simulation of much longer spans (transient paleoclimate runs) by means of AOGCMs and their successors.

There exists, however, another field in which to invest computing power: the uncertainty determination of climate model results. We sketch this area in light of the methodology presented in this book, statistical estimation and bootstrap resampling.

Physics describes climate dynamics by means of nonlinear coupled differential equations,
$$ \mathbf{\dot{X}}=f\left( {\mathbf{X},\mathbf{R},\theta } \right), $$
(9.3)
where the dot denotes the time derivative, f is a function and R represents uncoupled, external forcing variables (e.g., solar activity). Time discretization yields
$$ \mathbf{X}\left( {i+1} \right)=\mathbf{X}\left( i \right)+\Delta T\cdot \mathbf{\dot{X}}, $$
(9.4)
where ΔT is a time step, typically of the order of minutes to hours in an AOGCM. From an initial climate state, X(1), the climate evolution is derived. This sample from the climate model “archive” is
$$ \left\{ {\text{x}\left( i \right)} \right\}_{{i=1}}^{n}. $$
(9.5)
The climate evolution can also be observed, yielding a multivariate time series sample,
$$ \left\{ {{{\text{x}}_{\circ }}\left( i \right)} \right\}_{{i=1}}^{n}. $$
(9.6)

The observations are, of course, strongly limited in the number of climate variables, geographic locations and time resolutions. Few observations have been made of, say, temperature at 1000 m height above sea level at \( 130^{\circ} \), \( 30^{\circ} \)S for the time interval from 1850 to 2010 and a spacing of \( d\left( i \right)=\Delta T \) = 30 minutes.
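As a toy illustration of Eqs. (9.3) and (9.4), the following sketch integrates a simple nonlinear system with an explicit Euler step; the Lorenz-63 equations stand in for the climate model equations, and all parameter values are illustrative.

```python
import numpy as np

def f(x, R, theta):
    """Toy stand-in for the climate model equations (Lorenz-63 here),
    with theta as the model parameter vector and R as an external forcing."""
    sigma, rho, beta = theta
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return dx + R

theta = (10.0, 28.0, 8.0 / 3.0)
dt = 0.01                              # time step, Delta T
n = 5000
x = np.empty((n, 3))
x[0] = (1.0, 1.0, 1.0)                 # initial climate state X(1)
for i in range(n - 1):
    R = np.zeros(3)                    # no external forcing in this toy run
    x[i + 1] = x[i] + dt * f(x[i], R, theta)   # Eq. (9.4)
```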

9.4.1 Fitting climate models to observations

Let us view climate modelling as an estimation problem. The task is to estimate the model parameters, θ, given the observations, \( \left\{ {{{\text{x}}_{\circ }}\left( i \right)} \right\}_{{i=1}}^{n} \). This set shall include the “missing observations.” The task requires running the model to produce \( \left\{ {\text{x}\left( i \right)} \right\}_{{i=1}}^{n} \). The smaller the distance between the model output and the observations, the better the fit.

Let us introduce a cost function to measure the distance,
$$ SSQGXY{{Z}_{\nu }}\left( \theta \right)=g\left( {\left\{ {{{x}_{\circ }}\left( i \right)} \right\}_{{i=1}}^{n},\left\{ {x\left( i \right)} \right\}_{{i=1}}^{n}} \right). $$
(9.7)
g may be a form of a generalized least-squares cost function that takes into account predictor uncertainty and the degrees of freedom; Section 9.4.3 considers the design of g in more detail. The parameter estimate minimizes the cost function,
$$ \hat{\theta }=\arg \min \left\{ {g\left( {\left\{ {{{x}_{\circ }}\left( i \right)} \right\}_{{i=1}}^{n},\left\{ {x\left( i \right)} \right\}_{{i=1}}^{n}} \right)} \right\}. $$
(9.8)

The parameter vector is included in the right-hand side of the equation because the model output, \( \left\{ {\text{x}\left( i \right)} \right\}_{{i=1}}^{n} \), depends on it.

The outlined procedure is not feasible with current computing power for a full estimation of AOGCM parameters. It has been performed for a simple climate model containing only three variables (Hargreaves and Annan 2002) and for an Earth system model of intermediate complexity (Paul and Schäfer-Neth 2005). The concept of fitting climate models to data is also denoted as data assimilation or state estimation (Wunsch 2006).
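The estimation defined by Eqs. (9.7) and (9.8) can be sketched with a toy, deterministic “model” and a generic minimizer; the relaxation model, the noise level and the starting values below are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def run_model(theta, n=200, dt=0.1, x0=0.0):
    """Toy deterministic 'climate model': relaxation towards an
    equilibrium mu with rate lam, theta = (lam, mu)."""
    lam, mu = theta
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = x[i] + dt * (-lam * (x[i] - mu))   # Eq. (9.4) with f = -lam (x - mu)
    return x

# Synthetic 'observations': truth plus measurement noise
theta_true = (0.5, 2.0)
x_obs = run_model(theta_true) + 0.2 * rng.standard_normal(200)

def g(theta):
    """Simple least-squares cost function, a special case of Eq. (9.7)."""
    return np.sum((x_obs - run_model(theta)) ** 2)

theta_hat = minimize(g, x0=(1.0, 0.0), method="Nelder-Mead").x   # Eq. (9.8)
print(theta_hat)   # close to (0.5, 2.0)
```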

Subsequent to the estimation, we should like to know the parameter uncertainties for the fitted climate model. This knowledge may be achieved by means of bootstrap methods, producing the replications,
$$ {{\hat{\theta }}^{*}}=\arg \min \left\{ {g\left( {\left\{ {x_{\circ }^{*}\left( i \right)} \right\}_{{i=1}}^{n},\left\{ {{{x}^{*}}\left( i \right)} \right\}_{{i=1}}^{n}} \right)} \right\}. $$
(9.9)

The observation resample, \( \text{x}_{\circ }^{*}\left( i \right) \), can be obtained via the surrogate data bootstrap (Section 3.3.3), taking into account the errors of the observation devices, the distributional shapes (which may be Gaussian or not), the covariances (which may be rather small) and the “internal climate variability” (which may have to be estimated by means of separate model experiments). The model output resample, x*(i), incorporates a new (trial) set of parameters, θ*. However, it should also be based on a random initial state, x*(1), because the initial conditions are not exactly known. x*(1) may be taken randomly from a set of time series values of a climate model run without forcing components (stationarity). This “ensemble technique” is already being applied to quantify the uncertainty component owing to imperfectly known initial conditions (Randall et al. 2007; van der Linden and Mitchell 2009). The forcing variable, R(i), may also have to be described stochastically so that it can be included in the surrogate data approach.

The replications, \( \left\{ {{{{\hat{\theta }}}^{{*b}}}} \right\}_{{b=1}}^{B} \), serve in the usual manner (Section 3.4) for constructing CIs. Of particular interest should be the joint PDF of the climate model parameter estimators, which may be described by means of confidence regions in the parameter hyperspace (Smith et al. 2009; Tebaldi and Sansó 2009). Realistic climate model error and CI determination do not require a handful of runs (current ensemble technique) but rather B runs, with B in the usual order of 2000 or even higher (because of the dimensionality).
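Given the replications, CI construction itself is cheap. The sketch below computes percentile intervals at confidence level 1 − 2α; the replications here are placeholder Gaussian draws standing in for B repetitions of Eq. (9.9).

```python
import numpy as np

# Placeholder for B replications of the parameter vector (B x p array,
# as would result from repeating Eq. (9.9) B times)
B, p = 2000, 3
theta_star = np.random.default_rng(7).normal(loc=(0.5, 2.0, -1.0),
                                             scale=(0.05, 0.1, 0.2),
                                             size=(B, p))

alpha = 0.05   # confidence level 1 - 2*alpha = 90%
lower = np.percentile(theta_star, 100 * alpha, axis=0)
upper = np.percentile(theta_star, 100 * (1 - alpha), axis=0)
print("90% percentile CIs per parameter:", list(zip(lower, upper)))
```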

9.4.2 Forecasting with climate models

Models are employed to forecast future climate, \( \text{x}\left( {n+1} \right) \), at time \( t\left( {n+1} \right) \). (Indeed, forecasts are made for many time steps to cover the typical range from the present to the year 2100.) In our vision this is achieved by a run of the model employing the estimated, optimal parameters, \( \hat{\theta } \). That run also has to use a guess of the future forcing, \( \mathbf{R}\left( {n+1} \right) \).

Of crucial importance, scientifically and socioeconomically, is to determine the size of the forecasting error. The bootstrap methodology, utilized for that purpose in the bivariate setting (Section 8.5), should also be helpful in the high-dimensional setting.

The recommendation is to produce forecast resamples, \( {{\text{x}}^{*}}\left( {n+1} \right) \), from which to calculate standard errors, CIs, confidence bands (over a time span), and so forth.

How are the \( {{\text{x}}^{*}}\left( {n+1} \right) \) produced to reflect the full range of the various sources of uncertainty?
  • The parameterization uncertainty can be taken into account by resampling from the set of replications, \( \left\{ {{{{\hat{\theta }}}^{{*b}}}} \right\}_{{b=1}}^{B} \). This preserves the covariance structure of the parameter estimates.

  • The initial-condition uncertainty can be taken into account by means of the ensemble technique.

  • The forcing uncertainty may be difficult to include in a quantitative manner. This step likely necessitates the usage of separate forcing models. A sketch combining these three uncertainty sources follows below.
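A minimal sketch combining the three uncertainty sources listed above; the toy one-step model, the parameter replications and the control-run states are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)

def model_step(x, theta, R, dt=0.1):
    """One toy forecast step, x(n+1) = x(n) + dt * f(x, R, theta), with a
    simple relaxation standing in for the climate model (assumption)."""
    lam, mu = theta
    return x + dt * (-lam * (x - mu) + R)

theta_star = rng.normal((0.5, 2.0), (0.05, 0.1), size=(2000, 2))  # replications
x_control = rng.normal(2.0, 0.3, size=500)   # states from an unforced control run

forecasts = np.empty(2000)
for b in range(2000):
    theta_b = theta_star[rng.integers(2000)]  # 1. parameterization uncertainty
    x0_b = x_control[rng.integers(500)]       # 2. initial-condition uncertainty
    R_b = rng.normal(0.0, 0.1)                # 3. stochastic forcing guess
    forecasts[b] = model_step(x0_b, theta_b, R_b)

print(np.percentile(forecasts, [5, 95]))      # 90% forecast interval
```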

9.4.3 Design of the cost function

Designing the cost function (Eq. 9.7) is important for achieving small standard errors and narrow CIs for the climate forecasts and the model parameter estimates. It is rather difficult to demonstrate theoretically the optimality of a certain cost function. One should perform Monte Carlo simulations to find “empirically” a suitable function. The following points may guide the design endeavour.
  • A least-squares technique is mandatory. It seems impossible to write down a likelihood function (for maximization) owing to the size of the body of the climate model equations. One may wish to make the sum of squares more robust with respect to “outliers.” On the other hand, one may give the “outliers” instead more weight in situations where the focus is on modelling the climate extremes.

  • GLS, employing the covariance matrices (variability, persistence) of the many climate variables, is a possible technique to reduce the estimation standard errors. The normalization (variability) produces dimensionless SSQG terms for each variable, which can be processed further (e.g., summed up); see the sketch after this list.

  • A problem is multicollinearity (correlated predictors), stemming from spatial dependence among the climate variables (neighbouring grid points). This may suggest reducing the number of variables in the cost function by means of spatial binning. PCA techniques should help in evaluating geographically meaningful bins (regions).

  • Errors in the observations (\( {{S}_{X}},{{S}_{Y}},{{S}_{Z}},... \)) should lead researchers to consider techniques like WLSXY estimation (Section 8.1.2) to reduce estimation bias.

  • Further weighting could be performed “in the time domain” to enforce, for example, the most recent years to be more accurately simulated.

  • The degrees of freedom, ν, of the observation–model combination can be taken into account (a simple division by ν).

  • One may put bounds to the θ hyperspace to exclude estimation results that are inconsistent with physics (hard bounds) or prior knowledge (soft bounds). Bayesian formulas may help here.
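A sketch of the GLS idea from the list above: residuals are weighted by the inverse covariance matrix of the variables and the sum is divided by the degrees of freedom, ν. The covariance here is estimated crudely from the observations themselves, which is an assumption for illustration, not a recommendation.

```python
import numpy as np

def ssqg(x_obs, x_mod, cov, nu):
    """Generalized least-squares cost: residuals weighted by the inverse
    covariance matrix of the variables (variability, persistence), the
    total divided by the degrees of freedom nu."""
    r = x_obs - x_mod                            # residual matrix, n times p
    cov_inv = np.linalg.inv(cov)
    return np.einsum('ij,jk,ik->', r, cov_inv, r) / nu

# Toy numbers: n = 100 times, p = 3 variables
rng = np.random.default_rng(5)
x_obs = rng.standard_normal((100, 3))
x_mod = rng.standard_normal((100, 3))
cov = np.cov(x_obs, rowvar=False)                # p x p covariance across variables
print(ssqg(x_obs, x_mod, cov, nu=297.0))         # nu: illustrative value
```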

The envisaged availability of “quantum computing power” does not release us from the task of constructing efficient methods to search the hyperspace and locate the minimum of the cost function: gradient techniques, Brent’s search, hybrid procedures or Bayesian approaches (Markov chain Monte Carlo; see Hargreaves and Annan (2002) and Leith and Chandler (2010)).

9.4.4 Climate model bias

Climate model bias regards, generally speaking, a function of the climate variable vector,
$$ \eta =h\left( \mathbf{X} \right). $$
(9.10)
The function, h, can be used to make η an index variable or extract a geographic region. For example, we may wish to study time-dependent, annual-mean, regional-mean, land-surface precipitation in central Europe,
$$ \eta \left( j \right)=n_{k}^{{-1}}n_{i}^{{-1}}\sum\limits_{{k\in \text{region}}} {\sum\limits_{{T\left( i \right)\in \text{year}\,j}} {{{X}_{k}}\left( i \right)} }, $$
(9.11)
where \( {{X}_{k}}\left( i \right) \) is precipitation at grid point k and time T(i), \( {{n}_{i}} \) is the number of time values within year j and \( {{n}_{k}} \) is the number of model grid points within central Europe.
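In code, Eq. (9.11) is a double mean over a region mask and a year index; the gridded precipitation below is synthetic and the masks are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical gridded precipitation: daily values over 20 years at 400
# grid points, with a boolean mask selecting the region of interest
precip = rng.gamma(2.0, 1.5, size=(7300, 400))
in_region = rng.random(400) < 0.1            # grid points in central Europe (toy mask)
year = np.repeat(np.arange(20), 365)         # year index per time step

# Eq. (9.11): mean over both time steps within year j and regional grid points
eta = np.array([precip[year == j][:, in_region].mean() for j in range(20)])
```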
Let us now view the modelled sequence as an estimate obtained by means of a climate model, \( \hat{\eta }\left( j \right) \). Next we consider the true sequence. Since the truth is hidden, we take instead an observed sequence, \( {{\eta }_{\circ }}\left( j \right) \). This leads, in analogy to Eq. (3.2), to the climate model bias,
$$ \text{bia}{{\text{s}}_{{\hat{\eta }}}}\left( j \right)=E\left[ {\hat{\eta }\left( j \right)} \right]-{{\eta }_{\circ }}\left( j \right). $$
(9.12)

In the example of precipitation in central Europe, there are indications from a range of AOGCM–RM combinations that \( \text{bia}{{\text{s}}_{{\hat{\eta }}}}\left( j \right)>0 \) for the time interval from 1950 to the very recent past (Jacob D 2009, personal communication), that is, the climate models systematically overestimate precipitation. Similar overestimations were found for the region of Scandinavia (Goodess et al. 2009).

In the context of climate forecasting (Section 9.4.2), better predictions may therefore include a climate model bias correction. For example, if the model bias is simply a constant, \( \text{bia}{{\text{s}}_{{\hat{\eta }}}} \), then
$$ {\eta }^{\prime}\left( {{{j}_{{\text{future}}}}} \right)=\eta \left( {{{j}_{{\text{future}}}}} \right)-\text{bia}{{\text{s}}_{{\hat{\eta }}}}, $$
(9.13)
where \( {{j}_{{\text{future}}}} \) indicates future (unobserved) time and the prime denotes bias correction. Evidently, the time-dependence of the bias and also its form (additive, multiplicative) should be analysed in such situations. Further developments may employ more complex stochastic models of the climate model bias (Jun et al. 2008).

9.5 Optimal estimation

Increased computing power would also allow us to perform optimal estimation. We have sketched this concept in previous parts of this book (Sections 6.2.7 and 7.5.3.1). Not only climatology but also other branches of science may benefit from optimal estimation.

Central to investigation in the natural sciences, such as climatology, is inferring the truth from the data. This calls for the statistical language. In quantitative climatology, the investigative questions can be translated into a parameter, θ, which needs to be estimated using the data. The investigation cycles through loops: question, estimation, refined question based on the estimation result, new estimation, and so forth.

An estimator, \( \hat{\theta } \), is a recipe for guessing θ using the data. Since the sample size is finite and the sampled climate system contains unknown influences (noise), we cannot expect \( \widehat{\theta } \) to equal θ. However, we can calculate the size of that error, the uncertainty. This leads to the measures \( \text{s}{{\text{e}}_{{\widehat{\theta }}}} \), \( \text{bia}{{\text{s}}_{{\widehat{\theta }}}} \) and \( \text{RMS}{{\text{E}}_{{\widehat{\theta }}}} \), and the confidence interval, \( \text{C}{{\text{I}}_{{\widehat{\theta },1-2\alpha }}} \), which is thought to include θ with probability 1−2α. Without the information contained in such measures, it is difficult to assess how close \( \widehat{\theta } \) is to θ: estimates without error bars are useless.

For simple estimation problems (e.g., mean estimation) and simple noise properties (e.g., Gaussian distributional shape), the error measures can be analytically derived via the PDF of an estimator. However, climate is more complex, as regards the noise as well as the estimation problem. This book therefore advocates the bootstrap resampling approach, which allows the analysis of complex problems with realistic (i.e., complex) properties such as non-Gaussian shape or serial dependence.

For most of this book, we have assumed the uncertainty to have its origin in the complex climate system and the measured variables (proxy, measurement and dating errors). We have occasionally considered (Sections 4.1.7.4, 4.4 and 8.3.4) another error source: a mis-specified model. Statistical science refers to this error source as model uncertainty; see Chatfield (1995), Draper (1995), Candolo et al. (2003) and Chatfield (2004: Section 13.5 therein). By fitting a range of candidate models it is possible to infer the range of feasible estimation outcomes. For example, one may compare the estimated 100-year return level, HQ100, from a Weibull fit with the estimated HQ100 from a GEV fit to observed runoff data, and examine whether the difference between the results is comparable to the statistical standard errors. Note that model uncertainty may also regard the assumed noise model (e.g., short versus long memory). A method to reduce model uncertainty is to employ graphical and computational tests of model suitability. As a method to quantify model uncertainty, we may study not only the range of the estimation outcomes but also impose a weighting according to the probability that a particular model is correct. The “model probability” may be based, in a Bayesian approach, on a prior consultation of experts (Smith et al. 2009). In the example of HQ100, there is hope that the hydrologists would put more weight on the GEV model than on the Weibull. It is in principle possible to add model uncertainty as a new dimension to the hyperspace of climate estimation (Fig. 9.1).
Fig. 9.1.

Hyperspace of climate parameter estimation. The Monte Carlo experiment prescribes the stochastic model, parameters and other properties (shape, sample size, spacing, persistence, etc.) in a way that the problem at hand (data and estimation) is covered. The method regards estimation and CI construction. The optimal estimation is determined by using a measure.
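The HQ100 comparison mentioned above can be sketched as follows; the runoff series is synthetic, and the plain maximum-likelihood fits stand in for whatever fitting method a hydrologist would actually prefer.

```python
import numpy as np
from scipy.stats import genextreme, weibull_min

rng = np.random.default_rng(21)
# Synthetic annual maximum runoff (units hypothetical, e.g., m^3/s)
annual_max_runoff = genextreme.rvs(-0.1, loc=1000, scale=250, size=80,
                                   random_state=rng)

# HQ100 = level exceeded on average once in 100 years, i.e., the
# 0.99 quantile of the annual-maximum distribution
p = 1.0 - 1.0 / 100.0
hq100_gev = genextreme.ppf(p, *genextreme.fit(annual_max_runoff))
hq100_wei = weibull_min.ppf(p, *weibull_min.fit(annual_max_runoff))

# Compare the difference with the statistical standard errors of either fit
print(hq100_gev, hq100_wei, abs(hq100_gev - hq100_wei))
```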

Climate is a paradigm of a complex system that requires the bootstrap for its analysis. In addition, climate opens the new problem dimensions of unequally spaced series and timescale errors. This book has presented various bootstrap algorithms that adapt closely to the estimation problem imposed by the data: ARB, MBB, SB, surrogate data, timescale-ARB, timescale-MBB, pairwise-ARB, pairwise-MBB and pairwise-MBBres. It also described algorithms to support bootstrap resampling and CI construction: block length selection, calibration, and the CI types normal, Student’s t, percentile and BCa.

The critical question is: What is the best method for inferring the truth from the data? What is the optimal estimation method, and how are the most accurate CIs constructed?

Future, strongly increased computing power will allow us to approach that question by means of Monte Carlo experiments. We outline this optimal estimation approach (Fig. 9.1). We reiterate that optimal estimation is not limited to the field of climate sciences.

The hyperspace of climate estimation has many, but not infinitely many, dimensions. It consists of three subspaces: Monte Carlo design, method and measure.

The Monte Carlo design (Fig. 9.1) describes the data generating process. The design is used to generate artificial data, to which the method is applied. The design should, in some sense, cover the estimation problem (data and estimation) to be carried out. One group of dimensions is occupied by the type of estimation model and its parameters. For example, one may be interested in a linear regression model with the two parameters intercept and slope (Chapters 4 and 8). To restate, the Monte Carlo parameters (e.g., prescribed intercept and slope) should be close to the estimated parameters (estimated intercept and slope). The other group of dimensions in the Monte Carlo subspace describes the sample size (prescribed n, which should be close to the size of the sample at hand), the spacing (again, similar to the spacing of the sample) and the noise properties (also similar). An option is to invest three dimensions to model the persistence of the noise as an ARFIMA(\( p,\delta ,q \)) process (which contains simpler types such as AR(1)) and one or two to model the shape (skewness, kurtosis). Heteroscedasticity may also be modelled. The ARFIMA process contains the preferred parsimonious, embedding-problem-free AR(1) process (\( p=1,\delta =0,q=0 \)). Some dimensions have integer values (e.g., the ARFIMA parameter p), some have real values (e.g., the slope parameter). Timescale errors may also be modelled (additional dimensions).

The method subspace (Fig. 9.1) describes the estimation and CI construction. The ticks along the estimator dimension are named least squares, maximum likelihood, and so forth. CI construction requires more dimensions: one for distinguishing between classical and bootstrap CIs, and several for detailing the bootstrap methodology (block length selection for MBB, calibration, subsampling, etc.) and calculating the interval bounds from the replications. Consider, for example, the brute-force block length selector (Berkowitz and Kilian 2000): one dimension with integer values between 1 and n−1.

The measure subspace (Fig. 9.1) describes how to detect the optimal estimation method for the Monte Carlo experiment: CI accuracy and width, RMSE, bias, robustness, and so forth. It should also make sense to consider joint measures (e.g., CI accuracy and robustness).

The hyperspace of climate parameter estimation is large. Present computing power limits our ability to explore it and find the optimal method for solving a (climate) estimation problem. This book has examined many important estimation problems (regression, spectrum, extremes and correlation) but visited only parts of the hyperspace by means of Monte Carlo experiments. For example, in linear regression (Chapter 4), we have studied
  • \( \theta ={{\beta }_{0}} \) (intercept) and β1 (slope);

  • prescribed \( {{\beta }_{0}}=2,{{\beta }_{1}}=2 \);

  • \( n\in \left\{ {10,20,50,100,200,500,1000} \right\} \);

  • spacing: even and uneven (timescale errors);

  • shape: Gaussian and lognormal;

  • persistence: AR(1), AR(2) and ARFIMA(0, 0.25, 0);

  • estimator: least squares only;

  • resampling: ARB, MBB, subsampling, timescale-ARB, timescale-MBB and pairwise-MBB;

  • CI type: classical and bootstrap BCa;

  • confidence level: 90, 95 and 99%;

  • calibration loop: none;

    and

  • measure: RMSE, CI accuracy and CI length.

We have found “acceptable” results (mainly judged via CI accuracy) from the bootstrap method applied to Monte Carlo samples generated from designed processes that are considered close to the climate processes. These positive results have given us confidence that the results (estimate with CI) from analysing the observed, real climate time series are valid. However, we have to concede that there may exist more accurate methods, resulting in particular from (computing-intensive) CI calibration. This may be relevant especially for small sample sizes.
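Such a Monte Carlo check of CI accuracy can be sketched as follows: a prescribed linear trend with AR(1) noise, least-squares slope estimation, a residual-based MBB (one variant among the resampling schemes listed above) and a count of how often the 90% percentile CI covers the prescribed slope. Block length, sample size and persistence are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(13)

def ar1(n, a):
    """AR(1) noise with persistence parameter a."""
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0]
    for i in range(1, n):
        x[i] = a * x[i - 1] + e[i]
    return x

def mbb_ci(t, x, block, B=400, alpha=0.05):
    """Percentile CI for the slope from an MBB on the regression residuals."""
    beta1, beta0 = np.polyfit(t, x, 1)           # slope, intercept
    resid = x - (beta0 + beta1 * t)
    n = len(x)
    reps = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        reps[b] = np.polyfit(t, beta0 + beta1 * t + resid[idx], 1)[0]
    return np.percentile(reps, [100 * alpha, 100 * (1 - alpha)])

# Monte Carlo experiment: prescribed design close to the problem at hand
beta1_true, n, a, nsim, hits = 2.0, 100, 0.7, 200, 0
t = np.arange(n, dtype=float)
for _ in range(nsim):
    x = 2.0 + beta1_true * t + ar1(n, a)
    lo, hi = mbb_ci(t, x, block=10)
    hits += (lo <= beta1_true <= hi)
print("empirical coverage:", hits / nsim, "(nominal 0.90)")
```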

The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a time series, \( \left\{ {t\left( i \right),x\left( i \right)} \right\}_{{i=1}}^{n} \), some prior information (e.g., measurement standard errors, age–depth curve) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace (Fig. 9.1) to find the suitable method, that is, the mode of estimation and CI construction that optimizes a selected measure for prescribed values close to the initial estimates. Here too, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate time series.

References

  1. Allen M (1999) Do-it-yourself climate prediction. Nature 401(6754): 642.
  2. Allen MR, Stott PA, Mitchell JFB, Schnur R, Delworth TL (2000) Quantifying the uncertainty in forecasts of anthropogenic climate change. Nature 407(6804): 617–620.
  3. Berkowitz J, Kilian L (2000) Recent developments in bootstrapping time series. Econometric Reviews 19(1): 1–48.
  4. Buck CE, Millard AR (Eds) (2004) Tools for Constructing Chronologies: Crossing Disciplinary Boundaries. Springer, London, 257 pp.
  5. Candolo C, Davison AC, Demétrio CGB (2003) A note on model uncertainty in linear regression. The Statistician 52(2): 165–177.
  6. Chatfield C (1995) Model uncertainty, data mining and statistical inference (with discussion). Journal of the Royal Statistical Society, Series A 158(3): 419–466.
  7. Chatfield C (2004) The Analysis of Time Series: An Introduction. Sixth edition. Chapman and Hall, Boca Raton, FL, 333 pp.
  8. Christensen JH, Hewitson B, Busuioc A, Chen A, Gao X, Held I, Jones R, Kolli RK, Kwon W-T, Laprise R, Magaña Rueda V, Mearns L, Menéndez CG, Räisänen J, Rinke A, Sarr A, Whetton P (2007) Regional climate projections. In: Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor MMB, Miller Jr HL, Chen Z (Eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 847–940.
  9. DiCarlo L, Chow JM, Gambetta JM, Bishop LS, Johnson BR, Schuster DI, Majer J, Blais A, Frunzio L, Girvin SM, Schoelkopf RJ (2009) Demonstration of two-qubit algorithms with a superconducting quantum processor. Nature 460(7252): 240–244.
  10. Draper D (1995) Assessment and propagation of model uncertainty (with discussion). Journal of the Royal Statistical Society, Series B 57(1): 45–97.
  11. Goodess CM, Jacob D, Déqué M, Gutiérrez JM, Huth R, Kendon E, Leckebusch GC, Lorenz P, Pavan V (2009) Downscaling methods, data and tools for input to impacts assessments. In: van der Linden P, Mitchell JFB (Eds) ENSEMBLES: Climate change and its impacts at seasonal, decadal and centennial timescales. Met Office Hadley Centre, Exeter, pp 59–78.
  12. Hargreaves JC, Annan JD (2002) Assimilation of paleo-data in a simple Earth system model. Climate Dynamics 19(5–6): 371–381.
  13. Jun M, Knutti R, Nychka DW (2008) Spatial analysis to quantify numerical model bias and dependence: How many climate models are there? Journal of the American Statistical Association 103(483): 934–947.
  14. Kuhn TS (1970) The Structure of Scientific Revolutions. Second edition. University of Chicago Press, Chicago, 210 pp.
  15. Lanyon BP, Barbieri M, Almeida MP, Jennewein T, Ralph TC, Resch KJ, Pryde GJ, O’Brien JL, Gilchrist A, White AG (2009) Simplifying quantum logic using higher-dimensional Hilbert spaces. Nature Physics 5(2): 134–140.
  16. Leith NA, Chandler RE (2010) A framework for interpreting climate model outputs. Applied Statistics 59(2): 279–296.
  17. Meehl GA, Stocker TF, Collins WD, Friedlingstein P, Gaye AT, Gregory JM, Kitoh A, Knutti R, Murphy JM, Noda A, Raper SCB, Watterson IG, Weaver AJ, Zhao Z-C (2007) Global climate projections. In: Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor MMB, Miller Jr HL, Chen Z (Eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 747–845.
  18. Moss RH, Edmonds JA, Hibbard KA, Manning MR, Rose SK, van Vuuren DP, Carter TR, Emori S, Kainuma M, Kram T, Meehl GA, Mitchell JFB, Nakicenovic N, Riahi K, Smith SJ, Stouffer RJ, Thomson AM, Weyant JP, Wilbanks TJ (2010) The next generation of scenarios for climate change research and assessment. Nature 463(7282): 747–756.
  19. Nielsen MA, Chuang IL (2000) Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 676 pp.
  20. Parrenin F, Barnola J-M, Beer J, Blunier T, Castellano E, Chappellaz J, Dreyfus G, Fischer H, Fujita S, Jouzel J, Kawamura K, Lemieux-Dudon B, Loulergue L, Masson-Delmotte V, Narcisi B, Petit J-R, Raisbeck G, Raynaud D, Ruth U, Schwander J, Severi M, Spahni R, Steffensen JP, Svensson A, Udisti R, Waelbroeck C, Wolff E (2007) The EDC3 chronology for the EPICA Dome C ice core. Climate of the Past 3(3): 485–497.
  21. Paul A, Schäfer-Neth C (2005) How to combine sparse proxy data and coupled climate models. Quaternary Science Reviews 24(7–9): 1095–1107.
  22. Randall DA, Wood RA, Bony S, Colman R, Fichefet T, Fyfe J, Kattsov V, Pitman A, Shukla J, Srinivasan J, Stouffer RJ, Sumi A, Taylor KE (2007) Climate models and their evaluation. In: Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor MMB, Miller Jr HL, Chen Z (Eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 589–662.
  23. Smith RL, Tebaldi C, Nychka D, Mearns LO (2009) Bayesian modeling of uncertainty in ensembles of climate models. Journal of the American Statistical Association 104(485): 97–116.
  24. Stainforth DA, Allen MR, Tredger ER, Smith LA (2007) Confidence, uncertainty and decision-support relevance in climate predictions. Philosophical Transactions of the Royal Society of London, Series A 365(1857): 2145–2161.
  25. Stensrud DJ (2007) Parameterization Schemes: Keys to Understanding Numerical Weather Prediction Models. Cambridge University Press, Cambridge, 459 pp.
  26. Tebaldi C, Sansó B (2009) Joint projections of temperature and precipitation change from multiple climate models: A hierarchical Bayesian approach. Journal of the Royal Statistical Society, Series A 172(1): 83–106.
  27. van der Linden P, Mitchell JFB (Eds) (2009) ENSEMBLES: Climate change and its impacts at seasonal, decadal and centennial timescales. Met Office Hadley Centre, Exeter, 160 pp.
  28. von Storch H, Zwiers FW (1999) Statistical Analysis in Climate Research. Cambridge University Press, Cambridge, 484 pp.
  29. Wunsch C (2006) Discrete Inverse and State Estimation Problems. Cambridge University Press, Cambridge, 371 pp.

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. Climate Risk Analysis, Hannover, Germany
  2. Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany
