Climate Time Series Analysis, pp 383–395
Future Directions
Abstract
What changes may the future bring to climate time series analysis? First we outline (Sections 9.1, 9.2 and 9.3) more short-term objectives of “normal science” (Kuhn 1970), extensions of previous material (Chapters 1, 2, 3, 4, 5, 6, 7 and 8). Then we take a chance (Sections 9.4 and 9.5) and look at paradigm changes in climate data analysis that may be effected by virtue of strongly increased computing power (and storage capacity). Whether this technological achievement comes in the form of grid computing (Allen 1999; Allen et al. 2000; Stainforth et al. 2007) or quantum computing (Nielsen and Chuang 2000; DiCarlo et al. 2009; Lanyon et al. 2009), the assumption here is the availability of machines that are faster by a factor of ten to the power of, say, twelve, within a mid-term period of, say, less than a few decades.
Keywords
Surrogate Data · Bootstrap Resampling · Singular Spectrum Analysis · Climate Time Series · Persistence Model
9.1 Timescale modelling
Climate time series consist not only of measured values of a climate variable, but also of observed time values. Often the latter are not evenly spaced and are also influenced by dating uncertainties. Conventional time series analysis has largely ignored uneven and uncertain timescales; climate time series analysis has to take them into account.
The process that generated the times, \( \left\{ {{{t}_{X}}\left( i \right)} \right\} \) for univariate and also \( \left\{ {{{t}_{Y}}\left( j \right)} \right\} \) for bivariate series, depends on the climate archive. We have studied linear and piecewise linear processes for speleothem or sedimentary archives (Section 4.1.7) and nonparametric models for ice cores (Section 8.6.1). Such types of models are the basis for including uncertain timescales in the error determination by means of bootstrap resampling (\( \left\{ {t_{X}^{*}\left( i \right)} \right\} \) and also \( \left\{ {t_{Y}^{*}\left( j \right)} \right\} \)). In bivariate and higher-dimensional estimation problems, the joint distributions of the timescale processes are also important. See the example of the Vostok ice core (Section 8.6.1) with the coupled timescales for the ice and the gas.
Climate archive modelling should be enhanced in the future to provide accurate descriptions of uncertain timescales. Archive models should evidently include the physics of the accumulation of the archive. One may even think of physiological models describing the performance of humans in layer counting of regular sequences such as varves (Table 1.3). A second ingredient of climate archive modelling is statistical constraints, for example, a strictly monotonically increasing age–depth curve in a speleothem archive or an absolutely dated fixpoint in a marine sediment core. An exemplary paper on climate archive modelling (Parrenin et al. 2007) studies the accumulation and flow in an ice sheet into which a core is drilled. The Bayesian approach may be suitable for combining the inputs from physics and statistical constraints (Buck and Millard 2004).
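Such a piecewise linear timescale model with the monotonicity constraint can be sketched in code. This is a minimal illustration; the tie points, dating errors and rejection scheme are hypothetical assumptions, not the method of Parrenin et al. (2007):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical speleothem archive: dated tie points (depth in cm, age in ka)
# with 1-sigma dating uncertainties, as in a piecewise linear timescale model
tie_depth = np.array([0.0, 10.0, 20.0, 30.0])
tie_age = np.array([1.0, 4.0, 9.0, 12.0])
tie_sigma = np.array([0.1, 0.2, 0.3, 0.3])

def resample_timescale(depths, n_max_tries=1000):
    """Draw one timescale resample {t*(i)}: perturb the dated tie points
    within their dating errors, reject draws that violate the strictly
    increasing age-depth constraint, interpolate to the sample depths."""
    for _ in range(n_max_tries):
        age_star = tie_age + tie_sigma * rng.standard_normal(tie_age.size)
        if np.all(np.diff(age_star) > 0.0):  # statistical constraint
            return np.interp(depths, tie_depth, age_star)
    raise RuntimeError("no strictly monotone timescale draw found")

sample_depths = np.linspace(0.0, 30.0, 31)
t_star = resample_timescale(sample_depths)  # one resampled timescale
```

Repeating the draw B times yields the resampled timescales that enter timescale-aware bootstrap algorithms such as timescale-ARB or timescale-MBB.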
9.2 Novel estimation problems
Chapters 2, 3, 4, 5, and 6 presented stochastic processes and estimation algorithms for inferring the fundamental properties of univariate climate processes in the climate equation (Eq. 1.2): trend, variability, persistence, spectrum and extremes. Chapters 7 and 8 studied bivariate processes: correlation and the regression relation between two univariate processes. We believe that these chapters cover the vast majority of application fields in the climate sciences.
However, in science there is always room for asking more questions; in a quantitative approach, that means attempting to estimate further climate parameters in the uni- or bivariate setting.
An obvious example of such a novel estimation problem is SSA, mentioned in the background material of Chapter 1. This decomposition method has so far been formulated only for evenly spaced, discrete time series. Interpolation to equidistance is ruled out because it biases the objectives of the decomposition (estimates of trend, variability, etc.). SSA formulations applicable to unevenly spaced records should therefore be developed.
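For the evenly spaced case, the basic SSA decomposition can be sketched as follows. This is a minimal sketch; the window length, the toy series and the selection of the leading component are illustrative choices:

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Basic SSA for an evenly spaced series: embed the series into a
    trajectory matrix, decompose it by SVD and reconstruct the selected
    components by diagonal averaging (Hankelization)."""
    n = x.size
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[c] * np.outer(U[:, c], Vt[c]) for c in components)
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):  # diagonal averaging back to a series
        rec[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1.0
    return rec / cnt

# Toy series: linear trend plus a small oscillation; the leading SSA
# component essentially recovers the trend
t = np.linspace(0.0, 1.0, 200)
x = 3.0 * t + 0.1 * np.sin(40.0 * t)
trend_rec = ssa_reconstruct(x, window=40, components=[0])
```

The embedding step (the lagged trajectory matrix) is exactly what breaks down for uneven spacing, which is why new formulations are needed there.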
Other novel estimation approaches are expected to come from the field of nonlinear dynamical systems theory (Section 1.6). This field focuses more on data from controlled measurements or computer experiments and less on unevenly spaced, short paleoclimatic time series. A breakthrough, also with respect to SSA, may come from techniques for reconstructing the phase space at irregular points.
9.3 Higher dimensions
A higher number of dimensions may also result from describing the climate evolution in the spatial domain (e.g., X is temperature in the northern, Y in the southern hemisphere). There is a variety of high-dimensional, spatial estimation problems: multivariate regression, PCA and many more (von Storch and Zwiers 1999: Part V therein).
9.4 Climate models
Computer models render the climate system in the form of mathematical equations. The currently most sophisticated types, AOGCMs (Fig. 1.9), require the most powerful computers. Nevertheless, the rendered spatial and temporal scales are bounded by finite resolutions and finite domain sizes. Also, the number of simulated climate processes is limited.
The problem of a finite spatial resolution is currently tackled by using an AOGCM (grid size several tens to a few hundred kilometres) for the global domain and nesting into it a regional model or RM (grid size reduced by a factor \( \sim 20 \)) for a subdomain of interest (say, Europe). The AOGCM “forces” the RM (Meehl et al. 2007; Christensen et al. 2007), that is, it prescribes the conditions at the boundaries of the subdomain. Subgrid processes, not resolved even by the RM (e.g., cloud processes) and therefore not explicitly renderable by the AOGCM–RM combination, can be implicitly included by employing inferred parametric relations (e.g., between cloud formation and temperature). The AOGCM–RM combination includes many variables, \( {{\left( {X,Y,Z,...} \right)}^{\prime }}\equiv X \), from the climate at grid points, and many parameters, \( {{\left( {{{\theta }_{0}},{{\theta }_{1}},{{\theta }_{2}},...} \right)}^{\prime }}\equiv \theta \), from the parameterizations (Stensrud 2007) and other model equations. For convenience of presentation, we consider the climate variable vector, X, and the climate model parameter vector, θ.
Our premise of a future “quantum boost” by a factor \( \sim {{10}^{{12}}} \) can make regionalization dispensable and let more realistic AOGCMs (grid size several tens to a few hundred metres) become calculable, with computing times reduced from, say, a year to less than a month. Regarding the sophistication of a climate model, the increased computing power can also be utilized for including processes from the fields of biology and economy (greenhouse gas emissions (Moss et al. 2010) and “climate engineering” measures). Indeed, a finer spatial grid requires more processes to be explicitly included. Regarding the temporal scale, the boost should allow the simulation of much larger time spans (transient paleoclimate runs) by means of AOGCMs and their successors.
There exists, however, another field in which to invest computing power, namely the determination of the uncertainty of climate model results. We sketch this area in light of the methodology presented in this book: statistical estimation and bootstrap resampling.
The observations are, of course, strongly limited in the number of climate variables, geographic locations and time resolutions. Few observations have been made of, say, temperature at 1000 m height above sea level at \( 130^{\circ} \) longitude, \( 30^{\circ} \)S for the time interval from 1850 to 2010 with a spacing of \( d\left( i \right)=\Delta T \) = 30 minutes.
9.4.1 Fitting climate models to observations
Let us view climate modelling as an estimation problem. The task is to estimate the model parameters, θ, given observations, \( \left\{ {{{\text{x}}_{\circ }}\left( i \right)} \right\}_{{i=1}}^{n} \). This set shall include the “missing observations.” The task requires running the model to produce \( \left\{ {\text{x}\left( i \right)} \right\}_{{i=1}}^{n} \). The closer the model output is to the observations, the better the fit.
The parameter vector appears on the right-hand side of the equation because the model output, \( \left\{ {\text{x}\left( i \right)} \right\}_{{i=1}}^{n} \), depends on it.
The outlined procedure is not feasible with current computing power for a full estimation of AOGCM parameters. It has been performed for a simple climate model containing only three variables (Hargreaves and Annan 2002) and for an Earth system model of intermediate complexity (Paul and Schäfer-Neth 2005). The concept of fitting climate models to data is also denoted as data assimilation or state estimation (Wunsch 2006).
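The principle (run the model, compare its output with the observations, minimize the distance) can be sketched with a toy zero-dimensional energy-balance model. The model, its feedback parameter `lam` and the grid search are illustrative assumptions, far removed from a real AOGCM fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_model(lam, forcing, dt=1.0, heat_cap=10.0):
    """Toy zero-dimensional energy-balance model (illustration only):
    T(i+1) = T(i) + dt/C * (R(i) - lam * T(i))."""
    temp = np.zeros(forcing.size)
    for i in range(forcing.size - 1):
        temp[i + 1] = temp[i] + dt / heat_cap * (forcing[i] - lam * temp[i])
    return temp

# Synthetic "observations": a model run with lam = 1.2 plus measurement noise
forcing = np.ones(200)
x_obs = run_model(1.2, forcing) + 0.02 * rng.standard_normal(200)

# Estimation: run the model over a parameter grid and take the
# least-squares minimum (the closer the output, the better the fit)
grid = np.linspace(0.5, 2.0, 301)
ssq = [np.sum((run_model(lam, forcing) - x_obs) ** 2) for lam in grid]
lam_hat = grid[int(np.argmin(ssq))]
```

For an AOGCM, every point of such a grid would cost one full model run, which is why the parameter search itself consumes the envisaged computing power.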
The observation resample, \( \text{x}_{\circ }^{*}\left( i \right) \), can be obtained via the surrogate data bootstrap (Section 3.3.3), taking into account the errors of the observation devices, the distributional shapes (which may be Gaussian or not), the covariances (which may be rather small) and the “internal climate variability” (which may have to be estimated by means of separate model experiments). The model output resample, \( \text{x}^{*}\left( i \right) \), incorporates a new (trial) set of parameters, θ*. However, it should also be based on a random initial state, \( \text{x}^{*}\left( 1 \right) \), because the initial conditions are not exactly known. \( \text{x}^{*}\left( 1 \right) \) may be taken randomly from a set of time series values of a climate model run without forcing components (stationarity). This “ensemble technique” is already being applied to quantify the uncertainty component owing to imperfectly known initial conditions (Randall et al. 2007; van der Linden and Mitchell 2009). Also, the forcing variable, R(i), may have to be described stochastically in order to be included in the surrogate data approach.
The replications, \( \left\{ {{{{\hat{\theta }}}^{{*b}}}} \right\}_{{b=1}}^{B} \), serve in the usual manner (Section 3.4) for constructing CIs. Of particular interest should be the joint PDF of the climate model parameter estimators, which may be described by means of confidence regions in the parameter hyperspace (Smith et al. 2009; Tebaldi and Sansó 2009). Realistic climate model error and CI determination does not require a handful of runs (current ensemble technique) but rather B runs, with B in the usual order of 2000 or even higher (because of the dimensionality).
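In code, the loop over B trial runs and the CI construction from the replications might look as follows; the one-parameter linear response model stands in, purely for illustration, for a climate model:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observations" from a hypothetical linear response x = theta * r + noise
r = rng.standard_normal(100)                     # forcing-like input
x_obs = 0.7 * r + 0.3 * rng.standard_normal(100)

theta_hat = np.sum(r * x_obs) / np.sum(r * r)    # least-squares estimate
sigma = (x_obs - theta_hat * r).std(ddof=1)      # residual noise size

B = 2000
reps = np.empty(B)                               # replications theta*^b
for b in range(B):
    # Surrogate data: model output plus new noise of the estimated size
    x_star = theta_hat * r + sigma * rng.standard_normal(100)
    reps[b] = np.sum(r * x_star) / np.sum(r * r)

ci_low, ci_high = np.percentile(reps, [2.5, 97.5])  # 95% percentile CI
```

With several parameters, the array of replications would be multivariate, and confidence regions (rather than intervals) would be read off its empirical joint distribution.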
9.4.2 Forecasting with climate models
Models are employed to forecast future climate, \( \text{x}\left( {n+1} \right) \), at time \( t\left( {n+1} \right) \). (Indeed, forecasts are made for many time steps to cover the typical range from the present to the year 2100.) This is achieved in our vision by a run of the model employing the estimated, optimal parameters, \( \hat{\theta } \). That run also has to use a guess of the future forcing, \( \mathbf{R}\left( {n+1} \right) \).
Of crucial importance, scientifically and socioeconomically, is to determine the size of the forecasting error. The bootstrap methodology, utilized for that purpose in the bivariate setting (Section 8.5), should be helpful also in the high-dimensional setting.
The recommendation is to produce forecast resamples, \( {{\text{x}}^{*}}\left( {n+1} \right) \), from which to calculate standard errors, CIs, confidence bands (over a time span), and so forth.

The parameterization uncertainty can be taken into account by resampling from the set of replications, \( \left\{ {{{{\hat{\theta }}}^{{*b}}}} \right\}_{{b=1}}^{B} \). This preserves the covariance structure of the parameter estimates.

The initialcondition uncertainty can be taken into account by means of the ensemble technique.

The forcing uncertainty may be difficult to include in a quantitative manner. This step will likely necessitate the use of separate forcing models.
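The three uncertainty sources listed above can be combined in one forecast resampling loop. All numbers, the parameter replications and the one-step toy model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ingredients of the forecast uncertainty
theta_reps = 1.2 + 0.05 * rng.standard_normal(2000)   # parameter replications
init_ensemble = 0.8 + 0.02 * rng.standard_normal(50)  # initial-condition ensemble
forcing_guess, forcing_sigma = 1.0, 0.1               # stochastic future forcing

def step(x, lam, forcing, dt=1.0, heat_cap=10.0):
    """One forward step of a toy energy-balance model (illustration only)."""
    return x + dt / heat_cap * (forcing - lam * x)

B = 2000
fc = np.empty(B)                                      # forecast resamples x*(n+1)
for b in range(B):
    lam_star = rng.choice(theta_reps)                 # parameterization uncertainty
    x0_star = rng.choice(init_ensemble)               # initial-condition uncertainty
    r_star = forcing_guess + forcing_sigma * rng.standard_normal()  # forcing uncertainty
    fc[b] = step(x0_star, lam_star, r_star)

band = np.percentile(fc, [5.0, 95.0])                 # 90% forecast interval
```

Drawing the parameter vector whole from the replications, rather than component by component, is what preserves the covariance structure of the parameter estimates.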
9.4.3 Design of the cost function

A least-squares technique is mandatory. It seems impossible to write down a likelihood function (for maximization), owing to the size of the body of climate model equations. One may wish to make the sum of squares more robust with respect to “outliers.” On the other hand, one may instead give the “outliers” more weight in situations where the focus is on modelling the climate extremes.

GLS, employing the covariance matrices (variability, persistence) of the many climate variables, is a possible technique to reduce the estimation standard errors. The normalization (variability) produces dimensionless SSQG terms for each variable, which can be processed further (e.g., summed up).

A problem is multicollinearity (correlated predictors), stemming from spatial dependence among the climate variables (neighbouring grid points). This may call for reducing the number of variables in the cost function by means of spatial binning. PCA techniques should help in evaluating geographically meaningful bins (regions).

Errors in the observations (\( {{S}_{X}},{{S}_{Y}},{{S}_{Z}},... \)) should lead researchers to consider techniques like WLSXY estimation (Section 8.1.2) to reduce estimation bias.

Further weighting could be performed “in the time domain” to enforce, for example, that the most recent years are simulated more accurately.

The degrees of freedom, ν, of the observation–model combination can be taken into account (a simple division by ν).

One may put bounds on the θ hyperspace to exclude estimation results that are inconsistent with physics (hard bounds) or prior knowledge (soft bounds). Bayesian formulas may help here.
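Several of the design points above can be collected into a sketch of such a cost function. The function signature and the toy usage are assumptions for illustration, not an implementation from this book:

```python
import numpy as np

def cost(theta, model, x_obs, obs_var, time_weights, bounds, nu):
    """Sketch of a cost function: variance-normalized (dimensionless)
    sums of squares per variable (GLS-like), extra weighting in the
    time domain, hard parameter bounds and division by the degrees
    of freedom nu."""
    lo, hi = bounds
    if np.any(theta < lo) or np.any(theta > hi):
        return np.inf                      # hard (physical) bounds
    resid = x_obs - model(theta)           # shape (n_time, n_vars)
    ssqg = (time_weights[:, None] * resid ** 2 / obs_var[None, :]).sum()
    return ssqg / nu

# Toy usage: one parameter, 20 time steps, 2 variables
model = lambda th: th[0] * np.ones((20, 2))
x_obs = np.ones((20, 2))
weights = np.linspace(0.5, 1.5, 20)        # recent years weighted more
bounds = (np.array([0.0]), np.array([5.0]))
c_fit = cost(np.array([1.0]), model, x_obs, np.ones(2), weights, bounds, nu=38)
c_out = cost(np.array([10.0]), model, x_obs, np.ones(2), weights, bounds, nu=38)
```

The normalization by the observation variances is what makes the per-variable terms dimensionless and therefore summable across variables.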
The envisaged availability of “quantum computing power” does not release us from the task of constructing efficient methods to search through the hyperspace to locate the minimum of the cost function: gradient techniques, Brent’s search, hybrid procedures or Bayesian approaches (Markov chain Monte Carlo; see Hargreaves and Annan (2002) and Leith and Chandler (2010)).
9.4.4 Climate model bias
In the example of precipitation in central Europe, there are indications from a range of AOGCM–RM combinations that the simulated values exceed the observed values for the time interval from 1950 to the very recent past (Jacob D 2009, personal communication); that is, the climate models systematically overestimate precipitation. Similar overestimations were found for the region of Scandinavia (Goodess et al. 2009).
9.5 Optimal estimation
Increased computing power would also allow us to perform optimal estimation. We have sketched this concept in previous parts of this book (Sections 6.2.7 and 7.5.3.1). Not only climatology but also other branches of science may benefit from optimal estimation.
Central to investigation in the natural sciences, such as climatology, is to infer the truth from the data. This calls for the statistical language. In quantitative climatology, the investigative questions can be translated into a parameter, θ, which needs to be estimated using the data. The investigation cycles through loops: question, estimation, refined question based on the estimation result, new estimation, and so forth.
An estimator, \( \hat{\theta } \), is a recipe for guessing θ using the data. Since the sample size is less than infinity and the sampled climate system contains unknown influences (noise), we cannot expect \( \widehat{\theta } \) to equal θ. However, we can calculate the size of that error, the uncertainty. This leads to the measures \( \text{s}{{\text{e}}_{{\widehat{\theta }}}} \), \( \text{bia}{{\text{s}}_{{\widehat{\theta }}}} \), \( \text{RMS}{{\text{E}}_{{\widehat{\theta }}}} \) and the confidence interval, \( \text{CI}_{{\widehat{\theta },1-2\alpha }} \), which is thought to include θ with probability 1−2α. Without the information contained in such measures, it is difficult to assess how close \( \widehat{\theta } \) is to θ: estimates without error bars are useless.
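Given replications of an estimator, these error measures are computed in a few lines. This is a minimal sketch; the replications here are simulated means of Gaussian samples around a known θ:

```python
import numpy as np

rng = np.random.default_rng(3)

theta = 5.0  # "true" parameter of the simulation
# Replications of the estimator (here: the sample mean of n = 50 values)
reps = np.array([theta + rng.standard_normal(50).mean() for _ in range(2000)])

se_hat = reps.std(ddof=1)                          # standard error
bias_hat = reps.mean() - theta                     # bias
rmse_hat = np.sqrt(np.mean((reps - theta) ** 2))   # RMSE
alpha = 0.025
ci = np.percentile(reps, [100 * alpha, 100 * (1 - alpha)])  # level 1 - 2*alpha
```

In practice θ is of course unknown; the same formulas are then applied to bootstrap replications around the estimate \( \hat{\theta } \).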
For simple estimation problems (e.g., mean estimation) and simple noise properties (e.g., Gaussian distributional shape), the error measures can be analytically derived via the PDF of an estimator. However, climate is more complex, as regards the noise as well as the estimation problem. This book therefore advocates the bootstrap resampling approach, which allows the analysis of complex problems with realistic (i.e., complex) properties such as non-Gaussian shape or serial dependence.
Climate is a paradigm of a complex system that requires the bootstrap for its analysis. In addition, climate opens the new problem dimensions of unequally spaced series and timescale errors. This book has presented various bootstrap algorithms that adapt closely to the estimation problem imposed by the data: ARB, MBB, SB, surrogate data, timescale-ARB, timescale-MBB, pairwise-ARB, pairwise-MBB and pairwise-MBBres. It also described algorithms to support bootstrap resampling and CI construction: block length selection, calibration, and the CI types normal, Student’s t, percentile and BCa.
The critical question is: What is the best method for inferring the truth from the data? What is the optimal estimation method, and how are the most accurate CIs constructed?
Future, strongly increased computing power will allow us to approach that question by means of Monte Carlo experiments. We outline this optimal estimation approach (Fig. 9.1). We reiterate that optimal estimation is not limited to the climate sciences.
The hyperspace of climate estimation has many, but not infinite dimensions. It consists of the three subspaces Monte Carlo design, method and measure.
The Monte Carlo design (Fig. 9.1) describes the data-generating process. The design is used to generate artificial data, to which the method is applied. The design should, in some sense, cover the estimation problem (data and estimation) to be carried out. One group of dimensions is occupied by the type of estimation model and its parameters. For example, one may be interested in a linear regression model with the two parameters intercept and slope (Chapters 4 and 8). To restate, the Monte Carlo parameters (e.g., prescribed intercept and slope) should be close to the estimated parameters (estimated intercept and slope). The other group of dimensions in the Monte Carlo subspace describes the sample size (prescribed n, which should be close to the size of the sample at hand), the spacing (again, similar to the spacing of the sample) and the noise properties (also similar). An option is to invest three dimensions to model the persistence of the noise as an ARFIMA(\( p,\delta ,q \)) process and one or two to model the shape (skewness, kurtosis). Heteroscedasticity may also be modelled. The ARFIMA process contains the preferred parsimonious, embedding-problem-free AR(1) process (\( p=1,\delta =0,q=0 \)). Some dimensions have integer values (e.g., the ARFIMA parameter p), some have real values (e.g., the slope parameter). Timescale errors may also be modelled (additional dimensions).
The method subspace (Fig. 9.1) describes the estimation and CI construction. The ticks along the estimator dimension are named least squares, maximum likelihood, and so forth. CI construction requires more dimensions: one for distinguishing between classical and bootstrap CIs, and several for detailing the bootstrap methodology (block length selection for MBB, calibration, subsampling, etc.) and calculating the interval bounds from the replications. Consider, for example, the bruteforce block length selector (Berkowitz and Kilian 2000): one dimension with integer values between 1 and n−1.
The measure subspace (Fig. 9.1) describes how to detect the optimal estimation method for the Monte Carlo experiment: CI accuracy and width, RMSE, bias, robustness, and so forth. It may also make sense to consider joint measures (e.g., CI accuracy and robustness). As an example, consider the following specification of a Monte Carlo experiment for the linear regression problem:

\( \theta ={{\beta }_{0}} \) (intercept) and β_{1} (slope);

prescribed \( {{\beta }_{0}}=2,{{\beta }_{1}}=2 \);

\( n\in \left\{ {10,20,50,100,200,500,1000} \right\} \);

spacing: even and uneven (timescale errors);

shape: Gaussian and lognormal;

persistence: AR(1), AR(2) and ARFIMA(0, 0.25, 0);

estimator: least squares only;

resampling: ARB, MBB, subsampling, timescaleARB, timescaleMBB and pairwiseMBB;

CI type: classical and bootstrap BCa;

confidence level: 90, 95 and 99%;

calibration loop: none;
and

measure: RMSE, CI accuracy and CI length.
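A single cell of this design can be run as a sketch. It evaluates the coverage measure for classical 95% CIs of the slope when the noise has AR(1) persistence, which the classical CI ignores; the prescribed values β0 = 2, β1 = 2, n = 100 follow the design above, while the AR(1) parameter 0.5 is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

beta0, beta1, n, a = 2.0, 2.0, 100, 0.5   # prescribed design values
t = np.arange(n, dtype=float)             # even spacing

def ar1_noise(n, a):
    """Gaussian AR(1) noise, the parsimonious persistence model."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + rng.standard_normal()
    return x

nsim, cover = 1000, 0
for _ in range(nsim):
    x = beta0 + beta1 * t + ar1_noise(n, a)
    tc = t - t.mean()
    b1 = np.sum(tc * x) / np.sum(tc * tc)         # OLS slope estimate
    resid = x - x.mean() - b1 * tc
    se = np.sqrt(resid @ resid / (n - 2) / np.sum(tc * tc))  # classical se
    if abs(b1 - beta1) < 1.96 * se:               # nominal 95% CI covers?
        cover += 1

coverage = cover / nsim  # empirical coverage: the CI accuracy measure
```

Under persistence the empirical coverage falls clearly below the nominal 95%, which is exactly the kind of deficiency the measure subspace is designed to detect and which persistence-aware bootstrap CIs are meant to repair.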
We have found “acceptable” results (mainly judged via CI accuracy) from the bootstrap method applied to Monte Carlo samples generated from designed processes that are considered close to the climate processes. These positive results have given us confidence that the results (estimate with CI) from analysing the observed, real climate time series are valid. However, we have to concede that more accurate methods may exist, resulting in particular from (computing-intensive) CI calibration. This may be of relevance especially for small sample sizes.
The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a time series, \( \left\{ {t\left( i \right),x\left( i \right)} \right\}_{{i=1}}^{n} \), some prior information (e.g., measurement standard errors, age–depth curve) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace (Fig. 9.1) to find the suitable method, that is, the mode of estimation and CI construction that optimizes a selected measure for prescribed values close to the initial estimates. Here, too, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate time series.
References
Allen M (1999) Do-it-yourself climate prediction. Nature 401(6754): 642.
Allen MR, Stott PA, Mitchell JFB, Schnur R, Delworth TL (2000) Quantifying the uncertainty in forecasts of anthropogenic climate change. Nature 407(6804): 617–620.
Berkowitz J, Kilian L (2000) Recent developments in bootstrapping time series. Econometric Reviews 19(1): 1–48.
Buck CE, Millard AR (Eds) (2004) Tools for Constructing Chronologies: Crossing Disciplinary Boundaries. Springer, London, 257 pp.
Candolo C, Davison AC, Demétrio CGB (2003) A note on model uncertainty in linear regression. The Statistician 52(2): 165–177.
Chatfield C (1995) Model uncertainty, data mining and statistical inference (with discussion). Journal of the Royal Statistical Society, Series A 158(3): 419–466.
Chatfield C (2004) The Analysis of Time Series: An Introduction. Sixth edition. Chapman and Hall, Boca Raton, FL, 333 pp.
Christensen JH, Hewitson B, Busuioc A, Chen A, Gao X, Held I, Jones R, Kolli RK, Kwon WT, Laprise R, Magaña Rueda V, Mearns L, Menéndez CG, Räisänen J, Rinke A, Sarr A, Whetton P (2007) Regional climate projections. In: Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor MMB, Miller Jr HL, Chen Z (Eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 847–940.
DiCarlo L, Chow JM, Gambetta JM, Bishop LS, Johnson BR, Schuster DI, Majer J, Blais A, Frunzio L, Girvin SM, Schoelkopf RJ (2009) Demonstration of two-qubit algorithms with a superconducting quantum processor. Nature 460(7252): 240–244.
Draper D (1995) Assessment and propagation of model uncertainty (with discussion). Journal of the Royal Statistical Society, Series B 57(1): 45–97.
Goodess CM, Jacob D, Déqué M, Gutiérrez JM, Huth R, Kendon E, Leckebusch GC, Lorenz P, Pavan V (2009) Downscaling methods, data and tools for input to impacts assessments. In: van der Linden P, Mitchell JFB (Eds) ENSEMBLES: Climate change and its impacts at seasonal, decadal and centennial timescales. Met Office Hadley Centre, Exeter, pp 59–78.
Hargreaves JC, Annan JD (2002) Assimilation of paleodata in a simple Earth system model. Climate Dynamics 19(5–6): 371–381.
Jun M, Knutti R, Nychka DW (2008) Spatial analysis to quantify numerical model bias and dependence: How many climate models are there? Journal of the American Statistical Association 103(483): 934–947.
Kuhn TS (1970) The Structure of Scientific Revolutions. Second edition. University of Chicago Press, Chicago, 210 pp.
Lanyon BP, Barbieri M, Almeida MP, Jennewein T, Ralph TC, Resch KJ, Pryde GJ, O’Brien JL, Gilchrist A, White AG (2009) Simplifying quantum logic using higher-dimensional Hilbert spaces. Nature Physics 5(2): 134–140.
Leith NA, Chandler RE (2010) A framework for interpreting climate model outputs. Applied Statistics 59(2): 279–296.
Meehl GA, Stocker TF, Collins WD, Friedlingstein P, Gaye AT, Gregory JM, Kitoh A, Knutti R, Murphy JM, Noda A, Raper SCB, Watterson IG, Weaver AJ, Zhao ZC (2007) Global climate projections. In: Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor MMB, Miller Jr HL, Chen Z (Eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 747–845.
Moss RH, Edmonds JA, Hibbard KA, Manning MR, Rose SK, van Vuuren DP, Carter TR, Emori S, Kainuma M, Kram T, Meehl GA, Mitchell JFB, Nakicenovic N, Riahi K, Smith SJ, Stouffer RJ, Thomson AM, Weyant JP, Wilbanks TJ (2010) The next generation of scenarios for climate change research and assessment. Nature 463(7282): 747–756.
Nielsen MA, Chuang IL (2000) Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 676 pp.
Parrenin F, Barnola JM, Beer J, Blunier T, Castellano E, Chappellaz J, Dreyfus G, Fischer H, Fujita S, Jouzel J, Kawamura K, Lemieux-Dudon B, Loulergue L, Masson-Delmotte V, Narcisi B, Petit JR, Raisbeck G, Raynaud D, Ruth U, Schwander J, Severi M, Spahni R, Steffensen JP, Svensson A, Udisti R, Waelbroeck C, Wolff E (2007) The EDC3 chronology for the EPICA Dome C ice core. Climate of the Past 3(3): 485–497.
Paul A, Schäfer-Neth C (2005) How to combine sparse proxy data and coupled climate models. Quaternary Science Reviews 24(7–9): 1095–1107.
Randall DA, Wood RA, Bony S, Colman R, Fichefet T, Fyfe J, Kattsov V, Pitman A, Shukla J, Srinivasan J, Stouffer RJ, Sumi A, Taylor KE (2007) Climate models and their evaluation. In: Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor MMB, Miller Jr HL, Chen Z (Eds) Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, pp 589–662.
Smith RL, Tebaldi C, Nychka D, Mearns LO (2009) Bayesian modeling of uncertainty in ensembles of climate models. Journal of the American Statistical Association 104(485): 97–116.
Stainforth DA, Allen MR, Tredger ER, Smith LA (2007) Confidence, uncertainty and decision-support relevance in climate predictions. Philosophical Transactions of the Royal Society of London, Series A 365(1857): 2145–2161.
Stensrud DJ (2007) Parameterization Schemes: Keys to Understanding Numerical Weather Prediction Models. Cambridge University Press, Cambridge, 459 pp.
Tebaldi C, Sansó B (2009) Joint projections of temperature and precipitation change from multiple climate models: A hierarchical Bayesian approach. Journal of the Royal Statistical Society, Series A 172(1): 83–106.
van der Linden P, Mitchell JFB (Eds) (2009) ENSEMBLES: Climate change and its impacts at seasonal, decadal and centennial timescales. Met Office Hadley Centre, Exeter, 160 pp.
von Storch H, Zwiers FW (1999) Statistical Analysis in Climate Research. Cambridge University Press, Cambridge, 484 pp.
Wunsch C (2006) Discrete Inverse and State Estimation Problems. Cambridge University Press, Cambridge, 371 pp.