Cross-validation analysis of bias models in Bayesian multi-model projections of climate

Published in: Climate Dynamics

Abstract

Climate change projections are commonly based on multi-model ensembles of climate simulations. In this paper we consider the choice of bias model in Bayesian multi-model projections. Buser et al. (Clim Res 44(2–3):227–241, 2010a) introduced a hybrid bias model that combines the commonly used constant bias and constant relation bias assumptions; the hybrid model includes a weighting parameter that balances these two bias models. In this study, we use a cross-validation approach to determine which bias model, or which value of the bias parameter, leads to optimal climate change projections in a specific sense. The analysis is carried out for summer and winter season means of 2 m temperatures, spatially averaged over the IPCC SREX regions, using 19 model runs from the CMIP5 data set. The cross-validation approach is applied to calculate the (in this specific sense) optimal bias parameters for projecting the temperature change from the control period (1961–2005) to the scenario period (2046–2090). The results are compared with those of the Buser et al. (2010a) method, which includes the bias parameter as one of the unknown parameters to be estimated from the data.


Notes

  1. By Bayes' formula (1), the posterior distribution \(p(\varTheta |\mathcal {D})\) is proportional to the product of the likelihood (9) and the prior distributions given in Table 3. For example, one can see that (10) and (11) are Gaussian densities by re-organizing the terms.

  2. Figures S35−S61 in the supplementary material show the estimates for each region and also estimates of the 2D joint histogram of the parameters \(\varDelta \mu\) and \(\kappa\) for each region.

References

  • Bellprat O, Kotlarski S, Lüthi D, Schär C (2013) Physical constraints for temperature biases in climate models. Geophys Res Lett 40:4042–4047

  • Boberg F, Christensen J (2012) Overestimation of Mediterranean summer temperature projections due to model deficiencies. Nat Clim Change 2:433–436

  • Buser C, Künsch H, Lüthi D, Wild M, Schär C (2009) Bayesian multi-model projection of climate: bias assumptions and interannual variability. Clim Dyn 33(6):849–868. doi:10.1007/s00382-009-0588-6

  • Buser C, Künsch H, Schär C (2010a) Bayesian multi-model projections of climate: generalization and application to ensembles results. Clim Res 44(2–3):227–241

  • Buser C, Künsch H, Weber A (2010b) Biases and uncertainty in climate projections. Scand J Stat 37:179–199

  • Candille G, Talagrand O (2005) Evaluation of probabilistic prediction systems for a scalar variable. Q J R Meteorol Soc 131:2131–2150

  • Christensen J, Boberg F (2012) Temperature dependent climate projection deficiencies in CMIP5 models. Geophys Res Lett 39:L24705

  • Christensen J, Boberg F (2013) Correction to temperature dependent climate projection deficiencies in CMIP5 models. Geophys Res Lett 40:2307–2308

  • Christensen J, Boberg F, Christensen O, Lucas-Picher P (2008) On the need for bias correction of regional climate change projections of temperature and precipitation. Geophys Res Lett 35:L20709

  • Gelman A, Carlin J, Stern H, Rubin D (2003) Bayesian data analysis, 2nd edn. Chapman & Hall/CRC, Boca Raton

  • Gilks W, Richardson S, Spiegelhalter D (1996) Markov chain Monte Carlo in practice. Chapman & Hall, Boca Raton

  • Gneiting T, Raftery A (2007) Strictly proper scoring rules, prediction, and estimation. J Am Stat Assoc 102:359–378

  • Grimit EP, Gneiting T, Berrocal VJ, Johnson NA (2006) The continuous ranked probability score for circular variables and its application to mesoscale forecast ensemble verification. Q J R Meteorol Soc 132:2925–2942

  • Harris I, Jones P, Osborn T, Lister D (2014) Updated high-resolution grids of monthly climatic observations – the CRU TS3.10 dataset. Int J Climatol 34(3):623–642

  • Heaton M, Greasby T, Sain S (2013) Modeling uncertainty in climate using ensembles of regional and global climate models and multiple observation-based data sets. SIAM/ASA J Uncertain Quantif 1:535–559

  • Hersbach H (2000) Decomposition of the continuous ranked probability score for ensemble prediction systems. Weather Forecast 15:559–570

  • Ho C, Stephenson D, Collins M, Ferro C, Brown S (2012) Calibration strategies: a source of additional uncertainty in climate change projections. Bull Amer Meteor Soc 93:21–26

  • Jolliffe I, Stephenson D (eds) (2011) Forecast verification: a practitioner’s guide in atmospheric science, 2nd edn. Wiley, New York

  • Kerkhoff C, Künsch H, Schär C (2014) Assessment of bias assumptions for climate models. J Clim 27(17):6799–6818

  • Maraun D (2012) Nonstationarities of regional climate model biases in European seasonal mean temperature and precipitation sums. Geophys Res Lett 39:L06706

  • McQuarrie A, Tsai CL (1998) Regression and time series model selection. World Scientific, Singapore

  • Räisänen J, Ylhäisi J (2012) Can model weighting improve probabilistic projections of climate change? Clim Dyn 39:1981–1998

  • Räisänen J, Ruokolainen L, Ylhäisi J (2010) Weighting of model results for improving best estimates of climate change. Clim Dyn 35:407–422

  • Seneviratne S, Nicholls N, Easterling D, Goodess C, Kanae S, Kossin J, Luo Y, Marengo J, McInnes K, Rahimi M, Reichstein M, Sorteberg A, Vera C, Zhang X (2012) Changes in climate extremes and their impacts on the natural physical environment. In: IPCC (ed) Managing the risks of extreme events and disasters to advance climate change adaptation. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change (IPCC), pp 109–230

  • Smith R, Tebaldi C, Nychka D, Mearns L (2009) Bayesian modeling of uncertainty in ensembles of climate models. J Am Stat Assoc 104:97–116

  • Stanski H, Wilson L, Burrows W (1989) Survey of common verification methods in meteorology. Research report 89-5, Atmospheric Environment Service Forecast Research Division, Canada

  • Tebaldi C, Knutti R (2007) The use of the multi-model ensemble in probabilistic climate projections. Philos Trans R Soc A 365:2053–2075

  • Tebaldi C, Sansó B (2009) Joint projections of temperature and precipitation change from multiple climate models: a hierarchical Bayesian approach. J R Stat Soc A 172:83–106

  • Tebaldi C, Smith R, Nychka D, Mearns L (2005) Quantifying uncertainty in projection of regional climate change: a Bayesian approach to the analysis of multimodel ensembles. J Clim 18:1524–1540

  • Thomson AM, Calvin KV, Smith SJ, Kyle GP, Volke A, Patel P, Delgado-Arias S, Bond-Lamberty B, Wise MA, Clarke LE, Edmonds JA (2011) RCP4.5: a pathway for stabilization of radiative forcing by 2100. Clim Change 109(1–2):77–94. doi:10.1007/s10584-011-0151-4

  • Weigel A, Liniger M, Appenzeller C (2008) Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts? Q J R Meteorol Soc 134:241–260

  • Wilks D (2006) Comparison of ensemble-MOS methods in the Lorenz 96 setting. Meteorol Appl 13:243–256

Acknowledgments

This work has been supported by strategic funding of the University of Eastern Finland and by funding from the Academy of Finland (application numbers 213476, 250215 and 272041; Finnish Programme for Centre of Excellence in Research 2006–2011, 2012–2017 and 2014–2019). We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table 2 of this paper) for producing and making available their model output. For CMIP, the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals.

Author information

Corresponding author

Correspondence to J. M. J. Huttunen.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 7708 KB)

Appendix

1.1 Continuous ranked probability score

Let x be a scalar variable (e.g. 2 m temperature). Suppose that a probabilistic forecast of x is given by a density p(x), and that we also have an observation \(x^{\text {obs}}\) of x. The Continuous Ranked Probability Score (CRPS) (Stanski et al. 1989; Hersbach 2000; Candille and Talagrand 2005; Grimit et al. 2006) is defined as

$$\begin{aligned} \mathrm {CRPS}=\int _{-\infty }^\infty \left[ P^{\text {pred}}(x)-P^{\text {obs}}(x)\right] ^2{\,\mathrm d}x \end{aligned}$$
(14)

where \(P^{\text {pred}}(x)=\int _{-\infty }^x p(x'){\,\mathrm d}x'\) is the cumulative distribution function of p(x) and \(P^{\text {obs}}\) is the cumulative distribution function for the observation:

$$\begin{aligned} P^{\text {obs}}(x)=\begin{cases} 0, &{} \text {if }x<x^{\text {obs}}\\ 1, &{} \text {if }x\ge x^{\text {obs}}\end{cases} = H(x-x^{\text {obs}}), \end{aligned}$$

where H is the Heaviside function (\(H(x)=1\) if \(x\ge 0\) and 0 otherwise).
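The defining integral (14) can be evaluated numerically. The sketch below (an illustration of the definition, not code from the paper; the function names are our own) approximates (14) with the trapezoidal rule over a wide finite interval, here for a standard normal forecast:

```python
import math

def crps_numeric(cdf, x_obs, lo=-20.0, hi=20.0, n=200001):
    """Approximate the CRPS integral (Eq. 14): the squared difference
    between the forecast CDF and the observation's step CDF,
    integrated with the trapezoidal rule over [lo, hi]."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        step = 1.0 if x >= x_obs else 0.0  # H(x - x_obs)
        val = (cdf(x) - step) ** 2
        total += val if 0 < i < n - 1 else 0.5 * val
    return total * h

def std_normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

For a standard normal forecast with observation 0, the exact CRPS is \(2/\sqrt{2\pi}-1/\sqrt{\pi}\approx 0.2337\), which the quadrature reproduces closely.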

In this paper, the “distance” between the predictions of the future temperatures \(Y_{0,t}\) (when the \(\ell\)th model is taken as the “truth” in the cross-validation) and the actual future temperatures \(Y_{\ell ,t}\) is measured using the CRPS. The total CRPS score is calculated as the mean of the CRPS values over all years \(t=1,\ldots ,T\). By Eq. (13), the prediction distribution is a weighted sum of Gaussian distributions, and the CRPS for such a Gaussian mixture can be calculated in closed form using the expression given in Grimit et al. (2006).
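To illustrate the closed-form evaluation, here is a minimal sketch (function names are our own) of the Grimit et al. (2006) expression, which writes the CRPS of a mixture \(\sum_i w_i\,N(\mu_i,\sigma_i^2)\) as \(\sum_i w_i A(x^{\text{obs}}-\mu_i,\sigma_i^2)-\tfrac{1}{2}\sum_{i,j} w_i w_j A(\mu_i-\mu_j,\sigma_i^2+\sigma_j^2)\), where \(A(\mu,\sigma^2)\) is the mean absolute value of a \(N(\mu,\sigma^2)\) variable:

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def _A(mu, sigma2):
    """E|X| for X ~ N(mu, sigma2), the building block of the closed form."""
    sigma = math.sqrt(sigma2)
    if sigma == 0.0:
        return abs(mu)
    z = mu / sigma
    return mu * (2.0 * _Phi(z) - 1.0) + 2.0 * sigma * _phi(z)

def crps_gaussian_mixture(weights, means, stds, x_obs):
    """Closed-form CRPS of a Gaussian mixture forecast against x_obs
    (Grimit et al. 2006): E|X - x_obs| - 0.5 * E|X - X'|."""
    first = sum(w * _A(x_obs - m, s * s)
                for w, m, s in zip(weights, means, stds))
    second = 0.0
    for wi, mi, si in zip(weights, means, stds):
        for wj, mj, sj in zip(weights, means, stds):
            second += wi * wj * _A(mi - mj, si * si + sj * sj)
    return first - 0.5 * second
```

With a single component, this reduces to the familiar closed-form CRPS of a Gaussian forecast, which provides a convenient sanity check.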

About this article

Cite this article

Huttunen, J.M.J., Räisänen, J., Nissinen, A. et al. Cross-validation analysis of bias models in Bayesian multi-model projections of climate. Clim Dyn 48, 1555–1570 (2017). https://doi.org/10.1007/s00382-016-3160-1
