
Calibration, Validation, and Confirmation

  • Mathias Frisch
Chapter
Part of the Simulation Foundations, Methods and Applications book series (SFMA)

Abstract

This chapter examines the role of parameter calibration in the confirmation and validation of complex computer simulation models. I ask to what extent calibration data can confirm or validate the calibrated model, focusing in particular on Bayesian approaches to confirmation. Distinguishing several Bayesian approaches, I argue that complex simulation models exhibit a predictivist effect: they constitute a case in which predictive success, as opposed to the mere accommodation of evidence, provides a more stringent test of the model. Data used in tuning therefore do not validate or confirm a model to the same extent as data the model successfully predicts.
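
To fix ideas, here is a minimal sketch in the standard Bayesian framework (my notation, not drawn from the chapter itself): evidence $E$ confirms a model hypothesis $H$ just in case conditioning on $E$ raises its probability,

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;>\; P(H).$$

The problem of old evidence arises because data already used in tuning are known with certainty at the time of assessment, so $P(E) = 1$ and hence $P(H \mid E) = P(H)$: on this naive reading, calibration data provide no confirmation at all, whereas data the model successfully predicts, for which $P(E) < 1$, can.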

Keywords

Predictivism; Bayesian epistemology; Problem of old evidence; Tuning; Climate models

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute for Philosophy, Leibniz Universität Hannover, Hanover, Germany
