Calibration, Validation, and Confirmation
This chapter examines the role of parameter calibration in the confirmation and validation of complex computer simulation models. I examine the extent to which calibration data can confirm or validate the calibrated model, focusing in particular on Bayesian approaches to confirmation. I distinguish several Bayesian approaches to confirmation and argue that complex simulation models exhibit a predictivist effect: they constitute a case in which predictive success, as opposed to the mere accommodation of evidence, provides the more stringent test of the model. Data used in tuning do not validate or confirm a model to the same extent as data the model successfully predicts.
Keywords: Predictivism · Bayesian epistemology · Problem of old evidence · Tuning · Climate models
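
A minimal formal sketch, assuming the standard incremental notion of Bayesian confirmation that the abstract alludes to (the chapter's own formulation may differ):

```latex
% Incremental (Bayesian) confirmation: evidence E confirms hypothesis H
% iff learning E raises the probability of H.
\[
E \text{ confirms } H \quad\Longleftrightarrow\quad P(H \mid E) > P(H).
\]

% The problem of old evidence: if E is already known, P(E) = 1, and hence
% P(E | H) = 1 as well, so Bayes' theorem gives
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)} \;=\; P(H),
\]
% and already-accommodated evidence -- such as data used in tuning --
% appears unable to confirm H at all on this simple account.
```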