How Do the Validations of Simulations and Experiments Compare?

  • Anouk Barberousse
  • Julie Jebeile
Chapter
Part of the Simulation Foundations, Methods and Applications book series (SFMA)

Abstract

Whereas experiments and computer simulations seem very different at first sight because the former, but not the latter, involve interactions with material properties, we argue that this difference is not so important with respect to validation, as far as epistemology is concerned. Major differences nevertheless remain from the methodological point of view. We present and defend this distinction between epistemology (the domain of scientific operations that are justified by rational principles aiming at improving current knowledge) and methodology (the domain of scientific operations that are governed by rules, not all of which are grounded in rational, explicit principles). We illustrate this distinction and related claims by comparing how experiments and simulations are validated in evolutionary studies, a domain in which both laboratory experiments and computer simulations are relatively new but mutually reinforcing.

Keywords

Theory-ladenness · Holism of confirmation · Opacity · Duhem–Quine problem · Verification and Validation · Calibration · Benchmarking · Parameter tuning · Sensitivity analysis · Measurement errors · Numerical errors · Evolutionary studies · Richard Lenski’s Long-Term Experimental Evolution

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Sorbonne Université, Paris, France
  2. Université Catholique de Louvain, Louvain-la-Neuve, Belgium