The Multidimensional Epistemology of Computer Simulations: Novel Issues and the Need to Avoid the Drunkard’s Search Fallacy

  • Cyrille Imbert
Part of the Simulation Foundations, Methods and Applications book series (SFMA)


Abstract

Computers have transformed science and help to extend the boundaries of human knowledge. However, does the validation and diffusion of the results of computational inquiries and computer simulations call for a novel epistemological analysis? I discuss how the notion of novelty should be cashed out to investigate this issue meaningfully and argue that a consequentialist framework similar to the one used by Goldman to develop social epistemology can be helpful at this point. I highlight computational, mathematical, representational, and social stages on which the validity of simulation-based belief-generating processes hinges, and emphasize that their epistemic impact depends on the scientific practices that scientists adopt at these different stages. I further argue that epistemologists cannot ignore these partially novel issues, and conclude that the epistemology of computational inquiries needs to go beyond that of models and scientific representations, and has cognitive, social, and, in the present case, computational dimensions.


Keywords

Computer simulations · Novelty · Epistemology · Validation · Goldman · Consequentialism · Social epistemology · Random numbers · Computational values · Monte Carlo · Random number generator · Modeling norms · Modularity · Opacity · Verification · Reproducibility · Models · Invisibleness of failure · Naturalism



Acknowledgements

Past interactions with Roman Frigg, Stephan Hartmann, and Paul Humphreys about various issues discussed in this chapter were extremely stimulating, and I probably owe them more than I am aware of. I am also very grateful to the editors, whose comments contributed significantly to improving this chapter.


References

  1. Andersen, H. (2014). Epistemic dependence in contemporary science: Practices and malpractices. In L. Soler, S. Zwart, M. Lynch, & V. Israel-Jost (Eds.), Science after the practice turn in the philosophy, history, and social studies of science (pp. 161–173). Routledge Studies in the Philosophy of Science. London: Routledge.
  2. Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature News, 533(7604), 452.
  3. Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169(3), 557–574.
  4. Barberousse, A., & Imbert, C. (2013). New mathematics for old physics: The case of lattice fluids. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 44(3), 231–241.
  5. Barberousse, A., & Imbert, C. (2014). Recurring models and sensitivity to computational constraints. Monist, 97(3), 259–279.
  6. Beisbart, C. (2018). Are computer simulations experiments? And if not, how are they related to each other? European Journal for Philosophy of Science, 1–34.
  7. Bloor, D. (1976). Knowledge and social imagery (Routledge Direct Editions). London, Boston: Routledge & K. Paul.
  8. Collberg, C., & Proebsting, T. A. (2016). Repeatability in computer systems research. Communications of the ACM, 59(3), 62–69.
  9. Collins, H. M. (1985). Changing order: Replication and induction in scientific practice. London, Beverly Hills: Sage Publications.
  10. De Matteis, A., & Pagnutti, S. (1988). Parallelization of random number generators and long-range correlations. Numerische Mathematik, 53(5), 595–608.
  11. DeMillo, R. A., Lipton, R. J., & Sayward, F. G. (1978). Hints on test data selection: Help for the practicing programmer. Computer, 11(4), 34–41.
  12. DeMillo, R. A., Lipton, R. J., & Perlis, A. J. (1979). Social processes and proofs of theorems and programs. Communications of the ACM, 22(5), 271–280.
  13. Demmel, J., & Nguyen, H. D. (2013). Numerical reproducibility and accuracy at exascale. In 2013 IEEE 21st Symposium on Computer Arithmetic (pp. 235–237).
  14. Dijkstra, E. W. (1978). On a political pamphlet from the Middle Ages. ACM SIGSOFT Software Engineering Notes, 3(2), 14–16.
  15. Dijkstra, E. W. (1972). The humble programmer. Communications of the ACM, 15(10), 859–866.
  16. El Skaf, R., & Imbert, C. (2013). Unfolding in the empirical sciences: Experiments, thought experiments and computer simulations. Synthese, 190(16), 3451–3474.
  17. Fetzer, J. H. (1988). Program verification: The very idea. Communications of the ACM, 31(9), 1048–1063.
  18. Fillion, N., & Corless, R. M. (2014). On the epistemological analysis of modeling and computational error in the mathematical sciences. Synthese, 191(7), 1451–1467.
  19. Fomel, S., & Claerbout, J. F. (2009). Guest editors' introduction: Reproducible research. Computing in Science & Engineering, 11(1), 5–7.
  20. Fresco, N., & Primiero, G. (2013). Miscomputation. Philosophy & Technology, 26(3), 253–272.
  21. Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.
  22. Frigg, R., & Hartmann, S. (2017). Models in science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 ed.). Metaphysics Research Lab, Stanford University.
  23. Goldman, A. I. (1999). Knowledge in a social world. Oxford, New York: Clarendon Press, Oxford University Press.
  24. Hardwig, J. (1985). Epistemic dependence. Journal of Philosophy, 82(7), 335–349.
  25. Hastie, R., Penrod, S., & Pennington, N. (1983). Inside the jury. Cambridge, MA: Harvard University Press.
  26. Heinrich, J. (2004). Detecting a bad random number generator. CDF/MEMO/STATISTICS/PUBLIC/6850. University of Pennsylvania.
  27. Hellekalek, P. (1998). Don't trust parallel Monte Carlo. In Proceedings of the Parallel and Distributed Simulation Conference (pp. 82–89), Alberta, Canada.
  28. Hill, D. R. C. (2015). Parallel random numbers, simulation, and reproducible research. Computing in Science & Engineering, 17(4), 66–71.
  29. Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. New York: Oxford University Press.
  30. Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
  31. Imbert, C. (2014). The identification and prevention of bad practices and malpractices in science. In L. Soler, S. Zwart, M. Lynch, & V. Israel-Jost (Eds.), Science after the practice turn in the philosophy, history, and social studies of science (pp. 174–187). Routledge Studies in the Philosophy of Science. London: Routledge.
  32. Imbert, C. (2017). Computer simulations and computational models in science. In Springer handbook of model-based science (pp. 735–781). Springer Handbooks. Cham: Springer.
  33. Jones, D. (2010). Good practice in (pseudo) random number generation for bioinformatics applications. Technical report, UCL Bioinformatics Group.
  34. Kalven, H., Jr., & Zeisel, H. (1966). The American jury. London: University of Chicago Press.
  35. Kitcher, P. (1992). The naturalists return. The Philosophical Review, 101(1), 53–114.
  36. Kitcher, P. (1993). The advancement of science: Science without legend, objectivity without illusions. New York: Oxford University Press.
  37. Kitcher, P. (2002). The third way: Reflections on Helen Longino's The Fate of Knowledge. Philosophy of Science, 69(4), 549–559.
  38. Lenhard, J. (forthcoming). Holism, or the erosion of modularity: A methodological challenge for validation. Philosophy of Science.
  39. Lenhard, J., & Carrier, M. (2017). Mathematics as a tool: Tracing new roles of mathematics in the sciences.
  40. Matsumoto, M., Wada, I., Kuramoto, A., & Ashihara, H. (2007). Common defects in initialization of pseudorandom number generators. ACM Transactions on Modeling and Computer Simulation, 17(4).
  41. Rennie, D., Yank, V., & Emanuel, L. (1997). When authorship fails: A proposal to make contributors accountable. JAMA, 278(7), 579–585.
  42. Rennie, D., Flanagin, A., & Yank, V. (2000). The contributions of authors. JAMA, 284(1), 89–91.
  43. Shapiro, S. (1997). Splitting the difference: The historical necessity of synthesis in software engineering. IEEE Annals of the History of Computing, 19(1), 20–54.
  44. Simon, H. A. (1957). Models of man: Social and rational; mathematical essays on rational human behavior in a social setting. New York: Wiley.
  45. Solomon, M. (1994). Social empiricism. Noûs, 28(3), 325–343.
  46. Foote, B., & Yoder, J. (1999). Pattern languages of program design 4 (Software Patterns 4). Addison-Wesley.
  47. Wilson, G., Aruliah, D. A., Brown, C. T., Hong, N. P. C., Davis, M., et al. (2014). Best practices for scientific computing. PLOS Biology, 12(1).
  48. Wimsatt, W. C. (2007). Re-engineering philosophy for limited beings: Piecewise approximations to reality. Cambridge, MA: Harvard University Press.
  49. Winsberg, E. B. (2010). Science in the age of computer simulation. Chicago: University of Chicago Press.
  50. Woods, J. (2013). Errors of reasoning: Naturalizing the logic of inference. London: College Publications.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Archives Poincaré, CNRS, Université de Lorraine, Nancy Cedex, France
