Philosophy & Technology, Volume 27, Issue 3, pp 461–477

Software Intensive Science

  • John Symons
  • Jack Horner
Research Article

Abstract

This paper argues that the difference between contemporary software intensive scientific practice and more traditional non-software intensive varieties results from the characteristically high conditionality of software. We explain why the path complexity of programs with high conditionality imposes limits on standard error correction techniques and why this matters. While it is possible, in general, to characterize the error distribution in inquiry that does not involve high conditionality, we cannot characterize the error distribution in inquiry that depends on software. Software intensive science presents distinctive error and uncertainty modalities that pose new challenges for the epistemology of science.
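To make the abstract's claim about path complexity concrete, consider a minimal sketch (our illustration, not code from the paper): assuming a program contains k independent two-way conditionals, it has 2^k distinct control-flow paths, so the path space quickly outruns any feasible test suite. The helper names below (path_count, enumerate_paths) are hypothetical, introduced only for this illustration.

    from itertools import product

    def path_count(num_conditionals):
        # Each independent two-way conditional doubles the number of
        # distinct control-flow paths: k conditionals yield 2**k paths.
        return 2 ** num_conditionals

    def enumerate_paths(num_conditionals):
        # One tuple of branch outcomes (True/False per test) per path.
        return list(product((False, True), repeat=num_conditionals))

    if __name__ == "__main__":
        print(enumerate_paths(2))  # the 4 paths of a 2-conditional program
        for k in (3, 10, 30, 100):
            print(f"{k} conditionals -> {path_count(k):,} paths")
        # Even at k = 100 there are roughly 1.3e30 paths, so exhaustive
        # path testing is infeasible for realistic scientific codes.

On this reading, standard testing samples a vanishing fraction of the path space, which is one way to understand the paper's claim that the error distribution of software intensive inquiry resists characterization.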

Keywords

Models · Simulations · Software · Epistemology · Complexity · Post-human science

Notes

Acknowledgments

This work benefited from discussions with Sam Arbesman, George Crawford, Paul Humphreys, and Tony Pawlicki. We are grateful to the reviewers of earlier versions of this paper for extensive and insightful criticisms. For any errors that remain, we blame the path complexity of our (biological) software.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Department of Philosophy, University of Kansas, Lawrence, USA
  2. Los Alamos, USA