
Understanding Error Rates in Software Engineering: Conceptual, Empirical, and Experimental Approaches

  • Jack K. Horner
  • John Symons
Commentary

Abstract

Software-intensive systems are ubiquitous in the industrialized world. The reliability of software has implications for how we understand scientific knowledge produced with software-intensive systems and for our understanding of the ethical and political status of technology. The reliability of a software system is largely determined by the distribution of errors in that system and by the consequences of those errors in its use. We select a taxonomy of software error types from the literature on empirically observed software errors and compare it to Giuseppe Primiero’s (2014, Minds and Machines 24: 249–273) taxonomy of error in information systems. Because Primiero’s taxonomy is articulated in terms of a coherent, explicit model of computation and is more fine-grained than the empirical taxonomy we select, we might expect it to offer better guidance for reducing the frequency of software errors than the empirical taxonomy does. Whether one software error taxonomy helps to reduce the frequency of software errors better than another is, however, ultimately an empirical question.
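
As an illustration of the kind of empirical comparison at issue, the following minimal sketch (in Python) tallies a set of hypothetical defect records under a coarse empirical taxonomy and under a finer-grained taxonomy, and computes the corresponding error rates per thousand lines of code (KLOC). The category labels, defect records, and code size below are assumptions introduced purely for illustration; they are not drawn from the paper or from any measured data.

    # Illustrative sketch only: all category labels, defect records, and the
    # KLOC figure are hypothetical and chosen purely for illustration.
    from collections import Counter

    # Each record: (defect id, coarse empirical category, fine-grained category)
    defects = [
        ("D1", "logic", "conceptual: mistaken algorithm"),
        ("D2", "logic", "material: wrong operator"),
        ("D3", "interface", "material: mismatched argument types"),
        ("D4", "data", "material: uninitialized variable"),
        ("D5", "logic", "conceptual: missing requirement"),
    ]

    KLOC = 12.4  # assumed system size in thousands of lines of code

    def rates(categories):
        """Defects per KLOC for each category in the given classification."""
        counts = Counter(categories)
        return {cat: round(n / KLOC, 3) for cat, n in counts.items()}

    coarse_rates = rates(c for _, c, _ in defects)
    fine_rates = rates(f for _, _, f in defects)

    print("Coarse taxonomy (defects/KLOC):", coarse_rates)
    print("Fine-grained taxonomy (defects/KLOC):", fine_rates)

Comparing how such rates change after remediation guided by each taxonomy would be one way of addressing the empirical question raised above.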

Keywords

Software error · Error · Philosophy of software engineering · Computer science education

Notes

Acknowledgments

We wish to thank the anonymous reviewers of this paper for insightful, constructive suggestions. For any errors that remain, we are solely responsible.

Funding Information

John Symons’ work is supported by The National Security Agency through the Science of Security initiative contract no. H98230-18-D-0009.

References

  1. Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1983). Data structures and algorithms. Addison-Wesley.
  2. Aho, A. V., Lam, M. S., Sethi, R., & Ullman, J. D. (2006). Compilers: principles, techniques, and tools. 2nd edition. Addison-Wesley.
  3. Banker, R. D., Datar, S. M., Kemerer, C. F., & Zweig, D. (2002). Software errors and software maintenance management. Information Technology and Management, 3, 25–41.
  4. Boehm, B. W. (1981). Software engineering economics. Prentice Hall.
  5. Boehm, B. W., et al. (2000). Software cost estimation with COCOMO II. Prentice Hall.
  6. Bowen, J. B. (1980). Standard error classification to support software reliability assessment. Proceedings of the National Computer Conference, pp. 697–705.
  7. Charette, R. N. (2005). Why software fails. IEEE Spectrum.
  8. Crow, E. L., Davis, F. A., & Maxfield, M. W. (1955). Statistics manual. NAVORD Report 3369, NOTS 948. U.S. Naval Ordnance Test Station, China Lake, CA. Dover reprint, 1960.
  9. DeMarco, T. (1982). Controlling software projects. Yourdon Press.
  10. Devore, J. L. (1995). Probability and statistics for engineering and the sciences. 4th edition. Duxbury Press.
  11. Florac, W. A., & Carleton, A. D. (1999). Measuring the software process: statistical process control for software process improvement. Addison-Wesley.
  12. Grady, R. B. (1992). Practical software metrics for project management and process improvement. Prentice Hall.
  13. Hatton, L. (1997). The T-experiments: errors in scientific software. IEEE Computational Science and Engineering, 4(2), 27–38.
  14. Hogg, R. V., McKean, J. W., & Craig, A. T. (2005). Introduction to mathematical statistics. 6th edition. Prentice Hall.
  15. Horner, J. K., & Symons, J. F. (2014). Reply to Primiero and Angius on software intensive science. Philosophy and Technology, 27, 491–494.
  16. Humphrey, W. S. (2008). The software quality challenge. Cross Talk: The Journal of Defense Systems Engineering.
  17. Humphreys, P. (2004). Extending ourselves: computational science, empiricism, and scientific method. Oxford University Press.
  18. IEEE. (2000). IEEE-STD-1471-2000. Recommended practice for architectural description of software-intensive systems. http://standards.IEEE.org. Accessed 10 November 2018.
  19. ISO/IEC. (2017). ISO/IEC 12207:2017. Systems and software engineering — software life cycle processes.
  20. Jones, T. C. (1978). Measuring programming quality and productivity. IBM Systems Journal, 17, 39–63.
  21. Jones, T. C. (1981). Program quality and programmer productivity: a survey of the state of the art. ASM Lectures.
  22. Jones, C. (2008). Applied software measurement: global analysis of productivity and quality. 3rd edition. McGraw-Hill.
  23. Kanji, G. K. (2006). 100 statistical tests. Sage Publications.
  24. Malkawi, M. (2014). Empirical data and analysis of defects in operating systems kernels. Proceedings of the 24th IBIMA Conference, Milan, Italy, 6–7 November 2014. https://www.researchgate.net/publication/281278326_Empirical_Data_and_Analysis_of_Defects_in_Operating_Systems_Kernels. Accessed 8 June 2018.
  25. Mantis. (2019). MantisBT. https://mantisbt.org/. Accessed 29 January 2019.
  26. McGarry, J., et al. (2002). Practical software measurement: objective information for decision makers. Addison-Wesley.
  27. Micro Focus. (2019). HP ALM/Quality Center. https://www.microfocus.com/en-us/products/application-lifecycle-management/overview. Accessed 29 January 2019.
  28. Motor Industry Software Reliability Association (MISRA). (2013). Guidelines for the use of the C language in critical systems. https://www.misra.org.uk/. Accessed 30 June 2018.
  29. Petricek, T. (2017). Miscomputation in software: learning to live with errors. The Art, Science, and Engineering of Programming, 1(2), Article 14. https://arxiv.org/ftp/arxiv/papers/1703/1703.10863.pdf. Accessed 29 January 2019.
  30. Phipps, G. (1999). Comparing observed bug and productivity rates for Java and C++. Journal of Software: Practice and Experience, 29, 345–358.
  31. Piccinini, G. (2015). Physical computation: a mechanistic account. Oxford University Press.
  32. Plutora, Inc. (2018). Plutora. https://www.plutora.com/. Accessed 29 January 2019.
  33. Primiero, G. (2014). A taxonomy of errors for information systems. Minds and Machines, 24, 249–273.
  34. Rescher, N. (2009). Error: (on our predicament when things go wrong). University of Pittsburgh Press.
  35. Royce, W. W. (1970). Managing the development of large software systems: concepts and techniques. Proceedings, WESCON, August 1970.
  36. Stutzke, R. D. (2005). Estimating software-intensive systems: projects, products, and processes. Addison-Wesley.
  37. Symons, J. F., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 1–24.
  38. Symons, J. F., & Horner, J. K. (2014). Software intensive science. Philosophy and Technology. https://doi.org/10.1007/s13347-014-0163.
  39. Symons, J. F., & Horner, J. K. (2017). Software error as a limit to inquiry for finite agents: challenges for the post-human scientist. In T. Powers (Ed.), Philosophy and computing. Philosophical Studies Series, vol. 128 (pp. 85–97). Springer.
  40. Thayer, T. A., Lipow, M., & Nelson, E. C. (1978). Software reliability: a study of large project reality. North-Holland.
  41. Thielen, B. J. (1978). SURTASS code review statistics. Hughes-Fullerton IDC 78/1720.1004.
  42. University of Virginia, Department of Computer Science. (2018). Secure Programming Lint (splint), v3.1.2. http://www.splint.org/. Accessed 2 July 2018.

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. University of Kansas, Lawrence, USA
