Understanding Error Rates in Software Engineering: Conceptual, Empirical, and Experimental Approaches
Software-intensive systems are ubiquitous in the industrialized world. The reliability of software bears both on how we understand scientific knowledge produced using software-intensive systems and on the ethical and political status of technology. The reliability of a software system is largely determined by the distribution of errors in that system and by the consequences of those errors in its usage. We select a taxonomy of software error types from the literature on empirically observed software errors and compare it to Giuseppe Primiero's taxonomy of error in information systems (Minds and Machines, 24, 249–273, 2014). Because Primiero's taxonomy is articulated in terms of a coherent, explicit model of computation and is more fine-grained than the empirical taxonomy we select, we might expect it to provide better insight than the empirical taxonomy into how to reduce the frequency of software errors. Whether using one software error taxonomy can help to reduce the frequency of software errors better than another is, however, ultimately an empirical question.
Keywords: Software error · Error · Philosophy of software engineering · Computer science education
We wish to thank the anonymous reviewers of this paper for insightful, constructive suggestions. For any errors that remain, we are solely responsible.
John Symons’ work is supported by The National Security Agency through the Science of Security initiative contract no. H98230-18-D-0009.
- Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1983). Data structures and algorithms. Addison-Wesley.
- Aho, A. V., Lam, M. S., Sethi, R., & Ullman, J. D. (2006). Compilers: principles, techniques, and tools (2nd ed.). Addison-Wesley.
- Boehm, B. W. (1981). Software engineering economics. Prentice Hall.
- Boehm, B. W., et al. (2000). Software cost estimation with COCOMO II. Prentice Hall.
- Bowen, J. B. (1980). Standard error classification to support software reliability assessment. Proceedings of the National Computer Conference, pp. 697–705.
- Charette, R. N. (2005). Why software fails. IEEE Spectrum.
- Crow, E. L., Davis, F. A., & Maxfield, M. W. (1955). Statistics manual. NAVORD Report 3369 NOTS 948. U.S. Naval Ordnance Test Station, China Lake, CA. Dover reprint, 1960.
- DeMarco, T. (1982). Controlling software projects. Yourdon Press.
- Devore, J. L. (1995). Probability and statistics for engineering and the sciences (4th ed.). Duxbury Press.
- Florac, W. A., & Carleton, A. D. (1999). Measuring the software process: statistical process control for software process improvement. Addison-Wesley.
- Grady, R. B. (1992). Practical software metrics for project management and process improvement. Prentice-Hall.
- Hogg, R. V., McKean, J. W., & Craig, A. T. (2005). Introduction to mathematical statistics (6th ed.). Prentice Hall.
- Humphrey, W. S. (2008). The software quality challenge. CrossTalk: The Journal of Defense Software Engineering.
- Humphreys, P. (2004). Extending ourselves: computational science, empiricism, and scientific method. Oxford University Press.
- IEEE. (2000). IEEE-STD-1471-2000. Recommended practice for architectural description of software-intensive systems. http://standards.IEEE.org. Accessed 10 November 2018.
- ISO/IEC. (2017). 12207:2017. Systems and software engineering — software life cycle processes.
- Jones, T. C. (1981). Program quality and programmer productivity: a survey of the state of the art. ASM Lectures.
- Jones, C. (2008). Applied software measurement: global analysis of productivity and quality (3rd ed.). McGraw-Hill.
- Kanji, G. K. (2006). 100 statistical tests. Sage Publishing.
- Malkawi, M. (2014). Empirical data and analysis of defects in operating systems kernels. Proceedings of the 24th IBIMA conference. Milan, Italy, 6–7 November 2014. Available online at https://www.researchgate.net/publication/281278326_Empirical_Data_and_Analysis_of_Defects_in_Operating_Systems_Kernels. Accessed 8 June 2018.
- Mantis. (2019). MantisBT. https://mantisbt.org/. Accessed 29 January 2019.
- McGarry, J., et al. (2002). Practical software measurement: objective information for decision makers. Addison-Wesley.
- Micro Focus. (2019). HP ALM/Quality Center. https://www.microfocus.com/en-us/products/application-lifecycle-management/overview. Accessed 29 January 2019.
- Motor Industry Software Reliability Association (MISRA). (2013). Guidelines for the use of the C language in critical systems. https://www.misra.org.uk/. Accessed 30 June 2018.
- Petricek, T. (2017). Miscomputation in software: learning to live with errors. The Art, Science, and Engineering of Programming, Vol. 1, Issue 2, Article 14. https://arxiv.org/ftp/arxiv/papers/1703/1703.10863.pdf. Accessed 29 January 2019.
- Phipps, G. (1999). Comparing observed bug and productivity rates for Java and C++. Software: Practice and Experience, 29, 345–358.
- Piccinini, G. (2015). Physical computation: a mechanistic account. Oxford University Press.
- Plutora, Inc. (2018). Plutora. https://www.plutora.com/. Accessed 29 January 2019.
- Rescher, N. (2009). Error: (On our predicament when things go wrong). University of Pittsburgh Press.
- Royce, W. W. (1970). Managing the development of large software systems: concepts and techniques. Proceedings, IEEE WESCON, August 1970.
- Stutzke, R. D. (2005). Estimating software-intensive systems: projects, products, and processes. Addison-Wesley.
- Symons, J. F., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 1–24.
- Symons, J. F., & Horner, J. K. (2014). Software intensive science. Philosophy and Technology. https://doi.org/10.1007/s13347-014-0163.
- Symons, J. F., & Horner, J. K. (2017). Software error as a limit to inquiry for finite agents: challenges for the post-human scientist. In T. Powers (Ed.), Philosophy and computing. Philosophical Studies Series, vol. 128 (pp. 85–97). Springer.
- Thayer, T. A., Lipow, M., & Nelson, E. C. (1978). Software reliability: a study of large project reality. North-Holland.
- Thielen, B. J. (1978). SURTASS code review statistics. Hughes-Fullerton IDC 78/1720.1004.
- University of Virginia, Department of Computer Science (2018). Secure Programming Lint (splint), v3.1.2. http://www.splint.org/. Accessed 2 July 2018.