Empirical Software Engineering, Volume 18, Issue 4, pp 625–658

Using error abstraction and classification to improve requirement quality: conclusions from a family of four empirical studies


Abstract

Achieving high software quality is a primary concern for software development organizations. Researchers have developed many quality improvement methods that help developers detect faults early in the lifecycle. To address some of the limitations of fault-based quality improvement approaches, this paper describes an approach based on errors (i.e., the sources of faults). This research extends Lanubile et al.'s error abstraction process by providing a formal requirement error taxonomy to help developers identify both faults and errors. The taxonomy was derived from the software engineering and psychology literature. The error abstraction and classification process and the requirement error taxonomy are validated using a family of four empirical studies. The main conclusions derived from the four studies are: (1) the error abstraction and classification process is an effective approach for identifying faults; (2) the requirement error taxonomy is a useful addition to the error abstraction process; and (3) deriving requirement errors from cognitive psychology research is useful.
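The abstract summarizes the approach at a high level: inspectors first detect faults in a requirements document, then abstract each fault back to the human error that caused it, and finally classify that error against a requirement error taxonomy. As an illustration only, the following Python sketch shows one way such fault and error records could be represented and checked against a taxonomy; the class names, error types, and example faults here are invented for this sketch and are not taken from the paper's actual taxonomy.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical taxonomy structure: top-level error classes, each with
    # a few example error types (illustrative labels, not the paper's).
    TAXONOMY = {
        "people": ["communication", "domain knowledge", "application knowledge"],
        "process": ["inadequate method or tool", "elicitation", "management"],
        "documentation": ["organization", "specification ambiguity"],
    }

    @dataclass
    class Fault:
        identifier: str
        description: str   # what is wrong in the requirements document

    @dataclass
    class Error:
        description: str   # the underlying human mistake
        error_class: str   # one of the TAXONOMY keys
        error_type: str    # a type within that class
        faults: List[Fault] = field(default_factory=list)

    def is_classified(error: Error) -> bool:
        """True if the abstracted error maps onto the taxonomy."""
        return error.error_type in TAXONOMY.get(error.error_class, [])

    # Example: two omission faults traced back (abstracted) to one error.
    f1 = Fault("F1", "No response-time requirement for the login function")
    f2 = Fault("F2", "Missing limit on number of concurrent users")
    e1 = Error(
        description="Analyst lacked knowledge of the performance domain",
        error_class="people",
        error_type="domain knowledge",
        faults=[f1, f2],
    )
    assert is_classified(e1)
    print(f"1 error abstracted from {len(e1.faults)} faults, class = {e1.error_class}")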

Keywords

Software inspections · Error abstraction · Software engineering · Software quality · Empirical studies

Notes

Acknowledgements

We thank the study participants. We also thank Dr. Thomas Philip for providing access to his courses. We acknowledge the Empirical Software Engineering groups at MSU and NDSU for providing useful feedback on the study designs and data analysis. We thank Dr. Gary Bradshaw for his expertise on cognitive psychology. We thank Dr. Edward Allen and Dr. Guilherme Travassos for reviewing early drafts of this paper. We also thank the reviewers for their helpful comments.

References

  1. Basili VR, Green S, Laitenberger O, Lanubile F, Shull F, Sørumgård S, Zelkowitz MV (1996) The empirical investigation of perspective-based reading. Empir Software Eng: An International Journal 1(2):133–164
  2. Basili VR, Shull F, Lanubile F (1999) Building knowledge through families of experiments. IEEE Trans Software Eng 25(4):456–473
  3. Bland M (2000) An introduction to medical statistics, chapter 9, 3rd edn. Oxford University Press, New York. ISBN 0192632698
  4. Boehm B, Basili VR (2001) Software defect reduction top 10 list. Computer 34(1):135–137
  5. Card DN (1998) Learning from our mistakes with defect causal analysis. IEEE Softw 15(1):56–63
  6. Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. Erlbaum, Hillsdale
  7. Carver J (2003) The impact of background and experience on software inspections. PhD Thesis, Department of Computer Science, University of Maryland, College Park, Maryland
  8. Chaar JK, Halliday MJ, Bhandari IS, Chillarege R (1993) In-process evaluation for software inspection and test. IEEE Trans Software Eng 19(11):1055–1070
  9. Chillarege R, Bhandari IS, Chaar JK, Halliday MJ, Moebus DS, Ray BK, Wong MY (1992) Orthogonal defect classification: a concept for in-process measurements. IEEE Trans Software Eng 18(11):943–956
  10. Endres A, Rombach D (2003) A handbook of software and systems engineering, 1st edn. Pearson Addison Wesley, Harlow
  11. Field A (2007) Discovering statistics using SPSS, 2nd edn. SAGE Publications Ltd, London
  12. Florac W (1992) Software quality measurement: a framework for counting problems and defects. Technical Report CMU/SEI-92-TR-22, Software Engineering Institute
  13. Grady RB (1996) Software failure analysis for high-return process improvement. Hewlett-Packard J 47(4):15–24
  14. IEEE Std 610.12-1990 (1990) IEEE standard glossary of software engineering terminology
  15. Jacobs J, Moll JV, Krause P, Kusters R, Trienekens J, Brombacher A (2005) Exploring defect causes in products developed by virtual teams. J Inform Software Tech 47(6):399–410
  16. Kan SH, Basili VR, Shapiro LN (1994) Software quality: an overview from the perspective of total quality management. IBM Syst J 33(1):4–19
  17. Kitchenham B (2004) Procedures for performing systematic reviews. TR/SE-0401, Department of Computer Science, Keele University and National ICT Australia Ltd. http://www.elsevier.com/framework_products/promis_misc/inf-systrev.pdf
  18. Lanubile F, Shull F, Basili VR (1998) Experimenting with error abstraction in requirements documents. In: Proceedings of the Fifth International Software Metrics Symposium (METRICS '98), pp 114–121
  19. Lawrence CP, Kosuke I (2004) Design error classification and knowledge. J Knowl Manag Pract (May)
  20. Lezak M, Perry D, Stoll D (2000) A case study in root cause defect analysis. In: Proceedings of the 22nd International Conference on Software Engineering, Limerick, Ireland, pp 428–437
  21. Masuck C (2005) Incorporating a fault categorization and analysis process in the software build cycle. J Comput Sci Colleges 20(5):239–248
  22. Mays RG, Jones CL, Holloway GJ, Studinski DP (1990) Experiences with defect prevention. IBM Syst J 29(1):4–32
  23. Nakashima T, Oyama M, Hisada H, Ishii N (1999) Analysis of software bug causes and its prevention. J Inform Software Tech 41(15):1059–1068
  24. Norman DA (1981) Categorization of action slips. Psychol Rev 88:1–15
  25. Pfleeger SL, Atlee JM (2006) Software engineering: theory and practice, 3rd edn. Prentice Hall, Upper Saddle River
  26. Rasmussen J (1982) Human errors: a taxonomy for describing human malfunction in industrial installations. J Occup Accid 4:311–335
  27. Rasmussen J (1983) Skills, rules, knowledge: signals, signs and symbols, and other distinctions in human performance models. IEEE Trans Syst Man Cybern SMC-13:257–267
  28. Reason J (1990) Human error. Cambridge University Press, New York
  29. Sakthivel S (1991) A survey of requirements verification techniques. J Inf Technol 6:68–79
  30. Seaman CB (1999) Qualitative methods in empirical studies of software engineering. IEEE Trans Softw Eng 25(4):557–572
  31. Seaman CB, Basili VR (1997) An empirical study of communication in code inspections. In: Proceedings of the International Conference on Software Engineering, Boston, Massachusetts, May, pp 96–106
  32. Sommerville I (2007) Software engineering, 8th edn. Addison Wesley, Harlow
  33. Walia GS (2006a) Empirical validation of requirement error abstraction and classification: a multidisciplinary approach. M.S. Thesis, Department of Computer Science and Engineering, Mississippi State University, Starkville
  34. Walia GS, Carver J (2009) A systematic literature review to identify and classify requirement errors. J Inform Software Tech 51(7):1087–1109
  35. Walia GS, Carver J, Philip T (2006b) Requirement error abstraction and classification: an empirical study. In: Proceedings of the IEEE Symposium on Empirical Software Engineering, Brazil. ACM Press, pp 336–345
  36. Walia G, Carver J, Philip T (2007) Requirement error abstraction and classification: a control group replicated study. In: 18th IEEE International Symposium on Software Reliability Engineering, Trollhättan, Sweden

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  1. Department of Computer Science, North Dakota State University, Fargo, USA
  2. Department of Computer Science, University of Alabama, Tuscaloosa, USA
