Exploring Feasibility of Software Defects Orthogonal Classification

  • Davide Falessi
  • Giovanni Cantone
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 10)

Abstract

Defect categorization underlies much of the work on software defect detection. The implicit assumption is that different subjects assign the same category to the same defect. Because this assumption has been questioned, we decided to study the phenomenon with the aim of providing empirical evidence. Because defects can be categorized using different criteria, and the experience of the involved professionals with a given criterion could affect the results, we further decided: (i) to focus on the IBM Orthogonal Defect Classification (ODC); (ii) to involve professionals only after having stabilized the process and materials with students. This paper is concerned with our basic experiment. We analyze a benchmark of more than two thousand data points, obtained from twenty-four code segments, each seeded with one defect, classified by one hundred twelve sophomores who were trained for six hours and then assigned to classify those defects in a controlled environment over three consecutive hours. The focus is on discrepancy among categorizers, and on the orthogonality, affinity, effectiveness, and efficiency of the categorizations. Results show that: (i) training is necessary to achieve orthogonal and effective classifications and to obtain agreement between subjects; (ii) efficiency is five minutes per defect classification on average; (iii) there is affinity between some categories.
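The abstract does not spell out how agreement between categorizers is measured, but Cohen's kappa (Cohen 1960, cited in the references) is the standard chance-corrected agreement statistic for two raters assigning nominal categories such as ODC defect types. A minimal sketch; the rater labels below are hypothetical illustrations, not data from the experiment:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning nominal categories to the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ODC defect-type labels from two student categorizers.
rater1 = ["Assignment", "Checking", "Algorithm", "Assignment", "Interface", "Checking"]
rater2 = ["Assignment", "Checking", "Checking", "Assignment", "Interface", "Algorithm"]
print(round(cohens_kappa(rater1, rater2), 3))  # prints 0.538
```

Kappa is 1 for perfect agreement and 0 when observed agreement equals what chance alone would produce, which is why it is preferred over raw percent agreement when some defect categories are far more frequent than others.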

Keywords

Software engineering · Experimental software engineering · Orthogonal Defect Classification · Defect class affinity · Fault detection · Effectiveness · Efficiency

References

  1. Abdelnabi, Z., Cantone, G., Ciolkowski, M., Rombach, D.: Comparing Code Reading Techniques Applied to Object-oriented Software Frameworks with regard to Effectiveness and Defect Detection Rate. In: Proceedings of the 2004 International Symposium on Empirical Software Engineering, Redondo Beach, CA (2004)
  2. Basili, V.R., Caldiera, G., Rombach, H.D.: Goal Question Metric Paradigm. In: Marciniak, J.J. (ed.) Encyclopaedia of Software Engineering, vol. 1, pp. 528–532. John Wiley & Sons, Chichester (1994)
  3. Basili, V.R., Selby, R.: Comparing the Effectiveness of Software Testing Strategies. IEEE Transactions on Software Engineering, December 1987, pp. 1278–1296. CS Press (1987)
  4. Cantone, G., Abdulnabi, Z.A., Lomartire, A., Calavaro, G.: Effectiveness of Code Reading and Functional Testing with Event-Driven Object-Oriented Software. In: Conradi, R., Wang, A.I. (eds.) ESERNET 2001. LNCS, vol. 2765, pp. 166–193. Springer, Heidelberg (2003)
  5. Cohen, J.: A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement 20, 37–46 (1960)
  6. Durães, J., Madeira, H.: Definition of Software Fault Emulation Operators: a Field Data Study. In: Proceedings of the 2003 International Conference on Dependable Systems and Networks (2003)
  7. El Emam, K., Wieczorek, I.: The Repeatability of Code Defect Classifications. In: Proceedings of the International Symposium on Software Reliability Engineering, pp. 322–333 (1998)
  8. Henningsson, K., Wohlin, C.: Assuring Fault Classification Agreement – An Empirical Evaluation. In: Proceedings of the 2004 International Symposium on Empirical Software Engineering (2004)
  9. Juristo, N., Vegas, S.: Functional Testing, Structural Testing, and Code Reading: What Fault Type Do They Each Detect? In: Conradi, R., Wang, A.I. (eds.) ESERNET 2001. LNCS, vol. 2765, pp. 208–232. Springer, Heidelberg (2003)
  10. Myers, G.J.: A Controlled Experiment in Program Testing and Code Walkthroughs/Reviews. Communications of the ACM 21(9), 760–768 (1978)
  11. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation in Software Engineering: An Introduction. The International Series in Software Engineering (2000)
  12. IBM: Details of ODC v 5.11 (last accessed May 02, 2006), www.research.ibm.com/softeng/ODC/DETODC.HTM
  13. IBM: ODC Frequently Asked Questions (last accessed May 02, 2006), www.research.ibm.com/softeng/ODC/FAQ.HTM

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Davide Falessi 1
  • Giovanni Cantone 1
  1. Univ. of Roma “Tor Vergata”, DISP, Rome, Italy