
Exploring Feasibility of Software Defects Orthogonal Classification

  • Conference paper
Software and Data Technologies (ICSOFT 2006)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 10))


Abstract

Defect categorization underlies much of the work on software defect detection, on the assumption that different subjects assign the same category to the same defect. Because this assumption has been questioned, we decided to study the phenomenon with the aim of providing empirical evidence. Since defects can be categorized according to different criteria, and the experience of the professionals involved with a given criterion could affect the results, we further decided: (i) to focus on the IBM Orthogonal Defect Classification (ODC); (ii) to involve professionals only after stabilizing the process and materials with students. This paper is concerned with our basic experiment. We analyze a benchmark of more than two thousand data points, obtained from twenty-four code segments, each seeded with one defect, classified by one hundred twelve sophomores who were trained for six hours and then assigned to classify those defects in a controlled environment for three continuous hours. The focus is on discrepancy among categorizers, and on the orthogonality, affinity, effectiveness, and efficiency of the categorizations. Results show that: (i) training is necessary to achieve orthogonal and effective classifications and to obtain agreement between subjects; (ii) efficiency is five minutes per defect classification on average; (iii) there is affinity between some categories.
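The inter-categorizer agreement the abstract refers to is commonly quantified with Cohen's kappa [5], which corrects raw percent agreement between two raters for agreement expected by chance. The following sketch is purely illustrative: the ODC defect-type labels follow IBM's scheme, but the two subjects' classifications of six seeded defects are hypothetical, not data from the experiment.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' nominal classifications [5].

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the raters' marginal frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of defects given the same category.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c]
              for c in set(rater_a) | set(rater_b)) / (n * n)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical classifications of six seeded defects by two subjects,
# using ODC defect-type categories.
subject_1 = ["Assignment", "Checking", "Algorithm",
             "Checking", "Interface", "Timing"]
subject_2 = ["Assignment", "Algorithm", "Algorithm",
             "Checking", "Interface", "Checking"]
print(round(cohen_kappa(subject_1, subject_2), 3))  # → 0.571
```

Here the two subjects agree on four of six defects (p_o = 0.667) against a chance agreement of 0.222, giving kappa ≈ 0.571; values near 1 indicate strong agreement, values near 0 indicate agreement no better than chance.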


References

  1. Abdelnabi, Z., Cantone, G., Ciolkowski, M., Rombach, D.: Comparing Code Reading Techniques Applied to Object-oriented Software Frameworks with regard to Effectiveness and Defect Detection Rate. In: Proceedings of the 2004 International Symposium on Empirical Software Engineering, Redondo Beach (CA) (2004)


  2. Basili, V.R., Caldiera, G., Rombach, H.D.: Goal Question Metric Paradigm. In: Marciniak, J.J. (ed.) Encyclopaedia of Software Engineering, vol. 1, pp. 528–532. John Wiley & Sons, Chichester (1994)


  3. Basili, V.R., Selby, R.: Comparing the Effectiveness of Software Testing Strategies. IEEE Transactions on Software Engineering, 1278–1296 (December 1987)


  4. Cantone, G., Abdulnabi, Z.A., Lomartire, A., Calavaro, G.: Effectiveness of Code Reading and Functional Testing with Event-Driven Object-Oriented Software. In: Conradi, R., Wang, A.I. (eds.) ESERNET 2001. LNCS, vol. 2765, pp. 166–193. Springer, Heidelberg (2003)


  5. Cohen, J.: A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement 20, 37–46 (1960)


  6. Durães, J., Madeira, H.: Definition of Software Fault Emulation Operators: a Field Data Study. In: Proc. of 2003 International Conference on Dependable Systems and Networks (2003)


  7. El Emam, K., Wieczorek, I.: The Repeatability of Code Defect Classifications. In: Proceedings of International Symposium on Software Reliability Engineering, pp. 322–333 (1998)


  8. Henningsson, K., Wohlin, C.: Assuring Fault Classification Agreement – An Empirical Evaluation. In: Proceedings of the 2004 International Symposium on Empirical Software Engineering (2004)


  9. Juristo, N., Vegas, S.: Functional Testing, Structural Testing, and Code Reading: What Fault Type Do They Each Detect? In: Conradi, R., Wang, A.I. (eds.) ESERNET 2001. LNCS, vol. 2765, pp. 208–232. Springer, Heidelberg (2003)


  10. Myers, G.J.: A Controlled Experiment in Program Testing and Code Walkthroughs/Reviews. Communications of ACM 21(9), 760–768 (1978)


  11. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation in Software Engineering: An Introduction. The International Series in Software Engineering. Kluwer Academic Publishers (2000)


  12. IBM: Details of ODC v 5.11, www.research.ibm.com/softeng/ODC/DETODC.HTM (last accessed May 02, 2006)

  13. IBM: ODC Frequently Asked Questions, www.research.ibm.com/softeng/ODC/FAQ.HTM (last accessed May 02, 2006)



Editor information

Joaquim Filipe, Boris Shishkov, Markus Helfert


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Falessi, D., Cantone, G. (2008). Exploring Feasibility of Software Defects Orthogonal Classification. In: Filipe, J., Shishkov, B., Helfert, M. (eds) Software and Data Technologies. ICSOFT 2006. Communications in Computer and Information Science, vol 10. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-70621-2_12


  • DOI: https://doi.org/10.1007/978-3-540-70621-2_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-70619-9

  • Online ISBN: 978-3-540-70621-2

  • eBook Packages: Computer Science (R0)
