
Effectiveness of Code Reading and Functional Testing with Event-Driven Object-Oriented Software

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2765)

Abstract

This chapter is concerned with experimental comparisons of code reading and functional testing (including fault identification) of concurrent event-driven Java software. Our initial hypothesis was that functional testing is more effective than code reading for concurrent event-driven OO software. A controlled experiment was initially conducted with sophomore students (inexperienced subjects) and subsequently replicated, with some changes, with junior and senior students (moderately experienced subjects). We also conducted a further replication with Master students, which is not considered in this chapter. The experiment goal was studied from different perspectives, including the effect of the techniques on different types of faults. The results can be summarized as follows. 1) In the initial, basic experiment, with inexperienced subjects and a strict inspection-time limit of two hours, there was no statistically significant difference between the techniques under consideration; the subjects' performance indicator was 62% for code reading and 75% for functional testing. 2) In the (first) replication, with moderately experienced subjects, again a strict inspection-time limit of two hours, and more than twice the number of seeded faults, there was again no statistically significant difference between the techniques; the subjects' performance indicator was 100% for code reading and 92% for functional testing, which indicates that the more experienced subjects were asking for more inspection time; nevertheless, functional testing performed much better than in the basic experiment. Computation faults were the most detectable by code reading, while control faults were the most detectable by functional testing. Moreover, moderately experienced subjects were more effective than inexperienced ones at detecting interface and event faults. Furthermore, moderately experienced functional testers detected many pre-existing (non-seeded) faults, whereas neither the inexperienced subjects nor the moderately experienced code readers detected any non-seeded faults.
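To make the fault taxonomy mentioned above concrete, the following minimal Java sketch marks where a computation fault, a control fault, and an interface/event fault could be seeded in a small event-driven program. The class, the listener interface, and the fault placements are invented for illustration only; they are not the chapter's actual experimental material, and the experiment's concurrent aspects are omitted for brevity.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical event-driven monitor used only to illustrate the fault categories discussed above. */
public class TemperatureMonitor {

    /** Event-handler interface: an interface fault would live in a mismatch between caller and handler. */
    public interface AlarmListener {
        void onAlarm(double reading);
    }

    private final List<AlarmListener> listeners = new ArrayList<>();
    private final double thresholdFahrenheit;

    public TemperatureMonitor(double thresholdFahrenheit) {
        this.thresholdFahrenheit = thresholdFahrenheit;
    }

    public void addListener(AlarmListener listener) {
        listeners.add(listener);
    }

    /** Feed one reading; notify listeners if the converted value exceeds the threshold. */
    public void accept(double celsius) {
        // Computation-fault site (hypothetical): truncating 9/5 to integer division here would
        // silently distort the value -- the kind of defect code readers detected most often.
        double fahrenheit = celsius * 9.0 / 5.0 + 32.0;

        // Control-fault site (hypothetical): writing ">=" instead of ">" changes when the event
        // fires -- the kind of defect functional testers detected most often, since it shows up
        // directly as wrong observable behaviour.
        if (fahrenheit > thresholdFahrenheit) {
            fireAlarm(fahrenheit);
        }
    }

    /** Event-fault site (hypothetical): notifying only the first listener, or passing the wrong
        argument, would surface only when several handlers are registered. */
    private void fireAlarm(double reading) {
        for (AlarmListener listener : listeners) {
            listener.onAlarm(reading);
        }
    }

    public static void main(String[] args) {
        TemperatureMonitor monitor = new TemperatureMonitor(100.0);
        monitor.addListener(r -> System.out.println("Alarm: " + r + " F"));
        monitor.accept(40.0); // 104 F -> alarm fires
        monitor.accept(35.0); // 95 F  -> no alarm
    }
}
```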





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Cantone, G., Abdulnabi, Z.A., Lomartire, A., Calavaro, G. (2003). Effectiveness of Code Reading and Functional Testing with Event-Driven Object-Oriented Software. In: Conradi, R., Wang, A.I. (eds) Empirical Methods and Studies in Software Engineering. Lecture Notes in Computer Science, vol 2765. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45143-3_10


  • DOI: https://doi.org/10.1007/978-3-540-45143-3_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40672-3

  • Online ISBN: 978-3-540-45143-3
