Is Branch Coverage a Good Measure of Testing Effectiveness?

Chapter in: Empirical Software Engineering and Verification (LASER 2010, LASER 2009, LASER 2008)

Abstract

Most approaches to testing use branch coverage to decide on the quality of a given test suite. The intuition is that covering branches relates directly to uncovering faults. The empirical study reported here applied random testing to 14 Eiffel classes for a total of 2520 hours and recorded the number of uncovered faults and the branch coverage over time. For the tested classes: (1) random testing reaches 93% branch coverage, (2) it exercises almost the same set of branches every time, (3) it detects different faults from execution to execution, (4) during the first 10 minutes of testing, while branch coverage increases rapidly, there is a strong correlation between branch coverage and the number of uncovered faults, and (5) over 50% of the faults are detected at a time when branch coverage hardly changes, and the correlation between branch coverage and the number of uncovered faults is weak.

These results provide evidence that branch coverage is not a good stopping criterion for random testing. They also show that branch coverage is not a good indicator for the effectiveness of a test suite.
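
The measurement described above can be pictured with a small, self-contained sketch. The Python below is not the authors' Eiffel-based setup; it drives a hypothetical unit under test (buggy_clamp, with invented branch and fault labels) with random inputs, records cumulative branch coverage and the number of distinct faults detected after every test, and then compares the coverage/fault correlation in an early window, where coverage is still growing, with a late window, where it has plateaued.

# Minimal sketch (assumed setup, not the study's tool): simulate a random-testing
# run, track cumulative branch coverage and distinct faults, and compare the
# coverage/fault correlation in the early vs. the late phase of the run.
import random
from math import sqrt

BRANCHES = {"lt_low", "gt_high", "in_range", "neg_width"}  # hypothetical branch ids

def buggy_clamp(x, low, high, hit, faults):
    """Toy unit under test: clamp x into [low, high], seeded with two faults."""
    if high < low:
        hit.add("neg_width")
        faults.add("fault_swapped_bounds")      # fault 1: silently swaps instead of rejecting
        low, high = high, low
    if x < low:
        hit.add("lt_low")
        return low
    if x > high:
        hit.add("gt_high")
        if x > 10**6 and x % 997 == 0:
            faults.add("fault_rare_large_input")  # fault 2: rare, input-dependent
        return high
    hit.add("in_range")
    return x

def pearson(xs, ys):
    """Pearson correlation; NaN when one series is constant (e.g. coverage plateau)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sqrt(sum((a - mx) ** 2 for a in xs))
    sy = sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy) if sx and sy else float("nan")

def run(iterations=20000, seed=0):
    random.seed(seed)
    hit, faults = set(), set()
    coverage, fault_counts = [], []
    for _ in range(iterations):
        x = random.randint(-10**7, 10**7)
        low = random.randint(-100, 100)
        high = random.randint(-100, 100)
        buggy_clamp(x, low, high, hit, faults)
        coverage.append(len(hit) / len(BRANCHES))   # cumulative branch coverage
        fault_counts.append(len(faults))            # cumulative distinct faults
    cut = iterations // 10  # "early" phase: first 10% of the run
    print("final branch coverage:", coverage[-1])
    print("distinct faults found:", fault_counts[-1])
    print("early-phase correlation:", pearson(coverage[:cut], fault_counts[:cut]))
    print("late-phase correlation: ", pearson(coverage[cut:], fault_counts[cut:]))

if __name__ == "__main__":
    run()

On a real system the branch-hit bookkeeping would come from a coverage tool rather than hand-instrumented calls; the rest of the analysis is unchanged.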

Editor information

Bertrand Meyer, Martin Nordio

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Wei, Y., Meyer, B., Oriol, M. (2012). Is Branch Coverage a Good Measure of Testing Effectiveness? In: Meyer, B., Nordio, M. (eds.) Empirical Software Engineering and Verification. LASER 2010, LASER 2009, LASER 2008. Lecture Notes in Computer Science, vol. 7007. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25231-0_5

  • DOI: https://doi.org/10.1007/978-3-642-25231-0_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25230-3

  • Online ISBN: 978-3-642-25231-0
