Hybrid Is Better: Why and How Test Coverage and Software Reliability Can Benefit Each Other

  • Antonia Bertolino (corresponding author)
  • Breno Miranda
  • Roberto Pietrantuono
  • Stefano Russo
Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 372)

Abstract

Functional, structural, and operational testing are three broad categories of software testing methods, driven respectively by the product's functionality, the way it is implemented, and the way it is expected to be used. A large body of the software testing literature is devoted to evaluating and comparing test techniques in these categories. Although it appears reasonable to devise hybrid methods that merge their different strengths, since different techniques may complement each other by targeting different types of faults and/or exploiting different artifacts, we still lack clear guidelines on how best to combine them.

We discuss the differences and limitations of two popular testing approaches, namely coverage-driven and operational-profile testing, belonging to structural and operational testing, respectively. We show why and how test coverage and the operational profile can cross-fertilize each other, improving the effectiveness of structural testing or, conversely, the reliability achievable through operational testing.
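
To make the idea concrete, here is a minimal Python sketch (not the authors' algorithm; all operations, test names, and coverage data are hypothetical) of one way coverage and an operational profile can be combined: each candidate test is weighted by its operation's occurrence probability times the amount of still-uncovered code it would add.

```python
import random

# Hypothetical operational profile: probability that a user invokes each operation.
profile = {"login": 0.5, "search": 0.3, "checkout": 0.2}

# Hypothetical mapping from operations to the available test cases.
tests_by_operation = {
    "login": ["t1", "t2"],
    "search": ["t3", "t4"],
    "checkout": ["t5"],
}

# Hypothetical (estimated) branch coverage of each test case.
coverage_of = {
    "t1": {"b1", "b2"},
    "t2": {"b2", "b3"},
    "t3": {"b4"},
    "t4": {"b4", "b5"},
    "t5": {"b6"},
}

covered = set()  # branches exercised so far

def pick_next_test():
    """Select a test at random, weighting each candidate by its operation's
    profile probability times the new coverage it would contribute
    (+1 keeps the weight positive once a test adds no new coverage)."""
    candidates = [
        (test, profile[op] * (len(coverage_of[test] - covered) + 1))
        for op, tests in tests_by_operation.items()
        for test in tests
    ]
    tests, weights = zip(*candidates)
    return random.choices(tests, weights=weights, k=1)[0]

for _ in range(5):
    chosen = pick_next_test()
    covered |= coverage_of[chosen]
    print(chosen, "-> covered:", sorted(covered))
```

Keeping the baseline weight of 1 ensures that frequently used operations continue to be exercised even after their code is covered, so the selection degenerates gracefully toward plain operational-profile testing as coverage saturates.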

Keywords

Software testing · Reliability · Structural testing · Operational testing

Acknowledgements

This work has been partially supported by the PRIN 2015 project “GAUSS” funded by MIUR. B. Miranda gratefully acknowledges the postdoctoral fellowship jointly sponsored by CAPES and FACEPE (APQ-0826-1.03/16; BCT-0204-1.03/17).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Antonia Bertolino (1), corresponding author
  • Breno Miranda (2)
  • Roberto Pietrantuono (3)
  • Stefano Russo (3)

  1. ISTI - CNR, Pisa, Italy
  2. Federal University of Pernambuco, Recife, Brazil
  3. Università degli Studi di Napoli Federico II, Napoli, Italy
