Generating Effective Test Suites by Combining Coverage Criteria

  • Gregory Gay
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10452)

Abstract

A number of criteria have been proposed to judge test suite adequacy. While search-based test generation has become adept at achieving high coverage of these criteria, the suites it produces are still often ineffective at detecting faults. Efficacy may be limited by the single-minded application of one criterion at a time when generating suites, in sharp contrast to human testers, who explore multiple testing strategies simultaneously. We hypothesize that automated generation can be improved by selecting and simultaneously exploring multiple criteria.
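
One way to explore several criteria simultaneously is to fold them into a single search objective, for example by summing normalized per-criterion fitness values in a whole-suite genetic algorithm. The Java sketch below is illustrative only: the names TestSuite, CoverageCriterion, and CombinedFitness are hypothetical, and it assumes each criterion reports a fitness in [0, 1] where 0.0 means the criterion is fully satisfied.

    import java.util.List;

    /** Placeholder for a candidate test suite evolved by the search. */
    class TestSuite { }

    /** Hypothetical interface: scores a suite in [0, 1]; 0.0 = fully satisfied. */
    interface CoverageCriterion {
        double fitness(TestSuite suite);
    }

    /** Sums normalized per-criterion fitness values into one minimization objective. */
    class CombinedFitness {
        private final List<CoverageCriterion> criteria;

        CombinedFitness(List<CoverageCriterion> criteria) {
            this.criteria = criteria;
        }

        /** Lower is better; 0.0 means every selected criterion is fully covered. */
        double evaluate(TestSuite suite) {
            double total = 0.0;
            for (CoverageCriterion c : criteria) {
                total += c.fitness(suite);
            }
            return total;
        }
    }

Because every criterion contributes to the same scalar objective, the search pursues all selected criteria at once rather than satisfying one and then moving to the next.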

To test this hypothesis, we generated multi-criteria test suites and measured their efficacy against the Defects4J fault database. We found that multi-criteria suites can be up to 31.15% more effective at detecting complex, real-world faults than suites generated to satisfy a single criterion, and up to 70.17% more effective than the default combination of all eight criteria. Given a fixed search budget, we recommend pairing a criterion focused on structural exploration, such as Branch Coverage, with targeted supplemental strategies aimed at the types of faults expected in the system under test. Our findings offer lessons to consider when selecting such combinations.
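
Continuing the sketch above (same imports and types), the recommended pairing corresponds to combining one structural objective with one fault-targeted supplement. The two criterion classes below are hypothetical stubs, not the paper's implementations: a real branch criterion would compute branch distances, and a real exception criterion would reward newly raised exceptions, which suits systems where crash-type faults are expected.

    /** Hypothetical stub; a real version would compute branch distances. */
    class BranchCriterion implements CoverageCriterion {
        public double fitness(TestSuite suite) { return 0.0; } // structural exploration score
    }

    /** Hypothetical stub; a real version would reward newly raised exceptions. */
    class ExceptionCriterion implements CoverageCriterion {
        public double fitness(TestSuite suite) { return 0.0; } // fault-targeted score
    }

    class PairedDemo {
        public static void main(String[] args) {
            // Pair one structural criterion with one targeted supplement,
            // rather than optimizing all eight default criteria under the same budget.
            CombinedFitness paired = new CombinedFitness(
                List.of(new BranchCriterion(), new ExceptionCriterion()));
            System.out.println(paired.evaluate(new TestSuite()));
        }
    }

Restricting the combination to two complementary criteria keeps the fixed search budget focused, whereas optimizing many criteria at once dilutes the effort spent on each.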

Keywords

Search-based test generation · Automated test generation · Adequacy criteria · Search-based software engineering

Acknowledgements

This work is supported by National Science Foundation grant CCF-1657299.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. University of South Carolina, Columbia, USA
