Generating Effective Test Suites by Combining Coverage Criteria
A number of criteria have been proposed to judge test suite adequacy. While search-based test generation has become highly effective at achieving coverage of such criteria, the produced suites are still often ineffective at detecting faults. Efficacy may be limited by the single-minded application of one criterion at a time when generating suites—a sharp contrast to human testers, who simultaneously explore multiple testing strategies. We hypothesize that automated generation can be improved by selecting and simultaneously exploring multiple criteria.
To test this hypothesis, we have generated multi-criteria test suites and measured their efficacy against the Defects4J fault database. We have found that multi-criteria suites can be up to 31.15% more effective at detecting complex, real-world faults than suites generated to satisfy a single criterion, and 70.17% more effective than the default combination of all eight criteria. Given a fixed search budget, we recommend pairing a criterion focused on structural exploration—such as Branch Coverage—with targeted supplemental strategies aimed at the type of faults expected from the system under test. Our findings offer lessons to consider when selecting such combinations.
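The "simultaneous exploration" of multiple criteria described above is typically realized in whole-suite search-based generators by folding each criterion's fitness into a single objective that the search minimizes. The following is a minimal sketch of that idea, not the paper's implementation: the per-criterion fitness functions and the dictionary-based suite representation are hypothetical stand-ins, and each criterion's raw fitness is assumed to be normalized into [0, 1) before summing, so that no single criterion dominates the search.

```python
# Sketch of a combined multi-criteria search fitness (assumed design,
# not the paper's code). Each criterion yields a raw fitness >= 0
# (0 = fully satisfied); the search minimizes the sum of normalized values.

def normalize(raw):
    """Common branch-distance normalization: maps [0, inf) into [0, 1)."""
    return raw / (raw + 1.0)

def combined_fitness(suite, criteria):
    """Sum of normalized per-criterion fitness values; lower is better."""
    return sum(normalize(criterion(suite)) for criterion in criteria)

# Hypothetical per-criterion fitness functions for illustration only:
def branch_fitness(suite):
    # e.g., number of branches the suite has not yet covered
    return suite.get("uncovered_branches", 0)

def exception_fitness(suite):
    # e.g., number of declared exceptions the suite has not yet triggered
    return suite.get("untriggered_exceptions", 0)

suite = {"uncovered_branches": 2, "untriggered_exceptions": 1}
score = combined_fitness(suite, [branch_fitness, exception_fitness])
```

Under this scheme, a genetic algorithm would prefer suites with lower combined scores, so progress on any one criterion improves selection pressure without abandoning the others.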
Keywords: Search-based test generation · Automated test generation · Adequacy criteria · Search-based software engineering
This work is supported by National Science Foundation grant CCF-1657299.