Combining Multiple Coverage Criteria in Search-Based Unit Test Generation

  • José Miguel Rojas
  • José Campos
  • Mattia Vivanti
  • Gordon Fraser
  • Andrea Arcuri
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9275)

Abstract

Automated test generation techniques typically aim at maximising coverage of well-established structural criteria such as statement or branch coverage. In practice, generating tests for only one specific criterion may not be sufficient when testing object-oriented classes, as standard structural coverage criteria do not fully capture the properties developers may desire of their unit test suites. For example, covering a large number of statements could easily be achieved by just calling the main method of a class; yet, a good unit test suite would consist of smaller unit tests invoking individual methods and checking return values and states with test assertions. There are several different properties that test suites should exhibit, and a search-based test generator could easily be extended with additional fitness functions to capture these properties.

However, does search-based testing scale to combinations of multiple criteria, and what is the effect on the size and coverage of the resulting test suites? To answer these questions, we extended the EvoSuite unit test generation tool to support combinations of multiple test criteria, defined and implemented several different criteria, and applied combinations of criteria to a sample of 650 open source Java classes. Our experiments suggest that optimising for several criteria at the same time is feasible without increasing computational costs: when combining nine different criteria, we observed an average decrease of only 0.4% for the constituent coverage criteria, while test suite size may grow by up to 70%.
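The abstract describes extending a search-based generator with additional fitness functions and optimising several criteria at once. The sketch below illustrates one straightforward way this can be done: summing normalised, to-be-minimised per-criterion fitness values into a single objective. It is a minimal illustration only; the interface and class names (FitnessFunction, TestSuiteChromosome, CombinedFitness) are hypothetical placeholders and do not reflect EvoSuite's actual API.

```java
import java.util.List;

// Illustrative sketch: combining several coverage criteria into one
// minimisation objective for whole-test-suite optimisation.
// All names here are placeholders, not EvoSuite's real classes.
interface FitnessFunction {
    // Returns a normalised minimisation value: 0 means the criterion is fully covered.
    double getFitness(TestSuiteChromosome suite);
}

class TestSuiteChromosome {
    // Candidate test suite evolved by the genetic algorithm (details omitted).
}

class CombinedFitness implements FitnessFunction {
    private final List<FitnessFunction> criteria;

    CombinedFitness(List<FitnessFunction> criteria) {
        this.criteria = criteria;
    }

    @Override
    public double getFitness(TestSuiteChromosome suite) {
        // Sum the individual values: because each constituent is normalised
        // and minimised, the combined value reaches 0 only when every
        // criterion is fully satisfied.
        double total = 0.0;
        for (FitnessFunction criterion : criteria) {
            total += criterion.getFitness(suite);
        }
        return total;
    }
}
```

A simple sum treats all criteria as equally important; a weighted combination would be an obvious variation if some criteria should dominate the search.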

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • José Miguel Rojas (1)
  • José Campos (1)
  • Mattia Vivanti (2)
  • Gordon Fraser (1)
  • Andrea Arcuri (3, 4)

  1. Department of Computer Science, The University of Sheffield, Sheffield, UK
  2. Università della Svizzera italiana (USI), Lugano, Switzerland
  3. Scienta, Oslo, Norway
  4. University of Luxembourg, Luxembourg City, Luxembourg