An Empirical Evaluation of Evolutionary Algorithms for Test Suite Generation

  • José Campos
  • Yan Ge
  • Gordon Fraser
  • Marcelo Eler
  • Andrea Arcuri
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10452)

Abstract

Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many aspects of these algorithms have been evaluated in detail (e.g., test length and techniques aimed at improving performance, such as seeding), the influence of the specific search algorithm has to date received less attention in the literature. As it is theoretically impossible to design an algorithm that is best on all possible problems, a common approach to software engineering problems is to first apply a Genetic Algorithm, and only afterwards refine it or compare it against other algorithms to determine whether any of them is better suited to the problem at hand. This is particularly important in test generation, since recent work suggests that random search may in practice be equally effective, whereas reformulating test generation as a many-objective problem seems to be more effective. To shed light on the influence of the search algorithm, we empirically evaluate six different algorithms on a selection of non-trivial open-source classes. Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that many-objective search is the most effective.
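
The "test archive" the abstract refers to can be illustrated with a minimal sketch. The Python model below is an assumption-laden toy, not EvoSuite's actual implementation: tests are abstracted to the set of coverage goals they happen to cover, selection is plain truncation, and all names (random_test, evolve, etc.) are hypothetical. It shows the key mechanism: every goal covered at any point during the search is recorded in an archive together with the covering test, so coverage is never lost even as the evolving suites drift.

```python
import random

# Illustrative assumptions (not EvoSuite's API): a "test" is modelled as the
# set of coverage goals it covers, a "suite" as a fixed-size list of tests.
NUM_GOALS = 50      # coverage goals (e.g. branches) of the class under test
SUITE_SIZE = 4      # tests per candidate suite
POP_SIZE = 20
GENERATIONS = 100

def random_test():
    # Assumption: each goal is covered independently with low probability.
    return frozenset(g for g in range(NUM_GOALS) if random.random() < 0.05)

def random_suite():
    return [random_test() for _ in range(SUITE_SIZE)]

def covered(suite):
    goals = set()
    for test in suite:
        goals |= test
    return goals

def fitness(suite, archive):
    # Minimise the number of goals covered neither by this suite nor by the
    # archive; already-archived goals no longer drive the search.
    return NUM_GOALS - len(covered(suite) | archive.keys())

def mutate(suite):
    child = list(suite)
    child[random.randrange(len(child))] = random_test()
    return child

def evolve():
    archive = {}  # goal -> first test observed to cover it
    population = [random_suite() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Record every newly covered goal before selection discards the suite.
        for suite in population:
            for test in suite:
                for goal in test:
                    archive.setdefault(goal, test)
        population.sort(key=lambda s: fitness(s, archive))
        parents = population[:POP_SIZE // 2]  # truncation selection (a simplification)
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return archive

if __name__ == "__main__":
    result = evolve()
    print(f"archive covers {len(result)} of {NUM_GOALS} goals")
```

Under this scheme the final test suite is assembled from the archived tests rather than taken from the best individual, which is one plausible reading of why archive-based evolutionary algorithms outperform random testing in the study.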

Acknowledgments

This work is supported by EPSRC project EP/N023978/1, São Paulo Research Foundation (FAPESP) grant 2015/26044-0, and the National Research Fund, Luxembourg (FNR/P10/03).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • José Campos (1)
  • Yan Ge (1)
  • Gordon Fraser (1)
  • Marcelo Eler (2)
  • Andrea Arcuri (3, 4)

  1. Department of Computer Science, The University of Sheffield, Sheffield, UK
  2. University of São Paulo, São Paulo, Brazil
  3. Westerdals Oslo ACT, Oslo, Norway
  4. University of Luxembourg, Luxembourg, Luxembourg
