
Diversity in Search-Based Unit Test Suite Generation

  • Nasser M. Albunian
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10452)

Abstract

Search-based unit test generation is often based on evolutionary algorithms. A lack of diversity in the population of an evolutionary algorithm may lead to premature convergence at local optima, which negatively affects the code coverage achieved in test suite generation. While methods to improve population diversity are well studied in the literature on genetic algorithms (GAs), little attention has so far been paid to diversity in search-based unit test generation. The aim of our research is to study the effects of population diversity on search-based unit test generation by applying different diversity maintenance and control techniques. As a first step towards understanding the influence of population diversity on test generation, we adapt diversity measurements based on phenotypic and genotypic representations to the search space of unit test suites.
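
The paper itself presents no code, but a minimal sketch can illustrate the two kinds of measurement the abstract refers to. In the Java sketch below, a phenotypic measure compares suites by their behaviour (mean pairwise Hamming distance between coverage bit vectors) and a genotypic measure compares them by their structure (mean pairwise Jaccard distance between suites abstracted as sets of statement strings). The class, the method names, and both representations are assumptions made for illustration; they are not taken from the paper or from EvoSuite.

    import java.util.BitSet;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative diversity measures over a population of test-suite
    // chromosomes. Representations and names are assumptions for this
    // sketch, not definitions taken from the paper or from EvoSuite.
    public final class DiversityMeasures {

        // Phenotypic diversity: mean pairwise Hamming distance between
        // coverage vectors, where bit i is set if a suite covers goal i
        // (e.g., a branch). Normalised to [0, 1] by the number of goals.
        public static double phenotypicDiversity(List<BitSet> coverage, int numGoals) {
            int n = coverage.size();
            if (n < 2 || numGoals == 0) return 0.0;
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    BitSet diff = (BitSet) coverage.get(i).clone();
                    diff.xor(coverage.get(j)); // bits where the two suites differ
                    sum += (double) diff.cardinality() / numGoals;
                }
            }
            return sum / (n * (n - 1) / 2.0); // average over all pairs
        }

        // Genotypic diversity: mean pairwise Jaccard distance between
        // suites abstracted as sets of statement strings.
        public static double genotypicDiversity(List<Set<String>> suites) {
            int n = suites.size();
            if (n < 2) return 0.0;
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    sum += jaccard(suites.get(i), suites.get(j));
                }
            }
            return sum / (n * (n - 1) / 2.0);
        }

        private static double jaccard(Set<String> a, Set<String> b) {
            if (a.isEmpty() && b.isEmpty()) return 0.0; // two empty suites are identical
            Set<String> union = new HashSet<>(a);
            union.addAll(b);
            Set<String> intersection = new HashSet<>(a);
            intersection.retainAll(b);
            return 1.0 - (double) intersection.size() / union.size();
        }
    }

Under these assumptions, a value near 0 signals a converged population, and a diversity maintenance technique would typically intervene, for instance by injecting fresh individuals, once the measure falls below some threshold.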

Keywords

Search-based test generation · Population diversity · Genetic algorithm

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. The University of Sheffield, Sheffield, UK
