Search-Based Testing of Procedural Programs: Iterative Single-Target or Multi-target Approach?

  • Simone Scalabrino
  • Giovanni Grano
  • Dario Di Nucci
  • Rocco Oliveto
  • Andrea De Lucia
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9962)


In the context of testing Object-Oriented (OO) software systems, researchers have recently proposed search-based approaches that automatically generate whole test suites by simultaneously considering all targets (e.g., branches) defined by the coverage criterion (multi-target approach). Whole-suite approaches aim to avoid the waste of search budget that iterative single-target approaches (which generate test cases for one target at a time) can incur in the presence of infeasible targets. However, whole-suite approaches have not been implemented and evaluated in the context of procedural programs. In this paper we present OCELOT (Optimal Coverage sEarch-based tooL for sOftware Testing), a test data generation tool for C programs that implements both a state-of-the-art whole-suite approach and an iterative single-target approach designed for a parsimonious use of the search budget. We also present an empirical study on 35 open-source C programs comparing the two approaches implemented in OCELOT. The results indicate that the iterative single-target approach is more efficient, while achieving the same or an even higher level of coverage than the whole-suite approach.
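To make the notion of a coverage "target" concrete: search-based branch-coverage tools typically guide the search with a fitness combining the *approach level* (how many control dependencies away execution diverged from the target branch) and a *normalized branch distance* (how close the decisive predicate was to flipping). The sketch below is a minimal, hedged illustration of that standard fitness scheme, using the d/(d+1) normalization; the function names are illustrative and do not reflect OCELOT's actual implementation.

```python
def normalized_branch_distance(d):
    # Standard normalization: maps a raw distance d >= 0 into [0, 1),
    # so branch distance never outweighs one level of approach.
    return d / (d + 1.0)

def branch_distance_le(lhs, rhs):
    # Raw distance for making the predicate "lhs <= rhs" evaluate to true:
    # zero when already true, otherwise how far lhs overshoots rhs.
    return 0.0 if lhs <= rhs else lhs - rhs

def fitness(approach_level, raw_distance):
    # Lower is better; 0.0 means the target branch was covered.
    return approach_level + normalized_branch_distance(raw_distance)
```

For example, an execution that reaches the decisive predicate (approach level 0) with raw distance 0 has fitness 0.0, i.e., the branch is covered.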


Keywords: Test data generation · Search-based software testing · Genetic algorithm
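The "parsimonious use of the search budget" mentioned in the abstract can be pictured as a loop over uncovered targets in which each target receives only a slice of the remaining budget, unused budget is carried over, and targets covered collaterally by earlier tests are skipped. The following is a hypothetical sketch of such a loop, not OCELOT's actual algorithm; `search_for` stands in for any single-target search (e.g., a genetic algorithm run) that returns a test, the set of targets it covers, and the budget it consumed.

```python
def iterative_single_target(targets, total_budget, search_for):
    # Budget-parsimonious single-target loop (illustrative sketch):
    # each uncovered target gets a slice of the remaining budget;
    # leftover budget and collateral coverage benefit later targets.
    suite, uncovered = [], set(targets)
    remaining = total_budget
    for target in targets:
        if target not in uncovered or remaining <= 0:
            continue  # already covered collaterally, or budget exhausted
        slice_budget = max(1, remaining // len(uncovered))
        test, covered, used = search_for(target, slice_budget)
        remaining -= used
        if test is not None:
            suite.append(test)
            uncovered -= covered  # collateral coverage counts too
    return suite, uncovered
```

The design point this illustrates is exactly the trade-off studied in the paper: budget not consumed on easy targets remains available for hard ones, whereas a fixed per-target allocation would waste it.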



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Simone Scalabrino (1)
  • Giovanni Grano (2)
  • Dario Di Nucci (2)
  • Rocco Oliveto (1)
  • Andrea De Lucia (2)
  1. University of Molise, Campobasso, Italy
  2. University of Salerno, Salerno, Italy
