Automated Software Engineering

Volume 24, Issue 3, pp 603–621

Analysing the fitness landscape of search-based software testing problems

Abstract

Search-based software testing automatically derives test inputs for a software system with the goal of improving various criteria, such as branch coverage. In many cases, evolutionary algorithms are implemented to find near-optimal test suites for software systems. The result of the search is usually reported without any indication of how successful the search has been. Fitness landscape characterisation can help us understand the search process and its probability of success. In this study, we recorded the information content, the negative slope coefficient and the number of improvements during the progress of a genetic algorithm within the EvoSuite framework. Correlating these metrics with branch and method coverage and with the fitness function values reveals that the problem formulation used in EvoSuite could be improved by revising the objective function. It also demonstrates that, given the current formulation, crossover offers no benefit to the search, as the most problematic landscape feature is not the number of local optima but the presence of many plateaus.
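As a rough illustration of one of the metrics named above, the information content of a fitness trajectory (in the sense of Vassilev et al.) can be sketched as follows: the fitness values observed along a walk are encoded as symbols for decrease, stagnation and increase, and the entropy (base 6, the number of distinct unequal symbol pairs) of consecutive unequal pairs is taken as the information content. This is our own minimal sketch, not the instrumentation used in the study; the function name and the `eps` sensitivity parameter are illustrative assumptions.

```python
import math

def information_content(fitnesses, eps=0.0):
    """Information content of a fitness walk (illustrative sketch).

    fitnesses: sequence of fitness values recorded along the search.
    eps: sensitivity threshold below which a change counts as stagnation.
    """
    # Encode each step as -1 (decrease), 0 (flat within eps) or 1 (increase).
    symbols = []
    for a, b in zip(fitnesses, fitnesses[1:]):
        d = b - a
        symbols.append(0 if abs(d) <= eps else (1 if d > 0 else -1))

    # Count consecutive pairs of *differing* symbols; equal pairs carry
    # no information about ruggedness in this measure.
    counts = {}
    for p, q in zip(symbols, symbols[1:]):
        if p != q:
            counts[(p, q)] = counts.get((p, q), 0) + 1

    total = len(symbols) - 1
    if total <= 0:
        return 0.0

    # Entropy over pair frequencies, base 6 (six possible unequal pairs),
    # so the result lies in [0, 1].
    h = 0.0
    for c in counts.values():
        prob = c / total
        h -= prob * math.log(prob, 6)
    return h
```

A perfectly flat walk (a plateau) yields an information content of 0, while a highly rugged, alternating walk yields a value closer to 1, which is how the measure separates plateau-dominated landscapes from multimodal ones.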

Keywords

Test data generation · Genetic algorithms · Fitness landscape characterisation

Notes

Acknowledgments

We would like to thank Gordon Fraser and his team, who developed EvoSuite, for making the source code available. We wish to acknowledge Monash University for the use of their Nimrod software in this work. The Nimrod project has been funded by the Australian Research Council and a number of Australian Government agencies, and was initially developed by the Distributed Systems Technology Centre. This research was supported under the Australian Research Council's Discovery Projects funding scheme (Project Number DE140100017).

Compliance with ethical standards

Conflicts of interest

The authors declare that they have no conflict of interest.

Ethical statement

This research does not involve human participants or animals.


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Faculty of Information Technology, Monash University, Clayton, Australia
  2. Faculty of Science, Engineering & Technology, Swinburne University of Technology, Hawthorn, Australia
  3. Institute of Software Technology, University of Stuttgart, Stuttgart, Germany