Incremental Control Dependency Frontier Exploration for Many-Criteria Test Case Generation

  • Annibale Panichella
  • Fitsum Meshesha Kifetew
  • Paolo Tonella
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11036)

Abstract

Several criteria have been proposed over the years for measuring test suite adequacy. Each criterion can be converted into a specific objective function to optimize with search-based techniques, in an attempt to generate test suites achieving the highest possible coverage for that criterion. Recent work has tried to optimize for multiple criteria at once by constructing a single objective function as a weighted sum of the objective functions of the individual criteria. However, this solution suffers from the problem of sum scalarization: differences along the various dimensions being optimized are lost when those dimensions are projected onto a single value. Recent advances in SBST have instead formulated coverage as a many-objective optimization problem rather than applying sum scalarization. Starting from this formulation, in this work we apply many-objective test generation that handles multiple adequacy criteria simultaneously. To scale the approach to the large number of objectives optimized at the same time, we adopt an incremental strategy in which only coverage targets on the control dependency frontier are considered, until the frontier is expanded by covering a previously uncovered target.
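To make the incremental strategy concrete, below is a minimal sketch in Java of control dependency frontier exploration. This is not the authors' implementation: the Target and FrontierExplorer classes and all member names are hypothetical. Each coverage target records the targets it is control-dependent on; a target enters the search frontier only once all of its control dependencies have been covered, and covering a target may then expand the frontier.

    import java.util.*;

    // Hypothetical coverage target (e.g., a branch), with the targets
    // it is control-dependent on. Not the authors' actual data structure.
    class Target {
        final String id;
        final Set<Target> controlDependencies = new HashSet<>();
        boolean covered = false;
        Target(String id) { this.id = id; }
    }

    class FrontierExplorer {
        private final Set<Target> uncovered = new HashSet<>();

        FrontierExplorer(Collection<Target> allTargets) {
            uncovered.addAll(allTargets);
        }

        // Targets currently worth optimizing: still uncovered, but with
        // every control dependency already covered.
        Set<Target> frontier() {
            Set<Target> frontier = new HashSet<>();
            for (Target t : uncovered) {
                boolean reachable = true;
                for (Target dep : t.controlDependencies) {
                    if (!dep.covered) { reachable = false; break; }
                }
                if (reachable) frontier.add(t);
            }
            return frontier;
        }

        // Marking a target covered removes it from the uncovered set;
        // the next call to frontier() may then include new targets.
        void markCovered(Target t) {
            t.covered = true;
            uncovered.remove(t);
        }
    }

In a search-based generator, only the objective functions of the frontier targets would be handed to the many-objective algorithm at each generation, keeping the number of simultaneously optimized objectives small while still guaranteeing that every target eventually becomes optimizable once its dependencies are covered.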

Notes

Acknowledgement

This work is partially supported by the Italian Ministry of Education, University, and Research (MIUR) with the PRIN project GAUSS (grant no. 2015KWREMX).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Annibale Panichella (1)
  • Fitsum Meshesha Kifetew (2)
  • Paolo Tonella (3)

  1. Delft University of Technology, Delft, The Netherlands
  2. Fondazione Bruno Kessler, Trento, Italy
  3. Università della Svizzera Italiana (USI), Lugano, Switzerland
