
LIPS vs MOSA: A Replicated Empirical Study on Automated Test Case Generation

  • Conference paper
  • Search Based Software Engineering (SSBSE 2017)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 10452)

Abstract

Replication is a fundamental pillar in the construction of scientific knowledge. Test data generation for procedural programs can be tackled using a single-target or a many-objective approach. The proponents of LIPS, a novel single-target test generator, conducted a preliminary empirical study to compare their approach with MOSA, an alternative many-objective test generator. However, their empirical investigation suffers from several external and internal validity threats, does not consider complex programs with many branches, and does not include any qualitative analysis to interpret the results. In this paper, we report the results of a replication of the original study designed to address its major limitations and threats to validity. The new findings draw a completely different picture of the pros and cons of single-target vs many-objective approaches to test case generation.
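To make the contrast concrete, the following is a minimal sketch (not taken from the paper) of the branch-coverage fitness standard in the search-based testing literature: a single-target generator such as LIPS minimizes this value for one uncovered branch at a time, while a many-objective generator such as MOSA treats the vector of such values, one per uncovered branch, as simultaneous objectives. The function names and the example predicate are illustrative assumptions.

```python
def normalize(d):
    """Map a raw branch distance in [0, inf) into [0, 1)."""
    return d / (d + 1.0)

def branch_fitness(approach_level, branch_distance):
    """Classic fitness for one branch target: approach_level counts the
    control dependencies missed before diverging from the target, and
    branch_distance measures how close the critical predicate came to
    evaluating the desired way."""
    return approach_level + normalize(branch_distance)

# Example: execution diverged two control dependencies before the target,
# and the critical predicate `x == 10` was evaluated with x = 7, giving a
# raw branch distance of |10 - 7| = 3.
f = branch_fitness(2, abs(10 - 7))  # 2 + 3/(3+1) = 2.75
```

In a many-objective setting, one such value is computed per uncovered branch and the search ranks candidate tests by non-dominance over the whole vector rather than by a single scalar.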


Notes

  1. https://github.com/EvoSuite/evosuite/tree/master/client/src/main/java/org/evosuite/ga/metaheuristics/mosa.

  2. https://github.com/EvoSuite/evosuite.

  3. The number of branches reported here is sometimes slightly different from that of the original study because EvoSuite performs the instrumentation and counts the branches at the byte code, not the source code, level.

  4. http://www.joptimizer.com.

  5. http://nd4j.org.



Author information

Correspondence to Annibale Panichella.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Panichella, A., Kifetew, F.M., Tonella, P. (2017). LIPS vs MOSA: A Replicated Empirical Study on Automated Test Case Generation. In: Menzies, T., Petke, J. (eds) Search Based Software Engineering. SSBSE 2017. Lecture Notes in Computer Science, vol 10452. Springer, Cham. https://doi.org/10.1007/978-3-319-66299-2_6


  • DOI: https://doi.org/10.1007/978-3-319-66299-2_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-66298-5

  • Online ISBN: 978-3-319-66299-2

  • eBook Packages: Computer Science, Computer Science (R0)
