Bio-Inspired Optimization of Test Data Generation for Concurrent Software

  • Ricardo F. Vilela
  • Victor H. S. C. Pinto
  • Thelma E. Colanzi
  • Simone R. S. Souza
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11664)

Abstract

Concurrent software exhibits key features such as communication, concurrency, and non-determinism, which increase the complexity of software testing. One of the main challenges is test data generation. Search-based software testing techniques can also benefit concurrent software testing. To this end, this paper adopts a bio-inspired approach, called BioConcST, to support automatic test data generation for concurrent programs. BioConcST uses a Genetic Algorithm (GA) and an evolutionary strategy adapted to accept genetic information from some bad individuals (test data) in order to generate better individuals. Structural testing criteria for concurrent programs are used to guide the evolution of test data generation. An experimental study was carried out to compare BioConcST with an elitist GA strategy (EGA) in terms of adequacy of testing criteria for message-passing and shared-memory programs. Twelve concurrent Java programs were included, and the results suggest BioConcST is a promising approach: it achieved better coverage for all the testing criteria evaluated, and the effect-size measure was large in most cases.
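The core idea of the abstract — an elitist GA whose selection occasionally draws genetic material from bad individuals, guided by a coverage-based fitness — can be illustrated with a minimal sketch. This is not the paper's implementation: the fitness function, gene encoding, `bad_rate` parameter, and coverage targets below are all hypothetical stand-ins for the real instrumentation of concurrent Java programs and the structural testing criteria used by BioConcST.

```python
import random

random.seed(42)

# Hypothetical set of structural coverage targets (e.g. def-use pairs
# or synchronization edges in the paper's criteria).
TARGETS = set(range(20))

def covered(individual):
    # Toy stand-in for executing the program under test with a test
    # datum and recording which elements it exercises.
    return {g % 20 for g in individual}

def fitness(individual):
    # Fraction of targets covered by this test datum.
    return len(covered(individual)) / len(TARGETS)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, genes=8, generations=30, bad_rate=0.2):
    pop = [[random.randrange(100) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite, worst = pop[0], pop[-1]
        children = []
        while len(children) < pop_size - 1:
            # Mostly recombine good parents, but with probability
            # bad_rate take genes from a bad individual, mimicking the
            # idea of reusing genetic information from bad test data.
            mate = worst if random.random() < bad_rate else random.choice(pop[:5])
            child = crossover(elite, mate)
            if random.random() < 0.3:  # simple point mutation
                child[random.randrange(genes)] = random.randrange(100)
            children.append(child)
        pop = [elite] + children  # elitism: best individual survives
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 2))
```

The contrast with a purely elitist GA (the EGA baseline) is the `bad_rate` branch: occasionally injecting diversity from low-fitness individuals can help escape local optima in the coverage landscape.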

Keywords

Concurrent software testing · Structural testing · Search-based software testing · Genetic algorithm · Test data generation

Acknowledgement

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001 and National Council for Scientific and Technological Development (CNPq).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ricardo F. Vilela (1)
  • Victor H. S. C. Pinto (1)
  • Thelma E. Colanzi (2)
  • Simone R. S. Souza (1)
  1. Institute of Mathematical and Computer Sciences, University of São Paulo (ICMC-USP), São Carlos, Brazil
  2. Informatics Department, State University of Maringá, Maringá, Brazil