
Testing of Concurrent Programs Using Genetic Algorithms

  • Vendula Hrubá
  • Bohuslav Křena
  • Zdeněk Letko
  • Shmuel Ur
  • Tomáš Vojnar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7515)

Abstract

Noise injection disturbs the scheduling of program threads in order to increase the probability that more of their different legal interleavings occur during testing. However, there exist many types of noise heuristics with many parameters, and it is not easy to set them so that noise injection is really efficient. In this paper, we propose a new way of using genetic algorithms to search for suitable types of noise heuristics and their parameters. We formalize this task as the test and noise configuration search problem and then discuss how to represent instances of this problem for genetic algorithms, which objective functions to use, and how to tune the parameters of the genetic algorithms when solving the problem. The proposed approach is evaluated on a set of benchmarks, showing that it provides significantly better results than the so far preferred random noise injection.
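The abstract describes encoding test and noise configurations as individuals for a genetic algorithm and evaluating them with suitable objective functions. The following is a minimal, self-contained Java sketch of that general idea only, not the authors' implementation: the NoiseConfig class, the gene ranges, the noise-type encoding, and the synthetic evaluate() function are all illustrative assumptions. In a real setting, fitness would be computed by repeatedly executing the tested program under the chosen noise configuration and combining objectives such as concurrency coverage and the number of detected errors.

```java
import java.util.Arrays;
import java.util.Random;

/**
 * Illustrative sketch of a genetic algorithm searching over noise-injection
 * configurations. All names and parameter ranges are assumptions made for
 * the example; they do not reproduce any particular noise-injection tool.
 */
public class NoiseConfigGA {

    /** An individual: one candidate noise configuration. */
    static class NoiseConfig {
        int noiseType;   // assumed encoding: 0 = yield, 1 = sleep, 2 = wait
        int frequency;   // probability of injecting noise at a location, 0..1000
        int strength;    // e.g. sleep length in milliseconds, 0..100
        double fitness;  // higher is better
    }

    static final Random RND = new Random(42);

    public static void main(String[] args) {
        int popSize = 20, generations = 30;
        NoiseConfig[] pop = new NoiseConfig[popSize];
        for (int i = 0; i < popSize; i++) pop[i] = randomConfig();

        for (int g = 0; g < generations; g++) {
            for (NoiseConfig c : pop) c.fitness = evaluate(c);
            Arrays.sort(pop, (a, b) -> Double.compare(b.fitness, a.fitness));
            NoiseConfig[] next = new NoiseConfig[popSize];
            next[0] = pop[0]; // elitism: keep the best individual
            for (int i = 1; i < popSize; i++) {
                NoiseConfig p1 = tournament(pop), p2 = tournament(pop);
                next[i] = mutate(crossover(p1, p2));
            }
            pop = next;
        }
        for (NoiseConfig c : pop) c.fitness = evaluate(c);
        Arrays.sort(pop, (a, b) -> Double.compare(b.fitness, a.fitness));
        System.out.printf("best: type=%d freq=%d strength=%d fitness=%.3f%n",
                pop[0].noiseType, pop[0].frequency, pop[0].strength, pop[0].fitness);
    }

    static NoiseConfig randomConfig() {
        NoiseConfig c = new NoiseConfig();
        c.noiseType = RND.nextInt(3);
        c.frequency = RND.nextInt(1001);
        c.strength = RND.nextInt(101);
        return c;
    }

    /**
     * Placeholder fitness. A real evaluation would run the tested program
     * under this noise configuration and measure coverage, detected errors,
     * and execution time; this synthetic function only keeps the sketch
     * runnable on its own.
     */
    static double evaluate(NoiseConfig c) {
        double coverageProxy = 1.0 - Math.abs(c.frequency - 300) / 1000.0;
        double costPenalty = c.strength / 500.0;
        return coverageProxy - costPenalty + (c.noiseType == 1 ? 0.05 : 0.0);
    }

    /** Binary tournament selection. */
    static NoiseConfig tournament(NoiseConfig[] pop) {
        NoiseConfig a = pop[RND.nextInt(pop.length)], b = pop[RND.nextInt(pop.length)];
        return a.fitness >= b.fitness ? a : b;
    }

    /** Uniform crossover over the three configuration genes. */
    static NoiseConfig crossover(NoiseConfig p1, NoiseConfig p2) {
        NoiseConfig c = new NoiseConfig();
        c.noiseType = RND.nextBoolean() ? p1.noiseType : p2.noiseType;
        c.frequency = RND.nextBoolean() ? p1.frequency : p2.frequency;
        c.strength = RND.nextBoolean() ? p1.strength : p2.strength;
        return c;
    }

    /** Perturb each gene with probability 0.2, clamped to its range. */
    static NoiseConfig mutate(NoiseConfig c) {
        if (RND.nextDouble() < 0.2) c.noiseType = RND.nextInt(3);
        if (RND.nextDouble() < 0.2)
            c.frequency = Math.min(1000, Math.max(0, c.frequency + RND.nextInt(201) - 100));
        if (RND.nextDouble() < 0.2)
            c.strength = Math.min(100, Math.max(0, c.strength + RND.nextInt(21) - 10));
        return c;
    }
}
```

In practice, such a search would more likely be built on top of an existing genetic-algorithm toolkit and a noise-injection testing infrastructure rather than coded from scratch, with the fitness of each configuration averaged over several noisy test runs to cope with the nondeterminism of concurrent executions.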

Keywords

Genetic Algorithm, Model Check, Crossover Operator, Good Individual, Concurrent Program


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Vendula Hrubá (1)
  • Bohuslav Křena (1)
  • Zdeněk Letko (1)
  • Shmuel Ur (2)
  • Tomáš Vojnar (1)

  1. IT4Innovations Centre of Excellence, FIT, Brno University of Technology, Czech Republic
  2. Shmuel Ur Innovations, Ltd., Israel
