
Efficient Testing of Concurrent Programs with Abstraction-Guided Symbolic Execution

  • Neha Rungta
  • Eric G. Mercer
  • Willem Visser
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5578)

Abstract

In this work we present an abstraction-guided symbolic execution technique that quickly detects errors in concurrent programs. The input to the technique is a set of target locations that represent a possible error in the program. We generate an abstract system from a backward slice for each target location. The backward slice contains the program locations relevant to testing the reachability of the target locations; it considers only sequential execution and does not capture any inter-thread dependencies. A combination of heuristics is used to guide symbolic execution along the locations in the abstract system in an effort to generate a corresponding feasible execution trace to the target locations. When the symbolic execution is unable to make progress, we refine the abstraction by adding locations that handle inter-thread dependencies. We demonstrate empirically that abstraction-guided symbolic execution generates feasible execution paths in the actual system, finding concurrency errors in a few seconds where exhaustive symbolic execution fails to find the same errors in an hour.
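
To make the guidance-and-refinement cycle concrete, below is a minimal Java sketch of the loop the abstract describes. All names here (Location, State, rank, the refine operator) are hypothetical illustrations, not the paper's implementation: it shows a best-first search ranked by an abstraction-based heuristic, which falls back to refining the abstraction with inter-thread dependencies whenever the guided search exhausts its frontier without reaching the target.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.PriorityQueue;
import java.util.Set;
import java.util.function.UnaryOperator;

/** Hypothetical sketch of the abstraction-guided search loop. */
final class AbstractionGuidedSearch {

    /** A program location in the abstract system. */
    record Location(String id) {}

    /** A symbolic-execution state (path constraints, thread states, ...). */
    interface State {
        boolean reachesTarget(Location target); // does this state hit the target?
        List<State> successors();               // symbolic successor states
        int rank(Set<Location> abstraction);    // fewer unvisited abstract locations = better
    }

    /**
     * Guide symbolic execution along the locations of the abstract system.
     * When the guided search exhausts its frontier without reaching the
     * target, refine the abstraction by adding locations that account for
     * inter-thread dependencies, and retry.
     */
    static Optional<State> search(State initial, Location target,
                                  Set<Location> abstraction,
                                  UnaryOperator<Set<Location>> refine) {
        Set<Location> abs = abstraction;
        while (true) {
            final Set<Location> current = abs;
            PriorityQueue<State> frontier = new PriorityQueue<>(
                    Comparator.comparingInt(s -> s.rank(current)));
            frontier.add(initial);
            while (!frontier.isEmpty()) {
                State s = frontier.poll();
                if (s.reachesTarget(target)) {
                    return Optional.of(s);       // feasible error trace found
                }
                frontier.addAll(s.successors()); // stuck states simply drop out
            }
            Set<Location> refined = refine.apply(abs);
            if (refined.equals(abs)) {
                return Optional.empty();         // no refinement left; give up
            }
            abs = refined;                       // retry with inter-thread locations
        }
    }
}
```

The design point this sketch tries to capture is that the slice-based abstraction stays deliberately cheap, ignoring inter-thread dependencies, until the guided search demonstrably needs them.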

Keywords

Target Location · Model Check · Concurrent Program · Symbolic Execution · Program Location

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Neha Rungta (1)
  • Eric G. Mercer (1)
  • Willem Visser (2)
  1. Dept. of Computer Science, Brigham Young University, Provo, USA
  2. Division of Computer Science, Stellenbosch University, South Africa
