Predicting Concurrency Failures in the Generalized Execution Traces of x86 Executables

  • Chao Wang
  • Malay Ganai
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7186)

Abstract

In this tutorial, we first provide a brief overview of the latest developments in SMT-based symbolic predictive analysis techniques and their applications to runtime verification. We then present a unified runtime analysis platform for detecting concurrency-related program failures in the x86 executables of shared-memory multithreaded applications. Our platform supports efficient monitoring and easy customization of a wide range of execution trace generalization techniques. Many of these techniques have been successfully incorporated into our in-house verification tools, including BEST (Binary instrumentation based Error-directed Symbolic Testing), which can detect concurrency-related errors such as deadlocks and race conditions, generate failure-triggering thread schedules, and provide a visual mapping between runtime events and their program code to aid debugging.
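To give a flavor of predictive analysis over a recorded trace, the following is a minimal illustrative sketch (not the actual algorithm used in BEST): a lockset-style check that flags a potential data race when two events in the trace access the same shared location from different threads, at least one access is a write, and the two threads held no lock in common. All names and the trace format below are hypothetical, chosen only for illustration.

```python
# Illustrative lockset-style race prediction over a recorded event trace.
# An event is (thread_id, op, location, locks_held), where op is 'r' or 'w'
# and locks_held is the frozenset of locks the thread held at that event.
from itertools import combinations

def predict_races(trace):
    """Return pairs of events that may race under some interleaving."""
    races = []
    for e1, e2 in combinations(trace, 2):
        t1, op1, loc1, held1 = e1
        t2, op2, loc2, held2 = e2
        if (t1 != t2 and loc1 == loc2
                and 'w' in (op1, op2)       # at least one access is a write
                and not (held1 & held2)):   # no common protecting lock
            races.append((e1, e2))
    return races

# A tiny trace: 'x' is accessed without consistent locking, 'y' is protected.
trace = [
    (1, 'w', 'x', frozenset({'L'})),
    (2, 'r', 'x', frozenset()),
    (2, 'r', 'y', frozenset({'L'})),
    (1, 'w', 'y', frozenset({'L'})),
]
print(len(predict_races(trace)))  # prints 1: only the accesses to 'x' race
```

Real predictive tools go further than this sketch: they encode the trace's data and control dependencies as SMT constraints so that only feasible alternative interleavings are reported, which is what distinguishes symbolic predictive analysis from simple lockset or happens-before checks.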

Keywords

Concurrent Program · Execution Trace · Symbolic Execution · Atomic Region · Symbolic Analysis



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Chao Wang, Virginia Tech, Blacksburg, USA
  • Malay Ganai, NEC Laboratories America, Princeton, USA
