Abstraction and Mining of Traces to Explain Concurrency Bugs

  • Mitra Tabaei Befrouei
  • Chao Wang
  • Georg Weissenbacher
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8734)

Abstract

We propose an automated mining-based method for explaining concurrency bugs. We use a data mining technique called sequential pattern mining to identify problematic sequences of concurrent read and write accesses to the shared memory of a multi-threaded program. Our technique does not rely on any characteristics specific to one type of concurrency bug, thus providing a general framework for concurrency bug explanation. In our method, given a set of concurrent execution traces, we first mine sequences that frequently occur in failing traces and then rank them based on the number of their occurrences in passing traces. We consider the highly ranked sequences of events that occur frequently only in failing traces to be an explanation of the system failure, as they can reveal its causes in the execution traces. Since the scalability of sequential pattern mining is limited by the length of the traces, we present an abstraction technique which shortens the traces at the cost of introducing spurious explanations. Spurious as well as misleading explanations are then eliminated by a subsequent filtering step, helping the programmer to focus on likely causes of the failure. We validate our approach using a number of case studies, including synthetic as well as real-world bugs.
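
A minimal sketch of the mining and ranking step follows. It is not the authors' implementation: it mines only contiguous subsequences (n-grams) of shared-memory access events instead of using a full sequential pattern miner, and the traces, function names, and thresholds below are hypothetical illustrations of the idea of keeping patterns frequent in failing traces and ranking them by how rarely they appear in passing traces.

  from collections import Counter

  def subsequences(trace, length):
      # Yield all contiguous subsequences (n-grams) of the given length.
      # A full sequential pattern miner would also consider gapped
      # subsequences; n-grams keep this sketch simple.
      for i in range(len(trace) - length + 1):
          yield tuple(trace[i:i + length])

  def mine_explanations(failing, passing, length=3, min_support=0.8):
      # Candidate explanations: access sequences that occur in most failing
      # traces, ranked so that sequences rarely seen in passing traces
      # come first (rarer in passing traces = more suspicious).
      fail_support = Counter()
      for trace in failing:
          for pattern in set(subsequences(trace, length)):
              fail_support[pattern] += 1

      # Keep patterns occurring in at least min_support of the failing traces.
      frequent = [p for p, c in fail_support.items()
                  if c / len(failing) >= min_support]

      # Count how many passing traces contain each candidate pattern.
      pass_support = Counter()
      for trace in passing:
          for pattern in set(subsequences(trace, length)):
              if pattern in fail_support:
                  pass_support[pattern] += 1

      # Rank: fewest occurrences in passing traces first.
      return sorted(frequent, key=lambda p: pass_support[p])

  # Hypothetical traces of (thread, access type, shared variable) events.
  failing = [[("T1", "R", "x"), ("T2", "W", "x"), ("T1", "W", "x")],
             [("T0", "R", "y"), ("T1", "R", "x"), ("T2", "W", "x"), ("T1", "W", "x")]]
  passing = [[("T1", "R", "x"), ("T1", "W", "x"), ("T2", "W", "x")]]

  for pattern in mine_explanations(failing, passing):
      print(pattern)

On these toy traces the sketch reports the interleaving in which T2's write to x falls between T1's read and write of x; it occurs in both failing traces but in no passing trace.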

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Mitra Tabaei Befrouei (1)
  • Chao Wang (2)
  • Georg Weissenbacher (1)
  1. Vienna University of Technology, Vienna, Austria
  2. Virginia Tech, Blacksburg, USA
