Cluster Computing, Volume 19, Issue 1, pp 153–166

PEDAL: a dynamic analysis tool for efficient concurrency bug reproduction in big data environment

  • Yan Hu
  • Jun Yan
  • Kim-Kwang Raymond Choo

Abstract

Concurrency bugs usually manifest only under very rare conditions, and reproducing them can be a challenging task. To reproduce a concurrency bug with a given input, one would have to explore the vast interleaving space in search of erroneous schedules. These challenges are compounded in a big data environment. This paper explores concurrency bug reproduction using runtime data. We approach the concurrency testing and bug reproduction problem differently from the existing literature, by emphasizing preemptable synchronization points. In our approach, a lightweight profiler monitors program runs and collects the synchronization points where the thread scheduler can intervene and make scheduling decisions. Traces containing important synchronization API calls and shared-memory accesses are recorded and analyzed. Based on the preemptable synchronization points, we build a reduced preemption set (RPS) to narrow the search space for erroneous schedules. We implement an optimized preemption-bounded schedule search algorithm and an RPS-directed search algorithm to reproduce concurrency bugs more efficiently. These schedule exploration algorithms are integrated into our prototype, Profile directed Event driven Dynamic AnaLysis (PEDAL), which uses the runtime data consisting of synchronization points as a source of feedback. To demonstrate its utility, we evaluate the performance of PEDAL against that of two systematic concurrency testing tools. The findings show that PEDAL detects concurrency bugs more quickly with given inputs while consuming less memory. To demonstrate its scalability in a big data environment, we use PEDAL to analyze several real concurrency bugs in large-scale multithreaded programs, namely Apache and MySQL.
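The core idea of the abstract — bounding the number of preemptions and restricting them to a reduced set of preemptable points — can be illustrated with a toy model. The sketch below is not PEDAL's implementation; the `schedules` function, the event lists, and the `allowed` parameter (standing in for an RPS) are illustrative assumptions. It enumerates interleavings of per-thread event traces, charging a "preemption" whenever the scheduler switches away from a thread that is still runnable, and, when an allowed set is supplied, permitting such a switch only at events in that set.

```python
def schedules(threads, bound, allowed=None):
    """Enumerate interleavings of per-thread event lists with at most
    `bound` preemptions. A switch away from a still-runnable thread
    counts as a preemption; with `allowed` given (a toy analogue of a
    reduced preemption set), a thread may be preempted only when its
    next pending event is in that set. Non-preemptive switches (the
    running thread has finished) are always permitted."""
    def explore(pcs, last, used):
        # All threads exhausted: one complete schedule found.
        if all(pcs[t] == len(threads[t]) for t in threads):
            yield ()
            return
        for t in threads:
            if pcs[t] == len(threads[t]):
                continue  # thread t has no events left
            cost = 0
            if last is not None and t != last and pcs[last] < len(threads[last]):
                # Switching away from a runnable thread is a preemption,
                # taken at that thread's next pending event.
                if allowed is not None and threads[last][pcs[last]] not in allowed:
                    continue  # not a preemptable point under the RPS
                cost = 1
            if used + cost > bound:
                continue  # preemption budget exhausted
            ev = threads[t][pcs[t]]
            pcs2 = dict(pcs)
            pcs2[t] += 1
            for rest in explore(pcs2, t, used + cost):
                yield ((t, ev),) + rest
    return list(explore({t: 0 for t in threads}, None, 0))
```

With two threads of two events each, a bound of 0 yields only the two serial schedules, a bound of 2 yields all six interleavings, and adding an allowed set prunes the bounded search further — the space reduction that motivates RPS-directed exploration.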

Keywords

Concurrency analysis · Profiling · Dynamic analysis · Bug reproduction

Acknowledgments

We thank the anonymous reviewers for their valuable comments, which helped us improve the paper. The work in this paper is partially funded by the National Natural Science Foundation of China (NSFC 61300017, 61572097).

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. School of Software, Dalian University of Technology, Dalian, China
  2. Institute of Software, Chinese Academy of Sciences, Beijing, China
  3. University of South Australia, Adelaide, Australia
  4. Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, China