BUBEN: Automated Library Abstractions Enabling Scalable Bug Detection for Large Programs with I/O and Complex Environment

  • Pavel Parízek
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11781)

Abstract

An important goal of software engineering research is to create methods for efficient verification and bug detection. In this context, we focus on two challenges: (1) scalability to large and realistic software systems, and (2) the inability of many tools to directly analyze programs that perform I/O operations and interact with their environment. Common sources of scalability problems include the huge number of thread interleavings and the use of large libraries. Programs written in managed languages, such as Java, cannot be directly analyzed by many verification tools due to insufficient support for native library methods. Both issues especially affect path-sensitive verification techniques.

We present the Buben system, which automatically generates abstractions of complex software systems written in Java. The process has three phases: (1) a dynamic analysis that records under-approximate information about the behavior of native methods and of library methods that perform I/O, (2) a static analysis that computes over-approximate summaries of the side effects of library methods, and (3) a program code transformation that replaces calls of native methods and creates abstractions of library methods. Software systems abstracted in this way can be analyzed, e.g., for the presence of bugs, more efficiently and without the risk of tool failure caused by unsupported libraries. We evaluated Buben on several programs from popular benchmark suites, including DaCapo.
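To make the phases concrete, the following Java sketch shows the kind of abstraction the transformation phase could produce for an I/O-performing library method such as `BufferedReader.readLine()`. This is our own illustrative assumption, not Buben's actual generated code: the class name, the recorded values, and the replay strategy are all hypothetical.

```java
// Illustrative sketch (not Buben's actual output) of an abstraction of an
// I/O-performing library method. All names here are hypothetical.
import java.util.Arrays;
import java.util.List;

public class ReadLineAbstraction {
    // Return values observed during the dynamic-analysis phase; replaying
    // them is an under-approximation of the method's real behavior.
    private static final List<String> RECORDED_VALUES =
            Arrays.asList("GET /index.html", "GET /style.css", null);

    private static int callIndex = 0;

    // In the transformed program, calls to the original library method are
    // redirected here; no real I/O happens, so a verification tool can
    // execute the code without native-method support.
    public static String readLine() {
        if (callIndex >= RECORDED_VALUES.size()) {
            return null; // past the recorded trace: report end of input
        }
        return RECORDED_VALUES.get(callIndex++);
    }
}
```

The over-approximate side-effect summaries from the static-analysis phase would be handled analogously, e.g. by assigning nondeterministic values to the fields that a library method may modify.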

Acknowledgments

We would like to thank Ondřej Lhoták for all his suggestions regarding the paper content and presentation. This work was partially supported by the Czech Science Foundation project 18-17403S.

References

  1. Artzi, S., Kiezun, A., Glasser, D., Ernst, M.: Combined static and dynamic mutability analysis. In: Proceedings of ASE 2007. ACM (2007)
  2. Binkley, D., Gallagher, K.B.: Program slicing. In: Advances in Computers, vol. 43 (1996)
  3. Blackburn, S.M., et al.: The DaCapo benchmarks: Java benchmarking development and analysis. In: Proceedings of OOPSLA 2006. ACM (2006)
  4. Cadar, C., Dunbar, D., Engler, D.R.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proceedings of OSDI 2008. USENIX (2008)
  5. Ceccarello, M., Tkachuk, O.: Automated generation of model classes for Java PathFinder. In: Proceedings of Java Pathfinder Workshop 2013, ACM SIGSOFT Software Engineering Notes, vol. 39, no. 1 (2014)
  6. Cherem, S., Rugina, R.: A practical escape and effect analysis for building lightweight method summaries. In: Krishnamurthi, S., Odersky, M. (eds.) CC 2007. LNCS, vol. 4420, pp. 172–186. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-71229-9_12
  7. Flanagan, C., Freund, S.N.: The RoadRunner dynamic analysis framework for concurrent programs. In: Proceedings of PASTE 2010. ACM (2010)
  8. Giffhorn, D., Hammer, C.: Precise slicing of concurrent programs. Autom. Softw. Eng. 16(2), 197 (2009)
  9. Marek, L., Villazon, A., Zheng, Y., Ansaloni, D., Binder, W., Qi, Z.: DiSL: a domain-specific language for bytecode instrumentation. In: Proceedings of AOSD 2012. ACM (2012)
  10. Matosevic, I., Abdelrahman, T.S.: Efficient bottom-up heap analysis for symbolic path-based data access summaries. In: Proceedings of CGO 2012. ACM (2012)
  11. Naeem, N.A., Lhoták, O.: Faster alias set analysis using summaries. In: Knoop, J. (ed.) CC 2011. LNCS, vol. 6601, pp. 82–103. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19861-8_6
  12. Rountev, A., Sharp, M., Xu, G.: IDE dataflow analysis in the presence of large object-oriented libraries. In: Hendren, L. (ed.) CC 2008. LNCS, vol. 4959, pp. 53–68. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78791-4_4
  13. Sălcianu, A., Rinard, M.: Purity and side effect analysis for Java programs. In: Cousot, R. (ed.) VMCAI 2005. LNCS, vol. 3385, pp. 199–215. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-30579-8_14
  14. Tkachuk, O., Dwyer, M.: Adapting side effect analysis for modular program model checking. In: Proceedings of ESEC/FSE 2003. ACM (2003)
  15. Yorsh, G., Yahav, E., Chandra, S.: Generating precise and concise procedure summaries. In: Proceedings of POPL 2008. ACM (2008)
  16. Java Pathfinder verification framework (JPF). https://github.com/javapathfinder/jpf-core/wiki

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University, Prague, Czechia
