
Evaluating and Integrating Diverse Bug Finders for Effective Program Analysis

  • Bailin Lu
  • Wei Dong
  • Liangze Yin
  • Li Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11293)

Abstract

Many static analysis methods and tools have been developed for program bug detection. They are based on diverse theoretical principles, such as pattern matching, abstract interpretation, model checking, and symbolic execution. Unfortunately, no single one of them meets most requirements of bug finding: each individual tool suffers from high false negatives and/or false positives, which is the main obstacle to using them in practice. A direct and promising way to improve the capability of static analysis is to integrate diverse bug finders. In this paper, we first selected five state-of-the-art C/C++ static analysis tools built on different theories. We then evaluated them in detail over different defect types and code structures. To increase the precision and recall of tool integration, we studied how to properly employ machine learning algorithms based on features of programs and tools. The evaluation results show that: (1) the abilities of the diverse tools differ considerably across defect types and code structures, and their overlaps are small; (2) integration based on machine learning clearly improves the overall performance of static analysis. Finally, we investigated the defect types and code structures that remain challenging for existing tools; these should be addressed in future research on static analysis.
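The paper does not spell out the learning setup in the abstract, but the idea of combining per-tool verdicts and program features with a classifier can be illustrated with a minimal sketch. The feature layout, the five-tool verdict vector, and the use of a random forest are assumptions for illustration only, not the authors' implementation.

    # Minimal sketch (assumed setup, not the paper's implementation):
    # learn whether a candidate warning is a real defect from the verdicts
    # of several static analyzers plus simple program features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score

    # Each row describes one (program location, candidate defect) pair:
    # [tool1_reported, ..., tool5_reported, defect_type_id, code_structure_id]
    rng = np.random.default_rng(0)
    verdicts = rng.integers(0, 2, size=(500, 5))            # per-tool verdicts (0/1), placeholder data
    features = rng.integers(0, 10, size=(500, 2))           # defect-type / code-structure ids, placeholder data
    X = np.hstack([verdicts, features])
    y = rng.integers(0, 2, size=500)                        # ground truth: real defect or not

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    pred = clf.predict(X_test)
    print("precision:", precision_score(y_test, pred))
    print("recall:   ", recall_score(y_test, pred))

With labeled benchmark programs in place of the placeholder arrays, the same pattern yields a single integrated verdict per warning, and precision/recall can be compared against each tool in isolation.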

Keywords

Static analysis · Tool integration · Machine learning

Acknowledgments

This work was funded by the National Natural Science Foundation of China (Nos. 61690203, 61802415, and 61532007).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. National University of Defense Technology, Changsha, China
  2. Meituan Corporation, Beijing, China
