
Automated Source Code Instrumentation for Verifying Potential Vulnerabilities

  • Hongzhe Li
  • Jaesang Oh
  • Hakjoo Oh
  • Heejo Lee
Conference paper
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 471)

Abstract

Software vulnerabilities, growing rapidly in number each year, pose a serious threat to system safety. In principle, detecting and removing vulnerabilities before the code is ever deployed greatly improves the quality of released software. In practice, however, the enormous amount of code under development, combined with the shortage of human resources and expertise, means that severe vulnerabilities remain concealed or cannot be revealed effectively. Current source code auditing tools for vulnerability discovery either generate too many false positives or require overwhelming manual effort to report actual software flaws. In this paper, we propose an automatic verification mechanism that discovers and verifies vulnerabilities using program source instrumentation and concolic testing. First, we leverage CIL to statically analyze the source code: extracting the program control flow graph (CFG), locating the security sinks, and backward-tracing the sensitive variables. Next, we perform automated program instrumentation to insert security probes in preparation for vulnerability verification. Finally, the instrumented program source is passed to the concolic testing engine, which verifies and reports the existence of an actual vulnerability. We demonstrate the efficacy and efficiency of our mechanism by implementing a prototype system and running experiments on nearly 4000 test cases from the Juliet Test Suite. The results show that our system can verify over 90 % of the test cases, and that it reports buffer overflow flaws with a precision of 100 % (0 false positives) and a recall of 94.91 %. To show that our system is practical for real-world programs, we also apply it to two popular Linux utilities, Bash and Cpio. As a result, our system finds and verifies vulnerabilities in a fully automatic way with no false positives.

Keywords

Automatic instrumentation · Security sinks · Security constraints · Vulnerability verification

References

  1. MITRE: Common Vulnerabilities and Exposures (CVE). https://cve.mitre.org/
  2. Sen, K., Marinov, D., Agha, G.: CUTE: a concolic unit testing engine for C. In: ACM International Symposium on Foundations of Software Engineering, pp. 263–272 (2005)
  3. Necula, G.C., McPeak, S., Rahul, S.P., Weimer, W.: CIL: intermediate language and tools for analysis and transformation of C programs. In: Nigel Horspool, R. (ed.) CC 2002. LNCS, vol. 2304, pp. 213–228. Springer, Heidelberg (2002)
  4. Perl, H., Dechand, S., Smith, M., Arp, D., Yamaguchi, F., Rieck, K., Acar, Y.: VCCFinder: finding potential vulnerabilities in open-source projects to assist code audits. In: Proceedings of the 22nd ACM CCS, pp. 426–437 (2015)
  5. Yamaguchi, F., Maier, A., Gascon, H., Rieck, K.: Automatic inference of search patterns for taint-style vulnerabilities. In: IEEE Symposium on Security and Privacy, pp. 797–812 (2015)
  6. Yamaguchi, F., Golde, N., Arp, D., Rieck, K.: Modeling and discovering vulnerabilities with code property graphs. In: IEEE Symposium on Security and Privacy, pp. 590–604 (2014)
  7. Oh, H., Lee, W., Heo, K., Yang, H., Yi, K.: Selective context-sensitivity guided by impact pre-analysis. ACM SIGPLAN Not. 49(6), 475–484 (2014)
  8. Li, H., Kwon, H., Kwon, J., Lee, H.: CLORIFI: software vulnerability discovery using code clone verification. Concurrency Comput.: Pract. Experience 28(6), 1900–1917 (2015)
  9. Wheeler, D.: Flawfinder (2011). http://www.dwheeler.com/flawfinder
  10. Kim, M., Kim, Y., Jang, Y.: Industrial application of concolic testing on embedded software: case studies. In: IEEE International Conference on Software Testing, Verification and Validation, pp. 390–399 (2012)
  11. Cadar, C., Dunbar, D., Engler, D.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: USENIX Symposium on Operating Systems Design and Implementation, vol. 8, pp. 209–224 (2008)
  12. Zhang, D., Liu, D., Lei, Y., Kung, D., Csallner, C., Wang, W.: Detecting vulnerabilities in C programs using trace-based testing. In: IEEE/IFIP International Conference on Dependable Systems and Networks, pp. 241–250 (2010)
  13. Di Paola, S.: Sinks: DOM XSS Test Cases Wiki Project. http://code.google.com/p/domxsswiki/wiki/Sinks
  14. Boland, T., Black, P.E.: Juliet 1.1 C/C++ and Java test suite. Computer 45(10), 89–90 (2012)
  15. Yamaguchi, F., Wressnegger, C., Gascon, H., Rieck, K.: Chucky: exposing missing checks in source code for vulnerability discovery. In: ACM CCS, pp. 499–510 (2013)

Copyright information

© IFIP International Federation for Information Processing 2016

Authors and Affiliations

  1. Department of Computer Science and Engineering, Korea University, Seoul, South Korea
