Analyzing system software components using API model guided symbolic execution

Abstract

Analyzing real-world software is challenging due to the complexity of the software frameworks and APIs it depends on. In this paper, we present a tool, PROMPT, that facilitates the analysis of software components using API model guided symbolic execution. PROMPT has a specification component, PROSE, that lets users define an API model, which consists of a set of data constraints and life-cycle rules that define control-flow constraints among sequentially composed API functions. Given a PROSE model and a software component, PROMPT symbolically executes the component while enforcing the specified API model. PROMPT has been implemented on top of the KLEE symbolic execution engine and has been applied to Linux device drivers from the video, sound, and network subsystems and to some vulnerable components of BlueZ, the implementation of the Bluetooth protocol stack for the Linux kernel. PROMPT detected two new and four known memory vulnerabilities in the analyzed system software components.

Notes

  1. PROMPT can be accessed at https://github.com/sysrel/PROMPT.

  2. Models of the API functions can be reused across different versions as long as the modeled aspects do not change.

  3. This singleton rule also applies to other bus types, including PCI and I2C.

  4. The PROSE models of our benchmarks can be found at https://github.com/sysrel/PROMPT/tree/master/JASE_bencmarks.

  5. Note that the developers of such drivers typically have this domain knowledge; by modeling the relevant API functions, they can analyze their drivers with the help of PROMPT.

  6. klee-stats reports the amount of heap memory allocated via malloc.


Acknowledgements

This work was partially funded by the National Science Foundation under Grants CNS-1815883 and CNS-1942235 and by the Semiconductor Research Corporation. We would like to thank the anonymous reviewers for their feedback. We would like to thank Joshua Nelson for helping with the PROSE parser as an undergraduate researcher.

Author information

Corresponding author

Correspondence to Tuba Yavuz.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Yavuz, T., Bai, K.: Analyzing system software components using API model guided symbolic execution. Autom Softw Eng 27, 329–367 (2020). https://doi.org/10.1007/s10515-020-00276-5

Keywords

  • Symbolic execution
  • API modeling
  • Specification