
Executable Counterexamples in Software Model Checking

  • Jeffrey Gennari
  • Arie Gurfinkel (corresponding author)
  • Temesghen Kahsai
  • Jorge A. Navas
  • Edward J. Schwartz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11294)

Abstract

Counterexamples—execution traces of the system that illustrate how an error state can be reached from the initial state—are essential for understanding verification failures. They are one of the most salient features of Model Checkers, distinguishing them from Abstract Interpretation and other Static Analysis techniques by giving users concrete information on how to debug their system and/or its specification. While in Hardware and Protocol verification counterexamples can be replayed on the system, in Software Model Checking (SMC) they take the form of a textual or semi-structured report. This is problematic: it complicates debugging by preventing developers from using existing processes and tools such as debuggers, fault localization, and fault minimization.

In this paper, we argue that for SMC the most useful form of a counterexample is an executable mock environment that can be linked with the code under analysis (CUA) to produce an executable that exhibits the fault witnessed by the counterexample. A mock environment differs from a unit test in that it interfaces with the CUA at the function level, potentially allowing it to bypass complex logic that interprets program inputs; this makes mock environments easier to construct than unit tests. We describe the automatic environment-generation process that we have developed in the SeaHorn verification framework, and we identify key challenges in generating mock environments from SMC counterexamples of complex memory-manipulating programs that use many external libraries and function calls. We validate our prototype on the Linux Device Driver verification benchmarks from SV-COMP. Finally, we discuss open challenges and suggest avenues for future work.
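
To make the idea concrete, below is a minimal sketch (not taken from the paper's artifact) of what such a mock environment might look like for a C program verified against SV-COMP-style primitives. It assumes the verifier models external inputs through __VERIFIER_nondet_int() and reports property violations through __VERIFIER_error(); the entry-point name cua_entry and the replayed values are purely illustrative placeholders, not part of the original work.

    /* harness.c - hypothetical mock environment generated from a counterexample. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Concrete values the counterexample assigns to successive
       nondeterministic inputs (placeholders for illustration). */
    static const int nd_values[] = { 0, 42, -1 };
    static unsigned nd_idx = 0;

    /* Replay the counterexample instead of returning an arbitrary value. */
    int __VERIFIER_nondet_int(void) {
      if (nd_idx < sizeof(nd_values) / sizeof(nd_values[0]))
        return nd_values[nd_idx++];
      return 0; /* default once the recorded trace is exhausted */
    }

    /* Reaching the error primitive means the executable reproduces the
       fault witnessed by the counterexample. */
    void __VERIFIER_error(void) {
      fprintf(stderr, "error location reached: counterexample confirmed\n");
      abort();
    }

    /* Entry point of the code under analysis; the name is illustrative.
       The harness is compiled and linked against the CUA's object files. */
    extern int cua_entry(void);

    int main(void) {
      return cua_entry();
    }

Linking such a harness with the CUA (for example, clang harness.c cua.c -o replay) would yield an executable that can be run under a standard debugger, which is the workflow the paper advocates.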

Acknowledgments

The authors would like to thank Natarajan Shankar for his invaluable comments, which improved the quality of this paper.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jeffrey Gennari (1)
  • Arie Gurfinkel (2, corresponding author)
  • Temesghen Kahsai (3)
  • Jorge A. Navas (4)
  • Edward J. Schwartz (1)

  1. Carnegie Mellon University, Pittsburgh, USA
  2. University of Waterloo, Waterloo, Canada
  3. University of Iowa, Iowa City, USA
  4. SRI International, Menlo Park, USA
