The Symbolic Execution Debugger (SED): a platform for interactive symbolic execution, debugging, verification and more

  • Martin Hentschel
  • Richard Bubel
  • Reiner Hähnle
Regular Paper

Abstract

The Symbolic Execution Debugger (SED) is an extension of the Eclipse debug platform for interactive debuggers based on symbolic execution. The SED comes with a static symbolic execution engine for sequential Java programs, but any third-party symbolic execution engine can be integrated into it. Like a traditional debugger, an interactive debugger based on symbolic execution allows one to locate defects in the source code. The difference is that all feasible execution paths are explored at once, so there is no need to know input values that lead to an execution exhibiting the failure. In addition, such a debugger can be used in code reviews and to guide and present the results of analyses based on symbolic execution, such as, in our case, correctness proofs. Experimental evaluations showed that the SED increases the effectiveness of code reviews and of proof understanding tasks.
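To make the path exploration concrete, the following is a minimal sketch of how a symbolic execution engine such as the SED's treats a small sequential Java method. The class Example, the method max, and the symbolic names X and Y are hypothetical illustrations, not taken from the paper: each input is bound to a symbolic value rather than a concrete integer, and every branch splits execution into paths guarded by a path condition.

    // Hypothetical example: x and y are bound to symbolic values X and Y
    // instead of concrete integers, so both branches are explored at once.
    public class Example {
        static int max(int x, int y) {
            if (x >= y) {      // branch node: symbolic execution splits here
                return x;      // path 1: path condition X >= Y, result X
            }
            return y;          // path 2: path condition X < Y, result Y
        }
    }
    // Both path conditions are satisfiable, so both paths are feasible and
    // appear in the symbolic execution tree; a defect on either path is
    // found without guessing concrete failing inputs.

In an interactive debugger, this tree replaces the single concrete trace of a traditional debugger: the user steps through branch nodes and inspects the symbolic state and path condition at each one.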

Keywords

Symbolic execution · Program understanding · Deductive program verification · Debugging · Slicing

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  • Martin Hentschel (1)
  • Richard Bubel (1)
  • Reiner Hähnle (1)
  1. TU Darmstadt, Darmstadt, Germany
