
A synergistic approach to improving symbolic execution using test ranges

  • Guowei Yang
  • Rui Qiu
  • Sarfraz Khurshid
  • Corina S. Păsăreanu
  • Junye Wen
Special Issue: NFM 2018

Abstract

Symbolic execution is a systematic technique for checking programs that has received considerable research attention over the last decade. In principle, symbolic execution provides a powerful analysis for bug finding. In practice, however, the technique remains computationally expensive and hard to scale. This paper introduces SynergiSE, a novel synergistic approach to improving symbolic execution by tackling a key bottleneck to its wider adoption: costly and incomplete constraint solving. To mitigate the cost, SynergiSE introduces a succinct encoding of constraint solving results, enabling symbolic execution to be distributed among different workers that share and reuse constraint solving results without having to communicate databases of such results. To mitigate the incompleteness, SynergiSE integrates complementary testing approaches, e.g., search-based test generation, with symbolic execution, enabling the techniques to be applied in tandem, for example to create higher-quality tests. Experimental results on a suite of Java programs show that SynergiSE is a promising approach for improving symbolic execution.
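The encoding idea summarized above can be pictured with a small sketch. The Java below is a minimal illustration only, not the SynergiSE implementation: the Path, TestRange, and canSkipSolver names are hypothetical, a path is modeled simply as a list of branch decisions, a "test range" is modeled as a pair of such paths delimiting one worker's share of the execution tree, and constraint-solving results already paid for by another worker are assumed to be shipped as the concrete tests that encode them.

    // Hypothetical sketch of ranged, distributed symbolic execution with
    // reuse of constraint-solving results; names and data model are assumptions.
    import java.util.*;

    public class RangedSEDemo {

        /** A path through the symbolic execution tree, encoded as branch
         *  decisions (false = then-branch, true = else-branch), in depth-first order. */
        static final class Path implements Comparable<Path> {
            final List<Boolean> decisions;
            Path(Boolean... ds) { this.decisions = Arrays.asList(ds); }

            /** Depth-first order: compare decision by decision; a shorter prefix comes first. */
            public int compareTo(Path o) {
                int n = Math.min(decisions.size(), o.decisions.size());
                for (int i = 0; i < n; i++) {
                    int c = Boolean.compare(decisions.get(i), o.decisions.get(i));
                    if (c != 0) return c;
                }
                return Integer.compare(decisions.size(), o.decisions.size());
            }
            public String toString() { return decisions.toString(); }
        }

        /** A "test range": two concrete tests whose replayed paths delimit the
         *  portion of the execution tree assigned to one worker. */
        static final class TestRange {
            final Path begin, end;
            TestRange(Path begin, Path end) { this.begin = begin; this.end = end; }
            boolean contains(Path p) { return begin.compareTo(p) <= 0 && p.compareTo(end) < 0; }
        }

        /** Paths already proven feasible elsewhere, encoded by the concrete tests
         *  that exercise them; replaying such a test lets a worker skip the solver. */
        static boolean canSkipSolver(Set<String> feasibleEncodedAsTests, Path p) {
            return feasibleEncodedAsTests.contains(p.toString());
        }

        public static void main(String[] args) {
            // Worker 1 is responsible for the subtree between these two test paths.
            TestRange worker1 = new TestRange(new Path(false, false), new Path(true, false));

            // A result another worker already solved, shipped as a concrete test.
            Set<String> known = new HashSet<>(Arrays.asList(new Path(false, true).toString()));

            Path candidate = new Path(false, true);
            System.out.println("in worker 1's range: " + worker1.contains(candidate)); // true
            System.out.println("solver call needed:  " + !canSkipSolver(known, candidate)); // false
        }
    }

In this toy model a worker only invokes the constraint solver for paths that fall inside its range and are not already covered by a received test; the actual encoding and the integration with search-based test generation are described in the body of the paper.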

Notes

Acknowledgements

This work was funded in part by the National Science Foundation (NSF Grant Nos. CCF-1319688, CCF-1319858, CCF-1549161, CCF-1464123, and CNS-1239498).


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science, Texas State University, San Marcos, USA
  2. Electrical and Computer Engineering, The University of Texas at Austin, Austin, USA
  3. Carnegie Mellon University/NASA Ames, CA, USA
