Connecting Program Synthesis and Reachability: Automatic Program Repair Using Test-Input Generation

  • ThanhVu Nguyen
  • Westley Weimer
  • Deepak Kapur
  • Stephanie Forrest
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10205)


We prove that certain formulations of program synthesis and reachability are equivalent. Specifically, we give a constructive proof with reductions in both directions between the template-based synthesis problem, which generates a program in a pre-specified form, and the reachability problem, which decides whether a given program location is reachable. This establishes a link between the two research fields and allows for the transfer of techniques and results between them.

To demonstrate the equivalence, we develop a program repair prototype using reachability tools. We transform a buggy program and its required specification into a new program containing a location that is reachable only when the original program can be repaired, and then apply an off-the-shelf test-input generation tool to the transformed program to find input values that reach the desired location. Those input values correspond to repairs for the original program. Preliminary results suggest that our approach compares favorably to other repair methods.


Keywords: Program synthesis · Program verification · Program reachability · Reduction proof · Automated program repair · Test-input generation



This research was partially supported by NSF awards CCF 1248069, CNS 1619098, CNS 1619123, as well as AFOSR grant FA8750-11-2-0039 and DARPA grant FA8650-10-C-7089.



Copyright information

© Springer-Verlag GmbH Germany 2017

Authors and Affiliations

  • ThanhVu Nguyen (University of Nebraska, Lincoln, USA)
  • Westley Weimer (University of Virginia, Charlottesville, USA)
  • Deepak Kapur (University of New Mexico, Albuquerque, USA)
  • Stephanie Forrest (University of New Mexico, Albuquerque, USA)
