Just Test What You Cannot Verify!

  • Mike Czech
  • Marie-Christine Jakobs
  • Heike Wehrheim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9033)


Today, software verification is an established analysis method that can provide strong guarantees of software safety. However, the resources (time and/or memory) for an exhaustive verification are not always available, and the analysis then has to resort to other techniques, such as testing. Most often, the partial verification results already achieved are discarded in this case, and testing has to start from scratch.

In this paper, we propose a method for combining verification and testing in which testing only needs to check the residual fraction of an incomplete verification. To this end, the partial results of a verification run are used to construct a residual program (and residual assertions to be checked on it). The residual program can then be fed into standard testing tools. The proposed technique is sound modulo the soundness of the testing procedure. Experimental results show that this combined use of verification and testing can significantly reduce the effort of subsequent testing.


Keywords: Residual Program · Safe State · Original Program · Testing Tool · Satisfying Assignment



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Mike Czech (1)
  • Marie-Christine Jakobs (1)
  • Heike Wehrheim (1)

  1. University of Paderborn, Paderborn, Germany
