Model-Based Testing for Verification Back-Ends

  • Cyrille Artho
  • Armin Biere
  • Martina Seidl
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7942)


Many verification tools used in practice rely on sophisticated SAT and SMT solvers. These reasoning engines are assumed and expected to be correct but are, in general, too complex to be fully verified. Therefore, effective testing techniques have to be employed. In this paper, we show how to employ model-based testing (MBT) to test sequences of application programming interface (API) calls and different system configurations. We applied this approach to our SAT solver Lingeling and compared it with existing testing approaches, demonstrating the effectiveness of MBT for the development of reliable SAT solvers.
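The core idea of model-based API testing as described above can be sketched in a few lines: a model encodes which API call sequences are legal, a driver walks the model randomly, and invariants are checked along the way. The sketch below is illustrative only, assuming a hypothetical `ToySolver` stand-in (brute-force SAT, not Lingeling's real API); the states, actions, and invariant are our own simplifications, not the paper's actual model.

```python
import itertools
import random

# Hypothetical stand-in for a SAT solver, NOT Lingeling's real API.
# Brute-force search keeps the oracle trivially checkable.
class ToySolver:
    def __init__(self):
        self.clauses = []
        self.vars = set()

    def add_clause(self, lits):
        # A clause is a list of non-zero integer literals (DIMACS style).
        self.clauses.append(list(lits))
        self.vars.update(abs(l) for l in lits)

    def solve(self):
        # Enumerate all assignments; the empty formula is satisfiable.
        vs = sorted(self.vars)
        for bits in itertools.product([False, True], repeat=len(vs)):
            asg = dict(zip(vs, bits))
            if all(any(asg[abs(l)] == (l > 0) for l in c)
                   for c in self.clauses):
                return True
        return False

# A minimal MBT driver: randomly interleave "add clause" and "solve"
# transitions and check an invariant of the API contract.
def run_model(seed, steps=30):
    rng = random.Random(seed)
    solver = ToySolver()
    last_result = None
    for _ in range(steps):
        action = rng.choice(["add", "solve"])
        if action == "add":
            lits = [rng.choice([-1, 1]) * rng.randint(1, 4)
                    for _ in range(rng.randint(1, 3))]
            solver.add_clause(lits)
            last_result = None  # previous result is stale after adding
        else:
            result = solver.solve()
            # Invariant: with no clauses added in between, repeated
            # solve() calls must agree.
            if last_result is not None:
                assert result == last_result
            last_result = result
    return solver.solve()
```

A real harness in the paper's setting would drive the solver's actual API and exercise configuration options as additional model dimensions; the structure, though, stays the same: model states, guarded transitions, and checked invariants.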


Keywords: Application Programming Interface, Software Product Line, Conjunctive Normal Form, System Under Test, Propositional Formula





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Cyrille Artho (1)
  • Armin Biere (2)
  • Martina Seidl (2, 3)

  1. Research Institute for Secure Systems (RISEC), National Institute of Advanced Industrial Science and Technology (AIST), Amagasaki, Japan
  2. Institute for Formal Models and Verification, Johannes Kepler University, Linz, Austria
  3. Business Informatics Group, Vienna University of Technology, Vienna, Austria
