Empirical Software Metrics for Benchmarking of Verification Tools

  • Yulia Demyanova
  • Thomas Pani
  • Helmut Veith
  • Florian Zuleger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9206)


In this paper we study empirical metrics for software source code, which can predict the performance of verification tools on specific types of software. Our metrics comprise variable usage patterns, loop patterns, and indicators of control-flow complexity, and are extracted by simple data-flow analyses. We demonstrate that our metrics are powerful enough to devise a machine-learning based portfolio solver for software verification. We show that this portfolio solver would be the (hypothetical) overall winner of both the 2014 and 2015 International Competition on Software Verification (SV-COMP). This gives strong empirical evidence for the predictive power of our metrics and demonstrates the viability of portfolio solvers for software verification.
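The idea in the abstract can be illustrated with a minimal sketch: extract a feature vector of source-code metrics from a program, then select a verifier based on which training program it most resembles. Everything below is hypothetical simplification — the paper's metrics come from data-flow analyses (not token counting), its selector is an SVM (not nearest neighbour), and the tool names and training vectors are invented for illustration.

```python
import re

def extract_features(c_source: str) -> list:
    """Token-level proxies for source-code metrics: loop constructs,
    branching constructs, and pointer dereferences. A real implementation
    would run data-flow analyses over an AST/CFG instead of regexes."""
    loops = len(re.findall(r"\b(?:for|while|do)\b", c_source))
    branches = len(re.findall(r"\b(?:if|switch)\b", c_source))
    derefs = c_source.count("->") + c_source.count("*")
    return [float(loops), float(branches), float(derefs)]

def select_tool(features, training_set):
    """Pick the tool that performed best on the most similar training
    program (1-nearest neighbour by squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, tool = min(training_set, key=lambda entry: sq_dist(entry[0], features))
    return tool

# Invented training data: (feature vector, historically best tool).
training = [
    ([6.0, 3.0, 0.0], "tool A (strong on loops)"),
    ([1.0, 0.0, 3.0], "tool B (strong on heap programs)"),
]

program = "while (p) { p = p->next; }"
print(select_tool(extract_features(program), training))
```

The list-traversal snippet yields the feature vector [1.0, 0.0, 1.0], which is closest to the heap-heavy training program, so the selector routes it to "tool B". The paper's actual pipeline replaces both halves with more robust machinery, but the division of labour — metrics as features, tool choice as classification — is the same.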



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Yulia Demyanova (1)
  • Thomas Pani (1)
  • Helmut Veith (1)
  • Florian Zuleger (1)

  1. Vienna University of Technology, Vienna, Austria
