SpySMAC: Automated Configuration and Performance Analysis of SAT Solvers

  • Stefan Falkner
  • Marius Lindauer
  • Frank Hutter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9340)

Abstract

Most modern SAT solvers expose a range of parameters to allow some customization for improving performance on specific types of instances. Performing this customization manually can be challenging and time-consuming, and as a consequence several automated algorithm configuration methods have been developed for this purpose. Although automatic algorithm configuration has already been applied successfully to many different SAT solvers, a comprehensive analysis of the configuration process is usually not readily available to users. Here, we present SpySMAC to address this gap by providing a lightweight and easy-to-use toolbox for (i) automatic configuration of SAT solvers in different settings, (ii) a thorough performance analysis comparing the best found configuration to the default one, and (iii) an assessment of each parameter’s importance using the fANOVA framework. To showcase our tool, we apply it to Lingeling and probSAT, two state-of-the-art solvers with very different characteristics.
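The parameter-importance analysis mentioned in point (iii) rests on fitting a random-forest surrogate model that maps solver configurations to observed runtimes, then asking how much of the runtime variance each parameter explains. The sketch below illustrates that idea only in rough outline: it is not the actual fANOVA framework (which decomposes the forest's predictions analytically into per-parameter variance contributions) nor SpySMAC's implementation, but a stand-in using scikit-learn's impurity-based feature importances on synthetic data. All parameter names and the runtime model are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "configuration runs": three numeric parameters, of which only
# the first two actually influence the simulated solver runtime.
X = rng.uniform(0, 1, size=(500, 3))
runtime = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 0.1, size=500)

# Random-forest surrogate: configurations -> runtime.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, runtime)

# Impurity-based importances as a crude proxy for fANOVA's variance
# decomposition; they sum to 1 across parameters.
for name, imp in zip(["param_1", "param_2", "param_3"], forest.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

On this synthetic data the inactive third parameter receives near-zero importance, mirroring how fANOVA lets a user focus tuning effort on the few parameters that matter.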

Keywords

Local Search, Random Forest, Machine Learning Model, Bayesian Optimization, Parameter Importance
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Ansótegui, C., Sellmann, M., Tierney, K.: A gender-based genetic algorithm for the automatic configuration of algorithms. In: Gent, I.P. (ed.) CP 2009. LNCS, vol. 5732, pp. 142–157. Springer, Heidelberg (2009)
  2. Balint, A., Schöning, U.: Choosing probability distributions for stochastic local search and the role of make versus break. In: Cimatti and Sebastiani [7], pp. 16–19
  3. Biere, A.: Yet another local search solver and Lingeling and friends entering the SAT competition 2014. In: Belov, A., Diepold, D., Heule, M., Järvisalo, M. (eds.) Proceedings of SAT Competition 2014: Solver and Benchmark Descriptions. Department of Computer Science Series of Publications B, vol. B-2014-2, pp. 39–40. University of Helsinki (2014)
  4. Breiman, L.: Random forests. Machine Learning Journal 45, 5–32 (2001)
  5. Brochu, E., Cora, V., de Freitas, N.: A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Computing Research Repository (CoRR) abs/1012.2599 (2010)
  6. Brummayer, R., Lonsing, F., Biere, A.: Automated testing and debugging of SAT and QBF solvers. In: Cimatti and Sebastiani [7], pp. 44–57
  7. Cimatti, A., Sebastiani, R. (eds.): SAT 2012. LNCS, vol. 7317. Springer, Heidelberg (2012)
  8. Fawcett, C., Hoos, H.H.: Analysing differences between algorithm configurations through ablation. Journal of Heuristics, 1–28 (2015)
  9. Gebser, M., Kaufmann, B., Schaub, T.: Conflict-driven answer set solving: From theory to practice. Artificial Intelligence 187–188, 52–89 (2012)
  10. Hutter, F., Babić, D., Hoos, H.H., Hu, A.: Boosting verification by automatic tuning of decision procedures. In: O'Conner, L. (ed.) Formal Methods in Computer Aided Design (FMCAD 2007), pp. 27–34. IEEE Computer Society Press (2007)
  11. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: Coello, C.A.C. (ed.) LION 2011. LNCS, vol. 6683, pp. 507–523. Springer, Heidelberg (2011)
  12. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Identifying key algorithm parameters and instance features using forward selection. In: Nicosia, G., Pardalos, P. (eds.) LION 7. LNCS, vol. 7997, pp. 364–381. Springer, Heidelberg (2013)
  13. Hutter, F., Hoos, H.H., Leyton-Brown, K.: An efficient approach for assessing hyperparameter importance. In: Xing, E., Jebara, T. (eds.) Proceedings of the 31st International Conference on Machine Learning (ICML 2014), vol. 32, pp. 754–762. Omnipress (2014)
  14. Hutter, F., Hoos, H.H., Leyton-Brown, K., Stützle, T.: ParamILS: An automatic algorithm configuration framework. Journal of Artificial Intelligence Research 36, 267–306 (2009)
  15. Hutter, F., Lindauer, M., Balint, A., Bayless, S., Hoos, H.H., Leyton-Brown, K.: The Configurable SAT Solver Challenge. Computing Research Repository (CoRR) (2015). http://arxiv.org/abs/1505.01221
  16. KhudaBukhsh, A., Xu, L., Hoos, H.H., Leyton-Brown, K.: SATenstein: automatically building local search SAT solvers from components. In: Boutilier, C. (ed.) Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI 2009), pp. 517–524 (2009)
  17. López-Ibáñez, M., Dubois-Lacoste, J., Stützle, T., Birattari, M.: The irace package, iterated race for automatic algorithm configuration. Tech. rep., IRIDIA, Université Libre de Bruxelles, Belgium (2011). http://iridia.ulb.ac.be/IridiaTrSeries/IridiaTr2011-004.pdf
  18. Sakallah, K.A., Simon, L. (eds.): SAT 2011. LNCS, vol. 6695. Springer, Heidelberg (2011)
  19. Tompkins, D.A.D., Balint, A., Hoos, H.H.: Captain Jack: new variable selection heuristics in local search for SAT. In: Sakallah and Simon [18], pp. 302–316

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. University of Freiburg, Freiburg im Breisgau, Germany