
On the Configuration of SAT Formulae

  • Mauro Vallati
  • Marco Maratea
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11946)

Abstract

It is well known that the order in which clauses and literals are listed in a SAT formula can have a strong impact on solvers’ performance.

In this work we investigate how the performance of SAT solvers can be improved by a specifically designed configuration of SAT formulae. We introduce a fully automated approach to this configuration task that considers a number of criteria for optimising the order in which clauses, and literals within clauses, are listed in a formula expressed in Conjunctive Normal Form (CNF).

Our experimental analysis, involving three state-of-the-art SAT solvers and six different benchmark sets, shows that the configurations identified by the proposed approach can have a significant positive impact on solvers’ performance.
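To make the configuration task concrete, the sketch below reorders a CNF formula in DIMACS-style literal notation: clauses are sorted shortest first, and literals within each clause are sorted by variable index. These two ordering criteria are illustrative assumptions chosen for the example, not the specific criteria used by the paper's approach.

```python
# Hypothetical sketch of one formula-configuration step: reorder the clauses
# of a CNF formula and the literals within each clause. The chosen criteria
# (shortest clause first, literals by variable index) are assumptions for
# illustration only.

def configure_cnf(clauses):
    """clauses: list of clauses, each a list of non-zero DIMACS literals
    (positive int = variable, negative int = negated variable)."""
    # Within each clause, order literals by the variable they mention.
    reordered = [sorted(clause, key=abs) for clause in clauses]
    # Order clauses by length, so short (more constraining) clauses come first.
    reordered.sort(key=len)
    return reordered

# Example: three clauses over variables 1..3,
# (x3 or not x1 or x2) and (not x2 or x1) and (x1)
formula = [[3, -1, 2], [-2, 1], [1]]
print(configure_cnf(formula))  # [[1], [1, -2], [-1, 2, 3]]
```

Because the reordering only permutes clauses and literals, the resulting formula is logically equivalent to the input; only the solver-facing presentation changes.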

Keywords

SATisfiability · Knowledge configuration · Performance improvement

References

  1. Audemard, G., Lagniez, J.-M., Simon, L.: Improving Glucose for incremental SAT solving with assumptions: application to MUS extraction. In: Järvisalo, M., Van Gelder, A. (eds.) SAT 2013. LNCS, vol. 7962, pp. 309–317. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39071-5_23
  2. Biere, A.: CaDiCaL, Lingeling, Plingeling, Treengeling and YalSAT entering the SAT Competition 2017. In: SAT Competition 2017, Solver and Benchmark Descriptions (2017)
  3. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
  4. Brooks, F.P.: No silver bullet: essence and accidents of software engineering. IEEE Computer 20, 10–19 (1987)
  5. Cadar, C., Dunbar, D., Engler, D.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation (OSDI), pp. 209–224 (2008)
  6. Cerutti, F., Vallati, M., Giacomin, M.: On the impact of configuration on abstract argumentation automated reasoning. Int. J. Approx. Reason. 92, 120–138 (2018)
  7. Crawford, J., Baker, A.: Experimental results on the application of satisfiability algorithms to scheduling problems. In: Proceedings of the International Conference of the Association for the Advancement of Artificial Intelligence (AAAI), pp. 1092–1097 (1994)
  8. Falkner, S., Lindauer, M., Hutter, F.: SpySMAC: automated configuration and performance analysis of SAT solvers. In: Heule, M., Weaver, S. (eds.) SAT 2015. LNCS, vol. 9340, pp. 215–222. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24318-4_16
  9. Fitzgerald, T., Malitsky, Y., O’Sullivan, B.: ReACTR: realtime algorithm configuration through tournament rankings. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI), pp. 304–310 (2015)
  10. Giunchiglia, E., Maratea, M., Tacchella, A.: Dependent and independent variables in propositional satisfiability. In: Flesca, S., Greco, S., Ianni, G., Leone, N. (eds.) JELIA 2002. LNCS (LNAI), vol. 2424, pp. 296–307. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45757-7_25
  11. Giunchiglia, E., Maratea, M., Tacchella, A.: (In)Effectiveness of look-ahead techniques in a modern SAT solver. In: Rossi, F. (ed.) CP 2003. LNCS, vol. 2833, pp. 842–846. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45193-8_64
  12. Gomes, C.P., Selman, B., Crato, N., Kautz, H.: Heavy-tailed phenomena in satisfiability and constraint satisfaction problems. J. Autom. Reason. 24(1), 67–100 (2000)
  13. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: Coello, C.A.C. (ed.) LION 2011. LNCS, vol. 6683, pp. 507–523. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25566-3_40
  14. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Tradeoffs in the empirical evaluation of competing algorithm designs. Ann. Math. Artif. Intell. 60(1), 65–89 (2010)
  15. Hutter, F., Hoos, H.H., Leyton-Brown, K.: An efficient approach for assessing hyperparameter importance. In: Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 754–762 (2014)
  16. Hutter, F., Hoos, H.H., Leyton-Brown, K., Stützle, T.: ParamILS: an automatic algorithm configuration framework. J. Artif. Intell. Res. 36, 267–306 (2009)
  17. Hutter, F., Lindauer, M., Balint, A., Bayless, S., Hoos, H.H., Leyton-Brown, K.: The configurable SAT solver challenge (CSSC). Artif. Intell. 243, 1–25 (2017)
  18. Hutter, F., Xu, L., Hoos, H.H., Leyton-Brown, K.: Algorithm runtime prediction: methods & evaluation. Artif. Intell. 206, 79–111 (2014)
  19. Kadioglu, S., Malitsky, Y., Sellmann, M., Tierney, K.: ISAC – instance-specific algorithm configuration. In: Proceedings of the 19th European Conference on Artificial Intelligence (ECAI), pp. 751–756 (2010)
  20. KhudaBukhsh, A.R., Xu, L., Hoos, H.H., Leyton-Brown, K.: SATenstein: automatically building local search SAT solvers from components. Artif. Intell. 232, 20–42 (2016)
  21. Lindauer, M., Hoos, H., Hutter, F., Leyton-Brown, K.: Selection and configuration of parallel portfolios. In: Hamadi, Y., Sais, L. (eds.) Handbook of Parallel Constraint Reasoning, pp. 583–615. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-63516-3_15
  22. Prasad, M.R., Biere, A., Gupta, A.: A survey of recent advances in SAT-based formal verification. Int. J. Softw. Tools Technol. Transf. 7(2), 156–173 (2005)
  23. Rintanen, J.: Engineering efficient planners with SAT. In: Proceedings of the European Conference on Artificial Intelligence (ECAI), pp. 684–689 (2012)
  24. Tompkins, D.A.D., Balint, A., Hoos, H.H.: Captain Jack: new variable selection heuristics in local search for SAT. In: Sakallah, K.A., Simon, L. (eds.) SAT 2011. LNCS, vol. 6695, pp. 302–316. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21581-0_24
  25. Vallati, M., Hutter, F., Chrpa, L., McCluskey, T.: On the effective configuration of planning domain models. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 1704–1711 (2015)
  26. Vallati, M., Serina, I.: A general approach for configuring PDDL problem models. In: Proceedings of the International Conference on Automated Planning & Scheduling (ICAPS) (2018)
  27. Williams, R., Gomes, C.P., Selman, B.: Backdoors to typical case complexity. In: Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, 9–15 August 2003, pp. 1173–1178. Morgan Kaufmann (2003)
  28. Xu, L., Hoos, H.H., Leyton-Brown, K.: Hydra: automatically configuring algorithms for portfolio-based selection. In: Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI) (2010)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computing and Engineering, University of Huddersfield, Huddersfield, UK
  2. DIBRIS, University of Genova, Genova, Italy
