Learning from Learning Solvers
Modern constraint programming solvers incorporate SAT-style clause learning, where sets of domain restrictions that lead to failure are recorded as new clausal propagators. While this can yield dramatic reductions in search, there are also cases where clause learning does not improve performance, or even hinders it. Unfortunately, the reasons for these differences in behaviour are not well understood in practice. We aim to shed some light on the practical behaviour of learning solvers by profiling their execution. In particular, we instrument the learning solver Chuffed to produce a detailed record of its execution, and extend a graphical profiling tool to display this information appropriately. Further, the profiler enables users to measure the impact of the learnt clauses by comparing Chuffed's execution with that of a non-learning solver and examining the points at which their behaviours diverge. We show that analysing a solver's execution in this way is useful not only for better understanding its behaviour (opening what is typically a black box), but also for inferring modifications to the original constraint model that can improve the performance of both learning and non-learning solvers.
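The clause-learning idea summarised above, and the learning versus non-learning comparison, can be sketched in miniature. The following is an illustrative toy, not Chuffed's actual implementation: the `solve` and `check` functions, the variable names, and the constraints are all invented for this example. On failure, an explanation (a "nogood", the analogue of a learnt clause) is recorded and used to prune later branches, so the learning run visits fewer search nodes than the non-learning run on the same problem.

```python
def solve(variables, domains, check, learn, assignment, nogoods, stats):
    """Depth-first search over `variables`. `check` inspects a complete
    assignment and returns None on success, or a frozenset of
    (var, value) pairs explaining the failure. With learn=True, these
    explanations are stored as nogoods and used to prune any later
    branch that contains one of them."""
    stats["nodes"] += 1
    if learn and any(ng.issubset(assignment.items()) for ng in nogoods):
        return None  # a learnt nogood already rules out this branch
    if len(assignment) == len(variables):
        conflict = check(assignment)
        if conflict is None:
            return dict(assignment)
        if learn:
            nogoods.append(conflict)  # record the failure explanation
        return None
    var = variables[len(assignment)]
    for value in domains[var]:
        assignment[var] = value
        result = solve(variables, domains, check, learn,
                       assignment, nogoods, stats)
        del assignment[var]
        if result is not None:
            return result
    return None

def check(a):
    """Toy constraints: x != y, and z must equal 2."""
    if a["x"] == a["y"]:
        return frozenset([("x", a["x"]), ("y", a["y"])])
    if a["z"] != 2:
        return frozenset([("z", a["z"])])
    return None

variables = ["x", "y", "z", "w"]
domains = {v: [0, 1, 2] for v in variables}

for learn in (False, True):
    stats = {"nodes": 0}
    sol = solve(variables, domains, check, learn, {}, [], stats)
    print(f"learn={learn}: solution={sol}, nodes={stats['nodes']}")
# Both runs find x=0, y=1, z=2, w=0; learning visits 20 nodes vs 26.
```

Comparing the two node counts here is exactly the kind of divergence the profiler described above makes visible at scale: the learnt nogood for x=0, y=0 lets the learning run skip subtrees that the non-learning run explores in full.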
We thank the anonymous reviewers who pointed to overlooked related work and provided useful comments. This research was partly sponsored by the Australian Research Council grant DP140100058.