Towards Effective Deep Learning for Constraint Satisfaction Problems

  • Hong Xu
  • Sven Koenig
  • T. K. Satish Kumar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11008)

Abstract

Many attempts have been made to apply machine learning techniques to constraint satisfaction problems (CSPs). However, none of them have made use of the recent advances in deep learning. In this paper, we apply deep learning to predict the satisfiabilities of CSPs. To the best of our knowledge, this is the first effective application of deep learning to CSPs that yields >99.99% prediction accuracy on random Boolean binary CSPs whose constraint tightnesses or constraint densities do not determine their satisfiabilities. We use a deep convolutional neural network on a matrix representation of CSPs. Since it is NP-hard to solve CSPs, labeled data required for training are in general costly to produce and are thus scarce. We address this issue using the asymptotic behavior of generalized Model A, a new random CSP generation model, along with domain adaptation and data augmentation techniques for CSPs. We demonstrate the effectiveness of our deep learning techniques using experiments on random Boolean binary CSPs. While these CSPs are known to be in P, we use them for a proof of concept.
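The abstract does not detail the exact matrix encoding or generation model the authors use, but the overall pipeline it describes — generate random Boolean binary CSPs, encode each as a 0/1 matrix, and label it by satisfiability for training — can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the generator below is a generic density/tightness model (not the paper's generalized Model A), and the 2n × 2n compatibility-matrix encoding is one plausible choice of matrix representation. The function names are hypothetical.

```python
from itertools import product
import random


def random_boolean_binary_csp(n, density, tightness, rng):
    """Generate a random Boolean binary CSP on n variables.

    Returns constraints as {(i, j): set of allowed (a, b) value pairs}.
    Each variable pair is constrained with probability `density`; each
    constraint forbids roughly a `tightness` fraction of the 4 value pairs.
    """
    constraints = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < density:
                pairs = list(product((0, 1), repeat=2))
                rng.shuffle(pairs)
                keep = max(1, round(len(pairs) * (1 - tightness)))
                constraints[(i, j)] = set(pairs[:keep])
    return constraints


def to_matrix(n, constraints):
    """Encode the CSP as a 2n x 2n 0/1 compatibility matrix.

    Entry (2i + a, 2j + b) is 1 iff the joint assignment
    x_i = a, x_j = b is allowed; unconstrained pairs stay all-ones.
    Such a matrix could serve as CNN input.
    """
    m = [[1] * (2 * n) for _ in range(2 * n)]
    for (i, j), allowed in constraints.items():
        for a in (0, 1):
            for b in (0, 1):
                v = 1 if (a, b) in allowed else 0
                m[2 * i + a][2 * j + b] = v
                m[2 * j + b][2 * i + a] = v
    return m


def is_satisfiable(n, constraints):
    """Brute-force satisfiability label (fine only for tiny n)."""
    for assignment in product((0, 1), repeat=n):
        if all((assignment[i], assignment[j]) in allowed
               for (i, j), allowed in constraints.items()):
            return True
    return False


# Produce one labeled training instance.
rng = random.Random(0)
csp = random_boolean_binary_csp(4, density=0.5, tightness=0.5, rng=rng)
matrix = to_matrix(4, csp)
label = is_satisfiable(4, csp)
```

Since Boolean binary CSPs are in P, labels for such proof-of-concept instances are cheap; for NP-hard CSP classes, the labeling step above is exactly the costly part that motivates the paper's use of asymptotic behavior, domain adaptation, and data augmentation.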


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. University of Southern California, Los Angeles, USA
