
Model Agnostic Solution of CSPs via Deep Learning: A Preliminary Study

  • Andrea Galassi
  • Michele Lombardi
  • Paola Mello
  • Michela Milano
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10848)

Abstract

Deep Neural Networks (DNNs) have been shaking the AI scene thanks to their ability to excel at Machine Learning tasks without relying on complex, hand-crafted features. Here, we probe whether a DNN can learn how to construct solutions of a Constraint Satisfaction Problem (CSP) without any explicit symbolic information about the problem constraints. We train a DNN to extend a feasible partial solution by making a single, globally consistent variable assignment. The training is done over intermediate steps of the construction of feasible solutions. From a scientific standpoint, we are interested in whether a DNN can learn the structure of a combinatorial problem, even when trained on (arbitrarily chosen) construction sequences of feasible solutions. In practice, the network could also be used to guide a search process, e.g. to take into account (soft) constraints that are implicit in past solutions or hard to capture in a traditional declarative model. This research line is still at an early stage, and a number of complex issues remain open. Nevertheless, we already have intriguing results on the classical Partial Latin Square and N-Queens completion problems.
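To make the setting concrete, below is a minimal sketch (in PyTorch) of the kind of model the abstract describes: a network that reads a one-hot encoding of a partial Latin square and scores every possible (value, row, column) assignment, so that a greedy pick extends the partial solution by one variable. All names, layer choices, and hyperparameters here are illustrative assumptions; the paper's actual architecture and training setup may differ.

import torch
import torch.nn as nn

class AssignmentNet(nn.Module):
    """Illustrative model: maps a one-hot encoded partial Latin square
    of shape (n, n, n) -- channel v is 1 where value v is assigned --
    to one logit per candidate assignment (value, row, col)."""
    def __init__(self, n: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, n, kernel_size=3, padding=1),  # one logit per value per cell
        )

    def forward(self, x):
        # x: (batch, n, n, n); returns logits of the same shape
        return self.net(x)

def next_assignment(model, square):
    """Greedy decoding: pick the highest-scoring (value, row, col) among
    the currently empty cells, i.e. a single-variable extension of the
    partial solution."""
    n = square.shape[0]
    with torch.no_grad():
        logits = model(square.unsqueeze(0)).squeeze(0)  # (n, n, n) scores
    filled = square.sum(dim=0) > 0                      # cells already assigned
    logits[:, filled] = float("-inf")                   # only extend empty cells
    idx = logits.flatten().argmax().item()
    value, row, col = idx // (n * n), (idx // n) % n, idx % n
    return value, row, col

# Usage sketch on an empty 10x10 Partial Latin Square:
n = 10
model = AssignmentNet(n)
square = torch.zeros(n, n, n)
print(next_assignment(model, square))

Training would follow the scheme the abstract outlines: harvest (partial solution, next assignment) pairs from intermediate steps of construction sequences of feasible solutions, then treat the flattened logits as a classification over the n*n*n candidate assignments. The cross-entropy loss and the convolutional layer stack are assumptions made for this sketch.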


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering (DISI), University of Bologna, Bologna, Italy
