Leveraging the Learning Power of Examples in Automated Constraint Acquisition

  • Christian Bessiere
  • Remi Coletta
  • Eugene C. Freuder
  • Barry O’Sullivan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3258)

Abstract

Constraint programming is rapidly becoming the technology of choice for modeling and solving complex combinatorial problems. However, users of constraint programming technology need significant expertise in order to model their problem appropriately. The lack of availability of such expertise can be a significant bottleneck to the broader uptake of constraint technology in the real world. In this paper we are concerned with automating the formulation of constraint satisfaction problems from examples of solutions and non-solutions. We combine techniques from the fields of machine learning and constraint programming. In particular we present a portfolio of approaches to exploiting the semantics of the constraints that we acquire to improve the efficiency of the acquisition process. We demonstrate how inference and search can be used to extract useful information that would otherwise be hidden in the set of examples from which we learn the target constraint satisfaction problem. We demonstrate the utility of the approaches in a case-study domain.
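The abstract describes learning a constraint network from solutions and non-solutions in the version-space style. As a rough illustration only (not the authors' algorithm), the following sketch maintains, for each pair of variables, the set of candidate binary relations from a fixed bias that remain consistent with the examples seen so far; the bias of six comparison relations and all helper names are assumptions for the example.

```python
from itertools import combinations

# Hypothetical bias of candidate binary relations (assumed for this sketch;
# the paper's actual constraint language may differ).
BIAS = {
    "eq":  lambda a, b: a == b,
    "neq": lambda a, b: a != b,
    "lt":  lambda a, b: a < b,
    "gt":  lambda a, b: a > b,
    "leq": lambda a, b: a <= b,
    "geq": lambda a, b: a >= b,
}

def init_version_space(n_vars):
    """Start with every candidate relation on every variable pair."""
    return {scope: set(BIAS) for scope in combinations(range(n_vars), 2)}

def learn_positive(vs, example):
    """A solution must satisfy the target network, so discard any
    candidate constraint that the example violates."""
    for (i, j), cands in vs.items():
        vs[(i, j)] = {c for c in cands if BIAS[c](example[i], example[j])}

def explains_negative(vs, example):
    """A non-solution must violate at least one remaining candidate;
    if it satisfies them all, the version space cannot reject it."""
    return any(not BIAS[c](example[i], example[j])
               for (i, j), cands in vs.items()
               for c in cands)

vs = init_version_space(3)
learn_positive(vs, (1, 2, 3))   # a solution of the hidden network
learn_positive(vs, (2, 3, 5))   # another solution
print(vs[(0, 1)])               # candidates surviving both solutions
print(explains_negative(vs, (3, 3, 3)))
```

Positive examples only shrink each candidate set, which is what makes exploiting constraint semantics (as the paper proposes) valuable: inference can eliminate candidates that no finite set of examples would rule out directly.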

Keywords

Version Space · Constraint Programming · Constraint Satisfaction Problem · Acquisition Process · Horn Clause



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Christian Bessiere¹
  • Remi Coletta¹
  • Eugene C. Freuder²
  • Barry O’Sullivan²
  1. LIRMM-CNRS (UMR 5506), Montpellier Cedex 5, France
  2. Cork Constraint Computation Centre, Department of Computer Science, University College Cork, Ireland
