Probabilistic Rule Learning

  • Luc De Raedt
  • Ingo Thon
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6489)


Traditionally, rule learners have learned deterministic rules from deterministic data: the rules are expressed as logical statements, and the examples and their classifications are purely logical as well. We upgrade rule learning to a probabilistic setting, in which both the examples themselves and their classifications can be probabilistic. The setting is incorporated in the probabilistic rule learner ProbFOIL, which combines the principles of the relational rule learner FOIL with those of the probabilistic Prolog ProbLog. We also report on experiments that demonstrate the utility of the approach.
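The probabilistic setting described above can be illustrated with a small sketch. In a deterministic learner each example is simply positive or negative; here each example instead carries a target probability, and a rule (hypothesis) predicts a probability for it. A natural way to score such a rule is with probabilistic analogues of the usual true/false positive counts. The sketch below is illustrative only: the function names are ours, and the particular definitions of the probabilistic counts are one common generalisation, not necessarily the exact scoring used by ProbFOIL.

```python
# Illustrative sketch (not the paper's implementation): scoring a rule
# against probabilistic examples. Each example has a target probability p
# (how positive it is) and the rule predicts a probability h for it.

def probabilistic_counts(targets, predictions):
    """Probabilistic (TP, FP, TN, FN) over paired example probabilities.

    An example with target p and prediction h contributes min(p, h) to the
    true positives, any surplus prediction max(h - p, 0) to the false
    positives, and symmetrically for the negative side.
    """
    tp = sum(min(p, h) for p, h in zip(targets, predictions))
    fp = sum(max(h - p, 0.0) for p, h in zip(targets, predictions))
    fn = sum(max(p - h, 0.0) for p, h in zip(targets, predictions))
    tn = sum(min(1.0 - p, 1.0 - h) for p, h in zip(targets, predictions))
    return tp, fp, tn, fn

def precision(targets, predictions):
    """Probabilistic precision of a rule: TP / (TP + FP)."""
    tp, fp, _, _ = probabilistic_counts(targets, predictions)
    return tp / (tp + fp) if tp + fp > 0 else 0.0
```

With deterministic (0/1) targets and predictions these counts reduce to the ordinary contingency table, which is what makes the setting a genuine upgrade rather than a replacement of classical rule learning.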




Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Luc De Raedt¹
  • Ingo Thon¹

  1. Department of Computer Science, Katholieke Universiteit Leuven, Belgium
