PFORTE: Revising Probabilistic FOL Theories

  • Aline Paes
  • Kate Revoredo
  • Gerson Zaverucha
  • Vitor Santos Costa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4140)


There has been significant recent progress in integrating probabilistic reasoning with first-order logic representations (statistical relational learning, SRL). So far, the learning algorithms developed for these models all learn from scratch, assuming invariant background knowledge. As an alternative, theory revision techniques have been shown to perform well on a variety of machine learning problems. These techniques start from an approximate initial theory and apply modifications at the places that performed badly in classification. In this work we describe the first revision system for SRL classification, PFORTE, which addresses two problems: all examples must be classified, and they must be classified well. PFORTE uses a two-step approach: a completeness component uses generalization operators to address failed proofs, and a classification component addresses classification problems using both generalization and specialization operators. Experimental results show significant benefits from using theory revision techniques compared to learning from scratch.
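As a rough illustration of the two-step scheme described above, the following sketch shows a hill-climbing revision loop run first with generalization operators only (completeness) and then with both generalization and specialization operators (classification). All names, the theory representation, and the scoring function are hypothetical placeholders for exposition, not PFORTE's actual implementation.

```python
# Illustrative sketch of a two-step theory-revision loop in the spirit of
# PFORTE. The theory, operators, and score are hypothetical stand-ins.

def revise_theory(theory, examples, operators, score):
    """Greedily apply the operator that most improves the score,
    stopping when no operator yields an improvement."""
    best = score(theory, examples)
    improved = True
    while improved:
        improved = False
        for op in operators:
            candidate = op(theory)
            s = score(candidate, examples)
            if s > best:
                theory, best, improved = candidate, s, True
                break
    return theory

def two_step_revision(theory, examples, gen_ops, spec_ops, score):
    # Step 1 (completeness): generalize so that failed proofs are
    # repaired and every example can be classified.
    theory = revise_theory(theory, examples, gen_ops, score)
    # Step 2 (classification): use both generalization and
    # specialization operators to improve how well the now-provable
    # examples are classified.
    return revise_theory(theory, examples, gen_ops + spec_ops, score)
```

In a toy setting where the "theory" is a single integer and the score rewards closeness to a target, the loop climbs monotonically and halts at the first local optimum, mirroring the greedy revision-point search.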




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Aline Paes (1)
  • Kate Revoredo (1)
  • Gerson Zaverucha (1)
  • Vitor Santos Costa (1)

  1. Department of Systems Engineering and Computer Science – COPPE, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
