TildeCRF: Conditional Random Fields for Logical Sequences

  • Bernd Gutmann
  • Kristian Kersting
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)


Conditional Random Fields (CRFs) provide a powerful tool for labeling sequences. So far, however, CRFs have only been considered for labeling sequences over flat alphabets. In this paper, we describe TildeCRF, the first method for training CRFs on logical sequences, i.e., sequences over an alphabet of logical atoms. TildeCRF's key idea is to use relational regression trees in Dietterich et al.'s gradient tree boosting approach, so that the CRF potential functions are represented as weighted sums of relational regression trees. Experiments show a significant improvement over established results achieved with hidden Markov models and Fisher kernels for logical sequences.
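The functional-gradient training the abstract refers to can be illustrated with a minimal sketch. The paper fits *relational* regression trees over logical atoms and exploits the sequence (Markov) structure of the CRF; the sketch below substitutes one-level regression stumps over flat feature vectors and drops the sequence structure, keeping only the core idea: each class potential F_k is a weighted sum of regression trees, each fitted to the functional gradient I(y = k) − p_k(x) of the log-likelihood. All function names here are illustrative, not from the paper's implementation.

```python
import math

def fit_stump(X, residuals):
    """Fit a one-level regression tree (threshold test on one feature)."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(x[j] for x in X)):
            left = [r for x, r in zip(X, residuals) if x[j] <= t]
            right = [r for x, r in zip(X, residuals) if x[j] > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lv) ** 2 for r in left)
                   + sum((r - rv) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    _, j, t, lv, rv = best
    return lambda x: lv if x[j] <= t else rv

def train_potentials(X, y, n_classes, rounds=20):
    """Grow each potential F_k as a sum of regression trees, each tree
    fitted to the residual I(y=k) - p_k(x), i.e. the functional gradient
    of the conditional log-likelihood."""
    trees = [[] for _ in range(n_classes)]

    def F(x, k):
        return sum(tree(x) for tree in trees[k])

    for _ in range(rounds):
        for k in range(n_classes):
            residuals = []
            for x, label in zip(X, y):
                scores = [F(x, c) for c in range(n_classes)]
                z = sum(math.exp(s) for s in scores)
                p_k = math.exp(scores[k]) / z
                residuals.append((1.0 if label == k else 0.0) - p_k)
            trees[k].append(fit_stump(X, residuals))

    return lambda x: max(range(n_classes), key=lambda k: F(x, k))

# Tiny sanity check on a linearly separable toy set.
X = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2]]
y = [0, 0, 1, 1]
predict = train_potentials(X, y, n_classes=2)
```

In the full method, `fit_stump` is replaced by a learner for relational regression trees (in the style of TILDE), whose node tests are logical queries over the atoms in a window of the sequence, and the residuals are computed from forward-backward marginals rather than per-example class probabilities.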


Hidden Markov Model · Regression Tree · Conditional Random Field · Logical Sequence · Ground Atom


  1. Anderson, C.R., Domingos, P., Weld, D.S.: Relational Markov Models and their Application to Adaptive Web Navigation. In: Proc. of the 8th Int. Conf. on Knowledge Discovery and Data Mining (KDD 2002), pp. 143–152 (2002)
  2. Blockeel, H., De Raedt, L.: Top-down Induction of First-order Logical Decision Trees. Artificial Intelligence 101(1–2), 285–297 (1998)
  3. De Raedt, L., Kersting, K.: Probabilistic Inductive Logic Programming. In: Ben-David, S., Case, J., Maruoka, A. (eds.) ALT 2004. LNCS, vol. 3244, pp. 19–36. Springer, Heidelberg (2004)
  4. Dietterich, T., Ashenfelter, A., Bulatov, Y.: Training Conditional Random Fields via Gradient Tree Boosting. In: Proc. of the 21st Int. Conf. on Machine Learning (ICML 2004), pp. 217–224. ACM Press, New York (2004)
  5. Fürnkranz, J.: Round Robin Classification. Journal of Machine Learning Research (JMLR) 2, 721–747 (2002)
  6. Kersting, K., De Raedt, L., Raiko, T.: Logical Hidden Markov Models. Journal of Artificial Intelligence Research (JAIR) 25, 425–456 (2006)
  7. Kersting, K., Gärtner, T.: Fisher Kernels for Logical Sequences. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) ECML 2004. LNCS, vol. 3201, pp. 205–216. Springer, Heidelberg (2004)
  8. Kersting, K., Raiko, T.: "Say EM" for Selecting Probabilistic Models for Logical Sequences. In: Proc. of the 21st Conf. on Uncertainty in Artificial Intelligence (UAI 2005), pp. 300–307 (2005)
  9. Lafferty, J., McCallum, A., Pereira, F.: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In: Proc. of the 18th Int. Conf. on Machine Learning (ICML 2001), pp. 282–289 (2001)
  10. Lloyd, J.W.: Foundations of Logic Programming, 2nd edn. Springer, Berlin (1989)
  11. McCallum, A.: Efficiently Inducing Features of Conditional Random Fields. In: Proc. of the 19th Conf. on Uncertainty in Artificial Intelligence (UAI 2003) (2003)
  12. Quattoni, A., Collins, M., Darrell, T.: Conditional Random Fields for Object Recognition. In: Advances in Neural Information Processing Systems 17, pp. 1097–1104 (2005)
  13. Qian, N., Sejnowski, T.J.: Predicting the Secondary Structure of Globular Proteins Using Neural Network Models. Journal of Molecular Biology 202, 865–884 (1988)
  14. Rabiner, L.R.: A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE 77(2), 257–285 (1989)
  15. Richardson, M., Domingos, P.: Markov Logic Networks. Machine Learning 62, 107–136 (2006)
  16. Sanghai, S., Domingos, P., Weld, D.: Dynamic Probabilistic Relational Models. In: Proc. of the 18th Int. Joint Conf. on Artificial Intelligence (IJCAI 2003), pp. 992–997 (2003)
  17. Sutton, C., McCallum, A.: Piecewise Training of Undirected Models. In: Proc. of the 21st Conf. on Uncertainty in Artificial Intelligence (UAI 2005) (2005)
  18. Sutton, C., Rohanimanesh, K., McCallum, A.: Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data. In: Proc. of the 21st Int. Conf. on Machine Learning (ICML 2004). ACM Press, New York (2004)
  19. Taskar, B., Abbeel, P., Koller, D.: Discriminative Probabilistic Models for Relational Data. In: Proc. of the 18th Conf. on Uncertainty in Artificial Intelligence (UAI 2002), pp. 485–492 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Bernd Gutmann (1)
  • Kristian Kersting (1)
  1. Institute for Computer Science, Machine Learning Lab, University of Freiburg, Freiburg, Germany
