Stacked Structure Learning for Lifted Relational Neural Networks

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10759)

Abstract

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules, which act as templates for constructing feed-forward neural networks. While previous work has shown that LRNNs can achieve state-of-the-art results on various ILP tasks, those results depended on hand-crafted rules. In this paper, we extend the LRNN framework with structure learning, enabling a fully automated learning process. Like many ILP methods, our structure learning algorithm proceeds iteratively, searching top-down through the hypothesis space of all possible Horn clauses and considering both the predicates that occur in the training examples and invented soft concepts entailed by the best weighted rules found so far. In the experiments, we demonstrate the ability to automatically induce useful hierarchical soft concepts, leading to deep LRNNs with competitive predictive power.
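
To make the search loop described above concrete, the following is a minimal, hypothetical Python sketch of iterative top-down clause search with predicate invention. It is a sketch under stated assumptions, not the authors' implementation: the `Clause` class, the `refine` helper, and the `score` callback (which would train a candidate rule's weights, e.g. by gradient descent, and return its validation performance) are illustrative names introduced here.

```python
# Hypothetical sketch of top-down structure search with predicate
# invention, in the spirit of the paper -- not the authors' code.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Clause:
    head: str                                    # e.g. "active(X)"
    body: tuple = field(default_factory=tuple)   # body literals, e.g. ("bond(X, Y)",)


def refine(clause, predicates):
    """Top-down refinement: extend the clause body by one literal."""
    for pred in predicates:
        yield Clause(clause.head, clause.body + (pred,))


def learn_structure(target, predicates, score, depth=3, beam=5):
    """Iteratively grow a set of weighted Horn clauses.

    `score(clause)` is assumed to fit the clause's weights on the
    training examples and return validation accuracy.
    """
    learned = []
    frontier = [Clause(target)]
    for _ in range(depth):
        candidates = [c for cl in frontier for c in refine(cl, predicates)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]
        learned.append(frontier[0])
        # Predicate invention: the best rule so far defines a new soft
        # concept that later iterations may reuse in clause bodies,
        # yielding the hierarchical (deep) structure described above.
        predicates = predicates + [f"soft_{len(learned)}(X)"]
    return learned


# Example usage (with a user-supplied scoring function `my_score`):
#   rules = learn_structure("active(X)", ["bond(X, Y)", "atom_type(Y)"], my_score)
```
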

Keywords

Structure Learning Algorithm · First-order Rules · Predicate Lattice · Meta-interpretive Learning · Target Predicate

Acknowledgements

GŠ, MS and FŽ acknowledge support by project no. 17-26999S granted by the Czech Science Foundation. This work was done while OK was with Cardiff University and supported by a grant from the Leverhulme Trust (RPG-2014-164). SS is supported by ERC Starting Grant 637277. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures”.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Czech Technical University, Prague, Czech Republic
  2. School of CS and Informatics, Cardiff University, Cardiff, UK
  3. Department of Computer Science, KU Leuven, Leuven, Belgium
