
Learning from interpretation transition using differentiable logic programming semantics


The combination of learning and reasoning is an essential and challenging topic in neuro-symbolic research. Differentiable inductive logic programming is a technique for learning symbolic knowledge representations from complete, mislabeled, or incomplete observed facts using neural networks. In this paper, we propose a novel differentiable inductive logic programming system called differentiable learning from interpretation transition (D-LFIT), which learns logic programs by combining embeddings of logic programs, neural networks, optimization algorithms, and an adapted algebraic method for computing logic program semantics. The proposed model has several attractive characteristics, including a small number of parameters, the ability to generate logic programs in a curriculum-learning setting, and linear time complexity for extracting rules from the trained neural networks. The well-known bottom clause propositionalization algorithm is incorporated when the proposed system learns from relational datasets. We compare our model with NN-LFIT, which extracts propositional logic rules from trained connected networks; the highly accurate rule learner RIPPER; the purely symbolic LFIT system LF1T; and CILP++, which integrates neural networks with a propositionalization method to handle first-order logic knowledge. The experimental results show that D-LFIT achieves accuracy comparable to the baselines when given complete, incomplete, and mislabeled data, and indicate that D-LFIT not only learns symbolic logic programs quickly and precisely but also performs robustly when processing mislabeled and incomplete datasets.
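Learning from interpretation transition (LFIT) induces a program from pairs (I, T_P(I)) of consecutive interpretations of a dynamical system. As a rough, self-contained illustration of the propositional semantics involved, one application of the immediate-consequence operator T_P can be sketched as follows; the toy program, atom names, and encoding here are our own illustrative assumptions, not the paper's actual construction:

```python
# Toy sketch of the immediate-consequence operator T_P of a
# propositional logic program, in the spirit of the algebraic
# semantics that D-LFIT adapts.  Illustrative assumptions only.

atoms = ["p", "q", "r"]              # propositional variables, indexed 0..2
# One entry per rule: (index of the head atom, indices of the body atoms).
# Example program:  p <- q.   q <- p, r.   r <- p.
rules = [(0, [1]), (1, [0, 2]), (2, [0])]

def tp(v):
    """Apply T_P once to an interpretation vector v in {0, 1}^n:
    an atom becomes true iff some rule with that head has its
    whole body satisfied in v."""
    out = [0] * len(atoms)
    for head, body in rules:
        if all(v[j] == 1 for j in body):
            out[head] = 1
    return out

# One interpretation transition: I = {p, r} maps to T_P(I) = {q, r}.
print(tp([1, 0, 1]))   # -> [0, 1, 1]
```

A learner in the LFIT setting observes such (input, output) interpretation pairs and must recover the rules; D-LFIT does so differentiably, through matrix embeddings of candidate programs rather than the explicit rule table used in this sketch.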


Fig. 1
Fig. 2

References


  1. Apt, K. R., Blair, H. A., & Walker, A. (1988). Towards a theory of declarative knowledge. In Foundations of deductive databases and logic programming (pp. 89–148). San Mateo: Morgan Kaufmann.

  2. Avila Garcez, A. S., & Zaverucha, G. (1999). The connectionist inductive learning and logic programming system. Applied Intelligence, 11(1), 59–77.


  3. d'Avila Garcez, A. S., Broda, K., & Gabbay, D. M. (2001). Symbolic knowledge extraction from trained neural networks: A sound approach. Artificial Intelligence, 125(1–2), 155–207.


  4. Bader, S., Hitzler, P., & Hölldobler, S. (2004). The integration of connectionism and first-order knowledge representation and reasoning as a challenge for artificial intelligence. In Proceedings of the third international conference on information (pp. 22–33).

  5. Bader, S., Hitzler, P., & Witzel, A. (2005). Integrating first-order logic programs and connectionist systems—a constructive approach. In Proceedings of the IJCAI workshop on neural-symbolic learning and reasoning (Vol. 5).

  6. Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009). Curriculum learning. In Proceedings of ICML (Vol. 382, pp. 41–48). New York: ACM Press.

  7. Chaos, A., Aldana, M., Espinosa-Soto, C., Ponce de León, B., Arroyo, A. G., & Alvarez-Buylla, E. R. (2006). From genes to flower patterns and evolution: Dynamic models of gene regulatory networks. Journal of Plant Growth Regulation, 25(4), 278–289.


  8. Cohen, W. W. (1995). Fast effective rule induction. In Proceedings of ICML (pp. 115–123). Elsevier.

  9. Davidich, M. I., & Bornholdt, S. (2008). Boolean network model predicts cell cycle sequence of fission yeast. PLoS ONE, 3(2), e1672.


  10. Davis, J., Burnside, E. S., Dutra, I. C., Page, D., & Costa, V. S. (2005). An integrated approach to learning Bayesian networks of rules. In LNAI: Vol. 3720. Proc. ECML (pp. 84–95). Berlin: Springer.

  11. Evans, R., & Grefenstette, E. (2018). Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61, 1–64.


  12. Evans, R., Hernández-Orallo, J., Welbl, J., Kohli, P., & Sergot, M. (2019). Making sense of sensory input. Artificial Intelligence, 293, 103438.


  13. Fauré, A., Naldi, A., Chaouiya, C., & Thieffry, D. (2006). Dynamical analysis of a generic Boolean model for the control of the mammalian cell cycle. Bioinformatics, 22(14), e124–e131.


  14. França, M. V. M., D’Avila Garcez, A. S., & Zaverucha, G. (2015). Relational knowledge extraction from neural networks. In CEUR workshop proceedings (Vol. 1583, pp. 11–12).

  15. França, M. V. M., Zaverucha, G., & D’Avila Garcez, A. S. (2014). Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine Learning, 94(1), 81–104.


  16. Gentet, E., Tourret, S., & Inoue, K. (2017). Learning from interpretation transition using feed-forward neural networks. In CEUR workshop proceedings (pp. 27–33).

  17. Hitzler, P., & Seda, A. K. (2000). A note on the relationships between logic programs and neural networks. In Proceedings of the 4th irish workshop on formal methods (pp. 1–9).

  18. Hitzler, P., Hölldobler, S., & Seda, A. K. (2004). Logic programs and connectionist networks. Journal of Applied Logic, 2(3), 273–300.


  19. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780.


  20. Hölldobler, S. (1993). Automated inferencing and connectionist models. Fakultät Informatik. Technische Hochschule Darmstadt. (Doctoral dissertation, Habilitationsschrift).

  21. Hölldobler, S., & Kalinke, Y. (1994). Towards a new massively parallel computational model for logic programming. In Proceedings of the ECAI'94 workshop on combining symbolic and connectionist processing (pp. 68–77).

  22. Hölldobler, S., Kalinke, Y., & Störr, H. P. (1999). Approximating the semantics of logic programs by recurrent neural networks. Applied Intelligence, 11(1), 45–58.


  23. Inoue, K. (2011). Logic programming for Boolean networks. In Proceedings of IJCAI (pp. 924–930). Menlo Park: AAAI Press.

  24. Inoue, K., & Sakama, C. (2012). Oscillating behavior of logic programs. In E. Erdem, J. Lee, Y. Lierler, & D. Pearce (Eds.), Correct reasoning: Essays on logic-based AI in honour of Vladimir Lifschitz, LNAI (Vol. 7265, pp. 345–362). Berlin: Springer.

  25. Inoue, K., Ribeiro, T., & Sakama, C. (2014). Learning from interpretation transition. Machine Learning, 94(1), 51–79.


  26. Kauffman, S. A. (1993). The origins of order: Self-organization and selection in evolution. Oxford: Oxford University Press.


  27. Kazemi, S. M., & Poole, D. (2018). RelNN: A deep neural model for relational learning. In Proceedings of AAAI (pp. 6367–6375). AAAI Press.

  28. King, R. D., Srinivasan, A., & Sternberg, M. J. E. (1995). Relating chemical activity to structure: An examination of ILP successes. New Generation Computing, 13(3–4), 411–433.


  29. Kramer, S., Lavrač, N., & Flach, P. (2001). Propositionalization approaches to relational data mining. In Relational data mining (pp. 262–291). Berlin: Springer.

  30. Lehmann, J., Bader, S., & Hitzler, P. (2010). Extracting reduced logic programs from artificial neural networks. Applied Intelligence, 32(3), 249–266.


  31. Li, F., Long, T., Lu, Y., Ouyang, Q., & Tang, C. (2004). The yeast cell-cycle network is robustly designed. Proceedings of the National Academy of Sciences of the United States of America, 101(14), 4781–4786.


  32. Muggleton, S. (1991). Inductive logic programming. New Generation Computing, 8(4), 295–318.


  33. Muggleton, S. (1995). Inverse entailment and Progol. New Generation Computing, 13(3–4), 245–286.


  34. Muggleton, S., & De Raedt, L. (1994). Inductive logic programming: Theory and methods. The Journal of Logic Programming, 19(1), 629–679.


  35. Nguyen, H. D., Sakama, C., Sato, T., & Inoue, K. (2018). Computing logic programming semantics in linear algebra. International conference on multi-disciplinary trends in artificial intelligence (pp. 32–48). Cham: Springer.

  36. Phua, Y. J., & Inoue, K. (2019). Learning logic programs from noisy state transition data. ILP (pp. 72–80). Cham: Springer.


  37. Phua, Y. J., Ribeiro, T., & Inoue, K. (2019). Learning representation of relational dynamics with delays and refining with prior knowledge. IfCoLog Journal of Logics and their Applications, 6(4), 695–708.


  38. Quinlan, J. R. (1993). C4.5: programs for machine learning. San Francisco: Morgan Kaufmann.

  39. Rocktäschel, T., & Riedel, S. (2016). Learning knowledge base inference with neural theorem provers. In Proceedings of the 5th workshop on automated knowledge base construction (pp. 45–50).

  40. Sakama, C., Nguyen, H. D., Sato, T., & Inoue, K. (2018). Partial evaluation of logic programs in vector spaces. In 11th workshop on answer set programming and other computing paradigms. Oxford, UK.

  41. Seda, A. K., & Lane, M. (2004). On approximation in the integration of connectionist and logic-based systems. In Proceedings of the third international conference on information (pp. 297–300).

  42. Seda, A. K. (2006). On the integration of connectionist and logic-based systems. Electronic Notes in Theoretical Computer Science, 161(1), 109–130.


  43. Serafini, L., & Garcez, A. D. A. (2016). Logic tensor networks: deep learning and logical reasoning from data and knowledge. In CEUR workshop proceedings (Vol. 1768).

  44. Šourek, G., Aschenbrenner, V., Železný, F., Schockaert, S., & Kuželka, O. (2018). Lifted relational neural networks: Efficient learning of latent relational structures. Journal of Artificial Intelligence Research, 62, 69–100.


  45. Srinivasan, A., Muggleton, S., King, R. D., & Sternberg, M. J. E. (1994). Mutagenesis: ILP experiments in a non-determinate biological domain. In LNAI: Vol. 237. Proc. ILP (pp. 217–232). Berlin: Springer.

  46. Tamaddoni-Nezhad, A., & Muggleton, S. (2009). The lattice structure and refinement operators for the hypothesis space bounded by a bottom clause. Machine Learning, 76(1), 37–72.


  47. Tourret, S., Gentet, E., & Inoue, K. (2017). Learning human-understandable descriptions of dynamical systems from feed-forward neural networks. In International symposium on neural networks (pp. 483–492). Cham: Springer.

  48. Van Emden, M. H., & Kowalski, R. A. (1976). The semantics of predicate logic as a programming language. Journal of the ACM, 23(4), 733–742.


  49. Wang, W. Y., & Cohen, W. W. (2016). Learning first-order logic embeddings via matrix factorization. In Proceedings of IJCAI (pp. 2132–2138).

  50. Witten, I. H., Frank, E., Hall, M. A., & Pal, C. J. (2017). Data mining: Practical machine learning tools and techniques (4th ed.). Morgan Kaufmann, an imprint of Elsevier.

  51. Yang, F., Yang, Z., & Cohen, W. W. (2017). Differentiable learning of logical rules for knowledge base reasoning. In Proceedings of NIPS (pp. 2320–2329).



This work was supported by the National Key R&D Program of China under Grants 2018YFC1314200 and 2018YFB1003904, and the National Natural Science Foundation of China under Grants 61772035, 61972005, and 61932001; this work was also supported in part by the NII international internship program and JSPS KAKENHI Grant Number JP17H00763.

Author information




Conceptualization: Kun Gao, Katsumi Inoue; Methodology: Kun Gao; Formal analysis and investigation: Kun Gao; Writing - original draft preparation: Kun Gao; Writing - review and editing: Katsumi Inoue, Kun Gao, Yongzhi Cao; Funding acquisition: Hanpin Wang, Yongzhi Cao, Katsumi Inoue; Resources: Hanpin Wang, Yongzhi Cao, Katsumi Inoue; Supervision: Hanpin Wang, Yongzhi Cao, Katsumi Inoue.

Corresponding author

Correspondence to Hanpin Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Code availability

The LF1T system, implemented in Python, must be downloaded separately.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The comments and suggestions of the editors and the anonymous reviewers are gratefully acknowledged.

Editors: Nikos Katzouris, Alexander Artikis, Luc De Raedt, Artur d’Avila Garcez, Sebastijan Dumančić, Ute Schmid, Jay Pujara.


About this article


Cite this article

Gao, K., Wang, H., Cao, Y. et al. Learning from interpretation transition using differentiable logic programming semantics. Mach Learn (2021).

Keywords


  • Machine learning
  • Differentiable inductive logic programming
  • Explainability
  • Neuro-symbolic method
  • Learning from interpretation transition