Abstract
In this chapter, we provide an overview of Logic Tensor Networks (LTNs, for short), a formalism that ties tensor embeddings (n-dimensional vector representations) of domain elements to a logical syntax, and which has gained traction in the neuro-symbolic reasoning (NSR) literature in the past few years. After briefly recalling Real Logic, the underlying language of LTNs, we discuss the representation of different kinds of knowledge in the formalism and the three main tasks that can be addressed with it (learning, reasoning, and query answering), and finally describe several use cases that have shown the usefulness of LTNs in many tasks that are central to the construction of intelligent systems.
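To make the idea of grounding concrete, below is a minimal illustrative sketch in PyTorch of how Real Logic-style groundings and learning-as-satisfiability-maximization can be realized. It is not the authors' LTN library: the predicates `Smokes` and `Coughs`, the particular fuzzy operators, and all dimensions are illustrative assumptions.

```python
import torch

class Predicate(torch.nn.Module):
    """Grounds a unary predicate as a small neural network with output in [0, 1]."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, 1),
            torch.nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Product-based fuzzy connectives and a smooth "for all" quantifier
# (one common choice of operators in the Real Logic setting).
def Not(a):        return 1.0 - a
def And(a, b):     return a * b
def Implies(a, b): return 1.0 - a + a * b   # Reichenbach implication

def Forall(truths: torch.Tensor, p: int = 2) -> torch.Tensor:
    # Generalized-mean error aggregator: close to 1 only if all truths are high.
    return 1.0 - torch.mean((1.0 - truths) ** p) ** (1.0 / p)

# Hypothetical domain: 8 individuals grounded as 4-dimensional vectors.
individuals = torch.randn(8, 4)
Smokes, Coughs = Predicate(4), Predicate(4)

# Learning = maximizing the satisfaction of the knowledge base,
# here the single (hypothetical) axiom  forall x: Smokes(x) -> Coughs(x).
opt = torch.optim.Adam(list(Smokes.parameters()) + list(Coughs.parameters()), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    sat = Forall(Implies(Smokes(individuals), Coughs(individuals)))
    (1.0 - sat).backward()   # minimize dissatisfaction
    opt.step()

print(f"satisfaction after training: {sat.item():.3f}")
```

Because every operator in the sketch is differentiable, the satisfaction level of the whole knowledge base can be optimized end to end by gradient descent, which is the core mechanism behind learning in LTNs.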
Notes
1. The creators of LTN use the word “grounding” as a reference to “symbol grounding”, a concept covered further in Chap. 9.
2. From now on, we use “tensor” to abbreviate “tensor in the Real field”.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Shakarian, P., Baral, C., Simari, G.I., Xi, B., Pokala, L. (2023). LTN: Logic Tensor Networks. In: Neuro Symbolic Reasoning and Learning. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-031-39179-8_4
Print ISBN: 978-3-031-39178-1
Online ISBN: 978-3-031-39179-8