Abstract
The principle of maximum entropy inductively completes the knowledge given by a knowledge base \(\mathcal R\), and learning has been proposed as the operation inverse to this inductive knowledge completion. While a corresponding learning approach has been developed for the case where \(\mathcal R\) is based on propositional logic, in this paper we describe an extension to a relational setting. It allows learning relational FO-PCL knowledge bases, containing both generic conditionals and specific conditionals referring to exceptional individuals, from a given probability distribution.
The research reported here was partially supported by the Deutsche Forschungsgemeinschaft (grant BE 1700/7-2).
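The inductive completion step that the abstract refers to can be illustrated in the propositional case. The sketch below is not taken from the paper; it is a minimal, self-contained example under assumed names (atoms `bird`, `flies` and the conditional (flies|bird)[0.9] are purely illustrative). It computes the maximum-entropy distribution over the four possible worlds subject to a single probabilistic conditional, using the standard exponential-family form \(P(\omega) \propto \exp(\lambda f(\omega))\) with feature \(f(\omega) = \mathbf{1}_{AB}(\omega) - p\,\mathbf{1}_{A}(\omega)\), solving for \(\lambda\) by bisection. Learning, as the inverse operation, would start from such a distribution and recover the conditional probability by evaluating \(P(B\mid A)\).

```python
import math
from itertools import product

# Illustrative propositional signature (not from the paper):
# worlds are truth assignments to (bird, flies).
worlds = list(product([0, 1], repeat=2))
p = 0.9  # target conditional probability for (flies|bird)[0.9]

def feature(w):
    """Constraint feature: E[f] = 0 iff P(bird & flies) = p * P(bird)."""
    bird, flies = w
    return (bird and flies) - p * bird

def expected_feature(lam):
    """E[f] under the Gibbs distribution P_lam(w) ~ exp(lam * f(w))."""
    weights = [math.exp(lam * feature(w)) for w in worlds]
    z = sum(weights)
    return sum(feature(w) * wt for w, wt in zip(worlds, weights)) / z

# E[f] is monotonically increasing in lam (its derivative is Var(f)),
# so a simple bisection finds the lam with E[f] = 0.
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if expected_feature(mid) < 0:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(lam * feature(w)) for w in worlds]
z = sum(weights)
P = {w: wt / z for w, wt in zip(worlds, weights)}

# "Learning" direction: read the conditional probability back off P.
p_bird = sum(pr for w, pr in P.items() if w[0])
p_bird_flies = sum(pr for w, pr in P.items() if w[0] and w[1])
```

After the bisection converges, `p_bird_flies / p_bird` recovers the imposed conditional probability 0.9, while all worlds not discriminated by the conditional remain as uniform as the constraint permits, which is exactly the MaxEnt completion behaviour the abstract describes. The relational FO-PCL setting of the paper generalizes this to first-order conditionals over sets of individuals.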
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Potyka, N., Beierle, C. (2012). An Approach to Learning Relational Probabilistic FO-PCL Knowledge Bases. In: Hüllermeier, E., Link, S., Fober, T., Seeger, B. (eds) Scalable Uncertainty Management. SUM 2012. Lecture Notes in Computer Science(), vol 7520. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33362-0_52
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33361-3
Online ISBN: 978-3-642-33362-0