Constructing Markov Logic Networks from First-Order Default Rules

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9575)

Abstract

Expert knowledge can often be represented using default rules of the form “if A then typically B”. In a probabilistic framework, such default rules can be seen as constraints on what should be derivable by MAP inference. We exploit this idea to construct a Markov logic network \(\mathcal{M}\) from a set of first-order default rules D, such that MAP inference from \(\mathcal{M}\) exactly corresponds to default reasoning from D, where first-order default rules are viewed as templates for constructing propositional default rules. In particular, to construct appropriate Markov logic networks, we lift three standard methods for default reasoning. The resulting Markov logic networks can then be refined based on available training data. Our method thus offers a convenient way of using expert knowledge to constrain or guide the process of learning Markov logic networks.
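To make the kind of construction described above more concrete, the sketch below turns a small set of propositional default rules into weighted material implications using Pearl's System Z ranking, giving exponentially larger weights to more specific defaults so that MAP inference over the weighted formulas respects the defaults. This is a minimal sketch under assumptions of our own (a propositional setting, brute-force world enumeration, weights of the form base^(k+1)); it is not the paper's lifted first-order construction, and the names `Rule`, `tolerated`, `z_partition`, and `to_weighted_formulas` are illustrative.

```python
# A minimal sketch, assuming a propositional setting and Pearl's System Z
# ranking; this is NOT the paper's lifted first-order construction.
# Rules "if A then typically B" are ranked by tolerance and turned into
# weighted material implications, with exponentially larger weights for
# more specific (higher-ranked) defaults, so that MAP inference over the
# weighted formulas agrees with the defaults.

from collections import namedtuple
from itertools import product

# A rule holds a human-readable name plus antecedent/consequent tests,
# each mapping a world (dict atom -> bool) to a Boolean.
Rule = namedtuple("Rule", ["name", "ante", "cons"])


def worlds(atoms):
    """Enumerate all truth assignments over the given atoms (brute force)."""
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))


def tolerated(rule, rules, atoms):
    """rule is tolerated by rules if some world verifies it (antecedent and
    consequent both true) while materially satisfying every rule in rules."""
    return any(
        rule.ante(w) and rule.cons(w)
        and all((not r.ante(w)) or r.cons(w) for r in rules)
        for w in worlds(atoms)
    )


def z_partition(rules, atoms):
    """System Z: repeatedly peel off the layer of rules tolerated by the rest."""
    remaining, layers = list(rules), []
    while remaining:
        layer = [r for r in remaining if tolerated(r, remaining, atoms)]
        if not layer:
            raise ValueError("the set of default rules is inconsistent")
        layers.append(layer)
        remaining = [r for r in remaining if r not in layer]
    return layers


def to_weighted_formulas(rules, atoms, base=2.0):
    """Attach weight base**(k + 1) to the material implication of each rule in
    layer k, so more specific defaults dominate more general ones under MAP."""
    return [(rule, base ** (k + 1))
            for k, layer in enumerate(z_partition(rules, atoms))
            for rule in layer]


if __name__ == "__main__":
    # "Birds typically fly", "penguins typically do not fly",
    # "penguins are typically birds".
    atoms = ["bird", "penguin", "flies"]
    rules = [
        Rule("bird => flies", lambda w: w["bird"], lambda w: w["flies"]),
        Rule("penguin => not flies", lambda w: w["penguin"], lambda w: not w["flies"]),
        Rule("penguin => bird", lambda w: w["penguin"], lambda w: w["bird"]),
    ]
    for rule, weight in to_weighted_formulas(rules, atoms):
        print(f"{rule.name}  (weight {weight})")
```

On the penguin example, the general rule "bird => flies" lands in the lowest layer (weight 2) while the two penguin rules land in the next layer (weight 4), so a MAP-preferred world for a penguin violates only the weaker rule and concludes that it does not fly. Note that for the weights to guarantee this lexicographic behaviour in general, the base (or per-layer weight) would have to grow with the number of rules per layer, so that a single higher-ranked rule outweighs all lower-ranked ones combined.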

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Ondřej Kuželka (1)
  • Jesse Davis (2)
  • Steven Schockaert (1)
  1. School of Computer Science & Informatics, Cardiff University, Cardiff, UK
  2. Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium
