Machine Learning, Volume 62, Issue 1–2, pp 107–136

Markov logic networks

DOI: 10.1007/s10994-006-5833-1

Cite this article as:
Richardson, M. & Domingos, P. Mach Learn (2006) 62: 107. doi:10.1007/s10994-006-5833-1

Abstract

We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
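As a rough illustration of the semantics summarized above, the sketch below grounds a single weighted clause over two constants and answers a conditional query. The predicate names (Smokes, Cancer), the constants, and the weight value are hypothetical, chosen only to make the weighted-grounding semantics concrete, and inference is done by brute-force enumeration of possible worlds rather than the MCMC procedure the paper describes; this is not the authors' implementation.

```python
# Toy MLN sketch: one weighted clause, Smokes(x) => Cancer(x), grounded over
# two constants. P(X = x) is proportional to exp(sum_i w_i * n_i(x)), where
# n_i(x) counts the true groundings of formula i in world x.
# Names, constants, and the weight are hypothetical; inference here is exact
# enumeration for brevity, not the MCMC inference described in the abstract.
from itertools import product
from math import exp

CONSTANTS = ["Anna", "Bob"]
WEIGHT = 1.5  # hypothetical weight attached to Smokes(x) => Cancer(x)

def clause_true(world, x):
    # Ground clause !Smokes(x) v Cancer(x) under the truth assignment `world`.
    return (not world[("Smokes", x)]) or world[("Cancer", x)]

def unnormalized_prob(world):
    # One feature per grounding of the formula, each carrying the clause weight.
    n_true = sum(clause_true(world, x) for x in CONSTANTS)
    return exp(WEIGHT * n_true)

# Enumerate every truth assignment to the ground atoms (the possible worlds)
# and answer a conditional query by summing unnormalized probabilities.
atoms = [(pred, c) for pred in ("Smokes", "Cancer") for c in CONSTANTS]
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]

numer = sum(unnormalized_prob(w) for w in worlds
            if w[("Smokes", "Anna")] and w[("Cancer", "Anna")])
denom = sum(unnormalized_prob(w) for w in worlds if w[("Smokes", "Anna")])
print("P(Cancer(Anna) | Smokes(Anna)) =", numer / denom)
```

Raising the weight pushes the conditional probability toward 1, while a weight of 0 makes the clause irrelevant; in the infinite-weight limit the clause behaves like a hard first-order constraint.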

Keywords

Statistical relational learning · Markov networks · Markov random fields · Log-linear models · Graphical models · First-order logic · Satisfiability · Inductive logic programming · Knowledge-based model construction · Markov chain Monte Carlo · Pseudo-likelihood · Link prediction

Copyright information

© Springer Science + Business Media, Inc. 2006

Authors and Affiliations

Department of Computer Science and Engineering, University of Washington, Seattle, USA
