Progress in Artificial Intelligence

Volume 5, Issue 1, pp 15–26

Learning Bayesian networks with low inference complexity

Regular Paper

DOI: 10.1007/s13748-015-0070-0

Cite this article as:
Benjumeda, M., Larrañaga, P. & Bielza, C. Prog Artif Intell (2016) 5: 15. doi:10.1007/s13748-015-0070-0

Abstract

One of the main research topics in machine learning today is the improvement of the inference and learning processes in probabilistic graphical models. Traditionally, inference and learning have been treated separately, but because the structure of a model determines its inference complexity, most learning methods can produce models in which inference is inefficient. In this paper we propose a framework for learning Bayesian networks with low inference complexity. To do so, we use a representation of the network factorization that allows an upper bound on the inference complexity of each candidate model to be evaluated efficiently during the learning process. Experimental results show that the proposed methods obtain tractable models whose predictions are more accurate than those produced by approximate inference in models obtained with a well-known Bayesian network learner.
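To make the idea concrete, the following is a minimal Python sketch of a complexity-bounded structure learner. It is not the authors' method: the paper bounds the size of an arithmetic circuit built from the network polynomial, whereas this sketch substitutes a treewidth-style bound (the largest clique produced by min-fill elimination of the moralized graph) as a stand-in for the inference-complexity bound. The score callback, candidate_edges and max_bound arguments are illustrative assumptions supplied by the caller.

# Illustrative sketch only (not the paper's algorithm): hill climbing over edge
# additions with a tractability constraint. The paper bounds the size of an
# arithmetic circuit derived from the network polynomial; here a treewidth-style
# bound stands in as the inference-complexity bound. `score`, `candidate_edges`
# and `max_bound` are hypothetical inputs supplied by the caller.
from itertools import combinations

def moralize(parents):
    # Build the moral graph: link each node to its parents and marry co-parents.
    adj = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for a, b in combinations(ps, 2):
            adj[a].add(b); adj[b].add(a)
    return adj

def max_clique_bound(adj):
    # Min-fill elimination: the largest clique met along the way upper-bounds
    # the cost of exact inference (treewidth + 1).
    adj = {v: set(ns) for v, ns in adj.items()}
    remaining, bound = set(adj), 1
    while remaining:
        v = min(remaining, key=lambda u: sum(
            1 for a, b in combinations(adj[u] & remaining, 2) if b not in adj[a]))
        neighbors = adj[v] & remaining
        bound = max(bound, len(neighbors) + 1)
        for a, b in combinations(neighbors, 2):
            adj[a].add(b); adj[b].add(a)
        remaining.remove(v)
    return bound

def creates_cycle(parents, p, c):
    # Adding p -> c creates a directed cycle iff c is already an ancestor of p.
    stack, seen = [p], set()
    while stack:
        u = stack.pop()
        if u == c:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(parents[u])
    return False

def greedy_learn(variables, candidate_edges, score, max_bound):
    # Repeatedly add the edge with the best score gain, rejecting any edge that
    # would push the complexity bound above max_bound.
    parents = {v: set() for v in variables}
    while True:
        base, best = score(parents), None
        for p, c in candidate_edges:
            if p in parents[c] or creates_cycle(parents, p, c):
                continue
            parents[c].add(p)
            if max_clique_bound(moralize(parents)) <= max_bound:
                gain = score(parents) - base          # e.g. a BIC-style score
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, p, c)
            parents[c].remove(p)
        if best is None:
            return parents
        parents[best[2]].add(best[1])

In use, a caller would supply a data-driven score (for example a BIC gain) and a small max_bound to keep exact inference feasible in the learned model; both choices are assumptions of this sketch rather than details taken from the paper.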

Keywords

Probabilistic graphical models · Bayesian networks · Arithmetic circuits · Network polynomials · Structure learning · Thin models

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Marco Benjumeda (1)
  • Pedro Larrañaga (1)
  • Concha Bielza (1)

  1. Computational Intelligence Group, Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid, Madrid, Spain
