Scalable, Efficient and Correct Learning of Markov Boundaries Under the Faithfulness Assumption

  • Jose M. Peña
  • Johan Björkegren
  • Jesper Tegnér
Conference paper

DOI: 10.1007/11518655_13

Part of the Lecture Notes in Computer Science book series (LNCS, volume 3571)
Cite this paper as:
Peña J.M., Björkegren J., Tegnér J. (2005) Scalable, Efficient and Correct Learning of Markov Boundaries Under the Faithfulness Assumption. In: Godo L. (ed.) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2005. Lecture Notes in Computer Science, vol 3571. Springer, Berlin, Heidelberg.

Abstract

We propose an algorithm for learning the Markov boundary of a random variable from data without having to learn a complete Bayesian network. The algorithm is correct under the faithfulness assumption, scalable, and data-efficient. The last two properties are important because we aim to apply the algorithm to identify the minimal set of random variables that is relevant for probabilistic classification in databases with many random variables but few instances. We report experiments with synthetic and real databases with 37, 441, and 139,352 random variables showing that the algorithm performs satisfactorily.
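
The abstract describes a constraint-based approach: the Markov boundary of a target variable is recovered directly from conditional-independence tests, without first learning a full Bayesian network. The sketch below illustrates the general grow-shrink pattern that such learners follow; it is an illustration under stated assumptions, not the algorithm proposed in the paper. The Fisher-z partial-correlation test, the function names (ci_test, learn_markov_boundary), and the single-pass shrinking step are choices made only for this example.

```python
# Illustrative sketch only: a generic grow-shrink style Markov-boundary learner
# built on conditional-independence (CI) tests. It is NOT the authors' algorithm;
# the test, thresholds, and names are assumptions made for this example.
import numpy as np
from scipy import stats


def ci_test(data, x, y, z, alpha=0.05):
    """Fisher-z partial-correlation test: is column x independent of y given z?"""
    idx = [x, y] + list(z)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)
    # Partial correlation of x and y given z from the precision matrix.
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    n = data.shape[0]
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(z) - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return p_value > alpha  # True => independence is not rejected


def learn_markov_boundary(data, target, alpha=0.05):
    """Grow-shrink style estimate of the Markov boundary of `target` (a column index)."""
    candidates = [v for v in range(data.shape[1]) if v != target]
    mb = []

    # Growing phase: keep adding any variable that is dependent on the target
    # given the current boundary estimate, until no more can be added.
    changed = True
    while changed:
        changed = False
        for v in candidates:
            if v not in mb and not ci_test(data, target, v, mb, alpha):
                mb.append(v)
                changed = True

    # Shrinking phase: remove false positives that are independent of the
    # target given the rest of the boundary estimate.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if ci_test(data, target, v, rest, alpha):
            mb.remove(v)
    return mb


if __name__ == "__main__":
    # Small linear-Gaussian example: the Markov boundary of x3 is {x0, x1, x4}.
    rng = np.random.default_rng(0)
    n = 2000
    x0 = rng.normal(size=n)
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)                 # irrelevant variable
    x3 = x0 + x1 + 0.5 * rng.normal(size=n) # target with parents x0, x1
    x4 = x3 + 0.5 * rng.normal(size=n)      # child of the target
    data = np.column_stack([x0, x1, x2, x3, x4])
    print(learn_markov_boundary(data, target=3))  # expected: [0, 1, 4]
```

The growing phase can admit false positives (variables that only appear dependent given the current estimate), which is why the shrinking phase re-tests every member against the rest of the boundary. Data efficiency in this setting hinges on keeping conditioning sets small, since CI tests with large conditioning sets are unreliable when instances are few.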


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Jose M. Peña (1)
  • Johan Björkegren (2)
  • Jesper Tegnér (1, 2)

  1. Computational Biology, Department of Physics and Measurement Technology, Linköping University, Sweden
  2. Center for Genomics and Bioinformatics, Karolinska Institutet, Sweden
