Scalable, Efficient and Correct Learning of Markov Boundaries Under the Faithfulness Assumption
- Cite this paper as:
- Peña J.M., Björkegren J., Tegnér J. (2005) Scalable, Efficient and Correct Learning of Markov Boundaries Under the Faithfulness Assumption. In: Godo L. (eds) Symbolic and Quantitative Approaches to Reasoning with Uncertainty. ECSQARU 2005. Lecture Notes in Computer Science, vol 3571. Springer, Berlin, Heidelberg
We propose an algorithm for learning the Markov boundary of a random variable from data without having to learn a complete Bayesian network. The algorithm is correct under the faithfulness assumption, scalable, and data-efficient. The last two properties are important because we aim to apply the algorithm to identify the minimal set of random variables that is relevant for probabilistic classification in databases with many random variables but few instances. We report experiments with synthetic and real databases with 37, 441, and 139,352 random variables showing that the algorithm performs satisfactorily.
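The paper's own algorithm is not reproduced in this preview. As background, the abstract's idea of learning a Markov boundary directly, without fitting a full Bayesian network, is commonly realized by grow-shrink style procedures: a grow phase adds every variable dependent on the target given the current candidate set, and a shrink phase removes false positives. The sketch below illustrates that general scheme, not the authors' specific algorithm; the partial-correlation independence test and the synthetic linear-Gaussian data are illustrative assumptions.

```python
import numpy as np

def partial_corr_indep(data, x, y, z, thresh=3.0):
    """Illustrative CI test: Fisher z-transform of the partial
    correlation of columns x and y given columns z.
    Assumes linear-Gaussian data; `thresh` is a z-score cutoff."""
    n = data.shape[0]
    if z:
        # Regress out the conditioning set from both variables.
        Z = np.column_stack([np.ones(n), data[:, z]])
        rx = data[:, x] - Z @ np.linalg.lstsq(Z, data[:, x], rcond=None)[0]
        ry = data[:, y] - Z @ np.linalg.lstsq(Z, data[:, y], rcond=None)[0]
    else:
        rx, ry = data[:, x], data[:, y]
    r = np.clip(np.corrcoef(rx, ry)[0, 1], -0.9999, 0.9999)
    zstat = np.sqrt(n - len(z) - 3) * 0.5 * np.log((1 + r) / (1 - r))
    return abs(zstat) < thresh  # True means "independent"

def grow_shrink_mb(data, target, indep):
    """Generic grow-shrink Markov boundary search around `target`,
    using the CI oracle `indep(data, x, y, z)`."""
    n_vars = data.shape[1]
    mb = []
    # Grow: add any variable still dependent on the target given mb.
    changed = True
    while changed:
        changed = False
        for v in range(n_vars):
            if v != target and v not in mb and not indep(data, v, target, mb):
                mb.append(v)
                changed = True
    # Shrink: drop variables independent of the target given the rest.
    for v in list(mb):
        rest = [u for u in mb if u != v]
        if indep(data, v, target, rest):
            mb.remove(v)
    return sorted(mb)

# Toy faithful distribution: T has parents X1, X2 (columns 1, 2),
# child X3 (column 3), and X4 (column 4) is unrelated, so the
# Markov boundary of T (column 0) is {1, 2, 3}.
rng = np.random.default_rng(0)
n = 5000
x1, x2, x4 = rng.normal(size=(3, n))
t = x1 + x2 + 0.5 * rng.normal(size=n)
x3 = t + 0.5 * rng.normal(size=n)
data = np.column_stack([t, x1, x2, x3, x4])
mb = grow_shrink_mb(data, 0, partial_corr_indep)
```

In this toy run `mb` should recover the parents and the child while excluding the unrelated variable. Note that each grow pass tests every remaining variable, so a scalable variant for the tens of thousands of variables mentioned in the abstract would need to order and prune these tests aggressively.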