
Machine Learning and Knowledge Discovery in Databases

Lecture Notes in Computer Science, vol. 6913, pp. 113–128

Efficiently Approximating Markov Tree Bagging for High-Dimensional Density Estimation

  • François Schnitzler (Department of EECS and GIGA-Research, Université de Liège)
  • Sourour Ammar (Knowledge and Decision Team, Laboratoire d’Informatique de Nantes Atlantique (LINA) UMR 6241, Ecole Polytechnique de l’Université de Nantes)
  • Philippe Leray (Knowledge and Decision Team, Laboratoire d’Informatique de Nantes Atlantique (LINA) UMR 6241, Ecole Polytechnique de l’Université de Nantes)
  • Pierre Geurts (Department of EECS and GIGA-Research, Université de Liège)
  • Louis Wehenkel (Department of EECS and GIGA-Research, Université de Liège)


Abstract

We consider algorithms for generating Mixtures of Bagged Markov Trees for density estimation. In problems defined over many variables and with few available observations, such mixtures generally outperform a single Markov tree maximizing the data likelihood, but they are far more expensive to compute. In this paper, we describe new algorithms for approximating such models, with the aim of speeding up learning without sacrificing accuracy. More specifically, we propose to use a filtering step, obtained as a by-product of computing a first Markov tree, to avoid considering poor candidate edges in the subsequently generated trees. We compare these algorithms (on synthetic data sets) to Mixtures of Bagged Markov Trees, as well as to a single Markov tree derived by the classical Chow-Liu algorithm and to a recently proposed randomized scheme for building tree mixtures.
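
To make the approach concrete, the following is a minimal Python sketch, written under our own assumptions rather than taken from the authors' implementation, of a mixture of bagged Chow-Liu trees with edge filtering. The filter shown simply retains the strongest candidate edges by mutual information computed once on the full sample, a simplified stand-in for the by-product filtering described in the abstract; all names and parameters (mutual_information, chow_liu_tree, keep) are illustrative.

# Sketch only: bagged Chow-Liu trees with a simplified edge-filtering step,
# not the authors' code. Names and parameters are illustrative.
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    # Empirical mutual information between two discrete variable samples.
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= len(x)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])).sum())

def chow_liu_tree(data, candidate_edges):
    # Maximum-weight spanning tree (Kruskal with union-find) over the
    # candidate edges, weighted by mutual information estimated on `data`.
    n_vars = data.shape[1]
    scored = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                     for i, j in candidate_edges), reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    tree = []
    for _, i, j in scored:
        ri, rj = find(i), find(j)
        if ri != rj:            # edge (i, j) does not close a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree                 # may be a forest if candidates are too sparse

def bagged_markov_trees(data, n_trees=10, keep=None, rng=None):
    # Learn one Chow-Liu tree per bootstrap replicate; the mixture puts
    # uniform weight on the trees. If `keep` is given, only the `keep`
    # strongest full-sample edges are candidates for every bagged tree.
    rng = rng if rng is not None else np.random.default_rng(0)
    n_samples, n_vars = data.shape
    edges = list(combinations(range(n_vars), 2))
    if keep is not None:        # filtering step: discard poor candidate edges
        edges = sorted(edges, reverse=True,
                       key=lambda e: mutual_information(data[:, e[0]],
                                                        data[:, e[1]]))[:keep]
    trees = []
    for _ in range(n_trees):
        boot = data[rng.integers(0, n_samples, n_samples)]  # bootstrap rows
        trees.append(chow_liu_tree(boot, edges))
    return trees

# Toy usage: 200 samples of 8 binary variables, 5 bagged trees, and 12 of
# the 28 possible edges retained as candidates.
data = np.random.default_rng(1).integers(0, 2, size=(200, 8))
mixture = bagged_markov_trees(data, n_trees=5, keep=12)

A complete density estimator would additionally fit the per-edge conditional distributions of each tree on its bootstrap replicate and average the resulting tree likelihoods uniformly; that part is omitted here for brevity.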

Keywords

mixture models, Markov trees, bagging, randomization