Learning Bayesian Network Equivalence Classes from Incomplete Data

  • Hanen Borchani
  • Nahla Ben Amor
  • Khaled Mellouli
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4265)


This paper proposes a new method, Greedy Equivalence Search-Expectation Maximization (GES-EM), for learning Bayesian networks from incomplete data. The method extends the recently proposed GES algorithm to handle incomplete data, evaluating the generated networks with the expected Bayesian Information Criterion (BIC) scoring function. Experimental results show that the GES-EM algorithm yields more accurate structures than the standard Alternating Model Selection-Expectation Maximization (AMS-EM) algorithm.
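The abstract describes an alternation between an EM step (completing the data under the current model) and a structure-selection step scored by expected BIC. The sketch below illustrates that alternating scheme on a toy two-variable problem; it is my own minimal illustration, not the authors' GES-EM implementation, and the function names, the two candidate structures, and the toy model are all assumptions made for exposition.

```python
import math
from collections import defaultdict

def expected_counts(data, p_y_given_x):
    """E-step: expected joint counts N[x, y], spreading each record with a
    missing Y (None) over y = 0, 1 according to the current P(Y | X)."""
    n = defaultdict(float)
    for x, y in data:
        if y is None:
            for yv in (0, 1):
                n[(x, yv)] += p_y_given_x[x][yv]
        else:
            n[(x, y)] += 1.0
    return n

def expected_bic(n, structure, n_records):
    """Expected BIC = expected log-likelihood - (k / 2) * ln(N), where k is
    the number of free parameters of the candidate structure."""
    nx = {x: n[(x, 0)] + n[(x, 1)] for x in (0, 1)}
    total = nx[0] + nx[1]
    # P(X) term (1 free parameter), shared by both candidate structures
    ll = sum(nx[x] * math.log(nx[x] / total) for x in (0, 1) if nx[x] > 0)
    if structure == 'empty':          # X, Y independent: P(Y) has 1 parameter
        for yv in (0, 1):
            ny = n[(0, yv)] + n[(1, yv)]
            if ny > 0:
                ll += ny * math.log(ny / total)
        k = 2
    else:                             # 'x->y': P(Y | X) has 2 parameters
        for x in (0, 1):
            for yv in (0, 1):
                if n[(x, yv)] > 0:
                    ll += n[(x, yv)] * math.log(n[(x, yv)] / nx[x])
        k = 3
    return ll - 0.5 * k * math.log(n_records)

def run_ges_em(data, iters=5):
    """Alternate an EM completion step with a structure-selection step."""
    p = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}   # uniform start
    best = 'empty'
    for _ in range(iters):
        n = expected_counts(data, p)                  # E-step
        scores = {s: expected_bic(n, s, len(data)) for s in ('empty', 'x->y')}
        best = max(scores, key=scores.get)            # structure step
        p = {x: {yv: n[(x, yv)] / (n[(x, 0)] + n[(x, 1)])  # M-step
                 for yv in (0, 1)} for x in (0, 1)}
    return best, p
```

On data where X and Y are strongly dependent and a few Y values are missing, the search prefers the `'x->y'` structure because the gain in expected log-likelihood outweighs the extra parameter's BIC penalty.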


Keywords: Equivalence Class · Bayesian Network · Incomplete Data · Directed Acyclic Graph · Hidden Variable





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hanen Borchani (1)
  • Nahla Ben Amor (1)
  • Khaled Mellouli (1)
  1. LARODEC, Institut Supérieur de Gestion de Tunis, Le Bardo, Tunisia
