Scaling Up the Greedy Equivalence Search Algorithm by Constraining the Search Space of Equivalence Classes

  • Juan I. Alonso-Barba
  • Luis de la Ossa
  • Jose A. Gámez
  • Jose M. Puerta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6717)


Greedy Equivalence Search (GES) is currently the state-of-the-art algorithm for learning Bayesian networks (BNs) from complete data. From a practical point of view, however, it may not be fast enough for high-dimensional domains. This paper proposes variants of GES aimed at increasing its efficiency. Under the faithfulness assumption, the modified algorithms preserve the theoretical properties of the original one; that is, they recover a perfect map of the target distribution in the large-sample limit. Moreover, experimental results confirm that, although they perform far fewer computations, the BNs learnt by these algorithms have the same quality as those learnt by GES.
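To make the abstract concrete, the following is a minimal, hypothetical Python sketch of a score-based two-phase greedy structure search (forward edge additions, then backward deletions, scored with BIC) in which candidate parents are pruned by a pairwise mutual-information threshold, mirroring the idea of constraining the search space. It is not the authors' algorithm: real GES searches over equivalence classes (CPDAGs) with Insert/Delete operators, whereas this toy version climbs directly in DAG space, and the threshold value is purely illustrative.

```python
import math
import random

random.seed(0)

# Toy data: 2000 samples from the binary chain X0 -> X1 -> X2.
def sample_row():
    x0 = random.random() < 0.5
    x1 = random.random() < (0.9 if x0 else 0.1)
    x2 = random.random() < (0.9 if x1 else 0.1)
    return (int(x0), int(x1), int(x2))

data = [sample_row() for _ in range(2000)]
N_VARS = 3

def bic_local(child, parents):
    """Local BIC score of one node given a parent set (binary variables)."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        cc = counts.setdefault(key, [0, 0])
        cc[row[child]] += 1
    ll = 0.0
    for cc in counts.values():
        tot = cc[0] + cc[1]
        for c in cc:
            if c:
                ll += c * math.log(c / tot)
    # One free parameter per parent configuration for a binary child.
    return ll - 0.5 * len(counts) * math.log(len(data))

def creates_cycle(parents, frm, to):
    """True if adding frm -> to would close a directed cycle."""
    stack, seen = [frm], set()
    while stack:
        v = stack.pop()
        if v == to:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return False

def mutual_info(i, j):
    """Empirical mutual information (in nats) between variables i and j."""
    n = len(data)
    joint = {}
    for row in data:
        joint[(row[i], row[j])] = joint.get((row[i], row[j]), 0) + 1
    pi = [sum(c for (a, _), c in joint.items() if a == x) for x in (0, 1)]
    pj = [sum(c for (_, b), c in joint.items() if b == y) for y in (0, 1)]
    return sum((c / n) * math.log(c * n / (pi[a] * pj[b]))
               for (a, b), c in joint.items())

def greedy_search(cands):
    """Forward phase adds the best-scoring edge while the BIC improves,
    then a backward phase deletes edges while that improves the BIC."""
    parents = [set() for _ in range(N_VARS)]
    for op in ("add", "delete"):
        improved = True
        while improved:
            improved, best = False, (1e-9, None, None)
            for to in range(N_VARS):
                base = bic_local(to, parents[to])
                moves = (cands[to] - parents[to]) if op == "add" else set(parents[to])
                for frm in moves:
                    if op == "add" and creates_cycle(parents, frm, to):
                        continue
                    trial = (parents[to] | {frm}) if op == "add" else (parents[to] - {frm})
                    delta = bic_local(to, trial) - base
                    if delta > best[0]:
                        best = (delta, frm, to)
            if best[1] is not None:
                _, frm, to = best
                (parents[to].add if op == "add" else parents[to].discard)(frm)
                improved = True
    return parents

# The "constraint": only pairs with non-negligible pairwise mutual
# information may ever be linked, which prunes the space of candidate
# edges before the greedy search starts (threshold is illustrative).
cands = [{j for j in range(N_VARS) if j != i and mutual_info(i, j) > 0.01}
         for i in range(N_VARS)]
parents = greedy_search(cands)
skeleton = {frozenset((f, t)) for t in range(N_VARS) for f in parents[t]}
```

On data generated from the chain, the recovered skeleton links X0-X1 and X1-X2 but not X0-X2: although X0 and X2 are marginally dependent (so the pruning step keeps that pair as candidates), the BIC penalty rejects the redundant edge once X1 is a parent, which is the intuition behind score-based pruning preserving correctness.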


Keywords: Equivalence Class · Bayesian Network · Directed Acyclic Graph · Local Search Algorithm · Greedy Search





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Juan I. Alonso-Barba (1)
  • Luis de la Ossa (1)
  • Jose A. Gámez (1)
  • Jose M. Puerta (1)

  1. Intelligent Systems and Data Mining Lab, Department of Computing Systems, Albacete Research Institute of Informatics, University of Castilla-La Mancha, Albacete, Spain
