MN-EDA and the Use of Clique-Based Factorisations in EDAs

  • Roberto Santana
Part of the Adaptation, Learning, and Optimization book series (ALO, volume 14)

Abstract

This chapter discusses the important role played by factorisations in the study of EDAs and presents the Markov network estimation of distribution algorithm (MN-EDA) as a classical example of EDAs based on undirected graphs. The chapter also reviews recent work on the use of clique-based decompositions and other approximation methods inspired by the field of statistical physics with direct application to EDAs.
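The abstract's core idea, learning a model over cliques of an undirected graph and sampling new solutions with a Gibbs sampler, can be illustrated with a minimal sketch. This is not the chapter's MN-EDA (which uses Kikuchi approximations); it is a hypothetical toy in which the clique set, Laplace smoothing, and sweep count are illustrative assumptions:

```python
import itertools
import random
from collections import Counter

def estimate_clique_marginals(population, cliques, card=2):
    """Estimate a marginal probability table for each clique from the
    selected population, with Laplace smoothing to avoid zero entries."""
    tables = {}
    n = len(population)
    for clique in cliques:
        counts = Counter(tuple(ind[v] for v in clique) for ind in population)
        n_configs = card ** len(clique)
        tables[clique] = {
            cfg: (counts.get(cfg, 0) + 1) / (n + n_configs)
            for cfg in itertools.product(range(card), repeat=len(clique))
        }
    return tables

def gibbs_sample(tables, cliques, n_vars, sweeps=50, card=2, rng=random):
    """Draw one new individual: starting from a random assignment, repeatedly
    resample each variable from the normalised product of the clique
    marginals that contain it."""
    x = [rng.randrange(card) for _ in range(n_vars)]
    for _ in range(sweeps):
        for i in range(n_vars):
            weights = []
            for value in range(card):
                x[i] = value
                w = 1.0
                for clique in cliques:
                    if i in clique:
                        w *= tables[clique][tuple(x[v] for v in clique)]
                weights.append(w)
            # Sample x[i] proportionally to the accumulated weights.
            r = rng.random() * sum(weights)
            acc = 0.0
            for value, w in enumerate(weights):
                acc += w
                if r <= acc:
                    x[i] = value
                    break
    return tuple(x)
```

In a full EDA loop these two steps would sit between selection and replacement: the marginals are re-estimated from the selected individuals each generation, and the Gibbs sampler generates the offspring population.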

Keywords

Graphical Model, Undirected Graph, Gibbs Sampler, Maximal Clique, Marginal Probability



Copyright information

© Springer Berlin Heidelberg 2012

Authors and Affiliations

  1. Intelligent Systems Group, Faculty of Informatics, University of the Basque Country (UPV/EHU), San Sebastián, Spain
