MOA - Markovian Optimisation Algorithm

  • Siddhartha Shakya
  • Roberto Santana
Part of the Adaptation, Learning, and Optimization book series (ALO, volume 14)


In this chapter we describe the Markovian Optimisation Algorithm (MOA), one of the recent developments in Markov network (MN) based estimation of distribution algorithms (EDAs). MOA uses the local Markov property to model variable dependencies and samples from them directly, without needing to approximate a complex joint probability distribution. Its workflow is much simpler than that of its global-property-based counterparts, since the expensive processes of finding cliques and of building and estimating clique potential functions are avoided. The chapter is intended as an introduction: it describes the motivation and workflow of MOA, and reviews some of the results obtained with it.
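The loop the abstract summarises, estimating local conditional probabilities from selected solutions and Gibbs-sampling new ones, can be illustrated with a minimal sketch. All names here are hypothetical, the fixed chain neighbourhood stands in for the structure MOA learns (e.g. via mutual information), and the OneMax fitness is a toy; this is an illustration of the idea, not the published algorithm:

```python
import random

def onemax(x):
    # Toy fitness: number of ones in the bit string.
    return sum(x)

def estimate_conditionals(selected, neighbors, n):
    # Estimate p(x_i = 1 | neighbourhood of i) by counting over the
    # selected population, with Laplace smoothing for unseen contexts.
    counts = {}
    for i in range(n):
        for sol in selected:
            ctx = tuple(sol[j] for j in neighbors[i])
            ones, total = counts.get((i, ctx), (0, 0))
            counts[(i, ctx)] = (ones + sol[i], total + 1)
    return {key: (ones + 1) / (total + 2)
            for key, (ones, total) in counts.items()}

def gibbs_sample(probs, neighbors, n, sweeps, rng):
    # Generate one solution by Gibbs sampling from the local conditionals,
    # resampling each variable given the current state of its neighbours.
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            ctx = tuple(x[j] for j in neighbors[i])
            p1 = probs.get((i, ctx), 0.5)  # unseen context: uniform
            x[i] = 1 if rng.random() < p1 else 0
    return x

def moa_sketch(n=20, pop_size=60, generations=30, sweeps=5, seed=0):
    rng = random.Random(seed)
    # Assumed fixed chain neighbourhood; MOA proper learns this structure.
    neighbors = [tuple(j for j in (i - 1, i + 1) if 0 <= j < n)
                 for i in range(n)]
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=onemax, reverse=True)
        selected = pop[: pop_size // 2]          # truncation selection
        probs = estimate_conditionals(selected, neighbors, n)
        children = [gibbs_sample(probs, neighbors, n, sweeps, rng)
                    for _ in range(pop_size)]
        pop = children + selected                # keep the selected set (elitism)
    return max(pop, key=onemax)

best = moa_sketch()
```

Note how the sketch never builds a joint model: each variable is resampled from its estimated conditional given its neighbours only, which is exactly the simplification the local Markov property buys.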


Keywords: Mutual Information, Bayesian Network, Gibbs Sampling, Joint Probability Distribution, Distribution Algorithm



  1. Alden, M.A.: MARLEDA: Effective Distribution Estimation Through Markov Random Fields. PhD thesis, Faculty of the Graduate School, University of Texas at Austin, USA (December 2007)
  2. Baluja, S., Davies, S.: Using optimal dependency-trees for combinatorial optimization: Learning the structure of the search space. In: Proceedings of the 14th International Conference on Machine Learning, pp. 30–38. Morgan Kaufmann (1997)
  3. Besag, J.: Spatial interactions and the statistical analysis of lattice systems (with discussions). Journal of the Royal Statistical Society 36, 192–236 (1974)
  4. Echegoyen, C., Lozano, J.A., Santana, R., Larrañaga, P.: Exact Bayesian network learning in estimation of distribution algorithms. In: Proceedings of the 2007 Congress on Evolutionary Computation CEC 2007, pp. 1051–1058. IEEE Press (2007)
  5. Etxeberria, R., Larrañaga, P.: Global optimization using Bayesian networks. In: Ochoa, A., Soto, M.R., Santana, R. (eds.) Proceedings of the Second Symposium on Artificial Intelligence (CIMAF 1999), Havana, Cuba, pp. 151–173 (1999)
  6. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. In: Fischler, M.A., Firschein, O. (eds.) Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pp. 564–584. Kaufmann, Los Altos (1987)
  7. Goldberg, D.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley (1989)
  8. Henrion, M.: Propagating uncertainty in Bayesian networks by probabilistic logic sampling. In: Lemmer, J.F., Kanal, L.N. (eds.) Uncertainty in Artificial Intelligence 2, pp. 149–163. North-Holland, Amsterdam (1988)
  9. Kikuchi, R.: A Theory of Cooperative Phenomena. Physical Review 81, 988–1003 (1951)
  10. Larrañaga, P., Etxeberria, R., Lozano, J.A., Peña, J.M.: Combinatorial optimization by learning and simulation of Bayesian networks. In: Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, Stanford, pp. 343–352 (2000)
  11. Larrañaga, P., Lozano, J.A.: Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer Academic Publishers (2002)
  12. Lauritzen, S.L.: Graphical Models. Oxford University Press (1996)
  13. Lauritzen, S.L., Spiegelhalter, D.J.: Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society B 50, 157–224 (1988)
  14. Li, S.Z.: Markov Random Field modeling in computer vision. Springer (1995)
  15. Metropolis, N.: Equation of state calculations by fast computing machines. Journal of Chemical Physics 21, 1087–1091 (1953)
  16. Mitchell, M.: An Introduction To Genetic Algorithms. MIT Press, Cambridge (1997)
  17. Mühlenbein, H., Mahnig, T., Ochoa, A.R.: Schemata, distributions and graphical models in evolutionary optimization. Journal of Heuristics 5(2), 215–247 (1999)
  18. Mühlenbein, H., Paaß, G.: From Recombination of Genes to the Estimation of Distributions: I. Binary Parameters. In: Ebeling, W., Rechenberg, I., Voigt, H.-M., Schwefel, H.-P. (eds.) PPSN 1996. LNCS, vol. 1141, pp. 178–187. Springer, Heidelberg (1996)
  19. Murphy, K.: Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, University of California, Berkeley (2002)
  20. Murray, I., Ghahramani, Z.: Bayesian Learning in Undirected Graphical Models: Approximate MCMC algorithms. In: Twentieth Conference on Uncertainty in Artificial Intelligence (UAI 2004), Banff, Canada, July 8-11 (2004)
  21. Ochoa, A., Soto, M.R., Santana, R., Madera, J., Jorge, N.: The factorized distribution algorithm and the junction tree: A learning perspective. In: Ochoa, A., Soto, M.R., Santana, R. (eds.) Proceedings of the Second Symposium on Artificial Intelligence (CIMAF 1999), Havana, Cuba, March 1999, pp. 368–377 (1999)
  22. Pearl, J.: Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers, Palo Alto (1988)
  23. Pelikan, M.: Bayesian optimization algorithm: From single level to hierarchy. PhD thesis, University of Illinois at Urbana-Champaign, Urbana, IL, Also IlliGAL Report No. 2002023 (2002)
  24. Pelikan, M., Goldberg, D.E., Cantú-Paz, E.: BOA: The Bayesian Optimization Algorithm. In: Banzhaf, W., et al. (eds.) Proceedings of the Genetic and Evolutionary Computation Conference GECCO 1999, vol. I, pp. 525–532. Morgan Kaufmann, San Francisco (1999)
  25. Pelikan, M., Sastry, K., Butz, M.V., Goldberg, D.E.: Hierarchical BOA on random decomposable problems. IlliGAL Report No. 2006002, University of Illinois at Urbana-Champaign, Illinois Genetic Algorithms Laboratory, Urbana, IL (January 2006)
  26. Santana, R.: A Markov Network Based Factorized Distribution Algorithm for Optimization. In: Lavrač, N., Gamberger, D., Todorovski, L., Blockeel, H. (eds.) ECML 2003. LNCS (LNAI), vol. 2837, pp. 337–348. Springer, Heidelberg (2003)
  27. Santana, R.: Estimation of Distribution Algorithms with Kikuchi Approximations. Evolutionary Computation 13, 67–98 (2005)
  28. Santana, R., Bielza, C., Larrañaga, P., Lozano, J.A., Echegoyen, C., Mendiburu, A., Armañanzas, R., Shakya, S.: MATEDA 2.0: Estimation of distribution algorithms in MATLAB. Journal of Statistical Software 35(7), 1–30 (2010)
  29. Shakya, S.: DEUM: A Framework for an Estimation of Distribution Algorithm based on Markov Random Fields. PhD thesis, The Robert Gordon University, Aberdeen, UK (April 2006)
  30. Shakya, S., Brownlee, A., McCall, J., Fournier, F., Owusu, G.: DEUM – A Fully Multivariate EDA Based on Markov Networks. In: Chen, Y.-p. (ed.) Exploitation of Linkage Learning. ALO, vol. 3, pp. 71–93. Springer, Heidelberg (2010)
  31. Shakya, S., McCall, J.: Optimisation by Estimation of Distribution with DEUM framework based on Markov Random Fields. International Journal of Automation and Computing 4, 262–272 (2007)
  32. Shakya, S., McCall, J., Brown, D.: Updating the probability vector using MRF technique for a univariate EDA. In: Onaindia, E., Staab, S. (eds.) Proceedings of the Second Starting AI Researchers’ Symposium. Frontiers in Artificial Intelligence and Applications, vol. 109, pp. 15–25. IOS Press, Valencia (2004)
  33. Shakya, S., McCall, J., Brown, D.: Using a Markov Network Model in a Univariate EDA: An Empirical Cost-Benefit Analysis. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 727–734. ACM, Washington, D.C. (2005)
  34. Shakya, S., McCall, J., Brown, D.: Solving the Ising spin glass problem using a bivariate EDA based on Markov Random Fields. In: Proceedings of IEEE Congress on Evolutionary Computation (IEEE CEC 2006), pp. 3250–3257. IEEE Press, Vancouver (2006)
  35. Shakya, S., Santana, R.: An EDA based on local Markov property and Gibbs sampling. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2008). ACM Press, Atlanta (2008)
  36. Shakya, S., Santana, R.: A markovianity based optimisation algorithm. Technical Report EHU-KZAA-IK-3/08, Department of Computer Science and Artificial Intelligence, University of the Basque Country (September 2008)
  37. Shakya, S., Santana, R.: A markovianity based optimisation algorithm. Genetic Programming and Evolvable Machines (2011) (in press)

Copyright information

© Springer Berlin Heidelberg 2012

Authors and Affiliations

  1. Business Modelling and Operational Transformation Practice, BT Innovate & Design, Ipswich, UK
  2. Intelligent Systems Group, Faculty of Informatics, University of the Basque Country (UPV/EHU), San Sebastián, Spain
