Adaptive Estimation of Distribution Algorithms

  • Roberto Santana
  • Pedro Larrañaga
  • José A. Lozano
Part of the Studies in Computational Intelligence book series (SCI, volume 136)


Estimation of distribution algorithms (EDAs) are evolutionary methods that use probabilistic models instead of genetic operators to guide the search. Most current EDA proposals do not incorporate adaptive techniques: the class of probabilistic model employed, as well as the learning and sampling methods, is usually static. In this paper, we present a general framework for introducing adaptation in EDAs. This framework allows the class of probabilistic models to change during evolution. We present a number of measures and techniques that can be used to evaluate the effect of the EDA components in order to design adaptive EDAs. As a case study, we present an adaptive EDA that combines different classes of probabilistic models and sampling methods. The algorithm is evaluated on the satisfiability (SAT) problem.
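The abstract's core idea, replacing genetic operators with a learned probabilistic model that is sampled to produce the next population, can be illustrated with a minimal univariate (UMDA-style) EDA on MAX-SAT. This is only a sketch of the general EDA loop, not the adaptive algorithm of the paper: the `umda_sat` function, its parameters, and the clamping constants are illustrative choices, and the model class here is fixed rather than adaptive.

```python
import random

def umda_sat(clauses, n_vars, pop_size=100, select=50, generations=50, seed=0):
    """Minimal UMDA-style EDA sketch for MAX-SAT.

    `clauses` is a list of clauses; each clause is a list of signed
    integers in DIMACS style: 3 means x3 is true, -3 means x3 is false.
    """
    rng = random.Random(seed)

    def satisfied(assign, clause):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    def fitness(assign):
        # Number of satisfied clauses.
        return sum(satisfied(assign, c) for c in clauses)

    # Start from a uniform univariate model: p[i] = P(x_i = True).
    p = [0.5] * n_vars
    best, best_fit = None, -1
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = [[rng.random() < p[i] for i in range(n_vars)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best_fit:
            best, best_fit = pop[0], fitness(pop[0])
        # Re-estimate the marginals from the truncation-selected set,
        # clamping to keep every probability away from 0 and 1.
        sel = pop[:select]
        for i in range(n_vars):
            freq = sum(ind[i] for ind in sel) / len(sel)
            p[i] = min(0.95, max(0.05, freq))
    return best, best_fit

# Toy 3-variable instance; its single satisfying assignment is found quickly.
clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]
best, best_fit = umda_sat(clauses, 3)
print(best_fit)  # number of clauses satisfied by the best assignment found
```

An adaptive EDA in the sense of the paper would additionally monitor the search and switch the model class (e.g. from univariate marginals to a richer factorization) or the sampling method during evolution, rather than keeping them fixed as above.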


Keywords: estimation of distribution algorithm · adaptive probabilistic model · SAT





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Roberto Santana¹
  • Pedro Larrañaga¹
  • José A. Lozano¹
  1. Intelligent Systems Group, Department of Computer Science and Artificial Intelligence, University of the Basque Country, Donostia-San Sebastián, Spain
