Diversity Loss in General Estimation of Distribution Algorithms
A very general class of EDAs is defined, for which universal results on the rate of diversity loss can be derived. This EDA class, denoted SML-EDA, requires two restrictions: 1) in each generation, the new probability model is built using only data sampled from the current probability model; and 2) maximum likelihood is used to set the model parameters. This class is very general; it includes simple forms of many well-known EDAs, e.g. BOA, MIMIC, FDA, UMDA, etc. To study the diversity loss in SML-EDAs, the trace of the empirical covariance matrix is proposed as the statistic. Two simple results are derived. Let N be the number of data vectors evaluated in each generation. It is shown that on a flat landscape, the expected value of the statistic decreases by a factor 1 − 1/N in each generation. This result is used to show that for the Needle problem, the algorithm will with high probability never find the optimum unless the population size grows exponentially in the number of search variables.
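The flat-landscape result is easy to check empirically. The sketch below is an illustrative simulation, not code from the paper: it runs UMDA (one of the simple SML-EDA members named above) on a flat landscape with binary variables, where the trace of the empirical covariance matrix reduces to Σᵢ p̂ᵢ(1 − p̂ᵢ), and verifies that the trace shrinks by roughly 1 − 1/N per generation. All parameter values (20 variables, N = 50, 1000 runs) are arbitrary choices for the demonstration.

```python
import numpy as np

def umda_flat(n_vars=20, pop_size=50, generations=30, runs=1000, seed=0):
    """Simulate UMDA (a simple SML-EDA) on a flat landscape.

    On a flat landscape there is no selection, so each generation just
    samples N vectors from the current marginals and refits them by
    maximum likelihood (the column means). For binary variables the
    trace of the empirical covariance matrix is sum_i p_i * (1 - p_i).
    Returns the trace averaged over independent runs, per generation.
    """
    rng = np.random.default_rng(seed)
    traces = np.zeros(generations + 1)
    for _ in range(runs):
        p = np.full(n_vars, 0.5)                     # uniform initial model
        traces[0] += np.sum(p * (1 - p))
        for g in range(1, generations + 1):
            pop = rng.random((pop_size, n_vars)) < p  # sample N data vectors
            p = pop.mean(axis=0)                      # ML parameter estimate
            traces[g] += np.sum(p * (1 - p))
    return traces / runs

tr = umda_flat()
ratios = tr[1:] / tr[:-1]
# The expected trace should shrink by 1 - 1/N = 0.98 each generation.
print(ratios[:5])
```

Because maximum likelihood uses the biased (divide-by-N) variance estimate, the expectation of the refitted variance given the current model is exactly (1 − 1/N) times the current variance, which is the mechanism behind the universal decay rate stated in the abstract.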
Keywords: Probability Model, Diversity Loss, Distribution Algorithm, Search Variable, Bayesian Optimization Algorithm