Abstract
We first show that any genetic algorithm can be approximated by an algorithm which keeps the population in linkage equilibrium; the genetic population is then represented as a product of univariate marginal distributions. We describe a simple algorithm which keeps the population in linkage equilibrium, the univariate marginal distribution algorithm (UMDA). Our main result is that UMDA transforms the discrete optimization problem into a continuous one, defined by the average fitness W(p_1, ..., p_n) as a function of the univariate marginal distributions p_i. For proportionate selection, UMDA performs gradient ascent in the landscape defined by W(p). We derive a difference equation for p_i which was already proposed by Wright in population genetics. We show that UMDA solves difficult multimodal optimization problems. For functions with highly correlated variables, UMDA has to be extended. The factorized distribution algorithm (FDA) uses a factorization into marginal and conditional distributions. For decomposable functions the optimal factorization can be computed explicitly; in general it has to be computed from the data. This is done by LFDA, which uses a Bayesian network to represent the distribution. Computing the network structure from the data is called learning in Bayesian network theory. The problem of finding a minimal structure which explains the data is discussed in detail. It is shown that the Bayesian information criterion is a good score for this problem.
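The core loop of UMDA described above can be sketched in a few lines: sample a population from the product of univariate marginals, apply selection, and re-estimate each marginal from the selected sample. The following is a minimal illustration using proportionate selection on the OneMax function; the population size, generation count, and fitness function are assumptions made for this sketch, not parameters from the chapter.

```python
import random

def onemax(x):
    # number of ones; maximal at x = (1, ..., 1)
    return sum(x)

def umda(n=20, pop_size=500, generations=100, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n  # univariate marginals p_i = Prob(x_i = 1)
    for _ in range(generations):
        # sample the population from the product distribution,
        # i.e. the population stays in linkage equilibrium
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        weights = [onemax(x) for x in pop]
        total = sum(weights)
        # proportionate selection: re-estimate each marginal as the
        # fitness-weighted frequency of x_i = 1
        p = [sum(w * x[i] for w, x in zip(weights, pop)) / total
             for i in range(n)]
    return p

marginals = umda()
```

Under proportionate selection the marginals drift toward the optimum (all p_i near 1 for OneMax), consistent with the gradient-ascent interpretation of the dynamics stated in the abstract.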
Real World Computing Partnership
GMD — Forschungszentrum Informationstechnik
References
H. Asoh and H. Mühlenbein. On the mean convergence time of evolutionary algorithms without selection and mutation. In Y. Davidor, H.-P. Schwefel, and R. Männer, editors, Proceedings of the 3rd Conference on Parallel Problem Solving from Nature, LNCS 866, pages 88–97. Springer-Verlag, Berlin Heidelberg New York, 1994.
S. Baluja and R. Caruana. Removing the genetics from the standard genetic algorithm. In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning, pages 38–46. Morgan Kaufmann, San Francisco, 1995.
R. R. Bouckaert. Properties of Bayesian network learning algorithms. In R. Lopez de Mantaras and D. Poole, editors, Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pages 102–109. Morgan Kaufmann, San Francisco, 1994.
F. B. Christiansen and M. W. Feldman. Algorithms, genetics and populations: the schemata theorem revisited. Complexity, 3:57–64, 1998.
M. Dorigo and G. Di Caro. The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization. McGraw-Hill, New York, 1999.
R. Feistel and W. Ebeling. Evolution of Complex Systems. Self-Organization Entropy and Development. Kluwer, Dordrecht, 1989.
B. J. Frey. Graphical Models for Machine Learning and Digital Communication. MIT Press, Cambridge, 1998.
H. Geiringer. On the probability theory of linkage in Mendelian heredity. Annals of Math. Stat., 15:25–57, 1944.
D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
G. Harik. Linkage learning via probabilistic modeling in the ECGA. Technical Report IlliGAL 99010, University of Illinois, Urbana-Champaign, 1999.
J. Hofbauer and K. Sigmund. Evolutionary Games and Population Dynamics. Cambridge University Press, Cambridge, 1998.
J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975/1992.
M. I. Jordan. Learning in Graphical Models. MIT Press, Cambridge, 1999.
M. Mitchell, J. H. Holland, and S. Forrest. When will a genetic algorithm outperform hill climbing? Advances in Neural Information Processing Systems, 6:51–58, 1994.
H. Mühlenbein and T. Mahnig. Convergence theory and applications of the factorized distribution algorithm. Journal of Computing and Information Technology, 7:19–32, 1999.
H. Mühlenbein and T. Mahnig. FDA — A scalable evolutionary algorithm for the optimization of additively decomposed functions. Evolutionary Computation, 7(4):353–376, 1999.
H. Mühlenbein. Evolution in time and space — the parallel genetic algorithm. In G. Rawlins, editor, Foundations of Genetic Algorithms, pages 316–337. Morgan Kaufmann, San Francisco, 1991.
H. Mühlenbein. The equation for the response to selection and its use for prediction. Evolutionary Computation, 5(3):303–346, 1997.
H. Mühlenbein, M. Gorges-Schleuter, and O. Krämer. Evolution algorithms in combinatorial optimization. Parallel Computing, 7:65–88, 1988.
H. Mühlenbein, T. Mahnig, and A. Rodriguez Ochoa. Schemata, distributions and graphical models in evolutionary optimization. Journal of Heuristics, 5:215–247, 1999.
H. Mühlenbein and D. Schlierkamp-Voosen. The science of breeding and its application to the breeder genetic algorithm. Evolutionary Computation, 1:335–360, 1994.
H. Mühlenbein and H.-M. Voigt. Gene pool recombination in genetic algorithms. In J. P. Kelly and I. H. Osman, editors, Metaheuristics: Theory and Applications, pages 53–62. Kluwer Academic, Norwell, 1996.
T. Nagylaki. Introduction to Theoretical Population Genetics. Biomathematics, Vol. 21. Springer-Verlag, Berlin Heidelberg New York, 1992.
M. Pelikan, D. E. Goldberg, and E. Cantú-Paz. BOA: The Bayesian optimization algorithm. Technical Report IlliGAL 99003, University of Illinois, Urbana-Champaign, 1999.
M. Peschel and W. Mende. Predator-Prey-Model: Do We Live in a Volterra World? Akademie-Verlag, Berlin, 1986.
A. Prügel-Bennett and J. L. Shapiro. An analysis of a genetic algorithm for simple random Ising systems. Physica D, 104:75–114, 1997.
L. M. Rattray and J. L. Shapiro. Cumulant dynamics of a population under multiplicative selection, mutation and drift. Theoretical Population Biology. To be published, 1999.
G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
H.-M. Voigt. Evolution and Optimization. Akademie-Verlag, Berlin, 1989.
M. Vose. The Simple Genetic Algorithm: Foundations and Theory. MIT Press, Cambridge, 1999.
S. Wright. Random drift and the shifting balance theory of evolution. In K. Kojima, editor, Mathematical Topics in Population Genetics. Springer-Verlag, Berlin Heidelberg New York, 1970.
Byoung-Tak Zhang, P. Ohm, and H. Mühlenbein. Evolutionary induction of sparse neural trees. Evolutionary Computation, 5:213–236, 1997.
© 2001 Springer-Verlag Berlin Heidelberg
Mühlenbein, H., Mahnig, T. (2001). Evolutionary Algorithms: From Recombination to Search Distributions. In: Kallel, L., Naudts, B., Rogers, A. (eds) Theoretical Aspects of Evolutionary Computing. Natural Computing Series. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-04448-3_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-08676-2
Online ISBN: 978-3-662-04448-3