A Unified Bayesian Framework for Evolutionary Learning and Optimization

  • Byoung-Tak Zhang
Part of the Natural Computing Series (NCS)


A probabilistic evolutionary framework is presented and shown to be applicable to both learning and optimization problems. In this framework, evolutionary computation is viewed as Bayesian inference that iteratively updates the posterior distribution of a population from prior knowledge and observations of new individuals, in order to find an individual with maximum posterior probability. Theoretical foundations of Bayesian evolutionary computation are given, and its generality is demonstrated by presenting specific Bayesian evolutionary algorithms for learning and optimization. We also discuss how the probabilistic framework can be used to develop novel evolutionary algorithms that embed evolutionary learning in evolutionary optimization, and vice versa.
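The iterative posterior update described in the abstract can be sketched in code. The following is a minimal illustration, not the chapter's own algorithm: it maintains a per-bit Bernoulli model of the population with conjugate Beta pseudo-counts, samples individuals from the current distribution, and updates the posterior from the selected (observed) individuals. All names and parameter values here are illustrative assumptions.

```python
import random

def bayesian_evolutionary_search(fitness, n_bits=20, pop_size=50,
                                 n_select=10, generations=60, seed=0):
    """Illustrative sketch: a distribution over bitstring individuals is
    updated each generation from selected samples, using a conjugate
    Beta-Bernoulli update per bit (an assumption for this example)."""
    rng = random.Random(seed)
    # Prior: Beta(1, 1) pseudo-counts for each bit position.
    alpha = [1.0] * n_bits
    beta = [1.0] * n_bits
    best = None
    for _ in range(generations):
        # Sample a population from the current posterior predictive.
        pop = [[1 if rng.random() < a / (a + b) else 0
                for a, b in zip(alpha, beta)] for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # Observation step: update pseudo-counts from selected individuals.
        for ind in pop[:n_select]:
            for i, bit in enumerate(ind):
                if bit:
                    alpha[i] += 1
                else:
                    beta[i] += 1
    return best

# Usage: maximize OneMax (the number of ones in the bitstring).
best = bayesian_evolutionary_search(sum, n_bits=20)
```

Under this scheme, selection pressure concentrates the posterior on high-fitness regions, so the sampled population converges toward an individual of maximal posterior probability, which is the iterative picture the framework describes.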


Keywords: Posterior distribution · Evolutionary computation · Bayesian framework · Search point · Simple genetic algorithm




Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Byoung-Tak Zhang
    Biointelligence Laboratory, School of Computer Science and Engineering, Seoul National University, Seoul, Korea
