
Empirical analysis of the factors that affect the Baldwin effect

  • Kim W. C. Ku
  • M. W. Mak
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1498)

Abstract

Incorporating learning into genetic algorithms via the Baldwin effect is a popular approach to improving their convergence. However, the expected improvement is not always obtained, mainly because the factors that affect the Baldwin effect are poorly understood. This paper provides evidence that the Baldwin effect is significantly affected by how difficult it is for the genetic operations to produce genotypic changes that match the phenotypic changes due to learning. The results suggest that carelessly combining a genetic algorithm with whatever learning method is available is not a sound way to construct hybrid algorithms; instead, the correlation between the genetic operations and the learning methods must be carefully considered.
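
To make the Baldwinian scheme concrete, the sketch below contrasts Baldwinian and Lamarckian evaluation inside a minimal genetic algorithm. Everything in it is an illustrative assumption rather than the paper's actual experimental setup: a toy OneMax fitness, a bit-flip hill-climber standing in for the learning method, and uniform crossover with bit-flip mutation as the genetic operations.

import random

# Hypothetical toy setup for illustration only (not from the paper):
# maximise the number of 1-bits in a fixed-length bit string (OneMax).

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUT_RATE = 0.05
LEARNING_STEPS = 5

def raw_fitness(genome):
    # Toy fitness: count of 1-bits.
    return sum(genome)

def local_search(genome, steps=LEARNING_STEPS):
    # "Learning": hill-climb by flipping single bits; improves the
    # phenotype without touching the genotype.
    phenotype = list(genome)
    for _ in range(steps):
        trial = list(phenotype)
        i = random.randrange(len(trial))
        trial[i] ^= 1
        if raw_fitness(trial) > raw_fitness(phenotype):
            phenotype = trial
    return phenotype

def evaluate(genome, baldwinian=True):
    learned = local_search(genome)
    if baldwinian:
        # Baldwin effect: the learned fitness guides selection,
        # but the genotype itself is left unchanged.
        return raw_fitness(learned), genome
    # Lamarckian alternative: write the learned phenotype back.
    return raw_fitness(learned), learned

def evolve(baldwinian=True):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = [evaluate(g, baldwinian) for g in pop]
        scored.sort(key=lambda fg: fg[0], reverse=True)
        parents = [g for _, g in scored[:POP_SIZE // 2]]
        children = []
        while len(children) < POP_SIZE:
            a, b = random.sample(parents, 2)
            # Uniform crossover followed by bit-flip mutation.
            child = [random.choice(pair) for pair in zip(a, b)]
            child = [bit ^ (random.random() < MUT_RATE) for bit in child]
            children.append(child)
        pop = children
    return max(raw_fitness(local_search(g)) for g in pop)

if __name__ == "__main__":
    print("Baldwinian:", evolve(baldwinian=True))
    print("Lamarckian:", evolve(baldwinian=False))

Note that in this toy setting the learning moves (single bit flips) coincide exactly with the mutation operator, so genotypic changes can easily reproduce the learned phenotypic changes; the paper's argument is that when the genetic operations and the learning method are poorly correlated in this sense, the expected benefit of the Baldwin effect may not materialise.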

Keywords

Genetic Algorithm · Mean Square Error · Learning Method · Phenotypic Change · Hybrid Algorithm



Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Kim W. C. Ku¹
  • M. W. Mak¹

  1. Department of Electronic Engineering, The Hong Kong Polytechnic University, Hong Kong
