Empirical analysis of the factors that affect the Baldwin effect
The inclusion of learning in genetic algorithms, based on the Baldwin effect, is a popular approach to improving the convergence of genetic algorithms. However, the expected improvement is not always obtained, mainly because the factors that affect the Baldwin effect are poorly understood. This paper aims at providing sufficient evidence to confirm that the level of difficulty for genetic operations to produce genotypic changes that match the phenotypic changes due to learning can significantly affect the Baldwin effect. The results suggest that inattentively combining a genetic algorithm with whatever learning method is available is not a proper way to construct a hybrid algorithm; instead, the correlation between the genetic operations and the learning method has to be carefully considered.
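The mechanism at issue — lifetime learning that raises an individual's fitness without being written back into its genotype (Baldwinian), as opposed to learning whose result is inherited (Lamarckian) — can be sketched as follows. This is a minimal illustration under assumed choices: the toy objective and the hill-climbing "learning" routine are hypothetical stand-ins, not the paper's experimental setup.

```python
import random

def fitness(x):
    # Toy objective (assumed for illustration): peak at x = (0.5, 0.5, ...).
    return -sum((xi - 0.5) ** 2 for xi in x)

def learn(x, steps=20, step_size=0.05):
    # Lifetime learning as simple hill climbing on a copy of the genotype;
    # only improving moves are accepted, so fitness never decreases.
    best = list(x)
    for _ in range(steps):
        candidate = [xi + random.uniform(-step_size, step_size) for xi in best]
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

def evaluate_baldwinian(genotype):
    # Baldwinian: fitness is taken from the learned phenotype,
    # but the genotype itself is left unchanged.
    phenotype = learn(genotype)
    return fitness(phenotype)

def evaluate_lamarckian(genotype):
    # Lamarckian: the learned phenotype is written back into the genotype.
    phenotype = learn(genotype)
    genotype[:] = phenotype
    return fitness(genotype)
```

In the Baldwinian case the genetic operations must still discover, on their own, genotypic changes that reproduce what learning achieved phenotypically — which is why the difficulty of that match matters.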
Keywords: Genetic Algorithm, Mean Square Error, Learning Method, Phenotypic Change, Hybrid Algorithm