Memetic Algorithms

Part of the Operations Research/Computer Science Interfaces Series book series (ORCS, volume 36)


This chapter introduces and analyzes a memetic algorithm for the training of artificial neural networks, more specifically multilayer perceptrons. Our memetic algorithm is proposed as an alternative to gradient search methods, such as backpropagation, which have shown limitations when dealing with rugged landscapes containing many poor local optima. The aim of our work is to design a training strategy that can cope with difficult error manifolds and quickly deliver trained neural networks that produce small errors. A method such as the one we propose might also be used as an “online” training strategy.
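The general scheme described above (an evolutionary population of candidate weight vectors, each refined by local search) can be sketched as follows. This is an illustrative reconstruction, not the chapter's actual algorithm: the XOR task, the network size, the hill-climbing local search, and all parameter values are assumptions chosen for brevity.

```python
import math
import random

random.seed(0)

# Toy task (assumed for illustration): learn XOR with a 2-H-1 perceptron.
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

H = 3                      # hidden units (assumed)
N_W = 2 * H + H + H + 1    # input->hidden, hidden biases, hidden->out, out bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """Evaluate the perceptron encoded as one flat weight vector w."""
    h = [sigmoid(w[2 * i] * x[0] + w[2 * i + 1] * x[1] + w[2 * H + i])
         for i in range(H)]
    o = sum(w[3 * H + i] * h[i] for i in range(H)) + w[4 * H]
    return sigmoid(o)

def error(w):
    """Sum-of-squares training error: the fitness to be minimized."""
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def local_search(w, steps=20, sigma=0.3):
    """Stochastic hill climbing: the 'memetic' refinement step."""
    best, best_e = w[:], error(w)
    for _ in range(steps):
        cand = [wi + random.gauss(0.0, sigma) for wi in best]
        e = error(cand)
        if e < best_e:
            best, best_e = cand, e
    return best

def memetic_train(pop_size=20, generations=60):
    # Random initial population, each individual locally refined.
    pop = [local_search([random.uniform(-1, 1) for _ in range(N_W)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            # Arithmetic crossover plus Gaussian mutation, then local search
            # (Lamarckian: the refined weights re-enter the population).
            child = [(ai + bi) / 2 + random.gauss(0.0, 0.1)
                     for ai, bi in zip(a, b)]
            children.append(local_search(child))
        pop = elite + children
    return min(pop, key=error)

best = memetic_train()
print("final training error:", round(error(best), 3))
```

The division of labor is the defining trait of a memetic algorithm: recombination explores the rugged error landscape globally, while the per-individual local search exploits each basin of attraction, which is what gradient methods alone struggle with when many poor local optima are present.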

Key words

Memetic algorithms, neural networks, metaheuristic algorithms, evolutionary algorithms





Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  1. School of Computer Science and I.T., University of Nottingham, England
  2. Departamento de Economía Aplicada, University of Burgos, Spain
