Abstract
Learning in artificial neural networks is often cast as the problem of “teaching” a set of stimulus-response (or input-output) pairs to an appropriate mathematical model that abstracts certain known properties of neural networks. A paradigm developed independently of neural network models is the genetic algorithm (GA). In this paper we introduce a mathematical framework for the manner in which genetic algorithms can learn, and show that gradient descent can be used in this framework as well. To develop this theory, we use a class of stochastic genetic algorithms based on a population of chromosomes with mutation and crossover, as well as fitness, which we described earlier in [18].
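To make the ingredients named in the abstract concrete, the following is a minimal sketch of a generic stochastic genetic algorithm with a population of chromosomes, fitness-proportional selection, single-point crossover, and bit-flip mutation. It is an illustrative toy (bitstring chromosomes and the standard "one-max" fitness are assumptions of this sketch), not the analytically solvable model of the paper itself.

```python
import random

random.seed(0)

def fitness(chrom):
    # Toy "one-max" fitness: count of 1-bits in the chromosome.
    return sum(chrom)

def select(pop):
    # Fitness-proportional (roulette-wheel) selection of two parents.
    total = sum(fitness(c) for c in pop)
    weights = [fitness(c) / total for c in pop] if total else None
    return random.choices(pop, weights=weights, k=2)

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent to a suffix of the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chrom, rate=0.01):
    # Independent bit-flip mutation at each position.
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

def evolve(pop_size=30, length=20, generations=50):
    # Random initial population, then repeated selection/crossover/mutation.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(*select(pop))) for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Under selection pressure the best chromosome's fitness should drift well above the random-initialization average of length/2, which is the qualitative "learning" behavior the paper studies analytically.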
References
Bharucha-Reid, A.T. “Elements of the Theory of Markov Processes and their Applications,” McGraw-Hill, New York (1960).
Kernighan, B. and Lin, S. “An efficient heuristic procedure for partitioning graphs”, The Bell System Technical Journal, (February 1970).
Holland, J. “Adaptation in Natural and Artificial Systems,” The University of Michigan Press, Ann Arbor, Michigan (1975).
De Jong, Kenneth A. “An Analysis of the Behaviour of a Class of Genetic Adaptive Systems”, Doctoral Thesis, Department of Computer and Communication Sciences, University of Michigan, Ann Arbor (1975).
Gelenbe, E. and Mitrani, I. “Analysis and Synthesis of Computer Systems”, Academic Press, New York and London (1980).
Gelenbe, E. and Pujolle, G. “Introduction to Networks of Queues”, J. Wiley and Sons, New York and London (1988), 2nd Edition (1998).
Mühlenbein, H., Gorges-Schleuter, M. and Krämer, O. “Evolution algorithms in combinatorial optimization”, Parallel Computing, Vol. 7, No. 2, (1988).
Goldberg, D. “Genetic Algorithms in Search, Optimization and Machine Learning”, Addison Wesley, New York (1989).
Gelenbe, E. “Multiprocessor Performance”. J. Wiley and Sons, New York and London (1989).
Talbi, E. and Bessière, P. “Un algorithme génétique massivement parallèle pour le problème de partitionnement de graphes”, Rapport de recherche, Laboratoire de Génie Informatique de Grenoble (1991).
Vose, M.D. and Liepins, G.E. “Punctuated equilibria in genetic search”, Complex Systems, Vol. 5, (1991) 31–44.
Vose, M.D. “Formalizing genetic algorithms”, Technical Report (CS-91-127), Department of Computer Science, The University of Tennessee (1991).
Vose, M.D. “Modeling simple genetic algorithms”, in Whitley, L.D. (ed.), “Foundations of Genetic Algorithms 2”, Morgan Kaufmann, San Mateo (1993).
Gelenbe, E. “Learning in the recurrent random neural network”, Neural Computation, Vol. 5, No. 1, (1993) 154–164.
Medhi, J. “Stochastic Processes,” 2nd Edition, Wiley Eastern Ltd, New Delhi (1994).
Rudolph, G. “Convergence analysis of canonical genetic algorithms”, IEEE Transactions on Neural Networks, Vol. 5, No. 1, (1994) 96–101.
Qi, X. and Palmieri, F. “Theoretical analysis of evolutionary algorithms with infinite population size, Parts I and II”, IEEE Transactions on Neural Networks, Vol. 5, No. 1, (1994) 102–129.
Gelenbe, E. “A class of genetic algorithms with analytical solution”, Robotics and Autonomous Systems, Vol. 22, (1997) 59–64.
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Gelenbe, E. (1998). Learning in genetic algorithms. In: Sipper, M., Mange, D., Pérez-Uribe, A. (eds) Evolvable Systems: From Biology to Hardware. ICES 1998. Lecture Notes in Computer Science, vol 1478. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0057628
DOI: https://doi.org/10.1007/BFb0057628
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64954-0
Online ISBN: 978-3-540-49916-9
eBook Packages: Springer Book Archive