Date: 29 Jun 2006

Learning in genetic algorithms

Abstract

Learning in artificial neural networks is often cast as the problem of “teaching” a set of stimulus-response (or input-output) pairs to an appropriate mathematical model that abstracts certain known properties of neural networks. A paradigm developed independently of neural network models is that of genetic algorithms (GA). In this paper we introduce a mathematical framework for the manner in which genetic algorithms can learn, and show that gradient descent can be used in this framework as well. To develop this theory, we use a class of stochastic genetic algorithms based on a population of chromosomes with mutation, crossover, and fitness, which we have described earlier in [18].
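
The following is a minimal sketch of the kind of stochastic genetic algorithm the abstract refers to: a population of chromosomes evolved by fitness-proportional selection, crossover, and mutation. It is a generic illustration only, not the specific formulation of [18]; the bit-string encoding, the one-max fitness function, and all parameter values are assumptions chosen for the example.

```python
# Generic stochastic GA sketch: population of chromosomes, fitness,
# crossover, and mutation. All parameters and the fitness function
# are hypothetical, not taken from [18].
import random

POP_SIZE = 50
CHROM_LEN = 20
MUTATION_RATE = 0.01
GENERATIONS = 100

def fitness(chrom):
    # Hypothetical fitness: number of 1-bits (the "one-max" toy problem).
    return sum(chrom)

def select(population, fitnesses):
    # Fitness-proportional (roulette-wheel) selection of one parent.
    return random.choices(population, weights=fitnesses, k=1)[0]

def crossover(parent_a, parent_b):
    # Single-point crossover producing one offspring.
    point = random.randint(1, CHROM_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chrom):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in chrom]

def run():
    population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        fitnesses = [fitness(c) for c in population]
        population = [mutate(crossover(select(population, fitnesses),
                                       select(population, fitnesses)))
                      for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    print("best fitness:", fitness(best))

if __name__ == "__main__":
    run()
```

In such a setting, "learning" amounts to driving the population toward chromosomes of higher fitness; the paper's contribution is a mathematical framework in which this process, like learning in neural networks, can also be analyzed in terms of gradient descent.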