Learning in genetic algorithms
Learning in artificial neural networks is often cast as the problem of “teaching” a set of stimulus-response (or input-output) pairs to a mathematical model that abstracts certain known properties of neural networks. A paradigm developed independently of neural network models is the genetic algorithm (GA). In this paper we introduce a mathematical framework for the manner in which genetic algorithms can learn, and show that gradient descent can be used in this framework as well. To develop this theory, we use a class of stochastic GAs based on a population of chromosomes with mutation, crossover, and fitness, which we have described earlier in .
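For illustration, the class of algorithm referred to above can be sketched as follows. This is a generic minimal GA over bit-string chromosomes with fitness-proportional selection, one-point crossover, and point mutation; all function names and parameter values here are illustrative assumptions, not the authors' specific formulation.

```python
import random

def evolve(fitness, pop_size=20, length=10, generations=50,
           mutation_rate=0.05, seed=0):
    """Minimal stochastic GA sketch (illustrative, not the paper's model)."""
    rng = random.Random(seed)
    # Initial population of random bit-string chromosomes.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional (roulette-wheel) selection weights.
        scores = [fitness(c) for c in pop]
        total = sum(scores)
        weights = [s / total for s in scores] if total > 0 else None
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = rng.choices(pop, weights=weights, k=2)
            # One-point crossover between the two parents.
            cut = rng.randint(1, length - 1)
            child = p1[:cut] + p2[cut:]
            # Point mutation: flip each bit with small probability.
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Usage: "one-max" fitness simply counts the 1-bits in a chromosome.
best = evolve(fitness=sum)
```

Note that this sketch is purely stochastic; the paper's contribution is a framework in which such algorithms can additionally be analyzed and driven via gradient descent.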