A Generative Learning Algorithm that Uses Structural Knowledge of the Input Domain Yields a Better Multi-layer Perceptron
Many classifier applications have been developed using the multi-layer perceptron (MLP) as their form of representation. The main difficulty in designing an architecture based on this model has largely stemmed from a poor understanding of what each of an MLP network's components embodies. If the input domain of a classification task is expressed as a subspace of R^N, the problem to solve consists of computing a segmentation of that domain such that every input point is assigned to a region of the space containing only points of the same class. An MLP network can achieve this if every weight vector is computed as the normal to one of the surfaces in the input domain that induce the same partitioning as the classification criteria of the problem for which the network is built. Because the Delaunay triangulation (DT) of a set of points records essentially everything about the proximity relations of the points from which it was derived, it provides an ideal source of information for computing the number and form of those weight vectors, making it possible to build an initial maximal network architecture for a particular problem.
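The construction hinted at above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual algorithm): it computes the Delaunay triangulation of a labelled point set with SciPy, and for every Delaunay edge joining points of different classes it places a candidate hidden unit whose weight vector is the normal to the perpendicular bisector of that edge. The function name `initial_hidden_units` and the bisector placement are assumptions made for the sake of the example.

```python
import numpy as np
from scipy.spatial import Delaunay

def initial_hidden_units(points, labels):
    """Hypothetical sketch: derive an initial set of MLP hidden-unit
    weights from the Delaunay triangulation of labelled input points.

    For every Delaunay edge whose endpoints carry different class
    labels, emit one hyperplane: the perpendicular bisector of the
    edge.  Its weight vector w = p - q is the normal to that plane,
    and the bias is chosen so the plane passes through the midpoint."""
    tri = Delaunay(points)

    # Collect the unique undirected edges of the triangulation.
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = sorted((simplex[i], simplex[j]))
                edges.add((a, b))

    weights, biases = [], []
    for a, b in sorted(edges):
        if labels[a] != labels[b]:          # edge crosses a class boundary
            p, q = points[a], points[b]
            w = p - q                        # normal to the separating plane
            m = (p + q) / 2.0                # midpoint of the edge
            weights.append(w)
            biases.append(-np.dot(w, m))     # plane: w.x + b = 0 through m
    return np.array(weights), np.array(biases)
```

Each returned (weight, bias) pair defines one candidate hidden unit; by construction, the unit's activation w.x + b is positive on one endpoint of its generating edge and negative on the other, so the resulting maximal layer separates every Delaunay-adjacent pair of differently labelled points. A subsequent pruning or training phase could then reduce this maximal architecture.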