Applications to Neural Networks
In this chapter, we study some applications of the results derived thus far to the problem of designing and/or training neural networks. The past decade has witnessed a tremendous surge of interest in the application of neural networks for a variety of purposes. In essence, almost all of these applications can be summarized as follows: Given a set of randomly generated data, and a family of neural networks all sharing a common “architecture,” construct a neural network from this family that best approximates the data, with high probability. Stated in this form, the problem is one of nonlinear curve-fitting or nonlinear regression. What distinguishes the use of neural networks for this purpose, as opposed to many other standard techniques, is the widespread belief that “neural networks can generalize.” In other words, it is believed that, after a neural network has been “trained” on a sufficiently large number of input-output pairs, it can then correctly predict all future input-output pairs, even for those inputs that the network has not seen previously. In the absence of the generalization ability, there is no reason to use a neural network merely to reproduce known input-output pairs — a simple table look-up scheme would serve just as well. It is shown in Section 3.2, specifically Example 3.6, that perfect generalization by a neural network is an impossibility.
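The nonlinear curve-fitting view described above can be made concrete with a small sketch: given randomly generated input-output pairs, fit the weights of a fixed one-hidden-layer architecture by gradient descent on the empirical squared error. Everything here (the target function, network width, learning rate, and step count) is an illustrative assumption, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Randomly generated data": inputs x and noisy targets y = f(x) + noise.
# The target f(x) = sin(3x) is an arbitrary choice for illustration.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(x.shape)

# Fixed "architecture": 1 -> H -> 1 with tanh hidden units.
H = 16
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)

lr, n = 0.1, len(x)
for step in range(4000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)        # hidden activations, shape (n, H)
    pred = h @ W2 + b2              # network outputs, shape (n, 1)
    err = pred - y
    # Backward pass: gradients of the mean squared error.
    g_pred = 2.0 * err / n
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h**2)   # tanh' = 1 - tanh^2
    gW1 = x.T @ g_h
    gb1 = g_h.sum(axis=0)
    # Gradient-descent update of all weights.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

The network only minimizes error on the training pairs; whether the fitted network also predicts well on inputs it has not seen is exactly the generalization question the chapter goes on to analyze.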