Artificial Neural Networks

Chapter
Part of the Modeling and Simulation in Science, Engineering and Technology book series (MSSET)

Abstract

In this chapter we explain some fundamental concepts used in neural networks (NNs), with particular attention to those applied to forecasting and data analysis. The structure of the back-propagation NN is presented in connection with its use for time-series analysis. The concepts of learning and training errors are explained in some detail, and the main types of learning algorithms are presented. We also give some exact estimates of the probability that the learning error differs from the training error by more than a small quantity, as well as a priori bounds on the learning error for the simple perceptron, derived from extreme-value theory. Since NNs can also be viewed as universal approximators of functions, we present some useful properties of NNs from this perspective in Chapter 5. These topics do not cover all the literature on NN theory and applications, but they give a good picture of the current state of the field. We also distinguish heuristic remarks from rigorously proven statements, using mathematical language for exactly proven facts and ordinary language for conjectures and hypotheses. This type of presentation suits the general aim of our book: to proceed from the very foundations of the theory to some complete applications of NNs. In the literature, especially in applications, proven facts and conjectures are often confused.
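To make the chapter's central objects concrete, the following is a minimal sketch (not taken from the chapter) of a one-hidden-layer network trained by back-propagation for one-step-ahead time-series prediction. The toy series, the lag window, and all hyperparameters are illustrative assumptions; the gap between the error on the training set and on the held-out set corresponds to the training-versus-learning error distinction discussed above.

```python
# Illustrative sketch: back-propagation NN for one-step-ahead forecasting.
# All names and hyperparameters are assumptions, not the authors' setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy series: noisy sine wave; predict x[t] from the previous `lag` values.
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

lag = 5
X = np.stack([series[i:i + lag] for i in range(series.size - lag)])
y = series[lag:]

# Train/held-out split: the difference between the two errors below is the
# (empirical) generalization gap.
n_train = 300
X_tr, y_tr, X_te, y_te = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

# One hidden layer with tanh units, linear output.
n_hidden = 10
W1 = 0.5 * rng.standard_normal((lag, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal(n_hidden)
b2 = 0.0

lr = 0.01
for epoch in range(2000):
    # Forward pass.
    h = np.tanh(X_tr @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                     # linear output
    err = pred - y_tr

    # Backward pass: gradients of the mean squared error.
    grad_out = 2 * err / n_train
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum()
    grad_h = np.outer(grad_out, W2) * (1 - h**2)   # through tanh
    gW1 = X_tr.T @ grad_h
    gb1 = grad_h.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

train_mse = np.mean((np.tanh(X_tr @ W1 + b1) @ W2 + b2 - y_tr) ** 2)
test_mse = np.mean((np.tanh(X_te @ W1 + b1) @ W2 + b2 - y_te) ** 2)
print(f"training error: {train_mse:.4f}, held-out error: {test_mse:.4f}")
```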

Keywords

Support Vector Machine · Partition Function · Simulated Annealing · Relative Entropy · Gibbs Measure


Copyright information

© Birkhäuser Boston 2006
