Abstract
We develop a new algorithm for training feedforward neural networks by stating the learning process as a parameter estimation problem, and we analyse its convergence and robustness properties. Two versions of the algorithm are discussed, differing in the way the training set is explored during learning. Simulation results, for both classification and function approximation problems, confirm the effectiveness of the proposed algorithm and its advantages over error back-propagation and extended Kalman filter-based learning.
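The abstract frames network training as parameter estimation and compares against extended Kalman filter (EKF) based learning. As background for that comparison, the sketch below shows a minimal EKF-style weight update for a one-hidden-layer network, treating the weight vector as the state to be estimated; it is not the paper's algorithm, and the network size, covariance initialisation, and noise variance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 1, 5
# Parameter vector theta packs both layers' weights and biases.
n_w1 = n_hid * (n_in + 1)          # hidden-layer weights + biases
n_theta = n_w1 + n_hid + 1         # plus output weights + bias
theta = 0.1 * rng.standard_normal(n_theta)

def forward(theta, x):
    """Scalar network output and its gradient w.r.t. theta (the EKF 'H' row)."""
    W1 = theta[:n_w1].reshape(n_hid, n_in + 1)
    w2 = theta[n_w1:]
    xb = np.append(x, 1.0)          # input with bias term
    h = np.tanh(W1 @ xb)
    hb = np.append(h, 1.0)          # hidden activations with bias
    y = w2 @ hb
    # dy/dW1[i,j] = w2_i * (1 - h_i^2) * xb_j ;  dy/dw2 = hb
    dW1 = np.outer(w2[:n_hid] * (1.0 - h**2), xb)
    return y, np.concatenate([dW1.ravel(), hb])

# EKF recursion over the training set: state = weights, measurement = target.
P = 10.0 * np.eye(n_theta)          # parameter covariance (assumption)
R = 0.01                            # measurement-noise variance (assumption)

X = np.linspace(-1.0, 1.0, 200)
T = np.sin(np.pi * X)               # toy function-approximation task

for _ in range(20):                 # a few sweeps through the training set
    for x, t in zip(X, T):
        y, H = forward(theta, np.array([x]))
        S = H @ P @ H + R           # innovation variance (scalar output)
        K = (P @ H) / S             # Kalman gain
        theta = theta + K * (t - y)
        P = P - np.outer(K, H @ P)
        P = 0.5 * (P + P.T)         # keep covariance symmetric

mse = np.mean([(forward(theta, np.array([x]))[0] - t) ** 2
               for x, t in zip(X, T)])
print(f"final MSE: {mse:.4f}")
```

Sweeping through the whole training set repeatedly, as above, is one of the two exploration strategies the abstract alludes to; a purely sequential, single-pass update is the other.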
Copyright information
© 1999 Springer-Verlag London Limited
Cite this paper
Alessandri, A., Maggiore, M., Sanguineti, M. (1999). Training Feedforward Neural Networks: Convergence and Robustness Analysis. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN VIETRI-98. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0811-5_28
DOI: https://doi.org/10.1007/978-1-4471-0811-5_28
Publisher Name: Springer, London
Print ISBN: 978-1-4471-1208-2
Online ISBN: 978-1-4471-0811-5
eBook Packages: Springer Book Archive