Training Feedforward Neural Networks: Convergence and Robustness Analysis

  • Conference paper
Neural Nets WIRN VIETRI-98

Part of the book series: Perspectives in Neural Computing


Abstract

We develop a new algorithm for training feedforward neural networks by formulating the learning process as a parameter estimation problem, and we provide an analysis of its convergence and robustness properties. Two versions of the algorithm are discussed, differing in the way the training set is explored during learning. Simulation results on both classification and function approximation problems confirm the effectiveness of the proposed algorithm and its advantages over error back-propagation and extended Kalman filter-based learning.
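The page reproduces only the abstract, not the algorithm itself. For context on the parameter-estimation viewpoint the abstract describes, and on the extended Kalman filter (EKF) learning it names as a comparison baseline, the following is a minimal sketch of EKF-style weight estimation for a one-hidden-layer network: the weights are treated as the constant state of a system observed through the network's input-output map, and each training pattern triggers one measurement update. The network shape, the noise variances q and r, and all function names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def unpack(theta, n_in, n_hid):
    """Split the flat parameter vector into layer weights and biases."""
    k = n_hid * n_in
    W1 = theta[:k].reshape(n_hid, n_in)   # input-to-hidden weights
    b1 = theta[k:k + n_hid]               # hidden biases
    w2 = theta[k + n_hid:k + 2 * n_hid]   # hidden-to-output weights
    b2 = theta[-1]                        # output bias
    return W1, b1, w2, b2

def forward(theta, x, n_in, n_hid):
    """One-hidden-layer feedforward network with a scalar output."""
    W1, b1, w2, b2 = unpack(theta, n_in, n_hid)
    return w2 @ np.tanh(W1 @ x + b1) + b2

def output_grad(theta, x, n_in, n_hid, eps=1e-6):
    """Finite-difference gradient of the output w.r.t. the parameters."""
    g = np.zeros_like(theta)
    f0 = forward(theta, x, n_in, n_hid)
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        g[i] = (forward(tp, x, n_in, n_hid) - f0) / eps
    return g

def ekf_train(xs, ys, n_hid=5, q=1e-5, r=1e-2, seed=0):
    """Treat the weights as the constant state of y = f(x; theta) + noise
    and refine them with one EKF measurement update per training pattern."""
    n_in = xs.shape[1]
    n_par = n_hid * n_in + 2 * n_hid + 1
    rng = np.random.default_rng(seed)
    theta = 0.1 * rng.standard_normal(n_par)
    P = np.eye(n_par)                              # parameter covariance
    for x, y in zip(xs, ys):
        H = output_grad(theta, x, n_in, n_hid)     # linearised measurement map
        S = H @ P @ H + r                          # innovation variance
        K = (P @ H) / S                            # Kalman gain
        theta = theta + K * (y - forward(theta, x, n_in, n_hid))
        P = P - np.outer(K, H @ P) + q * np.eye(n_par)
    return theta

# Toy usage: estimate weights approximating y = sin(x) on [-pi, pi].
xs = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
ys = np.sin(xs).ravel()
theta_hat = ekf_train(xs, ys)
```

The finite-difference gradient keeps the sketch self-contained; a practical EKF trainer would obtain the linearisation from back-propagated derivatives instead, and the paper's own algorithm differs from this baseline in ways the abstract does not detail.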






Copyright information

© 1999 Springer-Verlag London Limited

About this paper

Cite this paper

Alessandri, A., Maggiore, M., Sanguineti, M. (1999). Training Feedforward Neural Networks: Convergence and Robustness Analysis. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN VIETRI-98. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0811-5_28


  • DOI: https://doi.org/10.1007/978-1-4471-0811-5_28

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-1208-2

  • Online ISBN: 978-1-4471-0811-5

