Back Propagation

Artificial Neural Networks

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 931)

References

  • J.L. McClelland and D.E. Rumelhart (1988) Training hidden units: the Generalized Delta Rule. Chapter 5 in Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises. MIT Press, Cambridge, MA.

  • J.L. Elman (1988) Finding structure in time. CRL Technical Report 8801. Center for Research in Language, University of California, San Diego.

  • J. Henseler and P.J. Braspenning (1990) Training complex multi-layer neural networks, Proceedings of the Latvian Signal Processing International Conference, Vol. 2, Riga, 301–305.

  • S. Kirkpatrick, C.D. Gelatt and M.P. Vecchi (1983) Optimization by simulated annealing. Science 220, 671–680.

  • Y. Le Cun (1985) A Learning Procedure for Asymmetric Threshold Networks. Proceedings of Cognitiva '85 (in French), Paris, 599–604.

  • R.P. Lippmann (1987) An introduction to computing with neural nets. IEEE ASSP Magazine 4 (2), 4–22.

  • M.L. Minsky and S.A. Papert (1969, 1988) Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA.

  • D.B. Parker (1985) Learning-Logic. TR-47, MIT, Center for Computational Research in Economics and Management Science. Cambridge, MA.

  • F. Rosenblatt (1958) The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65, 386–408.

  • F. Rosenblatt (1962) Principles of Neurodynamics. Spartan, New York.

  • D.E. Rumelhart, G.E. Hinton and R.J. Williams (1986) Learning internal representations by error propagation. Chapter 8 in Parallel Distributed Processing: Foundations. Vol. 1, MIT Press, Cambridge, MA, 318–362.

  • F.M. Silva and L.B. Almeida (1990) Acceleration techniques for the Backpropagation algorithm. Lecture Notes in Computer Science: Neural Networks 412 (Eds. L.B. Almeida and C.J. Wellekens), 110–119.

  • T. Tollenaere (1990) SuperSAB: fast adaptive back propagation with good scaling properties. Neural Networks 3 (5), 561–573.

  • P. Werbos (1974) Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. thesis, Applied Mathematics, Harvard University, Cambridge, MA.

  • B. Widrow and M.E. Hoff (1960) Adaptive switching circuits. Record of the 1960 IRE WESCON Convention, New York, IRE, 96–104.

Editor information

P.J. Braspenning, F. Thuijsman, A.J.M.M. Weijters

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Henseler, J. (1995). Back Propagation. In: Braspenning, P.J., Thuijsman, F., Weijters, A.J.M.M. (eds) Artificial Neural Networks. Lecture Notes in Computer Science, vol 931. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0027022

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59488-8

  • Online ISBN: 978-3-540-49283-2
