Optimierung der Konvergenzgeschwindigkeit von Backpropagation

  • Conference paper
Mustererkennung 1998

Part of the book series: Informatik aktuell (INFORMAT)

Abstract

A new method for neuron-specific adaptation of the learning rate in backpropagation networks is presented, which yields a considerable acceleration of convergence, particularly for large networks with many layers. This is demonstrated on the two-spirals benchmark and on two real-world problems from medical engineering and physics.
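
The chapter text itself is not reproduced on this page, so the following is only a loosely related illustration and not the authors' algorithm: a minimal NumPy sketch of the general idea of giving each neuron its own learning rate and adapting it from the sign consistency of its recent gradients (a delta-bar-delta-style heuristic). All names, constants, and the toy XOR data are assumptions made for the example; XOR merely stands in for a benchmark such as the two-spirals task.

```python
# Hypothetical sketch only: NOT the algorithm from the paper. It illustrates the
# general idea of a per-neuron learning rate that grows while a neuron's gradients
# keep pointing the same way and shrinks when they oscillate.
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-8-1 sigmoid network; XOR stands in for a real benchmark.
n_in, n_hidden, n_out = 2, 8, 1
W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
b2 = np.zeros(n_out)

eta1 = np.full(n_hidden, 0.1)   # one learning rate per hidden neuron
eta2 = np.full(n_out, 0.1)      # one learning rate per output neuron
trace1 = np.zeros(n_hidden)     # smoothed history of each hidden neuron's gradient
trace2 = np.zeros(n_out)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1.T + b1)                      # (N, n_hidden)
    y = sigmoid(h @ W2.T + b2)                      # (N, n_out)

    # Backward pass for a squared-error loss
    delta_out = (y - T) * y * (1.0 - y)             # (N, n_out)
    delta_hid = (delta_out @ W2) * h * (1.0 - h)    # (N, n_hidden)
    gW2, gb2 = delta_out.T @ h / len(X), delta_out.mean(0)
    gW1, gb1 = delta_hid.T @ X / len(X), delta_hid.mean(0)

    # Neuron-specific adaptation: compare each neuron's mean current gradient with a
    # smoothed trace of its past gradients; grow the rate on agreement, shrink it on
    # disagreement (the factors 1.05 / 0.7 are arbitrary choices for this sketch).
    cur1, cur2 = gW1.mean(axis=1), gW2.mean(axis=1)
    eta1 = np.where(cur1 * trace1 > 0, eta1 * 1.05, eta1 * 0.7).clip(1e-4, 1.0)
    eta2 = np.where(cur2 * trace2 > 0, eta2 * 1.05, eta2 * 0.7).clip(1e-4, 1.0)
    trace1 = 0.7 * trace1 + 0.3 * cur1
    trace2 = 0.7 * trace2 + 0.3 * cur2

    # Gradient step, scaled neuron-wise
    W1 -= eta1[:, None] * gW1
    b1 -= eta1 * gb1
    W2 -= eta2[:, None] * gW2
    b2 -= eta2 * gb2

print("final mean squared error:", float(((y - T) ** 2).mean()))
```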

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Linder, R., Pöppl, S.J. (1998). Optimierung der Konvergenzgeschwindigkeit von Backpropagation. In: Levi, P., Schanz, M., Ahlers, RJ., May, F. (eds) Mustererkennung 1998. Informatik aktuell. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-72282-0_18

  • DOI: https://doi.org/10.1007/978-3-642-72282-0_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64935-9

  • Online ISBN: 978-3-642-72282-0

  • eBook Packages: Springer Book Archive
