
Alternate Learning Algorithm on Multilayer Perceptrons

  • Bumghi Choi
  • Ju-Hong Lee
  • Tae-Su Park
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3991)

Abstract

Multilayer perceptrons have been applied successfully to difficult and diverse problems using the backpropagation learning algorithm. However, the algorithm is known to suffer from slow and false convergence caused by flat regions and local minima of the cost function. Many algorithms proposed so far to accelerate convergence and avoid local minima appear to trade convergence speed against stability of convergence. Here, a new algorithm is proposed that offers a novel learning strategy for avoiding local minima while providing relatively stable and fast convergence with a low storage requirement. It is an alternate learning algorithm in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are trained alternately. The algorithm requires less computational time for learning than backpropagation with momentum and is shown on a parity-check problem to be relatively reliable in overall performance.
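
The abstract does not spell out the update rules, so the following is only a minimal sketch of the alternating idea: a single-hidden-layer perceptron trained by plain gradient descent on mean squared error, where even-numbered epochs update only the hidden-to-output weights and odd-numbered epochs update only the input-to-hidden weights. The 3-bit parity data, hidden size, learning rate, and alternation schedule are illustrative assumptions, not the authors' settings.

# Sketch of alternate training of upper and lower connections on an MLP
# (assumed schedule and hyperparameters; not the authors' exact method).
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity data: target is 1 if the number of 1-bits is odd.
X = np.array([[i >> 2 & 1, i >> 1 & 1, i & 1] for i in range(8)], dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

n_in, n_hid, n_out = 3, 4, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))   # lower (input-to-hidden) weights
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))  # upper (hidden-to-output) weights
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)   # hidden activations
    o = sigmoid(h @ W2 + b2)   # network output
    return h, o

lr = 0.5
for epoch in range(20000):
    h, o = forward(X)
    err = o - y                    # derivative of MSE w.r.t. output (up to a constant)
    delta_out = err * o * (1 - o)  # output-layer delta

    if epoch % 2 == 0:
        # Phase A: update only the upper connections; lower ones held fixed.
        W2 -= lr * h.T @ delta_out
        b2 -= lr * delta_out.sum(axis=0)
    else:
        # Phase B: update only the lower connections; upper ones held fixed.
        delta_hid = (delta_out @ W2.T) * h * (1 - h)
        W1 -= lr * X.T @ delta_hid
        b1 -= lr * delta_hid.sum(axis=0)

_, o = forward(X)
print("final MSE:", float(((o - y) ** 2).mean()))
print("predictions:", o.round(2).ravel(), "targets:", y.ravel())

Each epoch in this sketch touches only one of the two weight matrices, which keeps the per-step storage and computation below a joint backpropagation-with-momentum update; whether this matches the paper's exact alternation schedule is an assumption.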

Keywords

Multilayer Perceptrons · Hidden Unit · Scaled Conjugate Gradient Algorithm · Lower Connection · Slow Training
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Bumghi Choi 1
  • Ju-Hong Lee 2
  • Tae-Su Park 2
  1. Dept. of Computer Science & Information Eng., Inha University, Korea
  2. School of Computer Science & Eng., Inha University, Korea
