
Neural Network Training with Second Order Algorithms

  • H. Yu
  • B. M. Wilamowski
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 99)

Abstract

Second order algorithms are very efficient for neural network training because of their fast convergence. In traditional implementations of second order algorithms [Hagan and Menhaj 1994], the Jacobian matrix is calculated and stored, which may cause memory limitation problems when training with large numbers of patterns. In this paper, an improved computation is introduced to solve the memory limitation problem in second order algorithms. The proposed method calculates the gradient vector and the Hessian matrix directly, without Jacobian matrix storage and multiplication. Memory cost for training is significantly reduced by replacing matrix operations with vector operations. At the same time, training speed is also improved as a result of the memory reduction. The proposed implementation of second order algorithms can be applied to train networks with a practically unlimited number of patterns.
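The pattern-by-pattern accumulation described above can be sketched briefly. The Python fragment below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a hypothetical routine jacobian_row that returns the single Jacobian row j_p and error e_p for one pattern (in the spirit of [Wilamowski et al. 2008]), and it builds the quasi-Hessian Q = sum_p j_p^T j_p and the gradient g = sum_p e_p j_p incrementally, so the full P x N Jacobian is never stored; a Levenberg-Marquardt step then uses Q and g directly.

```python
import numpy as np

def accumulate_q_and_g(jacobian_row, weights, patterns):
    """Build the quasi-Hessian Q = sum_p j_p^T j_p and the gradient
    g = sum_p e_p * j_p one training pattern at a time, so only a single
    1 x N Jacobian row is held in memory instead of the full P x N matrix."""
    n = weights.size
    Q = np.zeros((n, n))
    g = np.zeros(n)
    for x, target in patterns:
        # jacobian_row is a hypothetical placeholder for the per-pattern
        # forward/backward computation of one Jacobian row and its error.
        j, e = jacobian_row(weights, x, target)
        Q += np.outer(j, j)   # rank-one update replacing the J^T J product
        g += e * j            # vector update replacing the J^T e product
    return Q, g

def levenberg_marquardt_step(Q, g, weights, mu):
    """One Levenberg-Marquardt update: solve (Q + mu*I) dw = g, then w <- w - dw."""
    dw = np.linalg.solve(Q + mu * np.eye(weights.size), g)
    return weights - dw
```

Because Q is N x N (N weights) and g is length N, memory use depends only on the network size, not on the number of training patterns, which is the point of replacing matrix operations with vector operations.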

Keywords

Jacobian Matrix · Gradient Vector · Training Pattern · Levenberg-Marquardt · Neural Network Training


References

  1. [Cao et al. 2006]
    Cao, L.J., Keerthi, S.S., Ong, C.J., Zhang, J.Q., Periyathamby, U., Fu, X.J., Lee, H.P.: Parallel sequential minimal optimization for the training of support vector machines. IEEE Trans. on Neural Networks 17(4), 1039–1049 (2006)
  2. [Hagan and Menhaj 1994]
    Hagan, M.T., Menhaj, M.B.: Training feedforward networks with the Marquardt algorithm. IEEE Trans. on Neural Networks 5(6), 989–993 (1994)
  3. [Hohil et al. 1999]
    Hohil, M.E., Liu, D., Smith, S.H.: Solving the N-bit parity problem using neural networks. Neural Networks 12, 1321–1323 (1999)
  4. [Rumelhart et al. 1986]
    Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)
  5. [Wilamowski 2009]
    Wilamowski, B.M.: Neural network architectures and learning algorithms: How not to be frustrated with neural networks. IEEE Industrial Electronics Magazine 3(4), 56–63 (2009)
  6. [Wilamowski et al. 2008]
    Wilamowski, B.M., Cotton, N.J., Kaynak, O., Dundar, G.: Computing gradient vector and Jacobian matrix in arbitrarily connected neural networks. IEEE Trans. on Industrial Electronics 55(10), 3784–3790 (2008)
  7. [Wilamowski et al. 2010]
    Yu, H., Wilamowski, B.M.: Neural network learning without backpropagation. IEEE Trans. on Neural Networks 21(11) (2010)
  8. [Wilamowski and Yu 2010]
    Yu, H., Wilamowski, B.M.: Improved computation for Levenberg-Marquardt training. IEEE Trans. on Neural Networks 21(6), 930–937 (2010)
  9. [Yu and Wilamowski 2009]
    Yu, H., Wilamowski, B.M.: Efficient and reliable training of neural networks. In: Proc. 2nd IEEE Human System Interaction Conf. HSI 2009, Catania, Italy, pp. 109–115 (2009)
  10. [Yu et al. 2009]
    Yu, H., Wilamowski, B.M.: C++ implementation of neural networks trainer. In: Proc. of 13th Int. Conf. on Intelligent Engineering Systems, INES 2009, Barbados (2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • H. Yu (1)
  • B. M. Wilamowski (1)
  1. Department of Electrical and Computer Engineering, Auburn University, Auburn, USA
