A Fast Learning Algorithm Based on Layered Hessian Approximations and the Pseudoinverse

  • E. J. Teoh
  • C. Xiang
  • K. C. Tan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)

Abstract

In this article, we present a simple, effective learning method for an MLP that is based on approximating the Hessian using only local information, specifically the correlations of output activations from previous layers of hidden neurons. Training the hidden-layer weights with this Hessian approximation, combined with training the final output layer of weights using the pseudoinverse [1], yields improved performance at a fraction of the computational and structural complexity of conventional learning algorithms.
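The abstract combines two ingredients: a Newton-like update for the hidden-layer weights whose curvature term is approximated locally from the correlations of the activations feeding that layer, and a closed-form pseudoinverse solution for the output-layer weights, as in the Extreme Learning Machine [1]. The NumPy sketch below illustrates that combination for a single hidden layer; the network sizes, the tanh nonlinearity, the ridge constant lam, the step size eta, and the exact form of the update are illustrative assumptions, not the authors' formulation from the full paper.

```python
# Minimal sketch of "pseudoinverse output layer + layer-local Hessian
# approximation for the hidden layer".  All names and constants here are
# illustrative assumptions; the paper's precise update rules appear in the
# full text, not in this abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N samples, d inputs, h hidden neurons, 1 target.
N, d, h = 200, 5, 20
X = rng.standard_normal((N, d))
T = np.sin(X @ rng.standard_normal((d, 1)))

W1 = 0.1 * rng.standard_normal((d, h))   # hidden-layer weights
lam = 1e-3                               # small ridge term for stability (assumed)
eta = 0.5                                # damping step size (assumed)

for epoch in range(20):
    H = np.tanh(X @ W1)                  # hidden activations

    # Output layer: solved directly with the pseudoinverse, as in ELM [1].
    W2 = np.linalg.pinv(H) @ T

    # Hidden layer: Newton-like step.  The curvature term is a layer-local
    # Hessian approximation -- the correlation matrix of the activations
    # entering this layer (here the network inputs X), regularized by lam*I.
    E = H @ W2 - T                       # output error
    delta = (E @ W2.T) * (1.0 - H**2)    # backpropagated hidden-layer error
    G = X.T @ delta / N                  # gradient w.r.t. W1
    R = X.T @ X / N                      # layered Hessian approximation
    W1 -= eta * np.linalg.solve(R + lam * np.eye(d), G)

    print(epoch, float(np.mean(E**2)))
```

Solving against the regularized input-correlation matrix R, rather than the full network Hessian, keeps the per-epoch cost close to that of a plain gradient step while still rescaling the update by local curvature, which is the source of the speed-up the abstract claims.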

Keywords

Hidden Layer · Extreme Learning Machine · Hidden Neuron · Regularization Term · Hidden Layer Neuron

References

  1. Huang, G., Zhu, Q., Siew, C.: Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE, Los Alamitos (2004)
  2. Bishop, C.: Exact Calculation of the Hessian Matrix for the Multi-Layer Perceptron. Neural Computation 4(4), 494–501 (1992)
  3. Buntine, W., Weigend, A.: Computing Second Derivatives in Feed-Forward Networks: A Review. IEEE Trans. Neural Networks 5(3), 480–488 (1994)
  4. Press, W., Flannery, B., Teukolsky, S., Vetterling, W.: Numerical Recipes in C Example Book: The Art of Scientific Computing, 2nd edn. Cambridge University Press, Cambridge (1994)
  5. Scalero, R., Tepedelenlioglu, N.: A Fast New Algorithm for Training Feedforward Neural Networks. IEEE Trans. Signal Processing 40(1) (1992)
  6. Parisi, R., Claudio, E.D., Orlandi, G., Rao, B.: A Generalized Learning Paradigm Exploiting the Structure of Feedforward Neural Networks. IEEE Trans. Neural Networks 7(6), 1450–1460 (1996)
  7. Bartlett, P.: The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network. IEEE Trans. Information Theory 44(2), 525–536 (1998)
  8. Cover, T.: Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition. IEEE Trans. Electronic Computers 14, 326–334 (1965)
  9. Lowe, D.: Adaptive Radial Basis Function Nonlinearities, and the Problem of Generalization. In: 1st IEE International Conference on Artificial Neural Networks, pp. 171–175 (1989)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • E. J. Teoh (1)
  • C. Xiang (1)
  • K. C. Tan (1)
  1. Department of Electrical and Computer Engineering, National University of Singapore, Singapore
