A Fast Learning Algorithm Based on Layered Hessian Approximations and the Pseudoinverse
In this article, we present a simple, effective learning method for an MLP that approximates the Hessian using only local information, specifically the correlations among the output activations of the preceding layer of hidden neurons. Training the hidden-layer weights with this Hessian approximation, combined with training the final layer of output weights using the pseudoinverse, yields improved performance at a fraction of the computational and structural complexity of conventional learning algorithms.
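The paper's exact update rules are not reproduced in this abstract, but the following minimal sketch illustrates the two ingredients it names: a pseudoinverse (here, ridge-regularized) solution for the output-layer weights, in the spirit of the extreme learning machine, and a layer-local least-squares update for the hidden-layer weights in which the Hessian is approximated by the correlation matrix of that layer's inputs. The data shapes, the tanh nonlinearity, the regularization term `lam`, and the rule for forming hidden-layer targets (projecting the output error back through the output weights and inverting the activation) are all illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (hypothetical sizes): N samples, d inputs, h hidden units.
N, d, h = 200, 5, 20
X = rng.standard_normal((N, d))
T = np.sin(X.sum(axis=1, keepdims=True))

W1 = rng.standard_normal((d, h)) * 0.5  # hidden-layer weights
lam = 1e-3                              # ridge term keeps local Hessians invertible

def hidden(X, W1):
    return np.tanh(X @ W1)

for _ in range(10):
    H = hidden(X, W1)
    # Output layer: regularized pseudoinverse solution (ELM-style).
    W2 = np.linalg.solve(H.T @ H + lam * np.eye(h), H.T @ T)

    # Hidden-layer targets (an assumed rule, for illustration only):
    # push the output error back through W2, add it to the current
    # activations, and invert tanh to get target pre-activations.
    E = (T - H @ W2) @ W2.T                 # error seen at hidden outputs
    S = np.clip(H + E, -0.999, 0.999)       # target outputs, inside tanh's range
    D = np.arctanh(S)                       # target pre-activations

    # Layered Hessian approximation: the correlation matrix of this
    # layer's inputs. Solve the local least-squares problem R @ W1 = X^T D / N.
    R = X.T @ X / N + lam * np.eye(d)
    W1 = np.linalg.solve(R, X.T @ D / N)

# Re-solve the output layer for the final hidden weights before evaluating.
H = hidden(X, W1)
W2 = np.linalg.solve(H.T @ H + lam * np.eye(h), H.T @ T)
print("train MSE:", float(np.mean((H @ W2 - T) ** 2)))
```

Note the design point the abstract emphasizes: every matrix inverted is small and local (h x h for the output layer, d x d for the hidden layer), so no full network Hessian is ever formed.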
Keywords: Hidden Layer · Extreme Learning Machine · Hidden Neuron · Regularization Term · Hidden Layer Neuron